Comments inlined.
> Hi Sebastian,
> the "intelligent routing" of Hot Rod being one of the main reasons - if not the main one - to use Hot Rod, I wonder if we shouldn't rather suggest that people stick with HTTP (REST) in such architectures.
> Several people have suggested in the past the need for a smart HTTP load balancer which would be able to route the external REST requests to the right node. Essentially, have people use REST over the wider network, up to reaching the Infinispan cluster, where the service endpoint (the load balancer) can convert the requests to optimised Hot Rod calls, or just leave them in the same format but route them with the same intelligence to the right nodes.
> I realise my proposal requires some work on several fronts; at the very least we would need:
> - feature parity between Hot Rod and REST, so that people can actually use it
> - a REST load balancer
> But I think the output of such a direction would be far more reusable, as both of these points are high on the wish list anyway.
Unfortunately, I'm not convinced by this idea. Let me elaborate...
It goes without saying that an HTTP payload is simply larger and requires much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second argument is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables), and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2]; I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured the GCP load balancer against my proxy; the GCP load balancer was twice as fast [4][5]. You may argue whether I could write a better proxy. Probably I could, but the bottom line is that another performance hit is inevitable. Cloud load balancers are really fast and they operate on their own infrastructure (load balancers are something provided by the cloud vendor to Kubernetes, not the other way around).
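If you're curious what [3] boils down to, here is a minimal sketch of such an async proxy, assuming Netty 4. The class names, backend address and ports are placeholders of mine, and a production version would also need proper backpressure handling:

// Minimal async TCP proxy sketch: accept a connection, open a backend
// connection on the same event loop, and relay bytes in both directions.
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public final class SimpleBinaryProxy {

    static final class FrontendHandler extends ChannelInboundHandlerAdapter {
        private Channel outbound;

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            final Channel inbound = ctx.channel();
            inbound.config().setAutoRead(false); // don't read until the backend is connected
            ChannelFuture f = new Bootstrap()
                    .group(inbound.eventLoop()) // reuse the loop: no thread hand-off
                    .channel(NioSocketChannel.class)
                    .handler(new RelayHandler(inbound))
                    .connect("127.0.0.1", 11222); // placeholder backend
            outbound = f.channel();
            f.addListener((ChannelFutureListener) future -> {
                if (future.isSuccess()) inbound.config().setAutoRead(true);
                else inbound.close();
            });
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            outbound.writeAndFlush(msg); // client -> backend
        }

        @Override
        public void channelInactive(ChannelHandlerContext ctx) {
            if (outbound != null) outbound.close();
        }
    }

    static final class RelayHandler extends ChannelInboundHandlerAdapter {
        private final Channel inbound;
        RelayHandler(Channel inbound) { this.inbound = inbound; }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            inbound.writeAndFlush(msg); // backend -> client
        }

        @Override
        public void channelInactive(ChannelHandlerContext ctx) {
            inbound.close();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            new ServerBootstrap().group(group)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<Channel>() {
                        @Override
                        protected void initChannel(Channel ch) {
                            ch.pipeline().addLast(new FrontendHandler());
                        }
                    })
                    .bind(8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}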
So with all that in mind, are we going to get better results compared to my Hot Rod proposal? I doubt it, even with HTTP/2 support (which I hope is coming really soon). The second question is whether this new "REST load balancer" would work better than a standard load balancer using a round robin strategy. Again, I doubt it: even if you're faster at routing a request to the proper node, you introduce another layer of latency.
Of course the priority of this is up to Tristan, but I definitely wouldn't place it high on the todo list. And before even looking at it, I would recommend taking a Netty-based HTTP proxy, putting it in the middle between the real load balancer and the Infinispan app, and measuring performance with and without it. Another test could be with 1 and 10 replicas, to check the performance penalty of hitting the proper node for 100% versus 10% of requests.
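To spell out the trade-off behind that test (a back-of-envelope with simplified assumptions of mine): with N replicas and plain round robin, roughly (N-1)/N of the requests land on a non-owner and pay one extra intra-cluster hop of cost h, while a routing proxy avoids that hop but adds its own latency p to every single request. So the smart router only wins when p < h * (N-1)/N; with 10 replicas that means p < 0.9h, and since h inside Kubernetes is tiny, p has very little room.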
> Not least, having a "REST load balancer" would allow us to deploy Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client of their liking,
Could you please give me an example of how this could work? The only way that I know of is to plug a cache into a reverse proxy. NGINX supports pluggable Redis, for example [6].
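If what you have in mind is serving cache entries with proper HTTP validation headers, here is a toy sketch of the shape I imagine, using the JDK's built-in HttpServer with a plain Map standing in for the cache (none of this is real Infinispan API):

// Hypothetical sketch: an HTTP endpoint honouring ETag/If-None-Match and
// Cache-Control in front of a key/value store.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HttpCacheEndpoint {
    public static void main(String[] args) throws Exception {
        Map<String, String> store = new ConcurrentHashMap<>(); // stand-in for a cache
        store.put("/greeting", "Hello, world");

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String value = store.get(exchange.getRequestURI().getPath());
            if (value == null) {
                exchange.sendResponseHeaders(404, -1);
                return;
            }
            // A validator derived from the value; real code would use cache
            // metadata (e.g. an entry version) instead of hashCode().
            String etag = "\"" + Integer.toHexString(value.hashCode()) + "\"";
            String ifNoneMatch = exchange.getRequestHeaders().getFirst("If-None-Match");
            if (etag.equals(ifNoneMatch)) {
                exchange.sendResponseHeaders(304, -1); // client copy is still valid
                return;
            }
            byte[] body = value.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("ETag", etag);
            exchange.getResponseHeaders().set("Cache-Control", "max-age=60");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}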
> without us having to maintain Hot Rod clients and support them on many exotic platforms - we would still have Hot Rod clients, but we'd be able to pick a smaller set of strategic platforms (e.g. Windows doesn't have to be in that list).
As I mentioned before, I really doubt HTTP will be faster than Hot Rod in any scenario - a plain HTTP/1.1 GET already carries a couple of hundred bytes of text headers per request, where a Hot Rod operation header is a few dozen bytes of binary.
> Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or it could be written in another language; all it takes is to integrate a Hot Rod client - or just the intelligence of it - as an extension into an existing load balancer of our choice.
As I mentioned before, with a custom load balancer you're introducing another layer of latency. It's not a free ride.
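To be clear, the routing intelligence itself is cheap - on the client it's essentially a consistent-hash lookup, something like the toy below (not the real Infinispan ConsistentHash; the real clients use MurmurHash3 and a proper segment-to-owner map). The cost I'm worried about is the extra network hop, not the computation:

// Toy sketch of "intelligent routing": map a key to its hash segment,
// then the segment to its primary owner, the way a Hot Rod client does.
import java.util.Arrays;
import java.util.List;

public class SegmentRouter {
    private final List<String> nodes;   // backend addresses, e.g. "10.0.0.1:11222"
    private final int numSegments;      // e.g. 256

    public SegmentRouter(List<String> nodes, int numSegments) {
        this.nodes = nodes;
        this.numSegments = numSegments;
    }

    /** Returns the address of the node that owns the key's segment. */
    public String ownerOf(byte[] key) {
        int h = Arrays.hashCode(key) & Integer.MAX_VALUE; // stand-in hash
        int segment = h % numSegments;
        return nodes.get(segment % nodes.size()); // stand-in segment->owner map
    }
}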
> Allow me a bit more nit-picking on your benchmarks ;)
> As you pointed out yourself, there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini"... If you know it's a flawed setup, I'd rather not publish figures, and especially not suggest making decisions based on such results.
Why not? Infinispan is a public project and anyone can benchmark it using JMH, and making decisions based on figures is always better than relying on intuition. Even though there were multiple unknown factors involved in this benchmark (which is why I pointed them out and asked to take the results with a grain of salt), the test conditions for all scenarios were the same. For me this is sufficient to give a general recommendation, as I did. BTW, this recommendation matches my expectations exactly (communication inside Kube is the fastest, an LB per Pod is a bit slower, and no advanced routing is the slowest). Finally, the recommendation is based on a POC, which by definition means it doesn't fit all scenarios. You should always measure your system!
So unless you can prove that the benchmark results are fundamentally wrong and that I have drawn the wrong conclusions (e.g. that a simple client is actually the fastest solution whereas inside-Kubernetes communication is the slowest), please don't use the "naaah, that's wrong" argument. It's rude.
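And anyone who wants to double-check my numbers can: the harness boils down to a few lines of JMH. A skeleton of the kind I mean, assuming the Hot Rod Java client (the server address and cache are placeholders; run it through the regular JMH runner):

// JMH skeleton for measuring Hot Rod get() performance.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class HotRodGetBenchmark {
    RemoteCacheManager manager;
    RemoteCache<String, String> cache;

    @Setup
    public void setup() {
        manager = new RemoteCacheManager(
                new ConfigurationBuilder().addServer()
                        .host("127.0.0.1").port(11222) // placeholder endpoint
                        .build());
        cache = manager.getCache(); // default cache
        cache.put("key", "value");
    }

    @Benchmark
    public String get() {
        return cache.get("key");
    }

    @TearDown
    public void tearDown() {
        manager.stop();
    }
}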
> At this level of design we need to focus on getting the architecture right; it should be self-evident that your proposal of actually using intelligent routing in some way should be better than not using it.
My benchmark confirmed this. But as always, I would be happy to discuss alternatives. Before trying to convince me of the "REST Router", though, please prove that introducing a custom routing layer (or just a simple async proxy, for a start) gives similar or better performance than a standard load balancer with a round robin strategy.
> Once we have agreement on a sound architecture, then we'll be able to make the implementation efficient.
> Thanks,
> Sanne
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev