Hey Sanne,
Comments inlined.
Thanks,
Sebastian
On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero <sanne(a)infinispan.org>
wrote:
Hi Sebastian,
the "intelligent routing" of Hot Rod being one of - if not the main -
reasons to use Hot Rod, I wonder if we shouldn't rather suggest that people
stick with HTTP (REST) in such architectures.
Several people have suggested in the past the need for a smart HTTP
load balancer which would be able to route the external REST requests to
the right node. Essentially have people use REST over the wider network, up
to reaching the Infinispan cluster where the service endpoint (the load
balancer) can convert them to optimised Hot Rod calls, or just leave them
in the same format but route them with the same intelligence to the right
nodes.
I realise my proposal requires some work on several fronts; at the very
least we would need:
- feature parity Hot Rod / REST so that people can actually use it
- a REST load balancer
But I think the output of such a direction would be far more reusable, as
both these points are high on the wish list anyway.
Unfortunately I'm not convinced by this idea. Let me elaborate...
It goes without saying that an HTTP payload is simply larger and requires
much more processing. That alone makes it slower than Hot Rod (I believe
Martin could provide you some numbers on that). The second argument is that
switching/routing inside Kubernetes is bloody fast (since it's based on
iptables) and some cloud vendors optimize it even further (e.g. Google
Andromeda [1][2], I would be surprised if AWS didn't have anything
similar). During the work on this prototype I wrote a simple async binary
proxy [3] and measured the GCP load balancer against it. The GCP load
balancer was twice as fast [4][5]. You may argue that I could write a
better proxy. Probably I could, but the bottom line is that another
performance hit is inevitable. Cloud load balancers are really fast and
they run on the vendor's own infrastructure (load balancers are something
the cloud vendor provides to Kubernetes, not the other way around).
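Just to make the comparison tangible, here is a minimal sketch of the same
put issued over both protocols from the client side (host, port and cache
name are made up, and the REST call assumes the server's /rest/{cache}/{key}
style endpoint):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class PutOverBothProtocols {
    public static void main(String[] args) throws Exception {
        // Hot Rod: compact binary framing, and the client can route the
        // request directly to the node that owns the key.
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.addServer().host("infinispan.example.com").port(11222); // made-up address
        RemoteCacheManager rcm = new RemoteCacheManager(cb.build());
        try {
            RemoteCache<String, String> cache = rcm.getCache("default");
            cache.put("key-1", "value-1");
        } finally {
            rcm.stop();
        }

        // REST: the same operation as an HTTP PUT, with textual headers on
        // the wire and HTTP parsing/dispatching on the server side.
        URL url = new URL("http://infinispan.example.com:8080/rest/default/key-1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("value-1".getBytes("UTF-8"));
        }
        System.out.println("REST PUT status: " + conn.getResponseCode());
    }
}

Even before measuring anything, the HTTP variant has to carry headers and go
through the endpoint's HTTP parsing, which is exactly the overhead I mean.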
So with all that in mind, are we going to get better results compared to
my proposal for Hot Rod? I doubt it, even with HTTP/2 support (which I hope
is coming really soon). The second question is whether this new "REST load
balancer" would work better than a standard load balancer using a round
robin strategy. Again I doubt it: even if you're faster at routing a
request to the proper node, you introduce another layer of latency.
Of course the priority of this is up to Tristan but I definitely wouldn't
place it high on the todo list. And before even looking at it I would
recommend taking a Netty HTTP proxy, putting it in the middle between the
real load balancer and the Infinispan app, and measuring performance with
and without it. Another test could use 1 and 10 replicas to check the
performance penalty when 100% versus 10% of requests hit the proper node.
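To illustrate the kind of middleman I have in mind, here is a minimal sketch
of a pass-through proxy built on Netty 4 (a plain byte relay rather than a
full HTTP proxy; the class name, ports and target service name are made up):

import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

// Listens on LOCAL_PORT and blindly relays bytes to TARGET_HOST:TARGET_PORT.
public class PassThroughProxy {
    static final int LOCAL_PORT = 11222;
    static final String TARGET_HOST = "infinispan-app";   // made-up service name
    static final int TARGET_PORT = 11222;

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            new ServerBootstrap().group(boss, worker)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new FrontendHandler());
                    }
                })
                // read from the client only once the backend connection is ready
                .childOption(ChannelOption.AUTO_READ, false)
                .bind(LOCAL_PORT).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }

    // Accepts a client connection, opens a backend connection and relays bytes.
    static class FrontendHandler extends ChannelInboundHandlerAdapter {
        private Channel outbound;

        @Override public void channelActive(ChannelHandlerContext ctx) {
            final Channel inbound = ctx.channel();
            ChannelFuture f = new Bootstrap().group(inbound.eventLoop())
                .channel(NioSocketChannel.class)
                .handler(new BackendHandler(inbound))
                .option(ChannelOption.AUTO_READ, false)
                .connect(TARGET_HOST, TARGET_PORT);
            outbound = f.channel();
            f.addListener((ChannelFutureListener) future -> {
                if (future.isSuccess()) inbound.read(); else inbound.close();
            });
        }

        @Override public void channelRead(ChannelHandlerContext ctx, Object msg) {
            if (outbound.isActive()) {
                outbound.writeAndFlush(msg).addListener((ChannelFutureListener) future -> {
                    if (future.isSuccess()) ctx.channel().read(); else future.channel().close();
                });
            }
        }

        @Override public void channelInactive(ChannelHandlerContext ctx) {
            if (outbound != null) outbound.close();
        }
    }

    // Relays backend responses back to the client connection.
    static class BackendHandler extends ChannelInboundHandlerAdapter {
        private final Channel inbound;
        BackendHandler(Channel inbound) { this.inbound = inbound; }

        @Override public void channelActive(ChannelHandlerContext ctx) { ctx.read(); }

        @Override public void channelRead(ChannelHandlerContext ctx, Object msg) {
            inbound.writeAndFlush(msg).addListener((ChannelFutureListener) future -> {
                if (future.isSuccess()) ctx.channel().read(); else future.channel().close();
            });
        }

        @Override public void channelInactive(ChannelHandlerContext ctx) { inbound.close(); }
    }
}

Dropping something like this between the load balancer and the endpoint, and
running the same workload with and without it, would show exactly how much
one extra asynchronous hop costs.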
[1] https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-an...
[2] https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-...
[3] https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/P...
[4] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/resul...
[5] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/resul...
Not least, having a "REST load balancer" would allow deploying Infinispan
as an HTTP cache; just honouring the HTTP caching protocols and existing
standards would allow people to use any client to their liking,
Could you please give me an example of how this could work? The only way
that I know of is to plug a cache into a reverse proxy; NGINX supports
pluggable Redis, for example [6].
[6] https://www.nginx.com/resources/wiki/modules/redis/
without us having to maintain Hot Rod clients and support them on many
exotic platforms - we would still have Hot Rod clients but we'd be able to
pick a smaller set of strategic platforms (e.g. Windows doesn't have to
be in that list).
As I mentioned before, I really doubt HTTP will be faster than Hot Rod in
*any* scenario.
Such a load balancer could be written in Java (recent WildFly versions are
able to do this efficiently) or it could be written in another language;
all it takes is to integrate a Hot Rod client - or just the intelligence of
it - as an extension into an existing load balancer of our choice.
As I mentioned before, with a custom load balancer you're introducing
another layer of latency. It's not a free ride.
Allow me a bit more nit-picking on your benchmarks ;)
As you pointed out yourself there are several flaws in your setup: "didn't
tune", "running in a VM", "benchmarked on a mac mini", ... If you know it's
a flawed setup I'd rather not publish figures, and especially not suggest
making decisions based on such results.
Why not? Infinispan is a public project and anyone can benchmark it using
JMH, and making decisions based on figures is always better than relying on
intuition. Even though there were multiple unknown factors involved in this
benchmark (which is why I pointed them out and asked you to take the results
with a grain of salt), the test conditions were the same for all scenarios.
For me this is sufficient to give a general recommendation as I did. BTW,
this recommendation matches my expectations exactly (communication inside
Kubernetes is the fastest, an LB per Pod is a bit slower, and no advanced
routing is the slowest). Finally, the recommendation is based on a POC,
which by definition means it doesn't fit all scenarios. You should always
measure your system!
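Anyone who wants to reproduce or challenge the numbers can start from
something along the lines of this minimal JMH sketch (the server address and
cache name are made up; it only assumes the Hot Rod client and JMH on the
classpath, and mirrors the 10k puts / 10k puts&gets scenarios):

import java.util.concurrent.TimeUnit;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class HotRodPutBenchmark {

    private static final int OPERATIONS = 10_000;

    RemoteCacheManager cacheManager;
    RemoteCache<String, String> cache;

    @Setup
    public void setUp() {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.addServer().host("infinispan.example.com").port(11222); // made-up address
        cacheManager = new RemoteCacheManager(cb.build());
        cache = cacheManager.getCache("default");
    }

    @TearDown
    public void tearDown() {
        cacheManager.stop();
    }

    // Average time to perform 10k puts.
    @Benchmark
    public void tenThousandPuts() {
        for (int i = 0; i < OPERATIONS; i++) {
            cache.put("key-" + i, "value-" + i);
        }
    }

    // Average time to perform 10k put&get pairs.
    @Benchmark
    public void tenThousandPutsAndGets() {
        for (int i = 0; i < OPERATIONS; i++) {
            cache.put("key-" + i, "value-" + i);
            cache.get("key-" + i);
        }
    }
}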
So unless you can prove that the benchmark results are fundamentally wrong
and that I have drawn the wrong conclusions (e.g. that a simple client is
actually the fastest solution whereas inside-Kubernetes communication is the
slowest), please don't use the "naaah, that's wrong" argument. It's rude.
At this level of design we need to focus on getting the architecture
right; it should be self-evident that your proposal of actually using
intelligent routing in some way should be better than not using it.
My benchmark confirmed this. But as always, I would be happy to discuss
alternatives. Before trying to convince me of a "REST Router", though,
please prove that introducing such a load balancer (or just a simple async
proxy for a start) gives similar or better performance than a plain load
balancer with a round robin strategy.
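For clarity, this is what the two client-side setups in question look like;
a minimal sketch with made-up addresses, using the existing ClientIntelligence
switch in the Hot Rod client configuration:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class ClientModes {
    public static void main(String[] args) {
        // Single load balancer in front of the whole cluster, client kept "dumb":
        // it never updates its server list, so every request goes through the LB
        // and lands on a node chosen round robin, whether or not it owns the key.
        ConfigurationBuilder basic = new ConfigurationBuilder();
        basic.addServer().host("lb.example.com").port(11222);
        basic.clientIntelligence(ClientIntelligence.BASIC);

        // Load balancer per pod plus internal/external address mapping:
        // the client receives topology and consistent hash updates and sends
        // each request straight to the node that owns the key.
        ConfigurationBuilder smart = new ConfigurationBuilder();
        smart.addServer().host("pod-0.lb.example.com").port(11222);
        smart.clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE);

        RemoteCacheManager rcm = new RemoteCacheManager(smart.build());
        try {
            rcm.getCache("default").put("key-1", "value-1");
        } finally {
            rcm.stop();
        }
    }
}

The round robin scenario corresponds to the BASIC configuration behind a
single load balancer; my proposal is about keeping HASH_DISTRIBUTION_AWARE
working from outside the cluster via the address mapping.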
Once we have an agreement on a sound architecture, then we'll be able to
make the implementation efficient.
Thanks,
Sanne
On 30 May 2017 at 13:43, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
> Hey guys!
>
> Over the past few weeks I've been working on accessing an Infinispan
> cluster deployed inside Kubernetes from the outside world. The POC diagram
> looks like the following:
>
> [image: pasted1]
>
> As a reminder, the easiest (though not the most effective) way to do it
> is to expose a load balancer Service (or a Node Port Service) and access it
> using a client with basic intelligence (so that it doesn't try to update
> the server list based on topology information). As you might expect, this
> won't give you much performance, but at least you can access the cluster.
> Another approach is to use TLS/SNI, but again, the performance would be
> even worse.
>
> During the research I tried to address this problem and created the
> "External IP Controller" [1] (and a corresponding Pull Request for mapping
> internal/external addresses [2]). The main idea is to create a controller
> deployed inside Kubernetes which will create (and destroy if not needed) a
> load balancer per Infinispan Pod. Additionally, the controller exposes the
> mapping between internal and external addresses, which allows the client to
> properly update the server list as well as the consistent hash information.
> A full working example is located here [3].
>
> The biggest question is whether it's worth it. The short answer is yes.
> Here are some benchmark results of performing 10k puts and 10k puts&gets
> (please take them with a big grain of salt, I didn't optimize any server
> settings):
>
>    - Benchmarking app deployed inside Kubernetes and using internal
> addresses (baseline):
> - 10k puts: 674.244 ± 16.654
> - 10k puts&gets: 1288.437 ± 136.207
>    - Benchmarking app deployed in a VM outside of Kubernetes with basic
> intelligence:
> - *10k puts: 1465.567 ± 176.349*
> - *10k puts&gets: 2684.984 ± 114.993*
>    - Benchmarking app deployed in a VM outside of Kubernetes with
> address mapping and topology aware hashing:
> - *10k puts: 1052.891 ± 31.218*
> - *10k puts&gets: 2465.586 ± 85.034*
>
> Note that benchmarking Infinispan from a VM might be very misleading
> since it depends on the data center configuration. The benchmarks above
> definitely contain some delay between the Google Compute Engine VM and the
> Kubernetes cluster deployed in Google Container Engine. How big is the
> delay? Hard to tell. What counts is the difference between a client using
> basic intelligence and one using topology-aware intelligence. And as you
> can see, it's not that small.
>
> So the bottom line - if you can, deploy your application along with the
> Infinispan cluster inside Kubernetes. That's the fastest configuration
> since only iptables are involved. Otherwise use a load balancer per pod
> with the External IP Controller. If you don't care about performance, just
> use basic client intelligence and expose everything using a single load
> balancer.
>
> Thanks,
> Sebastian
>
> [1] https://github.com/slaskawi/external-ip-proxy
> [2] https://github.com/infinispan/infinispan/pull/5164
> [3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark
>
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>