Hey Gustavo,

Comments inlined.

Thanks,
Sebastian

On Mon, May 8, 2017 at 11:13 AM Gustavo Fernandes <gustavo@infinispan.org> wrote:
Questions inlined:

On Mon, May 8, 2017 at 8:57 AM, Sebastian Laskawiec <slaskawi@redhat.com> wrote:
Hey guys!

A while ago I started working on exposing an Infinispan cluster hosted in Kubernetes to the outside world:


What about SNI? Wasn't this scenario the reason why it was implemented, IOW to allow Hot Rod clients to access an Infinispan server hosted in the cloud?

The short answer is no.

There are at least two major disadvantages of using SNI to connect to a Pod:
  1. You still need to pass an FQDN in the SNI field. An FQDN looks like this [1]: transactions-repository-1-myproject.192.168.0.17.nip.io. This allows you to send TCP packets to a desired Route. In order to reach a specific Pod (assuming one among many), you need to go through a Route and a Service. So it seems you will need a "Pod <-> Service <-> Route" combination for each Pod. Ouch!!
  2. TLS slows everything down (by ~50% in my benchmarks).
Also, your statement that SNI is needed to access an Infinispan server hosted in the cloud is misleading. I think it originated a year ago, and it wasn't quite accurate even then. You can create a Service per Pod and expose it using a LoadBalancer or a NodePort. In my experience, creating a LoadBalancer per Pod is much simpler than creating a Clustered Service + Route combination and enforcing TLS/SNI; see the sketch after the link below.

[1] https://github.com/slaskawi/presentations/blob/master/2017_multi_tenancy/cache-checker/src/main/java/org/infinispan/microservices/Main.java#L29
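To make the per-Pod Service alternative concrete, here is a rough sketch using the fabric8 Kubernetes client (the namespace, Service name, port and the pod-name selector label are all made up for illustration; the same Service could of course be created declaratively instead):

import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PerPodService {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // One LoadBalancer Service per Pod, selecting a single Pod by its
            // unique pod-name label (set automatically on StatefulSet Pods).
            Service service = new ServiceBuilder()
                .withNewMetadata()
                    .withName("transactions-repository-0-external")
                .endMetadata()
                .withNewSpec()
                    .withType("LoadBalancer")
                    .addToSelector("statefulset.kubernetes.io/pod-name", "transactions-repository-0")
                    .addNewPort()
                        .withPort(11222)           // port exposed by the load balancer
                        .withNewTargetPort(11222)  // Hot Rod port on the Pod
                    .endPort()
                .endSpec()
                .build();
            client.services().inNamespace("myproject").create(service);
        }
    }
}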
[inline image: diagram of the Infinispan cluster and client applications]

I'm currently struggling to get a solution like this into the platform [1], but in the meantime I created a very simple POC and I'm testing it locally [2].

What does "application" mean in the diagram? Are those different Pods, or single containers that are part of a Pod?

Those are Pods. Sorry, I made this image too generic.

There isn't much documentation available at [2]. How does it work?

What I'm trying to solve here is accessing the data using the shortest possible path - a "single hop", as we used to call it.

In order to do that, the client and all the servers need to share the same consistent hash (which the client obtains from one of the servers). The problem is that this consistent hash contains the internal IP addresses the servers use to form a cluster. Those addresses are not reachable by the client - it needs to use external ones. So the idea is to let the client use the consistent hash with internal addresses, but right before sending a get request, remap the internal address to the external one. I haven't tried it yet, but looking at the code it shouldn't be that hard.
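To illustrate the idea, a minimal sketch (all names are hypothetical, this is not the actual Hot Rod client internals): the consistent hash keeps selecting owners by their internal addresses, and only the physical connection target gets rewritten:

import java.net.InetSocketAddress;
import java.util.Map;

// Hypothetical helper, not part of the Hot Rod client: the consistent hash
// still works on internal addresses; we only rewrite the address right
// before the client opens a physical connection.
public class AddressRemapper {

    private final Map<InetSocketAddress, InetSocketAddress> internalToExternal;

    public AddressRemapper(Map<InetSocketAddress, InetSocketAddress> internalToExternal) {
        this.internalToExternal = internalToExternal;
    }

    public InetSocketAddress remap(InetSocketAddress internal) {
        // Fall back to the internal address, e.g. when the client runs
        // inside the same cluster and can reach it directly.
        return internalToExternal.getOrDefault(internal, internal);
    }
}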

There are two main problems with the scenario described above:
  1. The Infinispan server announces internal addresses (172.17.x.x) to the client. The client needs to remap them into external ones (172.29.x.x).

How would the external addresses be allocated, e.g. during scaling up and down, and how would the Hot Rod client know how to map them correctly?

This is the discovery part of the problem, and it is pretty hard to solve. For Kubernetes we can expose a 3rd-party REST service which will expose this information. I'm experimenting with this approach in my solution: https://github.com/slaskawi/external-ip-proxy/blob/master/Main.go#L57 (later this week I also plan to expose runtime configuration with the internal <-> external mapping). A client-side sketch of consuming such a service follows below.
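For illustration, fetching the mapping from the client side could look roughly like this (the endpoint URL and the line-based "internal=external" payload format are my assumptions, not what external-ip-proxy currently serves):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch only: fetch the internal -> external address mapping from a REST
// endpoint. The payload is assumed to be one "internal=external" pair per
// line, e.g. "172.17.0.5:11222=172.29.1.10:11222".
public class MappingFetcher {

    public static Map<String, String> fetch(String endpoint) throws Exception {
        Map<String, String> mapping = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(endpoint).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("=", 2);
                if (parts.length == 2) {
                    mapping.put(parts[0].trim(), parts[1].trim());
                }
            }
        }
        return mapping;
    }
}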

Unfortunately, the same problem also exists in some OpenStack configurations (OpenStack also uses internal/external addresses), so a custom REST service would be needed there as well. But this is a very low priority for me.
  2. A custom Consistent Hash needs to be supplied to the Hot Rod client. When accessing the cache, the Hot Rod client needs to calculate the server id for the internal address and then map it to the external one.
If there are no strong opinions on this, I plan to implement it shortly. There will be an additional method in the Hot Rod client configuration (ConfigurationBuilder#addServerMapping(String mappingClass)) which will be responsible for mapping external addresses to internal ones and vice versa; a sketch of what this could look like is below.
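Nothing in the snippet below exists in the client today - it is only a guess at what the mapping contract behind the proposed addServerMapping hook could look like:

import java.net.InetSocketAddress;

// Hypothetical contract for the proposed
// ConfigurationBuilder#addServerMapping(String mappingClass) hook.
public interface ServerMapping {

    // Called before the client opens a physical connection to a server
    // selected by the consistent hash (which only knows internal addresses).
    InetSocketAddress internalToExternal(InetSocketAddress internal);

    // Called when translating addresses seen by the client back into the
    // cluster topology view.
    InetSocketAddress externalToInternal(InetSocketAddress external);
}

// Proposed usage (the method does not exist yet):
// ConfigurationBuilder builder = new ConfigurationBuilder();
// builder.addServer().host("172.29.1.10").port(11222);
// builder.addServerMapping("org.example.KubernetesServerMapping");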

Thoughts?

Thanks,
Sebastian


_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--

SEBASTIAN ŁASKAWIEC

INFINISPAN DEVELOPER