Hey Emmanuel!

Comments inlined.

There is one more thing to discuss - how SNI [1] for the Hot Rod server fits into the Router design. There is obviously some overlap, and SSL+SNI support also needs to be implemented in the Router [2] (it potentially needs to decrypt an encrypted "switch-to-tenant" command). Moreover, if the client sends its SNI Host Name with the request, we can connect it to the proper CacheContainer even without a "switch-to-tenant" command. Of course there is some overhead here as well - if someone has only one Hot Rod server and wants to use SNI, they would need to configure a Router, which would always send everything to that single server.
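
To make the idea a bit more concrete, here is a rough sketch (not code from the PR - the class name, tenant host names and the forwarding step are made up) of how Netty's SniHandler could let the Router pick a per-tenant SslContext from the SNI Host Name:

// Sketch only - see [1] and [2] for the actual design.
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.ssl.SniHandler;
import io.netty.handler.ssl.SslContext;
import io.netty.util.DomainNameMapping;

public class SniRouterInitializer extends ChannelInitializer<SocketChannel> {

    private final DomainNameMapping<SslContext> tenantSslContexts;

    public SniRouterInitializer(SslContext defaultCtx, SslContext tenant1Ctx) {
        // Map SNI Host Names to per-tenant SslContexts; unknown names fall back to the default.
        tenantSslContexts = new DomainNameMapping<>(defaultCtx);
        tenantSslContexts.add("tenant1.example.com", tenant1Ctx);
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        // SniHandler reads the TLS ClientHello, selects the SslContext for the requested
        // Host Name and replaces itself with the matching SslHandler.
        ch.pipeline().addLast(new SniHandler(tenantSslContexts));
        // The Router would then add a handler that forwards the decrypted traffic to the
        // Hot Rod server / CacheContainer registered for that Host Name.
    }
}

The nice property is that the tenant is known as soon as the TLS handshake completes, so no extra round trip is needed.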

Thanks
Sebastian

[1] https://github.com/infinispan/infinispan/pull/4279
[2] https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#implementation-details

On Fri, May 6, 2016 at 8:37 PM, Emmanuel Bernard <emmanuel@hibernate.org> wrote:
Is the router a software component of all nodes in the cluster ?

Yes
 
Does the router then redirect all requests to the same cache-container for all tenants? How is the isolation done then?

Each tenant has its own Cache Container, so they are fully isolated. As a matter of fact this is how it is done now - you can run multiple Hot Rod servers in one node (but each of them is attached to a different port). The router takes this concept one step further and offers "one entry point" for all embedded Hot Rod servers.
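
For illustration, this is roughly what the current approach looks like with the embedded API (class name, host and ports are just examples, not taken from the design):

// Sketch of today's workaround: one Hot Rod server per tenant, each on its own port.
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.server.hotrod.HotRodServer;
import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;

public class PerTenantServers {
    public static void main(String[] args) {
        // Each tenant gets a fully isolated cache container...
        DefaultCacheManager tenant1 = new DefaultCacheManager();
        DefaultCacheManager tenant2 = new DefaultCacheManager();

        // ...and its own Hot Rod endpoint bound to a dedicated port.
        HotRodServer server1 = new HotRodServer();
        server1.start(new HotRodServerConfigurationBuilder()
                .host("127.0.0.1").port(11222).build(), tenant1);

        HotRodServer server2 = new HotRodServer();
        server2.start(new HotRodServerConfigurationBuilder()
                .host("127.0.0.1").port(11322).build(), tenant2);
    }
}

With the Router, the two ports would collapse into a single endpoint and the dispatch to the right server would happen internally.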
 
Or does each tenant have effectively different cache containers and thus be "physically" isolated?
Or is that config dependent (from a endpoint to the cache-container) and some tenants could share the same cache container. In which case will they see the same data ?

All tenants operate on their own Cache Containers, so they will not see each other's data. However, if you create 2 CacheContainers with the same cluster name (//subsystem/cache-container/transport/@cluster) they should see each other's data. I think this should be the recommended way of handling this kind of scenario.
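
In embedded terms this roughly corresponds to the following sketch (the cluster name is just an example):

// Two cache containers joining the same cluster; containers configured with
// different cluster names stay fully isolated from each other.
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class SharedClusterExample {
    public static void main(String[] args) {
        GlobalConfigurationBuilder containerA = GlobalConfigurationBuilder.defaultClusteredBuilder();
        containerA.transport().clusterName("shared-tenant-cluster");

        GlobalConfigurationBuilder containerB = GlobalConfigurationBuilder.defaultClusteredBuilder();
        containerB.transport().clusterName("shared-tenant-cluster");

        // Both managers form one cluster, so caches defined in both can share data.
        DefaultCacheManager managerA = new DefaultCacheManager(containerA.build());
        DefaultCacheManager managerB = new DefaultCacheManager(containerB.build());
    }
}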
 

Finally I think the design should allow for "dynamic" tenant configuration. Meaning that I don't have to change the config manually when I add a new customer / tenant. 

I totally agree. @Tristan - could you please tell me how dynamic reconfiguration via the CLI works? The router configuration should probably fit into that mechanism (I assume all existing Protocol Server and Endpoint configurations support it).
 

That's all, and sorry for the naive questions :)

No problem - they were very good questions.
 

On 29 avr. 2016, at 17:29, Sebastian Laskawiec <slaskawi@redhat.com> wrote:

Dear Community,

Please have a look at the design of Multi tenancy support for Infinispan [1]. I would be more than happy to get some feedback from you.

Highlights:
  • The implementation will be based on a Router (which will be built on top of Netty)
  • Multiple Hot Rod and REST servers will be attached to the router which in turn will be attached to the endpoint
  • The router will operate on a binary protocol when using Hot Rod clients and path-based routing when using REST
  • Memcached will be out of scope
  • The router will support SSL+SNI
Thanks
Sebastian

_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
