Hey guys!

Any last call on this? I'm going to start the implementation on Monday.

Thanks
Sebastian

On Wed, May 11, 2016 at 10:38 AM, Sebastian Laskawiec <slaskawi@redhat.com> wrote:
Hey Tristan!

If I understood you correctly, you're suggesting enhancing the ProtocolServer to support multiple EmbeddedCacheManagers (probably with a shared transport, by which I mean started on the same Netty server).

Yes, that could also work, but I'm not convinced we wouldn't lose some configuration flexibility.

Let's consider a configuration file: https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9. How, for example, would we use authentication for CacheContainer cc1 (and not for cc2) and encryption for cc2 (and not for cc1)? Both are tied to the same hotrod-connector. I think supporting this kind of per-container options makes sense for multi tenancy. And please note that if we start a new Netty server for each CacheContainer, we almost end up with the router I proposed anyway.
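To make the concern concrete, here is a rough sketch of the kind of per-container differences I mean. This is illustrative only - the element and attribute names below are made up, not the actual Infinispan schema:

```xml
<!-- Hypothetical sketch, not the real schema: two cache containers
     exposed through one hotrod-connector, each wanting different
     security settings. -->
<cache-container name="cc1">
    <security>
        <!-- authentication only for cc1 -->
        <authentication mechanisms="PLAIN"/>
    </security>
</cache-container>
<cache-container name="cc2">
    <security>
        <!-- encryption only for cc2 -->
        <encryption ssl-context="cc2-ssl"/>
    </security>
</cache-container>
<hotrod-connector socket-binding="hotrod">
    <!-- both containers hang off the same connector, so per-container
         auth/encryption cannot be expressed here -->
</hotrod-connector>
```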

The second argument for using a router is extracting the routing logic into a separate module. Otherwise we would probably end up with several if (isMultiTenant()) statements in both the Hot Rod and the REST server. Extracting this also has the additional advantage of limiting the changes in those modules (there will probably be only two: #1 we should be able to start a ProtocolServer without starting a Netty server (the router will do that in a multi-tenant configuration) and #2 we need to collect the Netty handlers from each ProtocolServer).
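As a rough sketch of what the extracted routing logic could look like (all names here are hypothetical, not the actual Infinispan API): the router owns the tenant-to-container mapping, so the protocol servers themselves stay tenant-agnostic:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: the router owns the tenant -> container mapping,
// so neither the Hot Rod nor the REST server needs isMultiTenant() checks.
public class TenantRouter {
    private final Map<String, String> tenantToContainer = new HashMap<>();

    // Called once per ProtocolServer when it registers with the router.
    public void register(String tenantName, String containerName) {
        tenantToContainer.put(tenantName, containerName);
    }

    // Resolve the target cache container for an incoming connection.
    public Optional<String> route(String tenantName) {
        return Optional.ofNullable(tenantToContainer.get(tenantName));
    }

    public static void main(String[] args) {
        TenantRouter router = new TenantRouter();
        router.register("tenant-1", "cc1");
        router.register("tenant-2", "cc2");
        System.out.println(router.route("tenant-1").orElse("unknown")); // cc1
        System.out.println(router.route("tenant-3").orElse("unknown")); // unknown
    }
}
```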

To sum it up: the router's implementation seems more complicated, but in the long run I think it will be worth it.

I also wrote the summary of the above here: https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach

@Galder - you wrote a huge part of the Hot Rod server - I would love to hear your opinion as well.

Thanks
Sebastian



On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant <ttarrant@redhat.com> wrote:
Not sure I like the introduction of another component at the front.

My original idea for allowing the client to choose the container was:

- with TLS: use SNI to choose the container
- without TLS: enhance the PING operation of the Hot Rod protocol to
also take the server name. This would need to be a requirement when
exposing multiple containers over the same endpoint.

From a client API perspective, there would be no difference between the
above two approaches: just specify the server name and, depending on the
transport, the right container is selected.
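Conceptually (names below are hypothetical, not a real API), the server-side selection could be sketched as: prefer the SNI hostname when TLS is in use, otherwise fall back to the server name carried in the extended PING:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the two selection paths: with TLS the tenant
// comes from the SNI hostname, without TLS it comes from the server
// name supplied in an enhanced Hot Rod PING operation.
public class ContainerSelector {
    private final Map<String, String> serverNameToContainer;

    public ContainerSelector(Map<String, String> mapping) {
        this.serverNameToContainer = mapping;
    }

    public Optional<String> select(Optional<String> sniHostname,
                                   Optional<String> pingServerName) {
        // Prefer the TLS/SNI name; fall back to the PING-supplied name.
        // Optional.map yields empty if the lookup returns null.
        return sniHostname.or(() -> pingServerName)
                .map(serverNameToContainer::get);
    }

    public static void main(String[] args) {
        ContainerSelector selector =
            new ContainerSelector(Map.of("tenant-1", "cc1", "tenant-2", "cc2"));
        // TLS connection: SNI wins.
        System.out.println(selector.select(Optional.of("tenant-2"), Optional.empty()).orElse("none"));
        // Plain connection: server name from PING.
        System.out.println(selector.select(Optional.empty(), Optional.of("tenant-1")).orElse("none"));
    }
}
```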

Tristan

On 29/04/2016 17:29, Sebastian Laskawiec wrote:
> Dear Community,
>
> Please have a look at the design of Multi tenancy support for Infinispan
> [1]. I would be more than happy to get some feedback from you.
>
> Highlights:
>
>   * The implementation will be based on a Router (which will be built
>     based on Netty)
>   * Multiple Hot Rod and REST servers will be attached to the router
>     which in turn will be attached to the endpoint
>   * The router will operate on a binary protocol when using Hot Rod
>     clients and path-based routing when using REST
>   * Memcached will be out of scope
>   * The router will support SSL+SNI
>
> Thanks
> Sebastian
>
> [1]
> https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat