Hey Anton!
Just to clarify - the router is a concept implemented in Infinispan
*Server*. In the endpoint, to be 100% precise. Each server will have this
component up and running, and it will take the incoming TCP connection and
pass it to the proper NettyServer or RestServer instance (after choosing the
proper tenant, or negotiating the protocol with ALPN once we have that
implementation).
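To make this more concrete, here is a minimal sketch of such a component,
assuming Netty's SNI support (SniRouteHandler and the route map are
illustrative names, not the actual implementation):

import java.util.Map;
import java.util.function.Supplier;

import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.ssl.SniCompletionEvent;

public class SniRouteHandler extends ChannelInboundHandlerAdapter {

   // Maps an SNI host name (one per tenant) to a factory for that
   // tenant's Hot Rod or REST handlers.
   private final Map<String, Supplier<ChannelHandler>> routes;

   public SniRouteHandler(Map<String, Supplier<ChannelHandler>> routes) {
      this.routes = routes;
   }

   @Override
   public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
      if (evt instanceof SniCompletionEvent) {
         String hostName = ((SniCompletionEvent) evt).hostname();
         Supplier<ChannelHandler> route = routes.get(hostName);
         if (route == null) {
            ctx.close(); // unknown tenant - drop the connection
            return;
         }
         // Attach the matching tenant's handlers and step aside.
         ctx.pipeline().addLast(route.get());
         ctx.pipeline().remove(this);
      }
      super.userEventTriggered(ctx, evt);
   }
}

In front of this handler the pipeline would contain Netty's SniHandler,
which fires the SniCompletionEvent once the TLS client hello has been
parsed.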
On the client side we will need something a little bit different. A
polyglot client will need to look at the available protocol implementations
(let's imagine we have a client which supports the Hot Rod and HTTP/2
protocols) during the TLS handshake and pick the best one. For the sake of
this example, Hot Rod could have a higher priority because it's faster.
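The selection logic itself could be as simple as this sketch (the "hotrod"
protocol name is made up for illustration - the real ALPN identifiers
would still have to be agreed on):

import java.util.Arrays;
import java.util.List;

public class ProtocolSelector {

   // Client-side priorities, best first - Hot Rod wins because it's faster.
   private static final List<String> PRIORITIES =
         Arrays.asList("hotrod", "h2", "http/1.1");

   // Picks the highest-priority protocol the server offered during ALPN.
   public static String selectBest(List<String> offeredByServer) {
      return PRIORITIES.stream()
            .filter(offeredByServer::contains)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("No common protocol"));
   }
}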
I assume your questions slightly missed the mark (since they assume the
router is on the client side), but let me try to answer them...
Thanks
Sebastian
On Mon, Sep 12, 2016 at 11:10 AM, Anton Gabov <gabovantonnikolaevich(a)gmail.com> wrote:
Sebastian, correct me if I'm wrong.
As I understand, the client will have a Router instance, which has info
about the servers, the caches on these servers, and the supported protocols
(Hot Rod, HTTP/1, HTTP/2).
So, I have some questions:
1) Will the Router keep all connections up, or close the connection after
the request? For instance, the client needs to make a request to some
server. It creates a connection, makes the request and closes the
connection (or do we keep the connection and leave it open)?
I believe it should keep them open (or pool them).
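A minimal sketch of what I mean, assuming Netty's channel pools (the class
below is illustrative, not actual client code; the Bootstrap is expected
to be preconfigured with the server's address):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.ChannelPool;
import io.netty.channel.pool.FixedChannelPool;
import io.netty.util.concurrent.Future;

public class PooledConnection {

   private final ChannelPool pool;

   public PooledConnection(Bootstrap bootstrapToServer) {
      // Keep at most 10 open connections per server, reused across requests.
      this.pool = new FixedChannelPool(bootstrapToServer, new AbstractChannelPoolHandler() {
         @Override
         public void channelCreated(Channel ch) {
            // Install the protocol handlers once per physical connection.
         }
      }, 10);
   }

   public void send(Object request) {
      Future<Channel> acquired = pool.acquire();
      acquired.addListener(ignored -> {
         if (!acquired.isSuccess()) {
            return; // no connection available; real code would report this
         }
         Channel ch = acquired.getNow();
         // Return the connection to the pool after the write instead of closing it.
         ch.writeAndFlush(request).addListener(written -> pool.release(ch));
      });
   }
}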
Moreover, when considering Kubernetes, we need to go through an Ingress [1].
Plus there are also PetSets [2]. I've heard some rumors that the routing
for them might use SNI. So we might need to use TLS/SNI differently
depending on the scenario, and possibly hold more than one connection per
server. Unfortunately I cannot confirm this at this stage.
[1]
2) How can an upgrade from HTTP/2 to Hot Rod be done? I cannot imagine
this situation, but I would like to know it :)
We cannot upgrade, since HTTP/2 doesn't support the upgrade
procedure.
However, you can upgrade from HTTP/1.1 using the Upgrade header [3] or
negotiate HTTP/2 using ALPN [4]. The same approach might be used to
upgrade to (or negotiate) any TCP-based protocol (including HTTP for REST,
Memcached since it's plain text, or Hot Rod).
[3]
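For illustration, here is a minimal sketch of the negotiation side using
Netty's ALPN handler (the "hotrod" ALPN identifier is hypothetical - only
"h2" and "http/1.1" are standard names):

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.ssl.ApplicationProtocolNames;
import io.netty.handler.ssl.ApplicationProtocolNegotiationHandler;

public class AlpnRouter extends ApplicationProtocolNegotiationHandler {

   public AlpnRouter() {
      // Fall back to HTTP/1.1 (and its Upgrade header) when the peer
      // doesn't speak ALPN.
      super(ApplicationProtocolNames.HTTP_1_1);
   }

   @Override
   protected void configurePipeline(ChannelHandlerContext ctx, String protocol) {
      if (ApplicationProtocolNames.HTTP_2.equals(protocol)) {
         // attach HTTP/2 handlers
      } else if ("hotrod".equals(protocol)) { // hypothetical ALPN id
         // attach Hot Rod handlers
      } else {
         // attach HTTP/1.1 handlers; an Upgrade header may still arrive here
      }
   }
}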
Since this is a server component - only XML configuration will be available
to the client*.
[*] But if you look carefully, the implementation allows you to bootstrap
everything from Java using the proper ConfigurationBuilders. Of course,
they should be used only internally.
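For example, such an internal bootstrap could look roughly like this
(every name below is illustrative - the XML remains the only public
contract):

// Hypothetical internal bootstrap - all names are illustrative only.
RouterConfigurationBuilder builder = new RouterConfigurationBuilder();
builder.routing()
       .hotrod().name("hotrod1").sniHostName("sni1").securityRealm("other")
       .rest().name("rest1").pathPrefix("rest1");
Router router = new Router(builder.build());
router.start();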
Best wishes,
Anton.
2016-09-12 9:57 GMT+03:00 Sebastian Laskawiec <slaskawi(a)redhat.com>:
> After investigating ALPN [1] and HTTP/2 [2] support I revisited this
> feature to see how everything fits together.
>
> Just as a reminder - the idea behind the multi-tenant router is to implement
> a component which will have references to all deployed Hot Rod and REST
> servers (Memcached and WebSockets are out of the scope at this point) [3]
> and will be able to forward requests to the proper instance.
>
> Since we'd like to create an ALPN-based, polyglot client at some point, I
> believe the router concept should be a little bit more generic. It should
> be able to use SNI for routing as well as negotiate the protocol using ALPN,
> or even switch to a different protocol using the HTTP/1.1 Upgrade header.
> With this in mind, I would like to rebase the multi-tenancy feature and
> slightly modify the router endpoint configuration to something like this:
>
> <router-connector socket-binding="router">
>     <default-encryption security-realm="other"/>
>     <multi-tenancy>
>         <hotrod name="hotrod1">
>             <sni host-name="sni1" security-realm="other"/>
>         </hotrod>
>         <rest name="rest1">
>             <path prefix="rest1"/>
>             <!-- to be implemented in the future - HTTP + Host header as Dan suggested -->
>             <host name="test"/>
>         </rest>
>     </multi-tenancy>
>     <!-- to be implemented in the future -->
>     <polyglot>
>         <hotrod name="hotrod1">
>             <priority/>
>         </hotrod>
>     </polyglot>
> </router-connector>
>
>
> With this configuration, the router should be really flexible and
> extensible.
>
> If there are no negative comments, I'll start working on it
> tomorrow.
>
> Thanks
> Sebastian
>
> [1]
https://issues.jboss.org/browse/ISPN-6899
> [2]
https://issues.jboss.org/browse/ISPN-6676
> [3]
https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>
> On Mon, Jul 18, 2016 at 9:14 AM, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
>
>> Hey!
>>
>> Dan pointed out a very interesting thing [1] - we could use the Host header
>> for multi-tenant REST endpoints. Although I really like the idea (this
>> header was introduced to support exactly this kind of use case), it might
>> be a bit problematic from a security point of view (if someone forgets to
>> set it, they'll be talking to someone else's Cache Container).
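>> As an illustration, the guard could be as simple as this sketch
>> (Netty-based and entirely hypothetical - not the proposed implementation):
>>
>> import io.netty.channel.ChannelHandlerContext;
>> import io.netty.channel.SimpleChannelInboundHandler;
>> import io.netty.handler.codec.http.HttpHeaderNames;
>> import io.netty.handler.codec.http.HttpRequest;
>>
>> public class HostHeaderGuard extends SimpleChannelInboundHandler<HttpRequest> {
>>
>>    @Override
>>    protected void channelRead0(ChannelHandlerContext ctx, HttpRequest request) {
>>       String host = request.headers().get(HttpHeaderNames.HOST);
>>       if (host == null) {
>>          // Refuse to guess a tenant - better an error than someone
>>          // else's data.
>>          ctx.close();
>>          return;
>>       }
>>       // look up the Cache Container registered for 'host' and route to it
>>    }
>> }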
>>
>> What do you think about this? Should we implement this (now or later)?
>>
>> I vote for yes, and for implementing it in 9.1 (or 9.0 if there is enough time).
>>
>> Thanks
>> Sebastian
>>
>> On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec <
>> slaskawi(a)redhat.com> wrote:
>>
>>> Hey!
>>>
>>> The multi-tenancy support for Hot Rod and REST has been implemented
>>> [2]. Since the PR is gigantic, I marked some interesting places for
>>> review, so you might want to skip the boilerplate parts.
>>>
>>> The Memcached and WebSockets implementations are currently out of
>>> scope. If you would like us to implement them, please vote on the following
>>> tickets:
>>>
>>> - Memcached
https://issues.jboss.org/browse/ISPN-6639
>>> - Web Sockets
https://issues.jboss.org/browse/ISPN-6638
>>>
>>> Thanks
>>> Sebastian
>>>
>>> [2]
https://github.com/infinispan/infinispan/pull/4348
>>>
>>> On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec <
>>> slaskawi(a)redhat.com> wrote:
>>>
>>>> Hey Galder!
>>>>
>>>> Comments inlined.
>>>>
>>>> Thanks
>>>> Sebastian
>>>>
>>>> On Wed, May 25, 2016 at 10:52 AM, Galder Zamarreño <galder(a)redhat.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Sorry for the delay getting back on this.
>>>>>
>>>>> The addition of a new component does not worry me so much. It has the
>>>>> advantage of implementing it once, independent of the backend endpoint,
>>>>> whether HR or REST.
>>>>>
>>>>> What I'm struggling to understand is what protocol the clients will
>>>>> use to talk to the router. It seems wasteful having to build two
>>>>> protocols at this level, e.g. one at the TCP level and one at the REST
>>>>> level. If you're going to end up building two protocols, the benefit of
>>>>> the router component disappears, and then you might as well embed the
>>>>> two routing protocols within REST and HR directly.
>>>>>
>>>>
>>>> I think I wasn't clear enough in the design about how the routing works...
>>>>
>>>> In your scenario, both servers (Hot Rod and REST) will start
>>>> EmbeddedCacheManagers internally, but none of them will start the Netty
>>>> transport. The only transport that will be turned on is the router. The
>>>> router will be responsible for recognizing the request type (if HTTP -
>>>> find the proper REST server, if the Hot Rod protocol - find the proper
>>>> Hot Rod server) and for attaching the handlers at the end of the
>>>> pipeline.
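>>>> A minimal sketch of that recognition step (illustrative, not the actual
>>>> router code; 0xA0 is the Hot Rod request magic byte):
>>>>
>>>> import java.util.List;
>>>> import java.util.function.Supplier;
>>>>
>>>> import io.netty.buffer.ByteBuf;
>>>> import io.netty.channel.ChannelHandler;
>>>> import io.netty.channel.ChannelHandlerContext;
>>>> import io.netty.handler.codec.ByteToMessageDecoder;
>>>>
>>>> public class ProtocolDetector extends ByteToMessageDecoder {
>>>>
>>>>    private final Supplier<ChannelHandler> hotRodHandlers;
>>>>    private final Supplier<ChannelHandler> restHandlers;
>>>>
>>>>    public ProtocolDetector(Supplier<ChannelHandler> hotRodHandlers,
>>>>                            Supplier<ChannelHandler> restHandlers) {
>>>>       this.hotRodHandlers = hotRodHandlers;
>>>>       this.restHandlers = restHandlers;
>>>>    }
>>>>
>>>>    @Override
>>>>    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
>>>>       if (!in.isReadable()) {
>>>>          return; // wait for the first byte
>>>>       }
>>>>       // Peek without consuming - the chosen handlers must see the full request.
>>>>       if (in.getUnsignedByte(in.readerIndex()) == 0xA0) {
>>>>          ctx.pipeline().addLast(hotRodHandlers.get()); // Hot Rod request magic
>>>>       } else {
>>>>          ctx.pipeline().addLast(restHandlers.get()); // assume HTTP/REST
>>>>       }
>>>>       ctx.pipeline().remove(this); // buffered bytes are forwarded downstream
>>>>    }
>>>> }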
>>>>
>>>> Regarding the custom protocol (this use case would apply to Hot Rod
>>>> clients which do not use SSL, so SNI routing is not possible), you and
>>>> Tristan got me thinking whether we really need it. Maybe we should
>>>> require SSL+SNI when using the Hot Rod protocol, with no exceptions? The
>>>> thing that bothers me is that SSL makes the whole setup twice as slow:
>>>>
https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1ba2#file-gistfile1-txt-L1753-L1754
>>>>
>>>>
>>>>>
>>>>> In other words, for the router component to make sense, I think it
>>>>> should:
>>>>>
>>>>> 1. Clients, no matter whether HR or REST, use one single protocol to
>>>>> talk to the router. The natural thing here would be HTTP/2 or a similar
>>>>> protocol.
>>>>>
>>>>
>>>> Yes, that's the goal.
>>>>
>>>>
>>>>> 2. The router then talks HR or REST to the backend. Here the router
>>>>> uses the TCP or HTTP protocol based on the backend's needs.
>>>>>
>>>>
>>>> It's even simpler - it just uses the backend's Netty Handlers.
>>>>
>>>> Since the SNI implementation is ready, please have a look:
>>>>
https://github.com/infinispan/infinispan/pull/4348
>>>>
>>>>
>>>>>
>>>>> ^ The above implies that the HR client has to talk TCP when using the
>>>>> HR server directly, or HTTP/2 when using it via the router, but I don't
>>>>> think this is too bad, and it gives us some experience working with
>>>>> HTTP/2 besides the work Anton is carrying out as part of GSoC.
>>>>
>>>>
>>>>> Cheers,
>>>>> --
>>>>> Galder Zamarreño
>>>>> Infinispan, Red Hat
>>>>>
>>>>> > On 11 May 2016, at 10:38, Sebastian Laskawiec <slaskawi(a)redhat.com>
>>>>> > wrote:
>>>>> >
>>>>> > Hey Tristan!
>>>>> >
>>>>> > If I understood you correctly, you're suggesting to enhance the
>>>>> > ProtocolServer to support multiple EmbeddedCacheManagers (probably
>>>>> > with a shared transport, and by that I mean started on the same Netty
>>>>> > server).
>>>>> >
>>>>> > Yes, that could also work, but I'm not convinced that we won't lose
>>>>> > some configuration flexibility.
>>>>> >
>>>>> > Let's consider a configuration file -
https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9
>>>>> > - how, for example, do we use authentication for CacheContainer cc1
>>>>> > (and not for cc2) and encryption for cc1 (and not for cc2)? Both are
>>>>> > tied to the hotrod-connector. I think offering these kinds of
>>>>> > different options makes sense in terms of multi-tenancy. And please
>>>>> > note that if we start a new Netty server for each CacheContainer - we
>>>>> > almost end up with the router I proposed.
>>>>> >
>>>>> > The second argument for using a router is extracting the routing
>>>>> > logic into a separate module. Otherwise we would probably end up with
>>>>> > several if(isMultiTenant()) statements in the Hot Rod as well as the
>>>>> > REST server. Extracting this also has the additional advantage that we
>>>>> > limit the changes in those modules (actually there will probably be 2
>>>>> > changes: #1 we should be able to start a ProtocolServer without
>>>>> > starting a Netty server (the Router will do it in a multi-tenant
>>>>> > configuration), and #2 we should collect the Netty handlers from the
>>>>> > ProtocolServer).
>>>>> >
>>>>> > To sum it up - the router's implementation seems to be more
>>>>> > complicated, but in the long run I think it might be worth it.
>>>>> >
>>>>> > I also wrote the summary of the above here:
https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server#alternative-approach
>>>>> >
>>>>> > @Galder - you wrote a huge part of the Hot Rod server - I would
>>>>> > love to hear your opinion as well.
>>>>> >
>>>>> > Thanks
>>>>> > Sebastian
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant
>>>>> > <ttarrant(a)redhat.com> wrote:
>>>>> > Not sure I like the introduction of another component at the front.
>>>>> >
>>>>> > My original idea for allowing the client to choose the container
>>>>> > was:
>>>>> >
>>>>> > - with TLS: use SNI to choose the container
>>>>> > - without TLS: enhance the PING operation of the Hot Rod protocol to
>>>>> >   also take the server name. This would need to be a requirement when
>>>>> >   exposing multiple containers over the same endpoint.
>>>>> >
>>>>> > From a client API perspective, there would be no difference between
>>>>> > the above two approaches: just specify the server name and, depending
>>>>> > on the transport, select the right one.
>>>>> >
>>>>> > Tristan
>>>>> >
>>>>> > On 29/04/2016 17:29, Sebastian Laskawiec wrote:
>>>>> > > Dear Community,
>>>>> > >
>>>>> > > Please have a look at the design of Multi tenancy support for
>>>>> > > Infinispan [1]. I would be more than happy to get some feedback
>>>>> > > from you.
>>>>> > >
>>>>> > > Highlights:
>>>>> > >
>>>>> > > * The implementation will be based on a Router (which will be
>>>>> > >   built on top of Netty)
>>>>> > > * Multiple Hot Rod and REST servers will be attached to the router,
>>>>> > >   which in turn will be attached to the endpoint
>>>>> > > * The router will operate on a binary protocol when using Hot Rod
>>>>> > >   clients and on path-based routing when using REST
>>>>> > > * Memcached will be out of scope
>>>>> > > * The router will support SSL+SNI
>>>>> > >
>>>>> > > Thanks
>>>>> > > Sebastian
>>>>> > >
>>>>> > > [1]
>>>>> > >
https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server
>>>>> > >
>>>>> > >
>>>>> >
>>>>> > --
>>>>> > Tristan Tarrant
>>>>> > Infinispan Lead
>>>>> > JBoss, a division of Red Hat
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev