[infinispan-dev] Single Port Client

Sebastian Laskawiec slaskawi at redhat.com
Thu Dec 13 08:45:20 EST 2018


On Thu, Dec 13, 2018 at 10:23 AM Radim Vansa <rvansa at redhat.com> wrote:

> On 12/12/2018 03:30 PM, Sebastian Laskawiec wrote:
> > On Tue, Dec 11, 2018 at 11:13 AM Radim Vansa <rvansa at redhat.com
> > <mailto:rvansa at redhat.com>> wrote:
> >
> >     I dislike having any logic based on the port number in some range;
> >     it's
> >     not common that behaviour would change if you set port to 9xxx
> >     instead
> >     of 8xxx.
> >
> >
> > That's not a problem with my approach, since you can always manually
> > turn the setting off or on. Here's how you do it:
> > ConfigurationBuilder cb = ...
> > cb.singlePort(SinglePortMode.ENABLED); // other options: DISABLED and AUTO
>
> Adding config options is just a way to avoid solving problems :)
> Remember the famous quote: "Less knobs!"
>

It depends on the problem, in my opinion. I have always preferred having
more configuration options with good defaults.


>
> >
> >
> >     Is there an (up-to-date) design doc?
> >
> >
> > No, this is just a proposal. I was hoping that you guys like it and
> > then, with some thumbs up, I could update the design doc.
> >
> > Here's the most up-to-date version in case you were looking for it:
> >
> https://github.com/infinispan/infinispan-designs/blob/master/Single_port.adoc
> >
> >
> >     I don't fully follow, but if there's a problem in the HTTP
> >     handlers you
> >     can add a PING-detecting handler below...?
> >
> >
> > Thanks for the hint Radim!
> >
> > Inspired by your idea I went ahead and checked how OpenShift Router
> > behaves. It turns out that it responds HTTP 400 if you throw Hotrod
> > bytes at it and then drops the connection.
>
> I understand that the reason to have Hot Rod PING sent as the first
> operation is to make sure that a new client that tries to connect to old
> server won't confuse the server, is that correct? Or is there anything
> else?
>

Not only that. We (Tristan and I, when we discussed it) wanted to
introduce the smallest possible hit on the default scenario - a Hotrod
client connecting to Infinispan without Single Port. If we mess up this
scenario, we might quickly run into all sorts of problems (including
performance issues).


>
> I'll assume that a new server will handle both Hot Rod PING and HTTP
> request correctly without any prelude, and old one will be ok with Hot
> Rod PING only. I don't really understand the:
>

I'm not sure I follow. The Single Port functionality for the non-TLS
scenario requires only the REST interface; Hotrod is optional. We upgrade
to Hotrod using the HTTP/1.1 Upgrade procedure (see RFC 7230 [1]). This
essentially means that you need to send an HTTP request in order to
upgrade to another protocol (such as Hotrod) and then reuse the same TCP
connection.

So as you can see, Single Port cannot speak both protocols, HTTP and
Hotrod, at the same time. It requires sending an HTTP request with the
proper headers and then upgrading to Hotrod. The reason it has been
implemented this way is to support reverse proxies (like HAProxy in the
OpenShift Router, but of course there are more of them). As I mentioned in
my last email, HAProxy immediately responds with HTTP 400 if you throw a
binary payload at it. That's why we need to follow the HTTP upgrade
procedure - to get through it.

[1]
https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#header.upgrade
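
To make the upgrade flow concrete, here is a minimal sketch (using Netty;
the "hotrod" upgrade token and the helper method are my own naming, not
the actual client code) of the kind of HTTP/1.1 request the client sends
before the connection is switched to Hotrod:

import io.netty.channel.Channel;
import io.netty.handler.codec.http.*;

class SinglePortUpgradeSketch {
   // Sends the HTTP/1.1 Upgrade request asking the server to switch the
   // current TCP connection from HTTP to Hotrod. On success the server
   // answers "101 Switching Protocols" and the same connection is reused
   // for Hotrod traffic.
   static void sendUpgradeRequest(Channel channel, String host) {
      FullHttpRequest upgrade = new DefaultFullHttpRequest(
            HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
      upgrade.headers()
            .set(HttpHeaderNames.HOST, host)
            .set(HttpHeaderNames.UPGRADE, "hotrod") // hypothetical token
            .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE);
      channel.writeAndFlush(upgrade);
   }
}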


>
>  > implementing it this way seems a bit "inconvenient" to me. The Ping
>  > Operation uses 60s timeout, which seems to be a good fit as a default.
>  > Unfortunately, for the Single Port functionality, this means we'd need
>  > to wait 60s until we try to send HTTP request and do an upgrade
>
> why would you wait for 60 seconds? If the other end is Infinispan server
> (old or new), you just send HR PING and you're done, server will proceed
> correctly. If the other server is a router, you'll get a response
> starting with 'HTTP': in Hot Rod protocol that would be parsed as opCode
> 0x54 which is illegal response code (the id belongs to
> COUNTER_RESET_REQUEST). At this point you know that this connection is
> going to be closed, and can immediately start another one (is *this* the
> problem?) that will send a HTTP request with Upgrade header.
>

This is correct. However, you silently assumed that the HTTP server
(either our own Infinispan REST server or any reverse proxy we're passing
through) will respond with an HTTP message (either 4xx or 5xx) when it
receives binary traffic. At this point I know that HAProxy does that, and
so does Envoy (as you mention at the bottom of this email). The Infinispan
REST server just ignores such a request.

So is it safe to assume that all reverse proxies will return an HTTP
message when we send a bunch of raw bytes at them? If yes, then we need to
correct the REST implementation (that should not be too hard) and kick
Single Port in when a Hotrod client receives an HTTP message back after
sending the PING. But then, if some reverse proxy between the server and
the client does something different (ignores such a request as our
Infinispan REST server does now, simply refuses the connection, or
anything else), we won't be able to upgrade the connection.
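
Just to show how cheap the detection itself would be, here is a sketch
(mine, not the actual decoder code) that checks whether the reply to the
PING starts with the ASCII bytes "HTTP" instead of the Hotrod response
magic:

import io.netty.buffer.ByteBuf;
import java.nio.charset.StandardCharsets;

final class PingReplySniffer {
   private static final byte[] HTTP_PREFIX =
         "HTTP".getBytes(StandardCharsets.US_ASCII);
   private static final byte HOTROD_RESPONSE_MAGIC = (byte) 0xA1;

   // Returns true if the first bytes of the reply look like an HTTP
   // status line rather than a Hotrod response header. In that case the
   // client is talking to a proxy (or a REST endpoint) and should retry
   // with an HTTP Upgrade request on a fresh connection.
   static boolean looksLikeHttp(ByteBuf reply) {
      if (reply.readableBytes() < HTTP_PREFIX.length) {
         return false; // not enough data to decide yet
      }
      if (reply.getByte(reply.readerIndex()) == HOTROD_RESPONSE_MAGIC) {
         return false; // regular Hotrod response
      }
      for (int i = 0; i < HTTP_PREFIX.length; i++) {
         if (reply.getByte(reply.readerIndex() + i) != HTTP_PREFIX[i]) {
            return false;
         }
      }
      return true;
   }
}

But as noted above, this only helps if the other end actually sends
something back.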

My proposal is to make Single Port support a client-side setting, so that
the client starts the communication with an HTTP message (and not the
Hotrod PING). Doing it this way allows us to forget about all those crazy
situations with reverse proxies. The key point is that we never throw raw
bytes at them. In my opinion this can save us a lot of headaches.

So, to be honest, I can implement it whichever way you prefer. My personal
feeling is that an additional setting on the Hotrod client is a much safer
bet (at least for now).
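
To illustrate what "an additional setting on the client" would mean in
practice, here is a sketch of the decision only (the enum matches the
earlier snippet; the AUTO branch reuses the 8xxx-port heuristic from the
original proposal, and the class/method names are made up for the example):

enum SinglePortMode { ENABLED, DISABLED, AUTO }

final class HandshakeDecision {
   // Decide how to open the connection: with an HTTP Upgrade request
   // (Single Port) or with a plain Hotrod PING (classic endpoint).
   static boolean startWithHttpUpgrade(SinglePortMode mode, int serverPort) {
      switch (mode) {
         case ENABLED:
            return true;
         case DISABLED:
            return false;
         case AUTO:
         default:
            // Heuristic from the proposal: covers 80, 8080, 8081, 8443...
            return String.valueOf(serverPort).matches("8\\d{1,3}");
      }
   }
}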

@Tristan, @Ryan, I think you guys also have some experience in this matter.
I would be very interested in hearing your opinion as well.


>
> > I also realized, there's one more moving bit - TCP Keepalive. Luckily,
> > we can control this setting over configuration in our standalone.xml.
> > However, it is perfectly legal what I've seen in Netty (do not respond
> > and keep the connection alive assuming that TCP Keepalive is set to
> true).
> >
> > The more I think about this, the more I'm convinced that the Single
> > Port support should be explicitly set in the client (or inferred from
> > the configuration). I do not know how Nginx, Linkerd or Envoy behaves
> > in situation when they expect HTTP and get a stream of bytes. Relying
> > on this partially unknown behavior for doing our upgrade procedure
> > doesn't seem right to me.
>
> FYI Envoy does the same, send 400 and terminate the connection.
>

OK, so it behaves the same as HAProxy.


>
> >
> > Just in case you're worried about the additional logic on the client
> > side - it's super small. Really, only 13 lines including brackets ;)
> >
> https://github.com/infinispan/infinispan/pull/6133/files#diff-684a10c939f31fcfef0f5f48d469393aR618
>
> I am not worried about any logic in the client code, I am worried about
> logic between chair and keyboard.
>

Ouch! That was pretty rude, Radim. Words like that make me feel very
uncomfortable and actually offend me. I would like to ask you to stick to
the technical aspects of this thread and not get personal.

As we've known each other for a pretty long time, I will pretend I didn't
see this and instead read it as "please have a look at what I just wrote,
maybe you will reconsider your implementation?".


>
> R.
>
> >
> >     Radim
> >
> >     On 12/10/2018 03:27 PM, Sebastian Laskawiec wrote:
> >     > Hey guys,
> >     >
> >     > During Infinispan F2F, I had a short discussion with Tristan on
> >     Single
> >     > Port client-side implementation. Back then, we agreed that the
> >     client
> >     > should always send a Hotrod Ping request and if won't get any
> >     response
> >     > (or get some HTTP content back), it will try to upgrade to the
> >     Hotrod
> >     > protocol using Single Port.
> >     >
> >     > I've been playing with the implementation for a while, and
> >     > implementing it this way seems a bit "inconvenient" to me. The Ping
> >     > Operation uses 60s timeout, which seems to be a good fit as a
> >     default.
> >     > Unfortunately, for the Single Port functionality, this means
> >     we'd need
> >     > to wait 60s until we try to send HTTP request and do an upgrade.
> >     Also,
> >     > another problematic part is in Netty's HTTP handlers
> >     > (HttpObjectDecoder, HttpServerCodec and ByteToMessageDecoder). When
> >     > those classes fail to decode a message (REST expects HTTP rather
> >     than
> >     > a stream of bytes specific to Hotrod protocol), they just ignore it
> >     > and keep the channel in active state (which also makes sense for
> >     > HTTP/1.1 and HTTP/2).
> >     >
> >     > At this point, my intuition tells, that this doesn't look right and
> >     > seems to be a over-complicated. The whole HTTP upgrade idea
> >     seems to
> >     > work the other way around, use HTTP as a fallback and then
> >     upgrade to
> >     > other protocols. Forcing it to work a bit differently requires some
> >     > more effort.
> >     >
> >     > What if we preserved the Single Port setting in the client
> >     > configuration but implemented it as an enum with the following
> >     values
> >     > - true/false/auto. In automatic mode, the client would check if the
> >     > server port is set to 8\d{1,3} (this covers 80, 8080, 8081, 8443
> >     and
> >     > friends). If that is true, we'd try to follow HTTP Upgrade
> >     procedure.
> >     > This looks very simple and I think this might actually work. Please
> >     > note, that we need the single port setting in the client
> >     configuration
> >     > to cover some corner cases like the Single Port exposed on
> >     different
> >     > port (like 4444) or Hot Rod exposed on port that starts with 8.
> >     >
> >     > What do you think about such simplification?
> >     >
> >     > Thanks,
> >     > Sebastian
> >     >
> >     >
> >     >
> >     >
> >     > _______________________________________________
> >     > infinispan-dev mailing list
> >     > infinispan-dev at lists.jboss.org
> >     <mailto:infinispan-dev at lists.jboss.org>
> >     > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> >     --
> >     Radim Vansa <rvansa at redhat.com <mailto:rvansa at redhat.com>>
> >     JBoss Performance Team
> >
> >     _______________________________________________
> >     infinispan-dev mailing list
> >     infinispan-dev at lists.jboss.org <mailto:
> infinispan-dev at lists.jboss.org>
> >     https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> --
> Radim Vansa <rvansa at redhat.com>
> JBoss Performance Team
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev