On Dec 6, 2013, at 3:52 PM, Mircea Markus <mmarkus(a)redhat.com> wrote:
Some notes:
"This means that the Hot Rod protocol will be extended so that operation headers
always carry a Source ID field."
- shall we add a new intelligence level to handle this? Besides reducing the payload,
it would allow upgrading the Java and C++ clients independently.
Hmmm, not sure about the usability of an intelligence level in this context. We added that
flag to deal with different responses from the server, so there's always a request first.
Independent upgrading is possible today, since the server talks earlier protocol versions.
So, Java clients could talk protocol version 1.4 and C++ clients 1.3. When the C++ clients
support all the features in 1.4, they can start talking that protocol.
Also, the source ID can be any byte array. If clients are not interested in registering
listeners, they can just send an empty byte[]. When they want to register listeners, then
it becomes important to have a good source ID.
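As an aside, here's a minimal sketch of what a client-side source ID might look like, assuming a UUID-based scheme. The names and the 16-byte layout are illustrative only, not part of the protocol spec:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class SourceIdExample {
    // Clients not interested in listeners can just send an empty source ID.
    static final byte[] NO_SOURCE_ID = new byte[0];

    // One hypothetical way to build a reasonably unique source ID:
    // serialise a random UUID into a 16-byte array.
    static byte[] newSourceId() {
        UUID uuid = UUID.randomUUID();
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(newSourceId().length); // 16
        System.out.println(NO_SOURCE_ID.length);  // 0
    }
}
```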
In one of our discussions, you've also mentioned that you'd
want to use the cluster listeners as a foundation for this functionality. That doesn't
seem to be the case from the document, or is it? Not that it's a bad thing; I just want
to clarify the relation between the two.
For both clustered listeners and remote listeners, we're going to need some kind of
cluster-wide information about the listeners. For clustered listeners, it's needed in
order to route events. For remote listeners, it's pretty much the same thing. If a client C
registers a listener L in server S1, and then a distributed put arrives in S2, S2 somehow
needs to know that it has to send an event remotely. I see both using the
clustered registry for this.
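To illustrate the routing idea above, here's a toy sketch where a plain map stands in for the clustered registry. The registry type, method names, and node names are illustrative only; in practice this information would be replicated cluster-wide:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ListenerRegistrySketch {
    // Stand-in for the cluster-wide registry: listener id -> the node
    // that holds the client's connection.
    static final Map<String, String> registry = new ConcurrentHashMap<>();

    // Client C registers listener L via server S1.
    static void register(String listenerId, String serverNode) {
        registry.put(listenerId, serverNode);
    }

    // A put landing on S2 looks up where to route the remote event.
    static String routeEventFor(String listenerId) {
        return registry.get(listenerId);
    }

    public static void main(String[] args) {
        register("L", "S1");
        // S2 discovers it must forward the event to S1, which owns C's connection.
        System.out.println(routeEventFor("L")); // S1
    }
}
```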
The difference between the two is really the communication layer and protocol used to send
those events. For clustered listeners, you'd send them through JGroups; for remote
events, they go through Netty, formatted as per the Hot Rod protocol.
Another way to handle connection management, based on clustered
listeners, would be:
- the node to which the listener's ID hashes is the only one responsible for piggybacking
notifications to the remote client
- it creates a cluster listener to be notified about what to send to the client (it can
make use of the cluster listener's filtering and transformer capabilities here)
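A toy sketch of that filtering/transformer step, with hypothetical filter and converter hooks (these names are made up for illustration, not the actual cluster listener API):

```java
import java.util.function.Function;
import java.util.function.Predicate;

public class FilteredListenerSketch {
    // A minimal event shape for the sketch.
    record Event(String key, String value) {}

    // Returns the converted payload to send to the remote client,
    // or null when the filter drops the event.
    static String toClientPayload(Event e, Predicate<Event> filter,
                                  Function<Event, String> converter) {
        return filter.test(e) ? converter.apply(e) : null;
    }

    public static void main(String[] args) {
        Predicate<Event> keyFilter = ev -> ev.key().startsWith("user:");
        Function<Event, String> keysOnly = Event::key; // ship keys only, not values
        System.out.println(toClientPayload(new Event("user:1", "Alice"), keyFilter, keysOnly));
        System.out.println(toClientPayload(new Event("sys:ping", "x"), keyFilter, keysOnly));
    }
}
```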
From a connection management perspective, that's an interesting
idea, but there's a slight mismatch in the filtering area. As shown in the design
document for remote events, the clustered listener's API receives different
information from the callback API for remote events. IOW, for remote events there's
extra information, such as the source ID of the Hot Rod operation and the source ID to
which the event is directed.
The converter would probably work as it is, but the callback API you wrote for
metadata might be a bit limited (no previous value, no previous metadata).
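For illustration, here's a hypothetical callback shape that would also carry the previous value and metadata. The interface name and parameters are invented for this sketch, not taken from the design doc:

```java
public class RemoteEventCallbackSketch {
    // Hypothetical richer callback that exposes previous state as well,
    // addressing the limitation mentioned above.
    interface EntryModifiedCallback<K, V, M> {
        void onModified(K key, V value, M metadata, V previousValue, M previousMetadata);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        EntryModifiedCallback<String, String, Long> cb =
            (k, v, m, pv, pm) -> log.append(k).append(':').append(pv).append("->").append(v);
        cb.onModified("k1", "new", 2L, "old", 1L);
        System.out.println(log); // k1:old->new
    }
}
```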
Comparing the two approaches: this approach reuses some code from the cluster listeners
(not sure how much; we might be able to do that anyway) and also reduces the number of
connections required between client and server, but at the cost of
performance/network hops. Also, the number of connections a client is required to have
hasn't been a problem yet.
One more note on ST: during state transfer a node might receive the same notification
multiple times (from the old owner and the new owner). I guess it makes sense to document that?
I already mentioned something along those lines in the `Non-ACK'd events` section.
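As a side note, a client could guard against those duplicate notifications by tracking event ids, assuming events carry some unique id. This is a hypothetical scheme, not something the design doc prescribes:

```java
import java.util.HashSet;
import java.util.Set;

public class DedupSketch {
    // During state transfer the same event may arrive from both the old and
    // the new owner; the client drops duplicates by remembering event ids.
    static final Set<Long> seen = new HashSet<>();

    // Returns true if the event should be delivered, false if it's a duplicate.
    static boolean deliver(long eventId) {
        return seen.add(eventId);
    }

    public static void main(String[] args) {
        System.out.println(deliver(42L)); // true: first delivery (e.g. from old owner)
        System.out.println(deliver(42L)); // false: duplicate (e.g. from new owner)
    }
}
```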
Cheers,
On Dec 5, 2013, at 4:16 PM, Galder Zamarreño <galder(a)redhat.com> wrote:
> Hi all,
>
> Re:
> https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>
> Thanks a lot for the feedback provided in the last thread. It was very constructive
> feedback :)
>
> I've just finished updating the design document with the feedback provided in the
> previous email thread. Can you please have another read and let the list know what you
> think of it?
>
> Side note: The scope has grown (with the addition of filters/converters), so we
> might need to consider whether we want all the features in the next version, or whether
> some parts could be branched out to future iterations.
+1. Can we include the notification ack in the optional category?
What about leaving these as the last bit to be implemented? If time allows (so as not to
delay the release) we can add them; otherwise we add them in future iterations?
>
> Cheers,
> --
> Galder Zamarreño
> galder(a)redhat.com
>
> twitter.com/galderz
>
> Project Lead, Escalante
>
> http://escalante.io
>
> Engineer, Infinispan
>
> http://infinispan.org
>
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org