[infinispan-dev] Remote Hot Rod events wiki updated

Radim Vansa rvansa at redhat.com
Fri Apr 4 03:29:58 EDT 2014


Hi,

I still don't think that the document properly covers failover.

My understanding is that the client registers a clustered listener on one 
server (the first one it connects to, I guess). There's some space for 
optimization, as the notification will be sent from the primary owner to 
this node and only then over Hot Rod to the client, but I don't want to 
discuss that now.

 > Listener registrations will survive node failures thanks to the 
underlying clustered listener implementation.

I am not that much into clustered listeners yet, but I think that the 
mechanism makes sure that when the primary owner changes, the new owner 
will then send the events. But when the node which registered the 
clustered listener dies, the others will simply forget about it.

 > When a client detects that the server which was serving the events is 
gone, it needs to resend its registration to one of the nodes in the 
cluster. Whoever receives that request will again loop through its 
contents and send an event for each entry to the client.

Will that be all entries in the whole cache, or just those from some 
node? I guess the former is correct. So, as soon as one node dies, all 
clients will be bombarded with the full cache content (ok, filtered), 
even if these entries have not changed, because the cluster can't know.
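If I read the design right, the replay on re-registration would look 
roughly like the sketch below (my own illustration, not code from the 
wiki; the names `replay`, `filter` and `sendEvent` are made up). The 
point is that one event is emitted per filtered entry, whether or not 
the entry changed since the client last saw it:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class ReplaySketch {
    // Hypothetical replay: on re-registration, the node that received the
    // request walks the whole (filtered) cache and emits one event per
    // entry, changed or not -- the cluster cannot tell what the client saw.
    static <K, V> int replay(Map<K, V> cache,
                             Predicate<K> filter,
                             Consumer<Map.Entry<K, V>> sendEvent) {
        int sent = 0;
        for (Map.Entry<K, V> e : cache.entrySet()) {
            if (filter.test(e.getKey())) {
                sendEvent.accept(e); // fired even if the entry never changed
                sent++;
            }
        }
        return sent;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new LinkedHashMap<>();
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        // Filter drops "b"; the two remaining entries are replayed.
        int sent = replay(cache, k -> !k.equals("b"), e -> { });
        System.out.println(sent); // 2
    }
}
```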

 > This way the client avoids losing events. Once all entries have been 
iterated over, on-going events will be sent to the client.

 > This way of handling failure means that clients will receive 
/at-least-once/ delivery of cache updates. It might receive multiple 
events for the cache update as a result of topology changes handling.

So, if there are several modifications before the client reconnects and 
the new target registers the listener, the client will get only a 
notification about the last modification, or rather just the current 
entry content, right?
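That delivery model seems workable for the near-cache case as long as 
events carry the entry content and the client handler is idempotent. A 
minimal sketch of what I mean (my own illustration, hypothetical names 
`onEvent` and `near`): duplicate or collapsed events converge to the 
same state because the handler just overwrites.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NearCacheSketch {
    // Hypothetical near cache acting purely on received events: at-least-once
    // delivery is harmless because applying the same event twice, or only the
    // last of several modifications, yields the same final state.
    final Map<String, String> near = new ConcurrentHashMap<>();

    void onEvent(String key, String value) {
        near.put(key, value); // idempotent: replaying the event is a no-op
    }

    public static void main(String[] args) {
        NearCacheSketch nc = new NearCacheSketch();
        nc.onEvent("k", "v2"); // only the latest modification is observed
        nc.onEvent("k", "v2"); // duplicate after failover: same result
        System.out.println(nc.near.get("k")); // v2
    }
}
```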

Radim


On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
> Hi all,
>
> I’ve finally managed to get around to updating the remote hot rod event design wiki [1].
>
> The biggest changes are related to piggybacking on the cluster listeners functionality for registration/deregistration of listeners and the handling of failure scenarios. This should simplify the actual implementation on the Hot Rod side.
>
> Based on feedback, I’ve also changed some of the class names so that it’s clearer what’s client side and what’s server side.
>
> A very important change is the fact that source id information has gone. This is primarily because near-cache like implementations cannot make assumptions on what to store in the near caches when the client invokes operations. Such implementations need to act purely on the events received.
>
> Finally, a filter/converter plugging mechanism will be done via factory implementations, which provide more flexibility on the way filter/converter instances are created. This opens the possibility for filter/converter factory parameters to be added to the protocol and passed, after unmarshalling, to the factory callbacks (this is not included right now).
>
> I hope to get started on this in the next few days, so feedback at this point is crucial to get a solid first release.
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
> --
> Galder Zamarreño
> galder at redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


-- 
Radim Vansa <rvansa at redhat.com>
JBoss DataGrid QA
