On Fri, Apr 4, 2014 at 3:29 AM, Radim Vansa <rvansa(a)redhat.com> wrote:
Hi,
I still don't think the document properly covers the description of
failover.
My understanding is that the client registers clustered listeners on one server
(the first one it connects to, I guess). There's some room for optimization,
since the notification will be sent from the primary owner to this node and only
then over Hot Rod to the client, but I don't want to discuss that now.
There could be optimizations, but we have to worry about reordering if
the primary owner doesn't do the forwarding. Consider multiple writes to
the same key from different clients: if each writer sent the message to
the listener only after its write was applied to the cache, there would be
no way to guarantee that the notifications arrive in the order the writes
were applied. We could do something with versions for this, though.
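Roughly what I have in mind (just a sketch; VersionedEvent is a made-up type,
the real thing would use whatever version the primary owner assigns):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class VersionOrderingFilter {

       // hypothetical event shape: carries the version the primary owner assigned
       public interface VersionedEvent {
          String key();
          long version();
       }

       private final ConcurrentMap<String, Long> lastSeen = new ConcurrentHashMap<>();

       // Returns true if the event should be delivered, false if an event with a
       // newer version has already been seen for this key (it arrived out of order).
       public boolean accept(VersionedEvent event) {
          long winner = lastSeen.merge(event.key(), event.version(), Long::max);
          return winner == event.version();
       }
    }

The receiver would simply drop anything accept() rejects; duplicates carrying the
same version still get through, which is fine for at-least-once semantics.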
> Listener registrations will survive node failures thanks to the underlying
> clustered listener implementation.
I am not that much into clustered listeners yet, but I think the
mechanism makes sure that when the primary owner changes, the new owner will
then send the events. But when the node which registered the clustered
listener dies, the others will just forget about it.
That is how it is. I assume Galder was referring to failures of nodes
other than the one that registered the listener, which is what the next
point talks about.
> When a client detects that the server which was serving the events is
> gone, it needs to resend its registration to one of the nodes in the
> cluster. Whoever receives that request will again loop through its contents
> and send an event for each entry to the client.
Will that be all entries in the whole cache, or just those from some node? I
guess the former is correct. So, as soon as one node dies, all clients will be
bombarded with the full cache content (OK, filtered), even if these entries
have not changed, because the cluster can't know.
The former: the entire filtered/converted contents will be sent over.
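To make the failover sequence concrete, the client side could look something
like this (pure sketch; all of these types are invented for illustration and
none of them are from the design doc):

    import java.util.List;

    public class EventFailoverLoop {

       // invented placeholder abstractions, just to make the flow readable
       interface ServerAddress {}
       interface ListenerRegistration {}
       interface EventTransport extends AutoCloseable {
          void register(ListenerRegistration registration);
          void readEventsUntilClosed() throws Exception;
       }
       interface TransportFactory {
          EventTransport connect(ServerAddress server) throws Exception;
       }

       private final List<ServerAddress> servers;
       private final ListenerRegistration registration;
       private final TransportFactory transports;

       public EventFailoverLoop(List<ServerAddress> servers,
                                ListenerRegistration registration,
                                TransportFactory transports) {
          this.servers = servers;
          this.registration = registration;
          this.transports = transports;
       }

       public void run() {
          int next = 0;
          while (!Thread.currentThread().isInterrupted()) {
             ServerAddress server = servers.get(next++ % servers.size());
             try (EventTransport transport = transports.connect(server)) {
                // Re-send the registration: the node that receives it iterates over
                // the (filtered/converted) cache contents, sends one event per entry,
                // and then streams on-going events, hence at-least-once delivery.
                transport.register(registration);
                transport.readEventsUntilClosed();
             } catch (Exception e) {
                // The server serving the events is gone; try the next one.
             }
          }
       }
    }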
> This way the client avoids losing events. Once all entries have been
> iterated over, on-going events will be sent to the client.
> This way of handling failure means that clients will receive at-least-once
> delivery of cache updates. They might receive multiple events for the same
> cache update as a result of topology change handling.
So, if there are several modifications before the client reconnects and the
new target registers the listener, the clients will only get a notification
about the last modification, or rather just the entry content, right?
This is all handled by the embedded cluster listeners though. The end
result is that you will only receive one event if the modification happens
before the value is retrieved from the remote node, or two if it happens
afterwards. Also, these modifications are queued by key, so if there were
multiple modifications before the value was retrieved, you would only get
the last one.
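The queueing behaviour boils down to something like this (not the actual
cluster listener code, just a sketch of the idea):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CoalescingKeyQueue<K, V> {

       private final Map<K, V> pending = new LinkedHashMap<>();

       // Called for each modification that arrives while the initial value for
       // the key is still being retrieved; later values replace earlier ones.
       public synchronized void enqueue(K key, V latestValue) {
          pending.put(key, latestValue);
       }

       // Called once the retrieval for the key has completed; returns the last
       // modification queued for it, or null if none arrived in the meantime.
       public synchronized V drain(K key) {
          return pending.remove(key);
       }
    }

So even if ten writes land on the same key while the value is being pulled
over, the client only ever sees the final state.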
Radim
On 04/02/2014 01:14 PM, Galder Zamarreño wrote:
Hi all,
I've finally managed to get around to updating the remote Hot Rod event
design wiki [1].
The biggest changes are related to piggybacking on the cluster listeners
functionality for the registration/deregistration of listeners and for
handling failure scenarios. This should simplify the actual implementation
on the Hot Rod side.
Based on feedback, I've also changed some of the class names so that it's
clearer what's client side and what's server side.
A very important change is the fact that source id information is gone.
This is primarily because near-cache-like implementations cannot make
assumptions about what to store in the near caches when the client invokes
operations. Such implementations need to act purely on the events received.
Finally, the filter/converter plugging mechanism will be done via factory
implementations, which provide more flexibility in the way filter/converter
instances are created. This opens up the possibility of filter/converter
factory parameters being added to the protocol and passed, after
unmarshalling, to the factory callbacks (this is not included right now).
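To give an idea of the shape this could take (the names below are made up for
illustration; the actual interfaces are the ones described on the wiki page),
a factory could receive the unmarshalled protocol parameters and build a
filter from them:

    import java.util.Arrays;

    public class FilterFactorySketch {

       // hypothetical server-side filter callback
       public interface KeyValueFilter<K, V> {
          boolean accept(K key, V value);
       }

       // hypothetical factory callback receiving unmarshalled protocol parameters
       public interface KeyValueFilterFactory {
          KeyValueFilter<byte[], byte[]> getFilter(Object[] params);
       }

       // Example: only forward events for the single key passed as a parameter.
       public static class AcceptSingleKeyFactory implements KeyValueFilterFactory {
          @Override
          public KeyValueFilter<byte[], byte[]> getFilter(Object[] params) {
             final byte[] wantedKey = (byte[]) params[0];
             return (key, value) -> Arrays.equals(key, wantedKey);
          }
       }
    }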
I hope to get started on this in the next few days, so feedback at this
point is crucial to get a solid first release.
Cheers,
[1]
https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
--
Radim Vansa <rvansa(a)redhat.com>
JBoss DataGrid QA
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev