On Dec 13, 2013, at 3:11 PM, Radim Vansa <rvansa(a)redhat.com> wrote:
On 12/13/2013 02:44 PM, Galder Zamarreño wrote:
> On Dec 6, 2013, at 10:45 AM, Radim Vansa <rvansa(a)redhat.com> wrote:
>
>> Hi,
>>
>> 1) IMO, filtering for a specific key is a very important use case. Registering a
>> filterId is a very powerful feature, but as long as you don't provide a runtime
>> parameter for this filter, you cannot implement one-key filtering.
> What do you mean by runtime parameter exactly? Can you give a concrete example of
> what you want to achieve that is not possible with what I've written up?
As I stressed, if the client wants to listen for events on key_123456, then you can
deploy a filter matching key_{number} (and additional constraints), but 123456 is not
known at deployment time.
True, that's a limitation of the current approach, but I don't see it as crucial as
long as we have some static filtering in place. The feature itself is already pretty
large, so I'd consider this (dynamic filtering) at a later point.
>
>> 2) Setting ack/no ack in the listener, and then configuring server-side whether you
>> should ack each event / only the last one, sounds weird. I'd replace the boolean with an
>> enum { NO_ACK, ACK_EACH, ACK_LAST }.
> Makes a lot of sense, +1.
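For reference, the proposed replacement is as simple as it sounds; a sketch (AckMode is an illustrative name, not settled API):

```java
// Sketch of the enum proposed to replace the boolean ack flag.
enum AckMode {
    NO_ACK,    // client never acknowledges events
    ACK_EACH,  // client acknowledges every event
    ACK_LAST   // client acknowledges only the last event received
}
```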
>
>> 3) Should the client provide the source id when registering a listener, or when starting
>> the RemoteCacheManager? There is no API for that.
> Every operation will require a source ID from now on, so clients must provide it from the
> first operation sent to the server. From a Java client perspective, you'd have this
> from the start via the configuration.
>
>> 4) The clustered events design does not specify any means of replicating the
>> clustered event listener - all it says is that you register the listener on one node and
>> the other nodes then route events to this node, until the node dies/deregisters the
>> listener. No replication. Please specify how this should piggyback on clustered events, and
>> how the listener list should be replicated.
> In clustered listeners, the other nodes you talk about are gonna need to know about
> the clustered listeners so that they route events. Some kind of information about these
> clustered listeners will need to be sent around the cluster. The exact details are
> probably implementation details, but we already have a clustered registry in place for this
> kind of thing. In any case, it'd make a lot of sense for both use cases to reuse as
> much logic as possible in this area.
OK, this is probably the desired behaviour, it just isn't covered by the Clustered
Events design draft. Probably something to add - I'll ping Mircea about that. And
you're right that it would make a lot of sense to have a shared structure for the
listeners, and two implementations of the delivery boy (one delivering to the node where a
clustered event listener has been registered, and a second to the local component handling
Hot Rod clients).
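As a rough sketch of the "shared listener structure, two delivery boys" idea (all names hypothetical, not Infinispan API): the registry is shared, while delivery is pluggable - one implementation would forward to the node where a clustered listener was registered, the other would hand events to the local component serving Hot Rod clients.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical names throughout; this only illustrates the structure.
interface DeliveryBoy {
    void deliver(String listenerId, String event);
}

// Would wrap an RPC to the node that registered the clustered listener;
// here it just records what would be sent.
class ClusteredEventDelivery implements DeliveryBoy {
    final List<String> forwarded = new ArrayList<>();
    @Override public void deliver(String listenerId, String event) {
        forwarded.add("to-owner-node/" + listenerId + ": " + event);
    }
}

// Would push to the server-side Hot Rod connection of the remote client.
class HotRodDelivery implements DeliveryBoy {
    final List<String> pushed = new ArrayList<>();
    @Override public void deliver(String listenerId, String event) {
        pushed.add("to-hotrod-client/" + listenerId + ": " + event);
    }
}

// The shared part: one listener list, with the delivery strategy chosen
// per listener at registration time.
class ListenerRegistry {
    private final Map<String, DeliveryBoy> listeners = new HashMap<>();
    void register(String listenerId, DeliveryBoy boy) { listeners.put(listenerId, boy); }
    void onEvent(String event) {
        listeners.forEach((id, boy) -> boy.deliver(id, event));
    }
}
```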
>
>> 5) Non-acked events: how exactly do you expect the ack data to be replicated and
>> updated? I see three options:
>> A) Let the non-acked list be part of the listener record in a replicated cache, and have
>> the primary owner which executes the event update it via delta messages. I guess that
>> for proper reliability it should add the operation record synchronously before confirming
>> the operation to the originator, and then it might asynchronously remove it after the ack
>> from the client. When a node becomes primary owner, it should resend all non-acked events
>> to the client.
>> B) Have the non-acked list attached directly to the cache entry (updating it
>> together with the regular backup), and then asynchronously update it after the ack
>> comes.
>> C) A separate cache for acks, keyed by entry keys, similar to B, with its consistent hash
>> synced with the main entry cache.
> Definitely not B. I don't wanna tie the internal cache entry to the ACKs. The two
> should be independent. Either C or A. For C, you'd wish to have a single cache for
> all listeners+caches, but you'd have to think about the keys: to have the same
> consistent hash, you'd have to use the same keys. A might be better, but you certainly
> don't want this ACK info in a replicated structure. You'd want the ACKs in a
> distributed cache preferably, and the clustered listener info in the clustered replicated
> registry.
There already is a CH implementation which aims at sharing the same distribution across
all caches, SyncConsistentHash. Is there some problem with C plus forcing this for the
caches? Dan?
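To pin down what option (A) implies, here is a purely illustrative sketch (not Infinispan code) of the per-listener record: the primary owner adds an event id synchronously before confirming the operation, removes it asynchronously when the client's ack arrives, and a node that becomes primary owner replays whatever is still pending:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of option (A)'s non-acked bookkeeping; in the real
// design this state would live in the listener record of a (distributed)
// cache and be updated via delta messages.
class ListenerAckRecord {
    private final Set<Long> pending = new LinkedHashSet<>();

    // called synchronously before confirming the operation to the originator
    synchronized void recordEvent(long eventId) {
        pending.add(eventId);
    }

    // called asynchronously once the client acks the event
    synchronized void ack(long eventId) {
        pending.remove(eventId);
    }

    // called when this node becomes primary owner: these must be resent
    synchronized List<Long> replayOnNewPrimary() {
        return new ArrayList<>(pending);
    }
}
```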
Radim
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org