[infinispan-dev] Design of Remote Hot Rod events

Galder Zamarreño galder at redhat.com
Mon Dec 2 10:44:22 EST 2013


On Dec 2, 2013, at 10:57 AM, Radim Vansa <rvansa at redhat.com> wrote:

> On 11/26/2013 04:10 PM, Galder Zamarreño wrote:
>> Hi Radim,
>> 
>> Thanks for the excellent feedback, comments below:
>> 
>> On Nov 13, 2013, at 11:33 AM, Radim Vansa <rvansa at redhat.com> wrote:
>> 
>> 
>>> 3. IMO, registering events for particular keys is not really optional. If
>>> you allow only an all-keys listener, you end up with users wrecking
>>> performance by registering listeners with if (key.equals(myKey)) {…}.
>> Yeah, if users do that, there's a lot of traffic wasted, but again, I had the near cache use case in mind where you're interested in all data in the cache, as opposed to a subset. However, it could be added to the design.
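^ To make the trade-off concrete: below is the local-filtering anti-pattern next to what server-side key filtering could look like. This is a sketch only; remoteCache, myKey, handle() and the addListener overloads are hypothetical placeholders, not the proposed API.

   // Anti-pattern: every event in the cache crosses the wire, and the
   // client throws almost all of them away locally.
   remoteCache.addListener(event -> {
      if (event.getKey().equals(myKey))
         handle(event);
   });

   // With a server-side key filter (hypothetical overload), only events
   // for myKey are ever sent to this client.
   remoteCache.addListener(event -> handle(event), Collections.singleton(myKey));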
> 
> I can imagine the near cache caching only what the client was previously 
> interested in. You don't want to cache, on one client, all the petabytes 
> of data Infinispan will hold in the cluster. That does not scale, and 
> Infinispan is all about scaling.

^ It's common for near caches to have an aggressive eviction policy, precisely so that a client never holds anywhere near the full data set.
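To make that concrete, here's a minimal sketch of a size-bounded near cache with LRU eviction (pure illustration, not part of the design):

   import java.util.LinkedHashMap;
   import java.util.Map;

   // A hard cap plus LRU eviction means that listening to all events
   // never implies holding the whole data set on the client.
   class NearCache<K, V> extends LinkedHashMap<K, V> {
      private final int maxEntries;

      NearCache(int maxEntries) {
         super(16, 0.75f, true);   // access-order iteration = LRU
         this.maxEntries = maxEntries;
      }

      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
         return size() > maxEntries;   // evict once over the cap
      }
   }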

> Besides that, being interested in all data and not providing the CREATE 
> event seems somewhat contradictory to me.
> 
>> 
>>> 4. It seems to me that one global listener per client per cache is
>>> enough. Will the client code register such a single listener and multiplex
>>> all the events to the locally registered listeners? Related to 3., if you
>>> don't implement filtering by key on the server, you should at least
>>> provide this as a client API and do the equals check locally.
>>> Nevertheless, this would require key equality on the client.
>> Not sure I understand your point ^.
> 
> The application could register multiple identical listeners. If the 
> client code were dumb, it would register the same listener twice on the 
> server -> send notifications twice -> redundant traffic & processing on 
> both client and server.
> Let's decide whether it's the responsibility of the application code to 
> avoid this scenario, or whether the client should do it.

True. I don't have an answer for that yet.

How feasible that is will probably depend on how clients maintain their listener information.

It's an interesting edge case for sure.
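One way the client could guard against it, following the one-global-listener-per-client-per-cache model from point 4, is to keep a local registry, deduplicate there, and multiplex the single server-fed event stream. A sketch only; CacheEvent and the wire-level registration call are placeholders, not real client internals:

   import java.util.Set;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.AtomicBoolean;
   import java.util.function.Consumer;

   class ClientListenerRegistry<CacheEvent> {
      private final Set<Consumer<CacheEvent>> listeners = ConcurrentHashMap.newKeySet();
      private final AtomicBoolean registered = new AtomicBoolean();

      // The set makes a duplicate add a local no-op, and the flag ensures
      // the server-side registration happens exactly once per cache.
      void add(Consumer<CacheEvent> listener) {
         if (listeners.add(listener) && registered.compareAndSet(false, true))
            registerWithServer();
      }

      // The single event stream from the server is multiplexed locally.
      void dispatch(CacheEvent event) {
         listeners.forEach(l -> l.accept(event));
      }

      private void registerWithServer() {
         // Placeholder for the actual Hot Rod registration operation.
      }
   }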

>> 
>> 
>>> 8. As the client itself is responsible for contacting each server and
>>> registering the listener, there's another scenario besides server
>>> failure. It takes some time before the client receives the new topology, so
>>> another server might join and become primary owner - the client does not
>>> register with that server until it's too late, and so misses the update.
>>> Even after the client connects, the server has not tracked the listener and
>>> can't see that it should send the update.
>>> A solution for this would be to keep a cache of listeners (replicated for
>>> global ones, distributed for key-filtered ones), delay all writes until this
>>> cache is replicated, and then keep the event in memory even if the client
>>> is not yet connected.
>> That's certainly an interesting scenario. I'm not sure there's a need for a replicated/distributed cache at all here; in fact, in this design I've tried to avoid any kind of clustered state. Any newly joining node could keep a buffer of events for some amount of time X, giving all clients a window in which to register their listeners with the new server and still receive events if they register late.
> OK, keeping some history would solve that as well.

^ A better way to solve this is to maintain listener information cluster-wide, which we'll get as a result of clustered listeners. That way, clients do not need to re-register when a new node joins. See the side note in [1].

[1] http://lists.jboss.org/pipermail/infinispan-dev/2013-November/014230.html
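For completeness, here's a minimal sketch of the event-history idea, i.e. a time-bounded buffer a joining node could keep so that late-registering clients can catch up. Event is a placeholder type and windowMillis is the X grace period mentioned above; none of this is the actual implementation:

   import java.util.ArrayDeque;
   import java.util.ArrayList;
   import java.util.Deque;
   import java.util.List;

   // Keeps the last windowMillis worth of events so that a client whose
   // listener registration arrives late can still be fed the updates.
   class RecentEventBuffer<Event> {
      private record Timestamped<E>(long time, E event) {}

      private final Deque<Timestamped<Event>> buffer = new ArrayDeque<>();
      private final long windowMillis;

      RecentEventBuffer(long windowMillis) {
         this.windowMillis = windowMillis;
      }

      synchronized void record(Event e) {
         long now = System.currentTimeMillis();
         buffer.addLast(new Timestamped<>(now, e));
         // Drop anything that has fallen out of the grace window.
         while (!buffer.isEmpty() && now - buffer.peekFirst().time() > windowMillis)
            buffer.removeFirst();
      }

      // Replayed to a client that registered after the events occurred.
      synchronized List<Event> replaySince(long time) {
         List<Event> out = new ArrayList<>();
         for (Timestamped<Event> t : buffer)
            if (t.time() >= time)
               out.add(t.event());
         return out;
      }
   }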

> Now, as there will be some code feeding the client with updates, I think 
> information about topology changes should go through that channel as 
> well, in order to reduce the history period.

Nice to have, but out of scope here. Hot Rod will reuse the same channels the client opened in order to send the updates. Topology changes could be handled the same way, but I don't expect to make that change at this stage.

Cheers,

> 
> Radim
> 
>> 
>> Cheers,
>> 
>>> Radim
>>> 
>>> 
>>> On 11/12/2013 04:17 PM, Galder Zamarreño wrote:
>>>> Hi all,
>>>> 
>>>> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>>>> 
>>>> I've just finished writing up the Hot Rod remote events design document. Amongst many other use cases, this will enable near caching use cases with the help of Hot Rod client callbacks.
>>>> 
>>>> Cheers,
>>> 
>> 
> 
> 
> -- 
> Radim Vansa <rvansa at redhat.com>
> JBoss DataGrid QA
> 


--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org



