[infinispan-dev] Design of remote event handling in Hot Rod

Galder Zamarreño galder at redhat.com
Tue Feb 21 11:00:21 EST 2012


On Feb 20, 2012, at 8:13 PM, Manik Surtani wrote:

> Thanks for putting this together, it looks good.  A few comments:
> 
> *  The Hot Rod Java Client API should probably look more like JSR 107's listener API rather than Infinispan's annotation-based one.  In future (Infinispan 6?) we'll deprecate our core annotation based API in favour of JSR 107's one.

Ok, I'll have a look at JSR 107's API and modify the doc accordingly.
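For reference, JSR 107 listeners are plain interfaces, one per event type, rather than annotated methods. A minimal, self-contained sketch of that shape (these are simplified stand-ins, not the real javax.cache.event types):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal stand-ins for the JSR 107 event shapes (the real types live in
// javax.cache.event); shown only to contrast the interface-per-event-type
// style with our annotation-based @Listener API.
public class Jsr107StyleListener {

    interface CacheEntryEvent<K, V> {
        K getKey();
        V getValue();
    }

    // In JSR 107 a listener implements one interface per event type it wants,
    // e.g. CacheEntryCreatedListener, instead of annotating arbitrary methods.
    interface CacheEntryCreatedListener<K, V> {
        void onCreated(Iterable<CacheEntryEvent<K, V>> events);
    }

    static class KeyRecordingListener implements CacheEntryCreatedListener<String, String> {
        final List<String> seenKeys = new ArrayList<String>();
        public void onCreated(Iterable<CacheEntryEvent<String, String>> events) {
            for (CacheEntryEvent<String, String> e : events) {
                seenKeys.add(e.getKey());
            }
        }
    }

    static CacheEntryEvent<String, String> event(final String k, final String v) {
        return new CacheEntryEvent<String, String>() {
            public String getKey() { return k; }
            public String getValue() { return v; }
        };
    }

    public static void main(String[] args) {
        KeyRecordingListener listener = new KeyRecordingListener();
        listener.onCreated(Arrays.asList(event("k1", "v1"), event("k2", "v2")));
        System.out.println(listener.seenKeys); // [k1, k2]
    }
}
```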

> *  Not sure I get the "option #1" in your doc?  If you cared about locally originating events (and want to behave differently), you'd just register that listener using the embedded API and not the remote one?

Hmmm, not sure I understand your question...

> *  Not sure I understand option #3.  Is this to allow attaching a listener to a key while a key is added?

Yeah, effectively it's a put+add_listener for the key being stored. An optimisation.

If you're interested in notifications for a particular key, you're likely to want them from the moment you store it for the first time.
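A rough client-side sketch of what such a combined operation could look like (putAndListen and KeyListener are made-up names, purely illustrative, not part of the Hot Rod client API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "option #3": a combined put + add-listener, so the
// caller receives events for a key from the moment it is first stored, with
// no window between the put and the listener registration.
public class PutAndListenSketch {

    interface KeyListener<K, V> {
        void onModified(K key, V newValue);
    }

    static class RemoteCacheStub<K, V> {
        final Map<K, V> data = new HashMap<K, V>();
        final Map<K, List<KeyListener<K, V>>> listeners = new HashMap<K, List<KeyListener<K, V>>>();

        V put(K key, V value) {
            V old = data.put(key, value);
            List<KeyListener<K, V>> ls = listeners.get(key);
            if (ls != null) {
                for (KeyListener<K, V> l : ls) l.onModified(key, value); // a server push in reality
            }
            return old;
        }

        // One round trip instead of two: register interest, then store.
        V putAndListen(K key, V value, KeyListener<K, V> listener) {
            List<KeyListener<K, V>> ls = listeners.get(key);
            if (ls == null) {
                ls = new ArrayList<KeyListener<K, V>>();
                listeners.put(key, ls);
            }
            ls.add(listener);
            return put(key, value);
        }
    }

    public static void main(String[] args) {
        RemoteCacheStub<String, String> cache = new RemoteCacheStub<String, String>();
        final List<String> events = new ArrayList<String>();
        cache.putAndListen("k", "v1", new KeyListener<String, String>() {
            public void onModified(String key, String newValue) {
                events.add(key + "=" + newValue);
            }
        });
        cache.put("k", "v2");
        System.out.println(events); // [k=v1, k=v2]
    }
}
```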

> *  For option #4, no - not for now anyway.  Too much complexity.

+1

> 
> For the implementation, I'd be interested in what you have in mind, especially from a performance perspective.  I'm adding Clebert and Mike in cc, since some of the stuff they do is related to such event bus/notification/pub-sub models and they may have insights to add.

So far I've found two factors to be important here when it comes to performance:
1. Avoid remote eventing having a major impact on the execution of cache operations.
2. Avoid swamping clients with too many notifications.

For 1, I had thought about implementing notifications via a sync=false (i.e. asynchronous) listener.
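Roughly, the effect I'm after is handing the event off to a separate thread so the write path isn't blocked on pushing notifications. A minimal sketch with illustrative names (not actual server classes):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Rough sketch of point 1: the server-side listener hands events off to a
// separate executor (the effect of registering an Infinispan listener with
// sync=false), so the thread executing the cache operation never blocks on
// pushing notifications out to clients.
public class AsyncNotifierSketch {

    final ExecutorService notificationExecutor = Executors.newSingleThreadExecutor();
    final BlockingQueue<String> sentToClients = new LinkedBlockingQueue<String>();

    // Invoked from the cache-write path; returns immediately.
    void onEntryModified(final String key) {
        notificationExecutor.submit(new Runnable() {
            public void run() {
                sentToClients.add("modified:" + key); // stands in for a network write
            }
        });
    }

    String awaitNotification() {
        try {
            return sentToClients.poll(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        AsyncNotifierSketch notifier = new AsyncNotifierSketch();
        notifier.onEntryModified("k1"); // the writing thread is not blocked here
        System.out.println(notifier.awaitNotification()); // modified:k1
        notifier.notificationExecutor.shutdown();
    }
}
```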

For 2, imagine a client that starts a remote cache manager, signs up for notifications on cache C, and has 50 threads interacting with cache C concurrently (so 50 channels are open with the server). I don't want the server to send back 50 events for each cache operation of interest that happens on the server side; one notification should be enough. This is one of the reasons I want "option #1".
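In pseudo-server terms, what I have in mind looks like this (illustrative names only): if each channel is tagged with the id of the remote cache manager that opened it, the server can notify each distinct client once instead of once per channel.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of point 2: channels are grouped by the id of the remote cache
// manager that opened them (the idea behind "option #1"), so the server
// pushes one notification per interested client, not one per channel.
public class PerClientDispatchSketch {

    // channel name -> id of the remote cache manager that opened it
    final Map<String, String> channelToClientId = new LinkedHashMap<String, String>();
    final List<String> notifiedChannels = new ArrayList<String>();

    void registerChannel(String channel, String clientId) {
        channelToClientId.put(channel, clientId);
    }

    // Notify each distinct client exactly once, via the first of its channels.
    void dispatch(String event) {
        Set<String> notifiedClients = new HashSet<String>();
        for (Map.Entry<String, String> e : channelToClientId.entrySet()) {
            if (notifiedClients.add(e.getValue())) {
                notifiedChannels.add(e.getKey() + "->" + event);
            }
        }
    }

    public static void main(String[] args) {
        PerClientDispatchSketch server = new PerClientDispatchSketch();
        // Two channels from one client would otherwise mean two duplicate events.
        server.registerChannel("ch1", "clientA");
        server.registerChannel("ch2", "clientA");
        server.registerChannel("ch3", "clientB");
        server.dispatch("removed:k");
        System.out.println(server.notifiedChannels); // [ch1->removed:k, ch3->removed:k]
    }
}
```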

Let's have a quick chat on the phone if you want to clarify.
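To make the near-cache use case from the quoted mail below a bit more concrete, here's a minimal sketch, assuming each remote cache manager carries a unique id and events ship the id of the originating client (cacheManagerId and onRemoteModified are hypothetical names):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch of the near-cache use case: invalidate the local copy only
// when the modification originated elsewhere. If the change is our own
// write-through, the near cache already holds the latest value.
public class NearCacheSketch {

    final String cacheManagerId = UUID.randomUUID().toString(); // would ship with every Hot Rod op
    final ConcurrentMap<String, String> nearCache = new ConcurrentHashMap<String, String>();

    // Called when the server pushes a modification event for `key`,
    // tagged with the id of the client that originated the change.
    void onRemoteModified(String key, String originId) {
        if (!cacheManagerId.equals(originId)) {
            nearCache.remove(key); // someone else changed it: drop the stale copy
        }
        // else: our own write-through; the near cache already has the latest value
    }

    public static void main(String[] args) {
        NearCacheSketch nc = new NearCacheSketch();
        nc.nearCache.put("k", "v");
        nc.onRemoteModified("k", nc.cacheManagerId);       // our own change: keep it
        System.out.println(nc.nearCache.containsKey("k")); // true
        nc.onRemoteModified("k", "some-other-client-id");  // remote change: invalidate
        System.out.println(nc.nearCache.containsKey("k")); // false
    }
}
```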

> 
> Cheers
> Manik
> 
> On 20 Feb 2012, at 14:29, Galder Zamarreño wrote:
> 
>> Hi all,
>> 
>> Re: https://community.jboss.org/docs/DOC-17571
>> 
>> Over the past week and a bit I've been working on a rough prototype for remote event handling in Hot Rod that covers the server side (I've not done any work on the Hot Rod client). In the link above you can find my design notes.
>> 
>> I wanted to get some feedback on the minimum requirements explained there, and to discuss the need for the optional requirements, in particular the 1st one.
>> 
>> The idea is that, at a logical level, it'd be interesting to know the origin of a modification, for a couple of reasons:
>> 
>> - It allows clients to know whether the modification originated locally, from a logical point of view. If this sounds too abstract, think about near caches (see preso in https://www.jboss.org/dms/judcon/presentations/London2011/day1track2session2.pdf) and imagine a local cache (near cache) configured with a remote cache store (a Java Hot Rod client with N channels open at the same time). Remote listeners could then act differently depending on whether the modification originated locally or not, i.e. if it's not local, remove the key from the near cache; if it is local, the modification came from this remote cache store, so I already have the latest data in memory. This is a very nice optimisation for at least this use case.
>> 
>> - This can be extended further. If all channels opened with the server can be associated with a logical origin, we could optimise sending back events. For example, imagine a remote cache store (it has 1 remote cache manager) that has N channels open with the server. There's no need for all N channels to receive a notification for a cache removal; as long as one of the channels gets the event, that's good enough to act on the local cache.
>> 
>> As you can see, what I'm heading towards is having each remote cache manager that is started be uniquely id'd, and having this id shipped with all Hot Rod operations. It would be possible to limit the set of operations that carry such an id, but this could complicate the protocol.
>> 
>> Thoughts?
>> 
>> Also, any thoughts on the need for the 4th optional requirement? For near caches, remote events are important, but you could limit the need for it with an aggressive eviction policy in the near cache to guard against lost events.
>> 
>> Cheers,
>> --
>> Galder Zamarreño
>> Sr. Software Engineer
>> Infinispan, JBoss Cache
>> 
>> 
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> --
> Manik Surtani
> manik at jboss.org
> twitter.com/maniksurtani
> 
> Lead, Infinispan
> http://www.infinispan.org
> 
> 
> 
> 

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache



