Hi Galder,
Some notes/questions:
- each listener will be identified on the server through a "source
id"/"listener id" combo? The "source id" must be unique across
all the clients, and the "listener id" unique across all the listeners of the
same client?
"It assumes that clients are able to maintain persistent connections to the
servers"
- does that mean that each listener node would have to keep an active connection to all
the existing Hot Rod servers? ATM the client shrinks the connection pool when some
connections are unused; that would have to be disabled if we go this way. Also, is this
approach scalable? With a large number of clients (c) and servers (s), you'd end up
with a mesh of c*s active connections between clients and servers.
- how does a server send the event to the client? Does it piggyback on another operation
(ping, put...)? E.g. if client A is interested in create events and a key is created on
node X, X doesn't seem to have any means to initiate the sending of events to A.
- I imagine that the listener information is going to be kept across the cluster and not
only on the server node where the add-listener request arrived. If so, how is this going
to be achieved? What about housekeeping for crashed clients?
- the notifications will only be triggered by a single server node, so that the client
won't receive numOwners notifications for an entry being created, right?
- isn't optional requirement 1 achievable through the source id/listener id combo?
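For what it's worth, the "source id"/"listener id" combo from the first question could be
sketched server-side roughly like this; every name below is my own assumption for
illustration, not the actual design (it also hints at one possible answer to the
housekeeping question: dropping everything registered under a crashed client's source id):

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side registry keyed by the (source id, listener id) combo.
final class ListenerKey {
    final String sourceId;   // unique across all clients
    final String listenerId; // unique across the listeners of one client
    ListenerKey(String sourceId, String listenerId) {
        this.sourceId = sourceId;
        this.listenerId = listenerId;
    }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ListenerKey)) return false;
        ListenerKey k = (ListenerKey) o;
        return sourceId.equals(k.sourceId) && listenerId.equals(k.listenerId);
    }
    @Override public int hashCode() {
        return Objects.hash(sourceId, listenerId);
    }
}

final class ListenerRegistry {
    private final Map<ListenerKey, Runnable> listeners = new ConcurrentHashMap<>();

    void add(String sourceId, String listenerId, Runnable callback) {
        listeners.put(new ListenerKey(sourceId, listenerId), callback);
    }

    // Housekeeping for a crashed client: drop everything under its source id.
    void removeClient(String sourceId) {
        listeners.keySet().removeIf(k -> k.sourceId.equals(sourceId));
    }

    int size() { return listeners.size(); }
}
```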
Cheers,
Mircea
On 20 Feb 2012, at 14:29, Galder Zamarreño wrote:
Hi all,
Re:
https://community.jboss.org/docs/DOC-17571
Over the past week and a bit I've been working on a rough prototype for remote event
handling in Hot Rod that covers the server side (I've not done any work on the Hot Rod
client). In the link above you can find my design notes.
I wanted to get some feedback on the minimum requirements explained, and I wanted to
discuss the need for the optional requirements, in particular the first one.
The idea is that, at a logical level, it'd be interesting to know the origin of a
modification, for a couple of reasons:
- It allows clients to know whether the modification originated locally, from a
logical point of view. If that sounds too abstract, think about near caches (see preso in
https://www.jboss.org/dms/judcon/presentations/London2011/day1track2sessi...) and
imagine a local cache (near cache) configured with a remote cache store (a Java Hot Rod
client with N channels open at the same time). Remote listeners could decide to act
differently depending on whether the modification originated locally, i.e. if it's not
local, remove the key from the cache; if it is local, the modification comes from this
remote cache store and so I have the latest data in memory. This is a very nice
optimisation for at least this use case.
- This can be extended further. If all channels opened can be associated with a
logical origin, we could optimise sending back events. For example, imagine a remote cache
store (it has 1 remote cache manager) that has N channels open with the server.
There's no need for all N channels to receive a notification for a cache removal. As
long as one of the channels gets the event, that's good enough to act on the local
cache.
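To make the two points above concrete, here's a rough sketch assuming each channel is
tagged with the id of the remote cache manager ("logical origin") that opened it; the
types and names are my own illustrative assumptions, not an existing Infinispan API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A channel tagged with the cache manager that opened it.
final class Channel {
    final String originId;
    final List<String> received = new ArrayList<>();
    Channel(String originId) { this.originId = originId; }
}

// A near cache backed by a remote cache manager with a known id.
final class NearCache {
    final String managerId;
    final Map<String, String> entries = new LinkedHashMap<>();
    NearCache(String managerId) { this.managerId = managerId; }

    // A modification event carries the id of the manager that caused it.
    void onModified(String key, String originId) {
        if (managerId.equals(originId))
            return;               // our own write: the local copy is the latest
        entries.remove(key);      // someone else's write: drop the stale copy
    }
}

final class Server {
    // Deliver the event once per logical origin, not once per open channel.
    static void notifyModified(String key, String originId, List<Channel> channels) {
        Map<String, Channel> onePerOrigin = new LinkedHashMap<>();
        for (Channel c : channels)
            onePerOrigin.putIfAbsent(c.originId, c);
        for (Channel c : onePerOrigin.values())
            c.received.add(key + "@" + originId);
    }
}
```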
As you can see, what I'm heading towards is for each remote cache manager
started to be id'd uniquely, and for this id to be shipped with all Hot Rod operations.
It would be possible to limit the set of operations that carry such an id, but this
could complicate the protocol.
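A minimal sketch of what that could look like on the client: generate the id once per
cache manager instance and prepend it to every operation. The 16-byte-UUID header layout
below is purely an assumption for illustration, not the actual Hot Rod wire format:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Hypothetical per-cache-manager id, shipped with every operation.
final class RemoteCacheManagerId {
    final UUID id = UUID.randomUUID(); // assigned once per cache manager instance

    // Prepend the 16-byte id to an operation's payload.
    byte[] frame(byte opCode, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(16 + 1 + payload.length);
        buf.putLong(id.getMostSignificantBits());
        buf.putLong(id.getLeastSignificantBits());
        buf.put(opCode);
        buf.put(payload);
        return buf.array();
    }
}
```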
Thoughts?
Also, any thoughts on the need for the 4th optional requirement? For near caches, remote
events are important, but you could limit the need for them with an aggressive eviction
policy in the near cache to guard against lost events.
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev