Tristan Tarrant updated ISPN-5093:
----------------------------------
Fix Version/s: 9.2.0.Final
(was: 9.1.0.Final)
Granularity of remote event listener implementations doing the same job
------------------------------------------------------------------------
Key: ISPN-5093
URL: https://issues.jboss.org/browse/ISPN-5093
Project: Infinispan
Issue Type: Enhancement
Components: Remote Protocols
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 9.2.0.Final
Currently, if N clients add listeners to a cache that all do the same job, e.g. keeping a
near cache consistent, N server-side cluster listeners are created, each potentially
installed on a different node. If one of those nodes fails, every client whose listener was
registered on that node has to find a different node for it.
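For reference, the per-client registration as it works today looks roughly like the sketch
below, using the standard Hot Rod client listener annotations; the class name and event
handling are illustrative.
{code:java}
import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryRemoved;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryRemovedEvent;

// Each client that wants to keep its near cache consistent registers a listener
// like this; the server turns every registration into its own cluster listener.
@ClientListener
public class NearCacheInvalidationListener {

   @ClientCacheEntryCreated
   public void created(ClientCacheEntryCreatedEvent<String> e) {
      // update or invalidate the near cache entry for e.getKey()
   }

   @ClientCacheEntryModified
   public void modified(ClientCacheEntryModifiedEvent<String> e) {
      // invalidate the near cache entry for e.getKey()
   }

   @ClientCacheEntryRemoved
   public void removed(ClientCacheEntryRemovedEvent<String> e) {
      // remove the near cache entry for e.getKey()
   }
}
{code}
Each client then calls remoteCache.addClientListener(new NearCacheInvalidationListener()),
so with N clients the server ends up with N independent cluster listeners.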
The downside of this approach is that there are as many cluster listeners installed as
there are clients that have added listeners (or have near caching enabled), which might not
be very efficient. If a node goes down, all clients whose cluster listeners live there need
to fail over to some other node.
The advantage of this approach is the simplicity of deciding where to add the listener and
where to fail over to.
For this type of scenario, an alternative setup might be worth exploring:
If all these client-side listeners are interested in exactly the same events, and a client
listener ID were exposed via the RemoteCache API, a server-side cluster listener
multiplexing between all these clients could potentially be built. In other words, instead
of N clients registering N cluster listeners, the first client would register the cluster
listener under a client listener ID, and any further registrations with the same client
listener ID would simply have their connections added to the existing cluster listener
implementation.
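Purely as an illustration of the shape such an API could take (the ID-taking overload of
addClientListener() does not exist today, and remoteCache stands for an already obtained
RemoteCache instance):
{code:java}
// Hypothetical sketch only: all clients doing the same job would pass the same
// listener ID, letting the server multiplex them onto a single cluster listener.
String clientListenerId = "near-cache-invalidation";
remoteCache.addClientListener(clientListenerId, new NearCacheInvalidationListener());
{code}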
To maximise the efficiency of this solution, all clients (even those running in different
JVMs) using the same client listener ID should agree on the node where the listener is
added. For a distributed cache, hashing on the cache name would work. For replicated
caches, since there's no hashing available, the first node of the view could be used.
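A minimal sketch of that selection rule, assuming the client sees the current topology as
an ordered list of server addresses (the class is illustrative, not an existing API):
{code:java}
import java.util.List;

// Deterministic node selection so that every client, in any JVM, picks the same
// node for a given client listener ID.
final class ListenerNodeSelector {

   // Distributed cache: hash on the cache name.
   static <A> A forDistributedCache(String cacheName, List<A> members) {
      int index = Math.floorMod(cacheName.hashCode(), members.size());
      return members.get(index);
   }

   // Replicated cache: no hashing available, so use the first node of the view.
   static <A> A forReplicatedCache(List<A> members) {
      return members.get(0);
   }
}
{code}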
Since the logic executed server-side differs between the first addition of the client
listener and the subsequent ones, synchronization would be needed to make sure that only
the first invocation creates the cluster listener, while the others simply add their
channel to it.
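One way to get that synchronization is a computeIfAbsent-style registration path; the
sketch below assumes illustrative ClientConnection and MultiplexingClusterListener types
rather than existing Infinispan classes:
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the server-side registration path for the shared listener.
final class ClientListenerRegistry {

   interface ClientConnection { }

   static final class MultiplexingClusterListener {
      final String id;
      final Set<ClientConnection> connections = ConcurrentHashMap.newKeySet();

      MultiplexingClusterListener(String id) {
         this.id = id;
      }

      void installClusterListener() {
         // install the single cluster listener on the cache (once per ID)
      }

      void addConnection(ClientConnection c) {
         connections.add(c);   // events are fanned out to every attached client
      }
   }

   private final Map<String, MultiplexingClusterListener> listeners = new ConcurrentHashMap<>();

   void register(String clientListenerId, ClientConnection connection) {
      // computeIfAbsent guarantees that only the first registration for a given
      // client listener ID creates and installs the cluster listener; concurrent
      // callers with the same ID get the existing instance back.
      MultiplexingClusterListener listener = listeners.computeIfAbsent(clientListenerId, id -> {
         MultiplexingClusterListener l = new MultiplexingClusterListener(id);
         l.installClusterListener();
         return l;
      });
      listener.addConnection(connection);   // subsequent registrations only attach
   }
}
{code}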
Failover is a bit trickier too, because if the node hosting the cluster listener goes down,
all the clients have to fail over, which again exposes first-versus-the-rest logic.
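A rough sketch of the client-side part of that failover, reusing the node-selection sketch
above; reRegisterListener() is an illustrative helper, not an existing API:
{code:java}
import java.net.SocketAddress;
import java.util.List;

// Sketch only: each client tracks the node hosting the shared listener and
// re-registers when a topology change moves it elsewhere.
final class SharedListenerFailover {

   private final String cacheName;
   private volatile SocketAddress currentListenerNode;

   SharedListenerFailover(String cacheName) {
      this.cacheName = cacheName;
   }

   void onTopologyChange(List<SocketAddress> members) {
      SocketAddress target = ListenerNodeSelector.forDistributedCache(cacheName, members);
      if (!target.equals(currentListenerNode)) {
         currentListenerNode = target;
         // All clients replay their registration; server-side, the first one to
         // arrive recreates the cluster listener, the others simply re-attach.
         reRegisterListener(target);
      }
   }

   private void reRegisterListener(SocketAddress target) {
      // illustrative: send the add-listener operation to the selected node
   }
}
{code}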
The advantages of this approach are the reduction in the number of cluster listeners and
the efficiency potentially gained from a single server-side cluster listener
implementation. The disadvantages come from the server-side logic to add or fail over a
cluster listener, which needs to take into account whether the listener is already present.
A further disadvantage is that clients need specific routing so that listeners with the
same ID end up on the same node.