[infinispan-dev] Clustered Listener

William Burns mudokonman at gmail.com
Mon Jul 7 16:58:40 EDT 2014


On Fri, Jul 4, 2014 at 10:41 AM, Pierre Sutra <pierre.sutra at unine.ch> wrote:
> Hello,
>
>> Are you talking about non clustered listeners? It seems unlikely you
>> would need so many cluster listeners. Cluster listeners should allow
>> you to only install a small number of them; usually you would have
>> additional ones only if you have a Filter applied limiting what
>> key/values are returned.
> Our usage of the clustered API is a corner case, but installing a
> listener for a specific key (or key range) could be of general
> interest. My point was that installing all filters everywhere is
> costly, as every node must iterate over all filters for every
> modification.
> Our tentative code for key-specific filtering is available at
> github.com/otrack/Leads-infinispan
> (org.infinispan.notifications.KeySpecificListener and
> org.infinispan.notifications.cachelistener.CacheNotifierImpl).

In this case it still has to iterate over the listeners for
modifications to keys that live on the same node, but the chance of
the listener being present there is smaller.

It doesn't look like what you have currently is safe for rehashes
though, since the owners would change nodes.  You would need to move
the listener between nodes in that case.  You also removed the
handling for an edge case where a listener might not be installed if a
CH change occurs right when sending to nodes (discussed further
below).
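
To make the rehash point concrete, here is a rough sketch of how a
key-specific listener could follow its key across a topology change.
This is not the current implementation; it assumes the 7.0-era
TopologyChangedEvent/ConsistentHash API, and installListenerOn /
removeListenerFrom are hypothetical placeholders.

import java.util.ArrayList;
import java.util.List;

import org.infinispan.distribution.ch.ConsistentHash;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.TopologyChanged;
import org.infinispan.notifications.cachelistener.event.TopologyChangedEvent;
import org.infinispan.remoting.transport.Address;

@Listener
public class KeyListenerRelocator {

   private final Object key; // the key the cluster listener is scoped to

   public KeyListenerRelocator(Object key) {
      this.key = key;
   }

   @TopologyChanged
   public void onTopologyChange(TopologyChangedEvent<?, ?> event) {
      if (event.isPre()) {
         return; // act only once the new consistent hash is installed
      }
      ConsistentHash before = event.getConsistentHashAtStart();
      ConsistentHash after = event.getConsistentHashAtEnd();
      List<Address> oldOwners = before.locateOwners(key);
      List<Address> newOwners = after.locateOwners(key);

      List<Address> gained = new ArrayList<>(newOwners);
      gained.removeAll(oldOwners);
      List<Address> lost = new ArrayList<>(oldOwners);
      lost.removeAll(newOwners);

      installListenerOn(gained);   // hypothetical helper
      removeListenerFrom(lost);    // hypothetical helper
   }

   private void installListenerOn(List<Address> nodes) {
      // placeholder: ship the <filter, converter> pair to the new owners
   }

   private void removeListenerFrom(List<Address> nodes) {
      // placeholder: tell nodes that no longer own the key to drop the listener
   }
}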

>
>> Is the KeyFilter or KeyValueFilter not sufficient for this?
>>
>>   void addListener(Object listener, KeyFilter<? super K> filter);
>>
>>   <C> void addListener(Object listener,
>>                        KeyValueFilter<? super K, ? super V> filter,
>>                        Converter<? super K, ? super V, C> converter);
>>
>> Also to note, if you are doing any kind of translation of the value
>> to another value, it is recommended to do that via the supplied
>> Converter. This can give good performance, as the conversion is done
>> on the target node and not all in 1 node, and you can also reduce
>> the payload if the resultant value has a serialized form that is
>> smaller than the original value.
> Indeed, this mechanism suffices for many purposes; I was just pointing
> out that it might sometimes be expensive.

I think I better understand what part you are talking about here.
Your issue is that every node has the listener installed, and thus any
modification must be checked against the filter.  If you have a large
number of filters I agree this could be costly; however, cluster
listeners were not envisioned to have hundreds installed.  If I
understood your use case better, we could perhaps add support that
works better for it.  Unfortunately a Filter currently doesn't
designate a key (which is the core of the issue, from my
understanding), but we could look into enhancing it to support
something like what you have.
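
For reference, a minimal sketch of what is possible today with the
overloads quoted above: a clustered listener plus a KeyFilter that
accepts a single key.  Note the filter is still consulted for every
modification on the owning node, which is exactly the cost being
pointed out.  This assumes the 7.0-era API; package locations may
differ slightly between versions.

import java.io.Serializable;

import org.infinispan.Cache;
import org.infinispan.filter.KeyFilter;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
import org.infinispan.notifications.cachelistener.event.CacheEntryEvent;

@Listener(clustered = true)
public class SingleKeyListener {

   @CacheEntryCreated
   @CacheEntryModified
   public void onChange(CacheEntryEvent<String, String> event) {
      // Only invoked for entries that passed the filter below.
      System.out.println("Key " + event.getKey() + " changed");
   }

   // The filter is serialized and evaluated on the owning nodes, so a
   // non-matching modification never generates a remote notification.
   public static class ExactKeyFilter implements KeyFilter<String>, Serializable {
      private final String interestingKey;

      public ExactKeyFilter(String interestingKey) {
         this.interestingKey = interestingKey;
      }

      @Override
      public boolean accept(String key) {
         return interestingKey.equals(key);
      }
   }

   public static void register(Cache<String, String> cache, String interestingKey) {
      cache.addListener(new SingleKeyListener(), new ExactKeyFilter(interestingKey));
   }
}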

One thing that I haven't implemented yet, but was hoping to get to, is
sending a single notification when an event occurs instead of N
notifications, one per matching listener.  Say you have 10 cluster
listeners installed and 1 modification occurs: this could cause 10
remote calls, 1 for each listener.  I was thinking instead I could
batch those events so only a single message is sent.  I wonder if you
are running into this as well?
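
To illustrate the batching idea, a rough sketch follows.  None of the
types below are Infinispan internals (they are all made up for the
example); the point is only that matching listeners get grouped by the
node that registered them, so one modification produces one message
per interested node rather than one per listener.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class BatchingNotifier {

   /** A cluster listener's id, the node that registered it, and its filter. */
   public static final class RemoteListener {
      final UUID id;
      final String originNode;
      final KeyPredicate filter;

      public RemoteListener(UUID id, String originNode, KeyPredicate filter) {
         this.id = id;
         this.originNode = originNode;
         this.filter = filter;
      }
   }

   public interface KeyPredicate { boolean accept(Object key); }

   public interface Transport {
      void send(String node, Object key, Object value, List<UUID> listenerIds);
   }

   private final List<RemoteListener> listeners = new ArrayList<>();
   private final Transport transport;

   public BatchingNotifier(Transport transport) {
      this.transport = transport;
   }

   public void register(RemoteListener listener) {
      listeners.add(listener);
   }

   public void notifyModification(Object key, Object value) {
      // Collect every listener whose filter matches, keyed by its origin node...
      Map<String, List<UUID>> byOrigin = new HashMap<>();
      for (RemoteListener l : listeners) {
         if (l.filter.accept(key)) {
            byOrigin.computeIfAbsent(l.originNode, n -> new ArrayList<>()).add(l.id);
         }
      }
      // ...then send one message per origin node instead of one per listener.
      for (Map.Entry<String, List<UUID>> e : byOrigin.entrySet()) {
         transport.send(e.getKey(), key, value, e.getValue());
      }
   }
}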

>
>>> In such a case, the listener is solely
>>> installed at the key owners. This greatly helps the scalability of the
>>> mechanism at the cost of fault-tolerance since, in the current state of
>>> the implementation, listeners are not forwarded to new data owners.
>>> Since handling topology changes is planned as a next step [1], do
>>> you also plan to support key (or key range) specific listeners?
>> These should be covered with the 2 overloads I mentioned above.
>> This should be the most performant way, as the filter is replicated
>> to the node upon installation, so it is a one-time cost.  But if a
>> key/value pair doesn't pass the filter, the event is not sent to the
>> node where the listener is installed.
> I agree.
>
>>
>>> Besides,
>>> regarding this last point and the current state of the implementation, I
>>> would have like to know what is the purpose of the re-installation of
>>> the cluster listener in case of a view change in the addedListener()
>>> method of the CacheNotifierImpl class.
>> This isn't a re-installation.  This is used to propagate the
>> RemoteClusterListener to the other nodes, so that when a new event is
>> generated it can see that and subsequently send it back to the node
>> where the listener is installed.  There is also a second check in
>> there in case a new node joins in the middle.
> The term re-installation was inappropriate. I meant that, in my
> understanding of the code of CacheNotifierImpl, the second check seems
> unnecessary, since if a new node joins the cluster afterward it still
> has to install the pair <filter,converter>.

The problem is that there is a window when a node joins while you are
sending the initial requests, during which it wouldn't install the
listener.

Cluster -> Node A, B, C

1. User installs listener on Node C
2. Node C is sending listeners to Nodes A + B
3. Node D joins in the meantime and asks the coordinator for the
listener, but it isn't fully installed yet and so cannot be retrieved
4. Node C finishes installing listeners on Nodes A + B

In this case Node D would never have gotten the listener, so Node C
also checks whether anyone else has joined.

The difference is that Node D only sends 1 message to the coordinator
to ask for listeners, instead of sending N messages to all nodes
(which would be required on every JOIN from any node).  This should
scale better in the long run, especially since in most cases this
situation won't arise.
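
Sketched out (purely illustrative; the Membership and Transport types
below are made up rather than the real internals), the two sides of
that look roughly like this:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ClusterListenerInstaller {

   public interface Membership { Set<String> currentMembers(); }

   public interface Transport {
      void installOn(Set<String> nodes, Object listenerId, Object filter);
      List<Object> fetchListenersFrom(String coordinator);
   }

   private final Membership membership;
   private final Transport transport;

   public ClusterListenerInstaller(Membership membership, Transport transport) {
      this.membership = membership;
      this.transport = transport;
   }

   /** Runs on the node where the user registered the listener (Node C above). */
   public void install(Object listenerId, Object filter) {
      Set<String> installedOn = new HashSet<>(membership.currentMembers());
      transport.installOn(installedOn, listenerId, filter);

      // Second check: cover anyone who joined while the requests above were
      // in flight (Node D in the example), since they asked the coordinator
      // before the listener was fully registered there.
      Set<String> latecomers = new HashSet<>(membership.currentMembers());
      latecomers.removeAll(installedOn);
      if (!latecomers.isEmpty()) {
         transport.installOn(latecomers, listenerId, filter);
      }
   }

   /** Runs on a joining node: a single request to the coordinator, rather
       than asking every member on every JOIN. */
   public void onJoin(String coordinator) {
      for (Object listener : transport.fetchListenersFrom(coordinator)) {
         registerLocally(listener);
      }
   }

   private void registerLocally(Object listener) {
      // placeholder: wire the listener's filter into the local notification path
   }
}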

>
> Best,
> Pierre
>
> ps: sorry for the late answer.

