Hi Emmanuel,
On Nov 19, 2013, at 9:48 AM, Emmanuel Bernard <emmanuel(a)hibernate.org> wrote:
> Hey there,
>
> Here are a few comments based on a quick reading.
> I might have totally misread or misinterpreted what was exposed, feel
> free to correct me.
>
> ## General
>
> I think you are restricting the design to listeners:
>
> * that only listen to raw entry changes
> * whose processing is remote
> * with no way to filter out the event from the server
>
> Is that correct? I can see that it does address the remote L1 use case
> but I feel like it will close the doors to many more use cases. An
> interesting example being continuous query.
Perfect. This is precisely the kind of feedback I was hoping to get :)
Indeed I had the remote L1 use case in mind, since that's probably one of the most
asked questions whenever I've presented about Infinispan Servers, but of course I
welcome other use cases, and I'm all in to make sure that the solution accommodates
other interesting use cases.
> In that use case the listener code runs the filtering logic server side
> and only sends keys that are impacted by the query, plus some flag
> defining whether it's added to, changed in, or removed from the corpus.
> The key is filtering the event before sending it to the client.
I like this idea. It'd be easier to manage this filtering on the server side than on the
client side, plus it would reduce traffic by making the filtering happen before the event
leaves the server. Assuming that filtering is done per cache, the user would add a remote
listener for a cache, and the filtering of which keys to notify clients about would be
defined server side. The main capability you lose by doing this is the ability for
different clients to filter keys differently on the same cache, but I'm not sure how
common that'd be.
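To make the idea concrete, a per-cache, server-side key filter could be as simple as a predicate applied to each raw entry change before anything is queued for remote clients. The sketch below is purely illustrative: `FilteredDispatcher`, `onEntryModified` and the use of a plain `Predicate` are my assumptions, not actual Infinispan API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: a per-cache event dispatcher that applies a
// server-side key filter before queuing events for remote clients.
class FilteredDispatcher {
    private final Predicate<String> keyFilter;                 // filtering logic lives server side
    private final List<String> outbound = new ArrayList<>();   // events that will be pushed to clients

    FilteredDispatcher(Predicate<String> keyFilter) {
        this.keyFilter = keyFilter;
    }

    // Called on every raw entry change; only matching keys generate traffic.
    void onEntryModified(String key) {
        if (keyFilter.test(key)) {
            outbound.add(key);
        }
    }

    List<String> pending() {
        return outbound;
    }
}
```

With a filter like `k -> k.startsWith("user:")`, a modification to `session:9` never leaves the server, which is exactly the traffic reduction discussed above.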
> I wish the design document was showing how we can achieve a general
> purpose remote listener approach but have a step 1 that is only
> targeting a restricted set of listeners if you feel that it's too much
> to chew. I don't want us to be trapped in a situation where backward
> compatibility prevents us from adding use cases.
> ## Specific questions
>
> When the topology changes, it is the responsibility of the client to add
> the listener to the new servers that show up. Correct? The API is a
> global addRemoteListener but I imagine the client implementation will
> have to transparently deal with that.
>
> I wonder if a server approach is not more convenient. At least it does
> not put the burden and bugs in several implementations and several
> languages.
I considered that, but the thing is that clients already have to deal with cluster topology
changes. If a node joins or leaves, they already need to do some work to be able to
potentially redirect requests to the new node. I think registering listeners with newly
joining nodes would be a simple extension of that logic. IOW, after establishing a
connection to the newly joined server, register listeners there. This avoids the need to
distribute state WRT listener registration, but as Radim pointed out in an earlier email,
there could be edge cases to cover if there's a delay in the registration of the
listener and some updates happen in the meantime.
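The client-side bookkeeping I have in mind could look roughly like the following. All names here (`ListenerRegistry`, `onTopologyChange`, servers modelled as plain strings) are hypothetical, just to illustrate how re-registration piggybacks on the topology-change handling clients already do:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of client-side bookkeeping: when the topology view
// changes, listeners are re-registered on any server in the new view.
class ListenerRegistry {
    private final Set<String> listeners = new HashSet<>();              // listener ids added by the app
    private final Map<String, Set<String>> perServer = new HashMap<>(); // server -> listeners registered there

    // The "global" addRemoteListener: registers on every server in the current view.
    void addRemoteListener(String listenerId, Set<String> currentServers) {
        listeners.add(listenerId);
        currentServers.forEach(s -> register(s, listenerId));
    }

    // Invoked after the client receives a new topology view.
    void onTopologyChange(Set<String> newView) {
        perServer.keySet().retainAll(newView);   // forget servers that left
        for (String server : newView) {
            for (String l : listeners) {
                register(server, l);             // idempotent re-registration on joiners
            }
        }
    }

    private void register(String server, String listenerId) {
        perServer.computeIfAbsent(server, s -> new HashSet<>()).add(listenerId);
    }

    Set<String> registeredOn(String server) {
        return perServer.getOrDefault(server, Set.of());
    }
}
```

The gap Radim mentioned shows up between the moment a node joins and the moment `onTopologyChange` completes the registration: events in that window would be missed unless covered separately.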
> You never send code at the moment. Only one kind of listener is
> available, and it listens to all entry changes and deletions. Correct?
Hmmm, not really. You should be able to add as many listeners as you want per cache.
> Why not have the ability to listen to new entry events? That would
> limit generic listeners as it is.
This could be part of the filtering logic somehow. I mean, there are two types of
filtering:

1. Filtering by type of operation: create, update, remove
2. Filtering by key

Filtering by key can possibly be done on the server side as you suggest. What about filtering
by type of operation? Doing it server side again would hugely reduce traffic.
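Filtering by operation type is cheap to express server side: a listener registers the event types it cares about, and anything else is dropped before it generates traffic. A minimal sketch (the `OperationFilter` class and `EventType` enum are my own illustrative names, not part of the design):

```java
import java.util.EnumSet;

// Hypothetical sketch: drop events server side by type of operation.
class OperationFilter {
    enum EventType { CREATE, UPDATE, REMOVE }

    private final EnumSet<EventType> interested;   // the set the listener registered for

    OperationFilter(EnumSet<EventType> interested) {
        this.interested = interested;
    }

    // Applied before an event is sent; non-matching types never leave the server.
    boolean accept(EventType type) {
        return interested.contains(type);
    }
}
```

A near-cache invalidation listener, for instance, might register only for UPDATE and REMOVE, since a CREATE cannot invalidate anything already cached.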
> Do you have plans to make the ACK optional depending on the listener
> requirement? Looks like an expensive process.
It could be optional indeed.
> "Only the latest event is tracked for ACK for a given key"
>
> It seems it's fine for L1 but would be a problem for many more generic
> listeners.
Again, we could make it optional to either track ACKs for the latest event only, or track
ACKs for all events, but putting a limit somewhere.
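The two modes could share one structure, with the limit enforced in the "all events" mode. Again a hypothetical sketch, nothing more; `AckTracker` and its event ids are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two ACK-tracking modes discussed above:
// latest-only (good enough for L1 invalidation) vs. all events with a
// bound, for generic listeners that must see every change.
class AckTracker {
    private final boolean latestOnly;
    private final int maxPending;
    private final Map<String, Deque<Long>> pending = new HashMap<>(); // key -> unacked event ids

    AckTracker(boolean latestOnly, int maxPending) {
        this.latestOnly = latestOnly;
        this.maxPending = maxPending;
    }

    void onEvent(String key, long eventId) {
        Deque<Long> ids = pending.computeIfAbsent(key, k -> new ArrayDeque<>());
        if (latestOnly) {
            ids.clear();            // only the latest event is tracked for ACK
        } else if (ids.size() >= maxPending) {
            ids.removeFirst();      // enforce the limit by dropping the oldest
        }
        ids.addLast(eventId);
    }

    int pendingFor(String key) {
        Deque<Long> ids = pending.get(key);
        return ids == null ? 0 : ids.size();
    }
}
```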
Once again, thanks for the excellent feedback.
> Cheers,
> Emmanuel
On Tue 2013-11-12 16:17, Galder Zamarreño wrote:
> Hi all,
>
> Re: https://github.com/infinispan/infinispan/wiki/Remote-Hot-Rod-Events
>
> I've just finished writing up the Hot Rod remote events design document. Amongst
many other use cases, this will enable near caching use cases with the help of Hot Rod
client callbacks.
>
> Cheers,
> --
> Galder Zamarreño
> galder(a)redhat.com
>
> twitter.com/galderz
>
> Project Lead, Escalante
>
> http://escalante.io
>
> Engineer, Infinispan
>
> http://infinispan.org
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org