Once we have the server->client notification mechanism working, IMO we should allow the user to decide which notifications he wants to listen to, and not be restrictive about it. Re: key change notifications, I'm not sure that will work with the current ISPN architecture: right now the notifications are local, i.e. one will only be notified of the keys changed in the local cache. So if a user wants to be notified when the "account" key changes, that will only happen if he is connected to the server on which the "account" key was hashed. Even more, if he connects to another server which also contains "account", the notification behavior might be different, which might be confusing.
On 5 Jan 2010, at 09:52, Galder Zamarreno wrote:

On 01/04/2010 10:44 PM, Alex Kluge wrote:

- No events. I used events, that is asynchronous messages originating from the server when certain conditions are met, to notify clients that data was written to the server. This was mostly driven by the need to notify an L1 cache of a write to the cache server. It would be good to allow for this usage in this protocol. Note that this is another case where the op code is useful to have as part of the message.

Isn't this expensive? Doesn't this mean the server has to fire off messages to each and every connected client, whether the clients are interested in these messages or not?

Not necessarily. There are a number of options. It wouldn't be difficult to allow the clients to register for the events, and then only send them to the interested clients. The events can be sent asynchronously (a separate thread), thus they won't delay the response to a write. Practically speaking, they aren't that expensive.

I'm still not convinced by this either. What you're talking about sounds like JMS to me :). Right now, the only situation where I can see this being useful is for sending back cluster formation changes, but we found a simpler way to deal with it that doesn't require the added complexity.
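To make the "register for events" idea above concrete, here is a minimal sketch of a server-side registry where only clients that subscribed to a key are notified, and delivery happens on a separate thread so the write response is never delayed. All names (`EventRegistry`, `keyWritten`, the `Consumer` standing in for a client connection) are illustrative assumptions, not actual Hot Rod API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

// Hypothetical sketch: per-key event registration with async dispatch.
public class EventRegistry {
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();
    private final ExecutorService notifier = Executors.newSingleThreadExecutor();

    // A client registers interest in a key; the callback stands in for
    // writing an event frame back down that client's connection.
    public void register(String key, Consumer<String> clientChannel) {
        listeners.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(clientChannel);
    }

    // Called on the write path. Only interested clients are notified, and
    // the notifications run on the notifier thread, not the write thread.
    public Future<?> keyWritten(String key) {
        List<Consumer<String>> interested = listeners.getOrDefault(key, List.of());
        return notifier.submit(() -> interested.forEach(c -> c.accept(key)));
    }

    public void shutdown() {
        notifier.shutdown();
    }
}
```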
I think eventing is pretty important. Especially if clients decide to add a further layer of client-side caching to prevent network lookups.
I can see how they can be abused to form a JMS-like message-passing layer, but then a lot of stuff could be open to abuse. Perhaps the events should just be restricted to notifying when a key has changed (i.e., a bit like Invalidation) but with no values/payload passed on, forcing the client to do a GET if the value was required.
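The payload-less invalidation style suggested above could look roughly like this on the client side: the event carries only the key, the near cache evicts it, and the next read falls back to a GET. This is a sketch under assumed names (`NearCache`, `onKeyChanged`, a `Function` standing in for the remote GET), not the actual client API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: a client-side near cache kept consistent by
// value-less "key changed" events.
public class NearCache {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final Function<String, String> remoteGet; // stands in for a GET to the server

    public NearCache(Function<String, String> remoteGet) {
        this.remoteGet = remoteGet;
    }

    // Server event carries only the key, no payload: just drop the stale entry.
    public void onKeyChanged(String key) {
        local.remove(key);
    }

    // Serve from the near cache; fall back to a remote GET on a miss.
    public String get(String key) {
        return local.computeIfAbsent(key, remoteGet);
    }
}
```

The point of omitting the value from the event is that clients which never read the key again pay nothing, and clients that do read it fetch the current value rather than a possibly already-stale snapshot.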
<SNIP />

In my implementation I have a size attached to each field, and this allows the messages to be handled easily. I retrieve more data from the network if there is not enough data to complete the processing of a field. There is no need to know the size of the full message.

I borrowed the idea from the memcached binary protocol, but you have a good point. I'll revisit the model to include a size together with each field.

Finally, I've noted that you add the cache name in the requests. This is interesting because until now, I had thought of a Hot Rod server mapping 1 to 1 to a cache, but if you add the cache name, you can map one Hot Rod server to a cache manager, and the cache name allows you to direct requests to different caches under the same cache manager.
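The per-field size idea above can be sketched in a few lines: each field travels as [length][bytes], so a reader consumes one field at a time without ever knowing the size of the whole message. The framing shown (a 4-byte int length) is an assumption for illustration, not the actual Hot Rod wire format:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch of size-prefixed fields, memcached-binary style.
public class SizedFields {
    public static void writeField(DataOutputStream out, byte[] field) throws IOException {
        out.writeInt(field.length); // the size travels with the field
        out.write(field);
    }

    public static byte[] readField(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] field = new byte[len];
        in.readFully(field); // blocks until the whole field has arrived
        return field;
    }
}
```

Because each field is self-delimiting, a request header could carry the cache name as just another sized field, which is what makes the one-server-per-cache-manager mapping workable.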
I think this makes sense, since you reduce the overhead of a cache server per cache. It would be divergent from the memcached server design though (as we wouldn't be able to change the memcached server to add a cache name to requests).
--
Manik Surtani
manik@jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev