On 5 Jan 2010, at 09:52, Galder Zamarreno wrote:
> On 01/04/2010 10:44 PM, Alex Kluge wrote:
>>>> - No events. I used events, that is, asynchronous messages originating
>>>> from the server when certain conditions are met, to notify clients
>>>> that data was written to the server. This was mostly driven by the
>>>> need to notify an L1 cache of a write to the cache server. It would
>>>> be good to allow for this usage in this protocol. Note that this is
>>>> another case where the op code is useful to have as part of the
>>>> message.
>>>
>>> Isn't this expensive? Doesn't this mean the server
>>> has to fire off messages to each and every connected client,
>>> whether the clients are interested in these messages or
>>> not?
>>
>> Not necessarily. There are a number of options. It wouldn't be
>> difficult to allow the clients to register for the events, and
>> then only send them to the interested clients. The events can
>> be sent asynchronously (on a separate thread), thus they won't
>> delay the response to a write. Practically speaking, they aren't
>> that expensive.
> I'm still not convinced by this either. What you're talking about sounds
> like JMS to me :). Right now, the only situation where I can see this
> being useful is for sending back cluster formation changes, but we found
> a simpler way to deal with it that doesn't require the added complexity.
I think eventing is pretty important, especially if clients decide to add a
further layer of client-side caching to avoid network lookups.

I can see how events could be abused to form a JMS-like message-passing
layer, but then a lot of stuff could be open to abuse. Perhaps events should
just be restricted to notifying that a key has changed (i.e., a bit like
Invalidation) with no values/payload passed on, forcing the client to do a
GET if it actually needs the value.
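To make that concrete, here's a rough sketch of how a client-side near cache
could consume such value-less notifications (all names below are
illustrative; nothing here is part of any agreed protocol):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical client-side near cache honouring value-less invalidation
// events. String keys keep the sketch simple.
public class NearCache {

    private final Map<String, byte[]> local = new ConcurrentHashMap<>();

    // Invoked by the connection's event-reader thread when the server
    // signals that a key changed. No payload arrives; we just drop our
    // stale copy.
    public void onKeyInvalidated(String key) {
        local.remove(key);
    }

    // Reads hit the local copy first; a miss falls back to a remote GET,
    // which is the cost the no-payload design imposes on the client.
    public byte[] get(String key, RemoteClient remote) {
        byte[] value = local.get(key);
        if (value == null) {
            value = remote.get(key);   // network round trip
            if (value != null) {
                local.put(key, value);
            }
        }
        return value;
    }

    // Minimal remote interface assumed for this sketch.
    public interface RemoteClient {
        byte[] get(String key);
    }
}

The point being that the event itself stays tiny - just the key, no value -
and the client only pays for a GET when it actually needs the fresh data.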
<SNIP />
>> In my implementation I have a size attached to each field, and this
>> allows the messages to be handled easily. I retrieve more data from
>> the network if there is not enough data to complete the processing of
>> a field. There is no need to know the size of the full message.
> I borrowed the idea from the memcached binary protocol, but you have a
> good point. I'll revisit the model to include a size with each field.
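For what it's worth, this is roughly how I'd picture the reading side with a
size per field, assuming a fixed 4-byte length prefix (just a sketch; a
varint prefix would work equally well):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of per-field length prefixes: each field arrives
// as [length][bytes], so the reader never needs the total message size
// and only ever waits for the bytes the current field still needs.
public class FieldReader {

    private final DataInputStream in;

    public FieldReader(InputStream in) {
        this.in = new DataInputStream(in);
    }

    // Reads one length-prefixed field. readFully() blocks until the
    // whole field has arrived, pulling more data from the network if
    // not enough is buffered to complete the field.
    public byte[] readField() throws IOException {
        int length = in.readInt();   // 4-byte size prefix (an assumption)
        byte[] field = new byte[length];
        in.readFully(field);
        return field;
    }
}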
> Finally, I've noted that you add the cache name to requests. This is
> interesting because until now I had thought of a Hot Rod server mapping
> 1:1 to a cache, but if you add the cache name, you can map one Hot Rod
> server to a cache manager and use the name to direct requests to
> different caches under the same cache manager.
I think this makes sense, since you avoid the overhead of running one cache
server per cache. It would diverge from the memcached server design, though,
as we wouldn't be able to change the memcached server to add a cache name to
its requests.
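As a rough sketch of what that routing could look like on the server side
(the types below are stand-ins, not the real Infinispan API):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch, not the real Infinispan API: with a cache name in every
// request header, one Hot Rod server can front a whole cache manager and
// route each operation to the named cache.
public class CacheNameDispatcher {

    // Stand-in for a cache manager: several named caches under one roof.
    private final Map<String, Map<String, byte[]>> caches =
            new ConcurrentHashMap<>();

    // Minimal request header: the op code identifies the operation, the
    // cache name selects the target cache under the shared manager.
    public static final class Header {
        final byte opCode;
        final String cacheName;

        public Header(byte opCode, String cacheName) {
            this.opCode = opCode;
            this.cacheName = cacheName;
        }
    }

    public byte[] get(Header header, String key) {
        return cacheFor(header).get(key);
    }

    public void put(Header header, String key, byte[] value) {
        cacheFor(header).put(key, value);
    }

    private Map<String, byte[]> cacheFor(Header header) {
        return caches.computeIfAbsent(header.cacheName,
                name -> new ConcurrentHashMap<>());
    }
}

The nice thing is the header stays small: one op code plus one cache name is
all the routing information the server needs.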
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org