On Jan 5, 2010, at 12:04 PM, Manik Surtani wrote:

> On 5 Jan 2010, at 09:52, Galder Zamarreno wrote:
>
>> On 01/04/2010 10:44 PM, Alex Kluge wrote:
>>
>>>>> - No events. I used events, that is asynchronous messages originating
>>>>>   from the server when certain conditions are met, to notify clients
>>>>>   that data was written to the server. This was mostly driven by the
>>>>>   need to notify an L1 cache of a write to the cache server. It would
>>>>>   be good to allow for this usage in this protocol. Note that this is
>>>>>   another case where the op code is useful to have as part of the
>>>>>   message.
>>>>
>>>> Isn't this expensive? Doesn't this mean the server has to fire off
>>>> messages to each and every connected client, whether the clients are
>>>> interested in these messages or not?
>>>
>>> Not necessarily. There are a number of options. It wouldn't be
>>> difficult to allow the clients to register for the events, and then
>>> only send them to the interested clients. The events can be sent
>>> asynchronously (on a separate thread), so they won't delay the
>>> response to a write. Practically speaking, they aren't that expensive.
>>
>> I'm still not convinced by this either. What you're talking about sounds
>> like JMS to me :). Right now, the only situation where I can see this
>> being useful is for sending back cluster formation changes, but we found
>> a simpler way to deal with that, one that doesn't require the added
>> complexity.
>
> I think eventing is pretty important, especially if clients decide to add
> a further layer of client-side caching to prevent network lookups.
>
> I can see how they could be abused to form a JMS-like message-passing
> layer, but then a lot of stuff could be open to abuse. Perhaps the events
> should just be restricted to notifying when a key has changed (i.e., a
> bit like invalidation) but with no values/payload passed on, forcing the
> client to do a GET if the value is required.

Once we have the server->client notification mechanism working, IMO we
should allow the user to decide which notifications he wants to listen to,
and not be restrictive about it. Re: key-change notifications, I'm not sure
that will work with the current Infinispan architecture: right now
notifications are local, i.e. one will only be notified of keys changed in
the local cache. So if a user wants to be notified when the "account" key
changes, that will only happen if he is connected to the server to which
"account" hashes. Moreover, if he connects to another server that also
contains "account", the notification behaviour might be different, which
could be confusing.

I'm not a protocol design expert, but is this kind of "push" approach
common in protocols?
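To make the key-only, invalidation-style events Manik describes above a bit
more concrete, here is a rough client-side sketch. Everything in it is
hypothetical: KeyChangedListener, HotRodClient and addKeyChangedListener are
made-up names used only to illustrate a payload-free event driving a near
cache, not an existing or proposed API.

import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: none of these types exist in any current client.
interface KeyChangedListener {
    void keyChanged(String cacheName, byte[] key);   // key only, no value payload
}

interface HotRodClient {
    byte[] get(byte[] key);
    void addKeyChangedListener(String cacheName, KeyChangedListener listener);
}

class NearCache {
    private final HotRodClient client;
    private final Map<ByteBuffer, byte[]> local = new ConcurrentHashMap<ByteBuffer, byte[]>();

    NearCache(HotRodClient client, String cacheName) {
        this.client = client;
        // Register interest for this cache only, so the server pushes events
        // just to the clients that asked for them.
        client.addKeyChangedListener(cacheName, new KeyChangedListener() {
            public void keyChanged(String cache, byte[] key) {
                // Invalidation-style event: drop the local copy; the next read
                // falls through to a normal GET against the server.
                local.remove(ByteBuffer.wrap(key));
            }
        });
    }

    byte[] get(byte[] key) {
        ByteBuffer k = ByteBuffer.wrap(key);
        byte[] value = local.get(k);
        if (value == null) {
            value = client.get(key);            // network lookup only on a miss
            if (value != null) local.put(k, value);
        }
        return value;
    }
}

The point is just that a key-only event is enough for an L1/near cache: the
listener drops the entry and the next read falls back to a normal GET.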
> <SNIP />
>
>>> In my implementation I have a size attached to each field, and this
>>> allows the messages to be handled easily. I retrieve more data from
>>> the network if there is not enough data to complete the processing of
>>> a field. There is no need to know the size of the full message.
>>
>> I borrowed the idea from the memcached binary protocol, but you have a
>> good point. I'll revisit the model to include a size together with each
>> field.
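On the per-field sizes, here is a minimal sketch of what reading a
length-prefixed field could look like on the receiving side. It assumes,
purely for illustration, that each field is framed as a variable-length int
followed by that many bytes; this is not the agreed wire format.

import java.io.DataInputStream;
import java.io.IOException;

// Illustration of per-field framing: each field carries its own size, so the
// reader pulls more bytes only when a field needs them and never has to know
// the size of the whole message up front.
final class FieldReader {

    // Read a variable-length int: 7 bits of data per byte, the high bit set
    // means another byte follows.
    static int readVInt(DataInputStream in) throws IOException {
        int b = in.readUnsignedByte();
        int value = b & 0x7F;
        int shift = 7;
        while ((b & 0x80) != 0) {
            b = in.readUnsignedByte();
            value |= (b & 0x7F) << shift;
            shift += 7;
        }
        return value;
    }

    // Read one field: length first, then exactly that many bytes.
    static byte[] readField(DataInputStream in) throws IOException {
        int length = readVInt(in);
        byte[] field = new byte[length];
        in.readFully(field);   // blocks until enough data has arrived
        return field;
    }
}

A writer would do the mirror image: emit the size, then the bytes, field by
field.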
This is <br></blockquote><blockquote type="cite">interesting because until now, I had thought of a Hot Rod server to map <br></blockquote><blockquote type="cite">1 to 1 to a cache, but if you add the cache name, you can map 1 Hot Rod <br></blockquote><blockquote type="cite">server to a Cache manager and the cache allows you to direct requests to <br></blockquote><blockquote type="cite">different caches under the same cache manager.<br></blockquote><br>I think this makes sense, since you reduce the overhead of a cache server per cache. Would be divergent from the memcached server design though (as we wouldn't be able to change the memcached server to add cache name to requests)<br><br><br>--<br>Manik Surtani<br><a href="mailto:manik@jboss.org">manik@jboss.org</a><br>Lead, Infinispan<br>Lead, JBoss Cache<br>http://www.infinispan.org<br>http://www.jbosscache.org<br><br><br><br><br><br>_______________________________________________<br>infinispan-dev mailing list<br>infinispan-dev@lists.jboss.org<br>https://lists.jboss.org/mailman/listinfo/infinispan-dev<br></div></blockquote></div><br></body></html>