On 12/16/2009 01:16 PM, Manik Surtani wrote:
On 16 Dec 2009, at 12:13, Mircea Markus wrote:
>
> On Dec 16, 2009, at 12:08 PM, Manik Surtani wrote:
>
>>
>> On 16 Dec 2009, at 11:54, Mircea Markus wrote:
>>
>>>
>>> On Dec 16, 2009, at 9:26 AM, Galder Zamarreno wrote:
>>>
>>>>
>>>>
>>>> On 12/15/2009 01:13 PM, Manik Surtani wrote:
>>>>> A few comments:
>>>>>
>>>>> - Why do you have OpCode in your response header? Surely this is
>>>>> redundant? If the client is sync, it knows what it sent. If it is
>>>>> async, it has a message ID.
>>>>
>>>> True, I fixed that.
>>>>
>>>>> - Shouldn't the 'Not So Dumb' and 'Clever' response headers be
>>>>> optional? Surely this stuff is only sent when there is a topology
>>>>> change? We also may need some extra info here - how does the back-end
>>>>> know to send this info? If a client hits different back-end nodes, and
>>>>> there is a topology change, how does Node A decide that it should not
>>>>> bother with topology info, since the client already hit Node B after
>>>>> the topo change and has the new topology map? Perhaps a TopologyVersion
>>>>> (== JGroups View ID) should be sent back with any topo map, and the
>>>>> client would send its current TopologyVersion with every request
>>>>> (non-dumb clients only)? Could be a vlong...
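A minimal sketch of that check, assuming a hypothetical TopologyHeader helper; the -1 "I have no view" marker follows the convention discussed later in this thread, and none of these names are from the actual Hot Rod implementation:

```java
// Minimal sketch: the client sends its current TopologyVersion (a vlong)
// with every request; the server only attaches the topology map when the
// client's version is stale. All names here are illustrative.
final class TopologyHeader {

   static final long NO_VIEW = -1; // hypothetical "I have no view" marker

   // Returns the payload to piggyback on the response: empty if the
   // client is already up to date, the encoded topology map otherwise.
   static byte[] topologyPayload(long clientViewId,
                                 long serverViewId,
                                 byte[] encodedTopologyMap) {
      if (clientViewId == serverViewId) {
         return new byte[0]; // client is current: send nothing extra
      }
      return encodedTopologyMap; // stale or NO_VIEW: send the full map
   }
}
```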
>>>>
>>>> Yeah, as you and Mircea suggest, clients sending the view id could help
>>>> better manage the size of responses.
>>>>
>>>> Something to note here is that not all cluster view changes necessarily
>>>> involve a client-side topology view change, cos if you have N Infinispan
>>>> nodes in the cluster, you could have M running Hot Rod servers where
>>>> M <= N. So, Hot Rod will need to figure out when there's been a view
>>>> change in the cluster that involves the addition or removal of a Hot
>>>> Rod-running Infinispan instance (similar to what happens when a
>>>> clustered EJB is deployed: the target list for that clustered EJB gets
>>>> updated).
>>> Yep, that's a good point. Every Hot Rod server has to know exactly which
>>> other Hot Rod servers are running. I assume that because, one way or
>>> another, each Hot Rod node will have to piggyback the list of active Hot
>>> Rod servers to the clients.
>>
>> Yes, nodes that are not running Hot Rod are removed from the list prior
>> to transmission.
>>
>>> Can't the Hot Rod server track the topology changes (the topology of Hot
>>> Rod servers) and increase the view id number only at that time?
>>
>> That would be reimplementing a lot of logic that is already in JGroups.
>> Maintaining view IDs is non-trivial if (a) any node can update this and
>> (b) this needs to be kept coherent even as nodes leave/join (including
>> any node maintaining this state).
> Some logic will need to be there to distinguish between a JGroups node
> leaving and a JGroups node where Hot Rod is deployed leaving. Each HR
> server will need to know the other HR servers running (different from the
> JGroups-level view), so we already have some view management logic.
I expect this to be declarative (in XML). Then you just filter the JGroups view against
the nodes you have in the list of HR servers to determine what to send back.
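A rough sketch of that filtering, assuming a hypothetical HotRodViewFilter that holds the XML-declared endpoints (names and types are illustrative):

```java
// Sketch: the Hot Rod endpoints are declared in XML, and the JGroups
// view is filtered against that list to decide what to send back.
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

final class HotRodViewFilter {

   private final Set<String> declaredHotRodServers; // from XML config

   HotRodViewFilter(Set<String> declaredHotRodServers) {
      this.declaredHotRodServers = declaredHotRodServers;
   }

   // Keep only the JGroups members configured as HR servers; this is
   // the list that would be piggybacked to clients.
   List<String> hotRodMembers(List<String> jgroupsView) {
      return jgroupsView.stream()
                        .filter(declaredHotRodServers::contains)
                        .collect(Collectors.toList());
   }
}
```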
I was expecting to do this in a similar way to how clustered EJBs do it.
When you deploy a clustered EJB, it sends a sync message around the cluster
indicating to the other servers that there's a new endpoint for that
clustered EJB, and hence they update a server-side list of HATargets, which
is what is sent back to the clients as a piggyback.

I was thinking that when an HR server deploys, I could do a similar thing.
You could then use the existing JGroups view id and associate it with the
current HR server target list.

This, however, assumes that the HR servers are actually joined in a single
cluster, which might not necessarily suit all situations.
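A hedged sketch of that registration flow, with a hypothetical HotRodTargetRegistry standing in for the HATarget-style list; the cluster messaging that would invoke these callbacks on remote nodes is assumed, not shown:

```java
// Sketch: a deploying Hot Rod server announces itself, every node
// updates its server-side target list, and that list (plus the current
// view id) is what gets piggybacked to clients.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class HotRodTargetRegistry {

   private final List<String> targets = new CopyOnWriteArrayList<>();

   // Called locally, and via a sync cluster message on every other node,
   // when a new Hot Rod endpoint deploys.
   void onEndpointDeployed(String hostAndPort) {
      if (!targets.contains(hostAndPort)) {
         targets.add(hostAndPort);
      }
   }

   void onEndpointUndeployed(String hostAndPort) {
      targets.remove(hostAndPort);
   }

   // Snapshot to send back to clients, associated with the JGroups view id.
   List<String> currentTargets() {
      return List.copyOf(targets);
   }
}
```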
>>
>>> Actually I think it can also rely on the JGroups view id, but only
>>> update the hotrod-view-id when a JGroups cluster change overlaps with a
>>> Hot Rod cluster change (i.e. an Infinispan instance that has a Hot Rod
>>> server started is added or removed).
>>
>> Precisely.
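A small illustrative sketch of that rule, with made-up names: the Hot Rod-visible view id reuses the JGroups id but only advances when the filtered membership actually changes.

```java
// Sketch: called on every JGroups view change with the already-filtered
// Hot Rod members; returns the view id clients should see.
import java.util.List;

final class HotRodViewTracker {

   private List<String> lastHotRodMembers = List.of();
   private long hotRodViewId = -1;

   synchronized long onViewChange(long jgroupsViewId, List<String> hotRodMembers) {
      if (!hotRodMembers.equals(lastHotRodMembers)) {
         lastHotRodMembers = hotRodMembers;
         hotRodViewId = jgroupsViewId; // overlap: adopt the JGroups id
      }
      return hotRodViewId; // no overlap: keep the previous id
   }
}
```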
>>
>>>> So, bearing this in mind, could we just use the JGroups view id for
>>>> this? AFAIK it's a 0-based long, but that shouldn't cause problems with
>>>> complete cluster restarts. If the whole cluster gets restarted, existing
>>>> connections will be closed and at that point clients could revert back
>>>> to trying to connect to one of their known Hot Rod servers and pass -1
>>> Wouldn't they pass the old view id at this time rather than -1? I guess
>>> the client will realize that the socket is closed, and at that time will
>>> send an "I don't have a view" request, i.e. -1?
>>
>> No, a socket being closed doesn't mean your view is out of date - it just
>> means you fail over to the next server. A socket could close for reasons
>> other than a server leaving the group.
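A sketch of that client behaviour under those assumptions; FailoverConnector and the server-list handling are illustrative, not the real client. The key point is that the view id is untouched on failover, because the close may have nothing to do with membership:

```java
// Sketch: a closed socket just means "try the next known server".
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

final class FailoverConnector {

   // Try each known Hot Rod server in turn until one accepts a connection.
   static Socket connect(List<InetSocketAddress> knownServers) throws IOException {
      IOException lastFailure = new IOException("no servers configured");
      for (InetSocketAddress server : knownServers) {
         try {
            return new Socket(server.getAddress(), server.getPort());
         } catch (IOException e) {
            lastFailure = e; // unreachable: fall through to the next server
         }
      }
      throw lastFailure;
   }
}
```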
>>
>>>> as the view id, which means that I have no view, so the responding
>>>> server would send back the Hot Rod cluster view.
>>>>
>>>> I'll add this to the wiki.
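A minimal sketch of the client-side bookkeeping this implies, with illustrative names; -1 is the "I have no view" marker discussed above:

```java
// Sketch: the client remembers the last view id it received and sends
// it with each request; -1 means no view is known (e.g. after a
// complete cluster restart).
final class ClientTopologyState {

   static final long NO_VIEW = -1;

   private volatile long lastViewId = NO_VIEW;

   // Value to put in the request header.
   long viewIdForNextRequest() {
      return lastViewId;
   }

   // Called when a response piggybacks a new Hot Rod cluster view.
   void onTopologyUpdate(long viewId) {
      lastViewId = viewId;
   }
}
```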
>>>
>>>>
>>>>>
>>>>> Cheers
>>>>> Manik
>>>>>
>>>>>
>>>>> On 14 Dec 2009, at 20:08, Galder Zamarreno wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Re: http://community.jboss.org/wiki/HotRodProtocol
>>>>>>
>>>>>> I've updated the wiki with the following stuff:
>>>>>> - Renamed replaceIfEquals to replaceIfUnmodified
>>>>>> - Added remove and removeIfUnmodified.
>>>>>> - Added containsKey command.
>>>>>> - Added getWithCas command so that the cas value can be returned. I
>>>>>> decided on a separate command, rather than adding cas to get's return
>>>>>> value, because you don't always want cas to be returned. Having a
>>>>>> separate command makes better use of network bandwidth (see the sketch
>>>>>> after this list).
>>>>>> - Added stats command. JMX attributes are basically accessible
>>>>>> through this, including cache size.
>>>>>> - Added error handling section and updated status codes.
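A hypothetical client-side illustration of the get/getWithCas split mentioned in the list above; these signatures are assumptions, not the actual Hot Rod client API:

```java
// Sketch: a plain get keeps responses small; getWithCas is used only
// when the caller needs the CAS value for a later
// replaceIfUnmodified/removeIfUnmodified.
interface HotRodOps {
   byte[] get(byte[] key);              // value only: smaller response
   ValueWithCas getWithCas(byte[] key); // value plus CAS, when needed
}

// Simple value holder for the getWithCas result.
record ValueWithCas(byte[] value, long cas) {}
```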
>>>>>>
>>>>>> Note that Mircea added some interesting comments and I replied to
>>>>>> them directly in the wiki.
>>>>>>
>>>>>> Still remaining to add:
>>>>>> - Commands: putForExternalRead, evict, clear, version, name and quit
>>>>>> commands.
>>>>>> - Passing flags.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> p.s. Updating this has been quite a struggle due to F12 + FF 3.5.5
>>>>>> crashing at least 5 times, plus parts of the wiki disappearing after
>>>>>> publishing them!
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache