Hi,
I've been a little busy recently tracking what appears to be an issue
with JBoss Cache under significant concurrent load, but I have had some
time to look at this.
> > Are there requirements for an infinispan protocol documented
> > anywhere?
> Manik and I discussed the extra bits that we think the binary
> protocol should contain and these are now explained in:
> http://www.jboss.org/community/wiki/HotRodProtocol
I don't see anything here that would be particularly difficult.
The only thing that confuses me is the exact meaning of "piggy back".
It sounds like "piggy back" as used here refers to overloading a
response and providing additional information in some cases. I tend
to like clearly defined requests and responses, so clients can ask
for information that they want. There are also events, so that when
something changes, the clients can receive asynchronous events.
Perhaps if I had a slightly better understanding of the desired
behaviour I could fit the protocol to it.
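To make the distinction concrete, here is a small sketch contrasting the two styles. The message shapes, field names, and server class below are entirely my own invention for illustration; nothing here is taken from the HotRod draft.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical message shape, invented purely to illustrate the two
# styles discussed above; this is not the actual HotRod wire format.

@dataclass
class Response:
    op: str                          # operation this responds to, e.g. "GET"
    value: Optional[bytes]           # payload of the operation itself
    topology: Optional[list] = None  # piggy-backed cluster view, if any

class Server:
    def __init__(self, members):
        self.members = list(members)  # current cluster view
        self.view_id = 1              # bumped on every join/leave
        self.store = {}

    def handle_get(self, key, client_view_id):
        # "Piggy back" style: overload an ordinary response by attaching
        # the new topology whenever the client's view is out of date.
        resp = Response(op="GET", value=self.store.get(key))
        if client_view_id != self.view_id:
            resp.topology = list(self.members)
        return resp

    def handle_topology_request(self):
        # Explicit request/response style: the client asks for the view
        # with a dedicated operation and gets exactly that back.
        return Response(op="TOPOLOGY", value=None, topology=list(self.members))

server = Server(["s1", "s2", "s3"])
server.store["k"] = b"v"

stale = server.handle_get("k", client_view_id=0)  # stale view: topology attached
fresh = server.handle_get("k", client_view_id=1)  # current view: plain response
```

Either style can carry the same information; the difference is whether the extra data arrives unasked on an unrelated response or only when explicitly requested.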
> > - It is easier to coordinate the smaller number of servers when a
> >   new one comes online, or goes offline.
> Hmmm, I'm not sure I understand why this would be a problem for
> client-side routing. The clients would get new cluster formation
> information next time they make a call. IOW, the response to the
> client would contain cluster formation info of the nodes configured
> with the Hot Rod server, so there is no need to coordinate server
> joins/leaves with clients. This is how clustered EJBs work in AS.
Consider the case where you have a thousand clients, and three servers.
Each client sends a put or get request to the appropriate server according
to a consistent hash.
Now, add a new server. If the clients are doing the CH, then they all
need to be consistent at the same time. Otherwise one client may write
data to one server, and another client may read or write the same data
to another server. Consistency among a large number of clients becomes
a hard problem. It is simply easier to manage the dynamics when you
have a simpler system under tighter control, such as a set of servers.
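The failure mode can be sketched with a toy hash ring. The hashing scheme and server names below are my own illustration, not Infinispan's actual CH implementation:

```python
import hashlib

# Toy consistent-hash ring, only to illustrate the coordination problem
# described above; real implementations add virtual nodes, failure
# handling, and so on.

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def owner(key: str, servers: list) -> str:
    """Route a key to the first server point at or past its hash."""
    ring = sorted((h(s), s) for s in servers)
    k = h(key)
    for point, server in ring:
        if k <= point:
            return server
    return ring[0][1]  # wrap around to the start of the ring

old_view = ["server-a", "server-b", "server-c"]
new_view = old_view + ["server-d"]  # a new server comes online

# Clients still holding old_view and clients already holding new_view
# route these keys to different servers: the inconsistency window that
# opens while a thousand clients converge on the new member list.
diverging = [k for k in (f"key-{i}" for i in range(1000))
             if owner(k, old_view) != owner(k, new_view)]
```

With a single coordinated set of servers doing the routing, that window never exists: only the servers need to agree on the current view.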
Alex
--- On Fri, 11/20/09, Galder Zamarreno <galder(a)redhat.com> wrote:
From: Galder Zamarreno <galder(a)redhat.com>
Subject: Re: [infinispan-dev] ISPN-29 and a "custom protocol"
To: infinispan-dev(a)lists.jboss.org
Date: Friday, November 20, 2009, 4:50 AM
Hi Alex,
On 11/19/2009 06:02 PM, Alex Kluge wrote:
> Hi,
>
> > Is this protocol documented somewhere?
>
> Putting this online is one of the main things I will be able to do
> along this path over the next week.
>
> > Galder is working on the HotRod protocol which may well benefit
> > from this.
>
> Well, it may not be necessary to reinvent it. We'll see. Are there
> requirements for an infinispan protocol documented anywhere?
Manik and I discussed the extra bits that we think the binary protocol
should contain and these are now explained in:
http://www.jboss.org/community/wiki/HotRodProtocol
If you can add docs of your binary protocol there, we can potentially
look into accommodating this stuff into your binary protocol.
>
> > The plan is for HotRod to support (but not mandate)
> > client-side CH
>
> I did this originally; however, I moved to server-side message
> routing for a few reasons.
>
> - It is easier to coordinate the smaller number of servers when a new
>   one comes online, or goes offline.
Hmmm, I'm not sure I understand why this would be a problem for
client-side routing. The clients would get new cluster formation
information next time they make a call. IOW, the response to the
client would contain cluster formation info of the nodes configured
with the Hot Rod server, so there is no need to coordinate server
joins/leaves with clients. This is how clustered EJBs work in AS.
>
> - It allows fewer connections per server; client-side routing requires
>   that each client connect to each server. For our scale, thousands of
>   servers, this is an issue.
That's a good point.
>
> - It guarantees a consistent mapping of data to servers for
>   potentially disparate clients.
>
> - It makes the client side code easier to write.
True :)
>
>
> That said, if the client does do the hashing, the server will still
> do it, and make sure that the data arrives at the correct server. So
> nothing prevents the client from doing the hashing.
>
>
> Alex
>
>
> --- On Mon, 11/16/09, Manik Surtani <manik(a)jboss.org> wrote:
>
>> From: Manik Surtani <manik(a)jboss.org>
>> Subject: Re: [infinispan-dev] ISPN-29 and a "custom protocol"
>> To: "infinispan -Dev List" <infinispan-dev(a)lists.jboss.org>
>> Date: Monday, November 16, 2009, 5:06 AM
>> Hi Alex - comments in line:
>>
>> On 16 Nov 2009, at 07:25, Alex Kluge wrote:
>>
>>> Hi,
>>>
>>> It is worth mentioning that I have a full implementation of a
>>> client-server binary protocol used for JBoss Cache, which is similar
>>> to the current project and can be easily adapted to it. There are a
>>> number of interesting points:
>>>
>>> - Built on Apache MINA framework, but the layers are well separated,
>>>   so replacing one would not be too difficult.
>>
>> Nice.
>>
>>> - Language-neutral binary protocol (non-Java clients are planned).
>>
>> Is this protocol documented somewhere? Galder is working on the
>> HotRod protocol which may well benefit from this.
>>
>>> - Integrated into JBoss Cache as an L2 cache, but easily used
>>>   independently.
>>> - Performance is quite reasonable, with request/response cycles on
>>>   the order of 600 microseconds for small requests/responses.
>>> - Easily extensible: a different codec can be supplied to support
>>>   different protocols. Some refactoring could be done to make this
>>>   much easier.
>>> - Non-trivially tested already.
>>> - Inherently asynchronous; synchronous responses are achieved by
>>>   immediately waiting on the response future object.
>>> - Server-side consistent hashing; clients connect to any server.
>>
>> The plan is for HotRod to support (but not mandate) client-side CH
>> as well for "smart" connections.
>>
>>> There is raw source at
>>> http://www.vizitsolutions.com/org.jboss.cache.tcpcache.tar.gz
>>>
>>> I'll see about some explanations and examples over the
>>> coming weeks.
>>
>> Great!
>>
>>> This was intended to be contributed back to the JBoss Cache project
>>> from the beginning, hence the organization into jboss.cache packages.
>>> Oh, and I never gave it a snazzy name; I just called it the JBoss
>>> Cache binary protocol.
>>
>> :)
>>
>>
>>>
>>>
>>
>> Alex
>>>
>>> --- On Wed, 8/19/09, Manik Surtani <manik(a)jboss.org> wrote:
>>>
>>>> From: Manik Surtani <manik(a)jboss.org>
>>>> Subject: Re: [infinispan-dev] ISPN-29 and a "custom protocol"
>>>> To: "Jeff Ramsdale" <jeff.ramsdale(a)gmail.com>
>>>> Cc: infinispan-dev(a)lists.jboss.org
>>>> Date: Wednesday, August 19, 2009, 12:05 PM
>>>> Nice one Jeff ... so far this is winning in my mind!
>>>>
>>>> On 19 Aug 2009, at 18:01, Jeff Ramsdale wrote:
>>>> How about Hot Rod? It has a connection to the word "custom" and
>>>> implies speed...
>>>> -jeff
>>>>
>>>> On Wed, Aug 19, 2009 at 7:05 AM, Manik Surtani <manik(a)jboss.org> wrote:
>>>> Regarding ISPN-29 [1], I've made some notes about what this will
>>>> provide on this wiki page [2]. I'm kinda tired of referring to the
>>>> 'custom binary protocol' as a 'custom binary protocol'! Can anyone
>>>> think of a snazzy name for this protocol? Keep in mind that we would
>>>> want others to implement clients using this protocol as well on
>>>> other platforms.
>>>>
>>>> Here are a few thoughts to get the creative juices flowing:
>>>>
>>>> * ICBP (Infinispan cache binary protocol - BORING!)
>>>> * Adhesive (the 'glue' between the client and server)
>>>> * Elastiglue
>>>> * StickyFingers (after the Rolling Stones album?)
>>>>
>>>> - Manik
>>>>
>>>> [1] https://jira.jboss.org/jira/browse/ISPN-29
>>>> [2] http://www.jboss.org/community/wiki/Clientandservermodules
>>>> --
>>>> Manik Surtani
>>>> manik(a)jboss.org
>>>> Lead, Infinispan
>>>> Lead, JBoss Cache
>>>> http://www.infinispan.org
>>>> http://www.jbosscache.org
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev