[infinispan-dev] Use of Infinispan across data centers

Bela Ban bban at redhat.com
Tue Aug 10 03:13:21 EDT 2010



Galder Zamarreño wrote:
>
>> * When the coordinator in Bilbao crashes, someone else has to take
>> over. During this transition you have to ensure that there are no
>> lost or duplicate messages. This can be done with queuing and
>> sequence numbers, plus retransmission, but then again, is this
>> something you want to do in Hot Rod ?
>
> I don't think there's a need to worry about this. The client side, 
> Boston, can keep track of a number of Hot Rod servers in Bilbao. IOW, 
> you can have all nodes in Bilbao running Hot Rod servers. So, if the 
> first one fails, it sends it to the second one. I don't see potential 
> for loss/duplicate at all. It's the client driving it and as long as 
> the servers are there and the client has the view, it's all fine.

What happens if a client thinks the first server failed and starts 
forwarding messages to the second server, but the first server actually 
did receive a few of those messages? Then we could end up with duplicate 
messages...
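
To make this concrete: the usual fix is to have the client stamp every 
request with a sequence number and have the servers remember the last 
number applied per client, so the second server can drop whatever the 
first one already processed. A rough sketch of that bookkeeping (nothing 
like this exists in Hot Rod today; the class and method names are made up):

   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;

   // Hypothetical server-side dedup table: last sequence number applied per client.
   public class DedupTable {
      private final ConcurrentMap<String, Long> lastApplied =
            new ConcurrentHashMap<String, Long>();

      /** Applies the operation unless this client's seqNo was already processed. */
      public boolean apply(String clientId, long seqNo, Runnable operation) {
         Long last = lastApplied.get(clientId);
         if (last != null && seqNo <= last)
            return false;          // already received via the "failed" server
         operation.run();
         lastApplied.put(clientId, seqNo);
         return true;
      }
   }

That's exactly the queuing / sequence numbers / retransmission machinery 
mentioned above, and the question remains whether Hot Rod is the place 
to put it.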

Also, clients can have different 'views' of the cluster, because the 
ping protocol sees different latencies across different network links. 
With diverging views, the consistent hash function can pick different 
servers for the same key.
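
To illustrate: with a naive hash(key) % view.size() mapping (just a 
stand-in here, not Infinispan's real consistent hash), two clients with 
diverging views route the same key to different servers:

   import java.util.Arrays;
   import java.util.List;

   // Naive stand-in for a consistent hash, only to show the effect of diverging views.
   public class ViewDemo {
      static String pick(String key, List<String> view) {
         return view.get(Math.abs(key.hashCode()) % view.size());
      }

      public static void main(String[] args) {
         List<String> viewA = Arrays.asList("srv1", "srv2", "srv3");
         List<String> viewB = Arrays.asList("srv1", "srv3");   // srv2 suspected, dropped
         System.out.println(pick("user-42", viewA));   // one owner...
         System.out.println(pick("user-42", viewB));   // ...possibly a different one
      }
   }

A real consistent hash limits how many keys move when the view changes, 
but it can't help if two clients simply disagree on what the view is.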

Definitely worth a design / brainstorming session once we start tackling 
distribution across data centers...


>> * Assume Boston is up and running, then Bilbao is started. How does
>> Bilbao get the initial state of Boston ?
>
> Yup, that's missing. I think you'd need Bilbao to be Hot Rod client to 
> Boston and do a bulk get.

Probably something equivalent to the non-blocking state transfer already 
present in JBossCache / Infinispan...
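
If we do go the Hot Rod route, it could look roughly like this, with 
Bilbao acting as a plain Hot Rod client of one Boston node (the server 
name is made up, and treat the exact client API - the server_list 
property and getBulk() - as an assumption on my side):

   import java.util.Map;
   import java.util.Properties;

   import org.infinispan.Cache;
   import org.infinispan.client.hotrod.RemoteCache;
   import org.infinispan.client.hotrod.RemoteCacheManager;

   // Rough sketch: pull Boston's current state into a local Bilbao cache over Hot Rod.
   public class InitialStatePull {
      public static void seed(Cache<String, String> bilbaoCache) {
         Properties props = new Properties();
         props.put("infinispan.client.hotrod.server_list", "boston-node1:11222");
         RemoteCacheManager rcm = new RemoteCacheManager(props);
         RemoteCache<String, String> boston = rcm.getCache();

         Map<String, String> snapshot = boston.getBulk();   // whole remote cache in one call
         bilbaoCache.putAll(snapshot);                      // seed the local cluster
         rcm.stop();
      }
   }

Note this is a one-shot snapshot; anything written to Boston while the 
bulk get runs still has to be reconciled, which is what a non-blocking 
state transfer gets right.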

>> An unrelated question I have is what happens when the switch between
>> Boston and Bilbao crashes ? I assume Hot Rod requires TCP underneath, so
>> does it (or Netty) implement some heartbeating over the link ?
>
> I'm not aware of Netty sending any heartbeats, but Hot Rod has a ping 
> command which can be used for that.

Still another point to consider.
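
If we rely on the Hot Rod ping for this, a simple application-level 
heartbeat over the link would do. The HotRodLink type and its 
ping()/suspect() methods below are purely hypothetical placeholders, 
just to show the shape:

   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;

   // Sketch of an application-level heartbeat over the Boston <-> Bilbao link.
   public class LinkHeartbeat {

      // Hypothetical handle on the remote site; not part of any existing API.
      interface HotRodLink {
         void ping() throws Exception;
         void suspect();
      }

      public static void start(final HotRodLink link) {
         ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
         timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
               try {
                  link.ping();       // Hot Rod ping command, or any cheap request
               } catch (Exception e) {
                  link.suspect();    // link (or the switch between the sites) presumed dead
               }
            }
         }, 0, 5, TimeUnit.SECONDS);
      }
   }

You then still need a policy for what 'suspect' means - queue, drop, or 
fail over - which is again failure detection work that JGroups already does.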

My overall argument is that, while Hot Rod is fine for clients accessing 
servers, it needs work to connect servers or server islands. Or it may 
not be the right tool for that at all, and JGroups might serve you 
better in solving issues like the ones mentioned above.
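
For the server-to-server case, what I have in mind is a dedicated 
JGroups channel bridging the two sites, e.g. a TCP-based stack with the 
remote site's nodes listed in TCPPING. The config file name and cluster 
name below are made up, but the API is plain JGroups:

   import org.jgroups.JChannel;
   import org.jgroups.Message;
   import org.jgroups.ReceiverAdapter;

   // Sketch of a dedicated JGroups channel bridging Boston and Bilbao.
   // "wan-tcp.xml" would be a TCP-based stack (TCPPING listing the remote site's nodes).
   public class WanBridge {
      public static void main(String[] args) throws Exception {
         JChannel bridge = new JChannel("wan-tcp.xml");
         bridge.setReceiver(new ReceiverAdapter() {
            public void receive(Message msg) {
               // apply the change received from the remote site to the local cluster
               System.out.println("from remote site: " + msg.getObject());
            }
         });
         bridge.connect("boston-bilbao-bridge");
         bridge.send(new Message(null, null, "replicated change"));
         bridge.close();
      }
   }

The bridge channel then gives you failure detection, retransmission and 
view handling on the WAN link for free, which is exactly the machinery 
Hot Rod would otherwise have to grow.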

I'll shortly be looking into issues around wide-area data replication / 
distribution at the JGroups level, to see whether there's a solution 
requiring minimal or no changes in Infinispan. I doubt that, at least 
for distribution; there an Infinispan-centered solution is probably 
better. But we actually do have a design for this, which we came up with 
at our last Neuchatel meeting - is it still relevant?

-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss

