Galder Zamarreño wrote:
> On top of the issues you mentioned, you force all nodes to have the same
> communications layer, TCP or UDP.
Correct
> Did you have a look at my Hot Rod presentation for JUDCon?
> http://www.jboss.org/events/JUDCon/presentations.html. You might check
> out the last bit of my presentation, where I explain very briefly how
> Infinispan across data centres could potentially be implemented with a
> RemoteCacheStore hooking up different data centres via Hot Rod. Each
> DC could be configured with JGroups TCP/UDP, while the inter-DC traffic
> goes over Netty-based TCP-NIO. An active/passive kind of layout would
> be relatively easy to achieve this way.
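(For illustration only: a minimal sketch of how a cache in one data centre
could be backed by a RemoteCacheStore pointing at the other site's Hot Rod
endpoints. It uses the programmatic configuration API of later Infinispan
releases, and the cache name, host and port below are made-up assumptions.)

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.persistence.remote.configuration.RemoteStoreConfigurationBuilder;

public class CrossDcCacheConfig {

    // Boston cache that pushes/loads entries to/from Bilbao over Hot Rod.
    public static Configuration bostonCacheBackedByBilbao() {
        ConfigurationBuilder builder = new ConfigurationBuilder();

        // Intra-DC clustering still runs over the local JGroups stack (TCP or UDP).
        builder.clustering().cacheMode(CacheMode.DIST_SYNC);

        // Inter-DC traffic goes through a RemoteCacheStore, i.e. a Hot Rod
        // client pointing at the other data centre's Hot Rod servers.
        builder.persistence()
               .addStore(RemoteStoreConfigurationBuilder.class)
                  .remoteCacheName("users")              // cache name on the remote site (assumed)
                  .addServer()
                     .host("bilbao-hotrod-1.example")    // hypothetical Bilbao endpoint
                     .port(11222);

        return builder.build();
    }
}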
Yes, this would probably work, too. However, there are a few drawbacks:
* You have to configure an initial list of Hot Rod endpoints (in
the example, in Bilbao). I guess this is minor though; once
connected, clients get updated with the new membership list.
* When the coordinator in Bilbao crashes, someone else has to take
over. During this transition you have to ensure that there are no
lost or duplicate messages. This can be done with queuing and
sequence numbers, plus retransmission (see the sketch after this
list), but then again, is this something you want to do in Hot Rod?
* Assume Boston is up and running, then Bilbao is started. How does
Bilbao get the initial state of Boston?
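(To illustrate the queuing + sequence numbers + retransmission idea from the
second point: a generic Java sketch, not Hot Rod or JGroups code; every name
in it is made up.)

import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Stamps every outgoing update with a sequence number and keeps it until the
// remote site acknowledges it. After a coordinator failover, the new
// coordinator resends whatever is still buffered; the receiver drops
// duplicates by sequence number.
public class SequencedForwarder {

    private final AtomicLong nextSeqno = new AtomicLong(1);

    // Unacknowledged updates, ordered by sequence number.
    private final ConcurrentSkipListMap<Long, byte[]> pending = new ConcurrentSkipListMap<>();

    // Stamp, buffer and send an update to the remote data centre.
    public void forward(byte[] update) {
        long seqno = nextSeqno.getAndIncrement();
        pending.put(seqno, update);
        send(seqno, update);
    }

    // Called when the remote site acknowledges everything up to and including seqno.
    public void ack(long seqno) {
        pending.headMap(seqno, true).clear();
    }

    // Resend all unacknowledged updates, e.g. after taking over as coordinator.
    public void retransmit() {
        for (Map.Entry<Long, byte[]> e : pending.entrySet()) {
            send(e.getKey(), e.getValue());
        }
    }

    private void send(long seqno, byte[] update) {
        // Placeholder for the actual inter-DC transport (e.g. a Hot Rod put).
    }
}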
An unrelated question: what happens when the switch between Boston and
Bilbao crashes? I assume Hot Rod requires TCP underneath, so does it (or
Netty) implement some heartbeating over the link?
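(If heartbeating is not there already, it could be added at the transport
level. A small sketch using Netty's IdleStateHandler, written against the
Netty 4 API, so treat the class names as assumptions relative to the Netty
generation actually in use.)

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import java.util.concurrent.TimeUnit;

// Closes the connection when nothing has been read for 30 seconds, so a dead
// switch between the sites is detected even though the TCP connection itself
// might otherwise linger for a long time.
public class HeartbeatInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
            new IdleStateHandler(30, 0, 0, TimeUnit.SECONDS),
            new ChannelDuplexHandler() {
                @Override
                public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                    if (evt instanceof IdleStateEvent
                            && ((IdleStateEvent) evt).state() == IdleState.READER_IDLE) {
                        ctx.close(); // link considered dead: trigger reconnect / failover
                    } else {
                        super.userEventTriggered(ctx, evt);
                    }
                }
            });
    }
}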
--
Bela Ban
Lead JGroups / Clustering Team
JBoss