[infinispan-dev] design for cluster events (wiki page)

Mircea Markus mmarkus at redhat.com
Fri Nov 1 06:38:31 EDT 2013


Thanks for the input Erik.
Indeed, at this stage the wiki document is mostly about how to react to cluster partitions.
Even before considering vector clocks, support for merging state is needed, something like:

Map.Entry merge(Map.Entry fromPartition1, Map.Entry fromPartition2)

This is to be invoked during cluster merges, rather than passing all the responsibility for this to the user.
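As a minimal sketch of what I have in mind (the interface name, generics, and the trivial policy below are illustrative assumptions, not an agreed API):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Hypothetical callback the grid would invoke during a cluster merge
// whenever both partitions hold an entry for the same key.
interface EntryMergePolicy<K, V> {
    Map.Entry<K, V> merge(Map.Entry<K, V> fromPartition1, Map.Entry<K, V> fromPartition2);
}

public class MergeSketch {
    public static void main(String[] args) {
        // A trivial example policy: prefer partition 1's entry when present.
        EntryMergePolicy<String, String> preferFirst =
                (e1, e2) -> e1 != null ? e1 : e2;

        Map.Entry<String, String> east = new SimpleEntry<>("k", "east-value");
        Map.Entry<String, String> west = new SimpleEntry<>("k", "west-value");

        System.out.println(preferFirst.merge(east, west).getValue()); // prints "east-value"
    }
}
```

The user would plug in a policy per cache, and the grid would drive the iteration over conflicting keys itself.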

On Oct 31, 2013, at 9:50 PM, Erik Salter <an1310 at hotmail.com> wrote:

> Thanks for the wiki and the discussion.  Since I'm encountering this in the
> wild, allow me to offer my thoughts.
> 
> Basically my feeling is that no matter what you do in split-brain handling,
> it's going to be wrong.
> 
> In my use case, I have individual blades where each blade runs a suite of
> application nodes; one of which is a data grid node.  Each node is
> single-homed.  And they wire into the same switch.  This setup is orthogonal
> across a data center (WAN).  In this deployment, these two DCs make up a
> single cluster.  There is a concept of a set of keys for my caches being
> "owned" by a site, i.e. only one set of clients will access these keys.
> These keys are striped across the WAN with a TACH.
> 
> So a split brain on a local data center only can occur when a NIC on one of
> the blades goes bad and the node is still running.  The merge will always be
> of the [subgroups=N-1, 1] variety, where N is the number of running nodes in
> the cluster.  Since these nodes are single-homed, they cannot receive
> requests if they are "offline" from the NIC.  I don't have to worry about
> state collision, but I DO have to worry about stale state from the merged
> node.
> 
> In this case, it's easy to tell when I might be in a split-brain.  The FD
> protocol will suspect and exclude a node.  Currently, though, I have no way
> of knowing how or why a node was excluded.
> 
> If the WAN goes down, I have a rather large problem.  First off is
> detection.  If there's an ACL blockage, or worse, a unidirectional outage
> (i.e. east can see west, but not vice-versa), it takes the cluster a minute
> (really, about 60 seconds) to figure things out.  One side will have
> spurious MERGEs, the other side will have leaves from either FD_SOCK or FD.
> So again, there's a problem of detection.  It's a pretty safe bet, though,
> that if I don't see nodes from the other data center, I'm in a split brain.
> But I still can't differentiate between a normal leave, a leave caused by a
> socket error, or a leave caused by an FD exclusion.
> 
> But if a WAN goes down, both sides are available to their respective
> clients.
> 
> Since my use case is such that the keys are unique to each side, all the
> CRUD operations of a key come from a client on one side of the WAN, even
> though the primary owner of that key may be on the other side.  Ex: East and
> West are a single cluster.  East makes modifications to Key K that has its
> primary owner in West (suboptimal, but it works).  The WAN splits apart.
> Clients from East continue to make modifications to K.  If clients fail over
> to West, they still are the only ones making modifications to K.
> 
> The real problem occurs when the WAN event is over and the clusters merge.
> The merge handling has no way of knowing where to source K from.  
> 
> So I think we're missing vector clocking.
> 
> And taking it back a step, this doesn't have to be a WAN.  The use case is:
> split brain, but clients can only write to one side.  I would tend to think
> that's pretty common.  This can be solved with vector clocking (or any other
> versioning scheme).  In my use case, it would be "pick the latest".  I
> mentioned vector clocking because if you detected a split brain event at
> time T, and you were trying to merge keys with modifications after this
> time, this would be some application-level interface that would need to be
> called.
> 
> When we talk about controlling availability, I tend to think users really
> care about when the event started and how to detect it.   Availability is a
> "damned if you do, damned if you don't" scenario, due to the complex
> networking scenarios you can envision.  From my perspective, I really care
> about "how do I detect it?"  Then it becomes "how do I recover?"  
> 
> Just some disparate thoughts,
> 
> Erik
> 
> P.S.  (Yes, I know about XSite.  It has some limitations that necessitated
> the architecture above)
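
Erik's "pick the latest" resolution could be sketched roughly like this, assuming each value carries a modification timestamp (the VersionedValue type below is hypothetical, not an existing Infinispan class):

```java
public class PickLatest {
    // Hypothetical wrapper pairing a value with the time it was last written.
    static final class VersionedValue<V> {
        final V value;
        final long lastModified; // e.g. wall-clock millis at write time

        VersionedValue(V value, long lastModified) {
            this.value = value;
            this.lastModified = lastModified;
        }
    }

    // On merge, keep whichever partition saw the most recent modification.
    static <V> VersionedValue<V> merge(VersionedValue<V> east, VersionedValue<V> west) {
        return east.lastModified >= west.lastModified ? east : west;
    }

    public static void main(String[] args) {
        VersionedValue<String> east = new VersionedValue<>("written-before-split", 1000L);
        VersionedValue<String> west = new VersionedValue<>("written-during-split", 2000L);
        System.out.println(merge(east, west).value); // prints "written-during-split"
    }
}
```

Note this is only safe in Erik's scenario precisely because all writes to a given key come from one side at a time; with concurrent writers on both sides of the split, a timestamp comparison cannot detect the conflict, which is where vector clocks (or another versioning scheme) would come in.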

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)
