[infinispan-dev] ISPN-263 and handling partitions

Dan Berindei dan.berindei at gmail.com
Wed Apr 17 07:23:55 EDT 2013


On Wed, Apr 17, 2013 at 1:28 PM, Bela Ban <bban at redhat.com> wrote:

> Well, first of all, we won't *have* any conflicting topology IDs, as the
> minority partitions don't change them after becoming minority.
>
>
We don't have the notion of "conflicting topology IDs" with the current
algorithm, either. After a merge, it doesn't matter which partition had the
highest topology id before; we just pick a topology id that we know wasn't
used in any of the partitions.
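
For illustration, a minimal sketch of picking the post-merge topology id
(the "max + 2" matches Adrian's description later in this thread; the class
and method names are made up):

    import java.util.Collection;

    public class MergeTopologyIds {

       // Picks a topology id that is guaranteed not to have been used in
       // any partition, by going strictly above the highest id seen. The
       // "+ 2" follows Adrian's description of the current implementation.
       static int newTopologyId(Collection<Integer> partitionTopologyIds) {
          int max = 0;
          for (int id : partitionTopologyIds)
             max = Math.max(max, id);
          return max + 2;
       }
    }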

Then we assume that each node has the latest data in the segments it owned
in its pre-merge consistent hash. Obviously, if any value changed while the
partitions were separated, we would have lost consistency - hence the chaos
that Adrian mentioned.



> Secondly, we can end up with the coordinator of a minority partition
> becoming the coordinator of the new merged partition, so we shouldn't
> rely on that (but I don't think we do so anyway?).
>
>
No, we don't care which partition the merge coordinator was in; we treat
all partitions the same (with some extra work for overlapping partitions).



> On a merge, everyone knows whether it came from a minority or majority
> partition, and the algorithm for state transfer should always clear the
> state in members in the minority partition and overwrite it from members
> of the primary partition.
>

Actually, the merge coordinator is the only one that has to know whether
each node is from a minority or a majority partition.
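
For illustration, a sketch of how the merge coordinator could classify
subgroups, assuming it remembered the size of the last view installed
before the split (the JGroups MergeView/View API is real, everything else
here is made up):

    import java.util.HashSet;
    import java.util.Set;

    import org.jgroups.Address;
    import org.jgroups.MergeView;
    import org.jgroups.View;

    public class PartitionClassifier {

       // Returns the members that were in minority subgroups during the
       // split, given the size of the last pre-split view.
       static Set<Address> minorityMembers(MergeView mergeView, int preSplitSize) {
          Set<Address> minority = new HashSet<Address>();
          for (View subgroup : mergeView.getSubgroups()) {
             // A subgroup counts as the majority only if it kept more than
             // half of the pre-split members (newView.size > oldView.size / 2).
             if (subgroup.size() <= preSplitSize / 2)
                minority.addAll(subgroup.getMembers());
          }
          return minority;
       }
    }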

I like the idea of always clearing the state in members of the minority
partition(s), but one problem is that some keys may have had owners only in
the minority partition(s). If we wiped the state of the minority partition
members, those keys would be lost.

Of course, you could argue that the cluster already lost those keys when we
allowed the majority partition to continue working without having those
keys... We could also rely on the topology information, and say that we
only support partitioning when numOwners >= numSites (or numRacks, if there
is only one site, or numMachines, if there is a single rack).
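
A hypothetical sanity check along those lines (none of these names exist
in Infinispan today):

    import java.util.Set;

    public class PartitionSupportCheck {

       // Only claim partition support when the replication factor covers
       // every failure domain: numOwners >= numSites, falling back to
       // racks and then machines when there is a single site / rack.
       static boolean supportsPartitioning(int numOwners, Set<String> sites,
                                           Set<String> racks, Set<String> machines) {
          if (sites.size() > 1)
             return numOwners >= sites.size();
          if (racks.size() > 1)
             return numOwners >= racks.size();
          return numOwners >= machines.size();
       }
    }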

One other option is to perform a more complicated post-merge state
transfer, in which each partition sends all the data it has to all the
other partitions, and on the receiving end each node has a "conflict
resolution" component that can merge two values. That is definitely more
complicated than just going with a primary partition, though.
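
To make the idea concrete, here is a sketch of what such a conflict
resolution component might look like; the interface and the merge loop are
hypothetical, not an existing Infinispan SPI:

    import java.util.Map;

    interface ConflictResolver<K, V> {
       // Decides which value survives when a key changed in more than one
       // partition. A real policy would need version metadata (e.g. vector
       // clocks) to tell which write is newer; this just names the hook.
       V merge(K key, V localValue, V remoteValue);
    }

    class PostMergeStateTransfer {
       static <K, V> void applyRemoteState(Map<K, V> localState,
                                           Map<K, V> remoteState,
                                           ConflictResolver<K, V> resolver) {
          for (Map.Entry<K, V> e : remoteState.entrySet()) {
             V local = localState.get(e.getKey());
             V merged = (local == null)
                   ? e.getValue()
                   : resolver.merge(e.getKey(), local, e.getValue());
             localState.put(e.getKey(), merged);
          }
       }
    }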

One final point... when a node comes back online and it has a local cache
store, it is very much as if we had a merge view. The current approach is
for it to join as if it didn't have any data, then delete everything from
the cache store that is not mapped to the node in the consistent hash.
Obviously that can lead to consistency problems, just like our current
merge algorithm. It would be nice if we could handle both of these cases
the same way.
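
A sketch of that current rejoin behaviour, with a deliberately simplified
stand-in for the consistent hash (the real interface is richer):

    import java.util.Iterator;
    import java.util.Map;

    interface SimplifiedConsistentHash<K> {
       boolean isOwner(String localNode, K key);
    }

    class CacheStorePurgeOnJoin {
       // After rejoining as if it had no data, the node deletes every
       // cache-store entry whose key is no longer mapped to it. Note this
       // can silently drop data that was newer than the cluster's copy,
       // which is exactly the consistency problem described above.
       static <K, V> void purgeNonOwnedEntries(Map<K, V> store,
                                               SimplifiedConsistentHash<K> ch,
                                               String localNode) {
          for (Iterator<K> it = store.keySet().iterator(); it.hasNext(); ) {
             if (!ch.isOwner(localNode, it.next()))
                it.remove();
          }
       }
    }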



> On 4/17/13 10:58 AM, Radim Vansa wrote:
> > And the nice behaviour is that if we have partitions P1 and P2 with a
> > latest common topology of 20, and P2 increased its topology id to, say,
> > 40 while P1 only reached 30, then when a new coordinator from P1 is
> > elected it will try to compare these topology ids directly (assuming
> > one is newer or older), which won't end well.
> >
> > Radim
> >
> > ----- Original Message -----
> > | From: "Adrian Nistor" <anistor at redhat.com>
> > | To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
> > | Cc: "Manik Surtani" <msurtani at redhat.com>
> > | Sent: Wednesday, April 17, 2013 10:31:39 AM
> > | Subject: Re: [infinispan-dev] ISPN-263 and handling partitions
> > |
> > | In case of a MergeView, the cluster topology manager running on (the
> > | new) coordinator will request the current cache topology from all
> > | members and will compute a new topology as the union of all of them.
> > | The new topology id is computed as the max + 2 of the existing
> > | topology ids. Any currently pending rebalance in any subpartition is
> > | ended now and a new rebalance is triggered for the new cluster. No
> > | data version conflict resolution is performed => chaos :)
> > |
> > | On 04/16/2013 10:05 PM, Manik Surtani wrote:
> > | > Guys - I've started documenting this here [1] and will put together a
> > | > prototype this week.
> > | >
> > | > One question though, perhaps one for Dan/Adrian - is there any
> > | > special handling for state transfer if a MergeView is detected?
> > | >
> > | > - M
> > | >
> > | > [1] https://community.jboss.org/wiki/DesignDealingWithNetworkPartitions
> > | >
> > | > On 6 Apr 2013, at 04:26, Bela Ban <bban at redhat.com> wrote:
> > | >
> > | >>
> > | >> On 4/5/13 3:53 PM, Manik Surtani wrote:
> > | >>> Guys,
> > | >>>
> > | >>> So this is what I have in mind for this, looking for opinions.
> > | >>>
> > | >>> 1.  We write a SplitBrainListener which is registered when the
> > | >>> channel connects.  The aim of this listener is to identify when we
> > | >>> have a partition.  This can be identified when a view change is
> > | >>> detected and the new view is significantly smaller than the old
> > | >>> view.  Easier to detect for large clusters, but smaller clusters
> > | >>> will be harder - trying to decide between a node leaving vs a
> > | >>> partition.  (Any better ideas here?)
> > | >>>
> > | >>> 2.  The SBL flips a switch in an interceptor
> > | >>> (SplitBrainHandlerInterceptor?) which switches the node to be
> > | >>> read-only (reject invocations that change the state of the local
> > | >>> node) if it is in the smaller partition (newView.size <
> > | >>> oldView.size / 2).  Only works reliably for odd-numbered cluster
> > | >>> sizes, and the issues with small clusters seen in (1) will apply
> > | >>> here as well.
> > | >>>
> > | >>> 3.  The SBL can flip the switch in the interceptor back to normal
> > | >>> operation once a MergeView is detected.
> > | >>>
> > | >>> It's nowhere near perfect, but at least it means that we can
> > | >>> recommend enabling this and setting up an odd number of nodes,
> > | >>> with a cluster size of at least N, if you want to reduce
> > | >>> inconsistency in your grid during partitions.
> > | >>>
> > | >>> Is this even useful?
> > | >>
> > | >> So I assume this is to shut down (or make read-only) non-primary
> > | >> partitions. I'd go with an approach similar to [1] section 5.6.2,
> > | >> which makes a partition read-only once it drops below a certain
> > | >> number of nodes N.
> > | >>
> > | >>
> > | >>> Bela, is there a more reliable mechanism to detect a split in (1)?
> > | >> I'm afraid not. We never know whether a large number of members
> > | >> being removed from the view means that they left, or that we have
> > | >> a partition, e.g. because a switch crashed.
> > | >>
> > | >> One thing you could do, though, is have members who are about to
> > | >> leave regularly broadcast LEAVE messages, so that when the view is
> > | >> received, the SBL knows those members and might be able to
> > | >> determine better whether we have a partition or not.
> > | >>
> > | >> [1] http://www.jgroups.org/manual-3.x/html/user-advanced.html,
> > | >> section 5.6.2
> > | >>
> > | >> --
> > | >> Bela Ban, JGroups lead (http://www.jgroups.org)
> > | > --
> > | > Manik Surtani
> > | > manik at jboss.org
> > | > twitter.com/maniksurtani
> > | >
> > | > Platform Architect, JBoss Data Grid
> > | > http://red.ht/data-grid
> > |
> >
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
>