On Wed, Jun 13, 2012 at 2:12 PM, Manik Surtani <manik@jboss.org> wrote:

On 13 Jun 2012, at 09:05, Dan Berindei wrote:

Sanne and I resumed the meeting later yesterday afternoon, but we
basically just rehashed the stuff that we've been discussing before
lunch. Logs here:

(07:10:10 PM) jbott: Meeting ended Tue Jun 12 16:09:55 2012 UTC.
Information about MeetBot at http://wiki.debian.org/MeetBot . (v
0.1.4)
(07:10:10 PM) jbott: Minutes:
http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-15.26.html
(07:10:10 PM) jbott: Minutes (text):
http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-15.26.txt
(07:10:10 PM) jbott: Log:
http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-15.26.log.html


The main conclusion was that the total number of virtual nodes/hash
segments will be fixed per cluster, not per node. Kind of like the old
AbstractWheelConsistentHash.HASH_SPACE, only configurable. A physical
node will have a variable number of vnodes/segments over its lifetime.
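One consequence worth spelling out: with a fixed, cluster-wide segment count, the key -> segment mapping never changes across topology changes; only the segment -> owners mapping does. A minimal sketch (all names hypothetical, not actual Infinispan API):

```java
// Hypothetical sketch: the key -> segment mapping is a pure
// function of the key and the fixed segment count, so it stays
// stable for the lifetime of the cluster.
public final class Segments {
    private final int numSegments; // fixed per cluster, e.g. 256

    public Segments(int numSegments) {
        this.numSegments = numSegments;
    }

    // floorMod avoids negative segments for negative hash codes.
    public int segmentFor(Object key) {
        return Math.floorMod(key.hashCode(), numSegments);
    }
}
```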

We also decided to add a pull component to our state transfer. The
current NBST design requires all the nodes to push state to a joiner
more or less at the same time, which results in lots of congestion at
the network layer and sometimes even in the joiner being excluded from
the cluster. We have decided that a node will not start pushing data
as soon as it receives the PREPARE_VIEW command from the coordinator,
but instead it will wait for a START_PUSH command from the receiver.
The receiver will only ask one previous owner at a time, thus
eliminating the congestion.
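The pull side of that protocol could look roughly like this (a sketch with illustrative names; the real transport and command classes would differ):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch (not the actual NBST code): the receiver
// drives the transfer, requesting a push from one previous owner
// at a time so only one node is ever sending to it.
public class StatePuller {
    // Illustrative transport abstraction: sends START_PUSH to one
    // owner and blocks until that owner's push has completed.
    interface Transport {
        void requestPushAndWait(String owner, List<Integer> segments);
    }

    private final Transport transport;

    public StatePuller(Transport transport) {
        this.transport = transport;
    }

    public void pullState(Map<String, List<Integer>> segmentsByOwner) {
        // Strictly sequential: the next START_PUSH goes out only
        // after the previous owner has finished pushing.
        for (Map.Entry<String, List<Integer>> e : segmentsByOwner.entrySet()) {
            transport.requestPushAndWait(e.getKey(), e.getValue());
        }
    }
}
```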


We've had a lot of back-and-forth discussion about whether the CH
should be "non-deterministic". We agreed in the end (I think) that
it's fine if the creation of the CH is not based solely on the
current members list, and depends on the previous CH as well. This
is quite important: I think it would be hard to find an algorithm
based only on the member list that doesn't change ownership for a
lot of nodes in case of a leave (even if we use the previous members
list as well): see https://issues.jboss.org/browse/ISPN-1275.

Would this still be encapsulated by the existing ConsistentHash interface?  We should be careful about impl details leaking into the rest of the codebase.


Regular code would not have to deal with the key -> segment or segment -> physical node mapping, but the query module would need access to this stuff in order to build a separate index for each vnode/segment. There may be others who could use this functionality as well...


I had an idea (that I'm pretty sure I didn't explain properly in the
chat) that we could avoid state transfer blocking everything while
receiving the transaction table from a previous owner by splitting the
state transfer in two:
* In the first phase, we'd pick the new backup owners for each
segment, and we'd transfer all the state to them (entries, transaction
table, etc.)
* In the second phase, we'd pick a new primary owner for each segment,
but the primary owner can only be one of the existing backup owners.
Since the data has already been transferred, we can now also remove
the extra owners.

Who is "we"?  The joiner?  The coordinator?  Everyone (deterministic, triggered on an event/message)?


This is the basic flow I was thinking of:
1. Upon a cache membership change, the coordinator computes a new CH that removes the dead nodes and doesn't add any new owners unless there is a segment with 0 owners.
2. The coordinator broadcasts this CH to all the cache members, there is no state transfer at this point.
3. The coordinator checks the CH and notices that the number of segments owned by each node is not balanced. This could happen immediately or be triggered manually by the administrator.
4. The coordinator computes a new CH that balances the number of owners for each segment (but without removing any of the old owners, so some segments will have > numOwners owners). It starts state transfer for this CH.
5. After the state transfer ends, the coordinator checks the CH again. If the number of "primary owned" segments for each node is not balanced, it creates a new CH where each node is the primary owner of about the same number of segments, and broadcasts this new CH. Again, no state transfer is necessary.

So the coordinator creates the CHs, but everyone can compute it based on the "base" CH and the new list of members:
1. If a node in the base CH is no longer a member, remove it from all owner lists. Stop here. Otherwise go to 2.
2. If the number of owned segments per node is not balanced, add owners (algorithm TBD, but it won't be random). Stop here. Otherwise, go to 3.
3. If the number of primary-owned segments per node is not balanced, pick different primary owners from the existing owners. Remove extra owners.
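The steps above can be sketched as code, modelling the CH as a list indexed by segment, each entry an ordered owner list (first element = primary). This is purely illustrative: step 1 and the balance check for step 2 are shown, while the actual add-owners algorithm is TBD as noted.

```java
import java.util.*;

// Hypothetical sketch of the deterministic update rule; names are
// illustrative, not real Infinispan classes.
public class ChUpdate {
    // Step 1: drop members that left from every owner list; change
    // nothing else.
    static List<List<String>> removeLeavers(List<List<String>> baseCh,
                                            Set<String> members) {
        List<List<String>> next = new ArrayList<>();
        for (List<String> owners : baseCh) {
            List<String> alive = new ArrayList<>(owners);
            alive.retainAll(members);
            next.add(alive);
        }
        return next;
    }

    // Balance check deciding whether step 2 (add backup owners,
    // algorithm TBD) must run: no node should own more segments
    // than the ceiling of the average.
    static boolean isBalanced(List<List<String>> ch, int numNodes) {
        Map<String, Integer> counts = new HashMap<>();
        int total = 0;
        for (List<String> owners : ch) {
            for (String node : owners) {
                counts.merge(node, 1, Integer::sum);
                total++;
            }
        }
        int ceiling = (total + numNodes - 1) / numNodes;
        for (int c : counts.values()) {
            if (c > ceiling) {
                return false;
            }
        }
        return true;
    }
}
```

Because each step is a pure function of the base CH and the member list, every node that has both inputs computes the same result.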

During the first phase, a segment could have more than numOwners
owners, and commands would reach both the new owners and the old
owners. We will need to handle commit commands for transactions that
the new owner doesn't have yet in its transaction table, but we would
not need to block prepare commands (like the current NBST design
does). During the second phase, the new primary owner already has the
transaction table, so we don't need a blocking phase either.

So the new primary owner would queue up these commit commands? Or just ignore them/respond with a +1?


During the 1st phase there is no new primary owner, just new backup owners. They would need to queue commit commands, or otherwise handle commit commands for prepares that they haven't received yet via state transfer - in the current NBST design this is handled by blocking everything until we have received transaction information from the old owners. I would prefer marking the transactions as 1PC, like TOB does.

During the 2nd phase, the new primary owner already has the prepare commands (either received via state transfer or received as a backup owner), so it doesn't need to block/queue.
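The "queue" option from the 1st phase can be sketched like this (hypothetical names throughout; the 1PC alternative would instead apply the commit in one phase, as TOB does, rather than parking it):

```java
import java.util.*;

// Hypothetical sketch: commits for transactions whose prepare
// hasn't arrived yet via state transfer are parked and replayed
// once the prepare is registered.
public class TxCommitQueue {
    private final Set<String> preparedTx = new HashSet<>();
    private final List<String> queuedCommits = new ArrayList<>();

    // Returns true if the commit can be applied immediately.
    public synchronized boolean onCommit(String txId) {
        if (preparedTx.contains(txId)) {
            return true; // prepare already known, commit proceeds
        }
        queuedCommits.add(txId); // park until state transfer delivers it
        return false;
    }

    // Called when state transfer (or a prepare replicated to this
    // backup) registers the transaction. Returns parked commits
    // that can now be applied.
    public synchronized List<String> onPrepareReceived(String txId) {
        preparedTx.add(txId);
        List<String> replay = new ArrayList<>();
        for (Iterator<String> it = queuedCommits.iterator(); it.hasNext(); ) {
            String queued = it.next();
            if (queued.equals(txId)) {
                replay.add(queued);
                it.remove();
            }
        }
        return replay;
    }
}
```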


I didn't explain this properly in the chat because I was certain it
would only make sense if the coordinator initiated state transfer one
node at a time, making it non-deterministic. But I think if we allow
the CH creation algorithm to use the previous CH, we can
deterministically decide if the backup owners are properly balanced
(if not, we need to start phase 1) and if the primary owners are
properly balanced (if not, we need to start phase 2).

+1 to a deterministic CH.


Does that mean -1 to using the previous CH to compute the new CH?
 


There is something else that I've been thinking about since yesterday
that might improve performance and even simplify the state transfer at
the cost of determinism. When state transfer fails (usually because a
node has died, but not necessarily), the coordinator could ask each
node how far it got with the state transfer in progress (how many
segments they got, from which owners, etc). The coordinator would then
create a new "base CH" based on the actually transferred data instead
of the actual start CH or the "pending CH", or even the whole
chain/tree of CHs, none of which reflect how data is effectively
stored in the cluster at that moment. Because this base CH would
reflect the actual owners of each segment, there would be less data
moving around in the new state transfer and we wouldn't need to keep a
chain/tree of previous owner lists either.
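A rough sketch of that recovery step, under the stated assumptions (each member reports which segments it fully holds; all names are illustrative):

```java
import java.util.*;

// Hypothetical sketch: after an interrupted state transfer the
// coordinator rebuilds a "base CH" from the members' reports of
// what they actually hold, instead of from the pending CH or a
// chain of old CHs.
public class BaseChRecovery {
    // reports: node -> segments fully received/held by that node.
    static List<List<String>> rebuild(int numSegments,
                                      Map<String, Set<Integer>> reports) {
        List<List<String>> baseCh = new ArrayList<>();
        for (int s = 0; s < numSegments; s++) {
            List<String> owners = new ArrayList<>();
            for (Map.Entry<String, Set<Integer>> e : reports.entrySet()) {
                if (e.getValue().contains(s)) {
                    owners.add(e.getKey());
                }
            }
            baseCh.add(owners);
        }
        return baseCh;
    }
}
```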

When you say at the cost of determinism, what are the consequences of this?  Some nodes may get "wrong answers" from their CH instances?  And if so, then what?  When these nodes contact "wrong nodes" for a specific key, would this "wrong node" then proxy to the coordinator (who has the definitive CH)?


At the cost of determinism = only the coordinator gets the "status updates" from the members after the interruption, so only the coordinator can compute the new CH. It can then broadcast the new CH, and everyone will use that new CH.

The individual members will never compute their own CH, they would use the old CH until they got the new CH from the coordinator. Because we only remove owners for a key after everyone else has it, there are only two ways a command can reach the wrong node:
1. The node was an owner a long time ago but is no longer an owner (or may even be dead). If the node is still alive, it can forward the command to any/all owners in the new CH. Otherwise, the originator will have to deal with a SuspectException and retry once it gets the updated CH from the coordinator.
2. The node is an owner in the latest CH, but hasn't received all the entries/prepares yet. Again, it can forward to a node that's an owner in both the latest CH and the base CH (if it's a read), mark the transaction as 1PC (commit), or perform the command normally (write/prepare).
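The two cases boil down to a small routing decision on the receiving node, something like this sketch (illustrative only; writes/prepares in case 2 could be performed locally instead of forwarded):

```java
// Hypothetical sketch of the two "wrong node" cases: the receiving
// node decides, from the CH it currently knows, whether to handle
// a command locally or forward it to a current owner.
public class CommandRouter {
    enum Action { HANDLE, FORWARD_TO_OWNER }

    // isOwnerInLatestCh: this node owns the key's segment in the
    // newest CH it has seen; hasSegmentState: the segment's entries
    // and prepares have fully arrived via state transfer.
    static Action route(boolean isOwnerInLatestCh, boolean hasSegmentState) {
        if (!isOwnerInLatestCh) {
            // Case 1: an old owner that is no longer an owner;
            // forward to any owner in the new CH (if this node is
            // dead, the originator retries on SuspectException).
            return Action.FORWARD_TO_OWNER;
        }
        if (!hasSegmentState) {
            // Case 2: a new owner still receiving state; for reads,
            // forward to a node owning the segment in both CHs.
            return Action.FORWARD_TO_OWNER;
        }
        return Action.HANDLE;
    }
}
```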

I'm going to take a stab at implementing a new CH with a fixed number
of vnodes, that can take an existing CH as input and change owners as
little as possible. Then I'm going to try and implement the balanced
backup owners/balanced primary owners check as well, just to see if
it's really possible. I'm not going to modify the design document just
yet, I need to see first if it does work and what you guys think about
it…

+1.  It will also give you an idea of the effort involved in implementing this, and how to break up the work into subtasks, how to test, etc.

Cheers
Manik




Cheers
Dan


On Tue, Jun 12, 2012 at 4:02 PM, Manik Surtani <manik@jboss.org> wrote:
Meeting minutes from part 1.  Had to break for lunch.  :)


Meeting ended Tue Jun 12 13:00:43 2012 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
Minutes:        http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-09.58.html
Minutes (text): http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-09.58.txt
Log:            http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2012/infinispan.2012-06-12-09.58.log.html


--
Manik Surtani
manik@jboss.org
twitter.com/maniksurtani

Project Lead, Infinispan
http://www.infinispan.org

Platform Architect, JBoss Data Grid
http://www.redhat.com/promo/dg6beta


_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Manik Surtani

Project Lead, Infinispan

Platform Architect, JBoss Data Grid

