[infinispan-dev] [infinispan-internal] PutMapCommand is ineffective

Dan Berindei dan.berindei at gmail.com
Mon Jun 10 11:30:37 EDT 2013


Yes, putAll is really heavy in non-tx (concurrent) mode, because the same
PutMapCommand is forwarded from each primary owner to all the backup owners
of the keys it primary-owns. However, I don't think the message flow is
quite as bad as described below (more on that inline).

However, in non-tx mode locks are owned by threads. A separate lock command
would acquire a lock and associate it with its execution thread, making it
impossible for a following write command to use the same lock. Changing
putAll to implement Radim's proposal would indeed make it very similar to a
transactional putAll: you'd need a pseudo-transaction object to associate
the locks with, and a reaper to clean up the pseudo-transaction objects
when the originator leaves the cluster.
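
Just to make that idea concrete, here is a rough sketch of what such a
pseudo-transaction lock owner and reaper could look like (the names PseudoTx,
PseudoTxRegistry and the wiring are made up for illustration, not existing
Infinispan classes):

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustration only: a lock owner that is not a thread, so a later write
    // command from the same originator could reuse the locks it acquired.
    final class PseudoTx {
        final Object originator;  // address of the node that invoked putAll
        final long id;            // unique per operation on that originator
        final Set<Object> lockedKeys = ConcurrentHashMap.newKeySet();

        PseudoTx(Object originator, long id) {
            this.originator = originator;
            this.id = id;
        }
    }

    final class PseudoTxRegistry {
        // all pseudo-transactions started by a given originator
        private final Map<Object, Set<PseudoTx>> byOriginator = new ConcurrentHashMap<>();

        void register(PseudoTx tx) {
            byOriginator.computeIfAbsent(tx.originator,
                    o -> ConcurrentHashMap.newKeySet()).add(tx);
        }

        void complete(PseudoTx tx) {
            Set<PseudoTx> txs = byOriginator.get(tx.originator);
            if (txs != null) txs.remove(tx);
            // release tx.lockedKeys against the lock manager here
        }

        // the "reaper": invoked from a view-change listener when a member
        // leaves, so locks held for a crashed originator are not leaked
        void originatorLeft(Object originator) {
            Set<PseudoTx> orphaned = byOriginator.remove(originator);
            if (orphaned == null) return;
            for (PseudoTx tx : orphaned) {
                // release tx.lockedKeys against the lock manager here
            }
        }
    }

A reaper like originatorLeft would have to be driven by cluster view changes,
which is exactly the part that makes this start to resemble the transactional
code path.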




On Mon, Jun 10, 2013 at 1:33 PM, Manik Surtani <msurtani at redhat.com> wrote:

> Agreed.  It does sound pretty heavy.  We should investigate a better
> implementation - the two approaches you suggest both sound good, could you
> create a JIRA for this?
>
> Adding infinispan-dev, that's the correct place to discuss this.
>
> Cheers
> Manik
>
> On 7 Jun 2013, at 13:39, Radim Vansa <rvansa at redhat.com> wrote:
>
> > Hi,
> >
> > recently I was looking into the performance of PutMapCommand and what's
> in fact going on under the hood. From what I've seen (not from the code but
> from message flow analysis), in non-transactional synchronous mode this
> happens:
> >
> > A wants to execute PutMapCommand with many keys - let's assume that in
> fact the keys span all nodes in the cluster.
> >
> > 1. A locks all local keys and sends via unicast a message to each
> > primary owner of some of the keys in the map
> > 2. A sends a unicast message to each node, requesting the operation
> > 3. Each node locks its keys and sends a multicast message to ALL other
> > nodes in the cluster
>

I don't think that's right... Each primary owner only sends this message to
all the backup owners of the keys for which that node is the primary owner.
So it will only send the message to all the other nodes (optimized as a
multicast) if every other node is a backup owner for one of its primary
keys.
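
Just to make that concrete, a rough sketch of the splitting a primary owner
does before forwarding (the locateOwners lookup and the class/method names
are made up for illustration and stand in for the real consistent-hash code):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    // Illustration only: the splitting a primary owner would do before
    // forwarding the PutMapCommand. locateOwners(key) stands in for the
    // consistent-hash lookup returning [primary, backup1, backup2, ...].
    final class PutMapForwardingSketch {
        static Map<Object, Map<Object, Object>> splitForBackups(
                Object self,
                Map<Object, Object> entries,
                Function<Object, List<Object>> locateOwners) {

            // backup owner -> the subset of entries it has to apply
            Map<Object, Map<Object, Object>> perBackup = new HashMap<>();
            for (Map.Entry<Object, Object> e : entries.entrySet()) {
                List<Object> owners = locateOwners.apply(e.getKey());
                if (!self.equals(owners.get(0))) {
                    continue;  // this node is not primary for the key, skip it
                }
                for (Object backup : owners.subList(1, owners.size())) {
                    perBackup.computeIfAbsent(backup, b -> new HashMap<>())
                             .put(e.getKey(), e.getValue());
                }
            }
            // the forwarded command only goes to perBackup.keySet(); it turns
            // into a cluster-wide multicast only when that set happens to
            // cover every other member
            return perBackup;
        }
    }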



> > This happens N - 1 times:
> > 4. Each node receives the multicast message, (updates the non-primary
> > segments) and sends a reply back to the sender of the multicast message.
> > 5. The primary owners send a confirmation back to A.
> >
> > Let's compute how many messages are received here:
> > N - 1 // A's requests
> > (N - 1) * (N - 1) // multicast messages received
> > (N - 1) * (N - 1) // replies to the multicast messages
> > N - 1 // responses to A
> > That's 2*N^2 - 2*N messages, assuming nobody needs flow control
> > replenishments, nothing is lost etc. I don't like that ^2 exponent - it
> > does not look like the cluster is really scaling. It could be fun to see
> > it executed on a 64-node cluster: 2*64^2 - 2*64 = 8064 messages just for
> > one putAll (with, say, 100 key-value pairs - I don't want to compute the
> > exact probability of how many nodes such a set of keys would have primary
> > segments on).
> >
> > Could the requestor orchestrate the whole operation? The idea is that
> > all messages are sent only between the requestor and the other nodes,
> > never between the other nodes. The requestor would lock the primary keys
> > with one set of messages (waiting for the replies), update the
> > non-primaries with another set of messages and then unlock all primaries
> > with a last message.
> > The messages could be either unicasts carrying only the keys relevant to
> > each recipient, or a multicast with the whole map - which one is actually
> > better would have to be determined by a performance test.
> > This results in 6*N - 6 messages (or 5*N - 5 if the last message did not
> > require a reply). You can easily see when 5*(N - 1) is better than
> > 2*N*(N - 1) - already for any N >= 3.
> > Or is this too similar to transactions with multiple keys?
> >
> > I think that with the current implementation, the putAll operation should
> > be discouraged, as it does not provide better performance than multiple
> > puts (and in terms of atomicity it's probably not much better either).
> >
> > WDYT?
> >
> > Radim
> >
> > -----------------------------------------------------------
> > Radim Vansa
> > Quality Assurance Engineer
> > JBoss Datagrid
> > tel. +420532294559 ext. 62559
> >
> > Red Hat Czech, s.r.o.
> > Brno, Purkyňova 99/71, PSČ 612 45
> > Czech Republic
> >
> >
>
> --
> Manik Surtani
> manik at jboss.org
> twitter.com/maniksurtani
>
> Platform Architect, JBoss Data Grid
> http://red.ht/data-grid
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

