On 17 May 2013, at 07:35, Dan Berindei <dan.berindei(a)gmail.com> wrote:
On Thu, May 16, 2013 at 8:27 PM, Mircea Markus <mmarkus(a)redhat.com> wrote:
On 16 May 2013, at 15:04, Dan Berindei <dan.berindei(a)gmail.com> wrote:
> Hi guys
>
> I'm working on an intermittent failure in NodeMoveAPIPessimisticTest and I think
> I've come across some underspecified behaviour in AtomicHashMap.
>
> Say we have two transactions, tx1 and tx2, and they both work with the same atomic
> map in a pessimistic cache:
>
> 1. tx1: am1 = AtomicMapLookup.get(cache, key)
> 2. tx2: am2 = AtomicMapLookup.get(cache, key)
> 3. tx1: am1.put(subkey1, value1) // locks the map
> 4. tx2: am2.get(subkey1) // returns null
> 5. tx1: commit // the map is now {subkey1=value1}
> 6. tx2: am2.put(subkey2, value2) // locks the map
> 7. tx2: commit // the map is now {subkey2=value2}
>
> It's not clear to me from the AtomicMap/AtomicHashMap javadoc if this is ok or
> if it's a bug...
As a user I find that a bit confusing, so I think tx2 should merge its changes into the AtomicMap.
I'd be curious to hear Manik's (author) and Sanne's (user) opinions on this.
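
For reference, this is roughly what the interleaving above looks like in code - just a
sketch, replaying the steps on a single thread via suspend/resume, and assuming 'cache'
is a transactional cache configured with pessimistic locking:

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMap;
import org.infinispan.atomic.AtomicMapLookup;

public class AtomicMapInterleavingSketch {
   // 'cache' is assumed to be transactional with pessimistic locking.
   static void run(Cache<String, Object> cache) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

      // 1. tx1: obtain a proxy to the atomic map
      tm.begin();
      AtomicMap<String, String> am1 = AtomicMapLookup.getAtomicMap(cache, "key");
      Transaction tx1 = tm.suspend();

      // 2. tx2: obtain a proxy to the same atomic map
      tm.begin();
      AtomicMap<String, String> am2 = AtomicMapLookup.getAtomicMap(cache, "key");
      Transaction tx2 = tm.suspend();

      // 3. tx1: write, which locks the map
      tm.resume(tx1);
      am1.put("subkey1", "value1");
      tx1 = tm.suspend();

      // 4. tx2: read, which does not block and still sees null
      tm.resume(tx2);
      assert am2.get("subkey1") == null;
      tx2 = tm.suspend();

      // 5. tx1: commit; the stored map is now {subkey1=value1}
      tm.resume(tx1);
      tm.commit();

      // 6.-7. tx2: write and commit; today the map ends up as {subkey2=value2}
      // instead of being merged into {subkey1=value1, subkey2=value2}
      tm.resume(tx2);
      am2.put("subkey2", "value2");
      tm.commit();
   }
}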
Merging should work with pessimistic locking, but I don't think we could do it with
optimistic locking and write skew check enabled: we only do the write skew check for the
whole map.
If the WSC is enabled, then the 2nd transaction should fail: tx2 reads the version at
step 2, and at step 7 the WSC should forbid it from committing, so we shouldn't have
this problem at all.
Would it be worth making this change if it meant making the behaviour
of AtomicHashMap more complex?
How much more complex? If it's not a quick fix (2h), I'd say no, as this is more of a
nice-to-have: no user requires this functionality ATM.
On the other hand, I believe FineGrainedAtomicHashMap doesn't do separate write skew
checks for each key in the map either, so users probably have to deal with this difference
between pessimistic and optimistic locking already.
For FGAM I think the WSC should be performed on a per-key basis, not for the whole map.
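
To illustrate what a per-key check would allow - a sketch only, the check itself is
internal; this just shows the access pattern that should be able to commit:

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;
import org.infinispan.atomic.FineGrainedAtomicMap;

public class FineGrainedWscSketch {
   // Two optimistic transactions update disjoint keys of the same map.
   static void disjointUpdates(Cache<String, Object> cache) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

      tm.begin();
      FineGrainedAtomicMap<String, String> fgam1 =
            AtomicMapLookup.getFineGrainedAtomicMap(cache, "key");
      fgam1.put("subkey1", "value1");        // touches only subkey1
      Transaction tx1 = tm.suspend();

      tm.begin();
      FineGrainedAtomicMap<String, String> fgam2 =
            AtomicMapLookup.getFineGrainedAtomicMap(cache, "key");
      fgam2.put("subkey2", "value2");        // touches only subkey2
      tm.commit();

      tm.resume(tx1);
      tm.commit();  // per-key WSC: should pass; a whole-map WSC fails here
   }
}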
>
> Note that today the map is overwritten by tx2 even without step 4 ("tx2:
> am2.get(subkey1)"). I'm pretty sure that's a bug and I fixed it locally by
> using the FORCE_WRITE_LOCK in AtomicHashMapProxy.getDeltaMapForWrite.
>
> However, when the Tree API moves a node it first checks for the existence of the
> destination node, which means NodeMoveAPIPessimisticTest is still failing. I'm not
> sure if I should fix that by forcing a write lock for all AtomicHashMap reads, for all
> TreeCache reads, or only in TreeCache.move().
>
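
For context, the move in question is the Tree API operation below - a sketch with
illustrative Fqns, assuming a cache set up for the tree module:

import org.infinispan.Cache;
import org.infinispan.tree.Fqn;
import org.infinispan.tree.TreeCache;
import org.infinispan.tree.TreeCacheFactory;

public class TreeMoveSketch {
   // Moving /a/b under /c; move() first reads the destination node to check
   // whether it exists, and that read is what needs the write lock.
   static void move(Cache<String, String> cache) {
      TreeCache<String, String> tree = new TreeCacheFactory().createTreeCache(cache);
      tree.move(Fqn.fromString("/a/b"), Fqn.fromString("/c"));
   }
}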
I tried using the FORCE_WRITE_LOCK flag for all TreeCache reads. This seems to work
fine, and move() doesn't throw any exceptions in pessimistic mode any more. In
optimistic mode it doesn't change anything, and concurrent moves still fail with
WriteSkewException. The only downside is performance: acquiring extra locks will
certainly slow things down.
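
For reference, forcing the lock on a read boils down to something like this - a sketch,
with the flag applied per invocation on the underlying cache:

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class ForceWriteLockSketch {
   // Makes a plain read acquire the write lock eagerly (pessimistic mode),
   // so a concurrent writer cannot slip in between this read and a later
   // write in the same transaction.
   static Object readWithWriteLock(Cache<String, Object> cache, String key) {
      AdvancedCache<String, Object> advanced = cache.getAdvancedCache();
      return advanced.withFlags(Flag.FORCE_WRITE_LOCK).get(key);
   }
}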
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)