[infinispan-dev] AtomicHashMap concurrent modifications in pessimistic mode

Manik Surtani msurtani at redhat.com
Mon May 20 06:57:04 EDT 2013


On 16 May 2013, at 15:04, Dan Berindei <dan.berindei at gmail.com> wrote:

> Hi guys
> 
> I'm working on an intermittent failure in NodeMoveAPIPessimisticTest and I've come across what I think is underspecified behaviour in AtomicHashMap.
> 
> Say we have two transactions, tx1 and tx2, and they both work with the same atomic map in a pessimistic cache:
> 
> 1. tx1: am1 = AtomicMapLookup.get(cache, key)
> 2. tx2: am2 = AtomicMapLookup.get(cache, key)
> 3. tx1: am1.put(subkey1, value1) // locks the map
> 4. tx2: am2.get(subkey1) // returns null
> 5. tx1: commit // the map is now {subkey1=value1}
> 6. tx2: am2.put(subkey2, value2) // locks the map
> 7. tx2: commit // the map is now {subkey2=value2}
> 
> It's not clear to me from the AtomicMap/AtomicHashMap javadoc if this is ok or if it's a bug...
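
To make the interleaving concrete, here is roughly what that sequence looks like as code. This is only a sketch: it assumes an embedded, transactional cache configured for pessimistic locking, uses the JTA suspend()/resume() calls to alternate the two transactions on a single thread, and the key/subkey/value names are placeholders.

import java.util.Map;

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;

public class AtomicMapInterleaving {
   // Replays steps 1-7 on one thread by suspending/resuming the two txs.
   static void run(Cache<String, Object> cache) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

      tm.begin();                                                           // tx1
      Map<String, String> am1 = AtomicMapLookup.getAtomicMap(cache, "key"); // 1
      Transaction tx1 = tm.suspend();

      tm.begin();                                                           // tx2
      Map<String, String> am2 = AtomicMapLookup.getAtomicMap(cache, "key"); // 2
      Transaction tx2 = tm.suspend();

      tm.resume(tx1);
      am1.put("subkey1", "value1");  // 3: tx1 takes the write lock on the map
      tx1 = tm.suspend();

      tm.resume(tx2);
      am2.get("subkey1");            // 4: returns null, reads don't block
      tx2 = tm.suspend();

      tm.resume(tx1);
      tm.commit();                   // 5: map is now {subkey1=value1}

      tm.resume(tx2);
      am2.put("subkey2", "value2");  // 6: lock is free again, so this succeeds
      tm.commit();                   // 7: map is now {subkey2=value2}
   }
}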

If optimistic, step 7 should fail with a write skew check. If pessimistic, step 2 would *usually* block, assuming another transaction were already updating the map; but since neither tx1 nor tx2 has started updating the map at that point, neither holds a write lock on it, so the lookup succeeds. I'm not sure this is any different from not using an atomic map:

1.  tx1: v1 = cache.get(k); // reads into tx context
2.  tx2: v2 = cache.get(k);
3.  tx1: cache.put(k, v1 + 1);
4.  tx1: commit
5.  tx2: cache.put(k, v2 + 1);
6.  tx2: commit

Here as well, with optimistic locking step 6 will fail with a write skew check, but with pessimistic locking it will succeed, since tx2 only requests its write lock after tx1 has committed and released its own.
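
Spelled out in the same style (again just a sketch, reusing the imports and pessimistic setup above, and assuming "k" starts out mapped to an integer):

static void plainCacheInterleaving(Cache<String, Integer> cache) throws Exception {
   TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

   tm.begin();
   int v1 = cache.get("k");  // 1: tx1 reads into its tx context
   Transaction tx1 = tm.suspend();

   tm.begin();
   int v2 = cache.get("k");  // 2: tx2 reads the same value
   Transaction tx2 = tm.suspend();

   tm.resume(tx1);
   cache.put("k", v1 + 1);   // 3: tx1 acquires the write lock
   tm.commit();              // 4: tx1 commits and releases the lock

   tm.resume(tx2);
   cache.put("k", v2 + 1);   // 5: the lock is free again, so this succeeds
   tm.commit();              // 6: final value is v + 1, tx1's increment is lost
}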

> Note that today the map is overwritten by tx2 even without step 4 ("tx2: am2.get(subkey1)"). I'm pretty sure that's a bug, and I fixed it locally by using the FORCE_WRITE_LOCK flag in AtomicHashMapProxy.getDeltaMapForWrite.
> 
> However, when the Tree API moves a node, it first checks for the existence of the destination node, which means NodeMoveAPIPessimisticTest is still failing. I'm not sure if I should fix that by forcing a write lock for all AtomicHashMap reads, for all TreeCache reads, or only in TreeCache.move().

I think only in TreeCache.move().
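
For what it's worth, forcing the lock on that read is a one-liner on the AdvancedCache. Something along these lines inside move(), when it checks the destination, would close the window (a sketch of the idea, not the actual patch; destinationKey is a placeholder):

// Take the write lock while checking whether the destination exists, so no
// other tx can create or modify it between the existence check and the move.
// Flag is org.infinispan.context.Flag.
Object destination = cache.getAdvancedCache()
                          .withFlags(Flag.FORCE_WRITE_LOCK)
                          .get(destinationKey);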


--
Manik Surtani
manik at jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid


