On 16 May 2013, at 15:04, Dan Berindei <dan.berindei@gmail.com> wrote:

> Hi guys
>
> I'm working on an intermittent failure in NodeMoveAPIPessimisticTest and I think I've come across some underspecified behaviour in AtomicHashMap.
>
> Say we have two transactions, tx1 and tx2, and they both work with the same atomic map in a pessimistic cache:
>
> 1. tx1: am1 = AtomicMapLookup.get(cache, key)
> 2. tx2: am2 = AtomicMapLookup.get(cache, key)
> 3. tx1: am1.put(subkey1, value1) // locks the map
> 4. tx2: am2.get(subkey1) // returns null
> 5. tx1: commit // the map is now {subkey1=value1}
> 6. tx2: am2.put(subkey2, value2) // locks the map
> 7. tx2: commit // the map is now {subkey2=value2}
>
> It's not clear to me from the AtomicMap/AtomicHashMap javadoc if this is ok or if it's a bug...

If optimistic, step 7 should fail with a write skew check. If pessimistic, step 2 would *usually* block while another transaction is updating the map; but since neither tx1 nor tx2 has started updating the map yet, neither holds a write lock on it, so the lookup succeeds. I'm not sure this is any different from not using an atomic map:

1. tx1: v = cache.get(k) // reads into the tx context
2. tx2: v = cache.get(k)
3. tx1: cache.put(k, v + 1)
4. tx1: commit
5. tx2: cache.put(k, v + 1)
6. tx2: commit

Here as well, with optimistic locking step 6 will fail with a WSC, but with pessimistic locking both commits succeed, since tx2 only requests its write lock after tx1 has committed and released its own.
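To make your first scenario concrete, here's roughly how it could be driven from a single thread. This is only a sketch, not a test from the suite: it assumes a transactional cache with pessimistic locking, uses AtomicMapLookup.getAtomicMap, and interleaves the two transactions with plain JTA suspend/resume; the class and method names are mine.

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMap;
import org.infinispan.atomic.AtomicMapLookup;

public class AtomicMapInterleavingSketch {
   static void run(Cache<String, Object> cache) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

      tm.begin();                                                              // start tx1
      AtomicMap<String, String> am1 = AtomicMapLookup.getAtomicMap(cache, "key"); // step 1
      Transaction tx1 = tm.suspend();

      tm.begin();                                                              // start tx2
      AtomicMap<String, String> am2 = AtomicMapLookup.getAtomicMap(cache, "key"); // step 2
      Transaction tx2 = tm.suspend();

      tm.resume(tx1);
      am1.put("subkey1", "value1"); // step 3: tx1 locks the map
      tx1 = tm.suspend();

      tm.resume(tx2);
      am2.get("subkey1");           // step 4: returns null, tx1 hasn't committed yet
      tx2 = tm.suspend();

      tm.resume(tx1);
      tm.commit();                  // step 5: the map is now {subkey1=value1}

      tm.resume(tx2);
      am2.put("subkey2", "value2"); // step 6: tx2 locks the map, tx1's lock is gone
      tm.commit();                  // step 7: the map is now {subkey2=value2}, tx1's entry lost
   }
}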
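And for reference, the two locking modes being compared, in the programmatic configuration of the current (5.x) API. Again only a sketch; note a WSC needs REPEATABLE_READ plus versioning enabled, otherwise step 7 would silently succeed even in optimistic mode.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

public class LockingModesSketch {
   // Optimistic: step 7 above (and step 6 in the plain-cache example) should fail with a WSC.
   static Configuration optimistic() {
      return new ConfigurationBuilder()
         .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
                       .lockingMode(LockingMode.OPTIMISTIC)
         .versioning().enable().scheme(VersioningScheme.SIMPLE)
         .locking().isolationLevel(IsolationLevel.REPEATABLE_READ).writeSkewCheck(true)
         .build();
   }

   // Pessimistic: both interleavings commit, and tx1's update is silently overwritten.
   static Configuration pessimistic() {
      return new ConfigurationBuilder()
         .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
                       .lockingMode(LockingMode.PESSIMISTIC)
         .locking().isolationLevel(IsolationLevel.REPEATABLE_READ)
         .build();
   }
}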
> Note that today the map is overwritten by tx2 even without step 4 ("tx2: am2.get(subkey1)"). I'm pretty sure that's a bug and I fixed it locally by using FORCE_WRITE_LOCK in AtomicHashMapProxy.getDeltaMapForWrite.
>
> However, when the Tree API moves a node it first checks for the existence of the destination node, which means NodeMoveAPIPessimisticTest is still failing. I'm not sure if I should fix that by forcing a write lock for all AtomicHashMap reads, for all TreeCache reads, or only in TreeCache.move().

I think only in TreeCache.move().
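For reference, the flag mechanics: reading through the advanced cache with FORCE_WRITE_LOCK makes a pessimistic transaction take the write lock up front, so in your first scenario tx2 would block at its first access to the map until tx1 commits, instead of silently overwriting it. This is only a sketch of the flag usage; the actual change lives inside AtomicHashMapProxy.getDeltaMapForWrite as you describe.

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class ForceWriteLockSketch {
   // In pessimistic mode this get() acquires the write lock on "key",
   // blocking until any concurrent lock owner commits or rolls back.
   static Object readForWrite(Cache<String, Object> cache, String key) {
      AdvancedCache<String, Object> locking =
            cache.getAdvancedCache().withFlags(Flag.FORCE_WRITE_LOCK);
      return locking.get(key);
   }
}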
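Something like this is the shape I'd expect for the narrow fix: do the destination-existence check under a write lock, so two concurrent moves can't both observe "destination absent" and then clobber each other. I'm assuming the Tree API's trailing Flag... overloads here; the helper itself is hypothetical, not the actual patch.

import org.infinispan.context.Flag;
import org.infinispan.tree.Fqn;
import org.infinispan.tree.TreeCache;

public class TreeMoveSketch {
   static void movePessimistic(TreeCache<String, String> tree, Fqn node, Fqn newParent) {
      Fqn destination = Fqn.fromRelativeElements(newParent, node.getLastElement());
      // FORCE_WRITE_LOCK turns the existence check into a lock acquisition,
      // so a concurrent move targeting the same destination has to wait.
      if (tree.exists(destination, Flag.FORCE_WRITE_LOCK))
         throw new IllegalStateException("Destination already exists: " + destination);
      tree.move(node, newParent, Flag.FORCE_WRITE_LOCK);
   }
}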