The problem is that I have 2 code paths:

1. Acquiring a lock.
1.1. Check CHM for lock.
1.2. If lock == null, create new lock, do a putIfAbsent on CHM.
1.3. Acquire the lock.
1.4. Check (in a loop) if the lock we have acquired is the same as the lock in the CHM; if not, release and retry.
2. Releasing the lock.
2.1. Get lock from CHM.
2.2. Release lock.
2.3. Remove lock from CHM.
The problem is that we may have several competing threads in code path 1. Imagine T1, waiting on 1.3, and T2, who owns the lock, releasing it in 2.2. With some unfortunate timing, I have seen:

* T1 acquires the lock (1.3), does its checks (1.4) and leaves code path 1.
* T2 removes the lock from the CHM (2.3).
* T3 comes into code path 1, sees the lock missing from the CHM, creates a new lock, acquires it, etc.
* T1 now tries to unlock the lock it thinks it owns, and finds a different lock instance in the CHM. All sorts of problems by this stage.

I have a working branch where I solved this by:

* Replacing the putIfAbsent in 1.2 with a compute() closure, which means the null check and store of the value are atomic with respect to any other modification on the same key. (With putIfAbsent, the null check didn't seem to be atomic?!)
* Replacing 2.1, 2.2 and 2.3 with a computeIfPresent() that releases the lock and removes the entry.

This seems to have solved things, because step 1.4 cannot proceed until any remove operation completes (both compete for the same entry space in the map), and code path 1 cannot get beyond 1.3 while the lock is still held by the releasing thread in step 2. But I'm still testing for further edge cases.
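A minimal sketch of that approach, assuming a Java 8-style ConcurrentHashMap (or ConcurrentHashMapV8); the class and method names here are illustrative, not the actual Infinispan code:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    class ComputeLockContainer {
        private final ConcurrentHashMap<Object, ReentrantLock> locks = new ConcurrentHashMap<>();

        // Code path 1: compute() makes the null check and store atomic per key.
        ReentrantLock acquire(Object key) throws InterruptedException {
            while (true) {
                ReentrantLock lock = locks.compute(key,
                        (k, existing) -> existing == null ? new ReentrantLock() : existing);
                lock.lockInterruptibly();                 // 1.3
                if (locks.get(key) == lock) {             // 1.4: same instance still mapped?
                    return lock;
                }
                lock.unlock();                            // outdated instance: retry
            }
        }

        // Code path 2: release and remove collapsed into one atomic computeIfPresent().
        // Assumes the calling thread owns the lock.
        void release(Object key) {
            locks.computeIfPresent(key, (k, lock) -> {
                lock.unlock();
                return null;                              // returning null removes the entry
            });
        }
    }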
On 16 Oct 2012, at 16:33, Sanne Grinovero <sanne@infinispan.org> wrote:

But still, since the process of creating, acquiring and adding the lock to the lock map needs to be atomic, and not just atomic but also safe with regard to competing threads (say, an old owner) releasing the lock and removing it from the map (also atomic), I think a concurrent map isn't good enough anymore.
are you sure that adding + creating + acquiring needs to be atomic?
I see several solutions using the standard CHM; assuming you
already checked for Lock existence:
- create a Lock, use a putIfAbsent to store it, then *try* to acquire it
- create a Lock, acquire it, putIfAbsent to store it, throw your
instance away if you failed to store it, and try again by starting over
from _get_ to lock on any newly created instance.
For removal of the Lock I assume you're only allowed to do that when
owning it? Which is even simpler.
- CHM.remove(), release the returned value;
Take care that threads waiting on that removed lock don't assume they
acquired it when they get this instance, but go through the acquire
routine again, or they might end up owning the wrong instance;
basically, after a successful acquire you'll have to check you didn't
get an outdated instance.
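A rough sketch of that second variant and the owner-only removal, with illustrative names:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    final class ChmLockSketch {
        static ReentrantLock acquire(ConcurrentHashMap<Object, ReentrantLock> locks, Object key)
                throws InterruptedException {
            while (true) {
                ReentrantLock lock = locks.get(key);
                if (lock == null) {
                    ReentrantLock fresh = new ReentrantLock();
                    ReentrantLock prior = locks.putIfAbsent(key, fresh);
                    lock = (prior != null) ? prior : fresh;  // lost the race: use the stored instance
                }
                lock.lockInterruptibly();
                if (locks.get(key) == lock) {
                    return lock;        // still the mapped instance: we own the right lock
                }
                lock.unlock();          // outdated instance: start over from the get
            }
        }

        // Removal, assuming only the current owner calls it:
        static void removeAndRelease(ConcurrentHashMap<Object, ReentrantLock> locks, Object key) {
            ReentrantLock lock = locks.remove(key);  // CHM.remove(), then release the returned value
            if (lock != null) {
                lock.unlock();
            }
        }
    }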
What I don't like about this is that you need to get/put on the same key
multiple times, hashing the key over and over, looking up the same
segment again, and potentially traversing segment locks for each of
these steps: the V8 solution from Jason looks far more efficient.
Sanne
On 16 October 2012 11:32, Manik Surtani <manik@jboss.org> wrote:
On 16 Oct 2012, at 11:03, Dan Berindei <dan.berindei@gmail.com> wrote:
Manik, how about adding a reference count to the lock entry? If there is a
waiter on the lock, the reference count will be > 0 and the owner won't
remove the key on unlock.
Then you have a race between reading the counter and removing the entry.
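For illustration, the shape of that race, with a hypothetical waiter counter (not a proposed implementation): the counter read and the remove are two separate steps, so a waiter can register in between.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical reference-counted entry, for illustration only.
    class CountedLock extends ReentrantLock {
        final AtomicInteger waiters = new AtomicInteger();
    }

    class RefCountedRelease {
        private final ConcurrentHashMap<Object, CountedLock> locks = new ConcurrentHashMap<>();

        void unlock(Object key) {
            CountedLock lock = locks.get(key);
            lock.unlock();
            if (lock.waiters.get() == 0) {   // read the counter...
                // ...a new waiter can register right here...
                locks.remove(key);           // ...and the removal strands it on an unmapped lock
            }
        }
    }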
On Tue, Oct 16, 2012 at 3:43 AM, Manik Surtani <manik@jboss.org> wrote:
Hmm, that actually might just do the trick. Thanks!
On 15 Oct 2012, at 17:46, Jason Greene <jason.greene@redhat.com> wrote:
I think what you are looking for is this:
http://gee.cs.oswego.edu/dl/jsr166/dist/jsr166edocs/jsr166e/ConcurrentHashMapV8.html#computeIfAbsent(K, jsr166e.ConcurrentHashMapV8.Fun)
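(The same API later became part of the JDK 8 ConcurrentHashMap.) A minimal usage sketch for lazily creating lock entries:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    class LazyLockMap {
        private final ConcurrentHashMap<Object, ReentrantLock> locks = new ConcurrentHashMap<>();

        ReentrantLock getOrCreate(Object key) {
            // Atomic per key: concurrent callers all see the same Lock instance,
            // and the factory function runs at most once per mapping.
            return locks.computeIfAbsent(key, k -> new ReentrantLock());
        }
    }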
On Oct 15, 2012, at 11:23 AM, Manik Surtani <manik@jboss.org> wrote:
Guys, after investigating https://issues.jboss.org/browse/ISPN-2381 and
https://github.com/infinispan/infinispan/pull/1382, I've discovered a pretty
nasty race condition in our per-entry lock containers (whether
OwnableReentrantLocks or JDK locks for non-transactional caches).
The problem is that we maintain a lock map, and any given request may
acquire a lock: if a lock doesn't exist for a given key, it creates the lock and
acquires it, and when done, releases the lock and removes it from the lock map.
There's lots of room for races to occur. The current impl uses a
ConcurrentMap, where concurrent operations on the map are used to make sure
locks are not overwritten. But still, since the process of creating,
acquiring and adding the lock to the lock map needs to be atomic, and not
just atomic but also safe with regard to competing threads (say, an old
owner) releasing the lock and removing it from the map (also atomic), I
think a concurrent map isn't good enough anymore.
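To make the race window concrete, this is the shape of the non-atomic release path (illustrative code, not the actual impl); a waiter that acquires between the unlock and the remove ends up owning a lock that is no longer mapped:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    class RacyRelease {
        private final ConcurrentHashMap<Object, ReentrantLock> locks = new ConcurrentHashMap<>();

        void release(Object key) {
            ReentrantLock lock = locks.get(key);  // get lock from the map
            lock.unlock();                        // a waiter may acquire it right here...
            locks.remove(key);                    // ...and we remove the lock it now owns
        }
    }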
The sledgehammer approach is to synchronise on this map for these two
operations, but that causes all sorts of suckage. Ideally, I'd just hold on
to the segment lock for the duration of these operations, but these aren't
exposed. Extending CHM to expose methods like acquireLockAndGet() and
unlockAndRemove() would work perfectly, but again a lot of CHM internals are
private or package protected.
So my options are: completely re-implement a CHM-like structure, like
we've done for BCHM, or perhaps think of a new, specialised structure to
contain locks. In terms of contract, I just need a fast way to look up a
value for a given key, as well as efficient put and remove. It should be
thread-safe (naturally), and allow for an atomic operation (like "get, do
work, put").
Any interesting new data structures on people's minds?
Cheers
Manik