On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa <rvansa(a)redhat.com> wrote:
> Hi all,
>
> I have realized that fine-grained maps don't work reliably with the
> write-skew check. This happens because WSC tries to load the entry
> from the DC/cache-store, compare versions and store it, assuming that
> this happens atomically because the entry is locked. However, as
> fine-grained maps can lock two different keys and modify the same
> entry, there is a risk that the check & store won't be atomic. Right
> now, the update itself won't be lost, because fine-grained maps use
> DeltaAwareCacheEntries, which apply the updates under the DC's lock
> (there can be some problems when passivation is used, though; [1]
> hopefully deals with them).
I had a hard time understanding what the problem was, but then I
realized it was because I was assuming we keep a separate version for
each subkey. After I realized it's not implemented like that, I also
found a couple of bugs I filed for it a long time ago:
https://issues.jboss.org/browse/ISPN-3123
https://issues.jboss.org/browse/ISPN-5584
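
To make the race concrete, here is a minimal sketch (cache setup
omitted; it assumes an optimistic cache with write-skew check enabled,
and the key/subkey names are made up for illustration):

import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;
import org.infinispan.atomic.FineGrainedAtomicMap;

public class FgamRaceSketch {
   // Run tx(cache, "subkey1") and tx(cache, "subkey2") concurrently:
   // each transaction locks only its own composite key, but at commit
   // time WSC loads, version-checks and stores the single underlying
   // entry for "mainKey", so the two check & store sequences can
   // interleave.
   static void tx(Cache<String, Object> cache, String subKey) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      FineGrainedAtomicMap<String, String> map =
            AtomicMapLookup.getFineGrainedAtomicMap(cache, "mainKey");
      map.put(subKey, "value"); // locks composite key (mainKey, subKey) only
      tm.commit();              // WSC runs on the whole "mainKey" entry
   }
}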
> I figured this out while trying to update the DeltaAware handling to
> support more than just atomic maps - yes, there are special branches
> for atomic maps in the code, which is quite ugly design-wise, IMO. My
> intention is to do something similar to WSC when replaying the
> deltas, but this, obviously, needs some atomicity.
Yes, with all the bugs in the AtomicMaps, implementing a DeltaAware
that is not an AtomicMap is even harder...
But I don't see any reason to do that anyway; I'd rather work on
making the functional stuff work with transactions.
> IIUC, fine-grained locking was introduced back in 5.1 because of
> deadlocks in the lock-acquisition algorithm; the purpose was not to
> improve concurrency. Luckily, the days of deadlocks are far behind us
> - now we can get the cluster stuck in more complex ways :) Therefore,
> with a correctness-first approach, in optimistic caches I would lock
> just the main key (not the composite keys). The prepare-commit should
> be quite fast anyway, and I don't see how this could affect users
> (counter-examples are welcome) beyond slightly reduced concurrency.
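
(For reference, the optimistic setup being discussed looks roughly
like this with the 8.x configuration API - a sketch, the exact
builder methods may differ between versions:)

import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

public class OptimisticWscConfig {
   static ConfigurationBuilder builder() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.transaction()
               .transactionMode(TransactionMode.TRANSACTIONAL)
               .lockingMode(LockingMode.OPTIMISTIC)
             .locking()
               .isolationLevel(IsolationLevel.REPEATABLE_READ)
               .writeSkewCheck(true)             // WSC at prepare time
             .versioning()
               .enable()
               .scheme(VersioningScheme.SIMPLE); // versions for the WSC compare
      return builder;
   }
}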
I don't remember what the initial use case for FineGrainedAtomicMaps
was, but I agree with Pedro that it's a bit long in the tooth now. The
only advantage of FGAM over grouping is that getGroup(key) needs to
iterate over the entire data container/store, so grouping can be a lot
slower when you have lots of small groups. But if you need to work
with all the subkeys in every transaction, you should probably be
using a regular AtomicMap instead.
IMO we should deprecate FineGrainedAtomicMap and implement it as a
regular AtomicMap.
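
If we went that way, the grouping alternative could look roughly like
this (a sketch; GroupedKey is a made-up key class, but @Group and
getGroup() are the real Grouping API):

import org.infinispan.distribution.group.Group;

public class GroupedKey {
   final String mapKey; // plays the role of the FGAM main key
   final String subKey;

   GroupedKey(String mapKey, String subKey) {
      this.mapKey = mapKey;
      this.subKey = subKey;
   }

   @Group
   public String getGroup() {
      return mapKey; // co-locates all subkeys of the same "map"
   }

   // equals() and hashCode() over (mapKey, subKey) omitted for brevity.

   // Usage, with groups enabled via clustering().hash().groups().enabled():
   //   cache.put(new GroupedKey("user1", "name"), "Radim");
   //   Map<GroupedKey, Object> all = cache.getAdvancedCache().getGroup("user1");
}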
> In pessimistic caches we have to be more cautious, since users
> manipulate the locks directly and reason about them more. Therefore,
> we need to lock the composite keys during the transaction runtime,
> but in addition we should lock the main key for the duration of the
> commit itself, if necessary - pessimistic caches don't sport WSC, but
> I was looking for some atomicity options for deltas.
-1 to implicitly locking the main key. If a DeltaAware implementation
wants to support partial locking, then it should take care of the
atomicity of the merge operation itself. If it doesn't want to support
partial locking, then it shouldn't use AdvancedCache.applyDelta().
It's a bit unfortunate that applyDelta() looks like a method that
anyone can call, but it should only be called by the DeltaAware
implementation itself.
However, I agree that implementing DeltaAware partial locking
correctly in all possible configurations is nigh impossible. So it
would be much better if we also deprecated applyDelta() and started
ignoring the `locksToAcquire` parameter.
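
(For reference, the method in question - made-up key names, but
roughly the signature as it stands today:)

import org.infinispan.AdvancedCache;
import org.infinispan.atomic.Delta;

public class ApplyDeltaSketch {
   // Today the caller picks which keys get locked via the varargs
   // parameter; the proposal is to ignore it and always lock the
   // main key ("mainKey" here) instead.
   static void apply(AdvancedCache<String, Object> cache, Delta delta) {
      cache.applyDelta("mainKey", delta, "compositeKey1", "compositeKey2");
   }
}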
> An alternative would be to piggyback on the DC's locking scheme;
> however, this is quite unsuitable for the optimistic case, with an
> RPC between the WSC and the DC store. In addition, it doesn't fit
> into our async picture, and we would be sending complex compute
> functions into the DC instead of decoupled lock/unlock. I could also
> devise another layer of locking, but that's just madness.
-10 to piggybacking on DC locking, and -100 to a new locking layer.
I think you could lock the main key by executing a
LockControlCommand(CACHE_MODE_LOCAL) from
PessimisticLockingInterceptor.visitPrepareCommand, before passing the
PrepareCommand to the next interceptor. But please don't do it!
> I am adding Sanne to the recipients, as OGM is probably the most
> important consumer of atomic hash maps.
>
> WDYT?
>
> Radim
> [1]
> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea...
>
> --
> Radim Vansa <rvansa(a)redhat.com>
> JBoss Performance Team
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev