[infinispan-dev] Fine grained maps
Radim Vansa
rvansa at redhat.com
Mon Sep 26 09:00:44 EDT 2016
Using infinispan-dev as a debugging duck...
The pessimistic case is somewhat precarious. Since during the 1PC commit
we cannot set the order by synchronizing on primary owner, we should
lock on all owners. However, this opens the possibility to lock locally
and do an RPC to lock remotely (since we lock in LockingInterceptor and
DistributionInterceptor is below that), which leads to the well-known
deadlocks. So we could move the locking into new interceptor below DI;
however, the idea is that the WSC load should happen in
EntryWrappingInterceptor/CacheLoaderInterceptor, as this is the place to
load stuff into context, and we need to lock it before these, which are
above DI :-/
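The lock-locally-then-RPC deadlock above can be reproduced with a toy
two-node simulation using plain JDK locks. Everything here is a
hypothetical illustration (made-up names, latches standing in for the
RPC timing), not Infinispan code: each "transaction" locks its local
owner first and then tries the remote owner, so two transactions
starting on opposite nodes each hold the lock the other needs.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class LocalThenRemoteDeadlock {
    private final ReentrantLock nodeA = new ReentrantLock();
    private final ReentrantLock nodeB = new ReentrantLock();

    /** Returns true iff the two transactions blocked each other. */
    public boolean simulate() {
        final CountDownLatch bothHoldLocal = new CountDownLatch(2);
        final CountDownLatch bothTried = new CountDownLatch(2);
        final boolean[] gotRemote = new boolean[2];

        Thread tx1 = txThread(nodeA, nodeB, bothHoldLocal, bothTried, gotRemote, 0);
        Thread tx2 = txThread(nodeB, nodeA, bothHoldLocal, bothTried, gotRemote, 1);
        tx1.start(); tx2.start();
        joinQuietly(tx1); joinQuietly(tx2);
        return !gotRemote[0] && !gotRemote[1];
    }

    private static Thread txThread(ReentrantLock local, ReentrantLock remote,
                                   CountDownLatch bothHoldLocal, CountDownLatch bothTried,
                                   boolean[] results, int idx) {
        return new Thread(() -> {
            local.lock();                       // lock locally first...
            bothHoldLocal.countDown();
            awaitQuietly(bothHoldLocal);        // both txs now hold their local lock
            try {
                boolean got = remote.tryLock(); // ...then "RPC" for the remote lock
                results[idx] = got;
                bothTried.countDown();
                awaitQuietly(bothTried);        // hold everything until both have tried
                if (got) remote.unlock();
            } finally {
                local.unlock();
            }
        });
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { throw new IllegalStateException(e); }
    }

    private static void joinQuietly(Thread t) {
        try { t.join(); } catch (InterruptedException e) { throw new IllegalStateException(e); }
    }
}
```

(The real code blocks on the remote lock instead of using tryLock, which
is exactly why the deadlock is fatal rather than a failed acquisition.)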
So the only way I could think of is to move the replication of
PrepareCommand in pessimistic caches above
PessimisticLockingInterceptor. And that's a rather big change for my taste.
And in order to prevent deadlocks due to different ordering of locked
keys, we have to order the keys as in optimistic caches. However, if the
user explicitly locks the keys that atomic maps use, he could lock them
in a different order and that would lead to deadlocks!
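For illustration, acquiring all locks in one global order is the usual
way to rule such deadlocks out (and is what the key ordering above
amounts to). A minimal JDK sketch, with hypothetical names, not the
actual Infinispan lock manager:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    private final Map<String, ReentrantLock> locks = new HashMap<>();

    private ReentrantLock lockFor(String key) {
        synchronized (locks) {
            return locks.computeIfAbsent(key, k -> new ReentrantLock());
        }
    }

    /**
     * Acquire locks in a globally consistent (sorted) order, so two
     * transactions locking overlapping key sets can never hold locks
     * in opposite orders and deadlock.
     */
    public List<ReentrantLock> lockAll(Collection<String> keys) {
        List<String> ordered = new ArrayList<>(keys);
        Collections.sort(ordered);              // the global order
        List<ReentrantLock> held = new ArrayList<>();
        for (String k : ordered) {
            ReentrantLock lock = lockFor(k);
            lock.lock();
            held.add(lock);
        }
        return held;
    }

    public void unlockAll(List<ReentrantLock> held) {
        for (int i = held.size() - 1; i >= 0; i--) {
            held.get(i).unlock();               // release in reverse order
        }
    }
}
```

The catch described above is precisely that this only works if *every*
lock acquisition goes through the ordered path; a user locking the same
keys explicitly in his own order bypasses the guarantee.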
Phew. I almost lost my appetite for such changes.
Radim
PS: non-tx caches aren't that complex, the situation there is quite
similar to optimistic caches.
PPS: (Repl|Dist)WriteSkewAtomicMapAPITests have testConcurrentTx
disabled, instead of requiring the WSC to be thrown :-/
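For reference, the write-skew check discussed in the quoted message
below boils down to a load-compare-store of entry versions that must be
atomic per entry. A minimal single-node sketch with hypothetical names
(plain Java, nothing from the Infinispan codebase); the race with fine
grained maps is that two composite keys can be locked independently yet
update the same underlying entry, so per-entry atomicity of this
check-then-store is lost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteSkewSketch {
    static final class Versioned {
        final Object value;
        final long version;
        Versioned(Object value, long version) { this.value = value; this.version = version; }
    }

    private final Map<String, Versioned> dataContainer = new ConcurrentHashMap<>();

    /** Version of the entry as currently stored (0 if absent). */
    public long readVersion(String key) {
        Versioned v = dataContainer.get(key);
        return v == null ? 0 : v.version;
    }

    /**
     * Commit succeeds only if nobody changed the entry since the
     * transaction read it. The whole check-then-store is atomic here
     * (one monitor); with fine grained maps, two differently-locked
     * composite keys could interleave between the check and the store.
     */
    public synchronized boolean commit(String key, long readVersion, Object newValue) {
        long currentVersion = readVersion(key);
        if (currentVersion != readVersion) {
            return false;                       // write skew detected -> rollback
        }
        dataContainer.put(key, new Versioned(newValue, currentVersion + 1));
        return true;
    }
}
```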
On 09/26/2016 09:36 AM, Radim Vansa wrote:
> Hi all,
>
> I have realized that fine grained maps don't work reliably with
> write-skew check. This happens because WSC tries to load the entry from
> DC/cache-store, compare versions and store it, assuming that this
> happens atomically as the entry is locked. However, as fine grained maps
> can lock two different keys and modify the same entry, there is a risk
> that the check & store won't be atomic. Right now, the update itself
> won't be lost, because fine grained maps use DeltaAwareCacheEntries
> which apply the updates under the DC's lock (there can be some problems
> when passivation is used, though; [1] hopefully deals with them).
>
> I have figured this out when trying to update the DeltaAware handling to
> support more than just atomic maps - yes, there are special branches for
> atomic maps in the code, which is quite ugly design-wise, IMO. My
> intention is to do similar things like WSC for replaying the deltas, but
> this, obviously, needs some atomicity.
>
> IIUC, fine-grained locking was introduced back in 5.1 because of
> deadlocks in the lock-acquisition algorithm; the purpose was not to
> improve concurrency. Luckily, the days of deadlocks are far back, now we
> can get the cluster stuck in more complex ways :) Therefore, with a
> correctness-first approach, in optimistic caches I would lock just the
> main key (not the composite keys). The prepare-commit should be quite
> fast anyway, and I don't see how this could affect users other than by
> slightly reduced concurrency (counter-examples are welcome).
>
> In pessimistic caches we have to be more cautious, since users
> manipulate the locks directly and reason about them more. Therefore, we
> need to lock the composite keys during transaction runtime, but in
> addition to that, during the commit itself we should lock the main key
> for the duration of the commit if necessary - pessimistic caches don't
> sport WSC, but I was looking for some atomicity options for deltas.
>
> An alternative would be to piggyback on DC's locking scheme, however,
> this is quite unsuitable for the optimistic case with an RPC between WSC
> and DC store. In addition to that, it doesn't fit into our async picture
> and we would send complex compute functions into the DC, instead of
> decoupled lock/unlock. I could also devise another layer of locking, but
> that's just madness.
>
> I am adding Sanne to recipients as OGM is probably the most important
> consumer of atomic hash maps.
>
> WDYT?
>
> Radim
>
> [1]
> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e
>
--
Radim Vansa <rvansa at redhat.com>
JBoss Performance Team