[infinispan-dev] Fine grained maps

Sanne Grinovero sanne at infinispan.org
Tue Sep 27 07:35:54 EDT 2016


Hibernate OGM uses the FGAMs and needs their "fine-grained"
semantics: they are used to store what end users may perceive as
independent entries, and having those entries share a single lock
would introduce the possibility of deadlock, depending on the end
user's business logic. We're an abstraction layer, and these
semantics are being relied on.
I believe a good analogy is the comparison of databases implementing
row-level locking vs. page-level locking; such a change in granularity
would make end users lose hair, so I won't change our usage from FGAMs
to AMs.
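
To illustrate the access pattern we rely on, here's a minimal sketch
(assuming a transactional embedded cache; the key and subkey names are
made up). Two transactions writing different subkeys must not contend
on a shared lock:

import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;
import org.infinispan.atomic.FineGrainedAtomicMap;

void updateColumn(Cache<String, ?> cache, String subkey, String value)
      throws Exception {
   TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
   tm.begin();
   FineGrainedAtomicMap<String, String> entity =
         AtomicMapLookup.getFineGrainedAtomicMap(cache, "Person:42");
   entity.put(subkey, value);   // locks only this subkey's composite key
   tm.commit();
}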

We could maybe move to grouping or to queries; this wasn't considered
back then, as neither grouping nor (unindexed) queries were available.
Both grouping and queries have their drawbacks though, so we can take
this into consideration on the OGM team, but it's going to take some
time.
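
For reference, the grouping alternative would look roughly like this
(a sketch, assuming grouping is enabled in the cache configuration;
the names are illustrative):

import java.util.Map;
import org.infinispan.AdvancedCache;
import org.infinispan.distribution.group.Group;

// Each "column" becomes its own cache entry with its own lock; the
// @Group method co-locates all columns of one logical entity.
class ColumnKey {
   final String entityId;
   final String column;

   ColumnKey(String entityId, String column) {
      this.entityId = entityId;
      this.column = column;
   }

   @Group
   public String getEntityId() {
      return entityId;
   }
   // equals() and hashCode() over both fields omitted for brevity
}

// Reading the whole logical map back requires getGroup(), which has
// to iterate the data container - the drawback Dan mentions below.
Map<ColumnKey, String> readAll(AdvancedCache<ColumnKey, String> cache,
                               String entityId) {
   return cache.getGroup(entityId);
}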

So feel free to deprecate FGAMs, but please don't remove them yet.

For the record, OGM never uses write-skew checks on FGAMs, and
Hibernate OGM over Hot Rod doesn't use FGAMs either (we use them only
in embedded mode, when transactions are available).

Thanks,
Sanne


On 27 September 2016 at 08:28, Radim Vansa <rvansa at redhat.com> wrote:
> To Pedro: I had figured out theoretically that it shouldn't work, so
> I haven't actually crashed into ISPN-2729.
>
> On 09/26/2016 10:51 PM, Dan Berindei wrote:
>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa <rvansa at redhat.com> wrote:
>>> Hi all,
>>>
>>> I have realized that fine-grained maps don't work reliably with the
>>> write-skew check. This happens because the WSC tries to load the entry
>>> from the DC/cache-store, compare versions and store it, assuming that
>>> this happens atomically as the entry is locked. However, as fine-grained
>>> maps can lock two different keys and modify the same entry, there is a
>>> risk that the check & store won't be atomic. Right now, the update
>>> itself won't be lost, because fine-grained maps use
>>> DeltaAwareCacheEntries, which apply the updates under the DC's lock
>>> (there can be some problems when passivation is used, though; [1]
>>> hopefully deals with them).
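>>>
>>> To make the race concrete, a minimal sketch (assuming a transactional
>>> cache with write-skew check enabled; the key and subkey names are made
>>> up). Run this from two threads, once with subkey "a" and once with
>>> "b": each transaction locks only its own composite key of map "m", so
>>> the commits - where the WSC reads the main entry's version and then
>>> stores the new one - can interleave.
>>>
>>> import java.util.concurrent.CyclicBarrier;
>>> import javax.transaction.TransactionManager;
>>> import org.infinispan.Cache;
>>> import org.infinispan.atomic.AtomicMapLookup;
>>> import org.infinispan.atomic.FineGrainedAtomicMap;
>>>
>>> void writeSubkey(Cache<String, ?> cache, String subkey,
>>>                  CyclicBarrier barrier) throws Exception {
>>>    TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>>>    tm.begin();
>>>    FineGrainedAtomicMap<String, String> map =
>>>          AtomicMapLookup.getFineGrainedAtomicMap(cache, "m");
>>>    map.put(subkey, "v");     // locks only the composite key for `subkey`
>>>    barrier.await();          // line both transactions up before commit
>>>    tm.commit();              // WSC: read main entry version, then store
>>> }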
>>>
>> I had a hard time understanding what the problem is, but then I
>> realized it's because I was assuming we keep a separate version for
>> each subkey. After I realized it's not implemented like that, I also
>> found a couple of bugs I filed for it a long time ago:
>>
>> https://issues.jboss.org/browse/ISPN-3123
>> https://issues.jboss.org/browse/ISPN-5584
>>
>>> I figured this out while trying to update the DeltaAware handling to
>>> support more than just atomic maps - yes, there are special branches for
>>> atomic maps in the code, which is quite ugly design-wise, IMO. My
>>> intention is to do something similar to the WSC when replaying the
>>> deltas, but this, obviously, needs some atomicity.
>>>
>> Yes, and for all the bugs in the AtomicMaps, it's even harder to
>> implement a DeltaAware that is not an AtomicMap...
>>
>> But I don't see any reason to do that anyway, I'd rather work on
>> making the functional stuff work with transactions.
>
> Yes, I would rather focus on the functional stuff too, but the Delta*
> stuff gets in my way all the time, so I was trying to remove it.
> However, though we could deprecate fine-grained maps (+1!), we have to
> keep them working, as OGM uses them. I am awaiting some details from
> Sanne that could suggest an alternative solution, but he's on PTO now.
>
>>
>>> IIUC, fine-grained locking was introduced back in 5.1 because of
>>> deadlocks in the lock-acquisition algorithm; the purpose was not to
>>> improve concurrency. Luckily, the days of deadlocks are far behind us;
>>> now we can get the cluster stuck in more complex ways :) Therefore,
>>> with a correctness-first approach, in optimistic caches I would lock
>>> just the main key (not the composite keys). The prepare-commit should
>>> be quite fast anyway, and I don't see how this could affect users
>>> beyond slightly reduced concurrency (counter-examples are welcome).
>>>
>> I don't remember what the initial use case for FineGrainedAtomicMaps
>> was, but I agree with Pedro that it's a bit long in the tooth now. The
>> only advantage of FGAM over grouping is that getGroup(key) needs to
>> iterate over the entire data container/store, so grouping can be a lot
>> slower when you have lots of small groups. But if you need to work
>> with all the subkeys in every transaction, you should probably be
>> using a regular AtomicMap instead.
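>>
>> (For comparison, a regular AtomicMap keeps all subkeys in one entry
>> under one lock - a quick sketch, with made-up names:)
>>
>> import org.infinispan.atomic.AtomicMap;
>> import org.infinispan.atomic.AtomicMapLookup;
>>
>> // One entry, one lock: a transaction touching any subkey effectively
>> // works with the whole map anyway.
>> AtomicMap<String, String> person =
>>       AtomicMapLookup.getAtomicMap(cache, "Person:42");
>> person.put("name", "Sanne");
>> person.put("age", "38");   // contends on the same lock as "name"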
>
> Iterating through the whole container seems like a very limiting
> factor to me, but I would keep AtomicMaps and let them be implemented
> through deltas/functional commands (preferred), using the standard
> locking mechanisms instead of the fine-grained insanity.
>
>>
>> IMO we should deprecate FineGrainedAtomicMap and implement it as a
>> regular AtomicMap.
>>
>>> In pessimistic caches we have to be more cautious, since users
>>> manipulate the locks directly and reason about them more. Therefore,
>>> we need to lock the composite keys during the transaction's runtime,
>>> but in addition we should lock the main key for the duration of the
>>> commit if necessary - pessimistic caches don't sport WSC, but I was
>>> looking for some atomicity options for deltas.
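>>>
>>> (This is the kind of explicit lock manipulation I mean - a sketch,
>>> assuming a pessimistic transactional cache, with `tm` and `cache`
>>> given and `compositeKey` a made-up name:)
>>>
>>> tm.begin();
>>> // users acquire the lock on a composite subkey explicitly and reason
>>> // about exactly which locks they hold for the rest of the transaction
>>> cache.getAdvancedCache().lock(compositeKey);
>>> // ... update the corresponding subkey ...
>>> tm.commit();   // locks are released at commit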
>>>
>> -1 to implicitly locking the main key. If a DeltaAware implementation
>> wants to support partial locking, then it should take care of the
>> atomicity of the merge operation itself. If it doesn't want to support
>> partial locking, then it shouldn't use AdvancedCache.applyDelta().
>> It's a bit unfortunate that applyDelta() looks like a method that
>> anyone can call, but it should only be called by the DeltaAware
>> implementation itself.
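>>
>> (To illustrate the intended call pattern - a sketch with made-up
>> names; the point is that the container itself, not user code, invokes
>> applyDelta and names only the composite keys it touched:)
>>
>> import org.infinispan.AdvancedCache;
>> import org.infinispan.atomic.Delta;
>> import org.infinispan.atomic.DeltaAware;
>>
>> class FineGrainedFlush {
>>    // locksToAcquire lists just the touched composite keys, which is
>>    // exactly why the main entry's update isn't guarded by one lock
>>    void flush(AdvancedCache<Object, Object> cache, Object mainKey,
>>               DeltaAware container, Object... touchedCompositeKeys) {
>>       Delta delta = container.delta();   // collect the pending changes
>>       cache.applyDelta(mainKey, delta, touchedCompositeKeys);
>>    }
>> }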
>
> As I mentioned in my last mail, I found that it's not that easy, so I
> am not implementing that. But it's not about taking care of the
> atomicity of the merge - applying the merge can be synchronized, but
> you have to do that with the entry stored in the DC, at the moment the
> entry is about to be stored in the DC - and that is the only moment
> you can squeeze the WSC in, because the DeltaAware can't know anything
> about WSCs. That's the DC-locking piggyback you gave -10 to.
>
>>
>> However, I agree that implementing DeltaAware partial locking
>> correctly in all possible configurations is nigh impossible. So it
>> would be much better if we also deprecated applyDelta() and started
>> ignoring the `locksToAcquire` parameter.
>>
>>> An alternative would be to piggyback on the DC's locking scheme;
>>> however, this is quite unsuitable for the optimistic case, with an RPC
>>> between the WSC and the DC store. In addition, it doesn't fit into our
>>> async picture, and we would send complex compute functions into the
>>> DC instead of decoupled lock/unlock. I could also devise another layer
>>> of locking, but that's just madness.
>>>
>> -10 to piggyback on DC locking, and -100 to a new locking layer.
>>
>> I think you could lock the main key by executing a
>> LockControlCommand(CACHE_MODE_LOCAL) from
>> PessimisticLockingInterceptor.visitPrepareCommand, before passing the
>> PrepareCommand to the next interceptor. But please don't do it!
>
> Okay, I'll just wait until someone tells me why the heck anyone needs
> fine-grained maps, discuss how to avoid them, and then deprecate them :)
>
> Radim
>
>>
>>> I am adding Sanne to the recipients, as OGM is probably the most
>>> important consumer of atomic hash maps.
>>>
>>> WDYT?
>>>
>>> Radim
>>>
>>> [1]
>>> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e
>>>
>>> --
>>> Radim Vansa <rvansa at redhat.com>
>>> JBoss Performance Team
>>>
>
>
> --
> Radim Vansa <rvansa at redhat.com>
> JBoss Performance Team
>

