[infinispan-dev] Fine grained maps

Radim Vansa rvansa at redhat.com
Thu Sep 29 04:29:09 EDT 2016


On 09/28/2016 09:09 AM, Dan Berindei wrote:
> On Tue, Sep 27, 2016 at 6:15 PM, Radim Vansa <rvansa at redhat.com> wrote:
>> On 09/27/2016 04:47 PM, Dan Berindei wrote:
>>> On Tue, Sep 27, 2016 at 10:28 AM, Radim Vansa <rvansa at redhat.com> wrote:
>>>> To Pedro: I figured out theoretically that it shouldn't work, so I
>>>> haven't actually crashed into ISPN-2729.
>>>>
>>>> On 09/26/2016 10:51 PM, Dan Berindei wrote:
>>>>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa <rvansa at redhat.com> wrote:
> ...
>>>>>> In pessimistic caches we have to be more cautious, since users
>>>>>> manipulate the locks directly and reason about them more. Therefore, we
>>>>>> need to lock the composite keys during transaction runtime, but in
>>>>>> addition to that, during the commit itself we should lock the main key
>>>>>> for the duration of the commit if necessary - pessimistic caches don't
>>>>>> sport WSC, but I was looking for some atomicity options for deltas.
>>>>>>
>>>>> -1 to implicitly locking the main key. If a DeltaAware implementation
>>>>> wants to support partial locking, then it should take care of the
>>>>> atomicity of the merge operation itself. If it doesn't want to support
>>>>> partial locking, then it shouldn't use AdvancedCache.applyDelta().
>>>>> It's a bit unfortunate that applyDelta() looks like a method that
>>>>> anyone can call, but it should only be called by the DeltaAware
>>>>> implementation itself.
>>>> As I have mentioned in my last mail, I found that it's not that easy, so
>>>> I am not implementing that. But it's not about taking care of atomicity
>>>> of the merge - applying merge can be synchronized, but you have to do
>>>> that with the entry stored in DC when the entry is about to be stored in
>>>> DC - and this is the only moment you can squeeze the WSC in, because
>>>> the DeltaAware can't know anything about WSCs. That's the DC locking
>>>> piggyback that you gave a -10.
>>>>
>>> I think you're making it harder than it should be, because you're
>>> trying to come up with a generic solution that works with any possible
>>> data structure.
>> I am trying to work with two concepts: (Copyable)DeltaAware interface,
>> and ApplyDeltaCommand with its set of locks. Atomic maps are
>> higher-level concept and there should not be any Ugh!s [1].
>>
> We can't look at ApplyDeltaCommand in isolation, as the user of the
> data structure should never call applyDelta() directly. All DeltaAware
> structures should have a user-facing proxy that knows the internal
> structure and calls applyDelta(), following the rules that we set.
>
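To make that concrete, here is a rough sketch of such a user-facing proxy. The `AdvancedCacheStub` and `Delta` types below are made-up stand-ins for illustration, not the real Infinispan classes; the single method mirrors the shape of applyDelta(key, delta, locksToAcquire...) as discussed in this thread:

```java
// Sketch only: stand-ins for the relevant slice of the API, not Infinispan's
// real classes.
import java.util.ArrayList;
import java.util.List;

interface Delta { }

// Single-method stand-in mirroring applyDelta(key, delta, locksToAcquire...).
interface AdvancedCacheStub {
    void applyDelta(String key, Delta delta, Object... locksToAcquire);
}

// The proxy knows the internal structure of the value, so it can build the
// delta and name the composite keys to lock; users never call applyDelta()
// themselves.
class FineGrainedMapProxy {
    private final AdvancedCacheStub cache;
    private final String mainKey;

    FineGrainedMapProxy(AdvancedCacheStub cache, String mainKey) {
        this.cache = cache;
        this.mainKey = mainKey;
    }

    void put(String subkey, String value) {
        Delta delta = new Delta() { };                // would carry {subkey -> value}
        Object compositeKey = mainKey + "#" + subkey; // hypothetical encoding
        cache.applyDelta(mainKey, delta, compositeKey); // lock only this subkey
    }
}
```

The point is only where the composite keys come from: the proxy computes them from its knowledge of the structure, so the rules about locksToAcquire stay internal to the data structure implementation.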
> The way I see it, these are the 2 low-level concepts, both using the
> DeltaAware interface:
>
> 1) Coarse-grained DeltaAware (high-level equivalent: AtomicMap).
> These use PutKeyValueCommand (with Flag.DELTA_WRITE set in the
> constructor), although I do have a stale branch trying to make them
> use ApplyDeltaCommand(locksToAcquire=null).
> These are merged in the invocation context, using only the regular locks.
>
> 2) Fine-grained DeltaAware (high-level equivalent: FineGrainedAtomicMap).
> These use ApplyDeltaCommand(locksToAcquire=CompositeKey*) and
> DeltaAwareCacheEntry, and must be able to merge concurrent updates to
> separate subkeys without losing updates.
>
> Currently both types of data structures can implement either
> DeltaAware or CopyableDeltaAware, but non-copyable DeltaAware breaks
> transaction isolation (and listeners, and query), so we should require
> all DeltaAwares to be copyable. (As a bridge, we can use
> serialization+deserialization for those that are not.)
>
> In theory, applyDelta() can also apply a delta without first issuing a
> read for the key, e.g. a set of counters or the DeltaAwareList used by
> our old M/R framework. I don't think we have any tests for this
> scenario now, so I'd prohibit it explicitly.
>
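For reference, the two concepts above could be sketched like this. The interface shapes are rough approximations for illustration, not the exact Infinispan API; the fine-grained sketch shows how concurrent updates to distinct subkeys merge without losing either:

```java
// Approximated interface shapes, not the real Infinispan API.
import java.util.HashMap;
import java.util.Map;

interface Delta {
    DeltaAware merge(DeltaAware target); // apply pending changes to an entry
}

interface DeltaAware {
    Delta delta();  // collect the pending changes and reset them
    void commit();  // discard the change log after a successful write
}

// Required for transaction isolation: readers must get a copy, not the
// live object.
interface CopyableDeltaAware extends DeltaAware {
    CopyableDeltaAware copy();
}

// Fine-grained sketch: the delta only touches the subkeys the transaction
// wrote, so two deltas for distinct subkeys merge cleanly.
class FineGrainedMapSketch implements CopyableDeltaAware {
    final Map<String, String> state = new HashMap<>();
    private final Map<String, String> pending = new HashMap<>();

    void put(String subkey, String value) { pending.put(subkey, value); }

    public Delta delta() {
        Map<String, String> changes = new HashMap<>(pending);
        pending.clear();
        return target -> {
            FineGrainedMapSketch m = (FineGrainedMapSketch) target;
            m.state.putAll(changes); // merge only the touched subkeys
            return m;
        };
    }

    public void commit() { pending.clear(); }

    public CopyableDeltaAware copy() {
        FineGrainedMapSketch c = new FineGrainedMapSketch();
        c.state.putAll(state); // pending changes stay with the originator
        return c;
    }
}
```

A coarse-grained structure would look the same except that its delta replaces the whole state, which is why it can be merged in the invocation context under the regular lock.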
>> In any implementation of DeltaAwareCacheEntry.commit(), you'll have to
>> atomically load the (Im)mutableCacheEntry from DC/cache store and store
>> it into DC. Yes, you could do a load, copy (because the delta is
>> modifying), apply delta/run WSC, compare&set in DC, but that's a rather
>> annoying loop implemented through exception handling <- I haven't
>> proposed this and instead focused on changing the locking scheme.
>>
> I'm ok with doing the merge while holding the DC lock for fine-grained
> DeltaAware, as we do now. For coarse-grained DeltaAware, we can keep
> doing the merge in the context entry.
>
> I'm not convinced we need new locking for the write-skew check in either case.

"...doing the merge while holding DC lock..." means that we're already in 
the second phase of 2PC. Therefore, if the WSC fails, you can't roll back 
the transaction - unless you want to do an RPC while holding the DC lock.
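For the record, the load, copy, apply delta/run WSC, compare&set loop mentioned above would look roughly like this. `DataContainerSketch` is a made-up stand-in (the real data container is not a plain ConcurrentHashMap), but it shows why the retry/abort control flow is the annoying part:

```java
// Made-up stand-in for the data container, for illustration only.
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

class DataContainerSketch {
    private final ConcurrentHashMap<String, Object> dc = new ConcurrentHashMap<>();

    void write(String key, Object value) { dc.put(key, value); }

    /**
     * Applies a delta without holding a lock across the whole operation.
     * On a concurrent change the compare-and-set fails and we retry; a
     * failed write-skew check would have to abort by throwing out of
     * applyDelta, i.e. control flow through exception handling.
     */
    Object applyDeltaAtomically(String key, UnaryOperator<Object> applyDelta) {
        for (;;) {
            Object current = dc.get(key);               // load
            Object updated = applyDelta.apply(current); // copy + apply delta + WSC
            boolean stored = current == null
                    ? dc.putIfAbsent(key, updated) == null
                    : dc.replace(key, current, updated); // compare-and-set
            if (stored) return updated;                  // else: lost the race, retry
        }
    }
}
```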

Btw., I guess that you don't have any motivation to move persistence 
loading from ClusteredRepeatableReadEntry to persistence-related 
interceptors?

>> [1]
>> https://github.com/infinispan/jdg/blob/3aaa3d85fe9a90ee3c371b44ff5e5b36414c69fd/core/src/main/java/org/infinispan/container/entries/ReadCommittedEntry.java#L150
>>
>>>    But if a data structure is not suitable for
>>> fine-grained locking, it should just use regular locking instead
>>> (locksToAcquire = {mainKey}).
>>>
>>> E.g. any ordered structure is out of the question for fine-grained
>>> locking, but it should be possible to implement a fine-grained set/bag
>>> without any new locking in core.
>>>
>>> As you may have seen from ISPN-3123 and ISPN-5584, I think the problem
>>> with FGAM is that it's not granular enough: we shouldn't throw
>>> WriteSkewExceptions just because two transactions modify the same
>>> FGAM, we should only throw the WriteSkewException when both
>>> transaction modify the same subkey.
>> You're right that WSC is not fine-grained enough, and at this point you
>> can't solve that generally - how do you apply WSC on DeltaAware when you
>> know that it locks certain keys? And you would add the static call to
>> WSCHelper into DeltaAwareCacheEntry, class made for storing value &
>> interacting with DC?
>>
> I'm not sure how we should do it, but we'd almost certainly need a new
> kind of metadata that holds a map of subkeys to versions instead of a
> single version.
>
> I'd try to modify EntryWrappingInterceptor to add dummy
> ClusteredRepeatableReadEntries in the context for all the
> `locksToAcquire` composite keys during ApplyDeltaCommand. I think we
> can add all the subkey versions to the transaction's versionsSeenMap
> the moment we read the DeltaAware, and let the prepare perform the
> regular WSC for those fake entries.

It's not limited to ApplyDeltaCommand - that command should modify the 
metadata, adding the subkeys, but you also have to record the versions 
seen upon the first read (e.g. Get*). But OK, you can check the subkeys 
in the metadata. You also have to deal with non-existent versions of 
subkeys for existing keys later on, as you don't know about subkeys 
added concurrently.

>
> Of course, those fake entries couldn't load their version from the
> data container/persistence, so we'd also have to move the version
> loading to the EntryWrappingInterceptor, but I think you've already
> started working on that.

No; I've dropped my refactoring efforts for now. Everything I've done is 
in [1] - please review when time permits.

[1] https://github.com/infinispan/infinispan/pull/4564

Radim

>
> Cheers
> Dan
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


-- 
Radim Vansa <rvansa at redhat.com>
JBoss Performance Team


