[infinispan-dev] Dangers of getCacheEntry (ISPN-4424)

Galder Zamarreño galder at redhat.com
Wed Jun 25 03:33:55 EDT 2014


On 25 Jun 2014, at 08:45, Dan Berindei <dan.berindei at gmail.com> wrote:

> 
> 
> 
> On Tue, Jun 24, 2014 at 9:04 PM, William Burns <mudokonman at gmail.com> wrote:
> I must admit this seems a bit heavy-handed, to have to enable
> transactions when we are using them solely for the purpose of having
> implicit transactions.
> 
> Can we not instead just tweak the NonTransactionalLockingInterceptor
> to obey the FORCE_WRITE_LOCK flag, so that getCacheEntry would
> guarantee a consistent value in the face of a concurrent write?
> 
> 
> I suggested that as well, but ISPN-2956 means we can't guarantee the atomicity of non-tx conditional operations anyway.
> And HotRod doesn't work nicely with optimistic locking (yet), so we have to require pessimistic locking. I'm not sure about total order, though.

Indeed, I agree that transactions are a bit heavy-handed, but the fact that they are needed to deal with ISPN-2956 properly made me lean that way.

> Although thinking again about the locking, I don't think it alone is
> sufficient either.  The cache entry is serialized after the lock is
> released, which means there is still a window during which the value
> alone may be changed on an owner node.  Even with locking, we really
> need immutable CacheEntries returned from getCacheEntry for this to
> work properly.
> 
> Galder didn't mention this, but his proposal also copies the entry in GetKeyValueCommand.perform, so the serialization happens on an ImmutableCacheEntryView.

Yeah, ImmutableCacheEntryView effectively caches the cache entry’s values at construction time. Assuming there’s a lock in place, that should be safe.
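
Roughly, what that view amounts to is this (an illustrative sketch, not the actual ImmutableCacheEntryView source):

   import org.infinispan.container.entries.CacheEntry;
   import org.infinispan.metadata.Metadata;

   // Snapshot taken while the key lock is held; later mutation of the
   // underlying entry cannot affect what gets serialized.
   final class ImmutableEntrySnapshot {
      private final Object key;
      private final Object value;
      private final Metadata metadata;

      ImmutableEntrySnapshot(CacheEntry entry) {
         this.key = entry.getKey();
         this.value = entry.getValue();
         this.metadata = entry.getMetadata();
      }

      // accessors only; no setters, so the snapshot cannot change
      // after construction
      Object key()        { return key; }
      Object value()      { return value; }
      Metadata metadata() { return metadata; }
   }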

The thing to note here is that replaceIfUnmodified() needs both the version and the value to be correct, which is why we added the transaction and the forced write lock: to avoid one not matching up with the other. The other piece of the puzzle is the getVersioned() operation, which the client calls to find out an entry’s version and then use it. Since the only thing sent back to the server is the version (and not the value), I’ve stayed away from making getVersioned() call getCacheEntry() with FORCE_WRITE_LOCK and a transaction.
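
In code, the server-side pattern boils down to roughly the following (a simplified sketch; see [4] for the real code — extractVersion() is a stand-in helper, not an actual API, and `cache` is the backing Cache of the Hot Rod server):

   import javax.transaction.TransactionManager;
   import org.infinispan.container.entries.CacheEntry;
   import org.infinispan.context.Flag;

   void replaceIfUnmodified(Object key, Object newValue, long clientVersion)
         throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      try {
         // The read acquires the write lock, so the value/version pair
         // cannot change underneath us until the transaction commits.
         CacheEntry entry = cache.getAdvancedCache()
               .withFlags(Flag.FORCE_WRITE_LOCK)
               .getCacheEntry(key);
         if (entry != null && extractVersion(entry) == clientVersion)
            cache.replace(key, newValue);
         tm.commit();
      } catch (Exception e) {
         tm.rollback();
         throw e;
      }
   }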

Cheers,

> 
> 
>  - Will
> 
> On Tue, Jun 24, 2014 at 1:51 PM, Pedro Ruivo <pedro at infinispan.org> wrote:
> >
> >
> > On 06/24/2014 05:11 PM, Mircea Markus wrote:
> >>
> >> On Jun 24, 2014, at 16:50, Galder Zamarreño <galder at redhat.com> wrote:
> >>
> >>>
> >>> On 24 Jun 2014, at 16:51, Mircea Markus <mmarkus at redhat.com> wrote:
> >>>
> >>>>
> >>>> On Jun 24, 2014, at 15:27, Galder Zamarreño <galder at redhat.com> wrote:
> >>>>
> >>>>> To fix this, I’ve been working with Dan on some solutions and we’ve taken inspiration from the new requirements appearing as a result of ISPN-2956. To be able to deal with partial application of conditional operations properly, transactional caches are needed. So, the solution that can be seen in [4] takes that, creates a transaction around the replaceIfUnmodified, and forces the getCacheEntry() call to acquire the lock via the FORCE_WRITE_LOCK flag.
> >>>>
> >>>> so the entire underlying cache needs to be transactional then?
> >>>
> >>> Yeah, it needs to be transactional, but the code I’ve written also deals with the fact that the cache might not be transactional. I’ll probably add a WARN message when it’s not transactional. This goes hand in hand with the recommendations for ISPN-2956, whereby failover for conditional operations relying on return values requires transactional caches to properly deal with failover situations.
> >>>
> >>> To sum up, if using conditional operations or CRUD methods with Flag.FORCE_RETURN_VALUE, caches should be transactional. Moreover, to achieve the concurrency guarantees of counter tests such as the one for ISPN-4424, locking needs to be pessimistic too. If not using conditional operations, or when using CRUD methods without the flag, the cache can remain non-transactional.
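
As a side note, in configuration terms that recommendation amounts to roughly the following (a minimal programmatic sketch; cache mode and other settings omitted):

   import org.infinispan.configuration.cache.ConfigurationBuilder;
   import org.infinispan.transaction.LockingMode;
   import org.infinispan.transaction.TransactionMode;

   ConfigurationBuilder builder = new ConfigurationBuilder();
   builder.transaction()
         // transactional, as required for conditional ops relying on
         // return values
         .transactionMode(TransactionMode.TRANSACTIONAL)
         // pessimistic, for the counter-style concurrency guarantees
         .lockingMode(LockingMode.PESSIMISTIC);
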
> >>
> >> Thanks for the analysis. I think we should go with your patch for ISPN 7.0 and consider the proper solution for the future, as you suggest below.
> >> +1 for the warning, users should be made aware of the limitation.
> >
> > +1 for the patch.
> >
> > Another suggestion: in *InternalEntryFactory.update()*, you can
> 
> I don't know if that would cover all the places since we also set the
> value in the various WriteCommands themselves.
> 
> I think the WriteCommands only modify wrapped entries. 
> 
> It would be really bad if a put(k, v) in a read-committed tx cache could modify the cache entry before the tx is committed. Making this distinction clear would be another reason to make InternalCacheEntry immutable...
> 
> 
> > synchronize on the cache entry, and create a new method
> > *InternalCacheEntry copy(InternalCacheEntry)* that makes a copy while
> > also synchronizing on the existing cache entry. I think that way you
> > don't need the cache to be transactional, nor to force the lock on
> > reads. Also, I would suggest the copy() be invoked in your case (or
> > in the conditional commands' accesses to the DataContainer?).
> 
> Nitpicking: copy() only exists in the CopyableDeltaAware interface, and the commands themselves should not access the DataContainer.
> 
> This sounds like it might work, and in fact it's quite similar to Takayoshi's initial proposal (though he targeted it specifically to HotRod's replaceIfUnmodified). It's a bit scary because CacheEntry.setValue() is called in so many places, but I think everything but InternalEntryFactoryImpl.update() should be working on context entries, so they don't need synchronization. Still, ISPN-2956...
> 
> 
> Yeah, I was thinking of this approach as well; we would either have to
> make the copy or synchronize during the serialization.  I personally
> think making a copy would be better, but then the entire copy
> operation would have to be synchronized, as you mentioned.
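
For reference, Pedro's idea could look roughly like this (a hypothetical rendering; the method placement and names are illustrative, not the actual InternalEntryFactory API):

   // update() and copy() synchronize on the same entry, so a copy can
   // never observe a half-applied value/metadata update.
   void update(InternalCacheEntry entry, Object value, Metadata metadata) {
      synchronized (entry) {
         entry.setValue(value);
         entry.setMetadata(metadata);
      }
   }

   InternalCacheEntry copy(InternalCacheEntry entry) {
      synchronized (entry) {
         // create(...) standing in for the factory's entry creation
         return create(entry.getKey(), entry.getValue(), entry.getMetadata());
      }
   }
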
> 
> >
> >>>
> >>>>
> >>>>> This solves the issue explained above, but of course it has an impact on performance. The test now runs about 1.5 to 2 times slower.
> >>>>>
> >>>>> This is probably the best that we can do in the 7.0 time frame, but there’s several things that could improve this:
> >>>>>
> >>>>> 1. True immutable entries in the cache. If the entries in the cache were truly immutable, there would be no risk of sending a partially updated entry back to the client.
> >>>>>
> >>>>> 2. A cache replace method that does not compare objects based on equality (as the current replace() does), but instead takes a function. The function could check that the old entry’s version and the cached entry’s version match, and it would only be executed inside the inner container, with all the locks held properly. I already hinted at something similar in [5].
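
For illustration, option 2 could take a shape like this (a hypothetical signature; nothing like it exists yet, and the names are made up):

   // The check runs inside the data container with the key lock held,
   // so the version comparison and the write happen atomically.
   boolean replaceIf(K key, V newValue, Predicate<CacheEntry> check);

   // e.g. replace only if the entry still carries the client's version:
   cache.replaceIf(key, newValue,
         e -> clientVersion.equals(e.getMetadata().version()));
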
> >>
> >>
> >> Shall we raise a new JIRA for the permanent solution then?
> >
> > +1
> >
> > I think the immutable entries would be better, but it would destroy
> > ReadCommitted isolation. ReadCommitted is based on the mutability of
> > the entry (i.e. it will not invoke the data container if the entry
> > exists in the transaction context). When (and if) it is removed,
> > then we can make the entries immutable.
> 
> Agree, it would be a significant change. Perhaps context entries in read-committed caches could hold a reference to the actual EquivalentConcurrentHashMapV8.MapEntry if the entry was local, but that would add more complexity. So I would rather see it removed completely.
>  
> 
> If we only made getCacheEntry return immutable entries, it wouldn't
> affect the normal get operations.  I was thinking we could probably
> make the copy only in GetKeyValueCommand.
> 
> It would still affect every write operation. And we would need a way to distinguish GetKeyValueCommands issued by getCacheEntry from those generated by ClusteredGetCommands, since both kinds return InternalCacheEntries.
> 
> On the other hand, I think we have the same race with the write skew check in optimistic caches. We read the value and version for write skew checking without holding the key lock, so in theory it's possible to read an outdated value paired with the current version. So we need some kind of synchronization when reading the entry in optimistic mode anyway. 
> 
> But instead of a synchronized block, I propose making the value and metadata references volatile. It would require care to always read the metadata first and write it last, but reads would still be "free" on x86. HotRod would still need transactions, but it wouldn't need FORCE_WRITE_LOCK, so this might be a little faster than Galder's version.
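
A rough sketch of that ordering (the class and field names are illustrative, not the actual InternalCacheEntry; the safety comment is my reading of why the ordering works, not spelled out above):

   import org.infinispan.metadata.Metadata;

   class VersionedEntry {
      private volatile Object value;
      private volatile Metadata metadata;

      void update(Object newValue, Metadata newMetadata) {
         value = newValue;       // write the value first...
         metadata = newMetadata; // ...publish the metadata last
      }

      // Read the metadata first and the value second: if the reader
      // sees the current metadata, the matching value write already
      // happened-before it, so a stale value can never be paired with
      // the current version. At worst it sees a stale version with a
      // newer value, which a version check treats as a mismatch and
      // fails safely.
      Object[] read() {
         Metadata m = metadata;
         Object v = value;
         return new Object[] { v, m };
      }
   }
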
>  
> 
> >
> >>
> >>>>>
> >>>>> Finally, this was not a problem when the value stored in Hot Rod was a ByteArrayValue wrapping the byte array and the version, because the value was treated atomically. In hindsight, adding getCacheEntry might have been premature, but the method has proven useful for other use cases too (rolling upgrades…etc).
> >>>>>
> >>>>> Cheers,
> >>>>>
> >>>>> [1] https://issues.jboss.org/browse/ISPN-4424
> >>>>> [2] https://issues.jboss.org/browse/ISPN-2956
> >>>>> [3] https://github.com/infinispan/infinispan/blob/master/server/core/src/main/scala/org/infinispan/server/core/AbstractProtocolDecoder.scala#L302
> >>>>> [4] https://github.com/galderz/infinispan/tree/t_4424
> >>>>> [5] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/AdvancedCache.java#L374
> >>>>
> >>>> Cheers,
> >>>> --
> >>>> Mircea Markus
> >>>> Infinispan lead (www.infinispan.org)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >> Cheers,
> >>
> 


--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz



