[infinispan-dev] fixing eviction with transactions (critical for Hibernate Search)

Sanne Grinovero sanne at infinispan.org
Wed Jun 8 11:24:20 EDT 2011


I mentioned that passivation was enabled because, while debugging, I
saw it was about to invoke the passivator, but I just realized that
passivation was actually disabled in my configuration.
So one thing I'm solving now is that we don't need to lock if the
passivator is disabled, as we're not going to do any operation with
the lock held, besides potentially invoking user listeners.
This affects the second level cache as well, as the bug kicks in
regardless of whether passivation is enabled.
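A minimal sketch of the guard I have in mind; all field and method
names here are illustrative, not the actual Infinispan code:

    // skip the per-key lock entirely when there is no passivator,
    // since no work happens while the lock is held in that case
    public void onEntryEviction(Object key, InternalCacheEntry entry) {
       if (!passivationEnabled) {
          notifier.notifyCacheEntryEvicted(key, entry); // listeners only
          return;
       }
       acquireLock(ctx, key); // only justified when we passivate
       try {
          passivator.passivate(key, entry);
          notifier.notifyCacheEntryEvicted(key, entry);
       } finally {
          releaseLock(key);
       }
    }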

Some findings about this lock, after talking with Vladimir yesterday on IRC:

we realized that the lock is actually acquired *after* the entry was
evicted from the container; so if the mission of this lock was to
protect readers from reading a non-existing value, then we should
acquire it before the entry is actually evicted. Currently a small
race window is possible, in which the value has been evicted but not
yet stored; the lock is held during the store operation itself, which
is likely the slowest part, so it might prevent a locking get from
finding the value nowhere, but that protection cannot cover the
window just described.

So for this case maybe keeping the lock is a good idea; but could
anybody confirm that holding it would actually prevent an
inconsistent read?

To answer a doubt Manik had on IRC: the purpose of this lock cannot
be to protect the cache store writer from writing the wrong value, as
we currently call InternalCacheEntry.getValue() before acquiring the
lock, and write that value even if it changes later on (the next
eviction will write the correct value).
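Roughly, the current sequence looks like this (again illustrative,
not the exact code), which is why the lock cannot guard the written
value:

    Object value = entry.getValue();  // snapshot taken *before* locking
    acquireLock(ctx, key);
    try {
       cacheStore.store(key, value);  // may already be stale; the next
                                      // eviction would store the fresh value
    } finally {
       releaseLock(key);
    }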

Sanne

2011/6/8 Mircea Markus <mircea.markus at jboss.com>:
>
> On 8 Jun 2011, at 11:44, Galder Zamarreño wrote:
>
>>
>> On Jun 7, 2011, at 2:41 PM, Mircea Markus wrote:
>>
>>>
>>> On 7 Jun 2011, at 13:13, Sanne Grinovero wrote:
>>>
>>>> Hello all,
>>>> in this scenario we have the Infinispan Lucene Directory using
>>>> batching (DummyTransaction), eviction and passivation to keep the
>>>> amount of memory used for the index under control; I'm using
>>>> LIRS but experienced the same issue with all other strategies.
>>>>
>>>> As you can see from the following stack trace, the batch ends by
>>>> sending a commit request, so the status of the transaction is 8
>>>> (STATUS_COMMITTING) in this context.
>>>> The new data is stored in the DataContainer, then the
>>>> BoundedConcurrentHashMap notifies the EvictionManagerImpl that it has
>>>> to evict some values, and the latter attempts to acquire a lock on
>>>> the to-be-evicted keys (which are obviously not the same keys I'm
>>>> trying to store).
>>>> Acquiring this lock is an invalid operation while the transaction is
>>>> in the commit state, so the operation fails with an exception.
>>>>
>>>> Thread [Hibernate Search: Directory writer-1] (Suspended (breakpoint at line 92 in LockManagerImpl))
>>>>     LockManagerImpl.lockAndRecord(Object, InvocationContext) line: 92
>>>>     EvictionManagerImpl.acquireLock(InvocationContext, Object) line: 210
>>>>     EvictionManagerImpl.onEntryEviction(Object, InternalCacheEntry) line: 170
>>>>     EvictionManagerImpl.onEntryEviction(Map<Object,InternalCacheEntry>) line: 162
>>>>     DefaultDataContainer$DefaultEvictionListener.onEntryEviction(Map<Object,InternalCacheEntry>) line: 201
>>>>     BoundedConcurrentHashMap$Segment<K,V>.notifyEvictionListener(Set<HashEntry<K,V>>) line: 1176
>>>>     BoundedConcurrentHashMap$Segment<K,V>.put(K, int, V, boolean) line: 1011
>>>>     BoundedConcurrentHashMap<K,V>.put(K, V) line: 1556
>>>>     DefaultDataContainer.put(Object, Object, long, long) line: 148
>>>>     ReadCommittedEntry.commit(DataContainer) line: 177
>>>>     LockingInterceptor.commitEntry(CacheEntry, boolean) line: 389
>>>>     LockingInterceptor.cleanupLocks(InvocationContext, boolean) line: 367
>>>>     LockingInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 98
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     CacheStoreInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     CacheStoreInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 148
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     CacheLoaderInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     CacheLoaderInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
>>>>     CacheLoaderInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     NotificationInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     NotificationInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 56
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     TxInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     TxInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 142
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     CacheMgmtInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     CacheMgmtInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
>>>>     CacheMgmtInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     InvocationContextInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     InvocationContextInterceptor.handleAll(InvocationContext, VisitableCommand) line: 96
>>>>     InvocationContextInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 63
>>>>     InvocationContextInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     BatchingInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>>>>     BatchingInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 77
>>>>     BatchingInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>>>>     CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>>>>     InterceptorChain.invoke(InvocationContext, VisitableCommand) line: 274
>>>>     TransactionCoordinator.commit(LocalTransaction, boolean) line: 136
>>>>     TransactionXaAdapter.commit(Xid, boolean) line: 124
>>>>     DummyTransaction.runCommitTx() line: 312
>>>>     DummyTransaction.commit() line: 99
>>>>     BatchModeTransactionManager(DummyBaseTransactionManager).commit() line: 97
>>>>     BatchContainer.resolveTransaction(BatchContainer$BatchDetails, boolean) line: 131
>>>>     BatchContainer.endBatch(boolean, boolean) line: 108
>>>>     BatchContainer.endBatch(boolean) line: 93
>>>>     CacheDelegate<K,V>.endBatch(boolean) line: 436
>>>>     InfinispanIndexOutput.close() line: 208
>>>>     IOUtils.closeSafely(Closeable...) line: 80
>>>>     FieldsWriter.close() line: 111
>>>>     StoredFieldsWriter.flush(SegmentWriteState) line: 52
>>>>     DocFieldProcessor.flush(Collection<DocConsumerPerThread>, SegmentWriteState) line: 58
>>>>
>>>> I would like to remove the lock operation from the eviction listener,
>>>> but I don't understand why this locking is needed there and would
>>>> appreciate some explanations or help with this.
>>> Looking at the code, my first thought was that it is needed to synchronize cache store/passivator access on that key.
>>> But afaik the cache store takes care of key locking at its own level [1], so I think it should be removed entirely.
>>>
>>> [1] https://github.com/mmarkus/infinispan/blob/master/core/src/main/java/org/infinispan/loaders/LockSupportCacheStore.java
>>>> Shouldn't an evict operation be a "best effort" operation in all
>>>> cases,
>>> +1. Even if my statement above doesn't stand and there's a reason to keep the lock for consistency, it should be best effort: try to lock with a 0 timeout, and if that fails just ignore it and move on.
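>>> Something along these lines, as a hypothetical sketch (using a plain
>>> java.util.concurrent lock rather than the actual LockManager API):
>>>
>>>    // best-effort eviction: zero timeout, skip the key on contention
>>>    java.util.concurrent.locks.Lock lock = lockFor(key); // hypothetical lookup
>>>    if (lock.tryLock()) { // returns immediately, never blocks
>>>       try {
>>>          evictAndPassivate(key); // hypothetical helper
>>>       } finally {
>>>          lock.unlock();
>>>       }
>>>    }
>>>    // on failure: leave the entry alone, a later eviction pass retries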
>>
>> Hmmm, if eviction were on its own, I could see the point of best effort.
>>
>> But the point of passivation is that you evict from memory and you store in the cache store. What harm would it cause if eviction could not complete for a passivated cache? You could potentially end up with entries both in memory and in the store for at least a brief period of time, until the eviction thread tries to evict them again.
> As per the stack trace, the lock acquisition fails within the scope of BoundedConcurrentHashMap$Segment.notifyEvictionListener: that means the entries were removed from memory for sure. So the only possible problematic outcome is to have entries removed from memory but not passivated. Still a problem though.
>> Now, what happens in a passivated cache if the entry, which is supposed to have been passivated, is still in memory because eviction could not acquire the lock,
> that's not going to happen: the lock acquisition fails after the element is evicted. I don't think this lock acquisition is needed though, and the fact that tests didn't fail after Sanne removed the locking code from EvictionManagerImpl.onEntryEviction backs my assumption.
>> and it suddenly gets modified in memory. Will these changes propagate to the passivated entry?
>>
>> I think this needs some thought...
>>
>> Btw, 2LC uses eviction but no passivation.
>>
>>>> or is the idea here that we want the evictable data to be
>>>> consistently evicted on multiple nodes, or maybe even rollback an
>>>> evict operation?
>>> We don't advertise cluster-wide eviction consistency, so I doubt that is the case.
>>
>> Indeed, eviction is a local only operation.
>>
>>>>
>>>> Cheers,
>>>> Sanne
>>
>> --
>> Galder Zamarreño
>> Sr. Software Engineer
>> Infinispan, JBoss Cache
>>