[infinispan-dev] fixing eviction with transactions (critical for Hibernate Search)

Sanne Grinovero sanne.grinovero at gmail.com
Tue Jun 7 11:41:08 EDT 2011


So, I removed all the locking logic from EvictionManagerImpl: all my
issues are solved and the core tests look no worse than usual.
I'm still hoping to understand whether there was an important reason to
lock; otherwise I will send a pull request ;)
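For reference, the behaviour I'd expect from a best-effort listener could look roughly like this. This is an illustrative sketch only, not the actual EvictionManagerImpl code: the class and method names (BestEffortEviction, onEntryEvicted) are made up, and a plain ReentrantLock stands in for Infinispan's per-key lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only: what a "best effort" eviction hook could look
// like. BestEffortEviction and onEntryEvicted are hypothetical names, not
// Infinispan API; the ReentrantLock stands in for the per-key lock.
public class BestEffortEviction {
    final ReentrantLock keyLock = new ReentrantLock();

    /** Returns true if the eviction work ran, false if it was skipped. */
    public boolean onEntryEvicted(Runnable passivationWork) {
        // tryLock() never blocks and never throws on contention, so a
        // committing transaction can never be failed by the listener.
        if (!keyLock.tryLock()) {
            return false; // someone else holds the key: just move on
        }
        try {
            passivationWork.run();
            return true;
        } finally {
            keyLock.unlock();
        }
    }
}
```

The point being that whether we skip the lock entirely or take it with a zero timeout, the commit path can never be aborted by the eviction callback.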

Sanne

2011/6/7 Sanne Grinovero <sanne.grinovero at gmail.com>:
> 2011/6/7 Mircea Markus <mircea.markus at jboss.com>:
>>
>> On 7 Jun 2011, at 13:13, Sanne Grinovero wrote:
>>
>> Hello all,
>> in this scenario we have the Infinispan Lucene Directory using
>> batching (DummyTransaction), eviction and passivation to keep the
>> amount of memory being used for the index under control; I'm using
>> LIRS but experienced the same issue with all other strategies.
>>
>> As you can see from the following stacktrace, the batching ends by
>> sending a commit request, so the status of the transaction is 8
>> (STATUS_COMMITTING) in this context.
>> The new data is stored in the DataContainer, then the
>> BoundedConcurrentHashMap notifies the EvictionManagerImpl as it has to
>> evict some values, and this one attempts to acquire a lock on the
>> to-be-evicted keys (which are obviously not the same ones I'm trying
>> to store).
>> Acquiring this lock is an invalid operation as the transaction is in
>> commit state, and so this operation fails with an exception.
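To make the failure mode concrete: a JTA transaction that has reached STATUS_COMMITTING (value 8 in javax.transaction.Status) can no longer register new locks, so any lock acquisition issued from inside commit processing has to fail. A minimal illustration of that invariant (TxLockGuard is a made-up name, not Infinispan code):

```java
// Hedged sketch of why the eviction listener blows up: a transaction that
// is already committing must reject new lock registrations, so a lock
// attempt made from within commit processing fails. TxLockGuard is a
// hypothetical class; the status values mirror javax.transaction.Status.
public class TxLockGuard {
    static final int STATUS_ACTIVE = 0;     // javax.transaction.Status.STATUS_ACTIVE
    static final int STATUS_COMMITTING = 8; // javax.transaction.Status.STATUS_COMMITTING

    private int status = STATUS_ACTIVE;

    public void beginCommit() {
        status = STATUS_COMMITTING;
    }

    public void acquireLock(Object key) {
        if (status == STATUS_COMMITTING)
            throw new IllegalStateException("cannot lock " + key + " while committing");
        // ... an active transaction would record the lock here
    }
}
```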
>>
>> Thread [Hibernate Search: Directory writer-1] (Suspended (breakpoint at line 92 in LockManagerImpl))
>> LockManagerImpl.lockAndRecord(Object, InvocationContext) line: 92
>> EvictionManagerImpl.acquireLock(InvocationContext, Object) line: 210
>> EvictionManagerImpl.onEntryEviction(Object, InternalCacheEntry) line: 170
>> EvictionManagerImpl.onEntryEviction(Map<Object,InternalCacheEntry>) line: 162
>> DefaultDataContainer$DefaultEvictionListener.onEntryEviction(Map<Object,InternalCacheEntry>) line: 201
>> BoundedConcurrentHashMap$Segment<K,V>.notifyEvictionListener(Set<HashEntry<K,V>>) line: 1176
>> BoundedConcurrentHashMap$Segment<K,V>.put(K, int, V, boolean) line: 1011
>> BoundedConcurrentHashMap<K,V>.put(K, V) line: 1556
>> DefaultDataContainer.put(Object, Object, long, long) line: 148
>> ReadCommittedEntry.commit(DataContainer) line: 177
>> LockingInterceptor.commitEntry(CacheEntry, boolean) line: 389
>> LockingInterceptor.cleanupLocks(InvocationContext, boolean) line: 367
>> LockingInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 98
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> CacheStoreInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> CacheStoreInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 148
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> CacheLoaderInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> CacheLoaderInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
>> CacheLoaderInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> NotificationInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> NotificationInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 56
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> TxInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> TxInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 142
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> CacheMgmtInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> CacheMgmtInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
>> CacheMgmtInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> InvocationContextInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> InvocationContextInterceptor.handleAll(InvocationContext, VisitableCommand) line: 96
>> InvocationContextInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 63
>> InvocationContextInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> BatchingInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
>> BatchingInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 77
>> BatchingInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
>> CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
>> InterceptorChain.invoke(InvocationContext, VisitableCommand) line: 274
>> TransactionCoordinator.commit(LocalTransaction, boolean) line: 136
>> TransactionXaAdapter.commit(Xid, boolean) line: 124
>> DummyTransaction.runCommitTx() line: 312
>> DummyTransaction.commit() line: 99
>> BatchModeTransactionManager(DummyBaseTransactionManager).commit() line: 97
>> BatchContainer.resolveTransaction(BatchContainer$BatchDetails, boolean) line: 131
>> BatchContainer.endBatch(boolean, boolean) line: 108
>> BatchContainer.endBatch(boolean) line: 93
>> CacheDelegate<K,V>.endBatch(boolean) line: 436
>> InfinispanIndexOutput.close() line: 208
>> IOUtils.closeSafely(Closeable...) line: 80
>> FieldsWriter.close() line: 111
>> StoredFieldsWriter.flush(SegmentWriteState) line: 52
>> DocFieldProcessor.flush(Collection<DocConsumerPerThread>, SegmentWriteState) line: 58
>>
>> I would like to remove the lock operation from the eviction listener,
>> but I don't understand why this locking is needed there and would
>> appreciate some explanation or help with this.
>>
>> Looking at the code, my first thought was that it is needed for
>> synchronizing cache store/passivator access on that key.
>> But afaik the cache store takes care of key locking at its own level[1],
>> so I think that lock should be removed entirely.
>> [1] https://github.com/mmarkus/infinispan/blob/master/core/src/main/java/org/infinispan/loaders/LockSupportCacheStore.java
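For illustration, the kind of per-key lock striping a store like LockSupportCacheStore performs could be sketched as below. StripedKeyLocks and withWriteLock are hypothetical names, not the real implementation; the point is that each key maps to a stripe inside the store, so callers need not hold cache-level key locks for store access to be safe.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of lock striping inside a cache store: each key
// hashes to one of N stripes, so the store serializes access per key on
// its own, independently of any cache-level locking.
public class StripedKeyLocks {
    private final ReentrantReadWriteLock[] stripes;

    public StripedKeyLocks(int concurrencyLevel) {
        stripes = new ReentrantReadWriteLock[concurrencyLevel];
        for (int i = 0; i < stripes.length; i++)
            stripes[i] = new ReentrantReadWriteLock();
    }

    /** Deterministically maps a key to its stripe (non-negative index). */
    ReentrantReadWriteLock stripeFor(Object key) {
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }

    /** Runs a store/passivation action while holding the key's write lock. */
    public void withWriteLock(Object key, Runnable action) {
        ReentrantReadWriteLock l = stripeFor(key);
        l.writeLock().lock();
        try {
            action.run();
        } finally {
            l.writeLock().unlock();
        }
    }
}
```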
>>
>> Shouldn't an evict operation be a "best effort" operation in all
>> cases,
>>
>> +1. Even if my statement above doesn't stand and there's a reason to keep
>> the lock for consistency, it should be a best effort: try to lock with 0
>> timeout, if fails just ignore it and move on.
>
> Why should I even attempt locking it?
> From what you say, it seems that we can ignore it altogether?
>
>>
>> or is the idea here that we want the evictable data to be
>> consistently evicted on multiple nodes, or maybe even rollback an
>> evict operation?
>>
>> We don't advertise cluster-wide eviction consistency, so I doubt that is
>> the case.
>
> So some alternative solutions:
> 1) consider STATUS_COMMITTING valid for further locking; not nice for XA.
> 2) don't lock the evicted keys at all
> 3) perform eviction before the commit phase: this means moving the
> invocation to the bounded container to a different moment
> 4) make sure batching does not reuse the same code as XA - and
> consider it illegal to use XA with eviction (use cases?)
>
> I'm going to try 2), please warn me if there's need to lock it.
>
> Cheers,
> Sanne
>


