fixing eviction with transactions (critical for Hibernate Search)
by Sanne Grinovero
Hello all,
in this scenario the Infinispan Lucene Directory uses batching
(DummyTransaction), eviction and passivation to keep the amount of
memory used for the index under control; I'm using LIRS but have
experienced the same issue with all other eviction strategies.
As you can see from the stacktrace below, the batch ends by sending a
commit request, so at this point the transaction status is 8
(STATUS_COMMITTING).
The new data is stored in the DataContainer; the BoundedConcurrentHashMap
then notifies the EvictionManagerImpl that it has to evict some values,
and the EvictionManagerImpl attempts to acquire a lock on the
to-be-evicted keys (which are obviously not the same keys I'm trying to
store).
Acquiring such a lock is an invalid operation while the transaction is in
the commit state, so the operation fails with an exception.
Thread [Hibernate Search: Directory writer-1] (Suspended (breakpoint at line 92 in LockManagerImpl))
LockManagerImpl.lockAndRecord(Object, InvocationContext) line: 92
EvictionManagerImpl.acquireLock(InvocationContext, Object) line: 210
EvictionManagerImpl.onEntryEviction(Object, InternalCacheEntry) line: 170
EvictionManagerImpl.onEntryEviction(Map<Object,InternalCacheEntry>) line: 162
DefaultDataContainer$DefaultEvictionListener.onEntryEviction(Map<Object,InternalCacheEntry>) line: 201
BoundedConcurrentHashMap$Segment<K,V>.notifyEvictionListener(Set<HashEntry<K,V>>) line: 1176
BoundedConcurrentHashMap$Segment<K,V>.put(K, int, V, boolean) line: 1011
BoundedConcurrentHashMap<K,V>.put(K, V) line: 1556
DefaultDataContainer.put(Object, Object, long, long) line: 148
ReadCommittedEntry.commit(DataContainer) line: 177
LockingInterceptor.commitEntry(CacheEntry, boolean) line: 389
LockingInterceptor.cleanupLocks(InvocationContext, boolean) line: 367
LockingInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 98
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
CacheStoreInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
CacheStoreInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 148
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
CacheLoaderInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
CacheLoaderInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
CacheLoaderInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
NotificationInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
NotificationInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 56
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
TxInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
TxInterceptor.visitCommitCommand(TxInvocationContext, CommitCommand) line: 142
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
CacheMgmtInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
CacheMgmtInterceptor(CommandInterceptor).handleDefault(InvocationContext, VisitableCommand) line: 133
CacheMgmtInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
InvocationContextInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
InvocationContextInterceptor.handleAll(InvocationContext, VisitableCommand) line: 96
InvocationContextInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 63
InvocationContextInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
BatchingInterceptor(CommandInterceptor).invokeNextInterceptor(InvocationContext, VisitableCommand) line: 119
BatchingInterceptor.handleDefault(InvocationContext, VisitableCommand) line: 77
BatchingInterceptor(AbstractVisitor).visitCommitCommand(TxInvocationContext, CommitCommand) line: 116
CommitCommand.acceptVisitor(InvocationContext, Visitor) line: 60
InterceptorChain.invoke(InvocationContext, VisitableCommand) line: 274
TransactionCoordinator.commit(LocalTransaction, boolean) line: 136
TransactionXaAdapter.commit(Xid, boolean) line: 124
DummyTransaction.runCommitTx() line: 312
DummyTransaction.commit() line: 99
BatchModeTransactionManager(DummyBaseTransactionManager).commit() line: 97
BatchContainer.resolveTransaction(BatchContainer$BatchDetails, boolean) line: 131
BatchContainer.endBatch(boolean, boolean) line: 108
BatchContainer.endBatch(boolean) line: 93
CacheDelegate<K,V>.endBatch(boolean) line: 436
InfinispanIndexOutput.close() line: 208
IOUtils.closeSafely(Closeable...) line: 80
FieldsWriter.close() line: 111
StoredFieldsWriter.flush(SegmentWriteState) line: 52
DocFieldProcessor.flush(Collection<DocConsumerPerThread>, SegmentWriteState) line: 58
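To illustrate why that lock attempt blows up, here is a minimal,
self-contained sketch (plain Java against the javax.transaction API, not
Infinispan code; acquireLock() below is a simplified stand-in for what a
transactional lock manager checks before granting a lock):

import javax.transaction.Status;

public class EvictionDuringCommit {

    // Simplified stand-in: refuse new locks once the transaction has moved
    // past the active/preparing phases.
    static void acquireLock(Object key, int txStatus) {
        if (txStatus != Status.STATUS_ACTIVE && txStatus != Status.STATUS_PREPARING) {
            throw new IllegalStateException("Cannot acquire lock on " + key
                    + ", transaction status is " + txStatus);
        }
        // ...record lock ownership here...
    }

    public static void main(String[] args) {
        // STATUS_COMMITTING is the constant 8 observed at the breakpoint above.
        System.out.println(Status.STATUS_COMMITTING); // prints 8

        // The eviction listener fires while the commit is flushing entries into
        // the DataContainer, so its lock attempt on an unrelated key fails:
        acquireLock("evictedIndexChunkKey", Status.STATUS_COMMITTING);
    }
}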
I would like to remove the lock operation from the eviction listener,
but I don't understand why this locking is needed there and would
appreciate some explanation or help with this.
Shouldn't an evict operation be a "best effort" operation in all
cases, or is the idea here that we want the evictable data to be
consistently evicted on multiple nodes, or maybe even to roll back an
evict operation?
Cheers,
Sanne
Problem generating configdoc
by Pete Muir
Vladimir
There is a problem generating the configdoc at the moment. I've tracked it down to a problem with the async node: the associated bean class can't be found, but I can't immediately see why.
Can you take a look?
Pete
TPC-C benchmark for Infinispan
by Sebastiano Peluso
Hi all,
I'm Sebastiano Peluso, and during my work in the Cloud-TM project [1] I
have adapted the TPC-C benchmark [2] so that it can be executed on Infinispan.
The resulting implementation is the RadargunTPCC benchmark, and you can
take a look at the source code at the following link:
https://github.com/cloudtm/RadargunTPCC
RadargunTPCC uses the software infrastructure offered by Radargun to
execute TPC-C transactions on top of a transactional in-memory data
grid like Infinispan. Basically, during the simulation period each
Stresser thread executes a sequence of iterations, and in each iteration
it randomly chooses one of the implemented transaction types and
executes it (a sketch of this choice follows the list below).
The types of transactions currently implemented are:
- New-Order Transaction;
- Payment Transaction;
- Order-Status Transaction.
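To make the iteration structure concrete, here is a minimal sketch
(hypothetical class names, not the actual RadargunTPCC code; the weights
are illustrative assumptions, not necessarily the benchmark's real mix):

import java.util.Random;

public class StresserIterationSketch {

    enum TxType { NEW_ORDER, PAYMENT, ORDER_STATUS }

    private static final Random RANDOM = new Random();

    // Pick one of the three implemented transaction types per iteration,
    // using assumed weights.
    static TxType chooseTransactionType() {
        int roll = RANDOM.nextInt(100);
        if (roll < 45) return TxType.NEW_ORDER;
        if (roll < 90) return TxType.PAYMENT;
        return TxType.ORDER_STATUS;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println("iteration " + i + " -> " + chooseTransactionType());
        }
    }
}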
Concerning the data layer, the tables of the TPC-C database are mapped
into the data grid following this scheme:
- Each table (Stock table, Warehouse table, etc.) is mapped to a Java
class, and the columns of the table become instance variables of the
corresponding class.
- Each class of the domain implements a getKey() method which builds a
key identifier for an instance of that class. More specifically, if a
table T with columns C1,...,Cn and primary key formed by columns
C1,...,Ck (with k<=n) is mapped to a class S, the key of an instance of
S is a String built by concatenating the name of S and the String
representation of the values of C1,...,Ck.
- Each class of the domain implements store() and load() methods: the
former stores an instance of the class as a <key,value> pair in a row
of the data grid; the latter loads an instance of the class from the
data grid. More specifically, an instance o is stored in a row as a
<key,value> pair where the key equals o.getKey() and the value is o
itself (see the sketch below).
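As an illustration of this mapping, a simplified, hypothetical version of
a domain class could look as follows (the field names and the key format
are assumptions for the sketch; the real classes live in the repository
linked above):

import org.infinispan.Cache;
import java.io.Serializable;

public class Warehouse implements Serializable {

    private long w_id;       // primary key column W_ID
    private String w_name;   // column W_NAME
    private double w_tax;    // column W_TAX

    public Warehouse(long w_id, String w_name, double w_tax) {
        this.w_id = w_id;
        this.w_name = w_name;
        this.w_tax = w_tax;
    }

    // Key = class name concatenated with the primary-key column values.
    public String getKey() {
        return "WAREHOUSE_" + w_id;
    }

    // Store this instance as a <key, value> pair, with the object itself as value.
    public void store(Cache<String, Object> cache) {
        cache.put(getKey(), this);
    }

    // Load the instance identified by this object's key from the data grid.
    public boolean load(Cache<String, Object> cache) {
        Warehouse loaded = (Warehouse) cache.get(getKey());
        if (loaded == null) return false;
        this.w_name = loaded.w_name;
        this.w_tax = loaded.w_tax;
        return true;
    }
}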
Any suggestion or comment is highly appreciated.
Regards,
Sebastiano Peluso
[1] http://www.cloudtm.eu/
[2] http://www.tpc.org/tpcc/spec/tpcc_current.pdf
Partial state transfer
by Bela Ban
I looked into adding partial state transfer back into JGroups, but found
out that partial state transfer is fundamentally flawed, something I've
always suspected ! (Regular state transfer is correct, and has always
been correct).
- Say we have nodes A and B. B requests the state from A
- There are partial states X and Y
- Message M1 modifies X, M2 modifies Y
Here's what happens:
T1: A multicasts M1
T2: A delivers M1, and changes X
T3: B sends a GET_STATE("Y") request to A // partial state request for state Y
T4: A multicasts M2
T5: A delivers M2, changing Y
T6: A receives the GET_STATE request, sends a SET_STATE response back including Y and the digest (including M1 and M2's seqnos)
T7: B receives the SET_STATE response, sets its digest (which now includes M1 and M2) and state Y *BUT NOT* state X !
T8: *** B receives M1, discards it because it is already in its digest ***
T9: B receives M2, and also discards it
At time T8, M1 (which would have changed state X) is discarded, because
it is already in the digest sent with the SET_STATE response. Therefore
state X is now incorrect, as M1 was never applied !
To summarize: if there are a number of updates to partial states and we
don't receive all of them before requesting a partial state, any update
already covered by the received digest is silently dropped, even when it
targeted a partial state that was not transferred...
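To make the failure mode concrete, here is a minimal, self-contained
simulation of the sequence above (plain Java with made-up names, not
JGroups code):

import java.util.HashMap;
import java.util.Map;

public class PartialStateTransferFlaw {

    public static void main(String[] args) {
        // Node A's state, updated by multicast messages.
        Map<String, Integer> a = new HashMap<>();
        a.put("X", 0);
        a.put("Y", 0);

        // T1/T2: A delivers M1 (seqno 1), changing X.
        a.put("X", 1);
        // T4/T5: A delivers M2 (seqno 2), changing Y.
        a.put("Y", 2);
        long highestSeqnoOnA = 2;

        // T6/T7: B installs partial state Y plus A's digest (covers seqnos 1 and 2),
        // but NOT state X.
        Map<String, Integer> b = new HashMap<>();
        b.put("X", 0);                  // stale copy of X
        b.put("Y", a.get("Y"));         // transferred partial state Y
        long bDigest = highestSeqnoOnA; // digest now says "seen up to seqno 2"

        // T8: M1 (seqno 1) finally reaches B and is discarded because 1 <= digest.
        long m1Seqno = 1;
        if (m1Seqno > bDigest) {
            b.put("X", 1); // never executed
        }
        // T9: M2 (seqno 2) is likewise discarded, which is harmless for Y,
        // but X on B is now permanently incorrect.

        System.out.println("A: " + a); // X=1, Y=2
        System.out.println("B: " + b); // X=0, Y=2
    }
}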
I'm a real idiot, as I've written this down before, in 2006: see [1] for
details.
In a nutshell, [1] shows that partial state transfer doesn't work,
unless virtual synchrony (FLUSH) is used.
So I propose Infinispan and JBoss AS look into how they can replace
their use of partial state transfer. I suggest Infinispan uses the same
approach already used for state transfer with mode=distribution.
Opinions ?
[1]
https://github.com/belaban/JGroups/blob/master/doc/design/PartialStateTra...
--
Bela Ban
Lead JGroups / Clustering Team
JBoss
Issue posted for JBoss Logging and slf4j
by Jeff Ramsdale
I've posted https://issues.jboss.org/browse/JBLOGGING-65 to address an
issue I've run into while converting to Infinispan 5.0.0.CR5. The issue
concerns the switch to JBoss Logging and its failure to support
log4j-over-slf4j, a jar provided by slf4j that adapts log4j calls and
routes them to slf4j.
JBoss Logging doesn't detect that org.apache.log4j.LogManager is
provided by the log4j-over-slf4j jar and selects the
Log4jLoggerProvider instead of the Slf4jLoggerProvider.
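For what it's worth, one conceivable fix is a classpath probe that looks
for a class shipped by real log4j but (as far as I know) not by the
bridge; the choice of org.apache.log4j.Hierarchy as that marker is my
assumption, not necessarily what JBLOGGING-65 will end up doing:

public class LoggerProviderProbe {

    private static boolean isPresent(String className) {
        try {
            Class.forName(className, false, LoggerProviderProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        boolean hasLogManager = isPresent("org.apache.log4j.LogManager");
        boolean hasHierarchy = isPresent("org.apache.log4j.Hierarchy");

        if (hasLogManager && !hasHierarchy) {
            // Assumption: LogManager without the log4j internals means the
            // log4j-over-slf4j bridge is on the classpath, so route through slf4j.
            System.out.println("bridge detected -> use Slf4jLoggerProvider");
        } else if (hasLogManager) {
            System.out.println("real log4j detected -> use Log4jLoggerProvider");
        } else {
            System.out.println("no log4j on the classpath");
        }
    }
}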
I still haven't nailed down what's going on with logging in Infinispan
5.0.0, but I'm not getting any logging output and have run into several
issues. See https://issues.jboss.org/browse/ISPN-1177 as well (it
seems Maven 3 is required to build Infinispan 5?). I'm concerned
that folks converting from 4.x will run into the same issues I have,
and I wanted to raise a red flag before the release. Is anybody else
seeing a degradation in logging behavior?
-jeff
Partial state transfer in Infinispan
by Bela Ban
We currently use JGroups' partial state transfer to transfer individual
caches from one Infinispan instance to another.
Since I got rid of partial state transfer in JGroups 3.0 and would rather
not add it back, I'd like to know whether this is still needed.
I thought we currently require the same set of caches to be available
on all Infinispan instances; the reason (IIRC) was that distribution
wouldn't work if caches 1 and 2 were available on instances A and B but
not on C, because consistent hashing distributes the data based on
views, and we didn't want to have to keep track of individual caches...
Why are we actually using JGroups' state transfer with replication, but
our own state transfer with distribution ?
Opinions ?
--
Bela Ban
Lead JGroups / Clustering Team
JBoss
new context Flag: SKIP_INDEXING
by Sanne Grinovero
Hello all,
I was having a quick look into
https://issues.jboss.org/browse/ISPN-1179
to see if I could get a solution into tomorrow's release. The problem is
that during preload from a cache loader the entries are added to the
caches as if they were just "put", and so they get added to the index
again: I need some way to notify the QueryInterceptor that this
operation needs to be skipped.
https://github.com/Sanne/infinispan/commit/a1d279c16de2af0f3ff6475b1610f4...
I've already sent a pull request, but since I'm proposing a new Flag
in the core module I wanted to invite some more comments on it.
In the test case I could see that a standard evict/reload does not
trigger the same behaviour, so I guess it's entirely possible to change
the CacheLoaderManagerImpl in such a way that this doesn't look like a
put, but I think that having the SKIP_INDEXING flag around could be
very useful for all Query users: in Hibernate Search it's a
frequently requested feature (which was not yet implemented there, as
it's not that simple in that case).
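For reference, assuming the proposed flag follows the same per-invocation
pattern as the existing flags (e.g. Flag.SKIP_CACHE_STORE), usage from a
Query user's code would look roughly like this (a sketch, not the final API):

import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class SkipIndexingExample {

    // Write an entry without the QueryInterceptor updating the Lucene index;
    // the entry itself is still stored in the cache as usual.
    public static void putWithoutIndexing(Cache<String, Object> cache,
                                          String key, Object value) {
        AdvancedCache<String, Object> advanced = cache.getAdvancedCache();
        advanced.withFlags(Flag.SKIP_INDEXING).put(key, value);
    }
}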
Cheers,
Sanne