[JBoss JIRA] (ISPN-2723) NPE using cache loader preload with Lucene directory
by Christopher Wong (JIRA)
[ https://issues.jboss.org/browse/ISPN-2723?page=com.atlassian.jira.plugin.... ]
Christopher Wong commented on ISPN-2723:
----------------------------------------
I added a link to a dev mailing list posting, because Matej Lazar seems to have hit the same issue and provided some diagnosis.
> NPE using cache loader preload with Lucene directory
> ----------------------------------------------------
>
> Key: ISPN-2723
> URL: https://issues.jboss.org/browse/ISPN-2723
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Loaders and Stores
> Affects Versions: 5.2.0.CR1
> Reporter: Christopher Wong
> Assignee: Mircea Markus
> Attachments: infinispan.log
>
>
> I am seeing an NPE that looks a lot like ISPN-1470, except that this is happening in Infinispan 5.2.0.CR1. I have configured Infinispan's Lucene directory provider for use in Hibernate Search, and the Hibernate SessionFactory is configured with a JTA transaction manager. Starting with no index works fine, but if I shut down Tomcat (with shutdown.sh) and restart it, a huge pile of exceptions occurs, starting with an NPE. The cache configuration in infinispan.cfg.xml looks like the following. I will attach a log file excerpt with a sampling of the exceptions being logged. This only happens in distributed mode; replicated mode is fine. I have seen it happen with both the JDBM and file cache stores.
> <namedCache name="LuceneIndexesData">
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="true"/>
>       <sync replTimeout="50000"/>
>       <l1 enabled="false"/>
>    </clustering>
>    <loaders shared="true" preload="true">
>       <loader class="org.infinispan.loaders.file.FileCacheStore"
>               fetchPersistentState="false"
>               ignoreModifications="false"
>               purgeOnStartup="false">
>          <properties>
>             <property name="location" value="/some/path/.index/data"/>
>          </properties>
>       </loader>
>    </loaders>
> </namedCache>
[JBoss JIRA] (ISPN-2723) NPE using cache loader preload with Lucene directory
by Christopher Wong (JIRA)
[ https://issues.jboss.org/browse/ISPN-2723?page=com.atlassian.jira.plugin.... ]
Christopher Wong updated ISPN-2723:
-----------------------------------
Forum Reference: http://lists.jboss.org/pipermail/infinispan-dev/2013-January/011854.html
> NPE using cache loader preload with Lucene directory
> ----------------------------------------------------
>
> Key: ISPN-2723
> URL: https://issues.jboss.org/browse/ISPN-2723
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Loaders and Stores
> Affects Versions: 5.2.0.CR1
> Reporter: Christopher Wong
> Assignee: Mircea Markus
> Attachments: infinispan.log
>
>
> I am seeing an NPE that looks a lot like ISPN-1470, except that this is happening in Infinispan 5.2.0.CR1. I have configured Infinispan's Lucene directory provider for use in Hibernate Search, and the Hibernate SessionFactory is configured with a JTA transaction manager. Starting with no index works fine, but if I shut down Tomcat (with shutdown.sh) and restart it, a huge pile of exceptions occurs, starting with an NPE. The cache configuration in infinispan.cfg.xml looks like the following. I will attach a log file excerpt with a sampling of the exceptions being logged. This only happens in distributed mode; replicated mode is fine. I have seen it happen with both the JDBM and file cache stores.
> <namedCache name="LuceneIndexesData">
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="true"/>
>       <sync replTimeout="50000"/>
>       <l1 enabled="false"/>
>    </clustering>
>    <loaders shared="true" preload="true">
>       <loader class="org.infinispan.loaders.file.FileCacheStore"
>               fetchPersistentState="false"
>               ignoreModifications="false"
>               purgeOnStartup="false">
>          <properties>
>             <property name="location" value="/some/path/.index/data"/>
>          </properties>
>       </loader>
>    </loaders>
> </namedCache>
[JBoss JIRA] (ISPN-2723) NPE using cache loader preload with Lucene directory
by Christopher Wong (JIRA)
[ https://issues.jboss.org/browse/ISPN-2723?page=com.atlassian.jira.plugin.... ]
Christopher Wong updated ISPN-2723:
-----------------------------------
Affects Version/s: 5.2.0.CR1
Component/s: Distributed Cache, Loaders and Stores
> NPE using cache loader preload with Lucene directory
> ----------------------------------------------------
>
> Key: ISPN-2723
> URL: https://issues.jboss.org/browse/ISPN-2723
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Loaders and Stores
> Affects Versions: 5.2.0.CR1
> Reporter: Christopher Wong
> Assignee: Mircea Markus
> Attachments: infinispan.log
>
>
> I am seeing an NPE that looks a lot like ISPN-1470, except that this is happening in Infinispan 5.2.0.CR1. I have configured Infinispan's Lucene directory provider for use in Hibernate Search, and the Hibernate SessionFactory is configured with a JTA transaction manager. Starting with no index works fine, but if I shut down Tomcat (with shutdown.sh) and restart it, a huge pile of exceptions occurs, starting with an NPE. The cache configuration in infinispan.cfg.xml looks like the following. I will attach a log file excerpt with a sampling of the exceptions being logged. This only happens in distributed mode; replicated mode is fine. I have seen it happen with both the JDBM and file cache stores.
> <namedCache name="LuceneIndexesData">
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="true"/>
>       <sync replTimeout="50000"/>
>       <l1 enabled="false"/>
>    </clustering>
>    <loaders shared="true" preload="true">
>       <loader class="org.infinispan.loaders.file.FileCacheStore"
>               fetchPersistentState="false"
>               ignoreModifications="false"
>               purgeOnStartup="false">
>          <properties>
>             <property name="location" value="/some/path/.index/data"/>
>          </properties>
>       </loader>
>    </loaders>
> </namedCache>
[JBoss JIRA] (ISPN-2723) NPE using cache loader preload with Lucene directory
by Christopher Wong (JIRA)
Christopher Wong created ISPN-2723:
--------------------------------------
Summary: NPE using cache loader preload with Lucene directory
Key: ISPN-2723
URL: https://issues.jboss.org/browse/ISPN-2723
Project: Infinispan
Issue Type: Bug
Reporter: Christopher Wong
Assignee: Mircea Markus
Attachments: infinispan.log
I am seeing an NPE that looks a lot like ISPN-1470, except that this is happening in Infinispan 5.2.0.CR1. I have configured Infinispan's Lucene directory provider for use in Hibernate Search, and the Hibernate SessionFactory is configured with a JTA transaction manager. Starting with no index works fine, but if I shut down Tomcat (with shutdown.sh) and restart it, a huge pile of exceptions occurs, starting with an NPE. The cache configuration in infinispan.cfg.xml looks like the following. I will attach a log file excerpt with a sampling of the exceptions being logged. This only happens in distributed mode; replicated mode is fine. I have seen it happen with both the JDBM and file cache stores.
<namedCache name="LuceneIndexesData">
   <clustering mode="dist">
      <stateTransfer fetchInMemoryState="true"/>
      <sync replTimeout="50000"/>
      <l1 enabled="false"/>
   </clustering>
   <loaders shared="true" preload="true">
      <loader class="org.infinispan.loaders.file.FileCacheStore"
              fetchPersistentState="false"
              ignoreModifications="false"
              purgeOnStartup="false">
         <properties>
            <property name="location" value="/some/path/.index/data"/>
         </properties>
      </loader>
   </loaders>
</namedCache>
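For context, a minimal sketch of how such a deployment is typically wired together (editor's illustration, not part of the report): the property names are assumptions based on the Hibernate Search 4.x Infinispan directory module, and the resource name matches the infinispan.cfg.xml mentioned above.
{code}
import java.util.Properties;

public class SearchConfigSketch {
    // Hypothetical helper: builds the Hibernate Search properties that point
    // Lucene index storage at Infinispan instead of the local filesystem.
    public static Properties infinispanDirectoryProperties() {
        Properties p = new Properties();
        // Assumed property name selecting the Infinispan directory provider.
        p.setProperty("hibernate.search.default.directory_provider", "infinispan");
        // Cache definitions such as LuceneIndexesData come from this resource.
        p.setProperty("hibernate.search.infinispan.configuration_resourcename",
                      "infinispan.cfg.xml");
        return p;
    }
}
{code}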
[JBoss JIRA] (ISPN-2724) L1 is not used for write-operations (for node affine keys)
by Thomas Fromm (JIRA)
Thomas Fromm created ISPN-2724:
----------------------------------
Summary: L1 is not used for write-operations (for node affine keys)
Key: ISPN-2724
URL: https://issues.jboss.org/browse/ISPN-2724
Project: Infinispan
Issue Type: Bug
Components: Distributed Execution and Map/Reduce
Affects Versions: 5.2.0.CR1
Reporter: Thomas Fromm
Assignee: Vladimir Blagojevic
I need to make sure that a key written on a node is present in L1, even when the owner is a different node.
I have attached an example that shows the expected behaviour.
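A minimal sketch of the behaviour the report expects, assuming a DIST cache with L1 enabled and a key whose primary owner is a different node (editor's illustration, not the attached example):
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class L1AffinitySketch {
    // After a non-owner node writes a key, a purely local read should find
    // the value in L1 rather than returning null or fetching remotely.
    static boolean writtenKeyStaysLocal(Cache<String, String> cache, String key) {
        cache.put(key, "value"); // write from a node that does not own 'key'
        AdvancedCache<String, String> advanced = cache.getAdvancedCache();
        // CACHE_MODE_LOCAL restricts the get() to this node's data container,
        // so a non-null result means the entry is held locally (e.g. in L1).
        return advanced.withFlags(Flag.CACHE_MODE_LOCAL).get(key) != null;
    }
}
{code}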
[JBoss JIRA] (ISPN-2723) NPE using cache loader preload with Lucene directory
by Christopher Wong (JIRA)
[ https://issues.jboss.org/browse/ISPN-2723?page=com.atlassian.jira.plugin.... ]
Christopher Wong updated ISPN-2723:
-----------------------------------
Attachment: infinispan.log
Tomcat log excerpt with exceptions on cache reload.
> NPE using cache loader preload with Lucene directory
> ----------------------------------------------------
>
> Key: ISPN-2723
> URL: https://issues.jboss.org/browse/ISPN-2723
> Project: Infinispan
> Issue Type: Bug
> Reporter: Christopher Wong
> Assignee: Mircea Markus
> Attachments: infinispan.log
>
>
> I am seeing an NPE that looks a lot like ISPN-1470, except that this is happening in Infinispan 5.2.0.CR1. I have configured Infinispan's Lucene directory provider for use in Hibernate Search, and the Hibernate SessionFactory is configured with a JTA transaction manager. Starting with no index works fine, but if I shut down Tomcat (with shutdown.sh) and restart it, a huge pile of exceptions occurs, starting with an NPE. The cache configuration in infinispan.cfg.xml looks like the following. I will attach a log file excerpt with a sampling of the exceptions being logged. This only happens in distributed mode; replicated mode is fine. I have seen it happen with both the JDBM and file cache stores.
> <namedCache name="LuceneIndexesData">
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="true"/>
>       <sync replTimeout="50000"/>
>       <l1 enabled="false"/>
>    </clustering>
>    <loaders shared="true" preload="true">
>       <loader class="org.infinispan.loaders.file.FileCacheStore"
>               fetchPersistentState="false"
>               ignoreModifications="false"
>               purgeOnStartup="false">
>          <properties>
>             <property name="location" value="/some/path/.index/data"/>
>          </properties>
>       </loader>
>    </loaders>
> </namedCache>
[JBoss JIRA] (ISPN-1300) Reduce number of locks and buckets generated by BucketBasedCacheStore
by Johann Burkard (JIRA)
[ https://issues.jboss.org/browse/ISPN-1300?page=com.atlassian.jira.plugin.... ]
Johann Burkard commented on ISPN-1300:
--------------------------------------
Sorry for digging this one up, but I tried setting the maximum number of buckets to 4096 (for an ext3 file system) and that made Infinispan drastically slower. If this becomes configurable, could it be done in a way that is free of side effects?
> Reduce number of locks and buckets generated by BucketBasedCacheStore
> ---------------------------------------------------------------------
>
> Key: ISPN-1300
> URL: https://issues.jboss.org/browse/ISPN-1300
> Project: Infinispan
> Issue Type: Enhancement
> Components: Configuration, Loaders and Stores
> Affects Versions: 5.0.0.CR8
> Reporter: Robert Stupp
> Assignee: Manik Surtani
> Fix For: 5.0.0.FINAL
>
>
> The current implementation of FileCacheStore creates one bucket and lock for each hash key, which results in up to 4.2 billion files (2^32).
> It should limit the number of files to:
> a) improve performance of purge
> b) reduce the number of open file handles (system resources)
> c) reduce the number of Java objects on the heap (JVM resources)
> d) improve overall performance
> The implementation allows us to do so. Only 4 lines of code are necessary in the BucketBasedCacheStore implementation:
> {code}
> private int hashKeyMask = 0xfffffc00; // TODO should get a configuration entry
>
> @Override
> protected Integer getLockFromKey(Object key) {
>    return Integer.valueOf(key.hashCode() & hashKeyMask);
> }
> {code}
> This reduces the number of files to 2^22 = 4,194,304. Since each application and each cache store has different semantics, the hashKeyMask value should be configurable; best would be to configure the number of bits.
> Side effect: if someone changes the FileCacheStore hashKeyMask, the whole cache store becomes unusable, so I opened another enhancement ...
> Note: this implementation should go in a different class (e.g. one extending FileCacheStore) because it makes existing file cache stores unusable.
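A small worked example of the mask arithmetic above (editor's sketch, not from the issue): keeping n bits of the hash code caps the number of distinct bucket/lock values at 2^n.
{code}
public class BucketMaskMath {
    public static void main(String[] args) {
        int mask = 0xfffffc00; // keeps the top 22 bits, zeroes the low 10
        System.out.println(1L << Integer.bitCount(mask)); // 2^22 = 4194304

        // For the 4096 buckets mentioned in the comment above you would keep
        // only 12 bits, e.g. a hypothetical mask of 0xfff00000 (2^12 = 4096).
        System.out.println(1L << Integer.bitCount(0xfff00000));
    }
}
{code}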
[JBoss JIRA] (ISPN-2439) Deadlock in Map/Reduce tasks
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2439?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2439:
-----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Deadlock in Map/Reduce tasks
> ----------------------------
>
> Key: ISPN-2439
> URL: https://issues.jboss.org/browse/ISPN-2439
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.2.0.CR2
>
> Attachments: dfnmrt.log.gz
>
>
> It looks like the Map/Reduce intermediate caches use pessimistic transactions, but the transactions are not guaranteed to write to the keys in the same order. So it's possible for two tasks to get into a deadlock, ending with a TimeoutException:
> {noformat}
> 16:18:40,649 ERROR (testng-DistributedFourNodesMapReduceTest:) [UnitTestTestNGListener] Test testCombinerDoesNotChangeResult(org.infinispan.distexec.mapreduce.DistributedFourNodesMapReduceTest) failed.
> org.infinispan.CacheException: Could not invoke map phase of MapReduce task on remote nodes
> at org.infinispan.distexec.mapreduce.MapReduceTask.invokeEverywhere(MapReduceTask.java:562)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:374)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:315)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testCombinerDoesNotChangeResult(BaseWordCountMapReduceTest.java:188)
> ...
> Caused by: org.infinispan.CacheException: org.infinispan.CacheException: Could not move intermediate keys/values for M/R task 04244b4b-08b1-4fc4-9755-ed02f3f35a3a
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:97)
> at org.infinispan.commands.read.MapCombineCommand.perform(MapCombineCommand.java:89)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:110)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:82)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:244)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:217)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:483)
> ...
> Caused by: org.infinispan.CacheException: Could not move intermediate keys/values for M/R task 04244b4b-08b1-4fc4-9755-ed02f3f35a3a
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.combine(MapReduceManagerImpl.java:281)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:95)
> ... 26 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [10 seconds] on key [JBoss] for requestor [GlobalTransaction:<NodeD-56763>:10429:remote]! Lock held by [GlobalTransaction:<NodeB-55590>:10432:remote]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217)
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLock(LockManagerImpl.java:190)
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockKeyAndCheckOwnership(AbstractTxLockingInterceptor.java:190)
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockAndRegisterBackupLock(AbstractTxLockingInterceptor.java:125)
> at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitLockControlCommand(PessimisticLockingInterceptor.java:248)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
> at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:177)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:125)
> at org.infinispan.interceptors.TxInterceptor.visitLockControlCommand(TxInterceptor.java:174)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:212)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:187)
> at org.infinispan.statetransfer.StateTransferInterceptor.visitLockControlCommand(StateTransferInterceptor.java:131)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:129)
> at org.infinispan.interceptors.InvocationContextInterceptor.visitLockControlCommand(InvocationContextInterceptor.java:98)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:347)
> at org.infinispan.commands.control.LockControlCommand.perform(LockControlCommand.java:150)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:110)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:82)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:244)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:217)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:483)
> ...
> {noformat}
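A minimal sketch of the usual remedy for this kind of lock-ordering deadlock (editor's illustration; the actual fix is in the pull request linked below): make every task acquire its pessimistic locks in one canonical key order, so no two transactions can hold locks the other is waiting for.
{code}
import java.util.Collection;
import java.util.SortedSet;
import java.util.TreeSet;
import org.infinispan.AdvancedCache;

public class OrderedLockingSketch {
    // Assumes a pessimistic transactional cache and String keys; intended to
    // be called inside an active transaction before writing intermediate keys.
    static void lockInCanonicalOrder(AdvancedCache<String, ?> cache,
                                     Collection<String> keys) {
        SortedSet<String> ordered = new TreeSet<>(keys); // dedupe + total order
        for (String key : ordered) {
            cache.lock(key); // all tasks lock in the same ascending order
        }
    }
}
{code}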
[JBoss JIRA] (ISPN-2439) Deadlock in Map/Reduce tasks
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2439?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2439:
-----------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/1583
> Deadlock in Map/Reduce tasks
> ----------------------------
>
> Key: ISPN-2439
> URL: https://issues.jboss.org/browse/ISPN-2439
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.2.0.CR2
>
> Attachments: dfnmrt.log.gz
>
>
> It looks like the Map/Reduce intermediate caches use pessimistic transactions, but the transactions are not guaranteed to write to the keys in the same order. So it's possible for two tasks to get into a deadlock, ending with a TimeoutException:
> {noformat}
> 16:18:40,649 ERROR (testng-DistributedFourNodesMapReduceTest:) [UnitTestTestNGListener] Test testCombinerDoesNotChangeResult(org.infinispan.distexec.mapreduce.DistributedFourNodesMapReduceTest) failed.
> org.infinispan.CacheException: Could not invoke map phase of MapReduce task on remote nodes
> at org.infinispan.distexec.mapreduce.MapReduceTask.invokeEverywhere(MapReduceTask.java:562)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:374)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:315)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testCombinerDoesNotChangeResult(BaseWordCountMapReduceTest.java:188)
> ...
> Caused by: org.infinispan.CacheException: org.infinispan.CacheException: Could not move intermediate keys/values for M/R task 04244b4b-08b1-4fc4-9755-ed02f3f35a3a
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:97)
> at org.infinispan.commands.read.MapCombineCommand.perform(MapCombineCommand.java:89)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:110)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:82)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:244)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:217)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:483)
> ...
> Caused by: org.infinispan.CacheException: Could not move intermediate keys/values for M/R task 04244b4b-08b1-4fc4-9755-ed02f3f35a3a
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.combine(MapReduceManagerImpl.java:281)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:95)
> ... 26 more
> Caused by: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [10 seconds] on key [JBoss] for requestor [GlobalTransaction:<NodeD-56763>:10429:remote]! Lock held by [GlobalTransaction:<NodeB-55590>:10432:remote]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217)
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLock(LockManagerImpl.java:190)
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockKeyAndCheckOwnership(AbstractTxLockingInterceptor.java:190)
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockAndRegisterBackupLock(AbstractTxLockingInterceptor.java:125)
> at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitLockControlCommand(PessimisticLockingInterceptor.java:248)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
> at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:177)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:125)
> at org.infinispan.interceptors.TxInterceptor.visitLockControlCommand(TxInterceptor.java:174)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:212)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:187)
> at org.infinispan.statetransfer.StateTransferInterceptor.visitLockControlCommand(StateTransferInterceptor.java:131)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:129)
> at org.infinispan.interceptors.InvocationContextInterceptor.visitLockControlCommand(InvocationContextInterceptor.java:98)
> at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:131)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:347)
> at org.infinispan.commands.control.LockControlCommand.perform(LockControlCommand.java:150)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:110)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:82)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:244)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:217)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:483)
> ...
> {noformat}