[JBoss JIRA] (ISPN-3234) Upgrade to JCache 0.x
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3234?page=com.atlassian.jira.plugin.... ]
Work on ISPN-3234 started by Galder Zamarreño.
> Upgrade to JCache 0.x
> ---------------------
>
> Key: ISPN-3234
> URL: https://issues.jboss.org/browse/ISPN-3234
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: JCache
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> When the next JCache version is released, upgrade to it and re-enable the following TCK tests, which were disabled in ISPN-3213:
> {code}org.jsr107.tck.CacheStatisticsTest#testCacheStatistics
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorGet
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorCreate
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorRemove
> org.jsr107.tck.CacheStatisticsTest#testIterateAndRemove
> {code}
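Such TCK tests are typically disabled at the build level rather than in the test sources. A hypothetical Maven Surefire sketch of the idea (the actual mechanism used in ISPN-3213 may differ; the pattern below is illustrative only) — re-enabling the tests then just means deleting the excludes:

```xml
<!-- Hypothetical Surefire configuration: keeps the failing TCK statistics
     tests out of the build until the next JCache release. Removing the
     <excludes> section re-enables them. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/CacheStatisticsTest.java</exclude>
    </excludes>
  </configuration>
</plugin>
```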
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3170) create a Metadata store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3170?page=com.atlassian.jira.plugin.... ]
Mircea Markus edited comment on ISPN-3170 at 7/5/13 9:39 AM:
-------------------------------------------------------------
Also consider making the Hot Rod server use this cache instead of defining its own topology cache.
was (Author: mircea.markus):
Also consider making hotrod server use this cache instead of its own.
> create a Metadata store
> -----------------------
>
> Key: ISPN-3170
> URL: https://issues.jboss.org/browse/ISPN-3170
> Project: Infinispan
> Issue Type: Sub-task
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 6.0.0.Alpha1, 6.0.0.Final
>
>
> See #3: https://community.jboss.org/wiki/QueryingDesignInInfinispan
> This is intended as a general-purpose internal component for sharing information between the nodes in the cluster.
> E.g.:
> - protobuf files, once uploaded to one node, need to be made available to all the nodes in the cluster
> - indexing information: an index is defined on one node, but this information (what's indexed) needs to be made available to all the nodes in the cluster
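The shape of the component described above can be sketched as follows. This is a hypothetical single-JVM stand-in, using a plain concurrent map plus change listeners in place of the replicated internal cache that the real implementation would use; all names are illustrative, not Infinispan API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

// Hypothetical sketch of the proposed metadata store: a shared key/value map
// whose updates are pushed to registered listeners, standing in for
// cluster-wide replication of protobuf files or index definitions.
public class MetadataStoreSketch {
    private final Map<String, byte[]> entries = new ConcurrentHashMap<>();
    private final List<BiConsumer<String, byte[]>> listeners = new CopyOnWriteArrayList<>();

    // Register a callback invoked when metadata is published; in the real
    // component this corresponds to remote nodes receiving the replicated update.
    public void addListener(BiConsumer<String, byte[]> listener) {
        listeners.add(listener);
    }

    // Publish a piece of metadata, e.g. a protobuf descriptor uploaded to one node.
    public void publish(String name, byte[] content) {
        entries.put(name, content);
        for (BiConsumer<String, byte[]> listener : listeners) {
            listener.accept(name, content);
        }
    }

    public byte[] get(String name) {
        return entries.get(name);
    }

    public static void main(String[] args) {
        MetadataStoreSketch store = new MetadataStoreSketch();
        StringBuilder seen = new StringBuilder();
        store.addListener((name, content) -> seen.append(name));
        store.publish("user.proto", "message User {}".getBytes());
        if (!"user.proto".equals(seen.toString())) throw new AssertionError();
        if (store.get("user.proto") == null) throw new AssertionError();
    }
}
```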
[JBoss JIRA] (ISPN-1523) Remote nodes send duplicate invalidation messages
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-1523?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-1523:
-------------------------------------
And just to clarify, this is only true for non-tx caches. In a tx cache only the primary owner sends the invalidation, since it is the one holding the locked key.
{code}
} else if (isL1CacheEnabled && !ctx.isOriginLocal() && !ctx.getLockedKeys().isEmpty()) {
   // We fall into this block if we are a remote node that happens to be the primary data owner and holds locked keys;
   // it is still our responsibility to invalidate L1 caches in the cluster.
   blockOnL1FutureIfNeeded(flushL1Caches(ctx));
}
{code}
So we could look into changing the non-tx case to do the same.
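The proposed rule above can be sketched as a toy predicate: only the primary owner of a key (the first owner in its consistent-hash owner list) sends the L1 invalidation, so backup owners stay quiet and no duplicate messages are produced. All names here are hypothetical, not Infinispan API:

```java
import java.util.List;

// Hypothetical sketch of the single-sender rule discussed above: for each
// key, only the primary owner sends the L1 invalidation. Remote backup
// owners return false and therefore send nothing, avoiding the duplicate
// invalidation messages seen in the logs.
public class L1InvalidationRule {
    // Returns true when localNode should send the invalidation for a key
    // whose consistent-hash owner list is ownersOfKey (primary owner first).
    public static boolean shouldSendInvalidation(String localNode, List<String> ownersOfKey) {
        return !ownersOfKey.isEmpty() && ownersOfKey.get(0).equals(localNode);
    }

    public static void main(String[] args) {
        List<String> owners = List.of("NodeA", "NodeD"); // NodeA is the primary owner
        if (!shouldSendInvalidation("NodeA", owners)) throw new AssertionError();
        if (shouldSendInvalidation("NodeD", owners)) throw new AssertionError(); // backup stays quiet
    }
}
```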
> Remote nodes send duplicate invalidation messages
> -------------------------------------------------
>
> Key: ISPN-1523
> URL: https://issues.jboss.org/browse/ISPN-1523
> Project: Infinispan
> Issue Type: Enhancement
> Components: Distributed Cache
> Affects Versions: 5.1.0.BETA4
> Reporter: Dan Berindei
> Assignee: William Burns
> Priority: Minor
>
> I thought only the originator should send invalidation messages, but I'm seeing these messages in the log:
> {noformat}
> 2011-11-11 11:10:27,608 TRACE (OOB-2,Infinispan-Cluster,NodeD-8993) [org.infinispan.interceptors.DistributionInterceptor] Put occuring on node, requesting cache invalidation for keys [k1]. Origin of command is remote
> 2011-11-11 11:10:27,608 TRACE (OOB-3,Infinispan-Cluster,NodeA-31187) [org.infinispan.interceptors.DistributionInterceptor] Put occuring on node, requesting cache invalidation for keys [k1]. Origin of command is remote
> 2011-11-11 11:10:27,608 TRACE (OOB-2,Infinispan-Cluster,NodeD-8993) [org.infinispan.distribution.L1ManagerImpl] Invalidating L1 caches for keys [k1]
> 2011-11-11 11:10:27,608 TRACE (OOB-3,Infinispan-Cluster,NodeA-31187) [org.infinispan.distribution.L1ManagerImpl] Invalidating L1 caches for keys [k1]
> {noformat}
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3262:
--------------------------------
Fix Version/s: 5.3.1.Final
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.3.1.Final, 6.0.0.Alpha1, 6.0.0.Final
>
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after the store has been shut down.
> It might also be considered a problem of the LevelDB implementation, which doesn't guard against this.
> These are the stack traces of the causing events:
> {code}
> DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread main
> org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
> org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
> org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
> org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
> org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
> org.infinispan.CacheImpl.stop(CacheImpl.java:604)
> org.infinispan.CacheImpl.stop(CacheImpl.java:599)
> org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
> org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
> org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> org.testng.TestRunner.privateRun(TestRunner.java:767)
> org.testng.TestRunner.run(TestRunner.java:617)
> org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
> org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
> org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> org.testng.SuiteRunner.run(SuiteRunner.java:240)
> org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> org.testng.TestNG.run(TestNG.java:1057)
> org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread OOB-1,ISPN,NodeC-18285
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> org.jgroups.JChannel.up(JChannel.java:707)
> org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> org.jgroups.protocols.FC.up(FC.java:479)
> org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> org.jgroups.protocols.Discovery.up(Discovery.java:359)
> org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> 2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
> java.lang.NullPointerException
> at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FC.up(FC.java:479)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}
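The missing guard described in the report can be sketched as follows. This is a hypothetical simplification (plain in-memory key set, illustrative names, not the actual LevelDBCacheStore API): the store tracks whether it has been stopped and rejects key loading afterwards with a clear exception, instead of letting the closed underlying DB throw a NullPointerException:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a stop-guard for a cache store: once stop() has
// been called, loadAllKeys() fails fast with IllegalStateException rather
// than touching the already-closed backing database.
public class GuardedStoreSketch {
    private final Set<String> keys = ConcurrentHashMap.newKeySet();
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    public void store(String key) {
        keys.add(key);
    }

    public void stop() {
        stopped.set(true); // the real store would also close the backing DB here
    }

    public Set<String> loadAllKeys() {
        if (stopped.get()) {
            throw new IllegalStateException("Cache store is stopped");
        }
        return Set.copyOf(keys);
    }

    public static void main(String[] args) {
        GuardedStoreSketch store = new GuardedStoreSketch();
        store.store("k1");
        if (!store.loadAllKeys().contains("k1")) throw new AssertionError();
        store.stop();
        try {
            store.loadAllKeys();
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            // the guard fired instead of an NPE from the closed DB
        }
    }
}
```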
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3262:
--------------------------------
Fix Version/s: 6.0.0.Alpha1
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 6.0.0.Alpha1, 6.0.0.Final
>
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after the store has been shut down.
> It might also be considered a problem of the LevelDB implementation, which doesn't guard against this.
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3262:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 6.0.0.Alpha1, 6.0.0.Final
>
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after the store has been shut down.
> It might also be considered a problem of the LevelDB implementation, which doesn't guard against this.
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FC.up(FC.java:479)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}
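One plausible reading of the trace above is that `loadAllKeys()` is iterating a store whose underlying LevelDB handle has already been torn down during the topology change, so the iterator dereferences null internal state. A minimal defensive sketch of that idea (hypothetical class and method names, not the actual Infinispan or iq80 leveldb code):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: guard key iteration against a store that has been
// stopped concurrently, instead of letting the underlying iterator throw
// a NullPointerException.
public class SafeKeyLoader {
    private volatile boolean stopped;          // set when the store is torn down
    private final Set<String> keys = new HashSet<>();

    public void put(String key) {
        keys.add(key);
    }

    public void stop() {
        stopped = true;                        // models the DB handle being closed
    }

    /** Returns all keys, or an empty set if the store was already stopped. */
    public Set<String> loadAllKeys() {
        if (stopped) {
            return Collections.emptySet();     // fail soft rather than NPE
        }
        return new HashSet<>(keys);
    }
}
```

Whether the real fix belongs in the store or in the state-transfer shutdown ordering is a separate question; the sketch only illustrates the failure mode.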
[JBoss JIRA] (ISPN-3220) Integration tests rely on a nonexistent artifact
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3220?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3220:
--------------------------------
Fix Version/s: 5.3.1.Final
> Integration tests rely on a nonexistent artifact
> ------------------------------------------------
>
> Key: ISPN-3220
> URL: https://issues.jboss.org/browse/ISPN-3220
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 5.3.0.CR1
> Reporter: Manik Surtani
> Assignee: Tristan Tarrant
> Priority: Critical
> Fix For: 5.3.1.Final, 6.0.0.Alpha1
>
>
> When running in a clean environment (no cached .m2 repo), {{AS Module Integration Tests}} fails with a broken dependency on {{org.jboss.as:jboss-as-dist:zip:7.2.0.Alpha1-redhat-4}}.
> The following repos are queried:
> {code}
> [ERROR] jboss-public-repository-group (https://repository.jboss.org/nexus/content/groups/public/, releases=true, snapshots=true),
> [ERROR] jboss-public-repository (https://repository.jboss.org/nexus/content/groups/public, releases=true, snapshots=true),
> [ERROR] central (http://repo1.maven.org/maven2, releases=true, snapshots=false)
> {code}
> If these integration tests rely on specific installs of JBoss AS/EAP/WildFly, they should be made optional and *not* enabled by default, as they break community builds/tests.
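One common way to make such container-dependent tests opt-in is a Maven profile that is inactive by default. A hedged sketch (the profile id and version property are illustrative assumptions, not the actual Infinispan build configuration):

```xml
<!-- Hypothetical pom.xml fragment: the AS module integration tests only
     resolve the jboss-as-dist artifact when the profile is enabled
     explicitly, e.g. mvn verify -Pas-integration-tests, so a clean
     community build never hits the broken dependency. -->
<profiles>
  <profile>
    <id>as-integration-tests</id>
    <!-- No activation block: the profile is off by default. -->
    <dependencies>
      <dependency>
        <groupId>org.jboss.as</groupId>
        <artifactId>jboss-as-dist</artifactId>
        <version>${version.org.jboss.as}</version> <!-- assumed property name -->
        <type>zip</type>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```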