[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-3262:
-----------------------------------
Priority: Minor (was: Major)
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Mircea Markus
> Priority: Minor
>
> I'm getting an NPE in these tests:
> https://github.com/mlinhard/infinispan/commit/d1efd673ba6c34a4f6383b16740...
> It's caused by a thread asking the cache store to load all keys after it has been shut down.
> It could also be considered a problem of the LevelDB implementation, which doesn't guard against this.
> These are the stack traces of the causing events:
> {code}
> DbImpl closing org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread main
> org.iq80.leveldb.impl.DbImpl.close(DbImpl.java:-1)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.stop(LevelDBCacheStore.java:107)
> org.infinispan.loaders.CacheLoaderManagerImpl.stop(CacheLoaderManagerImpl.java:296)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
> org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
> org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
> org.infinispan.factories.ComponentRegistry.stop(ComponentRegistry.java:242)
> org.infinispan.CacheImpl.stop(CacheImpl.java:604)
> org.infinispan.CacheImpl.stop(CacheImpl.java:599)
> org.infinispan.test.TestingUtil.killCaches(TestingUtil.java:734)
> org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:590)
> org.infinispan.loaders.MultiCacheStoreFunctionalTest.testStartStopOfBackupDoesntRewriteValue(MultiCacheStoreFunctionalTest.java:107)
> sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:601)
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> org.testng.TestRunner.privateRun(TestRunner.java:767)
> org.testng.TestRunner.run(TestRunner.java:617)
> org.testng.SuiteRunner.runTest(SuiteRunner.java:335)
> org.testng.SuiteRunner.runSequentially(SuiteRunner.java:330)
> org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> org.testng.SuiteRunner.run(SuiteRunner.java:240)
> org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> org.testng.TestNG.run(TestNG.java:1057)
> org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> DbImpl iterator requested org.iq80.leveldb.impl.DbImpl@121e74ed
> Stack trace for thread OOB-1,ISPN,NodeC-18285
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:-1)
> org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> org.jgroups.JChannel.up(JChannel.java:707)
> org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> org.jgroups.protocols.FC.up(FC.java:479)
> org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> org.jgroups.protocols.Discovery.up(Discovery.java:359)
> org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> 2013-06-21 10:32:42,333 WARN [CacheTopologyControlCommand] (OOB-1,ISPN,NodeC-18285) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=CH_UPDATE, sender=NodeA-43485, joinInfo=null, topologyId=6, currentCH=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeC-18285]}, pendingCH=null, throwable=null, viewId=3}
> java.lang.NullPointerException
> at org.iq80.leveldb.impl.DbImpl.internalIterator(DbImpl.java:757)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:722)
> at org.iq80.leveldb.impl.DbImpl.iterator(DbImpl.java:83)
> at org.infinispan.loaders.leveldb.LevelDBCacheStore.loadAllKeysLockSafe(LevelDBCacheStore.java:216)
> at org.infinispan.loaders.LockSupportCacheStore.loadAllKeys(LockSupportCacheStore.java:179)
> at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:800)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:329)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:195)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:61)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:121)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:207)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:146)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FC.up(FC.java:479)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}
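A minimal sketch of the kind of guard the report implies is missing: a "stopped" flag checked on every load, so a post-shutdown loadAllKeys fails fast with a clear IllegalStateException instead of an NPE deep inside the closed LevelDB instance. This is hypothetical illustration code, not actual Infinispan or LevelDB API; the class and method names are invented.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical store with a shutdown guard on its read path.
class GuardedStore {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    private final Set<String> keys = ConcurrentHashMap.newKeySet();

    void store(String key) {
        checkOpen();
        keys.add(key);
    }

    Set<String> loadAllKeys() {
        checkOpen();   // fail fast instead of touching a closed database
        return Set.copyOf(keys);
    }

    void stop() {
        stopped.set(true);   // after this point every load is rejected
    }

    private void checkOpen() {
        if (stopped.get()) {
            throw new IllegalStateException("store is stopped");
        }
    }
}
```

With such a guard, the state-transfer thread in the trace above would get an immediate, descriptive failure rather than the NullPointerException from DbImpl.internalIterator.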
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño reassigned ISPN-3262:
--------------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
>
[JBoss JIRA] (ISPN-3262) LevelDB cache store allows loading after shutdown
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3262?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-3262:
-----------------------------------
Fix Version/s: 6.0.0.Final
> LevelDB cache store allows loading after shutdown
> -------------------------------------------------
>
> Key: ISPN-3262
> URL: https://issues.jboss.org/browse/ISPN-3262
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final, 5.3.0.Final
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 6.0.0.Final
>
>
[JBoss JIRA] (ISPN-3273) Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3273?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-3273:
-------------------------------------
This issue is caused by assumeOriginKeptEntryInL1 only being checked against the origin in the ctx. In DIST mode, the calling node contacts the primary owner first; the primary owner then forwards the update to the other owner nodes. The other owners therefore think the call was started by the primary node and will invalidate all known requestors for the given key. Currently there is no way to override the origin, so the other owners cannot know which node not to send to.
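The fix described above amounts to letting the forwarded command carry the real origin so that backup owners can exclude it when computing invalidation targets. A small sketch of that idea, using invented names (this is not Infinispan's actual API):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical backup-owner logic: given the known L1 requestors for a
// key and the node that actually initiated the write, compute which
// nodes should receive an invalidation. Excluding the real origin is
// what assumeOriginKeptEntryInL1 is meant to achieve.
class BackupOwner {
    static Set<String> invalidationTargets(List<String> requestors, String realOrigin) {
        Set<String> targets = new HashSet<>(requestors);
        targets.remove(realOrigin);   // the origin deliberately kept the entry in L1
        return targets;
    }
}
```

Without the real origin being propagated, the backup would pass the primary owner's address here instead, and the true originator would receive a spurious invalidation.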
> Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
> --------------------------------------------------------------------------
>
> Key: ISPN-3273
> URL: https://issues.jboss.org/browse/ISPN-3273
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Reporter: William Burns
> Assignee: William Burns
> Attachments: DistSyncFuncTest.java
>
>
> When a write operation causes an L1 invalidation, a boolean named assumeOriginKeptEntryInL1 tells the owner not to send an invalidation to the originating node that triggered the update. This works fine for the primary owner; however, any backup owners believe the origin is the primary owner and so may send invalidations to the real origin.
> This affects both tx and non-tx caches. Sync tx caches don't see the problem, since locking prevents the invalidation, but it still causes an unneeded network round trip, which can add latency.
[JBoss JIRA] (ISPN-3273) Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3273?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-3273:
--------------------------------
Attachment: DistSyncFuncTest.java
Attached a test file that demonstrates the issue.
[JBoss JIRA] (ISPN-3273) Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3273?page=com.atlassian.jira.plugin.... ]
William Burns reassigned ISPN-3273:
-----------------------------------
Assignee: William Burns (was: Mircea Markus)
[JBoss JIRA] (ISPN-3273) Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
by William Burns (JIRA)
William Burns created ISPN-3273:
-----------------------------------
Summary: Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
Key: ISPN-3273
URL: https://issues.jboss.org/browse/ISPN-3273
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Reporter: William Burns
Assignee: Mircea Markus
When a write operation causes an L1 invalidation, a boolean named assumeOriginKeptEntryInL1 tells the owner not to send an invalidation to the originating node that triggered the update. This works fine for the primary owner; however, any backup owners believe the origin is the primary owner and so may send invalidations to the real origin.
This affects both tx and non-tx caches. Sync tx caches don't see the problem, since locking prevents the invalidation, but it still causes an unneeded network round trip, which can add latency.
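One conceivable direction for a fix, sketched here purely as an illustration (the class and field names are invented and are not part of the Infinispan API), is for the primary to carry the real origin on the forwarded command so that backup owners can exclude it from their invalidation targets instead of inferring the origin from the sender:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: the forwarded write command carries the real origin
// explicitly, rather than letting backups infer it from the message sender.
public class ForwardedWrite {
    final String realOrigin; // node that actually issued the write
    final String sender;     // node that forwarded the command (the primary)

    ForwardedWrite(String realOrigin, String sender) {
        this.realOrigin = realOrigin;
        this.sender = sender;
    }

    /** A backup owner picks invalidation targets, skipping the real origin. */
    Set<String> invalidationTargets(Set<String> requestors) {
        Set<String> targets = new HashSet<>(requestors);
        targets.remove(realOrigin); // not 'sender', which is just the primary
        return targets;
    }
}
```

With this in place the backup would behave like the primary: the node that kept the entry in L1 is never sent an invalidation, regardless of which node forwarded the command.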