[JBoss JIRA] (ISPN-7586) Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7586?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-7586:
------------------------------------
Description:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
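For context, a minimal sketch of the setup under discussion, assuming typical rolling-upgrade settings (the cache name and server address are made up, and {{ignoreModifications(true)}} is taken as the way the 'read-only' mode is expressed): a target-cluster cache whose {{RemoteStore}} points at the source cluster and silently drops the writes and removes shown above.
{code:java}
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.persistence.remote.configuration.RemoteStoreConfigurationBuilder;

public class ReadOnlyRemoteStoreSketch {
   public static Configuration targetCacheConfiguration() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.persistence()
            .addStore(RemoteStoreConfigurationBuilder.class)
               .remoteCacheName("myCache")    // assumed name of the cache on the source cluster
               .ignoreModifications(true)     // 'read-only': puts and removes never reach the source
               .addServer()
                  .host("source-node")        // assumed source cluster address
                  .port(11222);
      return builder.build();
   }
}
{code}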
was:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
> Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
> ----------------------------------------------------------------------------------
>
> Key: ISPN-7586
> URL: https://issues.jboss.org/browse/ISPN-7586
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
>
> Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
> The client can then observe the following:
> {code:java}
> client.put("K", "V")
> // returns "V"
> client.get("K")
> // The remove is not propagated to the source cluster,
> // because the RemoteStore is in 'read-only' mode.
> client.remove("K")
> // Still returns "V": although removed from the target cluster,
> // the value is fetched again from the remote store.
> client.get("K")
> {code}
> This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
[JBoss JIRA] (ISPN-7586) Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7586?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-7586:
------------------------------------
Description:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
was:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
> Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
> ----------------------------------------------------------------------------------
>
> Key: ISPN-7586
> URL: https://issues.jboss.org/browse/ISPN-7586
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
>
> Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
> The client can then observe the following:
> {code:java}
> client.put("K", "V")
> // returns "V"
> client.get("K")
> // The remove is not propagated to the source cluster,
> // because the RemoteStore is in 'read-only' mode.
> client.remove("K")
> // Still returns "V": although removed from the target cluster,
> // the value is fetched again from the remote store.
> client.get("K")
> {code}
> This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
[JBoss JIRA] (ISPN-7586) Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7586?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-7586:
------------------------------------
Description:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
was:
Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
The client can then observe the following:
{code:java}
client.put("K", "V")
// returns "V"
client.get("K")
// The remove is not propagated to the source cluster,
// because the RemoteStore is in 'read-only' mode.
client.remove("K")
// Still returns "V": although removed from the target cluster,
// the value is fetched again from the remote store.
client.get("K")
{code}
This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
> Rolling Upgrade: use of Remote Store in read-only mode causes data inconsistencies
> ----------------------------------------------------------------------------------
>
> Key: ISPN-7586
> URL: https://issues.jboss.org/browse/ISPN-7586
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
>
> Assume a Hot Rod client pointing to the target cluster, where the target cluster has a {{RemoteStore}} pointing to the source cluster in read-only mode.
> The client can then observe the following:
> {code:java}
> client.put("K", "V")
> // returns "V"
> client.get("K")
> // The remove is not propagated to the source cluster,
> // because the RemoteStore is in 'read-only' mode.
> client.remove("K")
> // Still returns "V": although removed from the target cluster,
> // the value is fetched again from the remote store.
> client.get("K")
> {code}
> This can break existing applications that expect transparent and consistent access to data during a Rolling Upgrade. Clearly the remote store should not be in read-only mode for client interaction, but at the same time the Rolling Upgrade itself should not be allowed to cause writes back to the remote store.
[JBoss JIRA] (ISPN-7604) StateTransferLockImpl.stop() never runs
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7604:
----------------------------------
Summary: StateTransferLockImpl.stop() never runs
Key: ISPN-7604
URL: https://issues.jboss.org/browse/ISPN-7604
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 9.0.0.CR2
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Final
{{StateTransferLockImpl.stop()}} is supposed to signal the installation of topology {{Integer.MAX_VALUE}}, in order to unblock any commands waiting for a new topology. However, it doesn't have the {{@Stop}} annotation, so it's never called, and threads waiting on a new topology will block forever:
{noformat}
"jgroups-4,test-NodeB-41665@3770" prio=5 tid=0x1f nid=NA waiting
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runSync(BaseBlockingRunnable.java:48)
at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:38)
at org.infinispan.remoting.inboundhandler.TrianglePerCacheInboundInvocationHandler.handleStateRequestCommand(TrianglePerCacheInboundInvocationHandler.java:171)
at org.infinispan.remoting.inboundhandler.TrianglePerCacheInboundInvocationHandler.handle(TrianglePerCacheInboundInvocationHandler.java:109)
at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleCacheRpcCommand(GlobalInboundInvocationHandler.java:120)
at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:175)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:149)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:383)
at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:356)
at org.jgroups.blocks.RequestCorrelator.receiveMessageBatch(RequestCorrelator.java:326)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:586)
at org.jgroups.JChannel.up(JChannel.java:813)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:896)
at org.jgroups.protocols.RSVP.up(RSVP.java:233)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:293)
at org.jgroups.protocols.UNICAST3.deliverBatch(UNICAST3.java:1083)
at org.jgroups.protocols.UNICAST3.removeAndDeliver(UNICAST3.java:892)
at org.jgroups.protocols.UNICAST3.handleBatchReceived(UNICAST3.java:858)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:529)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:695)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.protocols.TP.passBatchUp(TP.java:1229)
at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.passBatchUp(MaxOneThreadPerSender.java:284)
at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:136)
at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.run(MaxOneThreadPerSender.java:273)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
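A minimal sketch of the mechanism described above, with all names hypothetical (the real {{StateTransferLockImpl}} is more involved): commands park on a per-topology future, and {{stop()}} releases every waiter by signalling a topology id no waiter can exceed. The fix implied by this issue is simply adding the {{@Stop}} annotation so the component registry actually invokes the method on shutdown.
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical simplification of a topology-wait lock.
public class TopologyWaitSketch {
   private final ConcurrentMap<Integer, CompletableFuture<Void>> waiters =
         new ConcurrentHashMap<>();

   // A command blocks on join() of this future until its topology is installed.
   public CompletableFuture<Void> topologyFuture(int expectedTopologyId) {
      return waiters.computeIfAbsent(expectedTopologyId, id -> new CompletableFuture<>());
   }

   public void notifyTopologyInstalled(int topologyId) {
      waiters.forEach((id, future) -> {
         if (id <= topologyId) {
            future.complete(null);
         }
      });
   }

   // In the real component this method needs @Stop; without the annotation the
   // lifecycle machinery never calls it, and every join() above blocks forever.
   public void stop() {
      // Integer.MAX_VALUE is >= any expected topology id, so all waiters are released.
      notifyTopologyInstalled(Integer.MAX_VALUE);
   }
}
{code}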
[JBoss JIRA] (ISPN-7604) StateTransferLockImpl.stop() never runs
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7604?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7604:
-------------------------------
Status: Open (was: New)
> StateTransferLockImpl.stop() never runs
> ---------------------------------------
>
> Key: ISPN-7604
> URL: https://issues.jboss.org/browse/ISPN-7604
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Final
>
>
> {{StateTransferLockImpl.stop()}} is supposed to signal the installation of topology {{Integer.MAX_VALUE}}, in order to unblock any commands waiting for a new topology. However, it doesn't have the {{@Stop}} annotation, so it's never called, and threads waiting on a new topology will block forever:
> {noformat}
> "jgroups-4,test-NodeB-41665@3770" prio=5 tid=0x1f nid=NA waiting
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runSync(BaseBlockingRunnable.java:48)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:38)
> at org.infinispan.remoting.inboundhandler.TrianglePerCacheInboundInvocationHandler.handleStateRequestCommand(TrianglePerCacheInboundInvocationHandler.java:171)
> at org.infinispan.remoting.inboundhandler.TrianglePerCacheInboundInvocationHandler.handle(TrianglePerCacheInboundInvocationHandler.java:109)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleCacheRpcCommand(GlobalInboundInvocationHandler.java:120)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:175)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:149)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:383)
> at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:356)
> at org.jgroups.blocks.RequestCorrelator.receiveMessageBatch(RequestCorrelator.java:326)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:586)
> at org.jgroups.JChannel.up(JChannel.java:813)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:896)
> at org.jgroups.protocols.RSVP.up(RSVP.java:233)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.stack.Protocol.up(Protocol.java:344)
> at org.jgroups.stack.Protocol.up(Protocol.java:344)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:293)
> at org.jgroups.protocols.UNICAST3.deliverBatch(UNICAST3.java:1083)
> at org.jgroups.protocols.UNICAST3.removeAndDeliver(UNICAST3.java:892)
> at org.jgroups.protocols.UNICAST3.handleBatchReceived(UNICAST3.java:858)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:529)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:695)
> at org.jgroups.stack.Protocol.up(Protocol.java:344)
> at org.jgroups.protocols.TP.passBatchUp(TP.java:1229)
> at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.passBatchUp(MaxOneThreadPerSender.java:284)
> at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:136)
> at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.run(MaxOneThreadPerSender.java:273)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
[JBoss JIRA] (ISPN-7598) Core test suite leaks threads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7598?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7598:
-------------------------------
Summary: Core test suite leaks threads (was: LocalDistributedExecutorTest leaks threads)
> Core test suite leaks threads
> -----------------------------
>
> Key: ISPN-7598
> URL: https://issues.jboss.org/browse/ISPN-7598
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.0.0.Final
>
>
> Each test method creates a new {{DefaultExecutorService}} instance, and each instance uses a new local executor created with {{Executors.newCachedThreadPool(...)}}. But the {{DefaultExecutorService}} is created with {{takeExecutorOwnership = false}}, and so the local executor is not stopped on shutdown.
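A minimal runnable sketch of the leak pattern and the fix it calls for, using only JDK types; the ownership flag models the {{takeExecutorOwnership}} parameter mentioned above, everything else is made up.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorOwnershipSketch {
   // Models the wrapper: shutdown() only touches the wrapped executor
   // when ownership was transferred at construction time.
   static class OwningWrapper {
      private final ExecutorService delegate;
      private final boolean takeExecutorOwnership;

      OwningWrapper(ExecutorService delegate, boolean takeExecutorOwnership) {
         this.delegate = delegate;
         this.takeExecutorOwnership = takeExecutorOwnership;
      }

      void shutdown() {
         if (takeExecutorOwnership) {
            delegate.shutdownNow();
         }
         // With takeExecutorOwnership = false this is a no-op for the
         // delegate, so its cached threads linger: the leak in this issue.
      }
   }

   public static void main(String[] args) throws InterruptedException {
      ExecutorService localExecutor = Executors.newCachedThreadPool();
      OwningWrapper service = new OwningWrapper(localExecutor, false);
      service.shutdown(); // leaks localExecutor's threads...

      // ...so the creator must stop the pool itself, e.g. in test teardown:
      localExecutor.shutdownNow();
      if (!localExecutor.awaitTermination(10, TimeUnit.SECONDS)) {
         throw new IllegalStateException("executor threads leaked");
      }
   }
}
{code}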
[JBoss JIRA] (ISPN-7598) Core test suite leaks threads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7598?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7598:
-------------------------------
Description:
Each {{LocalDistributedExecutorTest}} test method creates a new {{DefaultExecutorService}} instance, and each instance uses a new local executor created with {{Executors.newCachedThreadPool(...)}}. But the {{DefaultExecutorService}} is created with {{takeExecutorOwnership = false}}, and so the local executor is not stopped on shutdown.
was:
Each test method creates a new {{DefaultExecutorService}} instance, and each instance uses a new local executor created with {{Executors.newCachedThreadPool(...)}}. But the {{DefaultExecutorService}} is created with {{takeExecutorOwnership = false}}, and so the local executor is not stopped on shutdown.
> Core test suite leaks threads
> -----------------------------
>
> Key: ISPN-7598
> URL: https://issues.jboss.org/browse/ISPN-7598
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.0.0.Final
>
>
> Each {{LocalDistributedExecutorTest}} test method creates a new {{DefaultExecutorService}} instance, and each instance uses a new local executor created with {{Executors.newCachedThreadPool(...)}}. But the {{DefaultExecutorService}} is created with {{takeExecutorOwnership = false}}, and so the local executor is not stopped on shutdown.
[JBoss JIRA] (ISPN-7598) LocalDistributedExecutorTest leaks threads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7598?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7598:
------------------------------------
Found more leaked threads in the test suite:
* {{SingleFileStoreStressTest}} creates some executors to run {{SingleFileStore.process()}} and never stops them.
* {{ClusteredSecureCacheTest}} and {{CacheAuthorizationTest}} don't properly shut down the cache manager because of an NPE.
* {{ParallelIteratorTest}} subclasses don't stop the executor service.
* The {{CountingCARD}} used by some functional API tests doesn't shut down its timeout executor.
The problem in {{ClusteredSecureCacheTest}} is that {{TestingUtil.killCacheManagers()}} first stops each cache individually, including the security cache, and only then calls {{DefaultCacheManager.stop()}}. {{DefaultCacheManager.terminate()}} then tries to unregister the cache MBeans from JMX even if the cache is already stopped (because {{CacheJmxRegistration.stop()}} doesn't do that). But because the security cache is also stopped, {{unregisterCacheMBean}} throws a {{NullPointerException}}, and the cache manager doesn't shut down properly:
{noformat}
12:10:03,334 WARN (testng-test:[]) [TestingUtil] Problems killing cache manager org.infinispan.manager.DefaultCacheManager@54cf7c6a@Address:test-NodeB-27841
org.infinispan.IllegalLifecycleStateException: ISPN000323: ___acl_cache is in 'TERMINATED' state and so it does not accept new invocations. Either restart it or recreate the cache container.
at org.infinispan.cache.impl.SimpleCacheImpl.getDataContainer(SimpleCacheImpl.java:1049) ~[classes/:?]
at org.infinispan.cache.impl.SimpleCacheImpl.computeIfAbsentInternal(SimpleCacheImpl.java:1121) ~[classes/:?]
at org.infinispan.cache.impl.StatsCollectingCache.computeIfAbsentInternal(StatsCollectingCache.java:268) ~[classes/:?]
at org.infinispan.cache.impl.SimpleCacheImpl.computeIfAbsent(SimpleCacheImpl.java:1116) ~[classes/:?]
at org.infinispan.cache.impl.AbstractDelegatingCache.computeIfAbsent(AbstractDelegatingCache.java:343) ~[classes/:?]
at org.infinispan.cache.impl.TypeConverterDelegatingAdvancedCache.computeIfAbsent(TypeConverterDelegatingAdvancedCache.java:157) ~[classes/:?]
at org.infinispan.security.impl.AuthorizationHelper.checkSubjectPermissionAndRole(AuthorizationHelper.java:107) ~[classes/:?]
at org.infinispan.security.impl.AuthorizationHelper.checkPermission(AuthorizationHelper.java:76) ~[classes/:?]
at org.infinispan.security.impl.AuthorizationManagerImpl.checkPermission(AuthorizationManagerImpl.java:42) ~[classes/:?]
at org.infinispan.security.impl.SecureCacheImpl.getComponentRegistry(SecureCacheImpl.java:346) ~[classes/:?]
at org.infinispan.manager.DefaultCacheManager.unregisterCacheMBean(DefaultCacheManager.java:739) ~[classes/:?]
at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:692) ~[classes/:?]
at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:733) ~[classes/:?]
at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:713) ~[classes/:?]
at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:656) [test-classes/:?]
at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:639) [test-classes/:?]
at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:635) [test-classes/:?]
at org.infinispan.test.MultipleCacheManagersTest.destroy(MultipleCacheManagersTest.java:125) [test-classes/:?]
at org.infinispan.security.ClusteredSecureCacheTest.access$201(ClusteredSecureCacheTest.java:22) [test-classes/:?]
at org.infinispan.security.ClusteredSecureCacheTest$2.run(ClusteredSecureCacheTest.java:52) [test-classes/:?]
at org.infinispan.security.ClusteredSecureCacheTest$2.run(ClusteredSecureCacheTest.java:49) [test-classes/:?]
at org.infinispan.security.Security.doAs(Security.java:118) [classes/:?]
at org.infinispan.security.ClusteredSecureCacheTest.destroy(ClusteredSecureCacheTest.java:49) [test-classes/:?]
{noformat}
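A runnable model of the ordering problem just described, with all names hypothetical: once the security cache has been stopped individually, any later cleanup step that consults it fails, and the manager's shutdown aborts.
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical model: a manager whose final cleanup consults a security
// cache that killCacheManagers() has already stopped.
public class ShutdownOrderSketch {
   static final class SketchCache {
      final String name;
      final AtomicBoolean running = new AtomicBoolean(true);
      SketchCache(String name) { this.name = name; }
      void stop() { running.set(false); }
   }

   // Models the permission check done while unregistering a cache MBean.
   static void unregisterMBean(SketchCache cache, SketchCache aclCache) {
      if (!aclCache.running.get()) {
         throw new IllegalStateException(aclCache.name + " is in 'TERMINATED' state");
      }
      System.out.println("unregistered " + cache.name);
   }

   public static void main(String[] args) {
      SketchCache acl = new SketchCache("___acl_cache");
      SketchCache secure = new SketchCache("secureCache");
      // killCacheManagers() stops every cache individually first...
      secure.stop();
      acl.stop();
      // ...and only then runs the manager's own cleanup, which still needs
      // the security cache for its permission check:
      try {
         unregisterMBean(secure, acl);
      } catch (IllegalStateException e) {
         System.out.println("shutdown aborted: " + e.getMessage());
      }
   }
}
{code}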
> LocalDistributedExecutorTest leaks threads
> ------------------------------------------
>
> Key: ISPN-7598
> URL: https://issues.jboss.org/browse/ISPN-7598
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.0.0.Final
>
>
> Each test method creates a new {{DefaultExecutorService}} instance, and each instance uses a new local executor created with {{Executors.newCachedThreadPool(...)}}. But the {{DefaultExecutorService}} is created with {{takeExecutorOwnership = false}}, and so the local executor is not stopped on shutdown.