[JBoss JIRA] (ISPN-8731) Write command times out waiting for wrong topology
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8731?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-8731:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Write command times out waiting for wrong topology
> --------------------------------------------------
>
> Key: ISPN-8731
> URL: https://issues.jboss.org/browse/ISPN-8731
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Query, Test Suite - Server
> Affects Versions: 9.2.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 9.3.0.Final
>
>
> {{MultipleCacheManagersTest.waitForClusterToForm(new String[]{})}} doesn't actually wait for the rebalance to finish everywhere: https://github.com/infinispan/infinispan/pull/5705/files#diff-f9f535214b9...
> Most tests should work regardless of whether we wait for the rebalance to finish or not; waiting is just a way to reduce the number of test failures when the command retry doesn't work. Waiting for the rebalance to finish is only required when a test needs to install new topologies in a specific order.
> However, some recent test failures in CI show that whether {{MultipleCacheManagersTest.waitForClusterToForm(new String[]{})}} really waits does make a difference. Most likely there is a bug in the retry logic during the later phases of a rebalance, and the retry-specific tests do not cover all the scenarios.
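> As an illustration only, here is a minimal Java sketch (assuming the 9.x embedded API; the helper name and the 30-second budget are invented for this example) of what "waiting for the rebalance to finish everywhere" means: poll each node until its local topology has no pending consistent hash.
> {noformat}
> // assumes: import java.util.List; import java.util.concurrent.TimeUnit; import org.infinispan.Cache;
> private static void waitForRebalanceEverywhere(List<Cache<Object, Object>> caches) throws InterruptedException {
>    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(30); // illustrative timeout
>    for (Cache<Object, Object> cache : caches) {
>       // the rebalance is finished on a node once its local topology has no pending CH
>       while (cache.getAdvancedCache().getDistributionManager().getCacheTopology().getPendingCH() != null) {
>          if (System.nanoTime() > deadline) {
>             throw new IllegalStateException("Rebalance did not finish on " + cache.getCacheManager().getAddress());
>          }
>          Thread.sleep(50);
>       }
>    }
> }
> {noformat}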
> {noformat}
> 00:23:20,668 DEBUG (remote-thread-ClusteredScriptingTest-NodeI-p218-t5:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache ___defaultcache, topology = CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredScriptingTest-NodeI-40041: 134, ClusteredScriptingTest-NodeJ-9982: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredScriptingTest-NodeI-40041, ClusteredScriptingTest-NodeJ-9982], persistentUUIDs=[91a47fef-fad6-479c-9ec9-12e9350dc6d5, d2200930-5793-4b74-beea-e001a2b414c1]}, availability mode = AVAILABLE
> 00:23:20,669 DEBUG (remote-thread-ClusteredScriptingTest-NodeI-p218-t5:[]) [ClusterTopologyManagerImpl] Updating cluster-wide stable topology for cache ___defaultcache, topology = CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredScriptingTest-NodeI-40041: 134, ClusteredScriptingTest-NodeJ-9982: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredScriptingTest-NodeI-40041, ClusteredScriptingTest-NodeJ-9982], persistentUUIDs=[91a47fef-fad6-479c-9ec9-12e9350dc6d5, d2200930-5793-4b74-beea-e001a2b414c1]}
> 00:23:20,669 DEBUG (transport-thread-ClusteredScriptingTest-NodeI-p220-t6:[Topology-___defaultcache]) [LocalTopologyManagerImpl] Updating local topology for cache ___defaultcache: CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredScriptingTest-NodeI-40041: 134, ClusteredScriptingTest-NodeJ-9982: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredScriptingTest-NodeI-40041, ClusteredScriptingTest-NodeJ-9982], persistentUUIDs=[91a47fef-fad6-479c-9ec9-12e9350dc6d5, d2200930-5793-4b74-beea-e001a2b414c1]}
> 00:23:20,669 DEBUG (transport-thread-ClusteredScriptingTest-NodeJ-p234-t6:[Topology-___defaultcache]) [LocalTopologyManagerImpl] Updating local topology for cache ___defaultcache: CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredScriptingTest-NodeI-40041: 134, ClusteredScriptingTest-NodeJ-9982: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredScriptingTest-NodeI-40041, ClusteredScriptingTest-NodeJ-9982], persistentUUIDs=[91a47fef-fad6-479c-9ec9-12e9350dc6d5, d2200930-5793-4b74-beea-e001a2b414c1]}
> 00:23:35,673 ERROR (timeout-thread-ClusteredScriptingTest-NodeI-p219-t1:[]) [InvocationContextInterceptor] ISPN000136: Error executing command PutKeyValueCommand, writing keys [/macbeth.txt0]
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.BaseStateTransferInterceptor$CancellableRetry.run(BaseStateTransferInterceptor.java:333) [infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> 00:23:35,673 DEBUG (testng-ClusteredScriptingTest:[]) [DefaultCacheManager] Stopping cache manager ISPN on ClusteredScriptingTest-NodeJ-9982
> 00:23:35,682 DEBUG (remote-thread-ClusteredScriptingTest-NodeI-p218-t5:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache ___defaultcache, topology = CacheTopology{id=6, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[ClusteredScriptingTest-NodeI-40041: 256]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredScriptingTest-NodeI-40041], persistentUUIDs=[91a47fef-fad6-479c-9ec9-12e9350dc6d5]}, availability mode = AVAILABLE
> 00:23:35,699 ERROR (testng-ClusteredScriptingTest:[]) [TestSuiteProgress] Test failed: org.infinispan.scripting.ClusteredScriptingTest.testDistributedMapReduceStreamWithFlag([REPL_SYNC])
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1636) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1284) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1750) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:217) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:358) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.EncoderCache.put(EncoderCache.java:671) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.scripting.utils.ScriptingUtils.loadData(ScriptingUtils.java:29) ~[test-classes/:?]
> at org.infinispan.scripting.ClusteredScriptingTest$6.call(ClusteredScriptingTest.java:144) ~[test-classes/:?]
> at org.infinispan.test.TestingUtil.withCacheManagers(TestingUtil.java:1522) ~[infinispan-core-9.2.0-SNAPSHOT-tests.jar:9.2.0-SNAPSHOT]
> at org.infinispan.scripting.ClusteredScriptingTest.testDistributedMapReduceStreamWithFlag(ClusteredScriptingTest.java:137) ~[test-classes/:?]
> {noformat}
> {noformat}
> 18:04:17,097 DEBUG (transport-thread-ClusteredCacheTest-NodeB-p3689-t1:[Topology-___defaultcache]) [LocalTopologyManagerImpl] Updating local topology for cache ___defaultcache: CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredCacheTest-NodeA-53655: 134, ClusteredCacheTest-NodeB-27398: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredCacheTest-NodeA-53655, ClusteredCacheTest-NodeB-27398], persistentUUIDs=[5f5b2dd6-c570-4c6c-9c2d-92aa5fcd2362, b623ac7a-3c77-471d-88de-e151da823c0c]}
> 18:04:17,097 INFO (testng-ClusteredCacheTest:[]) [TestSuiteProgress] Test starting: org.infinispan.query.blackbox.ClusteredCacheTest.testConditionalRemoveFromNonOwner
> 18:04:17,097 DEBUG (transport-thread-ClusteredCacheTest-NodeA-p3679-t5:[Topology-___defaultcache]) [LocalTopologyManagerImpl] Updating local topology for cache ___defaultcache: CacheTopology{id=5, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[ClusteredCacheTest-NodeA-53655: 134, ClusteredCacheTest-NodeB-27398: 122]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredCacheTest-NodeA-53655, ClusteredCacheTest-NodeB-27398], persistentUUIDs=[5f5b2dd6-c570-4c6c-9c2d-92aa5fcd2362, b623ac7a-3c77-471d-88de-e151da823c0c]}
> 18:04:32,100 ERROR (timeout-thread-ClusteredCacheTest-NodeA-p3678-t1:[]) [InvocationContextInterceptor] ISPN000136: Error executing command PutKeyValueCommand, writing keys [WrappedByteArray{bytes=[B0x010201054E617669..[9], hashCode=-1707624030}]
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.BaseStateTransferInterceptor$CancellableRetry.run(BaseStateTransferInterceptor.java:333) [infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> 18:04:32,101 ERROR (testng-ClusteredCacheTest:[]) [TestSuiteProgress] Test failed: org.infinispan.query.blackbox.ClusteredCacheTest.testConditionalRemoveFromNonOwner
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1636) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1284) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1750) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:217) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:358) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.cache.impl.EncoderCache.put(EncoderCache.java:671) ~[infinispan-core-9.2.0-SNAPSHOT.jar:9.2.0-SNAPSHOT]
> at org.infinispan.query.blackbox.ClusteredCacheTest.prepareTestData(ClusteredCacheTest.java:140) ~[test-classes/:?]
> at org.infinispan.query.blackbox.ClusteredCacheTest.testConditionalRemoveFrom(ClusteredCacheTest.java:334) ~[test-classes/:?]
> at org.infinispan.query.blackbox.ClusteredCacheTest.testConditionalRemoveFromNonOwner(ClusteredCacheTest.java:298) ~[test-classes/:?]
> 18:04:32,214 DEBUG (testng-ClusteredCacheTest:[]) [DefaultCacheManager] Stopping cache manager ISPN on ClusteredCacheTest-NodeB-27398
> 18:04:32,479 DEBUG (remote-thread-ClusteredCacheTest-NodeA-p3677-t6:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache ___defaultcache, topology = CacheTopology{id=6, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[ClusteredCacheTest-NodeA-53655: 256]}, pendingCH=null, unionCH=null, actualMembers=[ClusteredCacheTest-NodeA-53655], persistentUUIDs=[5f5b2dd6-c570-4c6c-9c2d-92aa5fcd2362]}, availability mode = AVAILABLE
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-7682) DistributionManager's cache topology updated in wrong order
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7682?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7682:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/6025
> DistributionManager's cache topology updated in wrong order
> -----------------------------------------------------------
>
> Key: ISPN-7682
> URL: https://issues.jboss.org/browse/ISPN-7682
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.CR3
> Reporter: Radim Vansa
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Attachments: NonTxPutIfAbsentDuringLeaveStressTest.testNodeLeavingDuringPutIfAbsent_master_5.log.gz
>
>
> {{StateTransferInterceptor}} sets the command topology according to {{StateTransferManager.getCacheTopology()}}, which delegates to {{StateConsumerImpl}}, but {{EntryWrappingInterceptor}} routes according to {{DistributionManager.getCacheTopology()}}. {{StateConsumer}}'s topology is set in {{onTopologyUpdate}} before {{DistributionManager}}'s, so a command can have the current topology set while EWI does not wrap the entry, and some assertions in {{BaseDistributionInterceptor.remoteGet}} fail.
> This is a regression introduced by ISPN-7400.
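> To make the ordering hazard concrete, here is a rough sketch (an illustration of the invariant described above, not the actual fix) comparing the two topology views; it assumes an {{AdvancedCache}} reference named {{advancedCache}} and uses the generic component lookup purely for illustration.
> {noformat}
> // what StateTransferInterceptor stamps on the command (via StateTransferManager -> StateConsumerImpl)
> StateTransferManager stm = advancedCache.getComponentRegistry().getComponent(StateTransferManager.class);
> int commandTopologyId = stm.getCacheTopology().getTopologyId();
> // what EntryWrappingInterceptor routes by
> int routingTopologyId = advancedCache.getDistributionManager().getCacheTopology().getTopologyId();
> // in the window between StateConsumerImpl.onTopologyUpdate and the DistributionManager update,
> // commandTopologyId can be newer, the entry is not wrapped, and remoteGet's assertions fail
> assert commandTopologyId <= routingTopologyId;
> {noformat}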
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8731) Write command times out waiting for wrong topology
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8731?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8731:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/6025
> Write command times out waiting for wrong topology
> --------------------------------------------------
>
> Key: ISPN-8731
> URL: https://issues.jboss.org/browse/ISPN-8731
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Query, Test Suite - Server
> Affects Versions: 9.2.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 9.3.0.Final
>
>
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9212) Bypass interceptors for nontx REPL reads
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9212?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-9212:
--------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5997
> Bypass interceptors for nontx REPL reads
> ----------------------------------------
>
> Key: ISPN-9212
> URL: https://issues.jboss.org/browse/ISPN-9212
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 9.3.0.Beta1
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 9.3.0.CR1
>
>
> It should be possible to completely bypass the interceptor stack and query the data container directly with a REPL cache. We can only do this when the cache is non-transactional and has no persistence, and we would have to handle stats and notifications manually.
> It could even be possible to add this to all cache modes and fall back to the interceptor stack if no entry was found. This would slow down the miss case minimally, but the hot path would be faster.
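> Purely as a sketch of the idea (not a proposed implementation), the fast path could look roughly like this; stats, notifications and key/value encoding are ignored here, which is exactly what would have to be handled manually.
> {noformat}
> // sketch only; assumes a non-tx REPL cache without persistence
> public static <K, V> V fastGet(AdvancedCache<K, V> cache, K key) {
>    InternalCacheEntry<K, V> entry = cache.getDataContainer().get(key);
>    if (entry != null && !entry.isExpired(System.currentTimeMillis())) {
>       return entry.getValue(); // hot path: no interceptors involved
>    }
>    return cache.get(key);      // miss: fall back to the full interceptor stack
> }
> {noformat}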
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9224) Reduce ExceptionEvictionTest combinations
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9224?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-9224:
--------------------------------
Status: Open (was: New)
> Reduce ExceptionEvictionTest combinations
> -----------------------------------------
>
> Key: ISPN-9224
> URL: https://issues.jboss.org/browse/ISPN-9224
> Project: Infinispan
> Issue Type: Task
> Components: Test Suite - Core
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 9.3.0.CR1
>
>
> The ExceptionEvictionTest tests all possible permutations. This is a bit wasteful, as the multiple-node configurations should behave the same irrespective of the storage type; local mode should be sufficient to exercise the different storage type calculations.
> We can comment out quite a few of the permutations, which should allow for quicker test runs.
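> For illustration, a reduced factory along these lines would keep full storage-type coverage in local mode only; the chained setters shown here are hypothetical stand-ins for whatever the test actually exposes, following the usual {{MultipleCacheManagersTest.factory()}} pattern.
> {noformat}
> @Override
> public Object[] factory() {
>    return new Object[] {
>          // local mode exercises every storage type, since the size calculations differ per type
>          new ExceptionEvictionTest().cacheMode(CacheMode.LOCAL).storageType(StorageType.OBJECT),
>          new ExceptionEvictionTest().cacheMode(CacheMode.LOCAL).storageType(StorageType.BINARY),
>          new ExceptionEvictionTest().cacheMode(CacheMode.LOCAL).storageType(StorageType.OFF_HEAP),
>          // clustered modes should behave the same regardless of storage type, so one combination each is enough
>          new ExceptionEvictionTest().cacheMode(CacheMode.DIST_SYNC).storageType(StorageType.OBJECT),
>          new ExceptionEvictionTest().cacheMode(CacheMode.REPL_SYNC).storageType(StorageType.OBJECT)
>    };
> }
> {noformat}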
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9212) Bypass interceptors for nontx REPL reads
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9212?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-9212:
--------------------------------
Description:
It should be possible to completely bypass the interceptor stack and query the data container directly with a REPL cache. We can only do this when the cache is non-transactional and has no persistence, and we would have to handle stats and notifications manually.
It could even be possible to add this to all cache modes and fall back to the interceptor stack if no entry was found. This would slow down the miss case minimally, but the hot path would be faster.
was:
It should be possible to completely bypass the interceptor stack and query the data container directly with a REPL cache. We can only do this when the cache is non-transactional and has no persistence, and we would have to handle stats and notifications manually.
It could even be possible to add this to all cache modes and fall back to the interceptor stack if no entry was found. This would slow down the miss case minimally, but the hot path would be many times faster.
> Bypass interceptors for nontx REPL reads
> ----------------------------------------
>
> Key: ISPN-9212
> URL: https://issues.jboss.org/browse/ISPN-9212
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 9.3.0.Beta1
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 9.3.0.CR1
>
>
> It should be possible to completely bypass the interceptor stack and query the data container directly with a REPL cache. We can only do this when the cache is non-transactional and has no persistence, and we would have to handle stats and notifications manually.
> It could even be possible to add this to all cache modes and fall back to the interceptor stack if no entry was found. This would slow down the miss case minimally, but the hot path would be faster.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9212) Bypass interceptors for nontx REPL reads
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9212?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-9212:
--------------------------------
Status: Open (was: New)
> Bypass interceptors for nontx REPL reads
> ----------------------------------------
>
> Key: ISPN-9212
> URL: https://issues.jboss.org/browse/ISPN-9212
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 9.3.0.Beta1
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 9.3.0.CR1
>
>
> It should be possible to completely bypass the interceptor stack and query the data container directly with a REPL cache. We can only do this when the cache is non-transactional and has no persistence, and we would have to handle stats and notifications manually.
> It could even be possible to add this to all cache modes and fall back to the interceptor stack if no entry was found. This would slow down the miss case minimally, but the hot path would be many times faster.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9224) Reduce ExceptionEvictionTest combinations
by William Burns (JIRA)
William Burns created ISPN-9224:
-----------------------------------
Summary: Reduce ExceptionEvictionTest combinations
Key: ISPN-9224
URL: https://issues.jboss.org/browse/ISPN-9224
Project: Infinispan
Issue Type: Task
Components: Test Suite - Core
Reporter: William Burns
Assignee: William Burns
Fix For: 9.3.0.CR1
The ExceptionEvictionTest tests all possible permutations. This is a bit wasteful, as the multiple-node configurations should behave the same irrespective of the storage type; local mode should be sufficient to exercise the different storage type calculations.
We can comment out quite a few of the permutations, which should allow for quicker test runs.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)