[JBoss JIRA] (ISPN-8558) Administration console - some events are not displayed in the console
by Pedro Zapata (JIRA)
[ https://issues.jboss.org/browse/ISPN-8558?page=com.atlassian.jira.plugin.... ]
Pedro Zapata reassigned ISPN-8558:
----------------------------------
Assignee: Vladimir Blagojevic
> Administration console - some events are not displayed in the console
> ---------------------------------------------------------------------
>
> Key: ISPN-8558
> URL: https://issues.jboss.org/browse/ISPN-8558
> Project: Infinispan
> Issue Type: Bug
> Components: JMX, reporting and management
> Affects Versions: 9.2.0.Beta1
> Reporter: Roman Macor
> Assignee: Vladimir Blagojevic
>
> Some events are not displayed in the console.
> For example, I see these events in the server log, but not in the status events tab in the console:
> 10 INFO [org.infinispan.CLUSTER] (transport-thread--p4-t24) [Context=___query_known_classes][Context=master:server-one]ISPN100003: Node master:server-one finished rebalance phase with topology id 12
> [Server:server-one] 11:06:08,011 INFO [org.infinispan.CLUSTER] (remote-thread--p2-t24) [Context=___query_known_classes][Context=master:server-two]ISPN100003: Node master:server-two finished rebalance phase with topology id 12
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8639) Merge policy tests random failures with ArrayIndexOutOfBoundsException
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8639?page=com.atlassian.jira.plugin.... ]
Ryan Emerson resolved ISPN-8639.
--------------------------------
Resolution: Done
> Merge policy tests random failures with ArrayIndexOutOfBoundsException
> ----------------------------------------------------------------------
>
> Key: ISPN-8639
> URL: https://issues.jboss.org/browse/ISPN-8639
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.CR1
>
>
> {{BaseMergePolicyTest.getCacheFromPreferredPartition()}} reads the cache status of all the running caches, including the {{CONFIG}} cache, but assumes a 1-to-1 mapping between the responses list and the input caches (the default cache on each node).
> Depending on how the responses are ordered, it may try to return a cache that doesn't exist in the input array:
> {noformat}
> 10:47:41,378 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.MergePolicyRemoveAllTest.testPartitionMergePolicy[DIST_SYNC, 5N]
> java.lang.ArrayIndexOutOfBoundsException: 8
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:190) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:153) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:124) ~[test-classes/:?]
> {noformat}
> {noformat}
> 16:57:12,012 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.MergePolicyCustomTest.testPartitionMergePolicy[DIST_SYNC, 4N]
> java.lang.ArrayIndexOutOfBoundsException: 6
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:190) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:153) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:124) ~[test-classes/:?]
> {noformat}
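A minimal sketch of the indexing bug described above, with hypothetical names (the real `BaseMergePolicyTest` internals are not shown here): when the status responses cover every running cache, including the internal {{CONFIG}} cache, positional indexing into the input-cache array can run past its end, while a name-keyed lookup cannot.

```java
import java.util.List;
import java.util.Map;

// Illustrative only: responses are keyed by cache name so that extra
// responses (e.g. for the CONFIG cache) cannot shift the index into the
// input-cache array, which is what caused the ArrayIndexOutOfBoundsException.
public class PreferredPartitionSketch {
    static int preferredIndex(Map<String, Integer> responsesByCache,
                              List<String> inputCaches) {
        int best = -1, bestSize = -1;
        for (int i = 0; i < inputCaches.size(); i++) {
            // Look the response up by name instead of by position.
            Integer size = responsesByCache.get(inputCaches.get(i));
            if (size != null && size > bestSize) {
                bestSize = size;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Two responses, but only one input cache: positional indexing would
        // mismatch or overflow; the name-based lookup still returns index 0.
        Map<String, Integer> responses =
            Map.of("org.infinispan.CONFIG", 5, "___defaultcache", 3);
        System.out.println(
            preferredIndex(responses, List.of("___defaultcache"))); // prints 0
    }
}
```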
--
[JBoss JIRA] (ISPN-8587) Coordinator crash in 2-node cluster can lead to invalid cache topology
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8587?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8587:
-------------------------------
Fix Version/s: 9.1.5.Final
(was: 9.1.4.Final)
> Coordinator crash in 2-node cluster can lead to invalid cache topology
> ----------------------------------------------------------------------
>
> Key: ISPN-8587
> URL: https://issues.jboss.org/browse/ISPN-8587
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.CR1, 9.1.5.Final
>
>
> After the coordinator changes, {{PreferAvailabilityStrategy}} first broadcasts a cache topology with the {{currentCH}} of the "maximum" topology. In the second step it broadcasts a topology that removes all the topology members no longer in the cluster, and in the third step it queues a rebalance with the remaining members.
> If the cluster had only 2 nodes, {{A}} (the coordinator) and {{B}}, and B had not finished joining the cache, the maximum topology has {{A}} as the only member. That means step 2 tries to remove all members, and in the process removes the cache topology from {{ClusterCacheStatus}}. When step 3 tries to rebalance with {{B}} as the only member, it re-initializes {{ClusterCacheStatus}} with topology id 1, and because {{LocalTopologyManager}} already has a higher topology id it will never confirm the rebalance.
> This sometimes happens in {{CacheManagerTest.testRestartReusingConfiguration}}. Like most other tests, it waits for the cache to finish joining before killing a node. But it only waits for the test cache, not for the {{CONFIG}} cache (which has {{awaitInitialTransfer(false)}}). Also, most of the time {{A}} either finishes the rebalance or re-initializes {{ClusterCacheStatus}} and sends a topology update with {{B}} as the only member before leaving. The test only fails if {{B}} doesn't receive or ignores one or more topology updates.
> {noformat}
> 10:37:50,674 INFO (remote-thread-Test-NodeA-p2265-t6:[]) [CLUSTER] ISPN000310: Starting cluster-wide rebalance for cache org.infinispan.CONFIG, topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}
> 10:37:51,037 DEBUG (remote-thread-Test-NodeA-p2265-t6:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_ALL_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,097 DEBUG (remote-thread-Test-NodeA-p2265-t5:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=4, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_NEW_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,203 DEBUG (testng-Test:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=5, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeB-59687], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,207 INFO (jgroups-7,Test-NodeB-59687:[]) [CLUSTER] ISPN000094: Received new cluster view for channel ISPN: [Test-NodeB-59687|2] (1) [Test-NodeB-59687]
> *** Here topology updates are ignored
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t5:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring topology 4 for cache org.infinispan.CONFIG from old coordinator Test-NodeA-37820
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t5:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring topology 5 for cache org.infinispan.CONFIG from old coordinator Test-NodeA-37820
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Recovered 1 partition(s) for cache org.infinispan.CONFIG: [CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_ALL_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}]
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Updating topologies after merge for cache org.infinispan.CONFIG, current topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, stable topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73]}, availability mode = null, resolveConflicts = false
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = null
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterTopologyManagerImpl] Updating cluster-wide stable topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73]}
> 10:37:51,340 FATAL (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [CLUSTER] [Context=org.infinispan.CONFIG]ISPN000313: Lost data because of abrupt leavers [Test-NodeA-37820]
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Queueing rebalance for cache org.infinispan.CONFIG with members [Test-NodeB-59687]
> 10:37:51,341 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Updating local topology for cache org.infinispan.CONFIG: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}
> *** The topology is re-initialized, without sending topology update
> 10:37:51,378 DEBUG (transport-thread-Test-NodeB-p2311-t1:[Merge-2]) [ClusterCacheStatus] Queueing rebalance for cache ___defaultcache with members [Test-NodeB-59687]
> 10:37:51,547 INFO (jgroups-7,Test-NodeB-59687:[]) [CLUSTER] ISPN000094: Received new cluster view for channel ISPN: [Test-NodeB-59687|3] (2) [Test-NodeB-59687, Test-NodeA-12100]
> 10:37:51,962 DEBUG (testng-Test:[]) [LocalTopologyManagerImpl] Node Test-NodeA-12100 joining cache org.infinispan.CONFIG
> 10:37:51,964 DEBUG (remote-thread-Test-NodeB-p2309-t6:[]) [ClusterCacheStatus] Queueing rebalance for cache org.infinispan.CONFIG with members [Test-NodeB-59687, Test-NodeA-12100]
> *** Rebalance start is sent with wrong topology id
> 10:37:51,964 INFO (remote-thread-Test-NodeB-p2309-t6:[]) [CLUSTER] ISPN000310: Starting cluster-wide rebalance for cache org.infinispan.CONFIG, topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeB-59687: 129, Test-NodeA-12100: 127]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeB-59687, Test-NodeA-12100], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb, 538b5324-cda9-49df-9786-7c6d6458332e]}
> 10:37:51,965 DEBUG (transport-thread-Test-NodeB-p2311-t4:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring old rebalance for cache org.infinispan.CONFIG, current topology is 4: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeB-59687: 129, Test-NodeA-12100: 127]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeB-59687, Test-NodeA-12100], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb, 538b5324-cda9-49df-9786-7c6d6458332e]}
> {noformat}
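A toy sketch of why restarting the topology id wedges the rebalance (names and numbers are illustrative, not the actual Infinispan API): the surviving node's local manager keeps the highest topology id it has seen and ignores anything older, so a {{ClusterCacheStatus}} re-initialized from id 1 can never get its rebalance confirmed. The fix direction is to resume numbering above the highest known id.

```java
// Illustrative only. accepts() stands in for the LocalTopologyManager-style
// check that discards rebalance starts with an old topology id.
public class TopologyIdSketch {
    static boolean accepts(int localTopologyId, int rebalanceTopologyId) {
        return rebalanceTopologyId > localTopologyId;
    }

    public static void main(String[] args) {
        int localId = 4;         // B kept topology id 4 from before the crash
        int reinitializedId = 2; // ClusterCacheStatus restarted from scratch
        System.out.println(accepts(localId, reinitializedId)); // prints false
        // Resuming numbering above the highest known id avoids the hang.
        System.out.println(accepts(localId, localId + 1));     // prints true
    }
}
```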
--
[JBoss JIRA] (ISPN-8555) CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8555?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8555:
-------------------------------
Fix Version/s: 9.1.5.Final
(was: 9.1.4.Final)
> CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
> -------------------------------------------------------------------------
>
> Key: ISPN-8555
> URL: https://issues.jboss.org/browse/ISPN-8555
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.CR1, 9.1.5.Final
>
>
> If there is any exception, the finally block tries to stop the cache manager without first unblocking the stop method, and it hangs:
> {noformat}
> "ForkThread-1,CacheManagerTest" #204160 prio=5 os_prio=0 tid=0x00007fa1900aa800 nid=0x1be5 waiting on condition [0x00007fa0db5b3000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c846b690> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.CacheManagerTest$2.stop(CacheManagerTest.java:274)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:91)
> at org.infinispan.commons.util.SecurityActions$$Lambda$169/1215571888.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:83)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:88)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:165)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:684)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:583)
> - locked <0x00000000c846b6d8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:271)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:206)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1000)
> at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:411)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:637)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:582)
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:468)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:454)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:440)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$4(CacheManagerTest.java:279)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3417/950279155.call(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:543)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c846b8d0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "testng-CacheManagerTest" #24 prio=5 os_prio=0 tid=0x00007fa260ece000 nid=0x44b6 waiting on condition [0x00007fa1e4626000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:695)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:774)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:765)
> at org.infinispan.manager.CacheManagerTest.testConcurrentCacheManagerStopAndGetCache(CacheManagerTest.java:295)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c4628978> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "ForkThread-2,CacheManagerTest" #204172 prio=5 os_prio=0 tid=0x00007fa1900f6800 nid=0x1bf2 waiting on condition [0x00007fa0da9a8000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84181c0> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:681)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:727)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:704)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$5(CacheManagerTest.java:282)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3418/1712334616.run(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$RunnableWrapper.run(AbstractInfinispanTest.java:510)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c8418270> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> - <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {noformat}
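A hypothetical sketch of the shutdown pattern at fault (the real `CacheManagerTest` code is not shown here): the stop method joins a future that is only completed on the happy path, so any earlier exception leaves the finally-block stop joining forever. Completing the future in the finally block, idempotently, before stopping removes the hang.

```java
import java.util.concurrent.CompletableFuture;

// Illustrative only: stop() stands in for the blocked CacheManagerTest stop,
// and stopBlocker for the future it joins on.
public class StopUnblockSketch {
    final CompletableFuture<Void> stopBlocker = new CompletableFuture<>();

    void stop() {
        stopBlocker.join(); // hangs unless someone completes stopBlocker
    }

    void runTest(boolean fail) {
        try {
            if (fail) throw new IllegalStateException("test step failed");
            stopBlocker.complete(null); // happy path unblocks stop
        } finally {
            stopBlocker.complete(null); // ALWAYS unblock before stopping
            stop();                     // now cannot hang, even on failure
        }
    }

    public static void main(String[] args) {
        try {
            new StopUnblockSketch().runTest(true);
        } catch (IllegalStateException expected) {
            System.out.println("stopped without hanging");
        }
    }
}
```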
--
[JBoss JIRA] (ISPN-8220) ClusteredCacheMgmtInterceptorMBeanTest fails intermittently
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8220?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8220:
-------------------------------
Fix Version/s: 9.1.5.Final
(was: 9.1.4.Final)
> ClusteredCacheMgmtInterceptorMBeanTest fails intermittently
> -----------------------------------------------------------
>
> Key: ISPN-8220
> URL: https://issues.jboss.org/browse/ISPN-8220
> Project: Infinispan
> Issue Type: Bug
> Components: JMX, reporting and management, Test Suite - Core
> Affects Versions: 9.1.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Labels: testsuite_stability
> Fix For: 9.1.5.Final
>
>
> {code:java}
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259)
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1679)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1327)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1793)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:282)
> at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:358)
> at org.infinispan.cache.impl.EncoderCache.put(EncoderCache.java:655)
> at org.infinispan.jmx.ClusteredCacheMgmtInterceptorMBeanTest.testCorrectStatsInCluster(ClusteredCacheMgmtInterceptorMBeanTest.java:48)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.BaseStateTransferInterceptor$CancellableRetry.run(BaseStateTransferInterceptor.java:347)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> ... 3 more
> ... Removed 16 stack frames
> {code}
--
[JBoss JIRA] (ISPN-8217) RestStoreTest.tearDown intermittent failure
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8217?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8217:
-------------------------------
Fix Version/s: 9.1.5.Final
(was: 9.1.4.Final)
> RestStoreTest.tearDown intermittent failure
> -------------------------------------------
>
> Key: ISPN-8217
> URL: https://issues.jboss.org/browse/ISPN-8217
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 9.1.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.1.5.Final
>
>
> The test fails intermittently with the following:
> {code:java}
> io.netty.channel.ChannelException: eventfd_write() failed: Bad file descriptor
> at io.netty.channel.epoll.Native.eventFdWrite(Native Method)
> at io.netty.channel.epoll.EpollEventLoop.wakeup(EpollEventLoop.java:126)
> at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:589)
> at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:163)
> at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:161)
> at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:144)
> at org.infinispan.persistence.rest.RestStoreTest.tearDown(RestStoreTest.java:73)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 18 stack frames
>
> {code}
> I believe this is caused by Netty not closing connections properly. See [here|http://netty.io/news/2017/07/06/4-0-49-Final-4-1-13-Final.html]. Upgrading Netty to the latest version should resolve this issue.
--
[JBoss JIRA] (ISPN-8641) Upgrade to Wildfly 11
by Ryan Emerson (JIRA)
Ryan Emerson created ISPN-8641:
----------------------------------
Summary: Upgrade to Wildfly 11
Key: ISPN-8641
URL: https://issues.jboss.org/browse/ISPN-8641
Project: Infinispan
Issue Type: Component Upgrade
Components: Server
Affects Versions: 9.2.0.Beta2
Reporter: Ryan Emerson
Assignee: Ryan Emerson
Fix For: 9.2.0.CR1
--
[JBoss JIRA] (ISPN-8639) Merge policy tests random failures with ArrayIndexOutOfBoundsException
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8639?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8639:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5650
> Merge policy tests random failures with ArrayIndexOutOfBoundsException
> ----------------------------------------------------------------------
>
> Key: ISPN-8639
> URL: https://issues.jboss.org/browse/ISPN-8639
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.CR1
>
>
> {{BaseMergePolicyTest.getCacheFromPreferredPartition()}} reads the cache status of all the running caches, including the {{CONFIG}} cache, but assumes a 1-to-1 mapping between the responses list and the input caches (the default cache on each node).
> Depending on how the responses are ordered, it may try to return a cache that doesn't exist in the input array:
> {noformat}
> 10:47:41,378 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.MergePolicyRemoveAllTest.testPartitionMergePolicy[DIST_SYNC, 5N]
> java.lang.ArrayIndexOutOfBoundsException: 8
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:190) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:153) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:124) ~[test-classes/:?]
> {noformat}
> {noformat}
> 16:57:12,012 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.MergePolicyCustomTest.testPartitionMergePolicy[DIST_SYNC, 4N]
> java.lang.ArrayIndexOutOfBoundsException: 6
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:190) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.getCacheFromPreferredPartition(BaseMergePolicyTest.java:153) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:124) ~[test-classes/:?]
> {noformat}
--
[JBoss JIRA] (ISPN-8637) ReadAfterLostDataTest.testPutMap failure
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-8637?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-8637:
------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5651
> ReadAfterLostDataTest.testPutMap failure
> ----------------------------------------
>
> Key: ISPN-8637
> URL: https://issues.jboss.org/browse/ISPN-8637
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
> Labels: testsuite_stability
>
> The assertion fails:
> {code:java}
> Caused by: java.lang.AssertionError: SegmentBasedCollector{id=56430, topologyId=13, primaryResult=null, primaryResultReceived=false, backups={ReadAfterLostDataTest[DIST_SYNC]-NodeD-52928=[118], ReadAfterLostDataTest[DIST_SYNC]-NodeB-35269=[212, 86], ReadAfterLostDataTest[DIST_SYNC]-NodeC-19848=[252]}}
> {code}
> I need to investigate further, but I think it is OK to replace the collector when the new one has a higher topology id; the collector ignores acks from old topologies anyway.
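The proposed fix can be sketched as follows, with illustrative names (this is not the actual Infinispan collector API): when registering a collector under an id that is already taken, keep whichever has the higher topology id, which is safe precisely because acks from old topologies are ignored.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: merge() resolves the clash atomically so that the
// collector from the newer topology wins.
public class CollectorMapSketch {
    static class Collector {
        final int topologyId;
        Collector(int topologyId) { this.topologyId = topologyId; }
    }

    final ConcurrentHashMap<Long, Collector> collectors = new ConcurrentHashMap<>();

    Collector register(long id, Collector incoming) {
        return collectors.merge(id, incoming,
            (old, neu) -> neu.topologyId > old.topologyId ? neu : old);
    }

    public static void main(String[] args) {
        CollectorMapSketch m = new CollectorMapSketch();
        m.register(56430L, new Collector(12));
        // Re-registering with a higher topology id replaces the old collector.
        Collector winner = m.register(56430L, new Collector(13));
        System.out.println(winner.topologyId); // prints 13
    }
}
```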
--
[JBoss JIRA] (ISPN-8637) ReadAfterLostDataTest.testPutMap failure
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-8637?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-8637:
------------------------------
Status: Open (was: New)
> ReadAfterLostDataTest.testPutMap failure
> ----------------------------------------
>
> Key: ISPN-8637
> URL: https://issues.jboss.org/browse/ISPN-8637
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
> Labels: testsuite_stability
>
> The assertion fails:
> {code:java}
> Caused by: java.lang.AssertionError: SegmentBasedCollector{id=56430, topologyId=13, primaryResult=null, primaryResultReceived=false, backups={ReadAfterLostDataTest[DIST_SYNC]-NodeD-52928=[118], ReadAfterLostDataTest[DIST_SYNC]-NodeB-35269=[212, 86], ReadAfterLostDataTest[DIST_SYNC]-NodeC-19848=[252]}}
> {code}
> I need to investigate further, but I think it is OK to replace the collector when the new one has a higher topology id; the collector ignores acks from old topologies anyway.
--