[JBoss JIRA] (ISPN-8611) Persistent volume names too long
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-8611?page=com.atlassian.jira.plugin.... ]
Martin Gencur reassigned ISPN-8611:
-----------------------------------
Assignee: Martin Gencur
> Persistent volume names too long
> --------------------------------
>
> Key: ISPN-8611
> URL: https://issues.jboss.org/browse/ISPN-8611
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud
> Reporter: Martin Gencur
> Assignee: Martin Gencur
>
> The default names for the applications are "caching-service-app" and "shared-memory-service-app", respectively.
> The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
> Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service is created called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0").
> However, the maximum length of a service name is 63 characters, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
> When the whole name gets under 63 characters, the volume claim is created successfully.
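The overflow is easy to reproduce arithmetically. The sketch below is illustrative Java, not project code; the name-composition scheme (prefix "glusterfs-dynamic-", then the PVC name, then the StatefulSet pod name) is inferred from the generated service name quoted in the issue:

```java
// Sketch: why the GlusterFS-provisioned service name overflows the 63-char
// DNS label limit Kubernetes enforces on service names.
public class NameLengthCheck {
    static final int MAX_DNS_LABEL = 63; // RFC 1123 label limit

    // Assumed composition, inferred from the name quoted in the issue:
    // "glusterfs-dynamic-" + <PVC template name> + "-" + <pod name>
    static String glusterServiceName(String appName, int ordinal) {
        String pvcTemplate = appName + "-data";          // ${APPLICATION_NAME}-data
        String podName = appName + "-" + ordinal;        // StatefulSet pod: <name>-<ordinal>
        return "glusterfs-dynamic-" + pvcTemplate + "-" + podName;
    }

    public static void main(String[] args) {
        String caching = glusterServiceName("caching-service-app", 0);
        String shared = glusterServiceName("shared-memory-service-app", 0);
        // Both exceed MAX_DNS_LABEL, so the endpoint/service creation fails.
        System.out.println(caching + " -> " + caching.length() + " chars");
        System.out.println(shared + " -> " + shared.length() + " chars");
    }
}
```

Shortening ${APPLICATION_NAME} (or the "-data" suffix) so the composed name stays under 63 characters avoids the failure.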
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8611) Caching and shared memory default service names too long
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-8611?page=com.atlassian.jira.plugin.... ]
Martin Gencur updated ISPN-8611:
--------------------------------
Description:
The default names for the applications are "caching-service-app" and "shared-memory-service-app", respectively.
The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service is created called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0").
However, the maximum length of a service name is 63 characters, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
When the whole name gets under 63 characters, the volume claim is created successfully.
was:
The default names for the applications are "caching-service-app" and "shared-memory-service-app", respectively.
The persistent volume claim (for StatefulSets) is called ${APPLICATION_NAME}-data.
Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service is created called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0").
However, the maximum length of a service name is 63 characters, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
When the whole name gets under 63 characters, the volume claim is created successfully.
> Caching and shared memory default service names too long
> --------------------------------------------------------
>
> Key: ISPN-8611
> URL: https://issues.jboss.org/browse/ISPN-8611
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud
> Reporter: Martin Gencur
>
> The default names for the applications are "caching-service-app" and "shared-memory-service-app", respectively.
> The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
> Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service is created called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0").
> However, the maximum length of a service name is 63 characters, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
> When the whole name gets under 63 characters, the volume claim is created successfully.
--
[JBoss JIRA] (ISPN-8587) Coordinator crash in 2-node cluster can lead to invalid cache topology
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8587?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8587:
-------------------------------
Git Pull Request: https://github.com/infinispan/infinispan/pull/5617, https://github.com/infinispan/infinispan/pull/5637 (was: https://github.com/infinispan/infinispan/pull/5617)
> Coordinator crash in 2-node cluster can lead to invalid cache topology
> ----------------------------------------------------------------------
>
> Key: ISPN-8587
> URL: https://issues.jboss.org/browse/ISPN-8587
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.Beta2, 9.1.4.Final
>
>
> After the coordinator changes, {{PreferAvailabilityStrategy}} first broadcasts a cache topology with the {{currentCH}} of the "maximum" topology. In the second step it broadcasts a topology that removes all the topology members no longer in the cluster, and in the third step it queues a rebalance with the remaining members.
> If the cluster had only 2 nodes, {{A}} (the coordinator) and {{B}}, and B had not finished joining the cache, the maximum topology has {{A}} as the only member. That means step 2 tries to remove all members, and in the process removes the cache topology from {{ClusterCacheStatus}}. When step 3 tries to rebalance with {{B}} as the only member, it re-initializes {{ClusterCacheStatus}} with topology id 1, and because {{LocalTopologyManager}} already has a higher topology id it will never confirm the rebalance.
> This sometimes happens in {{CacheManagerTest.testRestartReusingConfiguration}}. Like most other tests, it waits for the cache to finish joining before killing a node. But it only waits for the test cache, not for the {{CONFIG}} cache (which has {{awaitInitialTransfer(false)}}). Also, most of the time {{A}} either finishes the rebalance or re-initializes {{ClusterCacheStatus}} and sends a topology update with {{B}} as the only member before leaving. The test only fails if {{B}} doesn't receive or ignores one or more topology updates.
> {noformat}
> 10:37:50,674 INFO (remote-thread-Test-NodeA-p2265-t6:[]) [CLUSTER] ISPN000310: Starting cluster-wide rebalance for cache org.infinispan.CONFIG, topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}
> 10:37:51,037 DEBUG (remote-thread-Test-NodeA-p2265-t6:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_ALL_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,097 DEBUG (remote-thread-Test-NodeA-p2265-t5:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=4, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_NEW_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,203 DEBUG (testng-Test:[]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=5, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeB-59687], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = AVAILABLE
> 10:37:51,207 INFO (jgroups-7,Test-NodeB-59687:[]) [CLUSTER] ISPN000094: Received new cluster view for channel ISPN: [Test-NodeB-59687|2] (1) [Test-NodeB-59687]
> *** Here topology updates are ignored
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t5:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring topology 4 for cache org.infinispan.CONFIG from old coordinator Test-NodeA-37820
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t5:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring topology 5 for cache org.infinispan.CONFIG from old coordinator Test-NodeA-37820
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Recovered 1 partition(s) for cache org.infinispan.CONFIG: [CacheTopology{id=3, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeA-37820: 134, Test-NodeB-59687: 122]}, unionCH=null, phase=READ_ALL_WRITE_ALL, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}]
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Updating topologies after merge for cache org.infinispan.CONFIG, current topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, stable topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73]}, availability mode = null, resolveConflicts = false
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}, availability mode = null
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterTopologyManagerImpl] Updating cluster-wide stable topology for cache org.infinispan.CONFIG, topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73]}
> 10:37:51,340 FATAL (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [CLUSTER] [Context=org.infinispan.CONFIG]ISPN000313: Lost data because of abrupt leavers [Test-NodeA-37820]
> 10:37:51,340 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Merge-2]) [ClusterCacheStatus] Queueing rebalance for cache org.infinispan.CONFIG with members [Test-NodeB-59687]
> 10:37:51,341 DEBUG (transport-thread-Test-NodeB-p2311-t6:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Updating local topology for cache org.infinispan.CONFIG: CacheTopology{id=4, rebalanceId=3, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeA-37820: 256]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[Test-NodeA-37820, Test-NodeB-59687], persistentUUIDs=[d56ec014-ebb3-4be9-9ce2-91c2982ccb73, 96c95d15-440a-4dc7-915d-5d36ac4257bb]}
> *** The topology is re-initialized, without sending topology update
> 10:37:51,378 DEBUG (transport-thread-Test-NodeB-p2311-t1:[Merge-2]) [ClusterCacheStatus] Queueing rebalance for cache ___defaultcache with members [Test-NodeB-59687]
> 10:37:51,547 INFO (jgroups-7,Test-NodeB-59687:[]) [CLUSTER] ISPN000094: Received new cluster view for channel ISPN: [Test-NodeB-59687|3] (2) [Test-NodeB-59687, Test-NodeA-12100]
> 10:37:51,962 DEBUG (testng-Test:[]) [LocalTopologyManagerImpl] Node Test-NodeA-12100 joining cache org.infinispan.CONFIG
> 10:37:51,964 DEBUG (remote-thread-Test-NodeB-p2309-t6:[]) [ClusterCacheStatus] Queueing rebalance for cache org.infinispan.CONFIG with members [Test-NodeB-59687, Test-NodeA-12100]
> *** Rebalance start is sent with wrong topology id
> 10:37:51,964 INFO (remote-thread-Test-NodeB-p2309-t6:[]) [CLUSTER] ISPN000310: Starting cluster-wide rebalance for cache org.infinispan.CONFIG, topology CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeB-59687: 129, Test-NodeA-12100: 127]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeB-59687, Test-NodeA-12100], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb, 538b5324-cda9-49df-9786-7c6d6458332e]}
> 10:37:51,965 DEBUG (transport-thread-Test-NodeB-p2311-t4:[Topology-org.infinispan.CONFIG]) [LocalTopologyManagerImpl] Ignoring old rebalance for cache org.infinispan.CONFIG, current topology is 4: CacheTopology{id=2, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeB-59687: 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeB-59687: 129, Test-NodeA-12100: 127]}, unionCH=null, phase=READ_OLD_WRITE_ALL, actualMembers=[Test-NodeB-59687, Test-NodeA-12100], persistentUUIDs=[96c95d15-440a-4dc7-915d-5d36ac4257bb, 538b5324-cda9-49df-9786-7c6d6458332e]}
> {noformat}
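The ignored-rebalance mechanics in the log above can be reduced to a minimal sketch. The classes below are hypothetical, not Infinispan's actual LocalTopologyManager: a node only applies strictly newer topology ids, so the rebalance start the re-initialized coordinator sends with id 2 is silently dropped by a node already at id 4, and the rebalance is never confirmed:

```java
// Illustrative model of the topology-id monotonicity check behind
// "Ignoring old rebalance for cache ..., current topology is 4".
public class Ispn8587Sketch {
    // Node-local topology state; only strictly newer topology ids are applied.
    static class LocalTopology {
        int currentTopologyId;

        LocalTopology(int startId) { currentTopologyId = startId; }

        /** Returns true if the update was applied, false if dropped as stale. */
        boolean apply(int topologyId) {
            if (topologyId <= currentTopologyId) {
                return false; // stale update from a re-initialized coordinator
            }
            currentTopologyId = topologyId;
            return true;
        }
    }

    public static void main(String[] args) {
        // Node B kept topology id 4 across the coordinator change.
        LocalTopology nodeB = new LocalTopology(4);
        // The new coordinator re-initialized ClusterCacheStatus at id 1,
        // so its rebalance start carries id 2 and is dropped:
        System.out.println("rebalance start applied? " + nodeB.apply(2));
    }
}
```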
--
[JBoss JIRA] (ISPN-8555) CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8555?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8555:
-------------------------------
Fix Version/s: 9.1.4.Final
> CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
> -------------------------------------------------------------------------
>
> Key: ISPN-8555
> URL: https://issues.jboss.org/browse/ISPN-8555
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.Beta2, 9.1.4.Final
>
>
> If there is any exception, the finally block tries to stop the cache manager without first unblocking the stop method, and it hangs:
> {noformat}
> "ForkThread-1,CacheManagerTest" #204160 prio=5 os_prio=0 tid=0x00007fa1900aa800 nid=0x1be5 waiting on condition [0x00007fa0db5b3000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c846b690> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.CacheManagerTest$2.stop(CacheManagerTest.java:274)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:91)
> at org.infinispan.commons.util.SecurityActions$$Lambda$169/1215571888.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:83)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:88)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:165)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:684)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:583)
> - locked <0x00000000c846b6d8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:271)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:206)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1000)
> at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:411)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:637)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:582)
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:468)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:454)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:440)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$4(CacheManagerTest.java:279)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3417/950279155.call(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:543)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c846b8d0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "testng-CacheManagerTest" #24 prio=5 os_prio=0 tid=0x00007fa260ece000 nid=0x44b6 waiting on condition [0x00007fa1e4626000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:695)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:774)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:765)
> at org.infinispan.manager.CacheManagerTest.testConcurrentCacheManagerStopAndGetCache(CacheManagerTest.java:295)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c4628978> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "ForkThread-2,CacheManagerTest" #204172 prio=5 os_prio=0 tid=0x00007fa1900f6800 nid=0x1bf2 waiting on condition [0x00007fa0da9a8000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84181c0> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:681)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:727)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:704)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$5(CacheManagerTest.java:282)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3418/1712334616.run(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$RunnableWrapper.run(AbstractInfinispanTest.java:510)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c8418270> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> - <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {noformat}
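The hang boils down to a stop path that joins a CompletableFuture only the test can complete, plus a finally block that calls stop without completing it first. A hedged sketch of the pattern and its fix, with illustrative names rather than the test's actual code:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative reduction of the deadlock: stop() blocks on a gate future,
// so a finally block must complete the gate before calling stop(), or the
// cleanup itself hangs exactly as in the thread dump above.
public class StopHangSketch {
    static final CompletableFuture<Void> stopGate = new CompletableFuture<>();

    static void stop() {
        stopGate.join(); // blocks forever unless someone completes the gate
    }

    public static void main(String[] args) {
        try {
            // ... test body; any exception here falls through to finally ...
        } finally {
            stopGate.complete(null); // unblock BEFORE stopping
            stop();                  // now returns instead of hanging
        }
        System.out.println("stopped cleanly");
    }
}
```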
--
[JBoss JIRA] (ISPN-8555) CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8555?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8555:
-------------------------------
Status: Pull Request Sent (was: Reopened)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5617, https://github.com/infinispan/infinispan/pull/5637 (was: https://github.com/infinispan/infinispan/pull/5617)
> CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
> -------------------------------------------------------------------------
>
> Key: ISPN-8555
> URL: https://issues.jboss.org/browse/ISPN-8555
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.Beta2, 9.1.4.Final
>
>
> If there is any exception, the finally block tries to stop the cache manager without first unblocking the stop method, and it hangs:
> {noformat}
> "ForkThread-1,CacheManagerTest" #204160 prio=5 os_prio=0 tid=0x00007fa1900aa800 nid=0x1be5 waiting on condition [0x00007fa0db5b3000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c846b690> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.CacheManagerTest$2.stop(CacheManagerTest.java:274)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:91)
> at org.infinispan.commons.util.SecurityActions$$Lambda$169/1215571888.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:83)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:88)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:165)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:684)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:583)
> - locked <0x00000000c846b6d8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:271)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:206)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1000)
> at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:411)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:637)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:582)
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:468)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:454)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:440)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$4(CacheManagerTest.java:279)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3417/950279155.call(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:543)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c846b8d0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "testng-CacheManagerTest" #24 prio=5 os_prio=0 tid=0x00007fa260ece000 nid=0x44b6 waiting on condition [0x00007fa1e4626000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:695)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:774)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:765)
> at org.infinispan.manager.CacheManagerTest.testConcurrentCacheManagerStopAndGetCache(CacheManagerTest.java:295)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c4628978> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "ForkThread-2,CacheManagerTest" #204172 prio=5 os_prio=0 tid=0x00007fa1900f6800 nid=0x1bf2 waiting on condition [0x00007fa0da9a8000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84181c0> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:681)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:727)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:704)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$5(CacheManagerTest.java:282)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3418/1712334616.run(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$RunnableWrapper.run(AbstractInfinispanTest.java:510)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c8418270> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> - <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8555) CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8555?page=com.atlassian.jira.plugin.... ]
Dan Berindei reopened ISPN-8555:
--------------------------------
> CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
> -------------------------------------------------------------------------
>
> Key: ISPN-8555
> URL: https://issues.jboss.org/browse/ISPN-8555
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.Beta2, 9.1.4.Final
>
>
> If there is any exception, the finally block tries to stop the cache manager without first unblocking the stop method, and it hangs:
> {noformat}
> "ForkThread-1,CacheManagerTest" #204160 prio=5 os_prio=0 tid=0x00007fa1900aa800 nid=0x1be5 waiting on condition [0x00007fa0db5b3000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c846b690> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.CacheManagerTest$2.stop(CacheManagerTest.java:274)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:91)
> at org.infinispan.commons.util.SecurityActions$$Lambda$169/1215571888.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:83)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:88)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:165)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:684)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:583)
> - locked <0x00000000c846b6d8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:271)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:206)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1000)
> at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:411)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:637)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:582)
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:468)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:454)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:440)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$4(CacheManagerTest.java:279)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3417/950279155.call(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:543)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c846b8d0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "testng-CacheManagerTest" #24 prio=5 os_prio=0 tid=0x00007fa260ece000 nid=0x44b6 waiting on condition [0x00007fa1e4626000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:695)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:774)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:765)
> at org.infinispan.manager.CacheManagerTest.testConcurrentCacheManagerStopAndGetCache(CacheManagerTest.java:295)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c4628978> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "ForkThread-2,CacheManagerTest" #204172 prio=5 os_prio=0 tid=0x00007fa1900f6800 nid=0x1bf2 waiting on condition [0x00007fa0da9a8000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00000000c84181c0> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:681)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:727)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:704)
> at org.infinispan.manager.CacheManagerTest.lambda$testConcurrentCacheManagerStopAndGetCache$5(CacheManagerTest.java:282)
> at org.infinispan.manager.CacheManagerTest$$Lambda$3418/1712334616.run(Unknown Source)
> at org.infinispan.test.AbstractInfinispanTest$RunnableWrapper.run(AbstractInfinispanTest.java:510)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
> - <0x00000000c8418270> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> - <0x00000000c84702c0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {noformat}
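The thread dump above boils down to a lock-vs-join cycle: a fork thread takes the cache manager's stop lock and then joins a CompletableFuture that nothing completes, while the test's finally block (TestingUtil.killCacheManagers -> stop) blocks trying to acquire the same lock. Below is a minimal, self-contained sketch of that pattern; managerLock and cacheStopFuture are hypothetical stand-ins, not the real DefaultCacheManager internals:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class StopHangSketch {

    // Returns whether the "test" thread could take the manager lock while
    // the fork thread holds it and is parked on an uncompleted future.
    static boolean demonstrateHang() throws InterruptedException {
        ReentrantLock managerLock = new ReentrantLock();
        CompletableFuture<Void> cacheStopFuture = new CompletableFuture<>();

        Thread forkThread = new Thread(() -> {
            managerLock.lock();          // like DefaultCacheManager.stop()
            try {
                cacheStopFuture.join();  // parks until someone completes it
            } finally {
                managerLock.unlock();
            }
        }, "ForkThread-2");
        forkThread.start();
        Thread.sleep(200);               // let the fork thread grab the lock

        // The finally block needs the same lock; a timed tryLock shows it
        // cannot get it while the fork thread is still parked on the join.
        boolean acquired = managerLock.tryLock(500, TimeUnit.MILLISECONDS);
        if (acquired) {
            managerLock.unlock();
        }

        cacheStopFuture.complete(null);  // the missing "unblock" step
        forkThread.join(2000);
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("lock acquired while fork thread blocked: "
                + demonstrateHang());
    }
}
```

Completing the future before stopping (the "unblock" step the description mentions) is what breaks the cycle and lets stop() proceed.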
--
[JBoss JIRA] (ISPN-8555) CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8555?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8555:
-------------------------------
Affects Version/s: 9.1.3.Final
> CacheManagerTest.testConcurrentCacheManagerStopAndGetCache randomly hangs
> -------------------------------------------------------------------------
>
> Key: ISPN-8555
> URL: https://issues.jboss.org/browse/ISPN-8555
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Beta1, 9.1.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.2.0.Beta2, 9.1.4.Final
>
>
> If there is any exception, the finally block tries to stop the cache manager without first unblocking the stop method, and it hangs (the thread dump is identical to the one in the previous message).
--
[JBoss JIRA] (ISPN-8396) Add interceptor preventing going out of memory
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-8396?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-8396:
-------------------------------------
So I am thinking about this a bit more.
We could add
{code}
COUNT-EXCEPTION
MEMORY-EXCEPTION
{code}
They would behave like the current COUNT and MEMORY strategies, but throw an exception instead of evicting when the limit is reached. I wonder whether the exception behavior should be a separate property, though, decoupling it from the sizing mode. That would also allow on-heap storage to have count-based limiting.
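A rough sketch of what the proposed strategies could look like, with a toy bounded map standing in for the data container. EvictionStrategy, COUNT_EXCEPTION, and the exception type here are illustrative assumptions, not the actual Infinispan API:

```java
import java.util.LinkedHashMap;

public class BoundedStore<K, V> {
    // Hypothetical strategy names mirroring the proposal above.
    public enum EvictionStrategy { COUNT, COUNT_EXCEPTION }

    private final long maxCount;
    private final EvictionStrategy strategy;
    private final LinkedHashMap<K, V> data = new LinkedHashMap<>();

    public BoundedStore(long maxCount, EvictionStrategy strategy) {
        this.maxCount = maxCount;
        this.strategy = strategy;
    }

    public void put(K key, V value) {
        if (!data.containsKey(key) && data.size() >= maxCount) {
            if (strategy == EvictionStrategy.COUNT_EXCEPTION) {
                // Proposed behavior: refuse the write rather than evict.
                throw new IllegalStateException(
                        "container full at " + maxCount + " entries");
            }
            // Current COUNT behavior: evict the oldest entry to make room.
            K oldest = data.keySet().iterator().next();
            data.remove(oldest);
        }
        data.put(key, value);
    }

    public int size() {
        return data.size();
    }
}
```

With COUNT the third put silently evicts the oldest entry; with COUNT_EXCEPTION it throws, which is the behavioral split the comment is asking whether to fold into the strategy name or a separate property.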
> Add interceptor preventing going out of memory
> ----------------------------------------------
>
> Key: ISPN-8396
> URL: https://issues.jboss.org/browse/ISPN-8396
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud, Core
> Reporter: Sebastian Łaskawiec
> Assignee: William Burns
>
> We need an interceptor that calculates the amount of memory required by a PUT and reports an error if that put would cause the cache to run out of memory.
> Note that this is closely tied to the eviction mechanism (we might want to evict some entries on write).
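A toy sketch of the interceptor idea described above, assuming a crude per-entry size estimate. The class and method names are invented for illustration; a real implementation would hook into the interceptor chain and cooperate with eviction:

```java
public class MemoryBoundInterceptorSketch {
    private final long maxBytes;
    private long usedBytes;

    public MemoryBoundInterceptorSketch(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Crude stand-in estimate: fixed per-entry overhead plus 2 bytes per char.
    static long estimateSize(String key, String value) {
        return 64 + 2L * (key.length() + value.length());
    }

    // Before applying a PUT, check whether it would exceed the memory bound
    // and report an error instead of letting the JVM go out of memory.
    public void visitPut(String key, String value) {
        long size = estimateSize(key, value);
        if (usedBytes + size > maxBytes) {
            // A real implementation might first try to evict entries on write.
            throw new IllegalStateException("put of " + size
                    + " bytes would exceed the " + maxBytes + "-byte bound");
        }
        usedBytes += size;
    }
}
```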
--
[JBoss JIRA] (ISPN-8612) HashConfiguration should only allow numOwners=2 with scattered cache
by William Burns (JIRA)
William Burns created ISPN-8612:
-----------------------------------
Summary: HashConfiguration should only allow numOwners=2 with scattered cache
Key: ISPN-8612
URL: https://issues.jboss.org/browse/ISPN-8612
Project: Infinispan
Issue Type: Task
Components: Core
Affects Versions: 9.2.0.Beta1
Reporter: William Burns
Assignee: William Burns
Fix For: 9.2.0.Beta2
Scattered cache always runs with, in effect, two owners; it simply ignores the hash configuration. However, we should make sure numOwners is always set to 2 in the configuration, in case it is queried for other purposes.
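A sketch of the proposed normalization, with invented names standing in for HashConfiguration (this shows the shape of the check, not the real API):

```java
public class HashConfigSketch {
    public enum CacheMode { DIST_SYNC, SCATTERED_SYNC }

    // Force numOwners to 2 whenever the cache mode is scattered,
    // regardless of what the user configured.
    public static int effectiveNumOwners(CacheMode mode, int configuredNumOwners) {
        if (mode == CacheMode.SCATTERED_SYNC) {
            // A scattered cache always behaves as if there were two owners
            // (primary plus one backup), so normalize the configuration.
            return 2;
        }
        return configuredNumOwners;
    }
}
```

Whether to silently normalize or reject other values with a configuration error is the open design choice in this task.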
--