[JBoss JIRA] (ISPN-8615) ClusteredLockImplTest.testTryLockWithTimeoutAfterLockWithSmallTimeout random failures
by Dan Berindei (JIRA)
Dan Berindei created ISPN-8615:
----------------------------------
Summary: ClusteredLockImplTest.testTryLockWithTimeoutAfterLockWithSmallTimeout random failures
Key: ISPN-8615
URL: https://issues.jboss.org/browse/ISPN-8615
Project: Infinispan
Issue Type: Bug
Components: Test Suite - Core
Affects Versions: 9.2.0.Beta1
Reporter: Dan Berindei
Assignee: Katia Aresti
Fix For: 9.2.0.Beta2
{noformat}
java.lang.AssertionError:
at org.infinispan.lock.impl.lock.ClusteredLockImplTest.testTryLockWithTimeoutAfterLockWithSmallTimeout(ClusteredLockImplTest.java:94)
{noformat}
It happens rarely in CI, but I can reproduce it every time if I change the timeout to 100 ms. IMO the difference between {{testTryLockWithTimeoutAfterLockWithSmallTimeout}} and {{testTryLockWithTimeoutAfterLockWithBigTimeout}} should be that the former waits for {{tryLock(smalltimeout, unit)}} to time out before unlocking, while the latter unlocks after a short delay and checks that {{tryLock(bigtimeout, unit)}} still succeeds.
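For clarity, here is a minimal sketch of the ordering described above. It is only an illustration: it uses the plain JDK {{ReentrantLock}} and a helper thread instead of Infinispan's async {{ClusteredLock}} API, and the 100 ms / 1 s values stand in for the test's small and big timeouts.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockOrderingSketch {
   public static void main(String[] args) throws Exception {
      Lock lock = new ReentrantLock();
      ExecutorService exec = Executors.newSingleThreadExecutor();

      // Small-timeout case: keep the lock until tryLock(smallTimeout) has timed out,
      // so the acquisition is expected to fail.
      lock.lock();
      Future<Boolean> small = exec.submit(() -> lock.tryLock(100, TimeUnit.MILLISECONDS));
      System.out.println("small timeout acquired: " + small.get()); // false: timed out while still held
      lock.unlock();                                                // unlock only after the timeout elapsed

      // Big-timeout case: unlock a little while after tryLock(bigTimeout) starts waiting,
      // so the acquisition is expected to succeed before the timeout expires.
      lock.lock();
      Future<Boolean> big = exec.submit(() -> {
         boolean acquired = lock.tryLock(1, TimeUnit.SECONDS);
         if (acquired) {
            lock.unlock(); // release right away; only the acquisition result matters here
         }
         return acquired;
      });
      Thread.sleep(100);                                            // short delay, well under the big timeout
      lock.unlock();                                                // release while the waiter is still waiting
      System.out.println("big timeout acquired: " + big.get());     // true

      exec.shutdown();
   }
}
{code}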
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8524) ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5 failing randomly
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8524?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8524:
-------------------------------
Attachment: ScatteredDelayedAvailabilityUpdateTest_ISPN-7919_RpcManager_ResponseCollector_20171212.log.gz
> ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5 failing randomly
> --------------------------------------------------------------------------------------
>
> Key: ISPN-8524
> URL: https://issues.jboss.org/browse/ISPN-8524
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Alpha2
> Reporter: Galder Zamarreño
> Assignee: Radim Vansa
> Labels: testsuite_stability
> Attachments: ScatteredDelayedAvailabilityUpdateTest_ISPN-7919_RpcManager_ResponseCollector_20171212.log.gz
>
>
> http://ci.infinispan.org/job/Infinispan/job/PR-5556/17/
> org.infinispan.partitionhandling.ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5[SCATTERED_SYNC]
> {code}
> java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
> at org.infinispan.partitionhandling.ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate(ScatteredDelayedAvailabilityUpdateTest.java:75)
> at org.infinispan.partitionhandling.DelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5(DelayedAvailabilityUpdateTest.java:39)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 16 stack frames
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8524) ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5 failing randomly
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8524?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8524:
-------------------------------
Status: Open (was: New)
> ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5 failing randomly
> --------------------------------------------------------------------------------------
>
> Key: ISPN-8524
> URL: https://issues.jboss.org/browse/ISPN-8524
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Alpha2
> Reporter: Galder Zamarreño
> Assignee: Radim Vansa
> Labels: testsuite_stability
>
> http://ci.infinispan.org/job/Infinispan/job/PR-5556/17/
> org.infinispan.partitionhandling.ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5[SCATTERED_SYNC]
> {code}
> java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
> at org.infinispan.partitionhandling.ScatteredDelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate(ScatteredDelayedAvailabilityUpdateTest.java:75)
> at org.infinispan.partitionhandling.DelayedAvailabilityUpdateTest.testDelayedAvailabilityUpdate5(DelayedAvailabilityUpdateTest.java:39)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 16 stack frames
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8431) ScatteredSplitAndMergeTest random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8431?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8431:
-------------------------------
Status: Open (was: New)
> ScatteredSplitAndMergeTest random failures
> ------------------------------------------
>
> Key: ISPN-8431
> URL: https://issues.jboss.org/browse/ISPN-8431
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Alpha1
> Environment: Jenkins
> Reporter: Tristan Tarrant
> Assignee: Radim Vansa
> Labels: testsuite_stability
> Attachments: ScatteredSplitAndMergeTest_ISPN-7919_RpcManager_ResponseCollector_20171212.log.gz
>
>
> http://ci.infinispan.org/job/Infinispan/job/master/214/testReport/junit/o...
> java.lang.AssertionError: expected [null] but found [v0]
> at org.infinispan.partitionhandling.ScatteredSplitAndMergeTest.testSplitAndMerge(ScatteredSplitAndMergeTest.java:80)
> at org.infinispan.partitionhandling.ScatteredSplitAndMergeTest.testSplitAndMerge5(ScatteredSplitAndMergeTest.java:51)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 20 stack frames
>
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8431) ScatteredSplitAndMergeTest random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8431?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8431:
-------------------------------
Attachment: ScatteredSplitAndMergeTest_ISPN-7919_RpcManager_ResponseCollector_20171212.log.gz
> ScatteredSplitAndMergeTest random failures
> ------------------------------------------
>
> Key: ISPN-8431
> URL: https://issues.jboss.org/browse/ISPN-8431
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.2.0.Alpha1
> Environment: Jenkins
> Reporter: Tristan Tarrant
> Assignee: Radim Vansa
> Labels: testsuite_stability
> Attachments: ScatteredSplitAndMergeTest_ISPN-7919_RpcManager_ResponseCollector_20171212.log.gz
>
>
> http://ci.infinispan.org/job/Infinispan/job/master/214/testReport/junit/o...
> java.lang.AssertionError: expected [null] but found [v0]
> at org.infinispan.partitionhandling.ScatteredSplitAndMergeTest.testSplitAndMerge(ScatteredSplitAndMergeTest.java:80)
> at org.infinispan.partitionhandling.ScatteredSplitAndMergeTest.testSplitAndMerge5(ScatteredSplitAndMergeTest.java:51)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 20 stack frames
>
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8613) Handle PartitionHandling configs in Asymmetric Caches
by Ryan Emerson (JIRA)
Ryan Emerson created ISPN-8613:
----------------------------------
Summary: Handle PartitionHandling configs in Asymmetric Caches
Key: ISPN-8613
URL: https://issues.jboss.org/browse/ISPN-8613
Project: Infinispan
Issue Type: Enhancement
Reporter: Ryan Emerson
Currently the ClusterTopologyManagerImpl assumes that all caches have a cache configuration defined. Instead, we should add the required PartitionHandling config to CacheJoinInfo and throw an exception if the cache is not defined on the coordinator.
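A rough sketch of the idea follows. All type, field, and method names here are placeholders for illustration; they are not the actual {{CacheJoinInfo}} / {{ClusterTopologyManagerImpl}} members.
{code:java}
// Illustrative sketch only -- names are placeholders, not the real Infinispan API.
enum PartitionHandlingPolicy { ALLOW_READ_WRITES, ALLOW_READS, DENY_READ_WRITES }

// The joiner ships its partition handling policy as part of the join request,
// instead of the coordinator deriving it from a locally defined configuration.
class CacheJoinInfoSketch {
   final int numOwners;
   final PartitionHandlingPolicy partitionHandling;

   CacheJoinInfoSketch(int numOwners, PartitionHandlingPolicy partitionHandling) {
      this.numOwners = numOwners;
      this.partitionHandling = partitionHandling;
   }
}

class CoordinatorSketch {
   // Placeholder for "is this cache defined in the coordinator's configuration?"
   boolean isCacheDefinedLocally(String cacheName) {
      return false;
   }

   void handleJoin(String cacheName, CacheJoinInfoSketch joinInfo) {
      if (!isCacheDefinedLocally(cacheName)) {
         // Fail fast instead of assuming every joining cache is defined locally.
         throw new IllegalStateException("Cache " + cacheName + " is not defined on the coordinator");
      }
      // ...otherwise build the availability strategy from joinInfo.partitionHandling...
   }
}
{code}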
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8611) Persistent volume names too long
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-8611?page=com.atlassian.jira.plugin.... ]
Martin Gencur updated ISPN-8611:
--------------------------------
Status: Open (was: New)
> Persistent volume names too long
> --------------------------------
>
> Key: ISPN-8611
> URL: https://issues.jboss.org/browse/ISPN-8611
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud
> Reporter: Martin Gencur
> Assignee: Martin Gencur
>
> The default application names are "caching-service-app" and "shared-memory-service-app", respectively.
> The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
> Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0") is created.
> However, the maximum length of a service name is 63 chars, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
> When the whole name is under 63 characters, the volume claim is created successfully.
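For reference, a quick length check of the generated names quoted above (a plain Java sketch; 63 characters is the Kubernetes service name limit that the provisioner runs into):
{code:java}
public class ServiceNameLength {
   public static void main(String[] args) {
      // Names as quoted in the report: "glusterfs-dynamic-" + the PVC name,
      // where the PVC name is "${APPLICATION_NAME}-data-" + the pod name.
      String caching = "glusterfs-dynamic-caching-service-app-data-caching-service-app-0";
      String sharedMemory = "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0";
      System.out.println(caching.length());      // 64 -> just over the 63-character limit
      System.out.println(sharedMemory.length()); // 76 -> well over the limit
   }
}
{code}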
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8611) Persistent volume names too long
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-8611?page=com.atlassian.jira.plugin.... ]
Martin Gencur updated ISPN-8611:
--------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/jboss-container-images/jboss-dataservices-image/pull/56
> Persistent volume names too long
> --------------------------------
>
> Key: ISPN-8611
> URL: https://issues.jboss.org/browse/ISPN-8611
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud
> Reporter: Martin Gencur
> Assignee: Martin Gencur
>
> The default application names are "caching-service-app" and "shared-memory-service-app", respectively.
> The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
> Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0") is created.
> However, the maximum length of a service name is 63 chars, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
> When the whole name is under 63 characters, the volume claim is created successfully.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8611) Persistent volume names too long
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-8611?page=com.atlassian.jira.plugin.... ]
Martin Gencur updated ISPN-8611:
--------------------------------
Summary: Persistent volume names too long (was: Caching and shared memory default service names too long)
> Persistent volume names too long
> --------------------------------
>
> Key: ISPN-8611
> URL: https://issues.jboss.org/browse/ISPN-8611
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud
> Reporter: Martin Gencur
>
> The default application names are "caching-service-app" and "shared-memory-service-app", respectively.
> The persistent volume claim (for StatefulSets) is called {code}${APPLICATION_NAME}-data{code}
> Now when I use e.g. GlusterFS for persistent volumes and deploy this application, a new service called "glusterfs-dynamic-caching-service-app-data-caching-service-app-0" (or "glusterfs-dynamic-shared-memory-service-app-data-shared-memory-service-app-0") is created.
> However, the maximum length of a service name is 63 chars, and the persistent volume claim fails with: {code}Failed to provision volume with StorageClass "gluster-container": glusterfs: create volume err: failed to create endpoint/service <nil>.{code}
> When the whole name is under 63 characters, the volume claim is created successfully.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)