[JBoss JIRA] (ISPN-11035) XSiteResourceTest.testPushAllCaches random failures
by Dan Berindei (Jira)
Dan Berindei created ISPN-11035:
-----------------------------------
Summary: XSiteResourceTest.testPushAllCaches random failures
Key: ISPN-11035
URL: https://issues.jboss.org/browse/ISPN-11035
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication, REST, Test Suite
Affects Versions: 10.1.0.Beta1
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 10.1.0.CR1
The test doesn't seem to wait long enough for the cross-site state transfer to finish. When it fails, there is only one request to `/rest/v2/cache-managers/default/x-site/backups`, issued before the remote site receives the state:
{noformat}
19:32:35,797 TRACE (REST-Test-Test-NodeA-48205-ServerIO-15-9:[]) [REST_ACCESS_LOG] /rest/v2/cache-managers/default/x-site/backups
19:32:35,807 TRACE (jgroups-4,bridge-org.infinispan.rest.resources.Test,_Test-NodeC-47561:SFO-3:[]) [JGroupsTransport] Test-NodeC-47561 received request 628 from Test-NodeA-48205:LON-1: XSiteStatePushCommand{cacheName=CACHE_2, timeout=1200000 (10 keys)}
19:32:35,809 TRACE (REST-Test-Test-NodeC-47561-ServerIO-21-1:[]) [InvocationContextInterceptor] Invoked with command SizeCommand{} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@30b93450]
19:32:35,813 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.rest.resources.XSiteResourceTest.testPushAllCaches
java.lang.AssertionError: expected:<10> but was:<9>
at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.14.3.jar:?]
at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364) ~[testng-6.14.3.jar:?]
at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80) ~[testng-6.14.3.jar:?]
at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:245) ~[testng-6.14.3.jar:?]
at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:252) ~[testng-6.14.3.jar:?]
at org.infinispan.rest.resources.XSiteResourceTest.testPushAllCaches(XSiteResourceTest.java:330) ~[test-classes/:?]
{noformat}
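A common fix for this kind of timing flakiness is to poll the assertion until it passes or a deadline expires, instead of asserting once. The sketch below is a minimal, hypothetical poll-until-true helper in the spirit of the test suite's usual approach; it is not the actual test code, and all names in it are illustrative.

```java
import java.util.function.BooleanSupplier;

public class Eventually {
    /** Polls the condition until it returns true or the timeout elapses. */
    public static void eventually(BooleanSupplier condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(pollMillis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulates waiting for the remote site to report all expected keys;
        // here the "state transfer" simply completes after ~200 ms.
        eventually(() -> System.currentTimeMillis() - start > 200, 5000, 50);
        System.out.println("condition met");
    }
}
```

Wrapping the `assertEquals(10, ...)` check in such a loop would let the test tolerate the state transfer completing slightly after the REST request.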
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-11034) ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery random failures
by Gustavo Fernandes (Jira)
[ https://issues.jboss.org/browse/ISPN-11034?page=com.atlassian.jira.plugin... ]
Gustavo Fernandes reassigned ISPN-11034:
----------------------------------------
Assignee: Gustavo Fernandes
> ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery random failures
> ---------------------------------------------------------------------------------
>
> Key: ISPN-11034
> URL: https://issues.jboss.org/browse/ISPN-11034
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Querying
> Affects Versions: 10.1.0.Beta1
> Reporter: Dan Berindei
> Assignee: Gustavo Fernandes
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.1.0.Final
>
>
> {noformat}
> [OK: 2246, KO: 1, SKIP: 0] Test failed: org.infinispan.client.hotrod.impl.iteration.ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery
> org.infinispan.client.hotrod.exceptions.HotRodClientException:: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=7573 returned server error (status=0x85): org.infinispan.remoting.RemoteException: ISPN000217: Received exception from ProtobufRemoteIteratorIndexingTest-NodeA-43945, see cause for remote stack trace
> org.hibernate.search.bridge.BridgeException: Exception while calling bridge#set
> entity class: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapper
> field bridge: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapperFieldBridge@166e51ab
> java.lang.IllegalArgumentException: messageDescriptor cannot be null
> at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:600)
> at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:678)
> at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:737)
> at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
> at java.base/java.util.stream.ForEachOps$ForEachOp$OfInt.evaluateParallel(ForEachOps.java:188)
> at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
> at java.base/java.util.stream.IntPipeline.forEach(IntPipeline.java:439)
> at java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:596)
> at org.infinispan.client.hotrod.impl.iteration.AbstractRemoteIteratorTest.populateCache(AbstractRemoteIteratorTest.java:34)
> at org.infinispan.client.hotrod.impl.iteration.ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery(ProtobufRemoteIteratorIndexingTest.java:73)
> Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=7573 returned server error (status=0x85): org.infinispan.remoting.RemoteException: ISPN000217: Received exception from ProtobufRemoteIteratorIndexingTest-NodeA-43945, see cause for remote stack trace
> org.hibernate.search.bridge.BridgeException: Exception while calling bridge#set
> entity class: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapper
> field bridge: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapperFieldBridge@166e51ab
> java.lang.IllegalArgumentException: messageDescriptor cannot be null
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:337)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:176)
> at org.infinispan.client.hotrod.impl.transport.netty.HeaderDecoder.decode(HeaderDecoder.java:139)
> at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:94)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281)
> {noformat}
[JBoss JIRA] (ISPN-11034) ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery random failures
by Dan Berindei (Jira)
Dan Berindei created ISPN-11034:
-----------------------------------
Summary: ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery random failures
Key: ISPN-11034
URL: https://issues.jboss.org/browse/ISPN-11034
Project: Infinispan
Issue Type: Bug
Components: Remote Querying
Affects Versions: 10.1.0.Beta1
Reporter: Dan Berindei
Fix For: 10.1.0.Final
{noformat}
[OK: 2246, KO: 1, SKIP: 0] Test failed: org.infinispan.client.hotrod.impl.iteration.ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery
org.infinispan.client.hotrod.exceptions.HotRodClientException:: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=7573 returned server error (status=0x85): org.infinispan.remoting.RemoteException: ISPN000217: Received exception from ProtobufRemoteIteratorIndexingTest-NodeA-43945, see cause for remote stack trace
org.hibernate.search.bridge.BridgeException: Exception while calling bridge#set
entity class: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapper
field bridge: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapperFieldBridge@166e51ab
java.lang.IllegalArgumentException: messageDescriptor cannot be null
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:600)
at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:678)
at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:737)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfInt.evaluateParallel(ForEachOps.java:188)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.base/java.util.stream.IntPipeline.forEach(IntPipeline.java:439)
at java.base/java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:596)
at org.infinispan.client.hotrod.impl.iteration.AbstractRemoteIteratorTest.populateCache(AbstractRemoteIteratorTest.java:34)
at org.infinispan.client.hotrod.impl.iteration.ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery(ProtobufRemoteIteratorIndexingTest.java:73)
Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=7573 returned server error (status=0x85): org.infinispan.remoting.RemoteException: ISPN000217: Received exception from ProtobufRemoteIteratorIndexingTest-NodeA-43945, see cause for remote stack trace
org.hibernate.search.bridge.BridgeException: Exception while calling bridge#set
entity class: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapper
field bridge: org.infinispan.query.remote.impl.indexing.ProtobufValueWrapperFieldBridge@166e51ab
java.lang.IllegalArgumentException: messageDescriptor cannot be null
at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:337)
at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:176)
at org.infinispan.client.hotrod.impl.transport.netty.HeaderDecoder.decode(HeaderDecoder.java:139)
at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:94)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281)
{noformat}
[JBoss JIRA] (ISPN-11033) Cluster fails while inserting data for a while
by Jens Reimann (Jira)
[ https://issues.jboss.org/browse/ISPN-11033?page=com.atlassian.jira.plugin... ]
Jens Reimann updated ISPN-11033:
--------------------------------
Attachment: logs-2.tar.gz
> Cluster fails while inserting data for a while
> ----------------------------------------------
>
> Key: ISPN-11033
> URL: https://issues.jboss.org/browse/ISPN-11033
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 10.0.1.Final
> Environment: 12 node Infinispan cluster, OpenShift 4.2
> Reporter: Jens Reimann
> Priority: Blocker
> Attachments: deviceManagement.proto, infinispan.xml, logs-2.tar.gz
>
>
> Inserting data into an Infinispan cluster works for a while, and then the cluster fails. One pod shows the following log messages:
> {code}
> 14:20:34,432 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p4-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'devices', writing keys [WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399}]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 15 seconds for key WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399} and requestor GlobalTx:infinispan-8-8720:1383960. Lock is held by GlobalTx:infinispan-8-8720:33804
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:292)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:222)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:440)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.lambda$toInvocationStage$3(InfinispanLock.java:416)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
> at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
> at org.infinispan.commons.util.concurrent.CallerRunsRejectOnShutdownPolicy.rejectedExecution(CallerRunsRejectOnShutdownPolicy.java:19)
> at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
> at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
> at org.infinispan.executors.LazyInitializingExecutorService.execute(LazyInitializingExecutorService.java:138)
> at java.base/java.util.concurrent.CompletableFuture$UniCompletion.claim(CompletableFuture.java:568)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:638)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.notifyListeners(InfinispanLock.java:527)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.cancel(InfinispanLock.java:382)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:286)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:222)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> Meanwhile, the other nodes' logs show the following message:
> {code}
> 14:44:26,310 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:28,611 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:30,912 ERROR [org.jgroups.protocols.TCP] (jgroups-126,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> {code}
> After a while, Kubernetes kills the node showing the exception:
> {code}
> NAME READY STATUS RESTARTS AGE
> infinispan-0 1/1 Running 0 83m
> infinispan-1 1/1 Running 0 83m
> infinispan-10 1/1 Running 0 83m
> infinispan-11 1/1 Running 0 83m
> infinispan-2 1/1 Running 0 83m
> infinispan-3 1/1 Running 0 83m
> infinispan-4 1/1 Running 0 83m
> infinispan-5 1/1 Running 0 83m
> infinispan-6 1/1 Running 0 83m
> infinispan-7 1/1 Running 0 83m
> infinispan-8 0/1 CreateContainerError 3 83m
> infinispan-9 1/1 Running 0 83m
> {code}
> But it never becomes ready again.
[JBoss JIRA] (ISPN-11033) Cluster fails while inserting data for a while
by Jens Reimann (Jira)
[ https://issues.jboss.org/browse/ISPN-11033?page=com.atlassian.jira.plugin... ]
Jens Reimann updated ISPN-11033:
--------------------------------
Attachment: deviceManagement.proto
> Cluster fails while inserting data for a while
> ----------------------------------------------
>
> Key: ISPN-11033
> URL: https://issues.jboss.org/browse/ISPN-11033
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 10.0.1.Final
> Environment: 12 node Infinispan cluster, OpenShift 4.2
> Reporter: Jens Reimann
> Priority: Blocker
> Attachments: deviceManagement.proto, infinispan.xml
>
>
> Inserting data into an Infinispan cluster works for a while, and then the cluster fails. One pod shows the following log messages:
> {code}
> 14:20:34,432 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p4-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'devices', writing keys [WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399}]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 15 seconds for key WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399} and requestor GlobalTx:infinispan-8-8720:1383960. Lock is held by GlobalTx:infinispan-8-8720:33804
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:292)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:222)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:440)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.lambda$toInvocationStage$3(InfinispanLock.java:416)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
> at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
> at org.infinispan.commons.util.concurrent.CallerRunsRejectOnShutdownPolicy.rejectedExecution(CallerRunsRejectOnShutdownPolicy.java:19)
> at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
> at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
> at org.infinispan.executors.LazyInitializingExecutorService.execute(LazyInitializingExecutorService.java:138)
> at java.base/java.util.concurrent.CompletableFuture$UniCompletion.claim(CompletableFuture.java:568)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:638)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.notifyListeners(InfinispanLock.java:527)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.cancel(InfinispanLock.java:382)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:286)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:222)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> Meanwhile, the other nodes' logs show the following message:
> {code}
> 14:44:26,310 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:28,611 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:30,912 ERROR [org.jgroups.protocols.TCP] (jgroups-126,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> {code}
> After a while, Kubernetes kills the node showing the exception:
> {code}
> NAME READY STATUS RESTARTS AGE
> infinispan-0 1/1 Running 0 83m
> infinispan-1 1/1 Running 0 83m
> infinispan-10 1/1 Running 0 83m
> infinispan-11 1/1 Running 0 83m
> infinispan-2 1/1 Running 0 83m
> infinispan-3 1/1 Running 0 83m
> infinispan-4 1/1 Running 0 83m
> infinispan-5 1/1 Running 0 83m
> infinispan-6 1/1 Running 0 83m
> infinispan-7 1/1 Running 0 83m
> infinispan-8 0/1 CreateContainerError 3 83m
> infinispan-9 1/1 Running 0 83m
> {code}
> But it never becomes ready again.
[JBoss JIRA] (ISPN-11033) Cluster fails while inserting data for a while
by Jens Reimann (Jira)
Jens Reimann created ISPN-11033:
-----------------------------------
Summary: Cluster fails while inserting data for a while
Key: ISPN-11033
URL: https://issues.jboss.org/browse/ISPN-11033
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 10.0.1.Final
Environment: 12 node Infinispan cluster, OpenShift 4.2
Reporter: Jens Reimann
Attachments: deviceManagement.proto, infinispan.xml
Inserting data into an Infinispan cluster works for a while, and then the cluster fails. One pod shows the following log messages:
{code}
14:20:34,432 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p4-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'devices', writing keys [WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399}]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 15 seconds for key WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399} and requestor GlobalTx:infinispan-8-8720:1383960. Lock is held by GlobalTx:infinispan-8-8720:33804
at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:292)
at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:222)
at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:440)
at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.lambda$toInvocationStage$3(InfinispanLock.java:416)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at org.infinispan.commons.util.concurrent.CallerRunsRejectOnShutdownPolicy.rejectedExecution(CallerRunsRejectOnShutdownPolicy.java:19)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
at org.infinispan.executors.LazyInitializingExecutorService.execute(LazyInitializingExecutorService.java:138)
at java.base/java.util.concurrent.CompletableFuture$UniCompletion.claim(CompletableFuture.java:568)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:638)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.notifyListeners(InfinispanLock.java:527)
at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.cancel(InfinispanLock.java:382)
at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:286)
at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:222)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
{code}
Meanwhile, the other nodes' logs show the following message:
{code}
14:44:26,310 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
14:44:28,611 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
14:44:30,912 ERROR [org.jgroups.protocols.TCP] (jgroups-126,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
{code}
After a while, Kubernetes kills the node showing the exception:
{code}
NAME READY STATUS RESTARTS AGE
infinispan-0 1/1 Running 0 83m
infinispan-1 1/1 Running 0 83m
infinispan-10 1/1 Running 0 83m
infinispan-11 1/1 Running 0 83m
infinispan-2 1/1 Running 0 83m
infinispan-3 1/1 Running 0 83m
infinispan-4 1/1 Running 0 83m
infinispan-5 1/1 Running 0 83m
infinispan-6 1/1 Running 0 83m
infinispan-7 1/1 Running 0 83m
infinispan-8 0/1 CreateContainerError 3 83m
infinispan-9 1/1 Running 0 83m
{code}
But it never becomes ready again.
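The ISPN000299 error above means a write waited 15 seconds for a key lock that another transaction still held. While the root cause here is the cluster becoming unhealthy, a common client-side mitigation for transient lock-acquisition timeouts is to retry the write with exponential backoff. The sketch below is a generic, illustrative retry wrapper under that assumption; it is not part of any Infinispan API, and all names in it are hypothetical.

```java
import java.util.concurrent.Callable;

public class RetryingWriter {
    /** Runs the operation, retrying with exponential backoff on failure. */
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long initialBackoffMillis)
            throws Exception {
        long backoff = initialBackoffMillis;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {   // in real code, catch only the specific timeout exception
                if (attempt >= maxAttempts) {
                    throw e;          // give up after the last attempt
                }
                Thread.sleep(backoff);
                backoff *= 2;         // exponential backoff between attempts
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated write that fails twice with a lock timeout before succeeding.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("ISPN000299: Unable to acquire lock");
            return "replaced";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Retries only paper over contention, of course; they would not help once the node stops responding entirely, as in the JGroups errors below.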
[JBoss JIRA] (ISPN-11033) Cluster fails while inserting data for a while
by Jens Reimann (Jira)
[ https://issues.jboss.org/browse/ISPN-11033?page=com.atlassian.jira.plugin... ]
Jens Reimann updated ISPN-11033:
--------------------------------
Attachment: infinispan.xml
> Cluster fails while inserting data for a while
> ----------------------------------------------
>
> Key: ISPN-11033
> URL: https://issues.jboss.org/browse/ISPN-11033
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 10.0.1.Final
> Environment: 12 node Infinispan cluster, OpenShift 4.2
> Reporter: Jens Reimann
> Priority: Blocker
> Attachments: deviceManagement.proto, infinispan.xml
>
>
> Inserting data into an Infinispan cluster works for a while, and then the cluster fails. One pod shows the following log messages:
> {code}
> 14:20:34,432 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p4-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'devices', writing keys [WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399}]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 15 seconds for key WrappedByteArray{bytes=8201\*\i\o\.\e\n\m\a\s\s\e\.\i\o\t\.\i\n\f\i\n\i\s\p\a\n\.\d\e\v\i\c\e\.\D\e\v\i\c\e\K\e\y8A01\<0A1F\j\b\t\e\s\t\.\i\o\t\/\2\0\1\9\-\1\2\-\0\4\T\0\8\:\2\5\:\3\4\Z1219\h\t\t\p\-\i\n\s\e\r\t\e\r\-\f\r\8\l\m\1\5\2\2\4\7, hashCode=-381217399} and requestor GlobalTx:infinispan-8-8720:1383960. Lock is held by GlobalTx:infinispan-8-8720:33804
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:292)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:222)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:440)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.lambda$toInvocationStage$3(InfinispanLock.java:416)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
> at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
> at org.infinispan.commons.util.concurrent.CallerRunsRejectOnShutdownPolicy.rejectedExecution(CallerRunsRejectOnShutdownPolicy.java:19)
> at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
> at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
> at org.infinispan.executors.LazyInitializingExecutorService.execute(LazyInitializingExecutorService.java:138)
> at java.base/java.util.concurrent.CompletableFuture$UniCompletion.claim(CompletableFuture.java:568)
> at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:638)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.notifyListeners(InfinispanLock.java:527)
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.cancel(InfinispanLock.java:382)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:286)
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.call(DefaultLockManager.java:222)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> Meanwhile, the other nodes' logs show the following message:
> {code}
> 14:44:26,310 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:28,611 ERROR [org.jgroups.protocols.TCP] (jgroups-133,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> 14:44:30,912 ERROR [org.jgroups.protocols.TCP] (jgroups-126,infinispan-3-50867) JGRP000034: infinispan-3-50867: failure sending message to infinispan-8-17029: java.net.SocketTimeoutException: connect timed out
> {code}
> After a while, Kubernetes kills the node showing the exception:
> {code}
> NAME READY STATUS RESTARTS AGE
> infinispan-0 1/1 Running 0 83m
> infinispan-1 1/1 Running 0 83m
> infinispan-10 1/1 Running 0 83m
> infinispan-11 1/1 Running 0 83m
> infinispan-2 1/1 Running 0 83m
> infinispan-3 1/1 Running 0 83m
> infinispan-4 1/1 Running 0 83m
> infinispan-5 1/1 Running 0 83m
> infinispan-6 1/1 Running 0 83m
> infinispan-7 1/1 Running 0 83m
> infinispan-8 0/1 CreateContainerError 3 83m
> infinispan-9 1/1 Running 0 83m
> {code}
> But it never becomes ready again.