[JBoss JIRA] (ISPN-9449) Node X left the cluster - SuspectException: ISPN000400: Node X was suspected
by Diego Lovison (JIRA)
Diego Lovison created ISPN-9449:
-----------------------------------
Summary: Node X left the cluster - SuspectException: ISPN000400: Node X was suspected
Key: ISPN-9449
URL: https://issues.jboss.org/browse/ISPN-9449
Project: Infinispan
Issue Type: Bug
Reporter: Diego Lovison
Assignee: Dan Berindei
Priority: Critical
After commit df9ffb5ba46752d2509aa3a08c59519469cc929a in Infinispan, the tests regression-cs-hotrod-dist-reads and regression-cs-hotrod-repl-reads are failing.
I ran the same tests 3 times at commit df9ffb5ba46752d2509aa3a08c59519469cc929a and they pass.
If we run with master for "regression-cs-hotrod-dist-reads", it will fail because of:
{noformat}
11:18:04,874 INFO [org.radargun.RemoteMasterConnection] (sc-main) Message successfully sent to the master
11:22:06,470 INFO [org.radargun.Slave] (sc-main) Stage 'BasicOperationsTest' should not be executed
11:22:06,472 INFO [org.radargun.RemoteMasterConnection] (sc-main) Message successfully sent to the master
11:22:34,182 INFO [org.infinispan.CLUSTER] (jgroups-112,slave0) ISPN000094: Received new cluster view for channel cluster: [slave3|8] (7) [slave3, slave6, slave0, slave5, slave4, slave2, slave1]
11:22:34,190 INFO [org.infinispan.CLUSTER] (jgroups-112,slave0) ISPN100001: Node slave7 left the cluster
11:22:48,182 INFO [org.infinispan.CLUSTER] (jgroups-115,slave0) ISPN000094: Received new cluster view for channel cluster: [slave3|9] (6) [slave3, slave6, slave0, slave5, slave4, slave1]
11:22:48,191 INFO [org.infinispan.CLUSTER] (jgroups-115,slave0) ISPN100001: Node slave2 left the cluster
11:23:09,176 INFO [org.infinispan.CLUSTER] (jgroups-111,slave0) ISPN000094: Received new cluster view for channel cluster: [slave3|10] (5) [slave3, slave6, slave0, slave5, slave1]
11:23:09,179 INFO [org.infinispan.CLUSTER] (jgroups-111,slave0) ISPN100001: Node slave4 left the cluster
11:23:20,173 INFO [org.infinispan.CLUSTER] (jgroups-121,slave0) ISPN000094: Received new cluster view for channel cluster: [slave6|11] (4) [slave6, slave0, slave5, slave1]
11:23:20,178 INFO [org.infinispan.CLUSTER] (jgroups-121,slave0) ISPN100001: Node slave3 left the cluster
11:23:20,199 WARN [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p5-t60) ISPN000210: Failed to request state of cache memcachedCache from node slave3, segments {114 184 190-191}: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave3 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave3 was suspected
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:92)
at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:261)
at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134)
at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113)
at org.infinispan.statetransfer.StateConsumerImpl.lambda$addTransfer$7(StateConsumerImpl.java:1073)
at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:130)
at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave3 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
... 3 more
[CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave3 was suspected]
11:23:23,916 WARN [org.jgroups.protocols.pbcast.NAKACK2] (jgroups-117,slave0) JGRP000011: slave0: dropped message 319143 from non-member slave3 (view=[slave6|11] (4) [slave6, slave0, slave5, slave1])
11:23:34,142 INFO [org.infinispan.CLUSTER] (jgroups-111,slave0) ISPN000094: Received new cluster view for channel cluster: [slave6|12] (3) [slave6, slave0, slave5]
11:23:34,145 INFO [org.infinispan.CLUSTER] (jgroups-111,slave0) ISPN100001: Node slave1 left the cluster
11:23:34,154 WARN [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p5-t61) ISPN000210: Failed to request state of cache hotrodDist from node slave1, segments {59-60 71-74 78 81 146 180-181 185 192 217}: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:92)
at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:261)
at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134)
at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113)
at org.infinispan.statetransfer.StateConsumerImpl.lambda$addTransfer$7(StateConsumerImpl.java:1073)
at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:130)
at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
... 3 more
[CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected]
11:23:34,183 WARN [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p5-t58) ISPN000210: Failed to request state of cache rest from node slave1, segments {59-60 71-74 78 81 146 180-181 185 192 217}: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:92)
at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:261)
at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134)
at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113)
at org.infinispan.statetransfer.StateConsumerImpl.lambda$addTransfer$7(StateConsumerImpl.java:1073)
at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:130)
at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
... 3 more
[CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected]
11:23:34,177 WARN [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p5-t54) ISPN000210: Failed to request state of cache memcachedCache from node slave1, segments {59-60 71-74 78 81 146 180-181 185 192 217}: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:92)
at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:261)
at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134)
at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113)
at org.infinispan.statetransfer.StateConsumerImpl.lambda$addTransfer$7(StateConsumerImpl.java:1073)
at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:130)
at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected
at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:52)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$null$3(JGroupsTransport.java:672)
at org.infinispan.remoting.transport.impl.RequestRepository.lambda$forEach$0(RequestRepository.java:60)
at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
at org.infinispan.remoting.transport.impl.RequestRepository.forEach(RequestRepository.java:60)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$receiveClusterView$4(JGroupsTransport.java:672)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
... 3 more
[CIRCULAR REFERENCE:java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node slave1 was suspected]
11:23:34,203 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p4-t5) ISPN000208: No live owners found for segments {59-60 71 73-74 78 81 146 180-181 185 192 217} of cache rest. Excluded owners: []
11:23:34,318 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p4-t11) ISPN000208: No live owners found for segments {59-60 71 73-74 78 81 146 180-181 185 192 217} of cache memcachedCache. Excluded owners: []
11:23:34,332 WARN [org.infinispan.statetransfer.StateConsumerImpl] (stateTransferExecutor-thread--p5-t61) Discarding received cache entries for segment 72 of cache memcachedCache because they do not belong to this node.
11:23:34,396 WARN [org.infinispan.statetransfer.StateConsumerImpl] (stateTransferExecutor-thread--p5-t57) Discarding received cache entries for segment 72 of cache memcachedCache because they do not belong to this node.
11:23:34,398 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p4-t1) ISPN000208: No live owners found for segments {71 74 146} of cache hotrodDist. Excluded owners: []
11:23:34,521 WARN [org.jgroups.protocols.pbcast.NAKACK2] (jgroups-123,slave0) JGRP000011: slave0: dropped message 338000 from non-member slave1 (view=[slave6|12] (3) [slave6, slave0, slave5])
11:23:51,210 WARN [org.jgroups.protocols.pbcast.GMS] (jgroups-124,slave0) slave0: not member of view [slave6|13]; discarding it
11:24:02,223 WARN [org.jgroups.protocols.pbcast.GMS] (jgroups-119,slave0) slave0: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([slave6|13]) doesn't match the current view-id ([slave6|12]); discarding delta view [slave6|14], ref-view=[slave6|13], left=[slave5]
11:24:02,231 WARN [org.jgroups.protocols.pbcast.GMS] (jgroups-119,slave0) slave0: not member of view [slave6|14]; discarding it
11:24:11,932 WARN [org.jgroups.protocols.pbcast.GMS] (jgroups-119,slave0) slave0: not member of view [slave5|15]; discarding it
11:24:12,485 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-129,slave0) ISPN000094: Received new cluster view for channel cluster: [slave0|16] (2) [slave0, slave5]
11:24:12,488 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-129,slave0) ISPN100001: Node slave6 left the cluster
11:24:14,492 WARN [org.jgroups.protocols.pbcast.GMS] (VERIFY_SUSPECT.TimerThread-129,slave0) slave0: failed to collect all ACKs (expected=1) for view [slave0|16] after 2000ms, missing 1 ACKs from (1) slave5
11:24:34,209 WARN [org.jgroups.protocols.pbcast.NAKACK2] (jgroups-128,slave0) JGRP000011: slave0: dropped message 319152 from non-member slave3 (view=[slave0|16] (2) [slave0, slave5]) (received 11 identical messages from slave3 in the last 70294 ms)
11:25:10,121 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN000093: Received new, MERGED cluster view for channel cluster: MergeView::[slave3|17] (7) [slave3, slave5, slave2, slave0, slave4, slave1, slave7], 6 subgroups: [slave5|15] (1) [slave5], [slave3|15] (2) [slave3, slave0], [slave0|16] (2) [slave0, slave5], [slave3|7] (8) [slave3, slave6, slave0, slave5, slave4, slave2, slave7, slave1], [slave3|8] (7) [slave3, slave6, slave0, slave5, slave4, slave2, slave1], [slave3|9] (6) [slave3, slave6, slave0, slave5, slave4, slave1]
11:25:10,124 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN100000: Node slave3 joined the cluster
11:25:10,127 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN100000: Node slave2 joined the cluster
11:25:10,128 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN100000: Node slave4 joined the cluster
11:25:10,129 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN100000: Node slave1 joined the cluster
11:25:10,130 INFO [org.infinispan.CLUSTER] (jgroups-128,slave0) ISPN100000: Node slave7 joined the cluster
11:25:10,362 WARN [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (stateTransferExecutor-thread--p5-t58) ISPN000517: Ignoring cache topology from [slave0] during merge: CacheTopology{id=49, phase=NO_REBALANCE, rebalanceId=14, currentCH=DefaultConsistentHash{ns=256, owners = (3)[slave6: 86+75, slave5: 82+89, slave0: 88+92]}, pendingCH=null, unionCH=null, actualMembers=[slave6, slave5, slave0], persistentUUIDs=[c1c2227d-2656-431e-a5b5-721459759a7f, 30123e75-2b33-46bc-a2e2-15f882782719, 2d550811-e842-4382-b8ae-3f36973f49f9]}
11:25:10,374 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=rest]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 57
11:25:10,374 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t73) [Context=__global_tx_table__]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 43
11:25:10,374 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t70) [Context=___hotRodTopologyCache]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 43
11:25:10,374 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t71) [Context=___protobuf_metadata]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 43
11:25:10,374 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t72) [Context=org.infinispan.CONFIG]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 43
11:25:10,394 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=rest]ISPN100008: Updating cache members list [slave5], topology id 58
11:25:10,415 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t70) [Context=___hotRodTopologyCache]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 44
11:25:10,415 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t71) [Context=___protobuf_metadata]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 44
11:25:10,416 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t73) [Context=__global_tx_table__]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 44
11:25:10,419 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t71) [Context=hotrodRepl]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 43
11:25:10,420 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=rest]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 59
11:25:10,420 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t73) [Context=___script_cache]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 42
11:25:10,419 WARN [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (stateTransferExecutor-thread--p5-t70) ISPN000517: Ignoring cache topology from [slave0] during merge: CacheTopology{id=47, phase=READ_ALL_WRITE_ALL, rebalanceId=14, currentCH=DefaultConsistentHash{ns=256, owners = (3)[slave6: 83+30, slave5: 88+37, slave0: 85+35]}, pendingCH=DefaultConsistentHash{ns=256, owners = (3)[slave6: 86+75, slave5: 82+89, slave0: 88+92]}, unionCH=null, actualMembers=[slave6, slave5, slave0], persistentUUIDs=[c1c2227d-2656-431e-a5b5-721459759a7f, 30123e75-2b33-46bc-a2e2-15f882782719, 2d550811-e842-4382-b8ae-3f36973f49f9]}
11:25:10,422 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t70) [Context=memcachedCache]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 52
11:25:10,424 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t72) [Context=org.infinispan.CONFIG]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 44
11:25:10,423 WARN [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (stateTransferExecutor-thread--p5-t58) ISPN000517: Ignoring cache topology from [slave0] during merge: CacheTopology{id=44, phase=READ_OLD_WRITE_ALL, rebalanceId=13, currentCH=DefaultConsistentHash{ns=256, owners = (3)[slave6: 83+30, slave5: 88+37, slave0: 85+35]}, pendingCH=DefaultConsistentHash{ns=256, owners = (3)[slave6: 86+75, slave5: 82+89, slave0: 88+92]}, unionCH=null, actualMembers=[slave6, slave5, slave0], persistentUUIDs=[c1c2227d-2656-431e-a5b5-721459759a7f, 30123e75-2b33-46bc-a2e2-15f882782719, 2d550811-e842-4382-b8ae-3f36973f49f9]}
11:25:10,426 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t70) [Context=memcachedCache]ISPN100008: Updating cache members list [slave5], topology id 53
11:25:10,426 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=hotrodDist]ISPN100007: After merge (or coordinator change), recovered members [slave5] with topology id 49
11:25:10,430 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=hotrodDist]ISPN100008: Updating cache members list [slave5], topology id 50
11:25:10,431 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t71) [Context=hotrodRepl]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 44
11:25:10,431 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t73) [Context=___script_cache]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 43
11:25:10,435 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t70) [Context=memcachedCache]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 54
11:25:10,438 INFO [org.infinispan.CLUSTER] (stateTransferExecutor-thread--p5-t58) [Context=hotrodDist]ISPN100002: Starting rebalance with members [slave5, slave0], phase READ_OLD_WRITE_ALL, topology id 51
11:25:12,134 WARN [org.jgroups.protocols.pbcast.GMS] (jgroups-128,slave0) slave0: failed to collect all ACKs (expected=1) for view [slave3|17] after 2000ms, missing 1 ACKs from (1) slave5
11:26:11,452 INFO [org.radargun.Slave] (sc-main) Starting stage ClusterSplitVerify
11:26:11,453 ERROR [org.radargun.stages.monitor.ClusterSplitVerifyStage] (sc-main) Cluster size at the beginning of the test was 8 but changed to 7 during the test! Perhaps a split occured, or a new node joined?
11:26:11,454 INFO [org.radargun.Slave] (sc-main) Finished stage ClusterSplitVerify
11:26:11,455 INFO [org.radargun.RemoteMasterConnection] (sc-main) Message successfully sent to the master
11:26:11,571 INFO [org.radargun.Slave] (sc-main) Starting stage ScenarioDestroy
11:26:11,573 INFO [org.radargun.stages.ScenarioDestroyStage] (sc-main) Scenario finished, destroying...
11:26:11,575 INFO [org.radargun.stages.ScenarioDestroyStage] (sc-main) Memory before cleanup:
Runtime free: 1,484,671 kb
Runtime max:27,960,320 kb
Runtime total:1,974,784 kb
{noformat}
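For context, ISPN000400 (SuspectException) is raised when the JGroups failure detector removes the RPC target from the cluster view while a request to it is still in flight. During state transfer this is logged as ISPN000210 and the transfer is retried from the remaining owners, until none are left (the ISPN000208 errors above). A minimal, hypothetical sketch of where the same exception class can surface to application code (this is not part of the Radargun harness, and Infinispan generally retries such operations internally on the new topology; the loop is illustrative only):
{noformat}
import org.infinispan.Cache;
import org.infinispan.remoting.transport.jgroups.SuspectException;

public final class SuspectRetry {
    // Illustrative sketch: retry a read when the target node is suspected
    // mid-request instead of propagating the failure to the caller.
    public static <K, V> V getWithRetry(Cache<K, V> cache, K key) {
        while (true) {
            try {
                return cache.get(key);
            } catch (SuspectException e) {
                // ISPN000400: the target left the view while this RPC was in
                // flight; the next attempt is routed on the new topology.
            }
        }
    }
}
{noformat}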
If we run with master for "regression-cs-hotrod-dist-writes", it will fail because of:
{noformat}
21:39:52,651 INFO [org.radargun.RemoteSlaveConnection] (main) Master started and listening for connection on: /172.18.1.18:2103
21:39:52,651 INFO [org.radargun.RemoteSlaveConnection] (main) Waiting 5 seconds for server socket to open completely
21:39:57,655 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 16 slaves.
21:39:57,666 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 15 slaves.
21:39:57,667 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 14 slaves.
21:39:57,668 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 13 slaves.
21:39:57,669 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 12 slaves.
21:39:57,670 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 11 slaves.
21:39:57,671 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 10 slaves.
21:39:57,672 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 9 slaves.
21:39:57,674 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 8 slaves.
21:39:57,675 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 7 slaves.
21:39:57,676 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 6 slaves.
21:39:57,677 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 5 slaves.
21:39:57,678 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 4 slaves.
21:39:57,708 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 3 slaves.
21:39:57,710 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 2 slaves.
21:39:57,711 INFO [org.radargun.RemoteSlaveConnection] (main) Awaiting registration from 1 slaves.
21:44:57,668 ERROR [org.radargun.Master] (main) Exception in Master.run:
java.io.IOException: 1 slaves haven't connected within timeout!
at org.radargun.RemoteSlaveConnection.establish(RemoteSlaveConnection.java:112) ~[radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.Master.run(Master.java:59) [radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.LaunchMaster.main(LaunchMaster.java:34) [radargun-core-3.0.0-SNAPSHOT.jar:?]
21:44:57,697 WARN [org.radargun.RemoteSlaveConnection] (main) Failed to send termination to slaves.
java.lang.NullPointerException: null
at org.radargun.RemoteSlaveConnection$SlaveRecord.access$100(RemoteSlaveConnection.java:63) ~[radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.RemoteSlaveConnection.mcastBuffer(RemoteSlaveConnection.java:201) ~[radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.RemoteSlaveConnection.mcastObject(RemoteSlaveConnection.java:211) ~[radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.RemoteSlaveConnection.release(RemoteSlaveConnection.java:357) [radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.Master.run(Master.java:155) [radargun-core-3.0.0-SNAPSHOT.jar:?]
at org.radargun.LaunchMaster.main(LaunchMaster.java:34) [radargun-core-3.0.0-SNAPSHOT.jar:?]
21:44:57,703 INFO [org.radargun.ShutDownHook] (Thread-1) Master process is being shutdown
Master 16071 finished with value 127
kill: sending signal to 16071 failed: No such process
kill: sending signal to 16071 failed: No such process
{noformat}
The tests exercise HotRod read operations against replicated and distributed caches (a minimal client sketch is given after the commit list below).
The scenario is:
8 - Servers
8 - Slaves
Commits:
13/May - df9ffb5ba46752d2509aa3a08c59519469cc929a - Passed - Executed 3 times
14/May - df9ffb5ba46752d2509aa3a08c59519469cc929a - Passed - Executed 3 times (I didn't execute this one because it is the same commit as above; listed just to keep the history)
15/May - 26ba1aeb1d66cf65fb5c410ec98629093c29ab0b - Need to double check (it passed)
16/May - 92a5e4f62c39d63221aed2ed5763081b626874e6 - Need to double check (it started failing here)
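For reference, the failing workload is an ordinary HotRod read loop against the server cluster. A minimal client sketch, assuming a server on 127.0.0.1:11222 and the "hotrodDist" cache name from the log above (this is not the Radargun stage itself):
{noformat}
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodReadExample {
    public static void main(String[] args) {
        // Connect to one of the servers; the client learns the cluster
        // topology from the server and routes reads to the key owners.
        RemoteCacheManager rcm = new RemoteCacheManager(
                new ConfigurationBuilder()
                        .addServer().host("127.0.0.1").port(11222)
                        .build());
        RemoteCache<String, String> cache = rcm.getCache("hotrodDist");
        cache.put("key-1", "value-1");
        System.out.println(cache.get("key-1")); // the "reads" part of the test
        rcm.stop();
    }
}
{noformat}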
[JBoss JIRA] (ISPN-9446) Application hangs when the owners leave the cluster when clustered maxIdle expiration is enabled
by Diego Lovison (JIRA)
[ https://issues.jboss.org/browse/ISPN-9446?page=com.atlassian.jira.plugin.... ]
Diego Lovison updated ISPN-9446:
--------------------------------
Issue Type: Bug (was: Task)
> Application hangs when the owners leave the cluster when clustered maxIdle expiration is enabled
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-9446
> URL: https://issues.jboss.org/browse/ISPN-9446
> Project: Infinispan
> Issue Type: Bug
> Components: Expiration
> Reporter: Diego Lovison
> Assignee: William Burns
> Priority: Critical
>
> 1) Execute the following application 3 times (a sketch of such an application is given after these steps).
> 2) Wait until the application receives all topology changes
> 3) See which application is returning null values. Let's say APP_3
> 4) Identify the other two applications that are returning nonnull values. Let's say APP_1 and APP_2
> 5) The first line of APP_1 and APP_2 shows the pid number
> 6) Execute the following command: "kill -STOP $PID_APP_1 && kill -STOP $PID_APP_2"
> 7) Watch APP_3 until it receives a new topology, then wait 20 seconds.
> 8) Execute the following command: "kill -CONT $PID_APP_1 && kill -CONT $PID_APP_2"
> APP_1 and APP_2 hang and do not continue anymore.
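> The application itself is not included in this message. A minimal sketch consistent with the stack trace below (embedded clustered cache named "weather", clustered maxIdle expiration, key "Brazil", class name taken from the trace; the value type and timings here are illustrative assumptions) could look like this:
> {noformat}
> import java.lang.management.ManagementFactory;
> import java.util.concurrent.TimeUnit;
> import org.infinispan.Cache;
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.global.GlobalConfigurationBuilder;
> import org.infinispan.manager.DefaultCacheManager;
>
> public class ClusteredCacheManagerExample {
>     public static void main(String[] args) throws Exception {
>         // Step 5: the first output line carries the pid ("pid@host").
>         System.out.println(ManagementFactory.getRuntimeMXBean().getName());
>         DefaultCacheManager cm = new DefaultCacheManager(
>                 GlobalConfigurationBuilder.defaultClusteredBuilder().build());
>         ConfigurationBuilder cb = new ConfigurationBuilder();
>         cb.clustering().cacheMode(CacheMode.DIST_SYNC)
>           .expiration().maxIdle(10, TimeUnit.SECONDS);
>         cm.defineConfiguration("weather", cb.build());
>         Cache<String, String> cache = cm.getCache("weather");
>         cache.put("Brazil", "37.0");
>         while (true) {
>             // Steps 3/4: a node may start seeing nulls once the entry's
>             // maxIdle elapses while the owners are stopped.
>             System.out.println(cache.get("Brazil"));
>             Thread.sleep(1000);
>         }
>     }
> }
> {noformat}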
> Here is the stack trace from APP_1
> {noformat}
> 2018-08-22 11:45:06,537 INFO [org.infinispan.CLUSTER] (jgroups-3,Diegos-MacBook-Pro-29329:[]) ISPN000094: Received new cluster view for channel WeatherApp: [Diegos-MacBook-Pro-21527|3] (2) [Diegos-MacBook-Pro-21527, Diegos-MacBook-Pro-29329]
> 2018-08-22 11:45:06,542 INFO [org.infinispan.CLUSTER] (jgroups-3,Diegos-MacBook-Pro-29329:[]) ISPN100001: Node Diegos-MacBook-Pro-45456 left the cluster
> EventImpl{type=VIEW_CHANGED, newMembers=[Diegos-MacBook-Pro-21527, Diegos-MacBook-Pro-29329], oldMembers=[Diegos-MacBook-Pro-21527, Diegos-MacBook-Pro-45456, Diegos-MacBook-Pro-29329], localAddress=Diegos-MacBook-Pro-29329, viewId=3, subgroupsMerged=null, mergeView=false}
> 2018-08-22 11:45:16,548 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (remote-thread--p2-t1:[]) ISPN000136: Error executing command RemoveExpiredCommand, writing keys [Brazil]
> org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 10 seconds for key Brazil and requestor CommandInvocation:Diegos-MacBook-Pro-45456:1. Lock is held by CommandInvocation:Diegos-MacBook-Pro-29329:1
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:288) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:218) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:436) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.toInvocationStage(InfinispanLock.java:408) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.toInvocationStage(DefaultLockManager.java:248) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.toInvocationStage(DefaultLockManager.java:272) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.nonTxLockAndInvokeNext(AbstractLockingInterceptor.java:294) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:126) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitRemoveCommand(AbstractLockingInterceptor.java:102) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:306) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:252) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.visitRemoveCommand(StateTransferInterceptor.java:108) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.visitRemoveCommand(CacheMgmtInterceptor.java:427) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitRemoveCommand(DDAsyncInterceptor.java:65) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:123) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:56) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitRemoveCommand(DDAsyncInterceptor.java:65) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:50) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invokeAsync(AsyncInterceptorChainImpl.java:234) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommandAsync(BaseRpcInvokingCommand.java:63) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.remote.SingleRpcCommand.invokeAsync(SingleRpcCommand.java:57) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokeCommand(BasePerCacheInboundInvocationHandler.java:94) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.invoke(BaseBlockingRunnable.java:99) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runAsync(BaseBlockingRunnable.java:71) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:40) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> 2018-08-22 11:45:16,555 WARN [org.infinispan.remoting.inboundhandler.TrianglePerCacheInboundInvocationHandler] (remote-thread--p2-t1:[]) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='weather', command=RemoveExpiredCommand{key=Brazil, value=LocationWeather{temperature=37.0, country='Brazil'}, lifespan=null, maxIde=true}}
> org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 10 seconds for key Brazil and requestor CommandInvocation:Diegos-MacBook-Pro-45456:1. Lock is held by CommandInvocation:Diegos-MacBook-Pro-29329:1
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:288) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.get(DefaultLockManager.java:218) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.checkState(InfinispanLock.java:436) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.InfinispanLock$LockPlaceHolder.toInvocationStage(InfinispanLock.java:408) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.toInvocationStage(DefaultLockManager.java:248) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.toInvocationStage(DefaultLockManager.java:272) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.nonTxLockAndInvokeNext(AbstractLockingInterceptor.java:294) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:126) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitRemoveCommand(AbstractLockingInterceptor.java:102) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:306) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:252) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.statetransfer.StateTransferInterceptor.visitRemoveCommand(StateTransferInterceptor.java:108) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.visitRemoveCommand(CacheMgmtInterceptor.java:427) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitRemoveCommand(DDAsyncInterceptor.java:65) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:123) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:56) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitRemoveCommand(DDAsyncInterceptor.java:65) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.Visitor.visitRemoveExpiredCommand(Visitor.java:66) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.write.RemoveExpiredCommand.acceptVisitor(RemoveExpiredCommand.java:55) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:50) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invokeAsync(AsyncInterceptorChainImpl.java:234) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommandAsync(BaseRpcInvokingCommand.java:63) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.commands.remote.SingleRpcCommand.invokeAsync(SingleRpcCommand.java:57) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokeCommand(BasePerCacheInboundInvocationHandler.java:94) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.invoke(BaseBlockingRunnable.java:99) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runAsync(BaseBlockingRunnable.java:71) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:40) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> 2018-08-22 11:45:21,546 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p3-t1:[]) ISPN000136: Error executing command RemoveExpiredCommand, writing keys [Brazil]
> org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at org.infinispan.util.concurrent.CommandAckCollector.createTimeoutException(CommandAckCollector.java:222) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector.access$800(CommandAckCollector.java:58) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:275) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:255) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> 2018-08-22 11:45:21,547 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (main:[]) ISPN000136: Error executing command GetKeyValueCommand, writing keys []
> org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at org.infinispan.util.concurrent.CommandAckCollector.createTimeoutException(CommandAckCollector.java:222) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector.access$800(CommandAckCollector.java:58) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:275) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:255) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_162]
> at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_162]
> 2018-08-22 11:45:21,547 WARN [org.infinispan.expiration.impl.ClusterExpirationManager] (expiration-thread--p6-t1:[]) ISPN000026: Caught exception purging data container!
> java.util.concurrent.CompletionException: org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375) ~[?:1.8.0_162]
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934) ~[?:1.8.0_162]
> at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_162]
> at org.infinispan.expiration.impl.ClusterExpirationManager.processExpiration(ClusterExpirationManager.java:104) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.expiration.impl.ExpirationManagerImpl$ScheduledTask.run(ExpirationManagerImpl.java:243) [infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_162]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at org.infinispan.util.concurrent.CommandAckCollector.createTimeoutException(CommandAckCollector.java:222) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector.access$800(CommandAckCollector.java:58) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:275) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:255) ~[infinispan-embedded-9.4.0-SNAPSHOT.jar:9.4.0-SNAPSHOT]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_162]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_162]
> ... 3 more
> Exception in thread "main" org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:479)
> at org.infinispan.cache.impl.DecoratedCache.get(DecoratedCache.java:529)
> at org.infinispan.cache.impl.AbstractDelegatingCache.get(AbstractDelegatingCache.java:348)
> at org.infinispan.cache.impl.EncoderCache.get(EncoderCache.java:659)
> at com.github.diegolovison.example.infinispan.ClusteredCacheManagerExample.main(ClusteredCacheManagerExample.java:50)
> Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000427: Timeout after 15 seconds waiting for acks. Id=1
> at org.infinispan.util.concurrent.CommandAckCollector.createTimeoutException(CommandAckCollector.java:222)
> at org.infinispan.util.concurrent.CommandAckCollector.access$800(CommandAckCollector.java:58)
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:275)
> at org.infinispan.util.concurrent.CommandAckCollector$BaseCollector.call(CommandAckCollector.java:255)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: org.infinispan.util.logging.TraceException
> at org.infinispan.interceptors.impl.SimpleAsyncInvocationStage.get(SimpleAsyncInvocationStage.java:41)
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:250)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:479)
> at org.infinispan.cache.impl.DecoratedCache.get(DecoratedCache.java:529)
> at org.infinispan.cache.impl.AbstractDelegatingCache.get(AbstractDelegatingCache.java:348)
> at org.infinispan.cache.impl.EncoderCache.get(EncoderCache.java:659)
> at com.github.diegolovison.example.infinispan.ClusteredCacheManagerExample.main(ClusteredCacheManagerExample.java:50)
> {noformat}
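> The 15-second wait in ISPN000427 appears to match the cache's synchronous remote timeout, which is, by my reading, the bound the CommandAckCollector in the traces above enforces while waiting for backup acks. A minimal sketch of where that value is configured (an assumption for illustration; raising it would only postpone the timeout, not avoid the hang):
> {code:java}
> import java.util.concurrent.TimeUnit;
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
>
> // Sketch, not a fix: by assumption here, the clustering remoteTimeout
> // (default 15 s) is what CommandAckCollector waits on before failing
> // the command with ISPN000427.
> new ConfigurationBuilder()
>       .clustering()
>          .cacheMode(CacheMode.DIST_SYNC)
>          .remoteTimeout(30, TimeUnit.SECONDS) // hypothetical higher bound
>       .build();
> {code}
> The full standalone reproducer follows: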
> {code:java}
> import java.io.Serializable;
> import java.util.Date;
> import java.util.concurrent.TimeUnit;
>
> import org.infinispan.AdvancedCache;
> import org.infinispan.Cache;
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.global.GlobalConfigurationBuilder;
> import org.infinispan.context.Flag;
> import org.infinispan.manager.DefaultCacheManager;
> import org.infinispan.manager.EmbeddedCacheManager;
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
> import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;
> import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
> import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
>
> public class ClusteredCacheManagerExample {
>
>    private static final String DEFAULT_LOCATION = "Brazil";
>
>    public static void main(String[] args) throws InterruptedException {
>       System.out.println(getPID());
>       // Clustered manager on the default TCP JGroups stack
>       EmbeddedCacheManager cacheManager = new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder()
>             .transport().clusterName("WeatherApp")
>             .addProperty("configurationFile", "default-jgroups-tcp.xml")
>             .build());
>       // DIST_SYNC cache with two owners; the maxIdle line is the trigger (see below)
>       cacheManager.defineConfiguration("weather", new ConfigurationBuilder()
>             .clustering()
>                .cacheMode(CacheMode.DIST_SYNC)
>                .hash().numOwners(2)
>             .expiration().maxIdle(15, TimeUnit.SECONDS)
>             .build());
>       cacheManager.addListener(new ClusterListener());
>       Cache<String, LocationWeather> cache = cacheManager.getCache("weather");
>       cache.addListener(new CacheListener());
>       // Only the coordinator seeds the entry
>       if (cacheManager.isCoordinator()) {
>          cache.put(DEFAULT_LOCATION, new LocationWeather(37, DEFAULT_LOCATION));
>       }
>       // Reads use CACHE_MODE_LOCAL, yet still end up blocked on the expiration acks
>       AdvancedCache<String, LocationWeather> localCache = cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL);
>       while (true) {
>          System.out.println(new Date() + ": " + localCache.get(DEFAULT_LOCATION));
>          TimeUnit.SECONDS.sleep(5);
>       }
>    }
>
>    // JVM PID parsed from the runtime MXBean name ("pid@host"), or 0 on failure
>    public static long getPID() {
>       String processName = java.lang.management.ManagementFactory.getRuntimeMXBean().getName();
>       if (processName != null && processName.length() > 0) {
>          try {
>             return Long.parseLong(processName.split("@")[0]);
>          } catch (Exception e) {
>             return 0;
>          }
>       }
>       return 0;
>    }
> }
>
> // The listener and model classes below are separate top-level files in the reproducer.
>
> @Listener
> public class ClusterListener {
>    @ViewChanged
>    public void viewChanged(ViewChangedEvent event) {
>       System.out.println(event);
>    }
> }
>
> @Listener(clustered = true)
> public class CacheListener {
>    @CacheEntryCreated
>    public void entryCreated(CacheEntryCreatedEvent<String, LocationWeather> event) {
>       if (!event.isOriginLocal()) {
>          System.out.printf("-- Entry for %s modified by another node in the cluster\n", event.getKey());
>       }
>    }
> }
>
> public class LocationWeather implements Serializable {
>    private final float temperature;
>    private final String country;
>
>    public LocationWeather(float temperature, String country) {
>       this.temperature = temperature;
>       this.country = country;
>    }
>
>    @Override
>    public String toString() {
>       return "LocationWeather{" +
>             "temperature=" + temperature +
>             ", country='" + country + '\'' +
>             '}';
>    }
> }
> {code}
> If I comment out the line *.expiration().maxIdle(15, TimeUnit.SECONDS)*, it starts working.
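> For reference, a minimal sketch of the same "weather" definition with that line disabled, using the cacheManager from the reproducer above; this is the only change relative to the failing configuration:
> {code:java}
> // Workaround observed above: defining the cache without max-idle
> // expiration avoids the RemoveExpiredCommand ack wait on local get()s.
> cacheManager.defineConfiguration("weather", new ConfigurationBuilder()
>       .clustering()
>          .cacheMode(CacheMode.DIST_SYNC)
>          .hash().numOwners(2)
>       // .expiration().maxIdle(15, TimeUnit.SECONDS) // commented out: triggers ISPN000427
>       .build());
> {code}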