[JBoss JIRA] (ISPN-11227) Cluster fails to startup due to initial state transfer timing out
by Johno Crawford (Jira)
Johno Crawford created ISPN-11227:
-------------------------------------
Summary: Cluster fails to startup due to initial state transfer timing out
Key: ISPN-11227
URL: https://issues.redhat.com/browse/ISPN-11227
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 10.1.1.Final
Reporter: Johno Crawford
If a zero-capacity node is part of a running cluster and all the other nodes are restarted, the restarted nodes hang on startup waiting for the initial state transfer:
{code:java}
"ForkJoinPool.commonPool-worker-2@11514" daemon prio=5 tid=0xa3 nid=NA waiting
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:270)
at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:1091)
at org.infinispan.cache.impl.AbstractDelegatingCache.start(AbstractDelegatingCache.java:513)
at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:693)
at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:632)
at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:517)
at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:498)
at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:491)
{code}
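A minimal embedded reproducer sketch for the zero-capacity precondition (hedged: the cache name and mode are illustrative; the hang itself occurs on the regular nodes once they are restarted while this node stays up):
{code:java}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ZeroCapacityNodeExample {
   public static void main(String[] args) {
      GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
      // This node joins the cluster but owns no segments.
      global.zeroCapacityNode(true);

      ConfigurationBuilder cache = new ConfigurationBuilder();
      cache.clustering().cacheMode(CacheMode.DIST_SYNC);

      DefaultCacheManager manager = new DefaultCacheManager(global.build());
      manager.defineConfiguration("testCache", cache.build());
      manager.getCache("testCache");
      // Keep this node running, then restart every other node: the restarted
      // nodes block in waitForInitialStateTransferToComplete() as in the
      // thread dump above.
   }
}
{code}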
--
[JBoss JIRA] (ISPN-10889) Error building infinispan on ppc64le
by Ryan Emerson (Jira)
[ https://issues.redhat.com/browse/ISPN-10889?page=com.atlassian.jira.plugi... ]
Ryan Emerson commented on ISPN-10889:
-------------------------------------
[~dan.berindei] Sure. It would be an easy enough change to https://github.com/infinispan/infinispan-maven-plugins/blob/master/proto-...
> Error building infinispan on ppc64le
> ------------------------------------
>
> Key: ISPN-10889
> URL: https://issues.redhat.com/browse/ISPN-10889
> Project: Infinispan
> Issue Type: Bug
> Reporter: Rashmi Salgaonkar
> Priority: Major
>
> Apache Maven 3.6.2
> Java version: 11.0.5
> OS: linux-ppc64le
> Error building infinispan:-
> [INFO] Alternative client marshallers ..................... SKIPPED
> [INFO] Infinispan Common Marshaller Test Classes .......... SKIPPED
> [INFO] Infinispan Kryo Marshaller Bridge .................. SKIPPED
> [INFO] Infinispan Kryo Marshaller Bridge Bundle ........... SKIPPED
> [INFO] Infinispan Protostuff Marshaller Bridge ............ SKIPPED
> [INFO] Infinispan Protostuff Marshaller Bridge Bundle ..... SKIPPED
> [INFO] Infinispan Hibernate 5.1 Cache ..................... SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 48.824 s
> [INFO] Finished at: 2019-11-01T10:46:39Z
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.infinispan.maven-plugins:proto-schema-compatibility:1.0.1.Final:proto-schema-compatibility-check (default) on project infinispan-commons: *OS not supported. Unable to find a protolock binary for the classifier linux-ppcle_64* -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the command
> [ERROR] mvn <args> -rf :infinispan-commons
> bash-4.4#
> I was able to build https://github.com/infinispan/infinispan-maven-plugins repo.
> I need details on how I can generate the architecture specific protolock binary file.
> https://github.com/infinispan/infinispan-maven-plugins/tree/master/proto-...
--
[JBoss JIRA] (ISPN-6485) JGroupsTransport.lambda$invokeRemotelyAsync$1 seems to trigger resizes of HashMap
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-6485?page=com.atlassian.jira.plugin... ]
Dan Berindei resolved ISPN-6485.
--------------------------------
Fix Version/s: 9.1.0.Final
Resolution: Done
Fixed with ISPN-6971.
{{JGroupsTransport.backupRemotely()}} still uses the number of targets as the map capacity instead of rounding up with {{CollectionFactory.computeCapacity()}}, but sync xsite is on its way out anyway.
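For context, here is the sizing arithmetic in question (an illustrative helper, not the actual {{CollectionFactory.computeCapacity()}} implementation):
{code:java}
// HashMap resizes once size exceeds capacity * loadFactor (0.75 by default),
// so new HashMap<>(n) filled with n entries can resize while filling.
// To hold n entries without a resize, round the capacity up:
static int computeCapacity(int expectedSize) {
   return (int) (expectedSize / 0.75f) + 1;
}

// new HashMap<>(targets.size())                  -> may resize while filling
// new HashMap<>(computeCapacity(targets.size())) -> no resize for that many puts
{code}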
> JGroupsTransport.lambda$invokeRemotelyAsync$1 seems to trigger resizes of HashMap
> ---------------------------------------------------------------------------------
>
> Key: ISPN-6485
> URL: https://issues.redhat.com/browse/ISPN-6485
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Sanne GRINOVERO
> Assignee: Dan Berindei
> Priority: Minor
> Fix For: 9.1.0.Final
>
>
> The resize of a {{HashMap}} is highlighted by the JFR profiler in this method; it may be very simple to fix by creating the map with a better size estimate.
--
[JBoss JIRA] (ISPN-11225) Server should send stack trace to client
by Dan Berindei (Jira)
Dan Berindei created ISPN-11225:
-----------------------------------
Summary: Server should send stack trace to client
Key: ISPN-11225
URL: https://issues.redhat.com/browse/ISPN-11225
Project: Infinispan
Issue Type: Enhancement
Components: Server
Affects Versions: 10.1.1.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 11.0.0.Final
When an error happens on the server, all the client gets is the exception message. Often that is enough, as the user can use the exception message to search the server logs, but sometimes the server logs are not available, and changing the server logging configuration is not always an option.
We don't want to flood the client logs with server stack traces either, so we need to limit the amount of information we send, with one or more of the following:
1. Disable server stack traces by default, and add a client configuration property that requests them explicitly.
2. Don't send the full stack trace, just the first 5-10 stack frames (for each exception, if there's an exception chain); see the sketch after this list.
3. Don't send the stack trace for common exceptions like {{IllegalLifecycleStateException}}.
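A hypothetical sketch of option 2 (the constant and method names are illustrative, not actual server code):
{code:java}
import java.util.Arrays;

static final int MAX_FRAMES = 10;

// Cap the number of frames kept per exception, walking the whole cause chain,
// before marshalling the Throwable for the client.
static void truncateStackTraces(Throwable t) {
   for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      StackTraceElement[] frames = cause.getStackTrace();
      if (frames.length > MAX_FRAMES) {
         cause.setStackTrace(Arrays.copyOf(frames, MAX_FRAMES));
      }
   }
}
{code}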
--
[JBoss JIRA] (ISPN-6325) Other node shutting down caused local exception
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-6325?page=com.atlassian.jira.plugin... ]
Dan Berindei commented on ISPN-6325:
------------------------------------
The exception in the description definitely can't happen any more, as we no longer have a {{StreamSegmentResponseCommand}}, but similar problems still happen (see [ISPN-11172|https://issues.redhat.com/browse/ISPN-11172?focusedCommentId=1...]).
> Other node shutting down caused local exception
> -----------------------------------------------
>
> Key: ISPN-6325
> URL: https://issues.redhat.com/browse/ISPN-6325
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.1.2.Final
> Reporter: Will Burns
> Priority: Major
>
> I was running DistributedStreamRehashStressTest and I found that one of my processing threads was killed by the following exception, which shouldn't be propagated to the user:
> {code}
> java.lang.AssertionError: Found an exception in at least 1 thread
> at org.testng.Assert.fail(Assert.java:83)
> at org.infinispan.stream.stress.DistributedStreamRehashStressTest.testStressNodesLeavingWhilePerformingCallable(DistributedStreamRehashStressTest.java:226)
> at org.infinispan.stream.stress.DistributedStreamRehashStressTest.testStressNodesLeavingWhileMultipleIterators(DistributedStreamRehashStressTest.java:113)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:343)
> at org.testng.SuiteRunner.privateRun(SuiteRunner.java:305)
> at org.testng.SuiteRunner.run(SuiteRunner.java:254)
> at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> at org.testng.TestNG.run(TestNG.java:1057)
> at org.testng.IDEARemoteTestNG.run(IDEARemoteTestNG.java:72)
> at org.testng.RemoteTestNGStarter.main(RemoteTestNGStarter.java:122)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from DistributedStreamRehashStressTest-NodeCU-43302, see cause for remote stack trace
> at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:796)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$212(JGroupsTransport.java:633)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:30)
> at org.jgroups.blocks.Request.checkCompletion(Request.java:162)
> at org.jgroups.blocks.UnicastRequest.receiveResponse(UnicastRequest.java:81)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:373)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:237)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:695)
> at org.jgroups.JChannel.up(JChannel.java:738)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
> at org.jgroups.protocols.RSVP.up(RSVP.java:201)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:392)
> at org.jgroups.protocols.tom.TOA.up(TOA.java:121)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1043)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652)
> at org.jgroups.protocols.Discovery.up(Discovery.java:296)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1590)
> at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1802)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from DistributedStreamRehashStressTest-NodeCV-40406, see cause for remote stack trace
> ... 31 more
> Caused by: org.infinispan.commons.CacheException: Problems invoking command.
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:180)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:402)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:352)
> ... 20 more
> Caused by: java.io.EOFException: Read past end of file
> at org.jboss.marshalling.SimpleDataInput.eofOnRead(SimpleDataInput.java:151)
> at org.jboss.marshalling.SimpleDataInput.readUnsignedByteDirect(SimpleDataInput.java:294)
> at org.jboss.marshalling.SimpleDataInput.readUnsignedByte(SimpleDataInput.java:249)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.MarshallUtil.unmarshallCollectionUnbounded(MarshallUtil.java:217)
> at org.infinispan.stream.impl.StreamSegmentResponseCommand.readFrom(StreamSegmentResponseCommand.java:61)
> at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readCommandParameters(ReplicableCommandExternalizer.java:113)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:173)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:68)
> at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:478)
> at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:235)
> at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:149)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)
> at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
> at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:160)
> ... 22 more
> {code}
--
[JBoss JIRA] (ISPN-6325) Other node shutting down caused local exception
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-6325?page=com.atlassian.jira.plugin... ]
Dan Berindei reassigned ISPN-6325:
----------------------------------
Assignee: Dan Berindei
> Other node shutting down caused local exception
> -----------------------------------------------
>
> Key: ISPN-6325
> URL: https://issues.redhat.com/browse/ISPN-6325
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.1.2.Final
> Reporter: Will Burns
> Assignee: Dan Berindei
> Priority: Major
>
> I was running DistributedStreamRehashStressTest and I found that one of my processing threads was killed by the following exception, which shouldn't be propagated to the user:
> {code}
> java.lang.AssertionError: Found an exception in at least 1 thread
> at org.testng.Assert.fail(Assert.java:83)
> at org.infinispan.stream.stress.DistributedStreamRehashStressTest.testStressNodesLeavingWhilePerformingCallable(DistributedStreamRehashStressTest.java:226)
> at org.infinispan.stream.stress.DistributedStreamRehashStressTest.testStressNodesLeavingWhileMultipleIterators(DistributedStreamRehashStressTest.java:113)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:343)
> at org.testng.SuiteRunner.privateRun(SuiteRunner.java:305)
> at org.testng.SuiteRunner.run(SuiteRunner.java:254)
> at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> at org.testng.TestNG.run(TestNG.java:1057)
> at org.testng.IDEARemoteTestNG.run(IDEARemoteTestNG.java:72)
> at org.testng.RemoteTestNGStarter.main(RemoteTestNGStarter.java:122)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from DistributedStreamRehashStressTest-NodeCU-43302, see cause for remote stack trace
> at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:44)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:796)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$212(JGroupsTransport.java:633)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:30)
> at org.jgroups.blocks.Request.checkCompletion(Request.java:162)
> at org.jgroups.blocks.UnicastRequest.receiveResponse(UnicastRequest.java:81)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:373)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:237)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:695)
> at org.jgroups.JChannel.up(JChannel.java:738)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
> at org.jgroups.protocols.RSVP.up(RSVP.java:201)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:392)
> at org.jgroups.protocols.tom.TOA.up(TOA.java:121)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1043)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:652)
> at org.jgroups.protocols.Discovery.up(Discovery.java:296)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1590)
> at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1802)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from DistributedStreamRehashStressTest-NodeCV-40406, see cause for remote stack trace
> ... 31 more
> Caused by: org.infinispan.commons.CacheException: Problems invoking command.
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:180)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:402)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:352)
> ... 20 more
> Caused by: java.io.EOFException: Read past end of file
> at org.jboss.marshalling.SimpleDataInput.eofOnRead(SimpleDataInput.java:151)
> at org.jboss.marshalling.SimpleDataInput.readUnsignedByteDirect(SimpleDataInput.java:294)
> at org.jboss.marshalling.SimpleDataInput.readUnsignedByte(SimpleDataInput.java:249)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.MarshallUtil.unmarshallCollectionUnbounded(MarshallUtil.java:217)
> at org.infinispan.stream.impl.StreamSegmentResponseCommand.readFrom(StreamSegmentResponseCommand.java:61)
> at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readCommandParameters(ReplicableCommandExternalizer.java:113)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:173)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:68)
> at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:478)
> at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:235)
> at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:149)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)
> at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
> at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:160)
> ... 22 more
> {code}
--
[JBoss JIRA] (ISPN-11172) GracefulShutdownRestartIT fails
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-11172?page=com.atlassian.jira.plugi... ]
Dan Berindei commented on ISPN-11172:
-------------------------------------
I found an exception in the container output:
{noformat}
18:15:05,923 WARN (jgroups-8,05ea2c1c87a5-30319) [CLUSTER] ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___script_cache, type=SHUTDOWN_REQUEST, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=0} org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 05ea2c1c87a5-30319, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.impl.VoidResponseCollector.addException(VoidResponseCollector.java:46)
at org.infinispan.remoting.transport.impl.VoidResponseCollector.addException(VoidResponseCollector.java:18)
at org.infinispan.remoting.transport.ValidResponseCollector.addResponse(ValidResponseCollector.java:29)
at org.infinispan.topology.TopologyManagementHelper.lambda$addLocalResult$2(TopologyManagementHelper.java:135)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
at org.infinispan.topology.TopologyManagementHelper.lambda$addLocalResult$3(TopologyManagementHelper.java:131)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:67)
at org.infinispan.remoting.transport.impl.MultiTargetRequest.onResponse(MultiTargetRequest.java:104)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1411)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1314)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:129)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1459)
at org.jgroups.JChannel.up(JChannel.java:775)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:920)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:338)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:338)
at org.jgroups.protocols.tom.TOA.up(TOA.java:119)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:859)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:205)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1367)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:89)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from 05ea2c1c87a5-30319, see cause for remote stack trace
at org.infinispan.topology.TopologyManagementHelper.makeResponse(TopologyManagementHelper.java:144)
at org.infinispan.topology.TopologyManagementHelper.lambda$addLocalResult$2(TopologyManagementHelper.java:132)
... 36 more
Caused by: java.lang.NullPointerException
at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:743)
at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:189)
at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
at org.infinispan.topology.TopologyManagementHelper.executeOnClusterSync(TopologyManagementHelper.java:54)
at org.infinispan.topology.ClusterTopologyManagerImpl.broadcastShutdownCache(ClusterTopologyManagerImpl.java:734)
at org.infinispan.topology.ClusterCacheStatus.shutdownCache(ClusterCacheStatus.java:951)
at org.infinispan.topology.ClusterTopologyManagerImpl.handleShutdownRequest(ClusterTopologyManagerImpl.java:749)
at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:174)
at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:160)
at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:215)
... 3 more
{noformat}
We need to do more to ensure that any remote command processed during shutdown returns a {{CacheNotFoundResponse}} instead of throwing an arbitrary exception.
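A hedged sketch of such a guard (the surrounding class and method are hypothetical; only {{CacheNotFoundResponse.INSTANCE}} and {{ComponentStatus.allowInvocations()}} come from the existing API):
{code:java}
import java.util.function.Supplier;
import org.infinispan.lifecycle.ComponentStatus;
import org.infinispan.remoting.responses.CacheNotFoundResponse;
import org.infinispan.remoting.responses.Response;

final class ShutdownGuard {
   private volatile ComponentStatus status = ComponentStatus.RUNNING;

   // Refuse to run remote commands once shutdown has started: the sender
   // treats CacheNotFoundResponse as "cache is gone, retry elsewhere" instead
   // of surfacing an arbitrary exception like the NPE above.
   Response handle(Supplier<Response> invocation) {
      if (!status.allowInvocations()) {
         return CacheNotFoundResponse.INSTANCE;
      }
      return invocation.get();
   }
}
{code}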
> GracefulShutdownRestartIT fails
> -------------------------------
>
> Key: ISPN-11172
> URL: https://issues.redhat.com/browse/ISPN-11172
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 10.1.0.Final
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Major
>
> {noformat}
> Error Message
> Cluster did not shutdown within timeout
> Stacktrace
> java.lang.AssertionError: Cluster did not shutdown within timeout
> at org.infinispan.commons.util.Eventually.lambda$eventually$0(Eventually.java:33)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:25)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:33)
> at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:50)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.infinispan.server.test.InfinispanServerTestMethodRule$1.evaluate(InfinispanServerTestMethodRule.java:69)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:90)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Standard Output
> [OK: 131, KO: 0, SKIP: 0] Test starting: GracefulShutdownRestartIT.testGracefulShutdownRestart
> [0] STDOUT: 12:36:05,513 WARN (async-thread--p2-t6) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [0] STDOUT: 12:36:05,514 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,515 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,558 WARN (remote-thread--p3-t2) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [1] STDOUT: 12:36:05,560 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,562 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,771 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,776 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,883 INFO (transport-thread--p5-t7) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100002: Starting rebalance with members [63781b79183e-43839, 06978f4151c0-63390], phase READ_OLD_WRITE_ALL, topology id 2
> [0] STDOUT: 12:36:05,963 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
> [0] STDOUT: 12:36:05,978 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
> [0] STDOUT: 12:36:05,985 INFO (async-thread--p2-t17) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100010: Finished rebalance with members [63781b79183e-43839, 06978f4151c0-63390], topology id 5
> [0] STDERR: WARNING: An illegal reflective access operation has occurred
> [0] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [0] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [0] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [0] STDERR: WARNING: All illegal access operations will be denied in a future release
> [1] STDERR: WARNING: An illegal reflective access operation has occurred
> [1] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [1] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [1] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [1] STDERR: WARNING: All illegal access operations will be denied in a future release
> [0] STDOUT: 12:36:06,811 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=___protobuf_metadata]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,860 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,898 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=memcachedCache]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [1] STDOUT: 12:36:06,904 WARN (remote-thread--p3-t2) [CLUSTER] ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___script_cache, type=SHUTDOWN_PERFORM, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=0} java.lang.NullPointerException
> [1] STDOUT: at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:743)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:189)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
> [1] STDOUT: at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:160)
> [1] STDOUT: at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:215)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [1] STDOUT: at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
--
[JBoss JIRA] (ISPN-7187) Cluster configured CacheManager.removeCache on LOCAL cache results in NPE
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-7187?page=com.atlassian.jira.plugin... ]
Dan Berindei resolved ISPN-7187.
--------------------------------
Resolution: Out of Date
{{EmbeddedCacheManager.removeCache()}} is deprecated, and no longer uses a {{RemoveCacheCommand}} anyway.
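For reference, a minimal sketch of the replacement path via the admin API (the cache name is illustrative):
{code:java}
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class RemoveCacheExample {
   public static void main(String[] args) {
      EmbeddedCacheManager manager = new DefaultCacheManager();
      // The admin API performs the (cluster-wide, where applicable) removal
      // that the deprecated removeCache() delegated to RemoveCacheCommand.
      manager.administration().removeCache("myCache");
      manager.stop();
   }
}
{code}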
> Cluster configured CacheManager.removeCache on LOCAL cache results in NPE
> -------------------------------------------------------------------------
>
> Key: ISPN-7187
> URL: https://issues.redhat.com/browse/ISPN-7187
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.4.Final
> Reporter: Ryan Gustafson
> Priority: Major
> Attachments: RemoveCacheInfinispanCacheTest.java
>
>
> While integration testing a common application scoped CacheManager, I hit a problem with LOCAL caches.
> When using a CacheManager which has clustering support configured, calling removeCache() on a LOCAL cache results in an NPE. INVALIDATION_ASYNC caches have no problem, though.
> At a minimum I would expect the cache to be removed from the calling CacheManager. It is unclear whether a non-clustered CacheMode cache would be removed from all the CacheManagers in the cluster; I would presume not, but I cannot test the behavior to find out. A literal reading of the CacheManager.removeCache(String) JavaDoc, however, suggests it should wipe out all LOCAL caches with the same name in the cluster. The JavaDoc could be improved to clarify the behavior for non-clustered CacheModes.
--