[JBoss JIRA] (ISPN-7519) NPE and Deadlock during server start
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7519?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-7519:
------------------------------------
Fix Version/s: 8.2.7.Final
> NPE and Deadlock during server start
> ------------------------------------
>
> Key: ISPN-7519
> URL: https://issues.jboss.org/browse/ISPN-7519
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite - Server
> Affects Versions: 9.0.0.CR1
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 9.0.0.CR2, 8.2.7.Final
>
> Attachments: server-trace.txt, surefire-trace.txt
>
>
> When running the full test suite, the server deadlocked during the test {{StateTransferSuppressIT.testRebalanceWithFirstNodeStop}}.
> The server startup printed:
> {noformat}
> 12:13:07,787 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) ISPN000128: Infinispan version: Infinispan 'Ruppaner' 9.0.0-SNAPSHOT
> 12:13:08,159 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) DGISPN0001: Started memcachedCache cache from clustered container
> 12:13:08,159 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-2) DGISPN0001: Started default cache from clustered container
> 12:13:08,181 INFO [org.infinispan.server.endpoint] (MSC service thread 1-4) DGENDPT10000: MemcachedServer starting
> 12:13:08,181 INFO [org.infinispan.server.endpoint] (MSC service thread 1-3) DGENDPT10000: HotRodServer starting
> 12:13:08,182 INFO [org.infinispan.server.endpoint] (MSC service thread 1-4) DGENDPT10001: MemcachedServer listening on 127.0.0.1:11211
> 12:13:08,182 INFO [org.infinispan.server.endpoint] (MSC service thread 1-3) DGENDPT10001: HotRodServer listening on 127.0.0.1:11222
> 12:13:08,188 INFO [org.infinispan.server.endpoint] (MSC service thread 1-1) DGENDPT10000: REST starting
> 12:13:08,259 WARNING [io.netty.channel.epoll.EpollEventLoop] (MemcachedServerMaster-1-1) Unexpected exception in the selector loop.: java.lang.NullPointerException
> at io.netty.util.internal.shaded.org.jctools.queues.MpscChunkedArrayQueue.poll(MpscChunkedArrayQueue.java:264)
> at io.netty.util.concurrent.SingleThreadEventExecutor.pollTaskFrom(SingleThreadEventExecutor.java:223)
> at io.netty.util.concurrent.SingleThreadEventExecutor.pollTask(SingleThreadEventExecutor.java:218)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:306)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> at java.lang.Thread.run(Thread.java:745)
> 12:13:08,338 INFO [org.infinispan.rest.NettyRestServer] (MSC service thread 1-1) ISPN012003: REST server starting, listening on 127.0.0.1:8080
> 12:13:08,338 INFO [org.infinispan.server.endpoint] (MSC service thread 1-1) DGENDPT10002: REST mapped to /rest
> {noformat}
> and then hung. Attached are the stack traces of the server and of the forked test process.
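When a server hangs like this, the JVM can also report deadlocked threads programmatically, complementing the attached thread dumps. A minimal sketch using the standard `ThreadMXBean` API (illustrative only, not part of the Infinispan test suite):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    // Ask the JVM for threads deadlocked on monitors; returns one line
    // per deadlocked thread, or "no deadlock" when none are found.
    static String report() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();   // null when none found
        if (ids == null) {
            return "no deadlock";
        }
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : bean.getThreadInfo(ids)) {
            sb.append(info.getThreadName())
              .append(" blocked on ")
              .append(info.getLockName())
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // In a healthy JVM this prints "no deadlock".
        System.out.println(report());
    }
}
```

Running this periodically (or from a JMX client) inside the hung server would pinpoint which threads hold the locks involved.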
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-7519) NPE and Deadlock during server start
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7519?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-7519:
------------------------------------
Git Pull Request: https://github.com/infinispan/infinispan/pull/4891, https://github.com/infinispan/infinispan/pull/4993 (was: https://github.com/infinispan/infinispan/pull/4891)
[JBoss JIRA] (ISPN-7628) Administration console - the cluster status doesn't reflect "reload-required" state of its nodes
by Roman Macor (JIRA)
[ https://issues.jboss.org/browse/ISPN-7628?page=com.atlassian.jira.plugin.... ]
Roman Macor commented on ISPN-7628:
-----------------------------------
[~vblagojevic] sounds good, but I wouldn't use "degraded" mode here, as that indicates there has been a cluster partition. Instead, if one of the nodes is in the reload-required/restart-required state, the cluster should be in the reload-required/restart-required state as well, in my opinion.
An example would be:
The user changes the configuration and clicks "restart later", which puts all nodes in the cluster into the reload-required state; they then click the reload action on one of the nodes.
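The aggregation rule suggested here amounts to "the cluster reports the most severe state among its nodes". A minimal sketch; the enum names below are illustrative stand-ins, not the console's actual state model:

```java
import java.util.Collection;
import java.util.List;

public class ClusterStatus {
    // Illustrative node states, ordered from least to most severe.
    enum State { STARTED, RELOAD_REQUIRED, RESTART_REQUIRED }

    // The cluster reports the most severe state found among its nodes.
    static State aggregate(Collection<State> nodeStates) {
        State worst = State.STARTED;
        for (State s : nodeStates) {
            if (s.compareTo(worst) > 0) {
                worst = s;
            }
        }
        return worst;
    }

    public static void main(String[] args) {
        // One node needing a reload is enough to flag the whole cluster.
        System.out.println(aggregate(List.of(State.STARTED, State.RELOAD_REQUIRED)));
        // prints RELOAD_REQUIRED
    }
}
```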
> Administration console - the cluster status doesn't reflect "reload-required" state of its nodes
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-7628
> URL: https://issues.jboss.org/browse/ISPN-7628
> Project: Infinispan
> Issue Type: Bug
> Components: JMX, reporting and management
> Affects Versions: 9.0.0.CR2
> Reporter: Roman Macor
> Assignee: Vladimir Blagojevic
>
> The cluster has "Started" status even though all of its nodes have "reload-required" status.
> Expected result:
> The cluster status should also be "reload-required"
> Another suggestion:
> A "Reload" action should be available at the cluster level, so that the user doesn't need to perform this action on individual nodes (there could be hundreds of them).
[JBoss JIRA] (ISPN-7580) Use of marsheller is not consistent in all places
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/ISPN-7580?page=com.atlassian.jira.plugin.... ]
Ramesh Reddy commented on ISPN-7580:
------------------------------------
[~anistor] I have tried a couple of different working sets; below are the least intrusive changes I could make without breaking any of the current code. Can you please take a look? If you agree, I can submit them as a pull request.
https://github.com/rareddy/infinispan-1/commit/659ad4cb31cf69faa7e4b2f12d...
https://github.com/rareddy/protostream/commit/f62aae0a9a8b005f5f8b6332102...
> Use of marsheller is not consistent in all places
> -------------------------------------------------
>
> Key: ISPN-7580
> URL: https://issues.jboss.org/browse/ISPN-7580
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling, Remote Querying
> Reporter: Ramesh Reddy
> Assignee: Adrian Nistor
>
> Usage of the extended ProtoStreamMarshaller is not consistent across all code paths. For the purposes of the Teiid translator, I have extended ProtoStreamMarshaller so that it reads/writes byte streams in a portable fashion for a given message type, where the message types are representations of relational tables in Teiid. This works fine if I just use the cache's get/put calls.
> However, the same fails when used with remote query or continuous query. The reason is that these classes circumvent the extended marshaller and go directly to the registered serialization context to do the wrapping/unwrapping. On top of that, there are a few places where the code casts the SerializationContext to the SerializationContextImpl class. As a result, I can neither provide my own serializer nor extend the implementation, since SerializationContextImpl is declared final. These places need to be corrected so that the extended marshaller is used rather than hard-coding the implementation.
> I am guessing this is the first time anyone has done this without using dedicated Java classes as marshallers.
> This is extremely critical for me to have fixed in order to move forward; I can provide a pull request for it.
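The downcast problem described above can be sketched as follows. These are simplified stand-in types, not the real protostream API, which is considerably richer:

```java
// Simplified stand-ins for the real protostream types.
interface SerializationContext {
    byte[] wrap(Object o);
}

final class SerializationContextImpl implements SerializationContext {
    @Override
    public byte[] wrap(Object o) {
        return o.toString().getBytes();
    }
}

public class QueryEngine {
    // Anti-pattern described in the report: downcasting to the final concrete
    // class silently excludes any user-supplied context/marshaller.
    static byte[] badWrap(SerializationContext ctx, Object o) {
        return ((SerializationContextImpl) ctx).wrap(o);
    }

    // Fix: program against the interface so extended marshallers keep working.
    static byte[] goodWrap(SerializationContext ctx, Object o) {
        return ctx.wrap(o);
    }

    public static void main(String[] args) {
        SerializationContext custom = o -> ("teiid:" + o).getBytes();
        System.out.println(new String(goodWrap(custom, "row")));  // prints teiid:row
        try {
            badWrap(custom, "row");
        } catch (ClassCastException expected) {
            // The hard-coded cast fails for any context other than the
            // built-in final implementation.
            System.out.println("badWrap rejected the custom context");
        }
    }
}
```

Because `SerializationContextImpl` is `final`, the cast in `badWrap` leaves callers with no way to substitute their own behavior; only the interface-based `goodWrap` path honors an extended marshaller.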