[JBoss JIRA] (WFLY-10276) NPE during server shutdown when using scattered cache
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-10276:
-----------------------------------
Summary: NPE during server shutdown when using scattered cache
Key: WFLY-10276
URL: https://issues.jboss.org/browse/WFLY-10276
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 13.0.0.Beta1
Reporter: Paul Ferraro
Assignee: Paul Ferraro
We hit an NPE when running tests for RFE EAP7-867.
EAP distribution was built from https://github.com/pferraro/wildfly/tree/scattered .
Test description: Positive stress test (no failover), 4-node EAP cluster; clients: starting at 400 and rising to 6000 by the end of the test.
During a clean server shutdown at the end of the test, the server logged an NPE and got stuck:
{code}
[JBossINF] 07:55:57,643 ERROR [org.infinispan.scattered.impl.ScatteredStateConsumerImpl] (thread-200,ejb,dev214) ISPN000471: Failed processing values received from remote node during rebalance.: java.lang.NullPointerException
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.applyValues(ScatteredStateConsumerImpl.java:505)
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.lambda$getValuesAndApply$8(ScatteredStateConsumerImpl.java:475)
[JBossINF] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
[JBossINF] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
[JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
[JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
[JBossINF] at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:66)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:56)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:35)
[JBossINF] at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:53)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1304)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1207)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$200(JGroupsTransport.java:123)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.receive(JGroupsTransport.java:1342)
[JBossINF] at org.jgroups.JChannel.up(JChannel.java:819)
[JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:134)
[JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:340)
[JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:134)
[JBossINF] at org.jgroups.protocols.FRAG3.up(FRAG3.java:166)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:864)
[JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:240)
[JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1002)
[JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:728)
[JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:383)
[JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:600)
[JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:119)
[JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:199)
[JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:252)
[JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:276)
[JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:267)
[JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1248)
[JBossINF] at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
[JBossINF] at java.lang.Thread.run(Thread.java:748)
[JBossINF]
{code}
Scattered cache was configured with {{bias-lifespan="0"}}.
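For context, {{bias-lifespan}} is an attribute of the scattered-cache resource in WildFly's Infinispan subsystem. A minimal sketch of such a fragment follows; everything other than the {{bias-lifespan}} attribute (the container and cache names, the lock-timeout value) is an illustrative assumption, not the actual configuration used in this test, which is at the server-configuration link below:
{code:xml}
<cache-container name="web" default-cache="scattered">
    <transport lock-timeout="60000"/>
    <!-- bias-lifespan="0": assumed here to mean remote readers keep no local
         (biased) copy of an entry, so reads always go to the primary owner -->
    <scattered-cache name="scattered" bias-lifespan="0"/>
</cache-container>
{code}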
Server configuration:
http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...
Server link:
http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (WFLY-10275) ArrayIndexOutOfBoundsException on server using scattered cache
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-10275:
-----------------------------------
Summary: ArrayIndexOutOfBoundsException on server using scattered cache
Key: WFLY-10275
URL: https://issues.jboss.org/browse/WFLY-10275
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 13.0.0.Beta1
Reporter: Paul Ferraro
Assignee: Paul Ferraro
We hit an ArrayIndexOutOfBoundsException when running tests for RFE EAP7-867.
EAP distribution was built from {{https://github.com/pferraro/wildfly/tree/scattered}} .
Test description: Positive stress test (no failover), 4-node EAP cluster; clients: starting at 400 and rising to 6000 by the end of the test.
The error occurred on server dev215 around the 7th iteration (this can be seen in the performance report, link below):
{code}
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) Exception in thread "transport-thread--p15-t25" java.lang.ArrayIndexOutOfBoundsException: 129
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) at org.infinispan.scattered.impl.ScatteredVersionManagerImpl.lambda$tryRegularInvalidations$4(ScatteredVersionManagerImpl.java:413)
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
[JBossINF] 04:26:11,708 ERROR [stderr] (transport-thread--p15-t25) at java.lang.Thread.run(Thread.java:748)
{code}
Clients were getting "SocketTimeoutException: Read timed out" exceptions both before and after the ArrayIndexOutOfBoundsException occurred.
Performance report (accessible only when connected to VPN):
http://download.eng.brq.redhat.com/scratch/mvinkler/reports/2018-04-19_15...
One can observe that dev215's CPU and network usage dropped after the 7th iteration.
dev215 server log link:
https://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-se...
[JBoss JIRA] (WFLY-10274) NPE during server shutdown when using scattered cache
by Michal Vinkler (JIRA)
[ https://issues.jboss.org/browse/WFLY-10274?page=com.atlassian.jira.plugin... ]
Michal Vinkler updated WFLY-10274:
----------------------------------
Description:
We hit an NPE when running tests for RFE EAP7-867.
EAP distribution was built from https://github.com/pferraro/wildfly/tree/scattered .
Test description: Positive stress test (no failover), 4-node EAP cluster; clients: starting at 400 and rising to 6000 by the end of the test.
During a clean server shutdown at the end of the test, the server logged an NPE and got stuck:
{code}
[JBossINF] 07:55:57,643 ERROR [org.infinispan.scattered.impl.ScatteredStateConsumerImpl] (thread-200,ejb,dev214) ISPN000471: Failed processing values received from remote node during rebalance.: java.lang.NullPointerException
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.applyValues(ScatteredStateConsumerImpl.java:505)
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.lambda$getValuesAndApply$8(ScatteredStateConsumerImpl.java:475)
[JBossINF] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
[JBossINF] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
[JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
[JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
[JBossINF] at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:66)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:56)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:35)
[JBossINF] at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:53)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1304)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1207)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$200(JGroupsTransport.java:123)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.receive(JGroupsTransport.java:1342)
[JBossINF] at org.jgroups.JChannel.up(JChannel.java:819)
[JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:134)
[JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:340)
[JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:134)
[JBossINF] at org.jgroups.protocols.FRAG3.up(FRAG3.java:166)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:864)
[JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:240)
[JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1002)
[JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:728)
[JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:383)
[JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:600)
[JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:119)
[JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:199)
[JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:252)
[JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:276)
[JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:267)
[JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1248)
[JBossINF] at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
[JBossINF] at java.lang.Thread.run(Thread.java:748)
[JBossINF]
{code}
Scattered cache was configured with {{bias-lifespan="0"}}.
Server configuration:
http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...
Server link:
http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...
was:
We hit NPE when running tests for RFE EAP7-867.
EAP distribution was built from https://github.com/pferraro/wildfly/tree/scattered .
Test description: Positive stress test (no failover), 4-node EAP cluster, clients: starting with 400 clients in the beginning, raising the number of clients to 6000 in the end of the test.
During clean server shutdown in the end of the test, server logged NPE and got stuck:
Server link:
https://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-se...
> NPE during server shutdown when using scattered cache
> -----------------------------------------------------
>
> Key: WFLY-10274
> URL: https://issues.jboss.org/browse/WFLY-10274
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 13.0.0.Beta1
> Reporter: Michal Vinkler
> Assignee: Paul Ferraro
>
[JBoss JIRA] (WFLY-10274) NPE during server shutdown when using scattered cache
by Michal Vinkler (JIRA)
Michal Vinkler created WFLY-10274:
-------------------------------------
Summary: NPE during server shutdown when using scattered cache
Key: WFLY-10274
URL: https://issues.jboss.org/browse/WFLY-10274
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 13.0.0.Beta1
Reporter: Michal Vinkler
Assignee: Paul Ferraro
We hit an NPE when running tests for RFE EAP7-867.
EAP distribution was built from https://github.com/pferraro/wildfly/tree/scattered .
Test description: Positive stress test (no failover), 4-node EAP cluster; clients: starting at 400 and rising to 6000 by the end of the test.
During a clean server shutdown at the end of the test, the server logged an NPE and got stuck:
{code}
[JBossINF] 07:55:57,643 ERROR [org.infinispan.scattered.impl.ScatteredStateConsumerImpl] (thread-200,ejb,dev214) ISPN000471: Failed processing values received from remote node during rebalance.: java.lang.NullPointerException
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.applyValues(ScatteredStateConsumerImpl.java:505)
[JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.lambda$getValuesAndApply$8(ScatteredStateConsumerImpl.java:475)
[JBossINF] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
[JBossINF] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
[JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
[JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
[JBossINF] at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:66)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:56)
[JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:35)
[JBossINF] at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:53)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1304)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1207)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$200(JGroupsTransport.java:123)
[JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.receive(JGroupsTransport.java:1342)
[JBossINF] at org.jgroups.JChannel.up(JChannel.java:819)
[JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:134)
[JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:340)
[JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:134)
[JBossINF] at org.jgroups.protocols.FRAG3.up(FRAG3.java:166)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
[JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:864)
[JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:240)
[JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1002)
[JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:728)
[JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:383)
[JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:600)
[JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:119)
[JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:199)
[JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:252)
[JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:276)
[JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:267)
[JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1248)
[JBossINF] at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
[JBossINF] at java.lang.Thread.run(Thread.java:748)
[JBossINF]
{code}
Server link:
https://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-se...
[JBoss JIRA] (DROOLS-2485) SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
by Jan Hrcek (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2485?page=com.atlassian.jira.plugi... ]
Jan Hrcek updated DROOLS-2485:
------------------------------
Description:
- when I run just that test class locally from IntelliJ, it runs ~20 seconds, with small Metaspace/heap consumption
- on jenkins (there's maven module-level build parallelism with -T2), it runs 110-180s; no idea why it runs so much slower
- I tried several times running the identical command that runs on jenkins [2], including the same JDK version and memory settings (xmx=2G):
- the Heap/Metaspace usage didn't get near the 2G maximum (see screenshots from VisualVM from 2 runs)
- the running time of that SpeedTest was 40-80s
- slaves used for the full downstream build have 24G RAM + 8G swap, so lack of RAM shouldn't be an issue
- there seems to be no connection to the issue you mentioned [3]
- looking at the history of this SpeedTest I see that you committed it initially as @Ignore, but Jozef later un-ignored it. Since this can be considered a performance test and is failing randomly, I think we should mark it with @Ignore again.
[1] http://janhrcek.cz/random-failures/#/class/org.drools.workbench.services....
[2] {code}
JAVA_HOME=/home/jhrcek/Tmp/jdk1.8.0_152 MAVEN_OPTS=-Xmx2g mvn -e -nsu -fae -B -T2 \
  -Pkie-wb,wildfly11,sourcemaps clean install -Dfull=true -Dcontainer=wildfly11 \
  -Dcontainer.profile=wildfly11 -Dintegration-tests=true -Dmaven.test.failure.ignore=true \
  -Dmaven.test.redirectTestOutputToFile=true -Dgwt.compiler.localWorkers=1
{code}
[3] https://issues.jboss.org/browse/RHDM-488
> SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
> -----------------------------------------------------------------------------------
>
> Key: DROOLS-2485
> URL: https://issues.jboss.org/browse/DROOLS-2485
> Project: Drools
> Issue Type: Task
> Components: decision tables
> Affects Versions: 7.8.0.Final
> Reporter: Jan Hrcek
> Assignee: Toni Rikkola
> Priority: Minor
> Attachments: drools-wb_run1.png, drools-wb_run2.png
>
>
[JBoss JIRA] (DROOLS-2485) SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
by Jan Hrcek (JIRA)
Jan Hrcek created DROOLS-2485:
---------------------------------
Summary: SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
Key: DROOLS-2485
URL: https://issues.jboss.org/browse/DROOLS-2485
Project: Drools
Issue Type: Task
Components: decision tables
Affects Versions: 7.8.0.Final
Reporter: Jan Hrcek
Assignee: Toni Rikkola
Priority: Minor
Attachments: drools-wb_run1.png, drools-wb_run2.png
- when I run just that test class locally from IntelliJ, it runs ~20 seconds, with small Metaspace/heap consumption
- on jenkins (there's maven module-level build parallelism with -T2), it runs 110-180s; no idea why it runs so much slower
- I tried several times running the identical command that runs on jenkins [2], including the same JDK version and memory settings (xmx=2G):
- the Heap/Metaspace usage didn't get near the 2G maximum (see screenshots from VisualVM from 2 runs)
- the running time of that SpeedTest was 40-80s
- slaves used for the full downstream build have 24G RAM + 8G swap, so lack of RAM shouldn't be an issue
- there seems to be no connection to the issue you mentioned [3]
- looking at the history of this SpeedTest I see that you committed it initially as @Ignore, but Jozef later un-ignored it. Since this can be considered a performance test and is failing randomly, I think we should mark it with @Ignore again. WDYT? Any other ideas what I could check to troubleshoot this?
[1] http://janhrcek.cz/random-failures/#/class/org.drools.workbench.services....
[2] {code}
JAVA_HOME=/home/jhrcek/Tmp/jdk1.8.0_152 MAVEN_OPTS=-Xmx2g mvn -e -nsu -fae -B -T2 \
  -Pkie-wb,wildfly11,sourcemaps clean install -Dfull=true -Dcontainer=wildfly11 \
  -Dcontainer.profile=wildfly11 -Dintegration-tests=true -Dmaven.test.failure.ignore=true \
  -Dmaven.test.redirectTestOutputToFile=true -Dgwt.compiler.localWorkers=1
{code}
[3] https://issues.jboss.org/browse/RHDM-488
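The re-ignoring proposed above could be sketched as follows, assuming JUnit 4 on the classpath; the class name matches the report, but the method name and the reason string are hypothetical, not taken from the actual test:

```java
import org.junit.Ignore;
import org.junit.Test;

// Hypothetical sketch (JUnit 4): re-marking the flaky performance test as
// ignored, with a reason string so the suppression stays traceable to the issue.
public class SpeedTest {

    @Ignore("Performance test; fails intermittently on CI with GC overhead limit exceeded (DROOLS-2485)")
    @Test
    public void conversionSpeed() {
        // ...original benchmark body unchanged...
    }
}
```

With a reason string attached, Surefire reports the method as skipped rather than silently dropping it, which keeps the suppression visible in build output.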
[JBoss JIRA] (DROOLS-2485) SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
by Jan Hrcek (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2485?page=com.atlassian.jira.plugi... ]
Jan Hrcek updated DROOLS-2485:
------------------------------
Attachment: drools-wb_run1.png
[JBoss JIRA] (DROOLS-2485) SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
by Jan Hrcek (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2485?page=com.atlassian.jira.plugi... ]
Jan Hrcek updated DROOLS-2485:
------------------------------
Attachment: drools-wb_run2.png
[JBoss JIRA] (DROOLS-2485) SpeedTest often fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded"
by Jan Hrcek (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2485?page=com.atlassian.jira.plugi... ]
Jan Hrcek updated DROOLS-2485:
------------------------------
Description:
- when I run just that test class locally from IntelliJ, it runs in ~20
seconds, with small Metaspace/heap consumption
- on jenkins (there's maven module-level build parallelism with -T2),
it takes 110-180s; it is not clear why it runs so much slower.
- I tried several times running the identical command that runs on
jenkins [2], including the same JDK version and memory settings (-Xmx2g):
- the Heap/Metaspace usage didn't get near the 2G maximum (see
screenshots from VisualVM from 2 runs)
- the running time of that SpeedTest was 40-80s
- the slaves used for the full downstream build have 24G RAM + 8G swap, so
lack of RAM shouldn't be an issue
- there seems to be no connection to the issue you mentioned [3]
- looking at the history of this SpeedTest I see that you committed it
initially as @Ignore, but Jozef later un-ignored it. Since this can be
considered a performance test and fails randomly from time to time,
I think we should mark it with @Ignore again.
[1] http://janhrcek.cz/random-failures/#/class/org.drools.workbench.services....
[2] JAVA_HOME=/home/jhrcek/Tmp/jdk1.8.0_152 MAVEN_OPTS=-Xmx2g mvn -e
-nsu -fae -B -T2 -Pkie-wb,wildfly11,sourcemaps clean install
-Dfull=true -Dcontainer=wildfly11 -Dcontainer.profile=wildfly11
-Dintegration-tests=true -Dmaven.test.failure.ignore=true
-Dmaven.test.redirectTestOutputToFile=true
-Dgwt.compiler.localWorkers=1
[3] https://issues.jboss.org/browse/RHDM-488
was:
- when I run just that test class locally from IntelliJ, it runs in ~20
seconds, with small Metaspace/heap consumption
- on jenkins (there's maven module-level build parallelism with -T2),
it takes 110-180s; it is not clear why it runs so much slower.
- I tried several times running the identical command that runs on
jenkins [2], including the same JDK version and memory settings (-Xmx2g):
- the Heap/Metaspace usage didn't get near the 2G maximum (see
screenshots from VisualVM from 2 runs)
- the running time of that SpeedTest was 40-80s
- the slaves used for the full downstream build have 24G RAM + 8G swap, so
lack of RAM shouldn't be an issue
- there seems to be no connection to the issue you mentioned [3]
- looking at the history of this SpeedTest I see that you committed it
initially as @Ignore, but Jozef later un-ignored it. Since this can be
considered a performance test and fails randomly from time to time,
I think we should mark it with @Ignore again. What do you think? Any
other ideas what I could check to troubleshoot this?
[1] http://janhrcek.cz/random-failures/#/class/org.drools.workbench.services....
[2] JAVA_HOME=/home/jhrcek/Tmp/jdk1.8.0_152 MAVEN_OPTS=-Xmx2g mvn -e
-nsu -fae -B -T2 -Pkie-wb,wildfly11,sourcemaps clean install
-Dfull=true -Dcontainer=wildfly11 -Dcontainer.profile=wildfly11
-Dintegration-tests=true -Dmaven.test.failure.ignore=true
-Dmaven.test.redirectTestOutputToFile=true
-Dgwt.compiler.localWorkers=1
[3] https://issues.jboss.org/browse/RHDM-488