[JBoss JIRA] (WFLY-12753) Intermittent failure in LastNodeToLeaveRemoteEJBTestCase
by Brian Stansberry (Jira)
[ https://issues.jboss.org/browse/WFLY-12753?page=com.atlassian.jira.plugin... ]
Brian Stansberry commented on WFLY-12753:
-----------------------------------------
Scrolling further up in the log, past the previous test execution (which doesn't involve any server running on port 10090) to the one before that (which does have such a server), I see:
{code}
[20:38:47][Step 2/3] [INFO] Running org.jboss.as.test.clustering.cluster.ejb.xpc.StatefulWithXPCFailoverTestCase
[20:39:42][Step 2/3] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.511 s - in org.jboss.as.test.clustering.cluster.ejb.xpc.StatefulWithXPCFailoverTestCase
[20:39:42][Step 2/3] [INFO] Running org.jboss.as.test.clustering.cluster.ejb2.stateful.failover.RemoteEJBClientStatefulBeanFailoverTestCase
[21:59:11][Step 2/3] [INFO]
[21:59:11][Step 2/3] [INFO] Results:
[21:59:11][Step 2/3] [INFO]
[21:59:11][Step 2/3] [WARNING] Tests run: 37, Failures: 0, Errors: 0, Skipped: 5
[21:59:11][Step 2/3] [INFO]
[21:59:11][Step 2/3] [ERROR] There was a timeout or other error in the fork
{code}
So the problem looks like some sort of hang in RemoteEJBClientStatefulBeanFailoverTestCase.
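If this reproduces, a thread dump of the hung fork (e.g. via {{jstack <pid>}}) would show where it's stuck. As a rough in-process equivalent, a hypothetical watchdog helper (not part of the testsuite) could look like:
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Hypothetical helper: dumps all live threads of the current JVM,
// including held monitors/synchronizers, to see where a test is blocked.
public class HangDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info); // ThreadInfo.toString() includes a stack snippet
        }
    }
}
{code}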
> Intermittent failure in LastNodeToLeaveRemoteEJBTestCase
> --------------------------------------------------------
>
> Key: WFLY-12753
> URL: https://issues.jboss.org/browse/WFLY-12753
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Reporter: Brian Stansberry
> Assignee: Paul Ferraro
> Priority: Major
>
> Lately it's fairly common to see dozens or hundreds of failures like those at https://ci.wildfly.org/viewLog.html?buildId=173641&buildTypeId=WF_PullReq...
> The problem is that LastNodeToLeaveRemoteEJBTestCase, the first test class run in a surefire execution, fails due to a port conflict, perhaps caused by a leftover process.
> {code}
> [21:59:27][Step 2/3] [INFO] Running org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase
> [21:59:36][Step 2/3] [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.321 s <<< FAILURE! - in org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase
> [21:59:36][Step 2/3] [ERROR] testDNRContentsAfterLastNodeToLeave(org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase) Time elapsed: 6.8 s <<< ERROR!
> [21:59:36][Step 2/3] org.jboss.arquillian.container.spi.client.container.LifecycleException:
> [21:59:36][Step 2/3] The port 10090 is already in use. It means that either the server might be already running or there is another process using port 10090.
> [21:59:36][Step 2/3] Managed containers do not support connecting to running server instances due to the possible harmful effect of connecting to the wrong server.
> [21:59:36][Step 2/3] Please stop server (or another process) before running, change to another type of container (e.g. remote) or use jboss.socket.binding.port-offset variable to change the default port.
> [21:59:36][Step 2/3] To disable this check and allow Arquillian to connect to a running server, set allowConnectingToRunningServer to true in the container configuration
> [21:59:36][Step 2/3] at org.jboss.as.arquillian.container.managed.ManagedDeployableContainer.failDueToRunning(ManagedDeployableContainer.java:323)
> [21:59:36][Step 2/3] at org.jboss.as.arquillian.container.managed.ManagedDeployableContainer.startInternal(ManagedDeployableContainer.java:81)
> [21:59:36][Step 2/3] at org.jboss.as.arquillian.container.CommonDeployableContainer.start(CommonDeployableContainer.java:123)
> [21:59:36][Step 2/3] at org.jboss.arquillian.container.impl.ContainerImpl.start(ContainerImpl.java:179)
> {code}
[JBoss JIRA] (WFLY-12753) Intermittent failure in LastNodeToLeaveRemoteEJBTestCase
by Brian Stansberry (Jira)
Brian Stansberry created WFLY-12753:
---------------------------------------
Summary: Intermittent failure in LastNodeToLeaveRemoteEJBTestCase
Key: WFLY-12753
URL: https://issues.jboss.org/browse/WFLY-12753
Project: WildFly
Issue Type: Bug
Components: Clustering
Reporter: Brian Stansberry
Assignee: Paul Ferraro
Lately it's fairly common to see dozens or hundreds of failures like those at https://ci.wildfly.org/viewLog.html?buildId=173641&buildTypeId=WF_PullReq...
The problem is that LastNodeToLeaveRemoteEJBTestCase, the first test class run in a surefire execution, fails due to a port conflict, perhaps caused by a leftover process.
{code}
[21:59:27][Step 2/3] [INFO] Running org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase
[21:59:36][Step 2/3] [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.321 s <<< FAILURE! - in org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase
[21:59:36][Step 2/3] [ERROR] testDNRContentsAfterLastNodeToLeave(org.jboss.as.test.clustering.cluster.ejb.remote.byteman.LastNodeToLeaveRemoteEJBTestCase) Time elapsed: 6.8 s <<< ERROR!
[21:59:36][Step 2/3] org.jboss.arquillian.container.spi.client.container.LifecycleException:
[21:59:36][Step 2/3] The port 10090 is already in use. It means that either the server might be already running or there is another process using port 10090.
[21:59:36][Step 2/3] Managed containers do not support connecting to running server instances due to the possible harmful effect of connecting to the wrong server.
[21:59:36][Step 2/3] Please stop server (or another process) before running, change to another type of container (e.g. remote) or use jboss.socket.binding.port-offset variable to change the default port.
[21:59:36][Step 2/3] To disable this check and allow Arquillian to connect to a running server, set allowConnectingToRunningServer to true in the container configuration
[21:59:36][Step 2/3] at org.jboss.as.arquillian.container.managed.ManagedDeployableContainer.failDueToRunning(ManagedDeployableContainer.java:323)
[21:59:36][Step 2/3] at org.jboss.as.arquillian.container.managed.ManagedDeployableContainer.startInternal(ManagedDeployableContainer.java:81)
[21:59:36][Step 2/3] at org.jboss.as.arquillian.container.CommonDeployableContainer.start(CommonDeployableContainer.java:123)
[21:59:36][Step 2/3] at org.jboss.arquillian.container.impl.ContainerImpl.start(ContainerImpl.java:179)
{code}
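Before re-running, it may be worth confirming whether something is actually still bound to 10090 when this happens. A minimal, hypothetical probe (not part of the testsuite) along these lines would do it:
{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Hypothetical probe: try to bind the management port that
// ManagedDeployableContainer complains about. A bind failure confirms
// that a leftover process (or server) is still holding the port.
public class PortProbe {
    public static void main(String[] args) {
        int port = 10090;
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress("localhost", port));
            System.out.println("Port " + port + " is free");
        } catch (IOException e) {
            System.out.println("Port " + port + " is already in use: " + e);
        }
    }
}
{code}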
[JBoss JIRA] (WFLY-12752) RankedAffinityTestCase fails on OpenJ9
by Paul Ferraro (Jira)
Paul Ferraro created WFLY-12752:
-----------------------------------
Summary: RankedAffinityTestCase fails on OpenJ9
Key: WFLY-12752
URL: https://issues.jboss.org/browse/WFLY-12752
Project: WildFly
Issue Type: Bug
Components: Clustering, Test Suite
Affects Versions: 18.0.0.Final
Reporter: Paul Ferraro
Assignee: Paul Ferraro
{noformat}
java.lang.NoSuchMethodError: org/bouncycastle/util/Arrays.append([Ljava/lang/String;Ljava/lang/String;)[Ljava/lang/String; (loaded from file:/store/repository/org/apache/directory/server/apacheds-all/2.0.0-M24/apacheds-all-2.0.0-M24.jar by jdk.internal.loader.ClassLoaders$AppClassLoader@b33b58fe) called from class org.jboss.as.test.clustering.cluster.affinity.RankedAffinityTestCase (loaded from file:/store/work/tc-work/aa09375132417a7/testsuite/integration/clustering/target/test-classes/ by jdk.internal.loader.ClassLoaders$AppClassLoader@b33b58fe).
at org.jboss.as.test.clustering.cluster.affinity.RankedAffinityTestCase.<init>(RankedAffinityTestCase.java:85)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
at org.jboss.arquillian.junit.Arquillian.access$300(Arquillian.java:54)
at org.jboss.arquillian.junit.Arquillian$6.runReflectiveCall(Arquillian.java:240)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.jboss.arquillian.junit.Arquillian.methodBlock(Arquillian.java:242)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:166)
at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:350)
at org.jboss.arquillian.junit.Arquillian.access$200(Arquillian.java:54)
at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:177)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:115)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417)
{noformat}
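Per the error message itself, {{org/bouncycastle/util/Arrays}} was loaded from the {{apacheds-all}} jar, whose bundled BouncyCastle evidently lacks this {{append}} overload. A quick, hypothetical way to confirm which jar wins on a given classpath:
{code}
// Hypothetical diagnostic, not part of the testsuite: print which jar the
// conflicting class is actually loaded from at runtime.
public class ClassOriginCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> clazz = Class.forName("org.bouncycastle.util.Arrays");
        System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}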
[JBoss JIRA] (JGRP-2395) LOCAL_PING fails when 2 nodes start at the same time
by Dan Berindei (Jira)
Dan Berindei created JGRP-2395:
----------------------------------
Summary: LOCAL_PING fails when 2 nodes start at the same time
Key: JGRP-2395
URL: https://issues.jboss.org/browse/JGRP-2395
Project: JGroups
Issue Type: Bug
Affects Versions: 4.1.6
Reporter: Dan Berindei
Assignee: Bela Ban
We have a test that starts 2 nodes in parallel ({{ConcurrentStartTest}}), and it has been failing randomly since we started using {{LOCAL_PING}}.
{noformat}
01:02:11,930 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: discovery took 3 ms, members: 1 rsps (0 coords) [done]
01:02:11,930 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: discovery took 3 ms, members: 1 rsps (0 coords) [done]
01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: could not determine coordinator from rsps 1 rsps (0 coords) [done]
01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: I (Test-NodeB-43694) am the first of the nodes, will become coordinator
01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: I (Test-NodeA-29550) am not the first of the nodes, waiting for another client to become coordinator
01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: discovery took 0 ms, members: 1 rsps (0 coords) [done]
01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
...
01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: I (Test-NodeA-29550) am not the first of the nodes, waiting for another client to become coordinator
01:02:11,942 WARN (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: too many JOIN attempts (10): becoming singleton
01:02:11,942 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: installing view [Test-NodeA-29550|0] (1) [Test-NodeA-29550]
01:02:11,977 DEBUG (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: created cluster (first member). My view is [Test-NodeB-43694|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
01:02:11,977 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: created cluster (first member). My view is [Test-NodeA-29550|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
{noformat}
The problem seems to be that it takes longer for the coordinator to install the initial view and update {{LOCAL_PING}}'s {{PingData}} than it takes the other node to retry the discovery process 10 times.
In some cases there is no retry: one node starts slightly faster but is not yet coordinator when the second node does its discovery, so both nodes decide they should become coordinator:
{noformat}
01:13:44,460 INFO (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-5386: no members discovered after 3 ms: creating cluster as first member
01:13:44,463 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: discovery took 1 ms, members: 1 rsps (0 coords) [done]
01:13:44,465 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: could not determine coordinator from rsps 1 rsps (0 coords) [done]
01:13:44,465 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: nodes to choose new coord from are: [Test-NodeB-51165, Test-NodeA-5386]
01:13:44,466 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: I (Test-NodeB-51165) am the first of the nodes, will become coordinator
01:13:44,466 DEBUG (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: installing view [Test-NodeB-51165|0] (1) [Test-NodeB-51165]
01:13:44,466 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-5386: installing view [Test-NodeA-5386|0] (1) [Test-NodeA-5386]
{noformat}
This second failure mode seems to go away if I move the {{discovery}} map access inside the {{synchronized}} block both in {{findMembers()}} and in {{down()}}.
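For illustration, a minimal sketch of that shape of fix (simplified, not the actual JGroups source; the map and types here are stand-ins):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch, not the actual JGroups source. The point of the fix:
// both the read in findMembers() and the write in down() touch the shared
// discovery map under the same lock, so a joiner can never observe the map
// in between the view being installed and the coordinator being recorded.
class LocalPingSketch {
    // stand-in for LOCAL_PING's shared per-cluster discovery data
    private static final Map<String, List<String>> DISCOVERY = new HashMap<>();

    List<String> findMembers(String cluster) {
        synchronized (DISCOVERY) {
            return new ArrayList<>(DISCOVERY.getOrDefault(cluster, Collections.emptyList()));
        }
    }

    void down(String cluster, String newCoordinator) {
        synchronized (DISCOVERY) {
            DISCOVERY.computeIfAbsent(cluster, k -> new ArrayList<>()).add(newCoordinator);
        }
    }
}
{code}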