[infinispan-issues] [JBoss JIRA] (ISPN-4802) HotRodConcurrentStartTest.testConcurrentStartup random failures

Dan Berindei (JIRA) issues at jboss.org
Fri Oct 3 07:25:11 EDT 2014


    [ https://issues.jboss.org/browse/ISPN-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13008451#comment-13008451 ] 

Dan Berindei commented on ISPN-4802:
------------------------------------

The test also seems to leak HotRod servers when the future times out, sometimes causing failures in other tests (a cleanup sketch follows the stack trace):

{noformat}
00:24:46,784 ERROR (testng-HotRodReplicatedEventsTest:) [UnitTestTestNGListener] Configuration method createBeforeClass(org.infinispan.server.hotrod.event.HotRodReplicatedEventsTest) threw an exception
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:476)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1021)
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:454)
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:439)
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:844)
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:195)
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:338)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
	at java.lang.Thread.run(Thread.java:745)
{noformat}
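
The leak could be avoided by stopping whatever was started in a {{finally}} block around the wait. A minimal sketch below, not the actual test code: the {{cacheManagers}} collection, the {{startHotRodServer(cm, port)}} helper and the {{stop()}} call are stand-ins for whatever the suite actually uses; the point is only the try/finally around {{Await.result}}.

{noformat}
// Minimal sketch, not the real HotRodConcurrentStartTest code. cacheManagers,
// startHotRodServer(...) and stop() are stand-ins for the suite's own helpers.
import java.util.concurrent.ConcurrentLinkedQueue
import org.infinispan.server.hotrod.HotRodServer
import scala.collection.JavaConverters._
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val started = new ConcurrentLinkedQueue[HotRodServer]()

val futures = cacheManagers.zipWithIndex.map { case (cm, i) =>
  Future {
    val server = startHotRodServer(cm, 13081 + i * 10) // hypothetical helper
    started.add(server)                                // remember it for cleanup
    server
  }
}

try {
  futures.foreach(Await.result(_, 20.seconds))
} finally {
  // Runs even when Await.result times out, so later tests can bind the same
  // ports instead of failing with "Address already in use".
  started.asScala.foreach(_.stop())
}
{noformat}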

It would also be nice if the threads creating the servers included the test name (e.g. by using {{AbstractInfinispanTest.fork()}}), as that would make filtering the logs much easier.
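
As a rough illustration (independent of whatever {{fork()}} provides, and with illustrative names and pool size), the start-up futures could run on an executor whose threads carry the test name, so the {{ForkJoinPool-1-worker-N}} entries above become greppable by test:

{noformat}
// Sketch only: run the server start-up futures on threads named after the
// test instead of the default ForkJoinPool workers. Names and pool size are
// illustrative.
import java.util.concurrent.{Executors, ThreadFactory}
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.ExecutionContext

val counter = new AtomicInteger()
val namedFactory = new ThreadFactory {
  override def newThread(r: Runnable): Thread =
    new Thread(r, "HotRodConcurrentStartTest-starter-" + counter.incrementAndGet())
}

// Pass this as the implicit ExecutionContext of the Future { ... } blocks that
// start the servers; every log line they emit then carries the test name.
implicit val namedEc: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2, namedFactory))
{noformat}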

> HotRodConcurrentStartTest.testConcurrentStartup random failures
> ---------------------------------------------------------------
>
>                 Key: ISPN-4802
>                 URL: https://issues.jboss.org/browse/ISPN-4802
>             Project: Infinispan
>          Issue Type: Bug
>          Components: Server, Test Suite - Server
>    Affects Versions: 7.0.0.Beta2
>            Reporter: Dan Berindei
>            Assignee: Galder Zamarreño
>            Priority: Critical
>              Labels: testsuite_stability
>             Fix For: 7.0.0.CR1
>
>
> Sometimes the cluster takes a long time to start, and the 20s timeout is not enough:
> {noformat}
> 10:44:42,144 INFO  (ForkJoinPool-1-worker-1:) [HotRodTestingUtil$] Start server in port 13081
> 10:44:42,171 INFO  (ForkJoinPool-1-worker-3:) [HotRodTestingUtil$] Start server in port 13091
> 10:44:42,234 INFO  (ForkJoinPool-1-worker-1:) [JGroupsTransport] ISPN000078: Starting JGroups channel ISPN
> 10:44:42,254 INFO  (ForkJoinPool-1-worker-3:) [JGroupsTransport] ISPN000078: Starting JGroups channel ISPN
> 10:44:47,383 DEBUG (ForkJoinPool-1-worker-3:) [GMS] address=HotRodConcurrentStartTest-NodeB-30943, cluster=ISPN, physical address=127.0.0.1:9000
> 10:44:47,746 DEBUG (ForkJoinPool-1-worker-3:) [CacheImpl] Started cache __cluster_registry_cache__ on HotRodConcurrentStartTest-NodeB-30943
> 10:44:47,750 DEBUG (ForkJoinPool-1-worker-3:) [CacheImpl] Started cache ___defaultcache on HotRodConcurrentStartTest-NodeB-30943
> 10:44:48,078 DEBUG (ForkJoinPool-1-worker-1:) [GMS] address=HotRodConcurrentStartTest-NodeA-34821, cluster=ISPN, physical address=127.0.0.1:9001
> 10:44:48,187 DEBUG (ForkJoinPool-1-worker-3:) [CacheImpl] Started cache hotRodConcurrentStart on HotRodConcurrentStartTest-NodeB-30943
> 10:44:48,308 DEBUG (ForkJoinPool-1-worker-3:) [HotRodTestingUtil$$anon$1] Externally facing address is 127.0.0.1:13091
> 10:44:48,556 DEBUG (ForkJoinPool-1-worker-3:) [CacheImpl] Started cache ___hotRodTopologyCache on HotRodConcurrentStartTest-NodeB-30943
> 10:44:48,557 DEBUG (ForkJoinPool-1-worker-3:) [HotRodTestingUtil$$anon$1] Map HotRodConcurrentStartTest-NodeB-30943 cluster address with 127.0.0.1:13091 server endpoint in address cache
> 10:44:50,947 DEBUG (ForkJoinPool-1-worker-1:) [CacheImpl] Started cache __cluster_registry_cache__ on HotRodConcurrentStartTest-NodeA-34821
> 10:44:50,952 DEBUG (ForkJoinPool-1-worker-1:) [CacheImpl] Started cache ___defaultcache on HotRodConcurrentStartTest-NodeA-34821
> 10:44:51,925 DEBUG (ForkJoinPool-1-worker-1:) [CacheImpl] Started cache hotRodConcurrentStart on HotRodConcurrentStartTest-NodeA-34821
> 10:45:02,048 DEBUG (ForkJoinPool-1-worker-1:) [HotRodTestingUtil$$anon$1] Externally facing address is 127.0.0.1:13081
> 10:45:02,248 DEBUG (ForkJoinPool-1-worker-1:) [LocalTopologyManagerImpl] Node HotRodConcurrentStartTest-NodeA-34821 joining cache ___hotRodTopologyCache
> 10:45:02,356 ERROR (testng-HotRodConcurrentStartTest:) [UnitTestTestNGListener] Test testConcurrentStartup(org.infinispan.server.hotrod.HotRodConcurrentStartTest) failed.
> java.util.concurrent.TimeoutException: Futures timed out after [20 seconds]
> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:116)
> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> 	at scala.concurrent.Await$.result(package.scala:116)
> 	at org.infinispan.server.hotrod.HotRodConcurrentStartTest.testConcurrentStartup(HotRodConcurrentStartTest.scala:64)
> {noformat}
> http://ci.infinispan.org/viewLog.html?buildId=12599&buildTypeId=bt8
> http://ci.infinispan.org/viewLog.html?buildId=12408&buildTypeId=Infinispan_MasterHotspotJdk8
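
A note on the quoted timeout: the stack trace above shows the test blocking in {{scala.concurrent.Await.result}} with a 20-second bound. Raising that bound is the straightforward mitigation for slow cluster start-up (the value below is illustrative, not a proposed number, and {{serverStartFuture}} is a placeholder), though the cleanup in the earlier sketch is still needed for the case where the wait does time out:

{noformat}
// Illustration only: a longer bound on the existing Await.result call.
// serverStartFuture is a placeholder for the future the test already awaits.
import scala.concurrent.Await
import scala.concurrent.duration._

Await.result(serverStartFuture, 60.seconds)
{noformat}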



--
This message was sent by Atlassian JIRA
(v6.3.1#6329)


