[JBoss JIRA] (ISPN-8220) ClusteredCacheMgmtInterceptorMBeanTest fails intermittently
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8220?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-8220:
----------------------------------
Fix Version/s: 9.1.2.Final
(was: 9.1.1.Final)
> ClusteredCacheMgmtInterceptorMBeanTest fails intermittently
> -----------------------------------------------------------
>
> Key: ISPN-8220
> URL: https://issues.jboss.org/browse/ISPN-8220
> Project: Infinispan
> Issue Type: Bug
> Components: JMX, reporting and management, Test Suite - Core
> Affects Versions: 9.1.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Labels: testsuite_stability
> Fix For: 9.1.2.Final
>
>
> {code:java}
> org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:259)
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1679)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1327)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1793)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:282)
> at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:358)
> at org.infinispan.cache.impl.EncoderCache.put(EncoderCache.java:655)
> at org.infinispan.jmx.ClusteredCacheMgmtInterceptorMBeanTest.testCorrectStatsInCluster(ClusteredCacheMgmtInterceptorMBeanTest.java:48)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 6
> at org.infinispan.interceptors.impl.BaseStateTransferInterceptor$CancellableRetry.run(BaseStateTransferInterceptor.java:347)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> ... 3 more
> ... Removed 16 stack frames
> {code}
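The timeout above fires when the test's write races an unfinished rebalance. Clustered tests commonly poll until the cluster has settled before issuing writes or assertions; a minimal sketch of such a polling helper follows (illustrative only, not part of the Infinispan test suite or API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Generic poll-until-true helper with a deadline, the pattern clustered tests
// use to wait for a condition such as "the expected topology is installed on
// all nodes" before asserting. Purely illustrative.
public class EventuallyCondition {
    public static boolean eventually(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // One last check at the deadline before giving up.
        return condition.getAsBoolean();
    }
}
```

A test would call `eventually(() -> topologyId() >= 6, 30_000, 100)` (hypothetical `topologyId()` accessor) before the first `put`, rather than assuming the rebalance has finished.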
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8217) RestStoreTest.tearDown intermittent failure
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8217?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-8217:
----------------------------------
Fix Version/s: 9.1.2.Final
(was: 9.1.1.Final)
> RestStoreTest.tearDown intermittent failure
> -------------------------------------------
>
> Key: ISPN-8217
> URL: https://issues.jboss.org/browse/ISPN-8217
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 9.1.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.1.2.Final
>
>
> The test fails intermittently with the following:
> {code:java}
> io.netty.channel.ChannelException: eventfd_write() failed: Bad file descriptor
> at io.netty.channel.epoll.Native.eventFdWrite(Native Method)
> at io.netty.channel.epoll.EpollEventLoop.wakeup(EpollEventLoop.java:126)
> at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:589)
> at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:163)
> at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:161)
> at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:144)
> at org.infinispan.persistence.rest.RestStoreTest.tearDown(RestStoreTest.java:73)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ... Removed 18 stack frames
>
> {code}
> I believe this is caused by Netty not closing connections properly; see the [4.0.49.Final/4.1.13.Final release notes|http://netty.io/news/2017/07/06/4-0-49-Final-4-1-13-Final.html]. Upgrading to a Netty release that contains the fix should resolve this issue.
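The `eventfd_write() failed: Bad file descriptor` error is characteristic of waking up an event loop whose file descriptor has already been closed, i.e. a second shutdown attempt against an already-stopped transport. Independently of the Netty upgrade, a tearDown can defend against that by making stop() idempotent. A minimal sketch with plain java.util.concurrent (no Netty; class and names are illustrative, not Infinispan code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Guard a transport's stop() with a compare-and-set flag so the underlying
// executor is shut down at most once; a repeated shutdown of an
// already-closed event loop is the kind of race that surfaces as
// "eventfd_write() failed: Bad file descriptor" with epoll.
public class GuardedTransport {
    private final ExecutorService workers = Executors.newFixedThreadPool(2);
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    /** Returns true only for the caller that actually performed the shutdown. */
    public boolean stop() {
        if (!stopped.compareAndSet(false, true)) {
            return false; // already stopped; do not touch the closed executor again
        }
        workers.shutdown();
        try {
            return workers.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

With this guard, a tearDown that runs after the server was already stopped becomes a no-op instead of hitting the closed descriptor.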
[JBoss JIRA] (ISPN-8321) Deadlock in hibernate-cache tests
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8321?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8321:
-------------------------------
Description:
{noformat}
Found one Java-level deadlock:
=============================
"TestDisconnectHandler-1":
waiting to lock monitor 0x00007fdff40036f8 (object 0x00000007359182d0, a org.jgroups.protocols.pbcast.Merger),
which is held by "jgroups-4,EntityCollectionInvalidationTest-NodeF-59512"
"jgroups-4,EntityCollectionInvalidationTest-NodeF-59512":
waiting for ownable synchronizer 0x0000000735645138, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
which is held by "TestDisconnectHandler-1"
Java stack information for the threads listed above:
===================================================
"TestDisconnectHandler-1":
at org.jgroups.protocols.pbcast.Merger.cancelMerge(Merger.java:431)
- waiting to lock <0x00000007359182d0> (a org.jgroups.protocols.pbcast.Merger)
at org.jgroups.protocols.pbcast.CoordGmsImpl.init(CoordGmsImpl.java:34)
at org.jgroups.protocols.pbcast.GMS.becomeCoordinator(GMS.java:407)
at org.jgroups.protocols.pbcast.ParticipantGmsImpl.handleMembershipChange(ParticipantGmsImpl.java:114)
at org.jgroups.protocols.pbcast.GMS.process(GMS.java:1296)
at org.jgroups.protocols.pbcast.GMS$$Lambda$95/1582906120.accept(Unknown Source)
at org.jgroups.protocols.pbcast.ViewHandler.process(ViewHandler.java:173)
at org.jgroups.protocols.pbcast.ViewHandler.add(ViewHandler.java:111)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:841)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:233)
at org.jgroups.stack.Protocol.up(Protocol.java:302)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:591)
at org.jgroups.stack.Protocol.up(Protocol.java:302)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:245)
at org.infinispan.test.hibernate.cache.util.TestDisconnectHandler.lambda$down$0(TestDisconnectHandler.java:63)
at org.infinispan.test.hibernate.cache.util.TestDisconnectHandler$$Lambda$392/1960261368.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
"jgroups-4,EntityCollectionInvalidationTest-NodeF-59512":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000735645138> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at org.jgroups.protocols.pbcast.ViewHandler.resume(ViewHandler.java:140)
at org.jgroups.protocols.pbcast.Merger.cancelMerge(Merger.java:435)
- locked <0x00000007359182d0> (a org.jgroups.protocols.pbcast.Merger)
at org.jgroups.protocols.pbcast.CoordGmsImpl.init(CoordGmsImpl.java:34)
at org.jgroups.protocols.pbcast.GMS.becomeCoordinator(GMS.java:407)
at org.jgroups.protocols.pbcast.GMS.installView(GMS.java:688)
- locked <0x0000000735918798> (a org.jgroups.Membership)
- locked <0x0000000735643d68> (a org.jgroups.protocols.pbcast.GMS)
at org.jgroups.protocols.pbcast.ParticipantGmsImpl.handleViewChange(ParticipantGmsImpl.java:135)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:918)
at org.jgroups.stack.Protocol.up(Protocol.java:336)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:293)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:428)
at org.jgroups.protocols.pbcast.NAKACK2.deliverBatch(NAKACK2.java:962)
at org.jgroups.protocols.pbcast.NAKACK2.removeAndDeliver(NAKACK2.java:896)
at org.jgroups.protocols.pbcast.NAKACK2.handleMessages(NAKACK2.java:870)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:690)
at org.jgroups.protocols.FD.up(FD.java:280)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.stack.Protocol.up(Protocol.java:344)
at org.jgroups.protocols.TP.passBatchUp(TP.java:1255)
at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.passBatchUp(MaxOneThreadPerSender.java:284)
at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:136)
at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.run(MaxOneThreadPerSender.java:273)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
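This is a classic lock-order inversion: "TestDisconnectHandler-1" holds the ViewHandler's ReentrantLock and waits for the Merger monitor, while the JGroups thread holds the Merger monitor (via cancelMerge) and waits for that same ReentrantLock (via ViewHandler.resume). A minimal sketch of removing the cycle by acquiring both locks in one fixed order on every path (names are illustrative, not JGroups code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Two locks modeled after the trace: a ReentrantLock (the ViewHandler lock)
// and an intrinsic monitor (the Merger). Every path takes the lock first,
// then the monitor, so no cycle of waits can form.
public class OrderedLocks {
    private final ReentrantLock viewHandlerLock = new ReentrantLock();
    private final Object mergerMonitor = new Object();
    private int counter;

    public void cancelMerge() {
        viewHandlerLock.lock();            // always the ReentrantLock first...
        try {
            synchronized (mergerMonitor) { // ...then the monitor
                counter++;
            }
        } finally {
            viewHandlerLock.unlock();
        }
    }

    /** Runs cancelMerge() from two threads; returns the final counter value. */
    public int runConcurrently(int iterationsPerThread) {
        Runnable task = () -> {
            for (int i = 0; i < iterationsPerThread; i++) {
                cancelMerge();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }
}
```

The alternative, where consistent ordering is impractical, is a timed `tryLock` on the ReentrantLock with backoff, so one of the two threads eventually releases and retries instead of blocking forever.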
> Deadlock in hibernate-cache tests
> ---------------------------------
>
> Key: ISPN-8321
> URL: https://issues.jboss.org/browse/ISPN-8321
> Project: Infinispan
> Issue Type: Bug
> Components: Hibernate Cache
> Affects Versions: 9.1.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Galder Zamarreño
> Labels: testsuite_stability
> Attachments: deadlock.txt
>
>