[JBoss JIRA] (ISPN-5531) java.lang.UnsupportedOperationException during remove (using RemoteCacheManager)
by Enrico Olivelli (JIRA)
Enrico Olivelli created ISPN-5531:
-------------------------------------
Summary: java.lang.UnsupportedOperationException during remove (using RemoteCacheManager)
Key: ISPN-5531
URL: https://issues.jboss.org/browse/ISPN-5531
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 7.2.2.Final
Reporter: Enrico Olivelli
during a "remove" from a java HotRod client we got this error.
This error happens only when there are more than one hotrod server in the cluster.
We are using hotrod server in library mode (embedded)
{code}
15/06/09 08:05:34 ERROR interceptors.InvocationContextInterceptor: ISPN000136: Execution error
org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [java.lang.UnsupportedOperationException] while invoking method [public void org.infinispan.notifications.cachelistener.cluster.RemoteClusterListener.handleClusterEvents(org.infinispan.notifications.cachelistener.event.CacheEntryEvent) throws java.lang.Exception] on listener instance: org.infinispan.notifications.cachelistener.cluster.RemoteClusterListener@28d94aee
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:291)
at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:22)
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309)
at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1180)
at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invokeNoChecks(CacheNotifierImpl.java:1175)
at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1152)
at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyCacheEntryRemoved(CacheNotifierImpl.java:333)
at org.infinispan.commands.write.RemoveCommand.notify(RemoveCommand.java:97)
at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.notifyCommitEntry(ClusteringDependentLogic.java:130)
at org.infinispan.interceptors.locking.ClusteringDependentLogic$DistributionLogic.commitSingleEntry(ClusteringDependentLogic.java:493)
at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.commitEntry(ClusteringDependentLogic.java:108)
at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntry(EntryWrappingInterceptor.java:371)
at org.infinispan.interceptors.EntryWrappingInterceptor.commitEntryIfNeeded(EntryWrappingInterceptor.java:549)
at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntries(EntryWrappingInterceptor.java:348)
at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:422)
at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:453)
at org.infinispan.interceptors.EntryWrappingInterceptor.visitRemoveCommand(EntryWrappingInterceptor.java:252)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.distribution.L1LastChanceInterceptor.visitDataWriteCommand(L1LastChanceInterceptor.java:79)
at org.infinispan.interceptors.distribution.L1LastChanceInterceptor.visitRemoveCommand(L1LastChanceInterceptor.java:75)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:88)
at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40)
at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitRemoveCommand(AbstractLockingInterceptor.java:65)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:324)
at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:256)
at org.infinispan.statetransfer.StateTransferInterceptor.visitRemoveCommand(StateTransferInterceptor.java:130)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitRemoveCommand(CacheMgmtInterceptor.java:209)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.compat.BaseTypeConverterInterceptor.visitRemoveCommand(BaseTypeConverterInterceptor.java:228)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)
at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617)
at org.infinispan.cache.impl.CacheImpl.removeInternal(CacheImpl.java:579)
at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:572)
at org.infinispan.cache.impl.DecoratedCache.remove(DecoratedCache.java:442)
at org.infinispan.server.hotrod.CacheDecodeContext.remove(CacheDecodeContext.scala:235)
at org.infinispan.server.hotrod.HotRodDecoder.handleModification(HotRodDecoder.scala:216)
at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeKey(HotRodDecoder.scala:102)
at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:48)
at org.infinispan.server.hotrod.HotRodDecoder$$anon$1.run(HotRodDecoder.scala:206)
at org.infinispan.server.hotrod.HotRodDecoder$$anon$1.run(HotRodDecoder.scala:205)
at org.infinispan.security.Security.doAs(Security.java:143)
at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:205)
at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45)
at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:31)
at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:31)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException
at java.util.AbstractCollection.add(AbstractCollection.java:262)
at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
at org.infinispan.notifications.cachelistener.cluster.impl.BatchingClusterEventManagerImpl$UnicastEventContext.addTargets(BatchingClusterEventManagerImpl.java:82)
at org.infinispan.notifications.cachelistener.cluster.impl.BatchingClusterEventManagerImpl.addEvents(BatchingClusterEventManagerImpl.java:43)
at org.infinispan.notifications.cachelistener.cluster.RemoteClusterListener.handleClusterEvents(RemoteClusterListener.java:108)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286)
... 76 more
{code}
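The {{Caused by}} section shows {{AbstractCollection.add}} rejecting the call, i.e. the targets collection that {{BatchingClusterEventManagerImpl$UnicastEventContext.addTargets}} passes to {{addAll}} does not support mutation. A minimal standalone sketch of that Java-level failure mode (the class below is illustrative only, not the Infinispan internals):
{code}
import java.util.AbstractCollection;
import java.util.Collections;
import java.util.Iterator;

public class AddAllDemo {

    // Hypothetical stand-in for the targets collection: an AbstractCollection
    // subclass that never overrides add(), so the inherited add() throws.
    static final class ReadOnlyTargets extends AbstractCollection<String> {
        @Override public Iterator<String> iterator() { return Collections.<String>emptyList().iterator(); }
        @Override public int size() { return 0; }
    }

    public static void main(String[] args) {
        // Reproduces the frames in the trace: addAll() -> add() -> UnsupportedOperationException
        new ReadOnlyTargets().addAll(Collections.singletonList("NodeA"));
    }
}
{code}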
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5523) Allocated buffer in key and value converter should be released
by Vojtech Juranek (JIRA)
[ https://issues.jboss.org/browse/ISPN-5523?page=com.atlassian.jira.plugin.... ]
Vojtech Juranek commented on ISPN-5523:
---------------------------------------
Netty leak suspect from a recent ISPN run with Netty 4.0.28 ([full log|https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-perf-cs-near...]):
{noformat}
[0m[31m13:28:23,169 SEVERE [io.netty.util.ResourceLeakDetector] (HotRodServerWorker-9-12) LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 5
#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:565)
org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:44)
org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:49)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:55)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
#4:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeByte(AdvancedLeakAwareByteBuf.java:499)
org.infinispan.server.core.transport.VInt$.write(VInt.scala:17)
org.infinispan.server.core.transport.ExtendedByteBuf$.writeUnsignedInt(ExtendedByteBuf.scala:38)
org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:42)
org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:49)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:55)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeByte(AdvancedLeakAwareByteBuf.java:499)
org.infinispan.server.core.transport.VInt$.write(VInt.scala:19)
org.infinispan.server.core.transport.ExtendedByteBuf$.writeUnsignedInt(ExtendedByteBuf.scala:38)
org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:42)
org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:49)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:55)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeByte(AdvancedLeakAwareByteBuf.java:499)
org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:48)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:55)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
#1:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeByte(AdvancedLeakAwareByteBuf.java:499)
org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:47)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:55)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:259)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:141)
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:75)
org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:30)
io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
{noformat}
> Allocated buffer in key and value converter should be released
> --------------------------------------------------------------
>
> Key: ISPN-5523
> URL: https://issues.jboss.org/browse/ISPN-5523
> Project: Infinispan
> Issue Type: Feature Request
> Components: Remote Protocols
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 8.0.0.Alpha2, 7.2.3.Final, 8.0.0.Final
>
>
> Some near caching tests are throwing:
> {code}
> [0m[31m04:11:24,499 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRodServerWorker-43) ISPN005009: Unexpected error before any request parameters read: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) [rt.jar:1.7.0_75]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) [rt.jar:1.7.0_75]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) [rt.jar:1.7.0_75]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:433) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:241) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:106) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_75]
> {code}
> KeyValueVersionConverter allocates a byte buffer but does not release it. It could be the cause...
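For reference, the usual Netty discipline for a handler-allocated {{ByteBuf}} that is not passed further down the pipeline is to release it explicitly. A minimal sketch of that pattern, with a hypothetical converter-style method rather than the actual {{KeyValueVersionConverter}} code:
{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ConverterReleaseSketch {

    // Hypothetical converter-style helper: serializes key and value into one byte[].
    // The intermediate ByteBuf is released in a finally block so it cannot leak.
    static byte[] convert(byte[] key, byte[] value) {
        ByteBuf buf = Unpooled.buffer(key.length + value.length);
        try {
            buf.writeBytes(key);
            buf.writeBytes(value);
            byte[] out = new byte[buf.readableBytes()];
            buf.readBytes(out);
            return out;
        } finally {
            // Without this, pooled/direct buffers are only ever reported by the leak detector.
            buf.release();
        }
    }

    public static void main(String[] args) {
        System.out.println(convert(new byte[]{1}, new byte[]{2, 3}).length); // prints 3
    }
}
{code}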
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5530) AtomicObjectFactoryTest.distributedCacheTest random failures
by Dan Berindei (JIRA)
Dan Berindei created ISPN-5530:
----------------------------------
Summary: AtomicObjectFactoryTest.distributedCacheTest random failures
Key: ISPN-5530
URL: https://issues.jboss.org/browse/ISPN-5530
Project: Infinispan
Issue Type: Bug
Components: Test Suite - Core
Affects Versions: 8.0.0.Alpha1, 7.2.2.Final
Reporter: Dan Berindei
Priority: Blocker
Fix For: 8.0.0.Alpha2
{noformat}
java.lang.AssertionError: obtained = 999; espected = 1000 expected:<1000> but was:<999>
at org.testng.AssertJUnit.fail(AssertJUnit.java:59)
at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364)
at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80)
at org.infinispan.atomic.AtomicObjectFactoryTest.distributedCacheTest(AtomicObjectFactoryTest.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5459) StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5459?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5459:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/3528
> StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-5459
> URL: https://issues.jboss.org/browse/ISPN-5459
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 8.0.0.Alpha2
>
>
> {{LocalTopologyManagerImpl.isRebalancingEnabled()}} will throw a {{SuspectException}} if the coordinator crashes, preventing the cache from starting up.
> This is causing random failures in {{ClusterListenerDistTxAddListenerTest}}:
> {noformat}
> 22:23:59,439 ERROR (testng-ClusterListenerDistTxAddListenerTest:) [UnitTestTestNGListener] Test testNodeJoiningAndStateNodeDiesWithExistingClusterListener(org.infinispan.notifications.cachelistener.cluster.ClusterListenerDistTxAddListenerTest) failed.
> java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:202)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.testNodeJoiningAndStateNodeDiesWithExistingClusterListener(AbstractClusterListenerDistAddListenerTest.java:254)
> ...
> Caused by: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:172)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:218)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:850)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:599)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:554)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:424)
> at org.infinispan.test.MultipleCacheManagersTest.cache(MultipleCacheManagersTest.java:366)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.access$100(AbstractClusterListenerDistAddListenerTest.java:32)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:237)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:234)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:422)
> ... 4 more
> Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: Node NodeM-34961 was suspected
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:245)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:566)
> at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:501)
> at org.infinispan.topology.LocalTopologyManagerImpl.isRebalancingEnabled(LocalTopologyManagerImpl.java:445)
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
> at sun.reflect.GeneratedMethodAccessor165.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> ... 18 more
> Caused by: SuspectedException
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:414)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:427)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:240)
> ... 26 more
> {noformat}
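Purely to illustrate the failure mode described above (this is not the fix in the linked pull request), a coordinator-bound query made during cache start could be routed through a retry wrapper so that a {{SuspectException}} leads to another attempt against the newly elected coordinator. A hedged sketch, assuming retrying the query is safe:
{code}
import java.util.concurrent.Callable;

import org.infinispan.remoting.transport.jgroups.SuspectException;

public final class SuspectRetry {

    // Illustrative only: re-run a coordinator-bound action when the coordinator
    // is suspected, instead of letting the whole cache start fail.
    // maxAttempts must be > 0.
    static <T> T withSuspectRetry(Callable<T> action, int maxAttempts) throws Exception {
        SuspectException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return action.call();
            } catch (SuspectException e) {
                last = e;           // coordinator died mid-call; wait for a new view
                Thread.sleep(100L); // small back-off before asking the new coordinator
            }
        }
        throw last;
    }
}
{code}
A caller such as {{waitForInitialStateTransferToComplete()}} could then route its {{isRebalancingEnabled()}} query through a wrapper of this kind.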
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5459) StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5459?page=com.atlassian.jira.plugin.... ]
Dan Berindei reassigned ISPN-5459:
----------------------------------
Assignee: Dan Berindei (was: William Burns)
> StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-5459
> URL: https://issues.jboss.org/browse/ISPN-5459
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 8.0.0.Alpha2
>
>
> {{LocalTopologyManagerImpl.isRebalancingEnabled()}} will throw a {{SuspectException}} if the coordinator crashes, preventing the cache from starting up.
> This is causing random failures in {{ClusterListenerDistTxAddListenerTest}}:
> {noformat}
> 22:23:59,439 ERROR (testng-ClusterListenerDistTxAddListenerTest:) [UnitTestTestNGListener] Test testNodeJoiningAndStateNodeDiesWithExistingClusterListener(org.infinispan.notifications.cachelistener.cluster.ClusterListenerDistTxAddListenerTest) failed.
> java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:202)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.testNodeJoiningAndStateNodeDiesWithExistingClusterListener(AbstractClusterListenerDistAddListenerTest.java:254)
> ...
> Caused by: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:172)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:218)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:850)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:599)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:554)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:424)
> at org.infinispan.test.MultipleCacheManagersTest.cache(MultipleCacheManagersTest.java:366)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.access$100(AbstractClusterListenerDistAddListenerTest.java:32)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:237)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:234)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:422)
> ... 4 more
> Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: Node NodeM-34961 was suspected
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:245)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:566)
> at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:501)
> at org.infinispan.topology.LocalTopologyManagerImpl.isRebalancingEnabled(LocalTopologyManagerImpl.java:445)
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
> at sun.reflect.GeneratedMethodAccessor165.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> ... 18 more
> Caused by: SuspectedException
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:414)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:427)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:240)
> ... 26 more
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5529) ClusterListenerReplTest.testPrimaryOwnerGoesDownAfterBackupRaisesEvent random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5529?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5529:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/3528
> ClusterListenerReplTest.testPrimaryOwnerGoesDownAfterBackupRaisesEvent random failures
> --------------------------------------------------------------------------------------
>
> Key: ISPN-5529
> URL: https://issues.jboss.org/browse/ISPN-5529
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 8.0.0.Alpha2
>
>
> Because the cache is replicated, both cache 0 and cache 2 are backup owners of the tested key, and either of them can become the primary owner after cache 1 is killed. If cache 0 becomes the primary owner, {{cache.put(key, FIRST_VALUE)}} may return {{null}} and/or the cluster listener may receive only one event:
> {noformat}
> java.lang.AssertionError: expected:<null> but was:<first-value>
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:101)
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:108)
> at org.infinispan.notifications.cachelistener.cluster.ClusterListenerReplTest.testPrimaryOwnerGoesDownAfterBackupRaisesEvent(ClusterListenerReplTest.java:119)
> {noformat}
> {noformat}
> 12:23:53,438 ERROR (testng-ClusterListenerReplTest:) [UnitTestTestNGListener] Test testPrimaryOwnerGoesDownAfterBackupRaisesEvent(org.infinispan.notifications.cachelistener.cluster.ClusterListenerReplTest) failed.java.lang.AssertionError: expected:<1> but was:<2>
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59)
> at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364)
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80)
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:245)
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:252)
> at org.infinispan.notifications.cachelistener.cluster.ClusterListenerReplTest.testPrimaryOwnerGoesDownAfterBackupRaisesEvent(ClusterListenerReplTest.java:121)
> {noformat}
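As a purely illustrative aside (the actual fix is in the linked pull request), a test that has to live with the non-determinism described above can assert against the set of legal outcomes rather than a single expected value. A minimal sketch with hypothetical names:
{code}
import java.util.Arrays;

public final class TolerantAsserts {

    // Hypothetical helper: passes if the actual value matches any outcome that is
    // legal when either backup owner may have become the primary before the call ran.
    static void assertOneOf(Object actual, Object... legal) {
        for (Object candidate : legal) {
            if (candidate == null ? actual == null : candidate.equals(actual)) {
                return;
            }
        }
        throw new AssertionError("got " + actual + ", expected one of " + Arrays.toString(legal));
    }

    public static void main(String[] args) {
        // put() after the old primary died may legally return null or the previous value
        assertOneOf("first-value", null, "first-value");
        // and the cluster listener may legally have seen 1 or 2 events
        assertOneOf(2, 1, 2);
    }
}
{code}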
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)