[infinispan-issues] [JBoss JIRA] (ISPN-5684) ISPN000136 concurrent TimeoutException if a HotRod client uses getAll(...) and the owners < numOfNodes
RH Bugzilla Integration (JIRA)
issues at jboss.org
Wed Sep 2 11:08:05 EDT 2015
[ https://issues.jboss.org/browse/ISPN-5684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13104624#comment-13104624 ]
RH Bugzilla Integration commented on ISPN-5684:
-----------------------------------------------
Dave Stahl <dstahl at redhat.com> changed the Status of [bug 1259433|https://bugzilla.redhat.com/show_bug.cgi?id=1259433] from NEW to POST
> ISPN000136 concurrent TimeoutException if a HotRod client uses getAll(...) and the owners < numOfNodes
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-5684
> URL: https://issues.jboss.org/browse/ISPN-5684
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Environment: Current upstream:
> 615b91b (HEAD, upstream/master, master) ISPN-5595 Deployed Cache Store Factory operates on promises
> Reporter: Wolf-Dieter Fink
> Assignee: Galder Zamarreño
> Fix For: 8.0.0.Final
>
> Attachments: TestGetAll.java, TestGetAll2.java
>
>
> If a distributed cache is configured with fewer owners than there are nodes in the cluster, a HotRod client fails when compatibility mode is enabled.
> The getAll(...) call must include keys belonging to different owners for the failure to be reproduced consistently; a minimal reproduction sketch is included after the stack trace below.
> The server-side ERROR is as follows:
> 19:04:08,991 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (HotRodServerWorker-9-1) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 3
> at org.infinispan.statetransfer.StateTransferLockImpl.waitForTopology(StateTransferLockImpl.java:144)
> at org.infinispan.interceptors.base.BaseStateTransferInterceptor.waitForTopology(BaseStateTransferInterceptor.java:100)
> at org.infinispan.statetransfer.StateTransferInterceptor.visitGetAllCommand(StateTransferInterceptor.java:177)
> at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitGetAllCommand(CacheMgmtInterceptor.java:127)
> at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.compat.BaseTypeConverterInterceptor.visitGetAllCommand(BaseTypeConverterInterceptor.java:166)
> at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
> at org.infinispan.commands.AbstractVisitor.visitGetAllCommand(AbstractVisitor.java:95)
> at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> at org.infinispan.commands.AbstractVisitor.visitGetAllCommand(AbstractVisitor.java:95)
> at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
> at org.infinispan.cache.impl.CacheImpl.getAll(CacheImpl.java:443)
> at org.infinispan.cache.impl.DecoratedCache.getAll(DecoratedCache.java:442)
> at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getAll(AbstractDelegatingAdvancedCache.java:207)
> at org.infinispan.server.hotrod.Decoder2x$.customReadValue(Decoder2x.scala:482)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeValue(HotRodDecoder.scala:197)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeValue(HotRodDecoder.scala:136)
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:50)
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:206)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45)
> at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:31)
> at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
> at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:31)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
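For reference, a minimal sketch of the reproduction described above (this is an assumption; the attached TestGetAll.java is not reproduced here). It assumes a two-node cluster listening on 127.0.0.1:11222 and 127.0.0.1:11322, a default distributed cache with owners=1, and compatibility mode enabled on the server.

   import java.util.HashSet;
   import java.util.Map;
   import java.util.Set;

   import org.infinispan.client.hotrod.RemoteCache;
   import org.infinispan.client.hotrod.RemoteCacheManager;
   import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

   public class GetAllRepro {
      public static void main(String[] args) {
         // Hypothetical ports; adjust to the actual HotRod endpoints of the two nodes.
         ConfigurationBuilder builder = new ConfigurationBuilder();
         builder.addServer().host("127.0.0.1").port(11222)
                .addServer().host("127.0.0.1").port(11322);

         try (RemoteCacheManager rcm = new RemoteCacheManager(builder.build())) {
            RemoteCache<String, String> cache = rcm.getCache();

            // Insert enough keys so that different nodes own some of them.
            Set<String> keys = new HashSet<>();
            for (int i = 0; i < 100; i++) {
               String key = "key" + i;
               cache.put(key, "value" + i);
               keys.add(key);
            }

            // A getAll(...) over keys with different owners is what triggers the
            // server-side ISPN000136 TimeoutException when owners < number of nodes.
            Map<String, String> result = cache.getAll(keys);
            System.out.println("Returned " + result.size() + " entries");
         }
      }
   }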
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)