[
https://issues.jboss.org/browse/ISPN-5684?page=com.atlassian.jira.plugin....
]
Wolf-Dieter Fink updated ISPN-5684:
-----------------------------------
Steps to Reproduce:
The cache is configured as follows:
<distributed-cache name="test" mode="SYNC"
segments="20" owners="1" remote-timeout="30000"
start="EAGER">
<locking acquire-timeout="30000"
concurrency-level="1000" striping="false"/>
<transaction mode="NONE"/>
<compatibility enabled="true" />
</distributed-cache>
The JUnit test classes are attached.
TestGetAll fails constantly.
TestGetAll2 only changes the initialization timing and works most of the time;
only occasionally is the number of returned entries less than expected.
Start the nodes like this:
bin/standalone.sh -c clustered.xml -Djboss.socket.binding.port-offset=0
-Djboss.node.name=Node1
bin/standalone.sh -c clustered.xml -Djboss.socket.binding.port-offset=100
-Djboss.node.name=Node2
bin/standalone.sh -c clustered.xml -Djboss.socket.binding.port-offset=200
-Djboss.node.name=Node3
bin/standalone.sh -c clustered.xml -Djboss.socket.binding.port-offset=300
-Djboss.node.name=Node4
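The attached test classes are not reproduced here, but a reproducer along these lines triggers the failure (a sketch only: the class name, key count, and server address are assumptions, not taken from TestGetAll.java):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class GetAllSketch {

   // Build enough keys that, with segments="20" and owners="1", they are
   // spread across owners on all four nodes.
   static Set<String> buildKeys(int count) {
      Set<String> keys = new HashSet<>();
      for (int i = 0; i < count; i++) {
         keys.add("key-" + i);
      }
      return keys;
   }

   public static void main(String[] args) {
      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.addServer().host("127.0.0.1").port(11222); // Node1, port offset 0
      RemoteCacheManager rcm = new RemoteCacheManager(cb.build());
      RemoteCache<String, String> cache = rcm.getCache("test");

      Set<String> keys = buildKeys(100);
      for (String k : keys) {
         cache.put(k, "value-" + k);
      }
      // With compatibility mode enabled and owners < number of nodes, this
      // call makes the server fail with the ISPN000136 TimeoutException below.
      Map<String, String> result = cache.getAll(keys);
      System.out.println("expected " + keys.size() + ", got " + result.size());
      rcm.stop();
   }
}
```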
ISPN000136 concurrent TimeoutException if a HotRod client uses
getAll(...) and the owners < numOfNodes
------------------------------------------------------------------------------------------------------
Key: ISPN-5684
URL:
https://issues.jboss.org/browse/ISPN-5684
Project: Infinispan
Issue Type: Bug
Components: Core
Environment: Current upstream:
615b91b (HEAD, upstream/master, master) ISPN-5595 Deployed Cache Store Factory operates
on promises
Reporter: Wolf-Dieter Fink
Assignee: William Burns
Attachments: TestGetAll.java, TestGetAll2.java
If a distributed cache is configured with fewer owners than there are nodes in
the cluster, a HotRod client fails when compatibility mode is enabled.
The getAll(...) call must include keys belonging to different owners to fail constantly.
The error is as follows:
19:04:08,991 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (HotRodServerWorker-9-1) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 3
    at org.infinispan.statetransfer.StateTransferLockImpl.waitForTopology(StateTransferLockImpl.java:144)
    at org.infinispan.interceptors.base.BaseStateTransferInterceptor.waitForTopology(BaseStateTransferInterceptor.java:100)
    at org.infinispan.statetransfer.StateTransferInterceptor.visitGetAllCommand(StateTransferInterceptor.java:177)
    at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
    at org.infinispan.interceptors.CacheMgmtInterceptor.visitGetAllCommand(CacheMgmtInterceptor.java:127)
    at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
    at org.infinispan.interceptors.compat.BaseTypeConverterInterceptor.visitGetAllCommand(BaseTypeConverterInterceptor.java:166)
    at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
    at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)
    at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
    at org.infinispan.commands.AbstractVisitor.visitGetAllCommand(AbstractVisitor.java:95)
    at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
    at org.infinispan.commands.AbstractVisitor.visitGetAllCommand(AbstractVisitor.java:95)
    at org.infinispan.commands.read.GetAllCommand.acceptVisitor(GetAllCommand.java:59)
    at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
    at org.infinispan.cache.impl.CacheImpl.getAll(CacheImpl.java:443)
    at org.infinispan.cache.impl.DecoratedCache.getAll(DecoratedCache.java:442)
    at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getAll(AbstractDelegatingAdvancedCache.java:207)
    at org.infinispan.server.hotrod.Decoder2x$.customReadValue(Decoder2x.scala:482)
    at org.infinispan.server.hotrod.HotRodDecoder.customDecodeValue(HotRodDecoder.scala:197)
    at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeValue(HotRodDecoder.scala:136)
    at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:50)
    at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:206)
    at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45)
    at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
    at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:31)
    at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
    at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:31)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)