[JBoss JIRA] (ISPN-10168) Cache requested but no configuration exists should not happen for hardcoded caches
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-10168?page=com.atlassian.jira.plugin... ]
Galder Zamarreño commented on ISPN-10168:
-----------------------------------------
The cache is defined in XML as:
{code}
<distributed-cache name="players">
<memory>
<off-heap/>
</memory>
</distributed-cache>
{code}
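To make the suspected race concrete, here is a minimal, self-contained sketch using plain JDK classes. This is not Infinispan's actual API; {{StartupRace}}, {{defineCache}} and {{getCache}} are illustrative stand-ins for a configuration registry that the startup thread populates while a remote client may already be asking for the cache by name:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in registry: configurations are registered by the startup thread while
// a "remote client" thread may already be looking the cache up by name.
// Illustrative only -- these are not Infinispan's actual classes or methods.
class StartupRace {
    private final Map<String, String> configs = new ConcurrentHashMap<>();

    // Startup path: registers the XML-declared cache configuration.
    void defineCache(String name, String config) {
        configs.put(name, config);
    }

    // Client path: mirrors ISPN000436 if the lookup wins the race.
    String getCache(String name) {
        String cfg = configs.get(name);
        if (cfg == null) {
            throw new IllegalStateException(
                  "Cache '" + name + "' has been requested, but no cache configuration exists");
        }
        return cfg;
    }
}
```

If the client's {{getCache}} runs before the startup thread's {{defineCache}}, the lookup fails exactly once and then succeeds forever after, which matches the transient nature of the error above.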
> Cache requested but no configuration exists should not happen for hardcoded caches
> ----------------------------------------------------------------------------------
>
> Key: ISPN-10168
> URL: https://issues.jboss.org/browse/ISPN-10168
> Project: Infinispan
> Issue Type: Bug
> Components: Configuration
> Affects Versions: 9.4.12.Final, 10.0.0.Beta3
> Reporter: Galder Zamarreño
> Assignee: Tristan Tarrant
> Priority: Major
> Labels: rhdemo-2019
>
> A cache defined in the XML should never result in an exception like this.
> There seems to be some race condition between cache setup on startup and a remote client requesting it:
> {code}
> 10:55:21,929 ERROR [org.infinispan.stats.impl.ClusterCacheStatsImpl] (HotRod-hotrod-internal-ServerIO-4-17) Could not execute cluster wide cache stats operation : java.util.concurrent.CompletionException: org.infinispan.commons.CacheException: org.infinispan.commons.CacheConfigurationException: ISPN000436: Cache 'players' has been requested, but no cache configuration exists with that name and no default cache has been set for this container
> at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375) [rt.jar:1.8.0_191]
> at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934) [rt.jar:1.8.0_191]
> at org.infinispan.stats.impl.ClusterCacheStatsImpl.updateStats(ClusterCacheStatsImpl.java:116) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.AbstractClusterStats.fetchClusterWideStatsIfNeeded(AbstractClusterStats.java:114) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.AbstractClusterStats.getStat(AbstractClusterStats.java:207) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.AbstractClusterStats.getStatAsInt(AbstractClusterStats.java:202) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.ClusterCacheStatsImpl.getNumberOfEntries(ClusterCacheStatsImpl.java:251) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.ClusterCacheStatsImpl.getCurrentNumberOfEntries(ClusterCacheStatsImpl.java:314) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.server.hotrod.Encoder2x.statsResponse(Encoder2x.java:191) [infinispan-server-hotrod-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.server.hotrod.CacheRequestProcessor.stats(CacheRequestProcessor.java:64) [infinispan-server-hotrod-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.server.hotrod.HotRodDecoder.switch1(HotRodDecoder.java:1063) [infinispan-server-hotrod-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.server.hotrod.HotRodDecoder.switch1_0(HotRodDecoder.java:154) [infinispan-server-hotrod-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:143) [infinispan-server-hotrod-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) [netty-codec-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) [netty-codec-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at org.infinispan.server.core.transport.StatsChannelHandler.channelRead(StatsChannelHandler.java:26) [infinispan-server-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:417) [netty-transport-native-epoll-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:317) [netty-transport-native-epoll-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.28.Final.jar:4.1.28.Final]
> at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.28.Final.jar:4.1.28.Final]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_191]
> Caused by: org.infinispan.commons.CacheException: org.infinispan.commons.CacheConfigurationException: ISPN000436: Cache 'players' has been requested, but no cache configuration exists with that name and no default cache has been set for this container
> at org.infinispan.stats.impl.ClusterCacheStatsImpl.lambda$updateStats$0(ClusterCacheStatsImpl.java:105) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.impl.AllClusterExecutor.lambda$submitConsumer$6(AllClusterExecutor.java:193) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.impl.AbstractClusterExecutor.consumeResponse(AbstractClusterExecutor.java:64) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.impl.AllClusterExecutor.lambda$submitConsumer$7(AllClusterExecutor.java:192) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [rt.jar:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [rt.jar:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) [rt.jar:1.8.0_191]
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) [rt.jar:1.8.0_191]
> at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:67) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:57) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:35) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1372) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1275) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:126) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1420) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.jgroups.JChannel.up(JChannel.java:816) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:133) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.stack.Protocol.up(Protocol.java:340) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.FORK.up(FORK.java:141) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.FRAG3.up(FRAG3.java:171) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:339) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:872) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:240) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1008) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:734) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:389) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:590) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:131) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:203) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:253) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.MERGE3.up(MERGE3.java:280) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.Discovery.up(Discovery.java:295) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1249) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87) [jgroups-4.0.18.Final.jar:4.0.18.Final]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_191]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_191]
> ... 1 more
> Caused by: org.infinispan.commons.CacheConfigurationException: ISPN000436: Cache 'players' has been requested, but no cache configuration exists with that name and no default cache has been set for this container
> at org.infinispan.configuration.ConfigurationManager.getConfiguration(ConfigurationManager.java:66) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:612) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:601) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.DefaultCacheManager.internalGetCache(DefaultCacheManager.java:484) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:468) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:454) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.ClusterCacheStatsImpl$DistributedCacheStatsCallable.apply(ClusterCacheStatsImpl.java:478) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.stats.impl.ClusterCacheStatsImpl$DistributedCacheStatsCallable.apply(ClusterCacheStatsImpl.java:465) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.manager.impl.ReplicableCommandManagerFunction.invokeAsync(ReplicableCommandManagerFunction.java:36) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.invokeReplicableCommand(GlobalInboundInvocationHandler.java:175) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.runReplicableCommand(GlobalInboundInvocationHandler.java:156) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.lambda$handleReplicableCommand$1(GlobalInboundInvocationHandler.java:150) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212) [infinispan-core-9.4.13-SNAPSHOT.jar:9.4.13-SNAPSHOT]
> ... 3 more
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10168) Cache requested but no configuration exists should not happen for hardcoded caches
by Galder Zamarreño (Jira)
Galder Zamarreño created ISPN-10168:
---------------------------------------
Summary: Cache requested but no configuration exists should not happen for hardcoded caches
Key: ISPN-10168
URL: https://issues.jboss.org/browse/ISPN-10168
Project: Infinispan
Issue Type: Bug
Components: Configuration
Affects Versions: 10.0.0.Beta3, 9.4.12.Final
Reporter: Galder Zamarreño
Assignee: Tristan Tarrant
A cache defined in the XML should never result in an exception like this.
There seems to be some race condition between cache setup on startup and a remote client requesting it; the stack trace is identical to the one quoted above.
--
[JBoss JIRA] (ISPN-7912) Prevent RocksDBStore writes blocking on full expiry queue
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-7912?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7912:
------------------------------------
Maybe we wouldn't need a queue if we could write to the expiration DB directly, without merging with the existing list of values?
We may be able to do that if we changed the format of the expiration "table" to {{key = <expiration timestamp> + <key bytes>, value = <nothing>}}.
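For what it's worth, a sketch of that key encoding (a hypothetical helper, not actual store code): a big-endian timestamp prefix means RocksDB's default bytewise comparator sorts entries by expiration time, so writes need no read-merge of an existing value list:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the suggested key format: prepend the big-endian expiration
// timestamp to the cache key bytes, so lexicographic (unsigned byte) ordering
// sorts composite keys by expiration time. The value can then be empty.
// Hypothetical helper, not Infinispan code.
class ExpirationKey {
    static byte[] encode(long expiryMillis, byte[] keyBytes) {
        return ByteBuffer.allocate(Long.BYTES + keyBytes.length)
              .putLong(expiryMillis)   // big-endian by default
              .put(keyBytes)
              .array();
    }

    static long decodeTimestamp(byte[] composite) {
        return ByteBuffer.wrap(composite, 0, Long.BYTES).getLong();
    }

    static byte[] decodeKey(byte[] composite) {
        return Arrays.copyOfRange(composite, Long.BYTES, composite.length);
    }
}
```

An expiration scan then becomes a prefix-bounded iteration from the smallest key up to {{encode(now, new byte[0])}}, deleting as it goes.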
> Prevent RocksDBStore writes blocking on full expiry queue
> ---------------------------------------------------------
>
> Key: ISPN-7912
> URL: https://issues.jboss.org/browse/ISPN-7912
> Project: Infinispan
> Issue Type: Sub-task
> Components: Loaders and Stores
> Affects Versions: 9.1.0.Alpha1
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Priority: Major
> Fix For: 10.0.0.Final
>
>
> Currently you can insert only 10000 elements into the RocksDB store before a thread blocks until the expiration reaper has run. Instead, we should offer elements to the queue and, if the offer fails, utilise the persistence executors to run the purge.
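A minimal sketch of that offer-then-purge scheme, using plain JDK concurrency; names such as {{submitExpiry}} are illustrative, not the actual store API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: try a non-blocking offer into the bounded expiry queue; when the
// queue is full, submit a purge task to the persistence executor instead of
// parking the writer thread. Illustrative names, not the actual store API.
class ExpiryOffer {
    private final BlockingQueue<Object> expiryQueue;
    private final ExecutorService persistenceExecutor =
          Executors.newSingleThreadExecutor(r -> {
              Thread t = new Thread(r, "persistence-purge");
              t.setDaemon(true);               // don't keep the JVM alive
              return t;
          });

    ExpiryOffer(int capacity) {
        this.expiryQueue = new ArrayBlockingQueue<>(capacity);
    }

    // Returns true if queued; false if the purge fallback was scheduled instead.
    boolean submitExpiry(Object key, Runnable purge) {
        if (expiryQueue.offer(key)) {
            return true;                        // queued without blocking
        }
        persistenceExecutor.execute(purge);     // queue full: purge asynchronously
        return false;
    }
}
```

The writer thread never parks: it either enqueues in O(1) or hands the purge off to the executor and continues.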
--
[JBoss JIRA] (ISPN-10093) PersistenceManagerImpl stop deadlock with topology update
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10093?page=com.atlassian.jira.plugin... ]
Tristan Tarrant reassigned ISPN-10093:
--------------------------------------
Assignee: Will Burns
> PersistenceManagerImpl stop deadlock with topology update
> ---------------------------------------------------------
>
> Key: ISPN-10093
> URL: https://issues.jboss.org/browse/ISPN-10093
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Will Burns
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.0.Beta4
>
> Attachments: threaddump.txt
>
>
> {{DistSyncStoreNotSharedTest.clearContent}} hung in CI recently:
> {noformat}
> "testng-DistSyncStoreNotSharedTest" #16 prio=5 os_prio=0 cpu=11511.26ms elapsed=435.14s tid=0x00007fdb710b6000 nid=0x3222 waiting on condition [0x00007fdb352d3000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8a22450> (a java.util.concurrent.Semaphore$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11/AbstractQueuedSynchronizer.java:885)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(java.base@11/AbstractQueuedSynchronizer.java:1009)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@11/AbstractQueuedSynchronizer.java:1324)
> at java.util.concurrent.Semaphore.acquireUninterruptibly(java.base@11/Semaphore.java:504)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.stop(PersistenceManagerImpl.java:222)
> at jdk.internal.reflect.GeneratedMethodAccessor72.invoke(Unknown Source)
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@11/DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(java.base@11/Method.java:566)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
> at org.infinispan.commons.util.SecurityActions$$Lambda$237/0x0000000100661c40.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:71)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:76)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:181)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.performStop(BasicComponentRegistryImpl.java:601)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.stopWrapper(BasicComponentRegistryImpl.java:590)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.stop(BasicComponentRegistryImpl.java:461)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:431)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:366)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1160)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1125)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:521)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:747)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:799)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:775)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:846)
> at org.infinispan.test.MultipleCacheManagersTest.clearContent(MultipleCacheManagersTest.java:158)
> "persistence-thread-DistSyncStoreNotSharedTest-NodeB-p16432-t1" #53654 daemon prio=5 os_prio=0 cpu=1.26ms elapsed=301.93s tid=0x00007fdb3c3d8000 nid=0x8ef waiting on condition [0x00007fdb00055000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8b1fb88> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11/AbstractQueuedSynchronizer.java:885)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(java.base@11/AbstractQueuedSynchronizer.java:1009)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@11/AbstractQueuedSynchronizer.java:1324)
> at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@11/ReentrantReadWriteLock.java:738)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.pollStoreAvailability(PersistenceManagerImpl.java:196)
> at org.infinispan.persistence.manager.PersistenceManagerImpl$$Lambda$492/0x00000001007fb440.run(Unknown Source)
> at java.util.concurrent.Executors$RunnableAdapter.call(java.base@11/Executors.java:515)
> at java.util.concurrent.FutureTask.runAndReset(java.base@11/FutureTask.java:305)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(java.base@11/ScheduledThreadPoolExecutor.java:305)
> "transport-thread-DistSyncStoreNotSharedTest-NodeB-p16424-t5" #53646 daemon prio=5 os_prio=0 cpu=3.15ms elapsed=301.94s tid=0x00007fdb2007a000 nid=0x8e8 waiting on condition [0x00007fdb0b406000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8d2abb0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11/AbstractQueuedSynchronizer.java:2081)
> at io.reactivex.internal.operators.flowable.BlockingFlowableIterable$BlockingFlowableIterator.hasNext(BlockingFlowableIterable.java:94)
> at io.reactivex.Flowable.blockingForEach(Flowable.java:5682)
> at org.infinispan.statetransfer.StateConsumerImpl.removeStaleData(StateConsumerImpl.java:1011)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:453)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:202)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:58)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:114)
> at org.infinispan.topology.LocalTopologyManagerImpl.resetLocalTopologyBeforeRebalance(LocalTopologyManagerImpl.java:437)
> at org.infinispan.topology.LocalTopologyManagerImpl.doHandleRebalance(LocalTopologyManagerImpl.java:519)
> - locked <0x00000000c8b30b30> (a org.infinispan.topology.LocalCacheStatus)
> at org.infinispan.topology.LocalTopologyManagerImpl.lambda$handleRebalance$3(LocalTopologyManagerImpl.java:484)
> at org.infinispan.topology.LocalTopologyManagerImpl$$Lambda$574/0x000000010089a040.run(Unknown Source)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175){noformat}
> [Full thread dump|https://ci.infinispan.org/job/Infinispan/job/master/1133/artifact/core/]
> Somehow the producer thread for the transport-thread iteration is blocked, but it is not waiting for the persistence mutex. Maybe it's waiting for a topology? Not sure if it's relevant, but the last test to run was {{testClearWithFlag}}, so the data container was empty and the store had 5 entries.
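One generic mitigation for this shape of shutdown hang (a sketch under assumptions, not the actual {{PersistenceManagerImpl}} fix): bound the semaphore acquire in {{stop()}} so shutdown surfaces a failure instead of parking forever, which at least turns a silent deadlock into a diagnosable error:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: replace an uninterruptible, unbounded Semaphore acquire with a
// bounded tryAcquire, so a stuck stop() returns false (and can log or throw)
// instead of leaving the thread parked indefinitely. Not the actual fix.
class BoundedStop {
    // Returns true if all permits were acquired within the timeout.
    static boolean stop(Semaphore publisherSemaphore, int permits, long timeoutSeconds) {
        try {
            return publisherSemaphore.tryAcquire(permits, timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // preserve interrupt status
            return false;
        }
    }
}
```

The real fix also has to break the lock cycle itself (the read lock held across the topology update), but a bounded acquire makes the thread dump above appear as a test failure rather than a 300-second hang.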
--