[JBoss JIRA] (ISPN-8347) Provide a silent way to check for Cache existence from an Hot Rod client
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-8347?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-8347:
----------------------------------
Status: Open (was: New)
> Provide a silent way to check for Cache existence from an Hot Rod client
> ------------------------------------------------------------------------
>
> Key: ISPN-8347
> URL: https://issues.jboss.org/browse/ISPN-8347
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Affects Versions: 9.1.1.Final
> Reporter: Sanne Grinovero
> Assignee: Tristan Tarrant
>
> Currently a Hot Rod client can create a new cache by using
> {code:java}
> hotrodClient.administration().createCache( cacheName, null );
> {code}
> But we shouldn't invoke this if the {{Cache}} might already exist.
> When we don't know whether the cache already exists, we check in advance with
> {code:java}
> RemoteCache<?,?> cache = hotrodClient.getCache( cacheName );
> if ( cache == null ) {
> ...
> }
> {code}
> This works fine from a client-side perspective, but it triggers the logging of a full stack trace at a not-so-reassuring {{ERROR}} level:
> {noformat}
> 20:22:35,437 WARN Codec21:361 - ISPN004005: Error received from the server: org.infinispan.server.hotrod.CacheNotFoundException: Cache with name 'ENTITY_CACHE' not found amongst the configured caches
> 2017-09-26 20:22:36,021 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRod-ServerWorker-3-7) ISPN005003: Exception reported: org.infinispan.server.hotrod.CacheNotFoundException: Cache with name 'ANOTHER_ENTITY_CACHE' not found amongst the configured caches
> at org.infinispan.server.hotrod.CacheDecodeContext.obtainCache(CacheDecodeContext.java:121)
> at org.infinispan.server.hotrod.HotRodDecoder.decodeHeader(HotRodDecoder.java:160)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:92)
> at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at org.infinispan.server.core.transport.StatsChannelHandler.channelRead(StatsChannelHandler.java:26)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
> at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1017)
> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> I'm not sure whether we could easily avoid logging such an error, as this check is "normal business" for our code. Alternatively, I'd welcome a new client operation to query the defined/started caches.
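The check-then-create pattern above is noisy and also inherently racy: between {{getCache}} returning null and the {{createCache}} call, another client may create the cache. A minimal sketch of the difference, using a {{ConcurrentMap}} as a purely hypothetical stand-in for the server's cache registry (class and method names are invented for illustration, not Infinispan API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical model of a server-side cache registry, to contrast the
// noisy non-atomic check-then-create with an atomic get-or-create.
public class CacheRegistrySketch {
    private final ConcurrentMap<String, Object> caches = new ConcurrentHashMap<>();

    // Mirrors the pattern from the report: getCache() == null, then createCache().
    // The failed lookup is what the real server logs as an ERROR.
    public boolean createIfAbsentNaive(String name) {
        if (!caches.containsKey(name)) {
            caches.put(name, new Object());
            return true;
        }
        return false;
    }

    // Atomic alternative: a single get-or-create operation never performs a
    // failing lookup, so there is nothing alarming to log, and no race window.
    public Object getOrCreate(String name) {
        return caches.computeIfAbsent(name, n -> new Object());
    }
}
```

A single atomic get-or-create admin operation, exposed to clients, would address both the log noise and the race.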
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8347) Provide a silent way to check for Cache existence from an Hot Rod client
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-8347?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-8347:
----------------------------------
Summary: Provide a silent way to check for Cache existence from an Hot Rod client (was: Provide a silent way to check for Cache existance from an Hot Rod client)
[JBoss JIRA] (ISPN-8347) Provide a silent way to check for Cache existance from an Hot Rod client
by Sanne Grinovero (JIRA)
Sanne Grinovero created ISPN-8347:
-------------------------------------
Summary: Provide a silent way to check for Cache existance from an Hot Rod client
Key: ISPN-8347
URL: https://issues.jboss.org/browse/ISPN-8347
Project: Infinispan
Issue Type: Enhancement
Components: Remote Protocols
Affects Versions: 9.1.1.Final
Reporter: Sanne Grinovero
[JBoss JIRA] (ISPN-8347) Provide a silent way to check for Cache existance from an Hot Rod client
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8347?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-8347:
-------------------------------------
Assignee: Tristan Tarrant
[JBoss JIRA] (ISPN-8324) Cache listener receives CacheEntriesEvictedEvent for entries irrespective of filter
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-8324?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-8324:
-------------------------------------
The reason for this is that CacheEntriesEvictedEvent doesn't extend CacheEntryEvent, which is required for an event to be filterable. To be honest, the fact that the eviction listener event is implemented to return a Map is a remnant of the old eviction done on a schedule (the map only ever contains a single entry now).
I am thinking the cleanest way may be to add a new CacheEntryEvictedEvent that properly extends CacheEntryEvent, which would therefore make it filterable. CacheEntriesEvictedEvent could then be deprecated.
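The distinction can be sketched with simplified, hypothetical interfaces (invented names, not the real Infinispan listener API): a key filter needs a single key to test, so only a per-entry event type that extends the filterable base interface can be screened before delivery, while a Map-of-entries event cannot.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical, simplified event hierarchy: a per-entry eviction event that
// extends the filterable base interface, plus a delivery step that applies
// the listener's key filter before notifying.
public class FilterableEvictionSketch {
    interface CacheEntryEvent { String getKey(); }

    // Proposed single-entry event: exposes one key, hence filterable.
    static class CacheEntryEvictedEvent implements CacheEntryEvent {
        private final String key;
        CacheEntryEvictedEvent(String key) { this.key = key; }
        public String getKey() { return key; }
    }

    // Deliver only the events whose key passes the registered filter.
    static List<CacheEntryEvent> deliver(List<? extends CacheEntryEvent> events,
                                         Predicate<String> keyFilter) {
        List<CacheEntryEvent> delivered = new ArrayList<>();
        for (CacheEntryEvent e : events) {
            if (keyFilter.test(e.getKey())) {
                delivered.add(e);
            }
        }
        return delivered;
    }
}
```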
> Cache listener receives CacheEntriesEvictedEvent for entries irrespective of filter
> -----------------------------------------------------------------------------------
>
> Key: ISPN-8324
> URL: https://issues.jboss.org/browse/ISPN-8324
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 9.1.1.Final
> Reporter: Paul Ferraro
> Attachments: Test.java
>
>
> If a cache registers a listener using a KeyFilter, it should not receive CacheEntriesEvictedEvents for entries that do not match the filter. See the attached reproducer.
[JBoss JIRA] (ISPN-8346) Reduce the number of clustered listeners in clustered counters
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-8346:
---------------------------------
Summary: Reduce the number of clustered listeners in clustered counters
Key: ISPN-8346
URL: https://issues.jboss.org/browse/ISPN-8346
Project: Infinispan
Issue Type: Bug
Components: Clustered Counter
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
The current implementation registers one clustered listener per node for each created counter (or two, in the case of weak counters). A single clustered listener per node would be fine if we invoke the user listeners in a different thread, since most of the work performed by the clustered listener is updating a local cached value.
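That design can be sketched with invented names (not the real counter SPI): one listener entry point per node applies the cheap local update synchronously and fans the notification out to user listeners on a separate executor, so slow user code cannot block the clustered listener.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executor;
import java.util.function.LongConsumer;

// Hypothetical sketch: a single per-node notification hub for all counters.
// The clustered-listener callback updates local cached values and dispatches
// user listeners asynchronously via the supplied executor.
public class CounterNotificationHub {
    private final Map<String, Long> cachedValues = new ConcurrentHashMap<>();
    private final Map<String, List<LongConsumer>> userListeners = new ConcurrentHashMap<>();
    private final Executor userExecutor;

    public CounterNotificationHub(Executor userExecutor) {
        this.userExecutor = userExecutor;
    }

    public void register(String counter, LongConsumer listener) {
        userListeners.computeIfAbsent(counter, c -> new CopyOnWriteArrayList<>()).add(listener);
    }

    // The single clustered-listener entry point: cheap local update,
    // then asynchronous fan-out to user code.
    public void onClusterEvent(String counter, long newValue) {
        cachedValues.put(counter, newValue);
        for (LongConsumer l : userListeners.getOrDefault(counter, List.of())) {
            userExecutor.execute(() -> l.accept(newValue));
        }
    }

    public long cachedValue(String counter) {
        return cachedValues.getOrDefault(counter, 0L);
    }
}
```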