[JBoss JIRA] (ISPN-5930) OOM error when registering continuous query
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5930?page=com.atlassian.jira.plugin.... ]
Adrian Nistor edited comment on ISPN-5930 at 12/1/15 11:47 AM:
---------------------------------------------------------------
This OOM happens right when the listener is registered and starts to receive the initial state. This is after the test has run for a while (minutes) to warm up, so I expect there is quite a large number of entries in the cache. All of them are sent to the listener one by one, and each event triggers a memory allocation that is released only after the event is consumed. So if the client listener is slow at processing these events, it can quite easily lead to an OOM in the server. To solve this we need to implement a mechanism that applies some backpressure and stops/delays marshalling more events if the client cannot keep up.
was (Author: anistor):
This OOM happens right when the listener is registered and starts to receive the initial state. This is after the test has run for a while (minutes) to warm up, so I expect there is quite a large number of entries in the cache. All of them are sent to the listener one by one, and each event triggers a memory allocation that is released only after the event is consumed. So if the client listener is slow at processing these events, it can quite easily lead to an OOM in the server. To solve this we need to implement a mechanism that applies some backpressure and stops marshalling more events if the client cannot keep up.
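A minimal sketch of the backpressure idea, using Netty's channel writability as the signal. The helper class and its polling loop are hypothetical, not the actual ClientListenerRegistry code; a channelWritabilityChanged handler would be the non-blocking way to do this:
{code:java}
import io.netty.channel.Channel;

// Hypothetical sketch: stop marshalling new events while the client's
// outbound buffer is saturated, instead of allocating a direct buffer
// for every event regardless of whether the client keeps up.
final class BackpressureSender {

    void sendEvent(Channel ch, Object encodedEvent) throws InterruptedException {
        // isWritable() turns false once the pending outbound bytes exceed
        // the channel's high water mark, i.e. the client is falling behind.
        while (!ch.isWritable()) {
            Thread.sleep(10); // crude delay; blocks this sender thread only
        }
        ch.writeAndFlush(encodedEvent);
    }
}
{code}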
> OOM error when registering continuous query
> -------------------------------------------
>
> Key: ISPN-5930
> URL: https://issues.jboss.org/browse/ISPN-5930
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.1.0.CR1, 8.0.2.Final
> Reporter: Vojtech Juranek
> Assignee: Adrian Nistor
>
> When running CQ perf tests in client-server mode, I hit the following exception in the HR server.
> The scenario was to store random texts in the cache and register a CQ that matches all the entries (i.e. the query was "%").
> Full logs from the server are [here|https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/JDG/view/PERF-...]. Machines perf01-04 are servers, perf05-07 are clients, and perf08 is the RG master.
> {noformat}
> 12:36:37,557 ERROR [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-9-33) ISPN005023: Exception encoding message CustomRawEvent{version=23, messageId=39574, op=CacheEntryCreatedEventResponse, listenerId=([B@9641785,false), event=([B@5ecfed02,false), isRetried=false}: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at io.netty.buffer.UnpooledUnsafeDirectByteBuf.allocateDirect(UnpooledUnsafeDirectByteBuf.java:108)
> at io.netty.buffer.UnpooledUnsafeDirectByteBuf.capacity(UnpooledUnsafeDirectByteBuf.java:157)
> at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251)
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:817)
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:825)
> at org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:67)
> at org.infinispan.server.hotrod.Encoder2x$.writeEvent(Encoder2x.scala:47)
> at org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:57)
> at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
> at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
> at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:691)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:681)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:716)
> at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:954)
> at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:243)
> at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.sendEvent(ClientListenerRegistry.scala:219)
> at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(ClientListenerRegistry.scala:186)
> at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286)
> at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:21)
> at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309)
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invokeNoChecks(CacheNotifierImpl.java:1213)
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl.raiseEventForInitialTransfer(CacheNotifierImpl.java:911)
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl.addListener(CacheNotifierImpl.java:852)
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl.addListener(CacheNotifierImpl.java:923)
> at org.infinispan.cache.impl.CacheImpl.addListener(CacheImpl.java:722)
> at org.infinispan.cache.impl.AbstractDelegatingCache.addListener(AbstractDelegatingCache.java:347)
> at org.infinispan.server.hotrod.ClientListenerRegistry.addClientListener(ClientListenerRegistry.scala:100)
> at org.infinispan.server.hotrod.Decoder2x$.customReadKey(Decoder2x.scala:384)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeKey(HotRodDecoder.scala:194)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeKey(HotRodDecoder.scala:104)
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:48)
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:206)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45)
> at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:31)
> at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
> at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:31)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6005) Remote listener with includeCurrentState=true can cause OOME
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-6005:
---------------------------------------
Summary: Remote listener with includeCurrentState=true can cause OOME
Key: ISPN-6005
URL: https://issues.jboss.org/browse/ISPN-6005
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 8.1.0.CR1
Reporter: Gustavo Fernandes
As detailed in ISPN-5930, a remote listener without filtering, registered over a slow channel and requesting the initial state, can lead to memory starvation in the server.
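For reference, this is roughly what such a listener looks like on the Hot Rod client side (a minimal sketch; the class name and listener body are illustrative):
{code:java}
import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;

// Unfiltered listener that also requests the current state: every entry
// already in the cache is replayed to the client at registration time,
// which is the allocation burst that starves the server.
@ClientListener(includeCurrentState = true)
public class AllEntriesListener {

   @ClientCacheEntryCreated
   public void onCreated(ClientCacheEntryCreatedEvent<String> event) {
      // Slow processing here is what lets the server-side buffers pile up.
      System.out.println("created: " + event.getKey());
   }
}

// Registration (remoteCache is a RemoteCache<String, String>):
// remoteCache.addClientListener(new AllEntriesListener());
{code}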
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5930) OOM error when registering continuous query
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5930?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5930:
--------------------------------
Affects Version/s: 8.0.2.Final
8.1.0.CR1
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5930) OOM error when registering continuous query
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5930?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-5930:
-------------------------------------
This OOM happens right when the listener is registered and starts to receive the initial state. This is after the test has run for a while (minutes) to warm up, so I expect there is quite a large number of entries in the cache. All of them are sent to the listener one by one, and each event triggers a memory allocation that is released only after the event is consumed. So if the client listener is slow at processing these events, it can quite easily lead to an OOM in the server. To solve this we need to implement a mechanism that applies some backpressure and stops marshalling more events if the client cannot keep up.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6003) Reduce number of allocations
by Dan Berindei (JIRA)
Dan Berindei created ISPN-6003:
----------------------------------
Summary: Reduce number of allocations
Key: ISPN-6003
URL: https://issues.jboss.org/browse/ISPN-6003
Project: Infinispan
Issue Type: Task
Components: Core, Server
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 8.1.0.Final
Profiling revealed some allocations that are easy to remove:
* The HotRod operations factory stores a list of flags in a thread-local. The thread-local can be removed, and the flags can be stored in an {{int}} bitmask (see the sketch after this list).
* JGroupsTransport copies the list of members to check if a broadcast should be sent as a unicast, and the copy is then discarded.
* ExtendedByteBuf could use {{Array.empty}} instead of {{Array[Byte]()}}.
* DecoratedCache could avoid calling {{Arrays.asList()}}.
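A minimal sketch of the first item, with hypothetical names that do not match the real operations factory: the per-thread list of flags collapses into a plain {{int}} bitmask, so setting and testing flags allocates nothing.
{code:java}
import java.util.EnumSet;
import java.util.Set;

// Hypothetical names; the real Flag enum and factory fields differ.
enum Flag {
   FORCE_RETURN_VALUE, SKIP_CACHE_LOAD, SKIP_INDEXING;

   int mask() {
      return 1 << ordinal();
   }
}

final class FlagBits {
   private int flags; // replaces the thread-local List<Flag>

   void add(Flag f)    { flags |= f.mask(); }
   boolean has(Flag f) { return (flags & f.mask()) != 0; }
   void clear()        { flags = 0; }

   // Materialize a Set only when an API actually needs one.
   Set<Flag> toSet() {
      EnumSet<Flag> set = EnumSet.noneOf(Flag.class);
      for (Flag f : Flag.values()) {
         if (has(f)) {
            set.add(f);
         }
      }
      return set;
   }
}
{code}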
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6002) Cache log.isTraceEnabled() as much as possible
by Dan Berindei (JIRA)
Dan Berindei created ISPN-6002:
----------------------------------
Summary: Cache log.isTraceEnabled() as much as possible
Key: ISPN-6002
URL: https://issues.jboss.org/browse/ISPN-6002
Project: Infinispan
Issue Type: Task
Components: Core, Server
Affects Versions: 8.1.0.CR1
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 8.1.0.Final
With log4j2, calling {{log.isTraceEnabled()}} on every log statement is not that big of a problem, but there are still environments where caching the result could make a difference (e.g. in the server).
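A minimal sketch of the pattern (the component class is hypothetical; {{Log}}/{{LogFactory}} are the usual Infinispan logging types): the flag is read once per class instead of on every hot-path call.
{code:java}
import org.infinispan.util.logging.Log;
import org.infinispan.util.logging.LogFactory;

// Hypothetical component showing the caching pattern: evaluate
// isTraceEnabled() once instead of on every hot-path log statement.
final class CommandHandler {
   private static final Log log = LogFactory.getLog(CommandHandler.class);
   // Cached once; the trade-off is that runtime log-level changes
   // are not picked up until the class is reloaded.
   private static final boolean trace = log.isTraceEnabled();

   void handle(Object command) {
      if (trace) {
         log.tracef("Handling command %s", command);
      }
      // ... actual work ...
   }
}
{code}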
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)