[JBoss JIRA] (ISPN-5523) Enabling eager caching can lead to server throwing "OutOfMemoryError: Direct buffer memory"
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5523?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5523:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1228676|https://bugzilla.redhat.com/show_bug.cgi?id=1228676] from ON_QA to VERIFIED
> Enabling eager caching can lead to server throwing "OutOfMemoryError: Direct buffer memory"
> -------------------------------------------------------------------------------------------
>
> Key: ISPN-5523
> URL: https://issues.jboss.org/browse/ISPN-5523
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.2.4.Final, 8.0.0.CR1
>
>
> Some near caching tests are throwing:
> {code}
> [0m[31m04:11:24,499 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRodServerWorker-43) ISPN005009: Unexpected error before any request parameters read: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) [rt.jar:1.7.0_75]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) [rt.jar:1.7.0_75]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) [rt.jar:1.7.0_75]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:433) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:241) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:106) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.18.Final-redhat-1.jar:4.0.18.Final-redhat-1]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_75]
> {code}
> KeyValueVersionConverter allocates a byte buffer but does not release it. It could be the cause...
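> A minimal sketch of the suspected pattern (hypothetical class and method names, not the actual converter): with Netty's pooled direct allocator, every buffer that is allocated but never released pins direct memory until allocateDirect() eventually throws the OutOfMemoryError above. Releasing in a finally block closes the leak:
> {code}
> import io.netty.buffer.ByteBuf;
> import io.netty.buffer.PooledByteBufAllocator;
>
> public final class ConverterSketch {
>    // Serialize key, value and version into a pooled direct buffer.
>    static byte[] convert(byte[] key, byte[] value, long version) {
>       ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(
>             key.length + value.length + 8);
>       try {
>          buf.writeBytes(key);
>          buf.writeBytes(value);
>          buf.writeLong(version);
>          byte[] out = new byte[buf.readableBytes()];
>          buf.readBytes(out);
>          return out;
>       } finally {
>          buf.release(); // the missing call: without it, direct memory leaks
>       }
>    }
> }
> {code}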
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5576) Use MessageToByteEncoder to avoid ByteBuf leak w/ exceptions
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5576?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5576:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1228676|https://bugzilla.redhat.com/show_bug.cgi?id=1228676] from ON_QA to VERIFIED
> Use MessageToByteEncoder to avoid ByteBuf leak w/ exceptions
> ------------------------------------------------------------
>
> Key: ISPN-5576
> URL: https://issues.jboss.org/browse/ISPN-5576
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.2.3.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 8.0.0.Beta1, 7.2.4.Final, 8.0.0.Final
>
>
> The encoder implementation allocates buffers but does not release them when an exception occurs before the encoded buffer is added to the response list. As a result, an exception during encoding can leak byte buffers and produce messages like this:
> {code}
> [io.netty.util.ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected.
> Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting,
> specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel()
> LEAK: ByteBuf.release() was not called before it's garbage-collected.
> Recent access records: 0
> Created at:
> io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:259)
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:141)
> io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:75)
> org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:29)
> io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:691)
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:626)
> io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:691)
> io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:681)
> io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:716)
> io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:954)
> io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:243)
> org.infinispan.server.core.AbstractProtocolDecoder.writeResponse(AbstractProtocolDecoder.scala:220)
> org.infinispan.server.hotrod.HotRodDecoder.customDecodeHeader(HotRodDecoder.scala:153)
> org.infinispan.server.core.AbstractProtocolDecoder.org$infinispan$server$core$AbstractProtocolDecoder$$decodeHeader(AbstractProtocolDecoder.scala:137)
> org.infinispan.server.core.AbstractProtocolDecoder$$anon$2.run(AbstractProtocolDecoder.scala:98)
> org.infinispan.server.core.AbstractProtocolDecoder$$anon$2.run(AbstractProtocolDecoder.scala:95)
> org.infinispan.security.Security.doAs(Security.java:143)
> org.infinispan.server.core.AbstractProtocolDecoder.secureDecodeDispatch(AbstractProtocolDecoder.scala:95)
> org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:59)
> io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
> org.infinispan.server.core.AbstractProtocolDecoder.channelRead(AbstractProtocolDecoder.scala:459)
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
> java.lang.Thread.run(Thread.java:745)
> {code}
> Also, the encoder implementation does not log exceptions raised at encoding time, so failures like this can only be noticed via instrumentation.
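> A minimal sketch of the direction the summary names (hypothetical Response type and encoder name, not the actual HotRodEncoder): with MessageToByteEncoder, Netty owns the output buffer and releases it if encode() throws, unlike MessageToMessageEncoder, where a buffer that never reaches the out list leaks:
> {code}
> import io.netty.buffer.ByteBuf;
> import io.netty.channel.ChannelHandlerContext;
> import io.netty.handler.codec.MessageToByteEncoder;
>
> // Hypothetical message type standing in for the Hot Rod response object.
> final class Response {
>    final byte opCode;
>    final long messageId;
>    final byte[] payload;
>    Response(byte opCode, long messageId, byte[] payload) {
>       this.opCode = opCode;
>       this.messageId = messageId;
>       this.payload = payload;
>    }
> }
>
> public class SafeResponseEncoder extends MessageToByteEncoder<Response> {
>    @Override
>    protected void encode(ChannelHandlerContext ctx, Response msg, ByteBuf out) {
>       // Netty allocated 'out' and releases it if this method throws,
>       // so an exception here no longer leaks a pooled direct buffer.
>       out.writeByte(msg.opCode);
>       out.writeLong(msg.messageId);
>       out.writeBytes(msg.payload);
>    }
> }
> {code}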
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5653) keySet().iterator() does not propagate flags to remove()
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5653?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5653:
-----------------------------------------------
Matej Čimbora <mcimbora(a)redhat.com> changed the Status of [bug 1252565|https://bugzilla.redhat.com/show_bug.cgi?id=1252565] from ON_QA to VERIFIED
> keySet().iterator() does not propagate flags to remove()
> --------------------------------------------------------
>
> Key: ISPN-5653
> URL: https://issues.jboss.org/browse/ISPN-5653
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.3.Final, 8.0.0.Beta2
> Reporter: Radim Vansa
> Assignee: William Burns
> Fix For: 8.0.0.Beta3, 7.2.4.Final, 8.0.0.Final
>
>
> Calling {{remove()}} on an iterator obtained through {{keySet()}} does not propagate flags from the original (decorated) cache. Test case (should go into BaseEntryRetrieverTest):
> {code}
> @Test
> public void simpleTestWithFlags() {
>    Map<Object, String> values = putValuesInCache();
>    final Cache<Object, Object> cache = cache(0, CACHE_NAME);
>    // The interceptor verifies that every RemoveCommand carries the flag
>    // set on the decorated cache below.
>    cache.getAdvancedCache().addInterceptor(new BaseCustomInterceptor() {
>       @Override
>       public Object visitRemoveCommand(InvocationContext ctx, RemoveCommand command) throws Throwable {
>          assertTrue(command.hasFlag(Flag.SKIP_CACHE_STORE));
>          return super.visitRemoveCommand(ctx, command);
>       }
>    }, 0);
>    // Iterate over the flag-decorated cache and remove each key; the
>    // interceptor fails the test if SKIP_CACHE_STORE is dropped.
>    for (Iterator it = cache(0, CACHE_NAME).getAdvancedCache().withFlags(Flag.SKIP_CACHE_STORE).keySet().iterator(); it.hasNext();) {
>       assertTrue(values.containsKey(it.next()));
>       it.remove();
>    }
> }
> {code}
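> For reference, a sketch of the behaviour the iterator's {{remove()}} should have (hypothetical helper class, using the standard AdvancedCache API): removal must go through the flag-decorated cache so the flag reaches the RemoveCommand:
> {code}
> import org.infinispan.AdvancedCache;
> import org.infinispan.context.Flag;
>
> class FlagPropagationSketch {
>    // Equivalent of what it.remove() should issue for each key: the
>    // decorated cache carries SKIP_CACHE_STORE into the RemoveCommand.
>    static void removeSkippingStore(AdvancedCache<Object, Object> cache, Object key) {
>       cache.withFlags(Flag.SKIP_CACHE_STORE).remove(key);
>    }
> }
> {code}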
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5664) Null is returned for a not expired entry in Hot Rod client
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5664?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5664:
-----------------------------------------------
Matej Čimbora <mcimbora(a)redhat.com> changed the Status of [bug 1243671|https://bugzilla.redhat.com/show_bug.cgi?id=1243671] from ON_QA to VERIFIED
> Null is returned for a not expired entry in Hot Rod client
> ----------------------------------------------------------
>
> Key: ISPN-5664
> URL: https://issues.jboss.org/browse/ISPN-5664
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 8.0.0.Beta2
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 8.0.0.CR1, 7.2.5.Final
>
>
> For a mortal entry (lifespan > -1), overwriting it with lifespan=-1 (making it immortal) unexpectedly removes the entry, as follows.
> {code}
> cache.put(key, "value1", 100, TimeUnit.SECONDS, 100, TimeUnit.SECONDS);
> cache.get(key); // returns "value1"
> cache.put(key, "value2", -1, TimeUnit.SECONDS, 100, TimeUnit.SECONDS);
> cache.get(key); // returns null, expected "value2"
> cache.put(key, "value3", -1, TimeUnit.SECONDS, 100, TimeUnit.SECONDS);
> cache.get(key); // returns "value3"
> {code}
> In library mode, the second get returns a non-null value as expected. The same behaviour is also observed for a transient (maxIdle > -1) entry.
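> A reproduction sketch against the Hot Rod client (assuming an already configured RemoteCacheManager; class and method names are illustrative):
> {code}
> import java.util.concurrent.TimeUnit;
>
> import org.infinispan.client.hotrod.RemoteCache;
> import org.infinispan.client.hotrod.RemoteCacheManager;
>
> class ImmortalOverwriteSketch {
>    static void reproduce(RemoteCacheManager rcm, String key) {
>       RemoteCache<String, String> cache = rcm.getCache();
>       // Mortal entry: finite lifespan and max-idle.
>       cache.put(key, "value1", 100, TimeUnit.SECONDS, 100, TimeUnit.SECONDS);
>       // Overwrite with lifespan = -1 (immortal). The entry should survive,
>       // but over Hot Rod the next get unexpectedly returns null.
>       cache.put(key, "value2", -1, TimeUnit.SECONDS, 100, TimeUnit.SECONDS);
>       System.out.println(cache.get(key)); // null here; "value2" in library mode
>    }
> }
> {code}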
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5561) Problem in dealing with the HQL queries that contain string comparison operations, when it involves empty string as an operand
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5561?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5561:
-----------------------------------------------
Matej Čimbora <mcimbora(a)redhat.com> changed the Status of [bug 1246688|https://bugzilla.redhat.com/show_bug.cgi?id=1246688] from ON_QA to VERIFIED
> Problem in dealing with the HQL queries that contain string comparison operations, when it involves empty string as an operand
> ------------------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-5561
> URL: https://issues.jboss.org/browse/ISPN-5561
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 7.2.0.Final
> Environment: Infinispan version: Infinispan 7.2.0-Final
> Operating System: Linux
> Compatibility mode is enabled.
> Indexing is enabled and 'autoconfig' parameter is set to true.
> Reporter: Pavan Kundgol
> Assignee: Adrian Nistor
> Fix For: 8.0.0.Beta2, 7.2.4.Final, 8.0.0.Final
>
>
> When an HQL query that involves a string comparison against the empty string is executed through the Hot Rod Java client, it results in the following error:
> org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[7] returned server error (status=0x85): org.hibernate.search.exception.EmptyQueryException: HSEARCH000146: The query string '' applied on field 'CITY' has no meaningful tokens to be matched. Validate the query input against the Analyzer applied on this field.
> The same problem may also occur when the string comparison involves a string containing certain special keywords (Hibernate Search stop words) as operands.
> The problem seems to be associated with the 'hibernate-hql-lucene' parser used by the 'infinispan-remote-query-server' module. The parser translates comparison operations directly into Lucene keyword matching operations, irrespective of the field type, even though phrase matching is more appropriate for string fields. This leads to invalid query formation when the field in question is of type string and is matched against an empty string (or a string containing one or more Hibernate Search stop words).
> For example, the where condition city='' in the following HQL query
> select name from com.test.User where city=''
> is currently translated by the hql-lucene parser into the Lucene query
> queryBuilder.keyword().onField("city").matching("").createQuery()
> whereas the recommended translation is
> queryBuilder.phrase().onField("city").sentence("").createQuery()
> With the former translation, Hibernate Search query execution fails because the standard analyzer is applied while executing the query. Hence, the problem can also be avoided by disabling the analyzer for the query:
> queryBuilder.keyword().onField("city").ignoreAnalyzer().matching("").createQuery()
> The preferred solution may be to enhance the 'hibernate-hql-lucene' library to do type-specific translations, specifically translating string comparison operations in HQL queries into phrase matching operations (instead of keyword matching operations) in the Lucene query.
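> Side by side, the three translations discussed above, as a sketch using the Hibernate Search query DSL (the builder is assumed to be obtained elsewhere):
> {code}
> import org.apache.lucene.search.Query;
> import org.hibernate.search.query.dsl.QueryBuilder;
>
> class TranslationSketch {
>    static void show(QueryBuilder queryBuilder) {
>       // Current translation: keyword matching. The standard analyzer
>       // yields no tokens for "", so HSEARCH000146 is thrown.
>       Query keyword = queryBuilder.keyword()
>             .onField("city").matching("").createQuery();
>
>       // Recommended for string fields: phrase matching.
>       Query phrase = queryBuilder.phrase()
>             .onField("city").sentence("").createQuery();
>
>       // Workaround: keyword matching with the analyzer disabled.
>       Query noAnalyzer = queryBuilder.keyword()
>             .onField("city").ignoreAnalyzer().matching("").createQuery();
>    }
> }
> {code}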
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (ISPN-5558) DistributedTaskPart.equals() implementation is wrong
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5558?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5558:
-----------------------------------------------
Matej Čimbora <mcimbora(a)redhat.com> changed the Status of [bug 1232733|https://bugzilla.redhat.com/show_bug.cgi?id=1232733] from ON_QA to VERIFIED
> DistributedTaskPart.equals() implementation is wrong
> ----------------------------------------------------
>
> Key: ISPN-5558
> URL: https://issues.jboss.org/browse/ISPN-5558
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Distributed Execution and Map/Reduce
> Affects Versions: 7.2.2.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.0.0.Beta1, 7.2.4.Final, 8.0.0.Final
>
>
> {{DistributedExecutorService.submitEverywhere()}} returns a list of futures, one for each targeted node. Because of how {{DistributedTaskPart.equals()}} is implemented, all the futures in the list appear to be equal, even though their target nodes differ and their results will also differ.
> The simplest fix would be to remove the {{equals()}} and {{hashCode()}} overrides from {{DistributedTaskPart}}.
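> To illustrate the consequence (a hypothetical sketch, not Infinispan code): when futures compare equal, equality-based collections silently collapse the list:
> {code}
> import java.util.LinkedHashSet;
> import java.util.List;
> import java.util.Set;
> import java.util.concurrent.Future;
>
> class EqualsSymptomSketch {
>    static <T> void collapse(List<Future<T>> futures) {
>       // With the buggy equals()/hashCode(), a set keeps a single future
>       // even though each one targets a different node.
>       Set<Future<T>> unique = new LinkedHashSet<>(futures);
>       System.out.println(futures.size() + " submitted, " + unique.size() + " distinct");
>       // indexOf() likewise always returns 0, so looking a future up by
>       // equality reads the wrong node's result.
>    }
> }
> {code}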
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)