[JBoss JIRA] (ISPN-5721) Add SNI support to the endpoints
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-5721?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-5721:
--------------------------------------
Description:
Openshift Router uses DNS names to perform routing. It is perfectly legal to have this kind of configuration:
{code}
client 1 --> example.com:11222 -----+--> Hotrod server
                                    |
client 2 --> example2.com:11222 ----+
{code}
In that case the TLS configuration might be problematic (since very often certificates are issued for a domain name). However it is possible to use [SNI TLS Extension|https://tools.ietf.org/html/rfc6066#page-6].
The SNI needs to be added to:
* Client's configuration (it needs to modify its own {{SSLContext}} and add {{SSLParams}})
* Hotrod server to support SNI (with Netty)
* XML Configuration for Hotrod
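As a hedged sketch of the client-side piece (the class name and host names here are illustrative, not from the issue), adding SNI means setting the requested server name on the {{SSLParameters}} of the engine created from the client's {{SSLContext}}:

```java
import java.util.Collections;
import java.util.List;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public class SniClientSketch {
    public static void main(String[] args) throws Exception {
        // Default context; a real client would load its trust store here
        SSLContext ctx = SSLContext.getDefault();
        // Engine pointed at the Hot Rod endpoint behind the router
        SSLEngine engine = ctx.createSSLEngine("example.com", 11222);
        SSLParameters params = engine.getSSLParameters();
        // Request the certificate for this virtual host via the SNI extension
        List<SNIServerName> names = Collections.singletonList(new SNIHostName("example.com"));
        params.setServerNames(names);
        engine.setSSLParameters(params);
        System.out.println(engine.getSSLParameters().getServerNames().size());
    }
}
```

On the server side, Netty can select a per-host {{SslContext}} from the SNI name the client sends; the XML configuration piece would then only need to map host names to certificates.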
was:
Openshift Router uses DNS names to perform routing. It is perfectly legal to have this kind of configuration:
{code}
client 1 --> example.com:11222 -----+--> Hotrod server
                                    |
client 2 --> example2.com:11222 ----+
{code}
In that case the TLS configuration might be problematic (since very often certificates are issued for a domain name). However it is possible to use [SNI TLS Extension|https://tools.ietf.org/html/rfc6066#page-6].
The SNI needs to be added to:
* Client's configuration (it needs to modify its own {{SSLContext}} and add {{SSLParams}})
* Hotrod server to support SNI (with Netty)
* XML Configuration for Hotrod
> Add SNI support to the endpoints
> --------------------------------
>
> Key: ISPN-5721
> URL: https://issues.jboss.org/browse/ISPN-5721
> Project: Infinispan
> Issue Type: Feature Request
> Components: Security, Server
> Affects Versions: 8.0.0.Final
> Reporter: Tristan Tarrant
> Assignee: Sebastian Łaskawiec
> Fix For: 9.0.0.Alpha2
>
>
> Openshift Router uses DNS names to perform routing. It is perfectly legal to have this kind of configuration:
> {code}
> client 1 --> example.com:11222 -----+--> Hotrod server
>                                     |
> client 2 --> example2.com:11222 ----+
> {code}
> In that case the TLS configuration might be problematic (since very often certificates are issued for a domain name). However it is possible to use [SNI TLS Extension|https://tools.ietf.org/html/rfc6066#page-6].
> The SNI needs to be added to:
> * Client's configuration (it needs to modify its own {{SSLContext}} and add {{SSLParams}})
> * Hotrod server to support SNI (with Netty)
> * XML Configuration for Hotrod
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5721) Add SNI support to the endpoints
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-5721?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-5721:
--------------------------------------
Description:
Openshift Router uses DNS names to perform routing. It is perfectly legal to have this kind of configuration:
{code}
client 1 --> example.com:11222 -----+--> Hotrod server
                                    |
client 2 --> example2.com:11222 ----+
{code}
In that case the TLS configuration might be problematic (since very often certificates are issued for a domain name). However it is possible to use [SNI TLS Extension|https://tools.ietf.org/html/rfc6066#page-6].
The SNI needs to be added to:
* Client's configuration (it needs to modify its own {{SSLContext}} and add {{SSLParams}})
* Hotrod server to support SNI (with Netty)
* XML Configuration for Hotrod
was:The endpoints should add support for SNI
> Add SNI support to the endpoints
> --------------------------------
>
> Key: ISPN-5721
> URL: https://issues.jboss.org/browse/ISPN-5721
> Project: Infinispan
> Issue Type: Feature Request
> Components: Security, Server
> Affects Versions: 8.0.0.Final
> Reporter: Tristan Tarrant
> Assignee: Sebastian Łaskawiec
> Fix For: 9.0.0.Alpha2
>
>
> Openshift Router uses DNS names to perform routing. It is perfectly legal to have this kind of configuration:
> {code}
> client 1 --> example.com:11222 -----+--> Hotrod server
>                                     |
> client 2 --> example2.com:11222 ----+
> {code}
> In that case the TLS configuration might be problematic (since very often certificates are issued for a domain name). However it is possible to use [SNI TLS Extension|https://tools.ietf.org/html/rfc6066#page-6].
> The SNI needs to be added to:
> * Client's configuration (it needs to modify its own {{SSLContext}} and add {{SSLParams}})
> * Hotrod server to support SNI (with Netty)
> * XML Configuration for Hotrod
[JBoss JIRA] (ISPN-5943) Access log and statistics of cache operations
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5943?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-5943:
--------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4240
> Access log and statistics of cache operations
> ---------------------------------------------
>
> Key: ISPN-5943
> URL: https://issues.jboss.org/browse/ISPN-5943
> Project: Infinispan
> Issue Type: Feature Request
> Components: JMX, reporting and management
> Reporter: Pedro Zapata
> Assignee: William Burns
> Labels: jdg7
>
> The HotRod and REST endpoints will have a dedicated log category, disabled by default, to which we will log using a format that follows the typical HTTPD server access.log format, i.e. client_ip user_id timestamp op key response_size processing_time.
> Enabling the log will be possible either by configuring the XML or by using the CLI.
> Additional JMX and DMR metrics will be implemented which report the number of invocations for each operation, their cumulative execution time, and the average execution time.
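As an illustration only (the sample values and the bracketed timestamp are assumptions; the issue specifies just the field order), one entry in that format could be produced like this:

```java
public class AccessLogLine {
    // Formats one entry in the proposed layout:
    // client_ip user_id timestamp op key response_size processing_time
    static String format(String ip, String user, String ts,
                         String op, String key, int size, long millis) {
        return String.format("%s %s [%s] %s %s %d %dms", ip, user, ts, op, key, size, millis);
    }

    public static void main(String[] args) {
        System.out.println(format("127.0.0.1", "user1",
                "27/Apr/2016:10:15:30 +0000", "PUT", "k1", 42, 3));
    }
}
```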
[JBoss JIRA] (ISPN-6406) NullPointerException while executed javascript returns null to js-client
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-6406?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-6406:
-----------------------------------
Forum Reference: http://lists.jboss.org/pipermail/infinispan-dev/2016-April/016595.html
> NullPointerException while executed javascript returns null to js-client
> ------------------------------------------------------------------------
>
> Key: ISPN-6406
> URL: https://issues.jboss.org/browse/ISPN-6406
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Reporter: Anna Manukyan
> Assignee: Galder Zamarreño
>
> When a JavaScript script executed on a local node returns null as its result, the following exception is thrown on the server:
> {code}
> 17:25:26,127 ERROR [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-7-119) ISPN005022: Exception writing response with messageId=188: java.lang.NullPointerException
> at org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:65)
> at org.infinispan.server.hotrod.Encoder2x$.writeResponse(Encoder2x.scala:340)
> at org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:45)
> at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
> at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
> at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:691)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:681)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:716)
> at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:954)
> at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:243)
> at org.infinispan.server.hotrod.HotRodDecoder.writeResponse(HotRodDecoder.scala:250)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeHeader(HotRodDecoder.scala:209)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeHeader(HotRodDecoder.scala:97)
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:52)
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:224)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:50)
> at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:32)
> at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
> at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:32)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The code is:
> {code}
> // Executable javascript:
> // mode=local,language=javascript,datatype='text/plain; charset=utf-8'
> cacheManager.getAddress()
> {code}
> And the test:
> {code}
> it('can execute a script remotely to get node address from cacheManager', function(done) {
>   Promise.all([client, readFile('spec/utils/test-cacheManager.js')])
>     .then(function(vals) {
>       var c = vals[0];
>       return c.addScript('test-cacheManager.js', vals[1].toString())
>         .then(function() { return c; });
>     })
>     .then(t.assert(t.exec('test-cacheManager.js'), t.toBeUndefined))
>     .catch(failed(done)).finally(done);
> });
> {code}
> The result of the test execution is:
> {code}
> Message:
> java.lang.NullPointerException
> Stacktrace:
> undefined
> {code}
[JBoss JIRA] (ISPN-6406) NullPointerException while executed javascript returns null to js-client
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-6406?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-6406:
----------------------------------------
This problem is not specific to the JavaScript client. In general, the Hot Rod protocol does not define how to handle null values returned by the execution of a remote script.
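Until the protocol defines null handling, one possible server-side guard looks like the sketch below; {{writeRangedBytes}} here is a simplified stand-in for the real encoder helper (it uses a one-byte length prefix instead of Hot Rod's VInt), so treat the names as illustrative:

```java
import java.io.ByteArrayOutputStream;

public class NullSafeEncoding {
    // Length-prefixed bytes, treating a null script result as an empty array
    // instead of dereferencing it (which is what throws the NPE above).
    static byte[] writeRangedBytes(byte[] value) {
        byte[] safe = (value == null) ? new byte[0] : value;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(safe.length); // one-byte length for this sketch; real Hot Rod uses a VInt
        out.write(safe, 0, safe.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // A null result encodes as just the zero length byte
        System.out.println(writeRangedBytes(null).length);
    }
}
```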
> NullPointerException while executed javascript returns null to js-client
> ------------------------------------------------------------------------
>
> Key: ISPN-6406
> URL: https://issues.jboss.org/browse/ISPN-6406
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Reporter: Anna Manukyan
> Assignee: Galder Zamarreño
>
> When a JavaScript script executed on a local node returns null as its result, the following exception is thrown on the server:
> {code}
> 17:25:26,127 ERROR [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-7-119) ISPN005022: Exception writing response with messageId=188: java.lang.NullPointerException
> at org.infinispan.server.core.transport.ExtendedByteBuf$.writeRangedBytes(ExtendedByteBuf.scala:65)
> at org.infinispan.server.hotrod.Encoder2x$.writeResponse(Encoder2x.scala:340)
> at org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:45)
> at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
> at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
> at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:691)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:681)
> at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:716)
> at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:954)
> at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:243)
> at org.infinispan.server.hotrod.HotRodDecoder.writeResponse(HotRodDecoder.scala:250)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeHeader(HotRodDecoder.scala:209)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeHeader(HotRodDecoder.scala:97)
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:52)
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:224)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:50)
> at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:370)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:168)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$core$transport$StatsChannelHandler$$super$channelRead(HotRodDecoder.scala:32)
> at org.infinispan.server.core.transport.StatsChannelHandler$class.channelRead(StatsChannelHandler.scala:32)
> at org.infinispan.server.hotrod.HotRodDecoder.channelRead(HotRodDecoder.scala:32)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The code is:
> {code}
> // Executable javascript:
> // mode=local,language=javascript,datatype='text/plain; charset=utf-8'
> cacheManager.getAddress()
> {code}
> And the test:
> {code}
> it('can execute a script remotely to get node address from cacheManager', function(done) {
>   Promise.all([client, readFile('spec/utils/test-cacheManager.js')])
>     .then(function(vals) {
>       var c = vals[0];
>       return c.addScript('test-cacheManager.js', vals[1].toString())
>         .then(function() { return c; });
>     })
>     .then(t.assert(t.exec('test-cacheManager.js'), t.toBeUndefined))
>     .catch(failed(done)).finally(done);
> });
> {code}
> The result of the test execution is:
> {code}
> Message:
> java.lang.NullPointerException
> Stacktrace:
> undefined
> {code}
[JBoss JIRA] (ISPN-5699) Simplify entry wrapping
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5699?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-5699:
------------------------------------------
Bugzilla References: (was: https://bugzilla.redhat.com/show_bug.cgi?id=1324030)
Bugzilla Update: (was: Perform)
> Simplify entry wrapping
> -----------------------
>
> Key: ISPN-5699
> URL: https://issues.jboss.org/browse/ISPN-5699
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 8.0.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.1.0.Alpha2, 8.1.0.Final
>
>
> The entry wrapping should be more or less the same for all write commands.
> Currently we have some optimizations to skip wrapping in some cases; in the first phase we should probably keep them:
> * Non-repeatable reads don't store anything in the context if the value is null
> * Replace and remove store a null entry in the context
> * PutForExternalRead doesn't store anything in the context if the value is non-null
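The three optimizations above can be sketched as a single decision function (a hedged illustration; the enum and names are mine, not Infinispan's, and repeatable-read handling is omitted):

```java
public class WrappingRules {
    enum Op { READ, REPLACE, REMOVE, PUT_FOR_EXTERNAL_READ }

    // Returns whether the entry should be stored in the invocation context,
    // following the three skip-wrapping optimizations listed above.
    static boolean storeInContext(Op op, boolean valueIsNull) {
        switch (op) {
            case READ:
                return !valueIsNull;      // non-repeatable reads skip null values
            case REPLACE:
            case REMOVE:
                return true;              // store even a null entry
            case PUT_FOR_EXTERNAL_READ:
                return valueIsNull;       // skip when a value already exists
            default:
                return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(storeInContext(Op.READ, true));
        System.out.println(storeInContext(Op.PUT_FOR_EXTERNAL_READ, false));
    }
}
```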
[JBoss JIRA] (ISPN-6425) FileNotFoundException with async indexing backend
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6425?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-6425:
-----------------------------------------
Right, let's first agree on the various {{async}} involved:
1. {{async cache store}}: writes to the cache store are done in a separate thread than the one doing the {{cache.put}}. This applies to all regular cache stores and is not directly related to indexing or Lucene. It has been around since earlier versions of Infinispan (5+).
2. {{async indexing backend}}: Lucene indexing is done separately from the {{cache.put}}. This does not need to involve a cache store, and can be configured for all indexing backends: filesystem, RAM and the Infinispan directory. It has been available since Infinispan 6, but its performance increased drastically in Infinispan 7.
3. {{write_metadata_async}}: an experimental indexing config that applies only when indexes are stored in Infinispan. Infinispan-based Lucene indexes are composed of 3 caches: Data, Metadata and Lock. This config applies only to the metadata cache writes, and exists from Infinispan 7 onwards.
Your case is related to 1 while this JIRA fixes 3, so I'd say your Infinispan version is not affected by this issue.
Regarding your {{FileNotFoundException}}, I'd strongly recommend you consider upgrading to the latest Infinispan, as it has several fixes and huge performance improvements for indexing and queries in general.
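Purely as a sketch of where each knob lives (exact property keys can vary across versions, so treat these as assumptions): the async store from 1 is set on the cache store element itself (see the {{<async enabled="true"/>}} in the config below), while 2 and 3 are Hibernate Search properties:

```properties
# 2. async indexing backend (Hibernate Search worker property)
hibernate.search.default.worker.execution = async

# 3. write_metadata_async is only meaningful when indexes live in Infinispan
hibernate.search.default.directory_provider = infinispan
```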
> FileNotFoundException with async indexing backend
> -------------------------------------------------
>
> Key: ISPN-6425
> URL: https://issues.jboss.org/browse/ISPN-6425
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Lucene Directory
> Affects Versions: 8.2.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 8.2.1.Final, 9.0.0.Alpha1, 9.0.0.Final
>
>
> The Infinispan directory defaults to {{write_metadata_async=true}} when the indexing backend is configured as async, i.e. {{default.worker.execution}} is {{true}}.
> With {{write_metadata_async=true}}, {{cache.putAsync}} is used to write the index file metadata, while files are still deleted and created synchronously. This can lead to stale metadata causing FileNotFoundExceptions when executing queries:
> Suppose a Lucene directory contains files \[segments_4, _4.si\]. During normal operation, apart from the user thread, two other threads could be changing the index: the periodic commit thread (since the backend is async) and the async deletion of files.
> The following race can happen:
> ||Time||Thread||work type||work||
> |T1|Hibernate Search: Commit Scheduler for index| SYNC | write files segments_5 and _5.si to the index
> |T2|Hibernate Search: Commit Scheduler for index| ASYNC | write the new file list containing \[segments_4, _4.si, segments_5,_5.si\]
> |T3|Hibernate Search: Commit Scheduler for index| ASYNC | enqueue a deletion task for files segments_4 and _4.si
> |T4|Hibernate Search: async deletion of index| SYNC | dequeue deletion task for files segments_4 and _4.si
> |T5|Hibernate Search: async deletion of index| SYNC | delete files segments_4 and _4.si from the index
> |T6|Hibernate Search: async deletion of index| ASYNC | write the new file list containing \[segments_5,_5.si\]
> |T7|User-thread| |open index reader, file list is \[segments_4, _4.si\], highest segment number is 4 (file list is not updated yet)
> |T8|User-thread| |open segments_4
> |T9|User-thread| |FileNotFoundException!
> |T10|remote-thread-User| | new file list received \[segments_4, _4.si, segments_5,_5.si\]
> |T11|remote-thread-User| | new file list received \[segments_5,_5.si\]
> This race can be observed in {{MassIndexerAsyncBackendTest#testMassIndexOnAsync}} that fails intermittently with the exception:
> {noformat}
> Caused by: java.io.FileNotFoundException: Error loading metadata for index file: M|segments_4|commonIndex|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:344) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:300) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:251) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> {noformat}
> We should not enable {{write_metadata_async=true}} for async backends. The file list is already {{DeltaAware}}, so writing it synchronously should not pose a meaningful overhead.
[JBoss JIRA] (ISPN-6425) FileNotFoundException with async indexing backend
by kostd kostd (JIRA)
[ https://issues.jboss.org/browse/ISPN-6425?page=com.atlassian.jira.plugin.... ]
kostd kostd commented on ISPN-6425:
-----------------------------------
[~gustavonalle], we have a similar issue in a production environment. Environment: WildFly 8.2.0.Final, Infinispan 6.0.2.Final, hibernate-search 4.5.1.Final, hibernate-search-infinispan 4.5.1.Final, two nodes in the hibernate-search cluster via JGroups 3.4.5.Final.
We use async data and metadata caches because it is recommended for performance:
{quote}
if you need high performance on writes with the Lucene Directory the best option is to disable any CacheStore; the second best option is to configure the CacheStore as async .
{quote}
{code:title=our infinispan config}
<global>
<!-- Duplicate domains are allowed so that multiple deployments with default configuration of Hibernate Search applications
work - if possible it would be better to use JNDI to share the CacheManager across applications -->
<globalJmxStatistics enabled="true" cacheManagerName="HibernateSearch" allowDuplicateDomains="true" />
<!-- If the transport is omitted, there is no way to create distributed or clustered caches. There is no added cost to
defining a transport but not creating a cache that uses one, since the transport is created and initialized lazily. -->
<transport clusterName="${argus.textsearch.infinispan.cluster-name}" distributedSyncTimeout="240000">
<!-- Note that the JGroups transport uses sensible defaults if no configuration property is defined. See the JGroupsTransport
javadocs for more flags -->
<properties>
<property name="configurationFile" value="${jboss.home.dir}/domain/configuration/hibernatesearch-infinispan-jgroups-tcp.xml" />
</properties>
</transport>
<!-- Note that the JGroups transport uses sensible defaults if no configuration property is defined. See the Infinispan
wiki for more JGroups settings: http://community.jboss.org/wiki/ClusteredConfigurationQuickStart -->
<!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER. Hibernate Search takes care to
stop the CacheManager so registering is not needed -->
<shutdown hookBehavior="DONT_REGISTER" />
</global>
<!-- *************************** -->
<!-- Default "template" settings -->
<!-- *************************** -->
<default>
<locking lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false" />
<invocationBatching enabled="false" />
<!-- This element specifies that the cache is clustered. modes supported: distribution (d), replication (r) or invalidation
(i). Don't use invalidation to store Lucene indexes (as with Hibernate Search DirectoryProvider). Replication is recommended
for best performance of Lucene indexes, but make sure you have enough memory to store the index in your heap. Also distribution
scales much better than replication on high number of nodes in the cluster. -->
<clustering mode="replication">
<!-- Prefer loading all data at startup than later -->
<stateTransfer timeout="480000" fetchInMemoryState="true" />
<!-- Network calls are synchronous by default -->
<sync replTimeout="30000" />
</clustering>
<jmxStatistics enabled="true" />
<eviction maxEntries="-1" strategy="NONE" />
<expiration maxIdle="-1" />
</default>
<!-- *************************************** -->
<!-- Cache to store Lucene's file metadata -->
<!-- *************************************** -->
<namedCache name="LuceneIndexesMetadata">
<persistence passivation="false">
<singleFile fetchPersistentState="true" ignoreModifications="false" preload="true" purgeOnStartup="false"
shared="false" location="${jboss.server.data.dir}/textsearch-store/${argus.db.name}/">
<async enabled="true" />
</singleFile>
</persistence>
</namedCache>
<!-- **************************** -->
<!-- Cache to store Lucene data -->
<!-- **************************** -->
<namedCache name="LuceneIndexesData">
<persistence passivation="false">
<singleFile fetchPersistentState="true" ignoreModifications="false" preload="true" purgeOnStartup="false"
shared="false" location="${jboss.server.data.dir}/textsearch-store/${argus.db.name}/">
<async enabled="true" />
</singleFile>
</persistence>
</namedCache>
{code}
Why do the changes in this issue only correct the default value, doing nothing for the cases where the async metadata cache was selected explicitly?
We want a fast async metadata cache and do not want to regularly catch FileNotFoundException. Can we, or should we, migrate to a synchronous metadata (data?) cache?
Maybe it is not possible to fix the FileNotFoundException for an async cache? Or maybe our old hibernate-search-infinispan-6.0.2.Final.jar is not affected by this issue? Please help.
> FileNotFoundException with async indexing backend
> -------------------------------------------------
>
> Key: ISPN-6425
> URL: https://issues.jboss.org/browse/ISPN-6425
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Lucene Directory
> Affects Versions: 8.2.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 8.2.1.Final, 9.0.0.Alpha1, 9.0.0.Final
>
>
> The Infinispan directory defaults to {{write_metadata_async=true}} when the indexing backend is configured as async, i.e. {{default.worker.execution}} is {{true}}.
> With {{write_metadata_async=true}}, {{cache.putAsync}} is used to write the index file metadata, while files are still deleted and created synchronously. This can lead to stale metadata causing FileNotFoundExceptions when executing queries:
> Suppose a Lucene directory contains files \[segments_4, _4.si\]. During normal operation, apart from the user thread, two other threads could be changing the index: the periodic commit thread (since the backend is async) and the async deletion of files.
> The following race can happen:
> ||Time||Thread||work type||work||
> |T1|Hibernate Search: Commit Scheduler for index| SYNC | write files segments_5 and _5.si to the index
> |T2|Hibernate Search: Commit Scheduler for index| ASYNC | write the new file list containing \[segments_4, _4.si, segments_5,_5.si\]
> |T3|Hibernate Search: Commit Scheduler for index| ASYNC | enqueue a deletion task for files segments_4 and _4.si
> |T4|Hibernate Search: async deletion of index| SYNC | dequeue deletion task for files segments_4 and _4.si
> |T5|Hibernate Search: async deletion of index| SYNC | delete files segments_4 and _4.si from the index
> |T6|Hibernate Search: async deletion of index| ASYNC | write the new file list containing \[segments_5,_5.si\]
> |T7|User-thread| |open index reader, file list is \[segments_4, _4.si\], highest segment number is 4 (file list is not updated yet)
> |T8|User-thread| |open segments_4
> |T9|User-thread| |FileNotFoundException!
> |T10|remote-thread-User| | new file list received \[segments_4, _4.si, segments_5,_5.si\]
> |T11|remote-thread-User| | new file list received \[segments_5,_5.si\]
> This race can be observed in {{MassIndexerAsyncBackendTest#testMassIndexOnAsync}} that fails intermittently with the exception:
> {noformat}
> Caused by: java.io.FileNotFoundException: Error loading metadata for index file: M|segments_4|commonIndex|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:344) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:300) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:251) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
> {noformat}
> We should not enable {{write_metadata_async=true}} for async backends. The file list is already {{DeltaAware}}, so writing it synchronously should not pose a meaningful overhead.
[JBoss JIRA] (ISPN-6511) Allocate less memory during RPC response handling
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-6511?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño resolved ISPN-6511.
------------------------------------
Fix Version/s: 9.0.0.Final
Resolution: Done
> Allocate less memory during RPC response handling
> -------------------------------------------------
>
> Key: ISPN-6511
> URL: https://issues.jboss.org/browse/ISPN-6511
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 8.2.1.Final, 9.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha2, 9.0.0.Final
>
>
> For broadcasts/anycasts, the {{JGroupsTransport.invokeRemotelyAsync()}} responses {{HashMap}} is created with the wrong initial capacity ({{recipients.size()}} instead of {{recipients.size()/DEFAULT_LOAD_FACTOR}}), so it needs a resize to fit all the responses.
> Clustered get commands shouldn't have this problem because usually there is only one valid response. However, we actually add a {{CacheNotFoundResponse}} for all the owners that we didn't request the entry from, triggering the resize.
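The capacity arithmetic can be illustrated with a plain {{HashMap}} (a sketch; the helper name is mine, but it mirrors the {{size/DEFAULT_LOAD_FACTOR}} formula from the description):

```java
import java.util.HashMap;
import java.util.Map;

public class ResponseMapSizing {
    // Initial capacity that fits `expected` entries without a resize at the
    // default load factor of 0.75.
    static int capacityFor(int expected) {
        return (int) ((float) expected / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int recipients = 16;
        // new HashMap<>(recipients) would resize once 0.75 * 16 = 12 entries
        // are inserted; sizing by the load factor avoids that.
        Map<String, String> responses = new HashMap<>(capacityFor(recipients));
        for (int i = 0; i < recipients; i++) {
            responses.put("node-" + i, "ok");
        }
        System.out.println(responses.size());
    }
}
```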
[JBoss JIRA] (ISPN-6511) Allocate less memory during RPC response handling
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-6511?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-6511:
-----------------------------------
Git Pull Request: https://github.com/infinispan/infinispan/pull/4250
> Allocate less memory during RPC response handling
> -------------------------------------------------
>
> Key: ISPN-6511
> URL: https://issues.jboss.org/browse/ISPN-6511
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 8.2.1.Final, 9.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha2, 9.0.0.Final
>
>
> For broadcasts/anycasts, the {{JGroupsTransport.invokeRemotelyAsync()}} responses {{HashMap}} is created with the wrong initial capacity ({{recipients.size()}} instead of {{recipients.size()/DEFAULT_LOAD_FACTOR}}), so it needs a resize to fit all the responses.
> Clustered get commands shouldn't have this problem because usually there is only one valid response. However, we actually add a {{CacheNotFoundResponse}} for all the owners that we didn't request the entry from, triggering the resize.