[JBoss JIRA] (ISPN-9063) Average stats should be calculated in Nanoseconds but exposed in Milliseconds
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-9063?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-9063:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.2.2.Final
Resolution: Done
> Average stats should be calculated in Nanoseconds but exposed in Milliseconds
> -----------------------------------------------------------------------------
>
> Key: ISPN-9063
> URL: https://issues.jboss.org/browse/ISPN-9063
> Project: Infinispan
> Issue Type: Bug
> Components: Console, JMX, reporting and management
> Affects Versions: 9.2.1.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.2.2.Final, 9.3.0.Alpha1
>
>
> ISPN-7768 changed the precision of the exposed average-<read|write>time stats from Milliseconds to Nanoseconds. This causes backwards compatibility issues with previous versions of Infinispan and JDG. Therefore, we need to ensure that these stats are exposed to the user in Milliseconds; however, they should still be calculated in Nanoseconds internally to improve accuracy and avoid rounding errors.
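A minimal sketch of the intended behaviour, using hypothetical names ({{ReadStats}} and {{getAverageReadTimeMillis}} are illustrative only, not the actual Infinispan classes): durations accumulate in nanoseconds internally and are converted to milliseconds only at the point where the stat is exposed.
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ReadStats {
   private final LongAdder totalReadTimeNanos = new LongAdder(); // internal precision stays in ns
   private final LongAdder readCount = new LongAdder();

   public void recordRead(long durationNanos) {
      totalReadTimeNanos.add(durationNanos);
      readCount.increment();
   }

   // Exposed stat: average read time in milliseconds, matching the pre-ISPN-7768 behaviour
   public long getAverageReadTimeMillis() {
      long count = readCount.sum();
      return count == 0 ? 0 : TimeUnit.NANOSECONDS.toMillis(totalReadTimeNanos.sum() / count);
   }
}
{code}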
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9062) JGroupsTransport should only send messages to nodes in the cluster view
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9062?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-9062:
-------------------------------------
Integrated into master and 9.2.x. The 9.2.x backport is just a straight cherry-pick with no conflicts.
> JGroupsTransport should only send messages to nodes in the cluster view
> -----------------------------------------------------------------------
>
> Key: ISPN-9062
> URL: https://issues.jboss.org/browse/ISPN-9062
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.2.2.Final, 9.3.0.Alpha1
>
>
> {{JGroupsTransport}} only waits for responses from nodes in the JGroups cluster view, but it still sends messages to all of the nodes specified as targets. The idea was to optimize the common case by avoiding a {{HashSet.contains()}} call.
> However, when a node is not in the view, messages to it still pass through the entire JGroups stack, and UNICAST3 keeps those messages in a send table for a long time ({{UNICAST3.conn_expiry_timeout}}, changed with ISPN-9038 from {{0}} (unlimited) to 2 minutes, the JGroups default). Having a potentially unlimited number of messages to non-members, each non-member with its own send table, makes it much harder to estimate memory usage.
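A hedged sketch of the proposed behaviour (generic types, illustrative only, not the actual {{JGroupsTransport}} code): filter the requested targets against the current view before handing messages to JGroups, so messages to departed nodes never reach the UNICAST3 send tables.
{code:java}
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

final class ViewFilter {
   // Keep only the targets that are members of the current cluster view
   static <A> List<A> retainMembers(List<A> targets, Set<A> currentView) {
      return targets.stream()
                    .filter(currentView::contains) // the Set.contains() check the old code skipped
                    .collect(Collectors.toList());
   }
}
{code}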
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9062) JGroupsTransport should only send messages to nodes in the cluster view
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-9062?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-9062:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> JGroupsTransport should only send messages to nodes in the cluster view
> -----------------------------------------------------------------------
>
> Key: ISPN-9062
> URL: https://issues.jboss.org/browse/ISPN-9062
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.2.2.Final, 9.3.0.Alpha1
>
>
> {{JGroupsTransport}} only waits for responses from nodes in the JGroups cluster view, but it still sends messages to all of the nodes specified as targets. The idea was to optimize the common case by avoiding a {{HashSet.contains()}} call.
> However, when a node is not in the view, messages to it still pass through the entire JGroups stack, and UNICAST3 keeps those messages in a send table for a long time ({{UNICAST3.conn_expiry_timeout}}, changed with ISPN-9038 from {{0}} (unlimited) to 2 minutes, the JGroups default). Having a potentially unlimited number of messages to non-members, each non-member with its own send table, makes it much harder to estimate memory usage.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes edited comment on ISPN-8980 at 4/16/18 9:57 AM:
------------------------------------------------------------------
I created a unit test (against the 8.2.x branch) with two nodes using persistent caches, similar to your setup: https://github.com/gustavonalle/infinispan/commit/6c7eb8b8a0bc24bf5179510...
The test basically starts a node, indexes a bunch of entities using the Hibernate Search Directory provider, then joins a second node and checks that all data is transferred correctly via state transfer. The test also stops the first node and queries the second node to guarantee that all data is queryable and available.
One thing you can observe when running the test is that the size on disk *is not necessarily the same for both nodes*, since the SingleFileStore can do some compaction, so the size on disk can differ depending on how the data is written.
So it seems the issue you are observing is not caused by state transfer between nodes. Maybe you could try to change the unit test to replicate your problem? So far it seems to me that the lack of proper locking is what is causing the index corruption.
was (Author: gustavonalle):
I created a unit test (against the 8.2.x branch) with two nodes using persistent caches, similar to your setup: https://github.com/gustavonalle/infinispan/commit/e50d6325ccd1ee3fed7b860...
The test basically starts a node, indexes a bunch of entities using the Hibernate Search Directory provider, then joins a second node and checks that all data is transferred correctly via state transfer. The test also stops the first node and queries the second node to guarantee that all data is queryable and available.
One thing you can observe when running the test is that the size on disk *is not necessarily the same for both nodes*, since the SingleFileStore can do some compaction, so the size on disk can differ depending on how the data is written.
So it seems the issue you are observing is not caused by state transfer between nodes. Maybe you could try to change the unit test to replicate your problem? So far it seems to me that the lack of proper locking is what is causing the index corruption.
> High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-8980
> URL: https://issues.jboss.org/browse/ISPN-8980
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Reporter: Debashish Bharali
> Assignee: Gustavo Fernandes
> Priority: Critical
> Attachments: JoiningNode_N2.zip, OriginalNode_N1.zip, SysOutLogs.txt, neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> Under high concurrency, we are getting *{color:red}'Error loading metadata for index file'{color}* even in a *{color:red}Non-Clustered{color}* env.
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> *Worker Backend : JGroups*
> *Worker Execution: Sync*
> *write_metadata_async: false (implicitly)*
> *Note:* Currently we are on a Non-Clustered env. We are moving to a Clustered env within a few days.
> On analyzing the code and putting some additional SYSOUT loggers into the FileListOperations and DirectoryImplementor classes, we have established the following points:
> # This is happening during high concurrency on a non-clustered env.
> # One thread *'T1'* is deleting a segment and its name *'SEG1'* from the *'FileListCacheKey'* list stored in the *MetadataCache*.
> # Concurrently, another thread *'T2'* is looping through a 'copy list' of that FileList (obtained from the MetadataCache for FileListCacheKey via the toArray method of *FileListOperations*), while T1 is still modifying the original list.
> # *'T2'* calls the openInput method for each segment name, fetching the corresponding metadata segment from the *MetadataCache*.
> # However, for *'T2'*, the *'copy list'* still contains the name of segment *'SEG1'*.
> # So while looping through the list, *'T2'* tries to get the segment from the MetadataCache for segment name *'SEG1'*.
> # But at this instant, the *segment* corresponding to segment name *'SEG1'* has already been removed from the *MetadataCache* by *'T1'*.
> # This results in *'java.io.FileNotFoundException: Error loading metadata for index file'* for segment name *'SEG1'*.
> # As mentioned earlier, this happens more often during high concurrency.
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> Full Stack trace:
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} env. Also, it should not arise when the worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9080) Distributed cache stream should wait for new topology when segments aren't complete
by William Burns (JIRA)
William Burns created ISPN-9080:
-----------------------------------
Summary: Distributed cache stream should wait for new topology when segments aren't complete
Key: ISPN-9080
URL: https://issues.jboss.org/browse/ISPN-9080
Project: Infinispan
Issue Type: Bug
Components: Streams
Affects Versions: 9.1.7.Final, 9.2.1.Final
Reporter: William Burns
Assignee: William Burns
Fix For: 9.2.2.Final, 9.3.0.Alpha1
Distributed streams can get stuck in an infinite loop, wasting CPU, if a new topology is never installed. This affects all cache modes (scattered, dist and repl).
We should wait for the next topology when we complete a loop and not all segments have been completed.
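A rough sketch of the idea, with hypothetical names ({{TopologyNotifier}} and {{SegmentRetryLoop}} are illustrative, not the actual stream internals): when an iteration finishes with segments still pending, block until the next topology is installed instead of immediately looping again.
{code:java}
import java.util.BitSet;
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

interface TopologyNotifier {
   // Completes once a topology newer than currentTopologyId is installed
   CompletableFuture<Integer> topologyAfter(int currentTopologyId);
}

final class SegmentRetryLoop {
   static void processAllSegments(BitSet pendingSegments, int topologyId, TopologyNotifier notifier,
                                  BiConsumer<BitSet, Integer> attempt) throws Exception {
      while (!pendingSegments.isEmpty()) {
         attempt.accept(pendingSegments, topologyId);              // clears the segments it completed
         if (!pendingSegments.isEmpty()) {
            topologyId = notifier.topologyAfter(topologyId).get(); // wait for a new topology instead of spinning
         }
      }
   }
}
{code}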
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9079) Cache operations during Conflict Resolution phase should force CR on that key
by Ryan Emerson (JIRA)
Ryan Emerson created ISPN-9079:
----------------------------------
Summary: Cache operations during Conflict Resolution phase should force CR on that key
Key: ISPN-9079
URL: https://issues.jboss.org/browse/ISPN-9079
Project: Infinispan
Issue Type: Enhancement
Components: Core
Affects Versions: 9.2.1.Final
Reporter: Ryan Emerson
Assignee: Ryan Emerson
Fix For: 9.3.0.Alpha1
During a partition merge, conflict resolution can take a long time when there are many entries. Therefore, if a user initiates an operation on a key during this phase, it should force CR to occur on that key before executing the operation.
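A hedged sketch of the idea, with made-up names ({{ConflictResolver}} and {{MergeAwareCache}} are illustrative, not Infinispan's actual ConflictManager API): while merge-wide conflict resolution is still running, an operation first forces CR on the key it touches.
{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

interface ConflictResolver<K> {
   boolean resolutionInProgress();
   // Resolves conflicts for a single key across the merged partitions
   CompletableFuture<Void> resolveConflict(K key);
}

final class MergeAwareCache<K, V> {
   private final ConflictResolver<K> resolver;
   private final Map<K, V> store = new ConcurrentHashMap<>();

   MergeAwareCache(ConflictResolver<K> resolver) {
      this.resolver = resolver;
   }

   V get(K key) {
      if (resolver.resolutionInProgress()) {
         resolver.resolveConflict(key).join(); // force CR on this key before serving the read
      }
      return store.get(key);
   }
}
{code}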
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8980:
-----------------------------------------
I created a unit test (against the 8.2.x branch) with two nodes using persistent caches, similar to your setup: https://github.com/gustavonalle/infinispan/commit/e50d6325ccd1ee3fed7b860...
The test basically starts a node, indexes a bunch of entities using the Hibernate Search Directory provider, then joins a second node and checks that all data is transferred correctly via state transfer. The test also stops the first node and queries the second node to guarantee that all data is queryable and available.
One thing you can observe when running the test is that the size on disk *is not necessarily the same for both nodes*, since the SingleFileStore can do some compaction, so the size on disk can differ depending on how the data is written.
So it seems the issue you are observing is not caused by state transfer between nodes. Maybe you could try to change the unit test to replicate your problem? So far it seems to me that the lack of proper locking is what is causing the index corruption.
> High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-8980
> URL: https://issues.jboss.org/browse/ISPN-8980
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Reporter: Debashish Bharali
> Assignee: Gustavo Fernandes
> Priority: Critical
> Attachments: JoiningNode_N2.zip, OriginalNode_N1.zip, SysOutLogs.txt, neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> Under high concurrency, we are getting *{color:red}'Error loading metadata for index file'{color}* even in a *{color:red}Non-Clustered{color}* env.
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> *Worker Backend : JGroups*
> *Worker Execution: Sync*
> *write_metadata_async: false (implicitly)*
> *Note:* Currently we are on a Non-Clustered env. We are moving to a Clustered env within a few days.
> On analyzing the code and putting some additional SYSOUT loggers into the FileListOperations and DirectoryImplementor classes, we have established the following points:
> # This is happening during high concurrency on a non-clustered env.
> # One thread *'T1'* is deleting a segment and its name *'SEG1'* from the *'FileListCacheKey'* list stored in the *MetadataCache*.
> # Concurrently, another thread *'T2'* is looping through a 'copy list' of that FileList (obtained from the MetadataCache for FileListCacheKey via the toArray method of *FileListOperations*), while T1 is still modifying the original list.
> # *'T2'* calls the openInput method for each segment name, fetching the corresponding metadata segment from the *MetadataCache*.
> # However, for *'T2'*, the *'copy list'* still contains the name of segment *'SEG1'*.
> # So while looping through the list, *'T2'* tries to get the segment from the MetadataCache for segment name *'SEG1'*.
> # But at this instant, the *segment* corresponding to segment name *'SEG1'* has already been removed from the *MetadataCache* by *'T1'*.
> # This results in *'java.io.FileNotFoundException: Error loading metadata for index file'* for segment name *'SEG1'*.
> # As mentioned earlier, this happens more often during high concurrency.
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> Full Stack trace:
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} env. Also, it should not arise when the worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9076) SearchException when indexing entities in persistent caches
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-9076?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-9076:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.2.2.Final
9.3.0.Alpha1
Resolution: Done
Integrated in master and 9.2.x. Thanks [~gustavonalle]!
> SearchException when indexing entities in persistent caches
> -----------------------------------------------------------
>
> Key: ISPN-9076
> URL: https://issues.jboss.org/browse/ISPN-9076
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.2.1.Final
> Environment: Cluster with two hosts (each instance of Infinispan hosted in own machine)
> Reporter: Sergey Chernolyas
> Assignee: Gustavo Fernandes
> Priority: Blocker
> Fix For: 9.2.2.Final, 9.3.0.Alpha1
>
> Attachments: Device.java, DeviceKey.java, NewsQueryTestProd.java, re_shop_entities.proto
>
>
> Can't save to a cache with indexing enabled when the key is not a primitive.
> I get the following exception:
> ________________
> 2018-04-13 15:28:02,053 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (HotRod-ServerHandler-6-101) ISPN000136: Error executing command PutKeyValueCommand, writing keys [WrappedByteArray{bytes=[B0x82012572752E6265..[58], hashCode=-1231863705}]: org
> .hibernate.search.exception.SearchException: Unable to perform work. Entity Class is not @Indexed nor hosts @ContainedIn: [B
> at org.hibernate.search.backend.impl.PerTransactionWorker.performWork(PerTransactionWorker.java:63)
> at org.infinispan.query.backend.QueryInterceptor.performSearchWorks(QueryInterceptor.java:387)
> at org.infinispan.query.backend.QueryInterceptor.removeFromIndexes(QueryInterceptor.java:341)
> at org.infinispan.query.backend.QueryInterceptor.processChange(QueryInterceptor.java:453)
> at org.infinispan.query.backend.QueryInterceptor.lambda$handleDataWriteCommand$0(QueryInterceptor.java:188)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextThenAccept(BaseAsyncInterceptor.java:105)
> at org.infinispan.query.backend.QueryInterceptor.handleDataWriteCommand(QueryInterceptor.java:181)
> at org.infinispan.query.backend.QueryInterceptor.visitPutKeyValueCommand(QueryInterceptor.java:248)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:200)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:127)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextThenAccept(BaseAsyncInterceptor.java:98)
> at org.infinispan.interceptors.impl.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:657)
> at org.infinispan.interceptors.impl.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:299)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextThenApply(BaseAsyncInterceptor.java:74)
> at org.infinispan.interceptors.distribution.L1LastChanceInterceptor.visitDataWriteCommand(L1LastChanceInterceptor.java:136)
> at org.infinispan.interceptors.distribution.L1LastChanceInterceptor.visitPutKeyValueCommand(L1LastChanceInterceptor.java:72)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndFinally(BaseAsyncInterceptor.java:150)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lambda$nonTxLockAndInvokeNext$1(AbstractLockingInterceptor.java:285)
> at org.infinispan.interceptors.SyncInvocationStage.addCallback(SyncInvocationStage.java:42)
> at org.infinispan.interceptors.InvocationStage.andHandle(InvocationStage.java:44)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.nonTxLockAndInvokeNext(AbstractLockingInterceptor.java:280)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:127)
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:82)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndHandle(BaseAsyncInterceptor.java:183)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:309)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:252)
> at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:96)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndFinally(BaseAsyncInterceptor.java:150)
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:209)
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:171)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:123)
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:56)
> at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54)
> at org.infinispan.interceptors.DDAsyncInterceptor.visitPutKeyValueCommand(DDAsyncInterceptor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:50)
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invokeAsync(AsyncInterceptorChainImpl.java:234)
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeededAsync(CacheImpl.java:1741)
> _____
> Proto schema, Java classes of model, and client code is attached.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8980:
-----------------------------------------
Also, I can see in the TRACE that the LuceneIndexesLocking cache is LOCAL. As I mentioned earlier, this *will cause index corruption*.
I understand you mentioned this is a "workaround" for the fact that the JGroups backend incorrectly handles locks, but the workaround is not valid, and we should instead fix the JGroups backend so that it correctly manages the locks.
So the correct config would be {{<prop key="hibernate.search.default.exclusive_index_use">false</prop>}} plus a *SYNC_REPL LuceneIndexesLocking* cache; this would guarantee that the lock is visible cluster wide, and that only one node can write to the index at a time.
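For reference, a minimal programmatic sketch of that setup, assuming the default locking cache name used by the directory provider ({{LuceneIndexesLocking}}); the exclusive_index_use property itself is set on the Hibernate Search side:
{code:java}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class LockingCacheConfig {
   public static void main(String[] args) {
      DefaultCacheManager cm = new DefaultCacheManager(
            GlobalConfigurationBuilder.defaultClusteredBuilder().build());
      // Replicated + synchronous locking cache so the index lock is visible cluster wide
      cm.defineConfiguration("LuceneIndexesLocking",
            new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());
   }
}
{code}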
> High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-8980
> URL: https://issues.jboss.org/browse/ISPN-8980
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Reporter: Debashish Bharali
> Assignee: Gustavo Fernandes
> Priority: Critical
> Attachments: JoiningNode_N2.zip, OriginalNode_N1.zip, SysOutLogs.txt, neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> Under high concurrency, we are getting *{color:red}'Error loading metadata for index file'{color}* even in a *{color:red}Non-Clustered{color}* env.
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> *Worker Backend : JGroups*
> *Worker Execution: Sync*
> *write_metadata_async: false (implicitly)*
> *Note:* Currently we are on a Non-Clustered env. We are moving to a Clustered env within a few days.
> On analyzing the code and putting some additional SYSOUT loggers into the FileListOperations and DirectoryImplementor classes, we have established the following points:
> # This is happening during high concurrency on a non-clustered env.
> # One thread *'T1'* is deleting a segment and its name *'SEG1'* from the *'FileListCacheKey'* list stored in the *MetadataCache*.
> # Concurrently, another thread *'T2'* is looping through a 'copy list' of that FileList (obtained from the MetadataCache for FileListCacheKey via the toArray method of *FileListOperations*), while T1 is still modifying the original list.
> # *'T2'* calls the openInput method for each segment name, fetching the corresponding metadata segment from the *MetadataCache*.
> # However, for *'T2'*, the *'copy list'* still contains the name of segment *'SEG1'*.
> # So while looping through the list, *'T2'* tries to get the segment from the MetadataCache for segment name *'SEG1'*.
> # But at this instant, the *segment* corresponding to segment name *'SEG1'* has already been removed from the *MetadataCache* by *'T1'*.
> # This results in *'java.io.FileNotFoundException: Error loading metadata for index file'* for segment name *'SEG1'*.
> # As mentioned earlier, this happens more often during high concurrency.
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> Full Stack trace:
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} env. Also, it should not arise when the worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9078) Spring CacheDelegate not handling null properly
by Mike Noordermeer (JIRA)
Mike Noordermeer created ISPN-9078:
--------------------------------------
Summary: Spring CacheDelegate not handling null properly
Key: ISPN-9078
URL: https://issues.jboss.org/browse/ISPN-9078
Project: Infinispan
Issue Type: Bug
Components: Spring Integration
Affects Versions: 9.2.1.Final
Reporter: Mike Noordermeer
Spring {{CacheDelegate}} does not properly handle {{NULL}} values in several functions. I already mentioned this in ISPN-7224 but I think it got overlooked.
The list of issues, looking at the code:
* {{get(Object key, Class<T> type)}} does not unwrap {{NullValue.NULL}}
* {{get(Object key, Callable<T> valueLoader)}} does not unwrap {{NullValue.NULL}}
* {{get(Object key, Callable<T> valueLoader)}} does not put {{NullValue.NULL}} if the value returned from {{valueLoader}} is {{null}}
* {{putIfAbsent}} does not put {{NullValue.NULL}} if {{value}} is {{null}}
The other put functions and the main get function do work as expected.
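An illustrative sketch only ({{SimpleNullAwareCache}} and the {{NULL}} sentinel are made-up names, not the actual Spring/Infinispan classes) of the wrap-on-put / unwrap-on-get behaviour the report is asking for:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class SimpleNullAwareCache {
   private static final Object NULL = new Object();  // placeholder stored instead of null
   private final Map<Object, Object> store = new ConcurrentHashMap<>();

   void put(Object key, Object value) {
      store.put(key, value == null ? NULL : value);  // wrap null before storing
   }

   <T> T get(Object key, Class<T> type) {
      Object v = store.get(key);
      if (v == null || v == NULL) {
         return null;                                // unwrap the placeholder on read
      }
      return type.cast(v);
   }
}
{code}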
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)