[JBoss JIRA] (ISPN-8990) Avoid JBossMarshaller instance caches growing limitless
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/ISPN-8990?page=com.atlassian.jira.plugin.... ]
Richard Janík updated ISPN-8990:
--------------------------------
Git Pull Request: (was: https://github.com/infinispan/infinispan/pull/5860)
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8990
> URL: https://issues.jboss.org/browse/ISPN-8990
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 9.2.0.Final
> Reporter: Richard Janík
> Assignee: Galder Zamarreño
> Labels: downstream_dependency
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, instance caches are kept that never shrink, which can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8990) Avoid JBossMarshaller instance caches growing limitless
by Richard Janík (JIRA)
Richard Janík created ISPN-8990:
-----------------------------------
Summary: Avoid JBossMarshaller instance caches growing limitless
Key: ISPN-8990
URL: https://issues.jboss.org/browse/ISPN-8990
Project: Infinispan
Issue Type: Bug
Components: Marshalling
Affects Versions: 8.2.10.Final
Reporter: Richard Janík
Assignee: Galder Zamarreño
In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, instance caches are kept that never shrink, which can create leaks.
We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
This would require Infinispan code to access and check the instance cache size.
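The size-check-on-release idea could be sketched roughly as follows. This is a hypothetical illustration, not Infinispan's actual API: {{MarshallerPool}}, {{PerThreadMarshaller}}, and the way the instance-cache size is tracked here are all assumptions, since JBoss Marshalling's real instance caches are internal to the library.

```java
// Hypothetical sketch of the proposed fix: cache marshallers per thread, but
// on release discard any whose internal instance cache grew past a cap, so
// the next acquire creates a fresh one. Names and the size-tracking field
// are assumptions; JBoss Marshalling's real instance caches are internal.
final class MarshallerPool {
    static final int MAX_INSTANCE_CACHE_SIZE = 1024; // cap suggested in the issue

    static final class PerThreadMarshaller {
        int instanceCacheSize; // stands in for the internal instance cache's size

        void marshal(Object o) {
            instanceCacheSize++; // marshalling may grow the internal caches
        }
    }

    private final ThreadLocal<PerThreadMarshaller> cached =
            ThreadLocal.withInitial(PerThreadMarshaller::new);

    PerThreadMarshaller acquire() {
        return cached.get();
    }

    void release(PerThreadMarshaller m) {
        // The issue's proposal: check the instance cache size on release and
        // destroy oversized marshallers instead of caching them forever.
        if (m.instanceCacheSize > MAX_INSTANCE_CACHE_SIZE) {
            cached.remove(); // next acquire() recreates a fresh marshaller
        }
    }
}
```

The cap bounds the per-thread memory footprint without giving up the fast path: marshallers that stay small keep being reused.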
--
[JBoss JIRA] (ISPN-8990) Avoid JBossMarshaller instance caches growing limitless
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/ISPN-8990?page=com.atlassian.jira.plugin.... ]
Richard Janík updated ISPN-8990:
--------------------------------
Affects Version/s: 9.2.0.Final
(was: 8.2.10.Final)
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8990
> URL: https://issues.jboss.org/browse/ISPN-8990
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 9.2.0.Final
> Reporter: Richard Janík
> Assignee: Galder Zamarreño
> Labels: downstream_dependency
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, instance caches are kept that never shrink, which can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
--
[JBoss JIRA] (ISPN-8974) Avoid JBossMarshaller instance caches growing limitless
by Richard Janík (JIRA)
[ https://issues.jboss.org/browse/ISPN-8974?page=com.atlassian.jira.plugin.... ]
Richard Janík commented on ISPN-8974:
-------------------------------------
I'm looking at the code differences between 8.2.8.Final and current master (9.2.1-SNAPSHOT, 5d3d2f1c0a3c41d60158d2b771d43abb3d4a5233). It looks like the marshallers and unmarshallers are still cached per thread, and the {{removeMarshaller/removeUnmarshaller}} logic is the same in master. I'll clone the issue for 9.x, if that's ok. If there's a design difference in 9.x that I'm not aware of that fixes this, please let me know and just close the new issue. Thanks!
Cc [~mvinkler]
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8974
> URL: https://issues.jboss.org/browse/ISPN-8974
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 8.2.10.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: downstream_dependency
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, instance caches are kept that never shrink, which can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
--
[JBoss JIRA] (ISPN-8908) Remote cache fails to get entries when state transfer is turned off
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-8908?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo commented on ISPN-8908:
-----------------------------------
This isn't a bug but a feature request; this behaviour was never supported.
Besides the data loss, which we can't prevent, turning state transfer off may create inconsistencies in the data when combined with concurrent remove operations.
As an example, assuming a backup owner of a key, these events can happen:
* It receives a {{remove_operation}} from the primary owner and applies it. The key is removed.
* It performs a local {{get()}}. The value isn't found, so it asks all the other owners. If the other owners haven't applied the {{remove_operation}} yet, the old value is returned.
* It then stores the old value again.
In the end, this backup owner keeps the value while the remaining owners remove it.
To solve this we would need to support tombstones, which we currently don't have.
IMO, enabling state transfer and disabling {{awaitInitialTransfer}} is the best option.
[~dan.berindei] or [~rvansa], do you have another suggestion/opinion?
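The interleaving above can be replayed deterministically with plain maps standing in for the two owners' stores. Everything here ({{StaleReadReplay}}, the single-key maps, the read-through logic) is a hypothetical illustration of the scenario, not Infinispan code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-threaded replay of the interleaving described above.
// Two plain maps stand in for the primary and backup owners' local stores.
final class StaleReadReplay {
    static final String KEY = "k";
    final Map<String, String> primary = new HashMap<>();
    final Map<String, String> backup = new HashMap<>();

    // Backup's read path with state transfer off: on a local miss, ask the
    // other owner and store whatever comes back -- possibly a stale value.
    String backupGet() {
        String local = backup.get(KEY);
        if (local != null) {
            return local;
        }
        String remote = primary.get(KEY); // remote fetch from the other owner
        if (remote != null) {
            backup.put(KEY, remote); // re-stores the old value locally
        }
        return remote;
    }
}
```

Without a tombstone recording that the key was deleted, the backup has no way to tell the remotely fetched value is stale, which is why the comment above argues tombstone support is the real fix.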
> Remote cache fails to get entries when state transfer is turned off
> -------------------------------------------------------------------
>
> Key: ISPN-8908
> URL: https://issues.jboss.org/browse/ISPN-8908
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 9.2.0.Final
> Reporter: Vojtech Juranek
> Assignee: Pedro Ruivo
>
> When state transfer is turned off, a remote cache fails to obtain cache entries from a Hot Rod server (in client-server mode) that connects to the cluster. Only about half of the entries can be seen by the Hot Rod client on the newly connected server, in both replicated and distributed caches.
> It's not specific to client-server mode; the same problem also occurs in embedded mode.
--
[JBoss JIRA] (ISPN-8989) Administration console - Counters don't work in standalone mode
by Roman Macor (JIRA)
Roman Macor created ISPN-8989:
---------------------------------
Summary: Administration console - Counters don't work in standalone mode
Key: ISPN-8989
URL: https://issues.jboss.org/browse/ISPN-8989
Project: Infinispan
Issue Type: Bug
Components: JMX, reporting and management
Affects Versions: 9.2.0.Final
Reporter: Roman Macor
Start the server in standalone clustered mode:
- bin/standalone.sh -c clustered.xml
- click on the cache container -> Counters tab -> Create new counter -> create a counter, e.g. name: new, initial value: 5
This results in an error pop-up:
WFLYCTL0030: No resource definition is registered for address [ ("profile" => "standalone"), ("subsystem" => "datagrid-infinispan"), ("cache-container" => "clustered"), ("counters" => "COUNTERS") ]
Please note that this works in domain mode.
--
[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes edited comment on ISPN-8980 at 3/26/18 4:10 AM:
------------------------------------------------------------------
Answering all the questions:
>> The same change has been applied to a clustered env and it is working there too (when backend_execution is set as sync).
You should also change your neutrino-hibernatesearch-infinispan.xml to use a <replicated-cache> for {{LuceneIndexesLocking}}, otherwise you'll likely see index corruption since the lock is only visible to the local node.
>> For this version (6.0.2) too, will infinispan handle the locking itself?
Yes, the Infinispan directory has always used a locking cache since its inception.
On a side note, I'd strongly encourage you to upgrade, as this version is no longer maintained; besides that, newer versions of Infinispan bring orders-of-magnitude performance improvements, which in many cases means you can avoid async indexing altogether since sync performs very well.
>> When we set the backend_execution as async, the exception starts coming again.
Could you re-try changing the LuceneIndexesLocking cache as suggested above?
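For illustration, a minimal sketch of what the suggested change might look like in neutrino-hibernatesearch-infinispan.xml; the cache-container name and attributes are assumptions based on typical Infinispan 8.x configuration, not the reporter's actual file:

```xml
<!-- Hypothetical fragment: define LuceneIndexesLocking as a replicated cache
     so the index lock is visible cluster-wide, not just to the local node. -->
<cache-container name="HibernateSearch" default-cache="default">
    <replicated-cache name="LuceneIndexesLocking" mode="SYNC"/>
</cache-container>
```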
> High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-8980
> URL: https://issues.jboss.org/browse/ISPN-8980
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Reporter: Debashish Bharali
> Assignee: Gustavo Fernandes
> Priority: Critical
> Attachments: SysOutLogs.txt, neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> Under high concurrency, we are getting *{color:red}'Error loading metadata for index file'{color}* even in a *{color:red}non-clustered{color}* env.
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> *Worker Backend : JGroups*
> *Worker Execution: Sync*
> *write_metadata_async: false (implicitly)*
> *Note:* Currently we are on a non-clustered env. We are moving to a clustered env within a few days.
> On analyzing the code and putting some additional SYSOUT loggers into the FileListOperations and DirectoryImplementor classes, we have established the following points:
> # This is happening during high concurrency on non-clustered env.
> # One thread *'T1'* is deleting a segment and its segment name *'SEG1'* from the *'FileListCacheKey'* list stored in the *MetadataCache*.
> # Concurrently, another thread *'T2'* is looping through the file list [a 'copy list' of the MetadataCache entry for *'FileListCacheKey'*, provided by the toArray method of *FileListOperations*, while thread T1 is also changing the corresponding original list].
> # *'T2'* is calling the open input method on each segment name, getting the corresponding metadata segment from the *MetadataCache*.
> # However, for *'T2'*, the *'copy list'* still contains the name of segment *'SEG1'*.
> # So while looping through the list, *'T2'* tries to get Segment from MetadataCache for segment name *'SEG1'*.
> # But at this instant, *segment* corresponding to segment name *'SEG1'*, has been already removed from *MetadataCache* by *'T1'*.
> # This results in a *'java.io.FileNotFoundException: Error loading metadata for index file'* for segment name *'SEG1'*.
> # As mentioned earlier, this happens more often during high concurrency.
> *{color:red}On a standalone server (non-clustered), we are getting below error intermittently:{color}*
> Full Stack trace:
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not come in {color:red}'non-clustered'{color} env. Also it should not arise when worker execution is {color:red}'sync'{color}.*
> *We have debugged the code, and confirmed that the value for {color:red}'write_metadata_async'{color} is coming as 'false' only (as expected).*
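The race in the numbered list above can be replayed deterministically with plain collections; {{SnapshotRaceReplay}} and its fields are hypothetical stand-ins for the file list and MetadataCache, not Infinispan code.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

// Hypothetical replay of the race described above: T2 iterates a snapshot
// ("copy list") of the file list while T1 removes a segment name and its
// metadata entry. Plain collections stand in for the Infinispan caches.
final class SnapshotRaceReplay {
    final Set<String> fileList = new HashSet<>();              // the FileListCacheKey list
    final Map<String, String> metadataCache = new HashMap<>(); // segment name -> metadata

    String[] snapshot() {
        return fileList.toArray(new String[0]); // T2's copy list, taken once
    }

    String openInput(String name) {
        String meta = metadataCache.get(name);
        if (meta == null) {
            // mirrors the reported "Error loading metadata for index file"
            throw new NoSuchElementException(
                "Error loading metadata for index file: " + name);
        }
        return meta;
    }
}
```

The snapshot is consistent only at the instant it is taken; any deletion that lands between the snapshot and the per-segment metadata lookup reproduces the reported failure.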
--