[JBoss JIRA] (ISPN-3048) Eviction needs to be transactional
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-3048?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-3048:
------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/2252
> Eviction needs to be transactional
> ----------------------------------
>
> Key: ISPN-3048
> URL: https://issues.jboss.org/browse/ISPN-3048
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.3.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: 620
> Fix For: 6.1.0.Final
>
>
> Currently, Infinispan eviction is non-transactional. This makes Infinispan's eviction manager virtually unusable, since non-transactional eviction violates the isolation of concurrent transactions and can cause phantom reads and data loss. This is especially problematic when using a passivation-enabled cache store: an eviction/passivation can cause a concurrently executing cache retrieval to return null - even though passivation does not change the data, only where it is stored.
> We work around this in the AS by performing eviction manually, using pessimistic locking in combination with eager lock acquisition prior to eviction. This is unfortunate, since it prevents us from leveraging Infinispan's built-in eviction strategies.
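> For illustration only, a minimal sketch of the kind of manual workaround described above - the class and method names are made up for this example and it is not the actual AS code. It assumes a transactional cache configured with pessimistic locking and acquires the lock on the key eagerly inside a JTA transaction before evicting it:
> {code}
> import javax.transaction.TransactionManager;
>
> import org.infinispan.AdvancedCache;
> import org.infinispan.Cache;
>
> public class ManualEviction {
>
>    // Assumes the cache is transactional and uses LockingMode.PESSIMISTIC.
>    public static void evictWithEagerLock(Cache<String, Object> cache, String key) throws Exception {
>       AdvancedCache<String, Object> advanced = cache.getAdvancedCache();
>       TransactionManager tm = advanced.getTransactionManager();
>       tm.begin();
>       try {
>          advanced.lock(key);   // eager lock acquisition prior to eviction
>          advanced.evict(key);  // passivates the entry to the store if passivation is enabled
>          tm.commit();
>       } catch (Exception e) {
>          tm.rollback();
>          throw e;
>       }
>    }
> }
> {code}
> This mirrors the described workaround; whether it is sufficient depends on concurrent readers participating in the same locking scheme.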
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3432) Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-3432?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-3432:
-------------------------------------
There's also Anna's comment that adding InfinispanIndexManager makes the locking issue disappear. I think using InfinispanIndexManager is the normal case. The other setup might just work, but there are no guarantees.
> Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3432
> URL: https://issues.jboss.org/browse/ISPN-3432
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 6.0.0.Alpha1
> Reporter: Anna Manukyan
> Assignee: Adrian Nistor
> Priority: Critical
> Labels: 620
> Fix For: 6.1.0.Final
>
> Attachments: async-config.xml
>
>
> Hi,
> this issue is related to ISPN-3090, but I decided to file this case separately in order to give a detailed explanation of the configuration and the thrown exceptions.
> The issue concerns performance tests of an index-enabled Infinispan cache configured with the Infinispan Directory provider and an async JDBC string-based cache store.
> The tests run on 4 nodes, performing puts/gets on all nodes with many threads.
> The problem is that, during data puts, the following exceptions are thrown continuously:
> {code}
> 04:04:05,633 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:_InstallBenchmarkStage_0 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> ......
> 04:14:21,605 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:key_0_0_0000000000000017 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> You can find the cache configuration attached.
> Yet another thing to mention:
> if the following line is added to the cache configuration:
> {code}
> <property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
> {code}
> then the issue is gone - no locking issue appears.
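> For reference, a rough programmatic sketch of the indexing setup being discussed - this is an illustration, not the attached async-config.xml, and the cache-store and clustering settings are omitted. The "default.directory_provider" and "default.indexmanager" properties follow the Hibernate Search conventions used by Infinispan Query:
> {code}
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
>
> public class IndexingConfigSketch {
>    public static Configuration build() {
>       ConfigurationBuilder builder = new ConfigurationBuilder();
>       builder.indexing().enable()
>              // store the index in an Infinispan Directory
>              .addProperty("default.directory_provider", "infinispan")
>              // the line mentioned above that makes the locking issue disappear
>              .addProperty("default.indexmanager",
>                           "org.infinispan.query.indexmanager.InfinispanIndexManager");
>       return builder.build();
>    }
> }
> {code}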
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3432) Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-3432?page=com.atlassian.jira.plugin.... ]
Adrian Nistor closed ISPN-3432.
-------------------------------
Resolution: Cannot Reproduce Bug
> Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3432
> URL: https://issues.jboss.org/browse/ISPN-3432
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 6.0.0.Alpha1
> Reporter: Anna Manukyan
> Assignee: Adrian Nistor
> Priority: Critical
> Labels: 620
> Fix For: 6.1.0.Final
>
> Attachments: async-config.xml
>
>
> Hi,
> this issue is related to ISPN-3090, but I decided to file this case separately in order to give a detailed explanation of the configuration and the thrown exceptions.
> The issue concerns performance tests of an index-enabled Infinispan cache configured with the Infinispan Directory provider and an async JDBC string-based cache store.
> The tests run on 4 nodes, performing puts/gets on all nodes with many threads.
> The problem is that, during data puts, the following exceptions are thrown continuously:
> {code}
> 04:04:05,633 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:_InstallBenchmarkStage_0 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> ......
> 04:14:21,605 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:key_0_0_0000000000000017 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> You can find the cache configuration attached.
> Yet another thing to mention:
> if the following line is added to the cache configuration:
> {code}
> <property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
> {code}
> then the issue is gone - no locking issue appears.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3782) Update Infinispan code once HSEARCH-1461 is fixed
by Adrian Nistor (JIRA)
Adrian Nistor created ISPN-3782:
-----------------------------------
Summary: Update Infinispan code once HSEARCH-1461 is fixed
Key: ISPN-3782
URL: https://issues.jboss.org/browse/ISPN-3782
Project: Infinispan
Issue Type: Task
Components: JMX, reporting and management, Querying
Affects Versions: 6.1.0.Final
Reporter: Adrian Nistor
Assignee: Adrian Nistor
We currently have a workaround for HSEARCH-1461 in ISPN-3335.
This should be undone once a proper fix for HSEARCH-1461 exists in Hibernate Search.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3633) InvalidateL1Command during ST should not cancel writing the entry by ST
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3633?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-3633.
---------------------------------
Resolution: Done
Radim, I created ISPN-3780 as a clone of this issue to handle the non-tx part. I just wanted to make sure we have the proper fix version here, since the tx version made it into 6.0.
> InvalidateL1Command during ST should not cancel writing the entry by ST
> -----------------------------------------------------------------------
>
> Key: ISPN-3633
> URL: https://issues.jboss.org/browse/ISPN-3633
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.3.0.Final
> Reporter: Radim Vansa
> Assignee: William Burns
> Priority: Critical
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> When a node that is about to receive the entries for some segment receives an InvalidateL1Command, the command puts the key into StateConsumer.updatedKeys. After the entries from state transfer (ST) are received, the entry that was invalidated is not updated -> this results in losing the entry.
> Btw., in EntryWrappingInterceptor.visitInvalidateL1Command the trace statement logs all looked-up entries for each invalidated entry - this causes performance problems when tracing is on and there are many invalidated entries. Please do this only once.
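> As a schematic illustration of the requested logging change (a standalone toy example, not the real EntryWrappingInterceptor source): render the looked-up-entries map once per command instead of once for every invalidated key.
> {code}
> import java.util.List;
> import java.util.Map;
>
> public class TraceOncePerCommand {
>
>    // Log the (potentially large) looked-up-entries map a single time per command,
>    // then do the per-key work without any further trace output.
>    static void invalidateAll(List<String> keys, Map<String, Object> lookedUpEntries, boolean trace) {
>       if (trace) {
>          System.out.printf("Invalidating L1 keys %s; looked up entries %s%n", keys, lookedUpEntries);
>       }
>       for (String key : keys) {
>          lookedUpEntries.remove(key); // stand-in for the real per-key invalidation
>       }
>    }
> }
> {code}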
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3780) CLONE - InvalidateL1Command during ST should not cancel writing the entry by ST in nontx
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3780?page=com.atlassian.jira.plugin.... ]
Work on ISPN-3780 started by William Burns.
> CLONE - InvalidateL1Command during ST should not cancel writing the entry by ST in nontx
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-3780
> URL: https://issues.jboss.org/browse/ISPN-3780
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.3.0.Final
> Reporter: Radim Vansa
> Assignee: William Burns
> Priority: Critical
> Labels: 620
> Fix For: 6.0.0.Final
>
>
> When a node that is about to receive the entries for some segment receives an InvalidateL1Command, the command puts the key into StateConsumer.updatedKeys. After the entries from state transfer (ST) are received, the entry that was invalidated is not updated -> this results in losing the entry.
> Btw., in EntryWrappingInterceptor.visitInvalidateL1Command the trace statement logs all looked-up entries for each invalidated entry - this causes performance problems when tracing is on and there are many invalidated entries. Please do this only once.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3780) CLONE - InvalidateL1Command during ST should not cancel writing the entry by ST in nontx
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-3780?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-3780:
--------------------------------
Fix Version/s: 6.1.0.Final
(was: 6.0.0.Final)
> CLONE - InvalidateL1Command during ST should not cancel writing the entry by ST in nontx
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-3780
> URL: https://issues.jboss.org/browse/ISPN-3780
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.3.0.Final
> Reporter: Radim Vansa
> Assignee: William Burns
> Priority: Critical
> Labels: 620
> Fix For: 6.1.0.Final
>
>
> When a node that is about to receive the entries for some segment receives an InvalidateL1Command, the command puts the key into StateConsumer.updatedKeys. After the entries from state transfer (ST) are received, the entry that was invalidated is not updated -> this results in losing the entry.
> Btw., in EntryWrappingInterceptor.visitInvalidateL1Command the trace statement logs all looked-up entries for each invalidated entry - this causes performance problems when tracing is on and there are many invalidated entries. Please do this only once.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira