[JBoss JIRA] (ISPN-3617) Inconsistent L1 in non-tx distributed cache
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3617?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3617:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1017796|https://bugzilla.redhat.com/show_bug.cgi?id=1017796] from POST to ON_QA
> Inconsistent L1 in non-tx distributed cache
> -------------------------------------------
>
> Key: ISPN-3617
> URL: https://issues.jboss.org/browse/ISPN-3617
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.7.Final
> Reporter: Radim Vansa
> Assignee: William Burns
> Priority: Critical
> Labels: jdg62blocker
> Fix For: 6.0.0.CR2, 6.0.0.Final
>
>
> When the change is replicated to a backup owner, the InvalidateL1Command is sent to the backup owners before the entry is committed in EntryWrappingInterceptor (the WriteCommand is performed in parallel with sending the invalidation command, but the node then waits until the invalidation request is acked). If a GET is executed between the invalidation and the commit of the entry, the response contains an outdated result and the L1 entry will not be invalidated until the next write operation.
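For illustration, a minimal two-node sketch of the scenario, assuming an embedded non-tx DIST_SYNC cache with L1 and the default clustered JGroups stack; it exercises the concurrent put/get but does not deterministically hit the invalidation-vs-commit window:
{code}
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class L1StaleReadSketch {
    public static void main(String[] args) throws Exception {
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        // non-tx DIST cache with L1 enabled, as in the report
        cfg.clustering().cacheMode(CacheMode.DIST_SYNC).l1().enable();

        EmbeddedCacheManager cmA = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(), cfg.build());
        EmbeddedCacheManager cmB = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(), cfg.build());
        Cache<String, String> nodeA = cmA.getCache();
        Cache<String, String> nodeB = cmB.getCache();

        nodeA.put("k", "v1");
        nodeB.get("k");   // whichever node is not an owner now caches "v1" in L1

        // Concurrent write + read: if the GET lands between the L1 invalidation
        // and the entry commit on the backup owner, the reader can re-cache the
        // old value and keep serving it until the next write.
        Thread writer = new Thread(() -> nodeA.put("k", "v2"));
        writer.start();
        String duringWrite = nodeB.get("k");
        writer.join();

        System.out.println("read during write: " + duringWrite
                + ", read after write: " + nodeB.get("k"));
        cmA.stop();
        cmB.stop();
    }
}
{code}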
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3665) SingleFileStore is not thread-safe for passivation
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3665?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-3665:
------------------------------------
The flush was needed with the bucket-based cache store because it opened the backing file every time it wanted to read or write an entry. The new single-file store uses the same file channel/descriptor for all the operations, so all the threads *should* see the same value.
On the other hand, I think there may be a problem with the SingleFileStore locking... e.g. the free() method doesn't wait for concurrent readers to unlock the entry, so I think it could cause those readers to read an empty entry. And we probably need more logging in there as well...
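A generic sketch of per-entry locking that would make {{free()}} wait for in-flight readers; the names below are illustrative, not the actual SingleFileStore code:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: per-key read/write locks so that freeing an entry's file
// space waits for in-flight readers instead of racing with them.
public class EntryLocks {
    private final ConcurrentMap<Object, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(Object key) {
        return locks.computeIfAbsent(key, k -> new ReentrantReadWriteLock());
    }

    public <T> T read(Object key, java.util.function.Supplier<T> readEntry) {
        ReentrantReadWriteLock l = lockFor(key);
        l.readLock().lock();
        try {
            return readEntry.get();            // read the entry from the channel
        } finally {
            l.readLock().unlock();
        }
    }

    public void free(Object key, Runnable releaseSpace) {
        ReentrantReadWriteLock l = lockFor(key);
        l.writeLock().lock();                  // blocks until concurrent readers finish
        try {
            releaseSpace.run();                // now safe to reuse the entry's file range
        } finally {
            l.writeLock().unlock();
        }
    }
}
{code}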
> SingleFileStore is not thread-safe for passivation
> --------------------------------------------------
>
> Key: ISPN-3665
> URL: https://issues.jboss.org/browse/ISPN-3665
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 6.0.0.CR1
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: Test.java
>
>
> SingleFileStore never makes use of FileChannel.force(...) to flush changes to disk. This causes problems for the passivation use case.
> If one thread evicts a cache entry and, immediately afterwards, another thread attempts to read the same cache entry, Cache.get(...) can return null. This is because the entry is never flushed to disk.
> I've attached a test to reproduce the problem.
> I also ran the same test with the addition of FileChannel.force(false) to the write(...) method, and the test succeeds.
> A proper fix should probably make this a configurable property (as it was with the old file store implementation). It would be nice if the flush could be deferred until just before tx commit, but, off hand, I don't know how feasible that is.
> I suspect this lack of flush also accounts for much of the bold claim of a 100x performance improvement over the old file store implementation.
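A sketch of the suggested change on the write path, assuming a configurable sync flag ({{syncWrites}} is a hypothetical property name; only {{FileChannel.force(boolean)}} is standard API):
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: write an entry's bytes and optionally force them to disk, so that a
// passivated entry is durable before the in-memory copy is discarded.
public class FlushingWriter {
    private final FileChannel channel;
    private final boolean syncWrites;   // hypothetical configuration property

    public FlushingWriter(FileChannel channel, boolean syncWrites) {
        this.channel = channel;
        this.syncWrites = syncWrites;
    }

    public void write(long offset, ByteBuffer data) throws IOException {
        channel.write(data, offset);
        if (syncWrites) {
            // force(false): flush file content but not metadata, as in the test
            // mentioned above that made the passivation race disappear
            channel.force(false);
        }
    }
}
{code}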
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3599) CommitCommand with replayed PrepareCommand executes rollback and then commit
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3599?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-3599:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> CommitCommand with replayed PrepareCommand executes rollback and then commit
> ----------------------------------------------------------------------------
>
> Key: ISPN-3599
> URL: https://issues.jboss.org/browse/ISPN-3599
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer, Transactions
> Affects Versions: 6.0.0.CR1
> Reporter: Radim Vansa
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: jdg62GAblocker
> Fix For: 6.0.0.CR2, 6.0.0.Final
>
>
> During state transfer in a tx cache, a node can receive a {{CommitCommand}} from another node. After the node gets the transaction data for the affected segments, it creates the transaction with {{missingLookedUpEntries=true}} and the {{CommitCommand}} can be executed.
> In this command's {{perform(...)}}, the transaction is *first* marked as completed, and only then does the command enter the interceptor chain. There, a {{PrepareCommand}} is created in {{StateTransferInterceptor.visitCommitCommand}}, but after it is processed the {{TxInterceptor}} finds out that the transaction is already completed and executes a {{RollbackCommand}}, clearing locks etc.
> Nevertheless, {{StateTransferInterceptor}} executes the initial {{CommitCommand}} afterwards. I suspect that this may be executed without the locks held.
> Anyway, it is not correct to execute both commit and rollback on the same transaction.
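A self-contained toy model of that ordering (all names are hypothetical, not Infinispan internals), showing how marking the tx completed up front leads to both a rollback and a commit being reported for the same transaction:
{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Toy model only: the "completed" flag is set before the commit flows through
// the chain, so a replayed prepare is treated as leftover state and rolled back,
// yet the commit still runs afterwards.
public class CommitReplayToy {
    static final Set<String> completedTxs = ConcurrentHashMap.newKeySet();

    static void commit(String txId, boolean replayPrepare) {
        completedTxs.add(txId);                    // problematic: marked completed up front
        if (replayPrepare) {
            // state transfer replays the prepare; the completion check now
            // sees a completed tx and rolls back
            if (completedTxs.contains(txId)) {
                System.out.println(txId + ": rollback (tx already completed)");
            }
        }
        System.out.println(txId + ": commit");     // the commit still executes afterwards
    }

    public static void main(String[] args) {
        commit("tx-1", true);   // prints both a rollback and a commit for the same tx
    }
}
{code}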
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-263) Handle cluster partitions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-263?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-263:
-------------------------------
Summary: Handle cluster partitions (was: Handle JGroups MERGE events to help deal with split brains)
> Handle cluster partitions
> -------------------------
>
> Key: ISPN-263
> URL: https://issues.jboss.org/browse/ISPN-263
> Project: Infinispan
> Issue Type: Feature Request
> Components: Distributed Cache
> Reporter: Manik Surtani
> Assignee: Manik Surtani
> Labels: MERGE, split_brain
>
> JGroups already detects split brains and issues a callback. The cache layer needs to decide what to do. The idea is to implement a few canned policies (restart, wipe, etc) and allow custom handlers to be attached as well.
> Analogous to JBCACHE-471
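A hedged sketch of what a pluggable partition-handling SPI could look like; the interface and the policy names are hypothetical, not an existing Infinispan API:
{code}
import java.util.List;

// Hypothetical SPI for reacting to a detected partition or merge; illustrative only.
interface PartitionHandler {
    /** Called when the membership splits into several partitions. */
    void onSplit(List<String> survivingMembers, List<String> lostMembers);

    /** Called when previously split partitions merge back together. */
    void onMerge(List<List<String>> mergedPartitions);
}

// One "canned" policy along the lines of the description: wipe local state on merge.
class WipeOnMergePolicy implements PartitionHandler {
    @Override
    public void onSplit(List<String> survivingMembers, List<String> lostMembers) {
        System.out.println("split detected, continuing with " + survivingMembers);
    }

    @Override
    public void onMerge(List<List<String>> mergedPartitions) {
        System.out.println("merge detected, wiping local state and rejoining");
        // wipe + restart would go here
    }
}
{code}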
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3355) Add support for clustered listeners
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3355?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-3355:
-------------------------------------
Suggested design: https://github.com/infinispan/infinispan/wiki/Clustered-listeners
> Add support for clustered listeners
> -----------------------------------
>
> Key: ISPN-3355
> URL: https://issues.jboss.org/browse/ISPN-3355
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Mircea Markus
> Assignee: Mircea Markus
>
> As opposed to the current listener approach in Infinispan (a listener instance is invoked on the data owners), this JIRA is about adding support for a cluster listener: the same listener instance is notified regardless of data ownership (RPC calls involved).
> Because the listener notification might involve an RPC, it is useful to be able to specify filters on these listeners.
> The clustered listener support opens the way for some interesting architectures:
> * persistent/continuous queries: the query is transformed into a filter. On each notification, the (stateful) listener updates the query state
> * simplistic CEP (complex event processing) can be built on top of the persistent query described above
> * remote/hotrod notifications might be based on clustered listeners as well.
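A sketch of how a filtered cluster listener might look under the suggested design; the {{clustered = true}} attribute and the value-based filtering follow the linked proposal and are not a released API at the time of writing:
{code}
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
import org.infinispan.notifications.cachelistener.event.CacheEntryEvent;

// Assumed shape: a single listener instance notified on one node regardless of
// ownership; "clustered = true" follows the linked design proposal.
@Listener(clustered = true)
public class PriceAlertListener {

    @CacheEntryCreated
    @CacheEntryModified
    public void onChange(CacheEntryEvent<String, Double> event) {
        // A "filter" in the continuous-query sense: only react to large values,
        // keeping cross-node RPC traffic down.
        Double value = event.getValue();
        if (value != null && value > 1000.0) {
            System.out.println("alert for " + event.getKey() + " = " + value);
        }
    }
}
{code}
Under that design, registering the instance with {{cache.addListener(new PriceAlertListener())}} on any single node would be enough.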
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3354) Multiple events on the local node with Infinispan 5.3.0-final
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3354?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3354:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1024942
> Multiple events on the local node with Infinispan 5.3.0-final
> -------------------------------------------------------------
>
> Key: ISPN-3354
> URL: https://issues.jboss.org/browse/ISPN-3354
> Project: Infinispan
> Issue Type: Bug
> Components: Listeners
> Affects Versions: 5.3.0.Final
> Reporter: Luca Zenti
> Assignee: Mircea Markus
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: TestInfinispanDuplicatedEvents.java
>
>
> After upgrading to Infinispan 5.3.0-final I found a strange "intermittent" problem in my application. Digging a bit deeper, I found out it is due to CacheEntry events raised twice for some keys on the local node (the node where the cache operation is invoked).
> I was able to reproduce the problem and I wrote the attached test case.
> The problem happens regardless of the cluster mode, but only with non-transactional caches. I think this is due to the fact that with transactional caches the events are raised on commit.
> Also, my application used to work with an interceptor rather than an event listener, so I actually found the problem when I saw my interceptor being occasionally executed 3 times with 2 nodes.
> I'm not sure whether the command and the interceptor chain are really meant to be executed twice on the local node, but the resulting behaviour of the events sounds like a bug.
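For illustration, a listener of the kind such a test could use to count post-events per key on the local node (illustrative names, not the attached test class):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

// Counts non-"pre" created events per key; on an affected non-tx cache the
// local node can report more than one for some keys.
@Listener
public class CountingListener {
    private final ConcurrentMap<Object, AtomicInteger> counts = new ConcurrentHashMap<>();

    @CacheEntryCreated
    public void created(CacheEntryCreatedEvent<Object, Object> event) {
        if (!event.isPre()) {
            counts.computeIfAbsent(event.getKey(), k -> new AtomicInteger()).incrementAndGet();
        }
    }

    public int countFor(Object key) {
        AtomicInteger c = counts.get(key);
        return c == null ? 0 : c.get();
    }
}
{code}
With two nodes and puts issued from one of them, {{countFor(key)}} returning 2 on the originating node is the duplication described here.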
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3432) Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3432?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3432:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1024936
> Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3432
> URL: https://issues.jboss.org/browse/ISPN-3432
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 6.0.0.Alpha1
> Reporter: Anna Manukyan
> Assignee: Sanne Grinovero
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: async-config.xml
>
>
> Hi,
> this issue is related to ISPN-3090, but I thought to report this case separately in order to give a detailed explanation of the configuration and the thrown exceptions.
> The issue relates to performance tests for an index-enabled Infinispan cache, configured with the Infinispan directory and an async JDBC string-based cache store.
> The tests are running on 4 nodes and performing puts/gets on all nodes with many threads.
> The problem is that, during data put, the following exceptions are thrown continuously:
> {code}
> 04:04:05,633 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:_InstallBenchmarkStage_0 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> ......
> 04:14:21,605 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:key_0_0_0000000000000017 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> You can find the cache configuration attached.
> Yet another thing to mention:
> if the following line is added to the cache configuration:
> {code}
> <property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
> {code}
> then the issue is gone - no lock issue appears.
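For reference, the same property can presumably also be set through the programmatic indexing configuration; a sketch assuming the {{ConfigurationBuilder.indexing().addProperty(...)}} API:
{code}
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Sketch: enable indexing and route index writes through the
// InfinispanIndexManager, mirroring the XML property shown above.
public class IndexManagerConfig {
    public static Configuration build() {
        return new ConfigurationBuilder()
            .indexing()
                .enable()
                .addProperty("default.indexmanager",
                             "org.infinispan.query.indexmanager.InfinispanIndexManager")
            .build();
    }
}
{code}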
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira