[JBoss JIRA] (ISPN-3665) SingleFileStore is not thread-safe for passivation
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3665?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-3665:
------------------------------------
The flush was needed with the bucket-based cache store because it opened the backing file every time it wanted to read or write an entry. The new single-file store uses the same file channel/descriptor for all operations, so all threads *should* see the same value.
On the other hand, I think there may be a problem with the SingleFileStore locking: e.g. the free() method doesn't wait for concurrent readers to unlock the entry, so it could cause those readers to read an empty entry. We probably need more logging in there as well.
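To illustrate the locking concern, here is a minimal sketch of per-entry read-write locking where free() waits for concurrent readers; this is not SingleFileStore's actual code, and the class and parameter names are placeholders:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical per-entry lock: free() takes the write lock, so it blocks
// until every concurrent reader has released the read lock, instead of
// reclaiming the entry's space while a read is still in flight.
class FileEntryLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    byte[] read(Supplier<byte[]> loadFromChannel) {
        lock.readLock().lock();
        try {
            return loadFromChannel.get();
        } finally {
            lock.readLock().unlock();
        }
    }

    void free(Runnable reclaimSpace) {
        lock.writeLock().lock(); // waits for all readers to unlock the entry
        try {
            reclaimSpace.run();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
{code}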
> SingleFileStore is not thread-safe for passivation
> --------------------------------------------------
>
> Key: ISPN-3665
> URL: https://issues.jboss.org/browse/ISPN-3665
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 6.0.0.CR1
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Blocker
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: Test.java
>
>
> SingleFileStore never makes use of FileChannel.force(...) to flush changes to disk. This causes problems for the passivation use case.
> If one thread evicts a cache entry and, immediately afterwards, another thread attempts to read the same entry, Cache.get(...) can return null, because the entry was never flushed to disk.
> I've attached a test to reproduce the problem.
> I also ran the same test with FileChannel.force(false) added to the write(...) method, and the test succeeds (a sketch follows below).
> A proper fix should probably make this a configurable property (as it was with the old file store implementation). It would be nice if the flush could be deferred until just before tx commit, but, offhand, I don't know how feasible that is.
> I suspect this lack of flush also accounts for much of the bold claim of a 100x performance improvement over the old file store implementation.
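> As a minimal sketch of the workaround above (the class, field and method names are illustrative, not SingleFileStore's actual API):
> {code}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.FileChannel;
> import java.nio.file.Path;
> import java.nio.file.StandardOpenOption;
>
> // Hypothetical store that forces the channel after each write, as in the
> // FileChannel.force(false) experiment described above.
> class FlushingFileStore {
>     private final FileChannel channel;
>
>     FlushingFileStore(Path file) throws IOException {
>         channel = FileChannel.open(file, StandardOpenOption.CREATE,
>                 StandardOpenOption.READ, StandardOpenOption.WRITE);
>     }
>
>     void write(long offset, ByteBuffer data) throws IOException {
>         channel.write(data, offset);
>         // force(false) flushes the file content (but not its metadata)
>         // to the storage device before the write is reported complete.
>         channel.force(false);
>     }
> }
> {code}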
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3599) CommitCommand with replayed PrepareCommand executes rollback and then commit
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3599?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-3599:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> CommitCommand with replayed PrepareCommand executes rollback and then commit
> ----------------------------------------------------------------------------
>
> Key: ISPN-3599
> URL: https://issues.jboss.org/browse/ISPN-3599
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer, Transactions
> Affects Versions: 6.0.0.CR1
> Reporter: Radim Vansa
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: jdg62GAblocker
> Fix For: 6.0.0.CR2, 6.0.0.Final
>
>
> During state transfer in a tx cache, a node can receive a {{CommitCommand}} from another node. After the node gets the transaction data for the affected segments, it creates the transaction with {{missingLookedUpEntries=true}} and the {{CommitCommand}} can be executed.
> In this command's {{perform(...)}}, the transaction is *first* marked as completed, and only then does the command enter the interceptor chain. There, a {{PrepareCommand}} is created in {{StateTransferInterceptor.visitCommitCommand}}, but after it is processed, {{TxInterceptor}} finds that the transaction is already completed and executes a {{RollbackCommand}}, clearing locks etc.
> Nevertheless, {{StateTransferInterceptor}} executes the initial {{CommitCommand}} afterwards. I suspect that it may be executed without the locks held.
> In any case, it is not correct to execute both a commit and a rollback on the same transaction.
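> A minimal model of the ordering problem described above, in plain Java (not Infinispan's real classes; the names are stand-ins for the transaction table and the interceptor chain):
> {code}
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
>
> class CommitOrderingModel {
>     private final Set<Long> completedTxs = ConcurrentHashMap.newKeySet();
>
>     void performCommit(long txId, Runnable interceptorChain) {
>         completedTxs.add(txId); // marked completed *before* the chain runs
>         // The chain may replay the PrepareCommand; a completion check like
>         // TxInterceptor's then sees the tx as already completed and rolls
>         // it back, yet the CommitCommand is still executed afterwards.
>         interceptorChain.run();
>     }
>
>     boolean isCompleted(long txId) {
>         return completedTxs.contains(txId);
>     }
> }
> {code}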
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-263) Handle cluster partitions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-263?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-263:
-------------------------------
Summary: Handle cluster partitions (was: Handle JGroups MERGE events to help deal with split brains)
> Handle cluster partitions
> -------------------------
>
> Key: ISPN-263
> URL: https://issues.jboss.org/browse/ISPN-263
> Project: Infinispan
> Issue Type: Feature Request
> Components: Distributed Cache
> Reporter: Manik Surtani
> Assignee: Manik Surtani
> Labels: MERGE, split_brain
>
> JGroups already detects split brains and issues a callback; the cache layer needs to decide what to do. The idea is to implement a few canned policies (restart, wipe, etc.) and to allow custom handlers to be attached as well (one possible shape is sketched below).
> Analogous to JBCACHE-471
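> One possible shape for such a policy, sketched in plain Java (none of these types exist in Infinispan; the names are purely illustrative):
> {code}
> import java.util.List;
>
> // Hypothetical callback invoked when JGroups reports a merge of subgroups.
> interface PartitionHandler {
>     void onMerge(List<List<String>> subgroupMembers);
> }
>
> // Canned policies as proposed above; a custom handler would simply be
> // another PartitionHandler implementation registered by the user.
> enum CannedPartitionPolicy implements PartitionHandler {
>     RESTART {
>         public void onMerge(List<List<String>> subgroups) {
>             // restart the affected caches from scratch
>         }
>     },
>     WIPE {
>         public void onMerge(List<List<String>> subgroups) {
>             // clear local state and re-sync from the merged cluster
>         }
>     }
> }
> {code}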
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3355) Add support for clustered listeners
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3355?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-3355:
-------------------------------------
Suggested design: https://github.com/infinispan/infinispan/wiki/Clustered-listeners
> Add support for clustered listeners
> -----------------------------------
>
> Key: ISPN-3355
> URL: https://issues.jboss.org/browse/ISPN-3355
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Mircea Markus
> Assignee: Mircea Markus
>
> As opposed to the current listener approach in Infinispan (a listener instance is invoked on the data owners), this JIRA is about adding support for a clustered listener: the same listener instance is notified regardless of data ownership (RPC calls are involved).
> Because the listener notification might involve an RPC, it should be possible to specify filters on these listeners.
> Clustered listener support opens the way for some interesting architectures (a possible registration is sketched after this list):
> * persistent/continuous queries: the query is transformed into a filter, and on each notification the (stateful) listener updates the query state
> * simplistic CEP can be built on top of the persistent queries described above
> * remote/Hot Rod notifications might be based on clustered listeners as well
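> To give an idea, a registration might look like this; the {{clustered}} attribute is part of the proposal, not of the current API, and the listener class is made up for illustration:
> {code}
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachelistener.annotation.CacheEntryModified;
> import org.infinispan.notifications.cachelistener.event.CacheEntryModifiedEvent;
>
> // Proposed usage: a single listener instance notified on this node for
> // every modification in the cluster, regardless of data ownership.
> @Listener(clustered = true)
> public class PriceChangeListener {
>     @CacheEntryModified
>     public void onModified(CacheEntryModifiedEvent<String, Double> event) {
>         if (!event.isPre()) {
>             System.out.println(event.getKey() + " -> " + event.getValue());
>         }
>     }
> }
> {code}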
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3354) Multiple events on the local node with Infinispan 5.3.0-final
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3354?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3354:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1024942
> Multiple events on the local node with Infinispan 5.3.0-final
> -------------------------------------------------------------
>
> Key: ISPN-3354
> URL: https://issues.jboss.org/browse/ISPN-3354
> Project: Infinispan
> Issue Type: Bug
> Components: Listeners
> Affects Versions: 5.3.0.Final
> Reporter: Luca Zenti
> Assignee: Mircea Markus
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: TestInfinispanDuplicatedEvents.java
>
>
> After upgrading to Infinispan 5.3.0.Final I found a strange, intermittent problem in my application. Digging a bit deeper, I found that it is caused by CacheEntry events being raised twice for some keys on the local node (the node where the cache operation is invoked).
> I was able to reproduce the problem and I wrote the attached test case.
> The problem happens regardless of the cluster mode, but only with non-transactional caches; I think this is because, with transactional caches, the events are raised on commit.
> Also, my application used to work with an interceptor rather than an event listener, so I actually found the problem when I saw my interceptor occasionally being executed 3 times with 2 nodes.
> I'm not sure whether the command and the interceptor chain are really meant to be executed twice on the local node, but the resulting behaviour of the events looks like a bug.
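> To give an idea of what such a test observes (this is an illustrative counter, not the attached test itself):
> {code}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.atomic.AtomicInteger;
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
> import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;
>
> // Counts post-events per key; after a single put on the local node,
> // a count greater than 1 demonstrates the duplicated notification.
> @Listener
> public class EventCounter {
>     final Map<Object, AtomicInteger> counts = new ConcurrentHashMap<>();
>
>     @CacheEntryCreated
>     public void onCreated(CacheEntryCreatedEvent<?, ?> event) {
>         if (!event.isPre()) {
>             counts.computeIfAbsent(event.getKey(), k -> new AtomicInteger())
>                   .incrementAndGet();
>         }
>     }
> }
> {code}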
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3432) Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3432?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3432:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1024936
> Data put to index enabled cache with Infinispan Directory provider using Async. JDBC StringBased CacheStore fails
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3432
> URL: https://issues.jboss.org/browse/ISPN-3432
> Project: Infinispan
> Issue Type: Bug
> Components: Querying
> Affects Versions: 6.0.0.Alpha1
> Reporter: Anna Manukyan
> Assignee: Sanne Grinovero
> Labels: jdg620_dm, jdg62GAblocker
> Attachments: async-config.xml
>
>
> Hi,
> this issue is related to ISPN-3090, but I am filing this case separately to give a detailed explanation of the configuration and the thrown exceptions.
> The issue relates to performance tests for an index-enabled Infinispan cache, configured with the Infinispan Directory and an async JDBC string-based cache store.
> The tests run on 4 nodes and perform puts/gets on all nodes with many threads.
> The problem is that, during the data puts, the following exceptions are thrown continuously:
> {code}
> 04:04:05,633 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:_InstallBenchmarkStage_0 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> ......
> 04:14:21,605 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index query-1) HSEARCH000058: Exception occurred org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> Primary Failure:
> Entity org.radargun.cachewrappers.InfinispanQueryWrapper$QueryableData Id S:key_0_0_0000000000000017 Work Type org.hibernate.search.backend.UpdateLuceneWork
> org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='query'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:554)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1138)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:148)
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:115)
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> You can find the cache configuration attached.
> One more thing to mention: if the following line is added to the cache configuration:
> {code}
> <property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
> {code}
> then the issue is gone and no locking issue appears.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3518) 1PC can cause a window of inconsistency with L1 invalidation
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3518?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3518:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1024934
> 1PC can cause a window of inconsistency with L1 invalidation
> ------------------------------------------------------------
>
> Key: ISPN-3518
> URL: https://issues.jboss.org/browse/ISPN-3518
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.3.0.Final
> Reporter: William Burns
> Assignee: William Burns
> Labels: jdg620_dm, jdg62GAblocker
>
> The L1TxInterceptor currently doesn't block on L1 invalidations during a 1PC. This can cause an inconsistent view of the data across non-owner nodes.
> Example:
> {quote}
> Node A owns k with value of v1
> Node B has k in L1 with value of v1
> tx1 started
> Node A put k -> v2
> Node A sends invalidation
> Node A commits
> tx1 completed
> tx2 started
> Node B get k returns v1 from L1
> tx2 completed
> Node B gets invalidation for k
> tx3 started
> Node B get k remotely retrieves v2 from Node A
> tx3 completed
> {quote}
> We need to make sure that all L1 invalidations in Tx mode are completed before completing the transaction.
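> A minimal model of that requirement, with plain Java futures standing in for the RPC acknowledgements (not Infinispan's actual interceptor API):
> {code}
> import java.util.List;
> import java.util.concurrent.CompletableFuture;
>
> // The owner completes the 1PC commit only after every non-owner has
> // acknowledged the L1 invalidation, closing the window in which Node B
> // could still read the stale value v1 from its L1 cache.
> class L1InvalidationBarrier {
>     CompletableFuture<Void> awaitInvalidations(List<CompletableFuture<Void>> acks) {
>         return CompletableFuture.allOf(acks.toArray(new CompletableFuture[0]));
>     }
> }
> {code}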
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira