[JBoss JIRA] (ISPN-5104) Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5104?page=com.atlassian.jira.plugin.... ]
Work on ISPN-5104 started by William Burns.
-------------------------------------------
> Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
> ----------------------------------------------------------------------------------
>
> Key: ISPN-5104
> URL: https://issues.jboss.org/browse/ISPN-5104
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 7.0.2.Final
> Reporter: Christian Niessner
> Assignee: William Burns
>
> Hi,
> I have some test code that iterates through a cache and validates all the data in it for consistency.
> The reduced code is:
> TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
> tm.begin();
> try {
>     for (Entry<Object, Object> entry : metadataCache.entrySet()) {
>         // validation code omitted...
>     }
> } finally {
>     tm.commit();
> }
> In some circumstances the iteration starts returning the same Entry every time. I stepped into the code and the value is returned from:
> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
> This code block is commented with:
> "// We do a last check to make sure no additional values were added to our context while iterating"
> It seems to me that the value returned here never gets added to the "seenContextKeys" Set, so the iterator always returns the same key.
> Maybe a simple "seenContextKeys.add(lookedUpEntry.getKey())" next to line 70 would fix this issue. A 'break' might also make sense here; is there really a need to walk through the entire list once we have found a candidate?
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5104) Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5104?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-5104:
-------------------------------------
Good catch. Looking at it briefly, it appears the call here https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o... should have been an add instead of a contains. I will see if I can get a test going and hopefully fix this up for the 7.0.3 build.
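For illustration only, a minimal sketch of the pattern being discussed: relying on the boolean return value of Set.add() to mark a context key as seen and test it in the same call, so an entry that was already handed out is skipped instead of being returned on every next(). The surrounding loop and the names (lookedUpEntries, seenContextKeys) are assumptions that mirror the report above, not the actual TransactionAwareCloseableIterator code.
{code}
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the "last check" over the transaction context.
// Set.add() returns false if the key was already present, so using it in
// place of contains() both tests and records the key in one step.
final class ContextCheckSketch {
    static Map.Entry<Object, Object> nextFromContext(Map<Object, Object> lookedUpEntries,
                                                     Set<Object> seenContextKeys) {
        for (Map.Entry<Object, Object> lookedUpEntry : lookedUpEntries.entrySet()) {
            if (seenContextKeys.add(lookedUpEntry.getKey())) {
                // returning (or breaking) here is enough once a candidate is found
                return lookedUpEntry;
            }
        }
        return null; // nothing new in the context
    }
}
{code}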
> Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
> ----------------------------------------------------------------------------------
>
> Key: ISPN-5104
> URL: https://issues.jboss.org/browse/ISPN-5104
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 7.0.2.Final
> Reporter: Christian Niessner
>
> Hi,
> I have some test code that iterates through a cache and validates all the data in it for consistency.
> The reduced code is:
> TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
> tm.begin();
> try {
>     for (Entry<Object, Object> entry : metadataCache.entrySet()) {
>         // validation code omitted...
>     }
> } finally {
>     tm.commit();
> }
> In some circumstances the iteration starts returning the same Entry every time. I stepped into the code and the value is returned from:
> https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
> This code block is commented with:
> "// We do a last check to make sure no additional values were added to our context while iterating"
> It seems to me that the value returned here never gets added to the "seenContextKeys" Set, so the iterator always returns the same key.
> Maybe a simple "seenContextKeys.add(lookedUpEntry.getKey())" next to line 70 would fix this issue. A 'break' might also make sense here; is there really a need to walk through the entire list once we have found a candidate?
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5104) Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
by Christian Niessner (JIRA)
Christian Niessner created ISPN-5104:
----------------------------------------
Summary: Infinite loop in TransactionAwareCloseableIterator when iterating through cache...
Key: ISPN-5104
URL: https://issues.jboss.org/browse/ISPN-5104
Project: Infinispan
Issue Type: Bug
Components: Transactions
Affects Versions: 7.0.2.Final
Reporter: Christian Niessner
Hi,
I have some test code that iterates through a cache and validates all the data in it for consistency.
The reduced code is:
TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
tm.begin();
try {
    for (Entry<Object, Object> entry : metadataCache.entrySet()) {
        // validation code omitted...
    }
} finally {
    tm.commit();
}
In some circumstances the iteration starts returning the same Entry every time. I stepped into the code and the value is returned from:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
This code block is commented with:
"// We do a last check to make sure no additional values were added to our context while iterating"
It seems to me that the value returned here never gets added to the "seenContextKeys" Set, so the iterator always returns the same key.
Maybe a simple "seenContextKeys.add(lookedUpEntry.getKey())" next to line 70 would fix this issue. A 'break' might also make sense here; is there really a need to walk through the entire list once we have found a candidate?
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5103) Inefficient index updates cause high cost merges and increase overall latency
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5103?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-5103:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1175382
> Inefficient index updates cause high cost merges and increase overall latency
> -----------------------------------------------------------------------------
>
> Key: ISPN-5103
> URL: https://issues.jboss.org/browse/ISPN-5103
> Project: Infinispan
> Issue Type: Enhancement
> Components: Embedded Querying
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Gustavo Fernandes
>
> Currently every change to the index is done in Lucene by combining two operations:
> * Delete by query, using a boolean query on the id plus the entity class
> * Add
>
> Under high load, especially during merges, those numerous deletes provoke very long delays and cause high latency.
> We should instead use a simple Lucene Update to add/change documents, since internally it translates to a Delete by term plus an Add operation, and deletes by term are extremely efficient in Lucene.
> Some local tests showed the average latency of updating the index drop by a factor of 4 with this strategy, for both the SYNC and ASYNC backends.
> Regarding sharing the index between entities, which was the original motivation for the Delete-by-query-plus-Add strategy, we have two scenarios:
> * Same cache with multiple entity types: that's a non-issue, since there's obviously no id collision in this case
> * Different caches with the same index: this scenario happens when different caches share the same index, for example:
> {code}
> @Indexed(indexName = "common")
> public class Country { ... }
>
> @Indexed(indexName = "common")
> public class Currency { ... }
>
> cm.getCache("currencies").put(1, new Currency(...));
> cm.getCache("countries").put(1, new Country(...));
> {code}
> This would require a delete by query in order to persist both a Country and a Currency with id=1.
> It would also require setting "default.exclusive_index_use" to "false", with the associated cost of having to reopen the IndexWriter on every operation.
> Given that the performance gain of doing a simple Update is considerable, we should either support this corner case through extra configuration or, alternatively, generate a unique @ProvidedId that includes the entity class or the cache name, so that it works for all the cases described above.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5103) Inefficient index updates cause high cost merges and increase overall latency
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-5103:
---------------------------------------
Summary: Inefficient index updates cause high cost merges and increase overall latency
Key: ISPN-5103
URL: https://issues.jboss.org/browse/ISPN-5103
Project: Infinispan
Issue Type: Enhancement
Components: Embedded Querying
Affects Versions: 7.1.0.Alpha1, 7.0.2.Final
Reporter: Gustavo Fernandes
Currently every change to the index is done in Lucene by combining two operations:
* Delete by query, using a boolean query on the id plus the entity class
* Add
Under high load, especially during merges, those numerous deletes provoke very long delays and cause high latency.
We should instead use a simple Lucene Update to add/change documents, since internally it translates to a Delete by term plus an Add operation, and deletes by term are extremely efficient in Lucene.
Some local tests showed the average latency of updating the index drop by a factor of 4 with this strategy, for both the SYNC and ASYNC backends.
Regarding sharing the index between entities, which was the original motivation for the Delete-by-query-plus-Add strategy, we have two scenarios:
* Same cache with multiple entity types: that's a non-issue, since there's obviously no id collision in this case
* Different caches with the same index: this scenario happens when different caches share the same index, for example:
{code}
@Indexed(indexName = "common")
public class Country { ... }

@Indexed(indexName = "common")
public class Currency { ... }

cm.getCache("currencies").put(1, new Currency(...));
cm.getCache("countries").put(1, new Country(...));
{code}
This would require a delete by query in order to persist both a Country and a Currency with id=1.
It would also require setting "default.exclusive_index_use" to "false", with the associated cost of having to reopen the IndexWriter on every operation.
Given that the performance gain of doing a simple Update is considerable, we should either support this corner case through extra configuration or, alternatively, generate a unique @ProvidedId that includes the entity class or the cache name, so that it works for all the cases described above.
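To make the contrast concrete, here is a hedged sketch of the two strategies, assuming a Lucene 4.x-style API (the field names "id", "class" and "providedId" are made up for illustration; this is not the actual Hibernate Search backend code):
{code}
import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

// Sketch of the two indexing strategies contrasted in the description.
final class IndexUpdateSketch {

    // Current approach: delete by (id AND class) query, then add the new document.
    static void deleteByQueryThenAdd(IndexWriter writer, String id, String entityClass,
                                     Document newDoc) throws IOException {
        BooleanQuery query = new BooleanQuery();
        query.add(new TermQuery(new Term("id", id)), BooleanClause.Occur.MUST);
        query.add(new TermQuery(new Term("class", entityClass)), BooleanClause.Occur.MUST);
        writer.deleteDocuments(query); // query-based deletes get expensive under heavy merging
        writer.addDocument(newDoc);
    }

    // Proposed approach: a single update, which internally is a delete-by-term plus an add.
    static void updateByTerm(IndexWriter writer, String uniqueId, Document newDoc) throws IOException {
        // requires a term that is unique across all entities sharing the index,
        // e.g. an id that embeds the entity class or the cache name
        writer.updateDocument(new Term("providedId", uniqueId), newDoc);
    }
}
{code}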
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-3959) JdbcBinaryStore's expiration locks buckets indefinitely
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3959?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3959:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1176265
> JdbcBinaryStore's expiration locks buckets indefinitely
> -------------------------------------------------------
>
> Key: ISPN-3959
> URL: https://issues.jboss.org/browse/ISPN-3959
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.2.Final, 7.0.0.Alpha4
> Reporter: Radim Vansa
> Assignee: Radim Vansa
> Fix For: 7.0.0.Alpha5, 7.0.0.Final
>
>
> The buckets are locked in the eviction thread (in the main purge method) but unlocked in BucketPurger.call(), which is executed in a persistence thread. The unlock fails and the buckets stay locked indefinitely.
> Another error is that the Bucket class is not serializable.
> There is also a bug in BaseStoreTest, as it uses WithinThreadExecutor as the executor for purging, while this is usually done in a different thread. Moreover, since the purge method is not actually obliged to purge anything, the test does not exercise the purging itself, but rather the check for an expired entry when it is loaded (the contains operation). Purging should be enforced through a purge listener (calling the purge method until all entries are purged).
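For readers unfamiliar with the failure mode, a minimal standalone sketch of why a cross-thread unlock fails, assuming the bucket locks behave like standard java.util.concurrent locks (this is an illustration, not the JdbcBinaryStore code):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

// A lock acquired on one thread cannot be released from another one, which
// mirrors "locked in the eviction thread, unlocked in a persistence thread".
public class CrossThreadUnlock {
    public static void main(String[] args) throws Exception {
        ReentrantLock bucketLock = new ReentrantLock();
        bucketLock.lock(); // the "eviction" thread takes the lock

        ExecutorService persistenceExecutor = Executors.newSingleThreadExecutor();
        persistenceExecutor.submit(() -> {
            try {
                bucketLock.unlock(); // different thread: throws IllegalMonitorStateException
            } catch (IllegalMonitorStateException e) {
                System.out.println("unlock failed: " + e);
            }
        }).get();
        persistenceExecutor.shutdown();

        // the lock is still held by the first thread, i.e. "locked indefinitely"
        System.out.println("still locked: " + bucketLock.isLocked());
    }
}
{code}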
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5088) Deleted entries from (FineGrained)AtomicMap reappear in subsequent transaction
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5088?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-5088:
--------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/3174/files
> Deleted entries from (FineGrained)AtomicMap reappear in subsequent transaction
> ------------------------------------------------------------------------------
>
> Key: ISPN-5088
> URL: https://issues.jboss.org/browse/ISPN-5088
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.2.Final
> Reporter: Sanne Grinovero
> Assignee: William Burns
> Priority: Critical
> Labels: 7.0, for_OGM
> Fix For: 7.1.0.Beta1, 7.0.3.Final
>
> Attachments: Testcase-ISPN-5088.patch
>
>
> After a {{FineGrainedAtomicMap}} containing some data is deleted in a transaction and then re-created in the same transaction, the next transaction can still read the original data it contained.
> Some pseudocode:
> {code}tx1.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k1, v1);
> tx1.commit()
> tx2.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k3, v3);
> AtomicMapLookup.removeAtomicMap( cache, cacheKey );
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k2,v2);
> tx2.commit()
> tx3.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.contains(k1) == false; // FAILS!
> tx3.commit();{code}
> This apparently also applies to a normal AtomicMap when using DIST_SYNC.
> I'm attaching a full test, which results in:
> {noformat}
> Failed tests:
> RepeatableReadFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> FineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistRepeatableReadFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistRepeatableReadAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> Tests run: 5636, Failures: 6, Errors: 0, Skipped: 0
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5088) Deleted entries from (FineGrained)AtomicMap reappear in subsequent transaction
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5088?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-5088:
--------------------------------
Fix Version/s: 7.1.0.Beta1
7.0.3.Final
> Deleted entries from (FineGrained)AtomicMap reappear in subsequent transaction
> ------------------------------------------------------------------------------
>
> Key: ISPN-5088
> URL: https://issues.jboss.org/browse/ISPN-5088
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.2.Final
> Reporter: Sanne Grinovero
> Assignee: William Burns
> Priority: Critical
> Labels: 7.0, for_OGM
> Fix For: 7.1.0.Beta1, 7.0.3.Final
>
> Attachments: Testcase-ISPN-5088.patch
>
>
> After a {{FineGrainedAtomicMap}} containing some data is deleted in a transaction and then re-created in the same transaction, the next transaction can still read the original data it contained.
> Some pseudocode:
> {code}tx1.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k1, v1);
> tx1.commit()
> tx2.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k3, v3);
> AtomicMapLookup.removeAtomicMap( cache, cacheKey );
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.put(k2,v2);
> tx2.commit()
> tx3.start();
> am = AtomicMapLookup.getFineGrainedAtomicMap( cache, cacheKey, true );
> am.contains(k1) == false; // FAILS!
> tx3.commit();{code}
> This apparently also applies to a normal AtomicMap when using DIST_SYNC.
> I'm attaching a full test, which results in:
> {noformat}
> Failed tests:
> RepeatableReadFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> FineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistRepeatableReadFineGrainedAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistRepeatableReadAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> DistAtomicMapAPITest>BaseAtomicHashMapAPITest.testInsertDeleteInsertCycle:596 null
> Tests run: 5636, Failures: 6, Errors: 0, Skipped: 0
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5076) Pessimistic transactions can lose their locks when the primary owner changes
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5076?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-5076:
------------------------------------
This is not a problem if the new primary owner has just joined, as it will receive the tx data via state transfer and register backup locks before acquiring any locks. But state transfer won't transfer the transaction information (and register backup locks) to an existing backup owner, only to new owners.
We could require existing backup owners to request locally-originated txs from the primary owner at the beginning of a rebalance. This would slightly increase the amount of time transactions are blocked at the beginning of the rebalance, but it should be easy to implement.
We could also modify the ConsistentHashFactory contract to require implementations to only replace the primary owner with a joiner when rebalancing. ReplicatedConsistentHashFactory definitely doesn't do that now, and the other four implementations are very likely to need adjustments as well, so this would likely be harder to do than the first option.
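For reference, the scenario described in the issue below boils down to a pessimistic-locking cache where the lock taken by {{put}} lives only on the originator/primary owner until commit. A hedged sketch of that setup (the key, value and configuration details are made up for illustration; this is not a reproducer for the race itself):
{code}
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

public class PessimisticPutSketch {
    public static void main(String[] args) throws Exception {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.clustering().cacheMode(CacheMode.DIST_SYNC)
          .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
          .lockingMode(LockingMode.PESSIMISTIC);

        DefaultCacheManager cm = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(), cb.build());
        Cache<String, String> cache = cm.getCache();

        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
        tm.begin();
        // with pessimistic locking the lock on "k" is acquired here, on the primary
        // owner; when the originator is the primary owner, the lock is not replicated
        // to the backups
        cache.put("k", "v");
        // the write only reaches the backups with the one-phase prepare at commit time
        tm.commit();

        cm.stop();
    }
}
{code}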
> Pessimistic transactions can lose their locks when the primary owner changes
> ----------------------------------------------------------------------------
>
> Key: ISPN-5076
> URL: https://issues.jboss.org/browse/ISPN-5076
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 7.0
> Fix For: 7.1.0.Final
>
>
> In a pessimistic cache, if a transaction {{T1}} has a {{put(k, v)}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but it is not replicated to the backup(s).
> If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months
[JBoss JIRA] (ISPN-5076) Pessimistic transactions can lose their locks when the primary owner changes
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5076?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5076:
-------------------------------
Description:
In a pessimistic cache, if a transaction {{T1}} has a {{put(k, v)}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but it is not replicated to the backup(s).
If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
was:
In a pessimistic cache, if a transaction {{T1}} has a {{put(k, v})}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but it is not replicated to on the backup(s).
If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
> Pessimistic transactions can lose their locks when the primary owner changes
> ----------------------------------------------------------------------------
>
> Key: ISPN-5076
> URL: https://issues.jboss.org/browse/ISPN-5076
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 7.0
> Fix For: 7.1.0.Final
>
>
> In a pessimistic cache, if a transaction {{T1}} has a {{put(k, v)}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but it is not replicated to the backup(s).
> If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
10 years, 2 months