[JBoss JIRA] (ISPN-2995) FineGrainedAtomicHashMap may not lock all the composite keys in optimistic locking
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2995?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2995:
--------------------------------
Fix Version/s: 5.3.0.Final
(was: 5.3.0.CR1)
> FineGrainedAtomicHashMap may not lock all the composite keys in optimistic locking
> ----------------------------------------------------------------------------------
>
> Key: ISPN-2995
> URL: https://issues.jboss.org/browse/ISPN-2995
> Project: Infinispan
> Issue Type: Bug
> Components: Fine-grained API
> Affects Versions: 5.3.0.Alpha1
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
> Labels: atomic_map, locking
> Fix For: 5.3.0.Final
>
>
> In OptimisticLockingInterceptor, we are collecting the composite keys from ApplyDeltaCommand if the key belongs to the node:
> {code}
> case ApplyDeltaCommand.COMMAND_ID:
>    ApplyDeltaCommand command = (ApplyDeltaCommand) wc;
>    if (cdl.localNodeIsOwner(command.getKey())) {
>       Object[] compositeKeys = command.getCompositeKeys();
>       set.addAll(Arrays.asList(compositeKeys));
>    }
>    break;
> {code}
> However, when we go to acquire the lock, the node only acquires it if it is the primary owner of that key:
> {code}
> protected final void lockAndRegisterBackupLock(TxInvocationContext ctx, Object key, long lockTimeout, boolean skipLocking) throws InterruptedException {
>    if (cdl.localNodeIsPrimaryOwner(key)) {
>       lockKeyAndCheckOwnership(ctx, key, lockTimeout, skipLocking);
>    } else if (cdl.localNodeIsOwner(key)) {
>       ctx.getCacheTransaction().addBackupLockForKey(key);
>    }
> }
> {code}
> The composite keys are collected based on ownership of the delta-aware key (command.getKey()), but the lock is acquired based on ownership of each composite key, so a collected composite key may end up neither locked nor registered as a backup lock. The CompositeKey should always acquire the lock.
> This is probably a bug. Add a unit test to verify that locking works as expected for AtomicHashMap.
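> A minimal sketch of one possible adjustment (CompositeKey and getDeltaAwareKey() are assumed names for illustration, not the actual Infinispan types, and this is not the committed fix): route the ownership check for a composite key through its enclosing delta-aware key, so every composite key collected for a locally owned ApplyDeltaCommand ends up either locked or registered as a backup lock.
> {code}
> // Hypothetical variation of the method quoted above: ownership is decided by
> // the enclosing delta-aware key, while the lock itself is still taken on the
> // composite key. CompositeKey and getDeltaAwareKey() are assumptions.
> protected final void lockAndRegisterBackupLock(TxInvocationContext ctx, Object key, long lockTimeout, boolean skipLocking) throws InterruptedException {
>    Object ownershipKey = (key instanceof CompositeKey)
>          ? ((CompositeKey) key).getDeltaAwareKey()
>          : key;
>    if (cdl.localNodeIsPrimaryOwner(ownershipKey)) {
>       lockKeyAndCheckOwnership(ctx, key, lockTimeout, skipLocking);
>    } else if (cdl.localNodeIsOwner(ownershipKey)) {
>       ctx.getCacheTransaction().addBackupLockForKey(key);
>    }
> }
> {code}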
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2976) Log4J dependencies in codebase to be cleaned up
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2976?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2976:
--------------------------------
Fix Version/s: (was: 5.3.0.CR1)
> Log4J dependencies in codebase to be cleaned up
> -----------------------------------------------
>
> Key: ISPN-2976
> URL: https://issues.jboss.org/browse/ISPN-2976
> Project: Infinispan
> Issue Type: Task
> Affects Versions: 5.2.5.Final
> Reporter: Manik Surtani
> Assignee: Mircea Markus
> Fix For: 5.3.0.Final
>
>
> When attempting to move to Log4J 2.0, I've noticed a number of hard deps on log4j classes.
> {{SampleConfigFilesCorrectnessTest}} - this class makes use of a custom appender to analyse what a user is being warned of when a config file is parsed. Why are we using Log4J for this? Our own logging interface should be mocked and messages captured directly (see the sketch after this list).
> {{RehashStressTest}} and {{NucleotideCache}} - this seems like a bug; I presume the author intended to use {{org.infinispan.logging.Log}}.
> {{CompressedFileAppender}} and {{ThreadNameFilter}} - can these be written in a way that works with Log4J 1.x as well as 2.x? Or have the SPIs changed that much?
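> A minimal sketch of the suggested direction for {{SampleConfigFilesCorrectnessTest}} (the stand-in {{Log}} interface, the Mockito-based capture and the warning text are assumptions, not the actual Infinispan logging API or test code):
> {code}
> // Illustrative only: mock a stand-in logging interface and assert on the
> // captured warning directly, with no Log4J appender involved.
> import static org.junit.Assert.assertTrue;
> import static org.mockito.Mockito.*;
> 
> import org.mockito.ArgumentCaptor;
> 
> public class ConfigWarningCaptureSketch {
>    interface Log { void warn(Object message); }   // stand-in for the real logging interface
> 
>    public static void main(String[] args) {
>       Log log = mock(Log.class);
>       // ... parse the sample configuration with the mocked log injected ...
>       log.warn("Element 'foo' is deprecated");     // simulate the warning the parser emits
>       ArgumentCaptor<Object> warning = ArgumentCaptor.forClass(Object.class);
>       verify(log, atLeastOnce()).warn(warning.capture());
>       assertTrue(warning.getValue().toString().contains("deprecated"));
>    }
> }
> {code}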
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2965) L1 and early invalidation leaves inconsistent state
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2965?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2965:
--------------------------------
Fix Version/s: (was: 5.3.0.CR1)
> L1 and early invalidation leaves inconsistent state
> ---------------------------------------------------
>
> Key: ISPN-2965
> URL: https://issues.jboss.org/browse/ISPN-2965
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Transactions
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Adrian Nistor
> Labels: 5.2.x
> Fix For: 5.3.0.Final
>
>
> In a distributed transactional cache with L1 enabled I can observe the following.
> Prepare cache by adding an entry with Cache.put( k, v1 ).
> 1. Node B starts by adding a changed value: Cache.put( k, v2 ).
> 2. On Node B, TxDistributionInterceptor.visitPrepareCommand calls flushL1Caches, which sends the L1 invalidations.
> 3. Node A calls Cache.get( k ), retrieves v1, and stores this value in L1.
> 4. Node B proceeds with the transaction.
> The result is that Node A answers subsequent Cache.get(k) calls with v1 while Node B answers with v2.
> It seems the invalidation is either sent too early or must be synchronized in some way with the transaction.
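> Read as a timeline across the two nodes, the interleaving looks like this (a sketch only, not a deterministic reproducer; cacheA/cacheB are the Cache instances on the two nodes and tm is Node B's TransactionManager):
> {code}
> // Node B:
> tm.begin();
> cacheB.put(k, v2);               // step 1: write v2 inside a transaction
> // tm.commit() starts; during prepare, flushL1Caches sends the L1
> // invalidation to Node A (step 2) ...
> 
> // Node A, after the invalidation but before Node B's commit completes:
> Object stale = cacheA.get(k);    // step 3: returns v1 and stores it in Node A's L1
> 
> // Node B: ... the transaction then proceeds and commits v2 (step 4).
> 
> // Result: cacheA.get(k) keeps answering v1 from L1, cacheB.get(k) answers v2.
> {code}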
> Cache config:
> <namedCache name="entity">
> <jmxStatistics enabled="true" />
> <clustering mode="dist">
> <stateTransfer fetchInMemoryState="false" timeout="20000" />
> <async />
> <l1 enabled="true" />
> <hash numOwners="1"/>
> </clustering>
> <locking isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="15000" useLockStriping="false" />
> <eviction maxEntries="10000" strategy="LRU" />
> <expiration maxIdle="100000" wakeUpInterval="5000"/>
> <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
> <transaction transactionMode="TRANSACTIONAL" autoCommit="false" lockingMode="OPTIMISTIC"/>
> </namedCache>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2958?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2958:
--------------------------------
Fix Version/s: 5.3.0.Final
(was: 5.3.0.CR1)
> Lucene Directory Read past EOF
> ------------------------------
>
> Key: ISPN-2958
> URL: https://issues.jboss.org/browse/ISPN-2958
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 5.2.1.Final
> Reporter: Clement Pang
> Assignee: Sanne Grinovero
> Labels: stable_embedded_query
> Fix For: 5.3.0.Final
>
>
> This seems to be happening rather deterministically.
> Infinispan configuration (in JBoss EAP 6.1.0.Alpha):
> {code}
> <cache-container name="lucene">
> <local-cache name="dshell-index-data" start="EAGER">
> <eviction strategy="LIRS" max-entries="50000"/>
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-metadata" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-lock" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> </cache-container>
> {code}
> Upon shutting down the server and confirming that passivation did indeed write the data to disk, the subsequent start-up would fail right away with:
> {code}
> Caused by: org.hibernate.search.SearchException: Could not initialize index
> at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:162)
> at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.start(InfinispanDirectoryProvider.java:103)
> at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:104)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:227)
> ... 64 more
> Caused by: java.io.IOException: Read past EOF
> at org.infinispan.lucene.SingleChunkIndexInput.readByte(SingleChunkIndexInput.java:77)
> at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:41)
> at org.apache.lucene.store.DataInput.readInt(DataInput.java:86)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:272)
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:182)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1168)
> at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:157)
> ... 67 more
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2971) AsyncStoreStressTest is broken
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2971?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2971:
--------------------------------
Fix Version/s: 5.3.0.Final
(was: 5.3.0.CR1)
> AsyncStoreStressTest is broken
> ------------------------------
>
> Key: ISPN-2971
> URL: https://issues.jboss.org/browse/ISPN-2971
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.5.Final
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Labels: testsuite
> Fix For: 5.3.0.Final
>
>
> This test is not run as part of the normal test suite, but we still need to fix this failure and the leaked threads (see the System.exit(0) in the main method):
> {code}
> testReadWriteRemove(org.infinispan.stress.AsyncStoreStressTest) Time elapsed: 72.377 sec <<< FAILURE!
> java.lang.UnsupportedOperationException
> at java.util.AbstractMap$SimpleImmutableEntry.setValue(AbstractMap.java:726)
> at org.infinispan.stress.AsyncStoreStressTest.testReadWriteRemove(AsyncStoreStressTest.java:140)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> at java.lang.Thread.run(Thread.java:662)
> {code}
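> The top frame of the trace shows the immediate cause: java.util.AbstractMap.SimpleImmutableEntry rejects setValue(), so any test code that tries to mutate such an entry throws. A minimal illustration of that behaviour (not the actual test code; the mutable SimpleEntry is just one possible replacement):
> {code}
> import java.util.AbstractMap;
> import java.util.Map;
> 
> public class ImmutableEntrySketch {
>    public static void main(String[] args) {
>       Map.Entry<String, Integer> mutable = new AbstractMap.SimpleEntry<>("key", 1);
>       Map.Entry<String, Integer> immutable = new AbstractMap.SimpleImmutableEntry<>("key", 1);
> 
>       mutable.setValue(2);     // fine: SimpleEntry supports setValue()
>       immutable.setValue(2);   // throws UnsupportedOperationException, as in the stack trace
>    }
> }
> {code}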
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2964) putForExternalRead to L1 not invalidated
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2964?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2964:
--------------------------------
Fix Version/s: (was: 5.3.0.CR1)
> putForExternalRead to L1 not invalidated
> ----------------------------------------
>
> Key: ISPN-2964
> URL: https://issues.jboss.org/browse/ISPN-2964
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Mircea Markus
> Fix For: 5.3.0.Final
>
> Attachments: SebastianTusk_ISPN-2964_fix.patch
>
>
> With transactional distributed caches it happens that Cache.putForExternalRead(k,v) places an entry into L1 that never gets invalidated. It seems to happen when the owner of k doesn't have the entry. In this case the non-owner puts k into its cache without the owner registering this. Usually the owner stores all requesters via L1ManagerImpl.addRequester and sends out invalidations to the requesters. What should happen is that the entry is replicated to the owner.
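> A minimal sketch of the reported scenario (cacheA stands for a node that does not own k, cacheB for the owner, and tm for a TransactionManager; the final assertion marks the stale read):
> {code}
> // The non-owner caches the value locally via putForExternalRead; because the
> // owner never records this node as an L1 requester, the later write on the
> // owner does not invalidate the copy.
> cacheA.putForExternalRead(k, v1);             // non-owner stores v1 in its L1
> tm.begin(); cacheB.put(k, v2); tm.commit();   // owner updates k; no invalidation reaches cacheA
> assert v2.equals(cacheA.get(k));              // fails: cacheA keeps answering with v1 from L1
> {code}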
> Cache config:
> <namedCache name="entity">
> <jmxStatistics enabled="true" />
> <clustering mode="dist">
> <stateTransfer fetchInMemoryState="false" timeout="20000" />
> <async />
> <l1 enabled="true" />
> <hash numOwners="1"/>
> </clustering>
> <locking isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="15000" useLockStriping="false" />
> <eviction maxEntries="10000" strategy="LRU" />
> <expiration maxIdle="100000" wakeUpInterval="5000"/>
> <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
> <transaction transactionMode="TRANSACTIONAL" autoCommit="false" lockingMode="OPTIMISTIC"/>
> </namedCache>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2956) putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2956:
--------------------------------
Fix Version/s: (was: 5.3.0.CR1)
> putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
> -------------------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Bug
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: hotrod-java-client, remote-clients
> Fix For: 5.3.0.Final
>
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote
> cache cluster or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> {code}
> if (remoteCache.putIfAbsent(k,v) == null) {
>    // new entry.
> } else {
>    // k already exists.
> }
> {code}
> But no.
> The putIfAbsent() for a new entry may return a non-null value if one of the
> servers crashed while putting.
> The behavior is like the following:
> 1. The client does putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to
> other servers. If the server crashes before completing replication, some
> servers own that (k,v), but others do not.
> 3. The client receives the error. The putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), the putIfAbsent() returns the (k,v)
> replicated at step 2, without any error.
> So, putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We got a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique or contains a unique value,
> the client can do a *double check* of whether the entry is new.
> {code}
> long val = System.nanoTime(); // or a UUID is also useful.
> Object ret;
> if ((ret = cache.putIfAbsent(key, val)) == null
>       || ret.equals(val)) {
>    // new entry: either the key was absent, or the returned value is our own.
> } else {
>    // key already exists.
> }
> {code}
> We are proposing this workaround, which almost always works fine.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client generating it.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira