[JBoss JIRA] (ISPN-1213) TreeCache expires parents that have children
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1213?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1213:
--------------------------------
Fix Version/s: (was: 6.0.0.Final)
> TreeCache expires parents that have children
> --------------------------------------------
>
> Key: ISPN-1213
> URL: https://issues.jboss.org/browse/ISPN-1213
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 4.2.1.FINAL
> Reporter: Todd Ciezadlo
> Assignee: Manik Surtani
> Attachments: ExpirationTest.java, TreeCacheUtil.java
>
>
> TreeCache parents expire according to the max-idle value even if they contain children. This leaves the tree cache in an inconsistent state, since the "dangling" children can still be retrieved through TreeCache.get(Fqn, String) calls but can no longer be reached by traversing from TreeCache.getRoot() through Node.getChildren().
> A unit test to reproduce is attached.
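> A minimal sketch of the inconsistency (assuming the TreeCache API from the infinispan-tree module; the Fqn and key names here are hypothetical):
> {code}
> import org.infinispan.tree.Fqn;
> import org.infinispan.tree.TreeCache;
> import org.infinispan.tree.TreeCacheFactory;
>
> TreeCache<String, String> treeCache = new TreeCacheFactory().createTreeCache(cache);
> Fqn child = Fqn.fromString("/parent/child");
> treeCache.put(child, "key", "value");
>
> // ... wait for the parent node's max-idle to elapse ...
>
> // Direct lookup still finds the "dangling" child:
> assert "value".equals(treeCache.get(child, "key"));
> // But traversal from the root no longer reaches it:
> assert treeCache.getRoot().getChildren().isEmpty();
> {code}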
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1089) NullPointerException when using an AtomicMap without a TransactionManagerLookup setup
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1089?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1089:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> NullPointerException when using an AtomicMap without a TransactionManagerLookup setup
> -------------------------------------------------------------------------------------
>
> Key: ISPN-1089
> URL: https://issues.jboss.org/browse/ISPN-1089
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.0.0.CR1
> Reporter: Emmanuel Bernard
> Assignee: Manik Surtani
> Fix For: 6.0.0.Final
>
>
> {code}
> org.infinispan.CacheException: Unable to start batch
> at org.infinispan.batch.BatchContainer.startBatch(BatchContainer.java:84)
> at org.infinispan.batch.AutoBatchSupport.startAtomic(AutoBatchSupport.java:44)
> at org.infinispan.atomic.AtomicHashMapProxy.put(AtomicHashMapProxy.java:160)
> at org.hibernate.ogm.type.descriptor.PassThroughGridTypeDescriptor$1.doBind(PassThroughGridTypeDescriptor.java:41)
> at org.hibernate.ogm.type.descriptor.BasicGridBinder.bind(BasicGridBinder.java:73)
> at org.hibernate.ogm.type.AbstractGenericBasicType.nullSafeSet(AbstractGenericBasicType.java:275)
> at org.hibernate.ogm.type.AbstractGenericBasicType.nullSafeSet(AbstractGenericBasicType.java:270)
> at org.hibernate.ogm.persister.OgmEntityPersister.createNewResultSetIfNull(OgmEntityPersister.java:875)
> at org.hibernate.ogm.persister.OgmEntityPersister.insert(OgmEntityPersister.java:859)
> at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
> at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
> at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
> at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
> at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
> at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
> at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
> at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383)
> at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133)
> at org.hibernate.ogm.test.embeddable.EmbeddableTest.testEmbeddable(EmbeddableTest.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at org.hibernate.testing.junit.functional.annotations.HibernateTestCase.runTest(HibernateTestCase.java:97)
> at org.hibernate.testing.junit.functional.annotations.HibernateTestCase.runBare(HibernateTestCase.java:85)
> at com.intellij.junit3.JUnit3IdeaTestRunner.doRun(JUnit3IdeaTestRunner.java:109)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:60)
> Caused by: java.lang.NullPointerException
> at org.infinispan.batch.BatchContainer.startBatch(BatchContainer.java:65)
> ... 37 more
> {code}
> A more descriptive exception would be nice. I did not have any NPE until I started to use AtomicMaps in Hibernate OGM. I now set a TransactionManagerLookup and it works.
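> For reference, a minimal sketch of the caller-side fix (assuming the ConfigurationBuilder API from Infinispan 5.1+; GenericTransactionManagerLookup is one of the lookups shipped with Infinispan):
> {code}
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.transaction.lookup.GenericTransactionManagerLookup;
>
> // AtomicMaps rely on batching (or JTA transactions); without a
> // TransactionManagerLookup the batch container has no TransactionManager,
> // which is what blows up with the NPE above.
> Configuration config = new ConfigurationBuilder()
>       .invocationBatching().enable()
>       .transaction().transactionManagerLookup(new GenericTransactionManagerLookup())
>       .build();
> {code}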
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1089) NullPointerException when using an AtomicMap without a TransactionManagerLookup setup
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1089?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1089:
--------------------------------
Assignee: Pedro Ruivo (was: Manik Surtani)
> NullPointerException when using an AtomicMap without a TransactionManagerLookup setup
> -------------------------------------------------------------------------------------
>
> Key: ISPN-1089
> URL: https://issues.jboss.org/browse/ISPN-1089
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.0.0.CR1
> Reporter: Emmanuel Bernard
> Assignee: Pedro Ruivo
> Fix For: 6.0.0.Final
>
>
> {code}
> org.infinispan.CacheException: Unable to start batch
> at org.infinispan.batch.BatchContainer.startBatch(BatchContainer.java:84)
> at org.infinispan.batch.AutoBatchSupport.startAtomic(AutoBatchSupport.java:44)
> at org.infinispan.atomic.AtomicHashMapProxy.put(AtomicHashMapProxy.java:160)
> at org.hibernate.ogm.type.descriptor.PassThroughGridTypeDescriptor$1.doBind(PassThroughGridTypeDescriptor.java:41)
> at org.hibernate.ogm.type.descriptor.BasicGridBinder.bind(BasicGridBinder.java:73)
> at org.hibernate.ogm.type.AbstractGenericBasicType.nullSafeSet(AbstractGenericBasicType.java:275)
> at org.hibernate.ogm.type.AbstractGenericBasicType.nullSafeSet(AbstractGenericBasicType.java:270)
> at org.hibernate.ogm.persister.OgmEntityPersister.createNewResultSetIfNull(OgmEntityPersister.java:875)
> at org.hibernate.ogm.persister.OgmEntityPersister.insert(OgmEntityPersister.java:859)
> at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
> at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
> at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
> at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
> at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
> at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
> at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216)
> at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383)
> at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:133)
> at org.hibernate.ogm.test.embeddable.EmbeddableTest.testEmbeddable(EmbeddableTest.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at org.hibernate.testing.junit.functional.annotations.HibernateTestCase.runTest(HibernateTestCase.java:97)
> at org.hibernate.testing.junit.functional.annotations.HibernateTestCase.runBare(HibernateTestCase.java:85)
> at com.intellij.junit3.JUnit3IdeaTestRunner.doRun(JUnit3IdeaTestRunner.java:109)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:60)
> Caused by: java.lang.NullPointerException
> at org.infinispan.batch.BatchContainer.startBatch(BatchContainer.java:65)
> ... 37 more
> {code}
> A more descriptive exception would be nice. I did not have any NPE until I started to use AtomicMaps in Hibernate OGM. I now set a TransactionManagerLookup and it works.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2255) FineGrainedAtomicMap missing key, value pairs in some cluster nodes in distributed mode, embedded infinispan cache
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2255?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2255:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> FineGrainedAtomicMap missing key,value pairs in some cluster nodes in distributed mode, embedded infinispan cache
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2255
> URL: https://issues.jboss.org/browse/ISPN-2255
> Project: Infinispan
> Issue Type: Bug
> Components: Fine-grained API
> Affects Versions: 5.1.5.FINAL
> Environment: Details about cluster:
> 1. Infinispan 5.1.5
> 2. java 7u5 64bit
> 3. linux 64bit (two nodes debian and one node ubuntu)
> 4. tomcat 7.0.28
> 5. jbossTM 4.16.0 JTA only
> Tomcat integration code is here:
> https://github.com/zvrablik/tomcatInfinispanSessionManager/tree/master/to...
> 6. all machines are on real hardware (no virtual machines); date and time are synchronized and shouldn't vary by more than 1 second
> Reporter: Zdenek Henek
> Assignee: Manik Surtani
> Fix For: 6.0.0.Final
>
> Attachments: fgamissueTest2.zip, jgroupsConfig.xml, sessionInfinispanConfigtestLB.xml
>
>
> I am using FineGrainedAtomicMap to store data in a clustered distributed cache. With clustering set to replicated mode I don't run into any issues, or at least I haven't so far.
> When I switch to distributed mode, some key,value pairs are missing on some nodes. This happens randomly. The numOwners is set to two and I have a cluster of three nodes.
> more details:
> https://community.jboss.org/thread/203420?tstart=60
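> A minimal sketch of the access pattern in question (assuming the AtomicMapLookup API; the cache reference and key names here are hypothetical):
> {code}
> import org.infinispan.atomic.AtomicMapLookup;
> import org.infinispan.atomic.FineGrainedAtomicMap;
>
> // On node A:
> FineGrainedAtomicMap<String, String> map =
>       AtomicMapLookup.getFineGrainedAtomicMap(cache, "session-42");
> map.put("attr1", "value1");
>
> // On node B, one of the two owners of "session-42" in dist mode:
> FineGrainedAtomicMap<String, String> replica =
>       AtomicMapLookup.getFineGrainedAtomicMap(cache, "session-42");
> replica.get("attr1"); // randomly returns null instead of "value1"
> {code}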
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2931) async mode changes remove behaviour
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2931?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2931:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> async mode changes remove behaviour
> -----------------------------------
>
> Key: ISPN-2931
> URL: https://issues.jboss.org/browse/ISPN-2931
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> With a cache set up for dist clustering, 2 owners, and async mode, the Cache.remove API does not behave correctly. Cache.remove(key) should return the old value, and Cache.remove(key, value) should return true if the entry was removed. Both methods work correctly only if invoked on the primary owner of the key. If invoked on another node, remove(key) returns null every time and remove(key, value) returns false every time. The Infinispan documentation says that in async mode these operations should work as expected. https://docs.jboss.org/author/display/ISPN/Asynchronous+Options
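> A minimal sketch of the reported behaviour (key and value are hypothetical; "another node" means any node that is not the primary owner of "k"):
> {code}
> // On the primary owner of "k" the contract holds:
> cache.put("k", "v");
> assert "v".equals(cache.remove("k"));   // old value returned, as documented
>
> // On another node it does not:
> cache.put("k", "v");
> assert cache.remove("k") == null;       // bug: null instead of "v"
>
> cache.put("k", "v");
> assert !cache.remove("k", "v");         // bug: false although the entry existed
> {code}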
> Complete cache config:
> <namedCache name="distributed">
> <!-- Used to register JMX statistics in any available MBean server -->
> <jmxStatistics enabled="true" />
>
> <clustering mode="dist">
> <stateTransfer fetchInMemoryState="true" timeout="20000" />
> <hash numOwners="2"/>
> <async/>
> </clustering>
> <locking isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="15000" useLockStriping="false" />
>
> <eviction maxEntries="10000" strategy="LRU" />
> <expiration maxIdle="3600000" wakeUpInterval="5000"/>
> <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
> </namedCache>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2835) Issues w/ M/R test cases if caches are not explicitly started on all nodes
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2835?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2835:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> Issues w/ M/R test cases if caches are not explicitly started on all nodes
> -------------------------------------------------------------------------
>
> Key: ISPN-2835
> URL: https://issues.jboss.org/browse/ISPN-2835
> Project: Infinispan
> Issue Type: Bug
> Components: Core API, Distributed Execution and Map/Reduce
> Reporter: Ray Tsang
> Assignee: Galder Zamarreño
> Labels: onboard
> Fix For: 6.0.0.Final
>
> Attachments: mr-test-src.zip
>
>
> I ran into some issues while using M/R. The gist of the issue I was seeing is that:
> Start a cluster of Embedded Caches, like 4 nodes
> Put in 100 elements
> Run a simple M/R job to count the number of keys
> If I run the M/R job using the node I'm inserting elements into as coordinator - the result is 100
> But if I run the M/R job using a different node as coordinator, the result is less than 100
> More interestingly, I can pause for 5 seconds and run the M/R jobs again, the results are always less than 100
> This behavior doesn't occur if I explicitly call cacheManager.getCache() on each of the nodes...
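> A minimal sketch of the workaround and the job (assuming the 5.x MapReduceTask API; KeyCountMapper and SumReducer are hypothetical Mapper/Reducer implementations that emit ("count", 1) per key and sum the values):
> {code}
> import java.util.Map;
> import org.infinispan.distexec.mapreduce.MapReduceTask;
> import org.infinispan.manager.EmbeddedCacheManager;
>
> // Workaround: start the cache on every node before running the job.
> for (EmbeddedCacheManager cm : cacheManagers) {
>    cm.getCache("testCache");
> }
>
> // Simple key-count M/R job:
> Map<String, Integer> result =
>       new MapReduceTask<String, Integer, String, Integer>(cache)
>             .mappedWith(new KeyCountMapper())
>             .reducedWith(new SumReducer())
>             .execute();
> // Expected: result.get("count") == 100, regardless of which node runs the job.
> {code}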
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2956) putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2956:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
> -------------------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Bug
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: hotrod-java-client, remote-clients
> Fix For: 6.0.0.Final
>
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote
> cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> {code}
> if (remoteCache.putIfAbsent(k, v) == null) {
>     // new entry.
> } else {
>     // k already exists.
> }
> {code}
> But no.
> putIfAbsent() for a new entry may return a non-null value if one of the
> servers crashed while putting.
> The behavior is as follows:
> 1. The client calls putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to
> other servers. If the server crashes before completing replication, some
> servers own that (k,v), but others do not.
> 3. The client receives the error. putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the (k,v)
> replicated at step 2, without any error.
> So putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We have a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique, or contains a unique value,
> the client can *double-check* whether the entry is new.
> {code}
> Long val = System.nanoTime(); // a UUID also works as the unique marker
> Long ret = cache.putIfAbsent(key, val);
> if (ret == null || ret.equals(val)) {
>     // new entry: either no previous value, or a retried request
>     // returned our own freshly written value.
> } else {
>     // key already exists.
> }
> {code}
> We are proposing this workaround, which mostly works fine.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client to generate one.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2376) KeyAffinityServiceImpl.getKeyForAddress() can loop forever when DefaultConsistentHash has numSegments < numNodes
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2376?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2376:
--------------------------------
Assignee: Pedro Ruivo (was: Dan Berindei)
> KeyAffinityServiceImpl.getKeyForAddress() can loop forever when DefaultConsistentHash has numSegments < numNodes
> ----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2376
> URL: https://issues.jboss.org/browse/ISPN-2376
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.2.0.Beta1
> Reporter: Scott Marlow
> Assignee: Pedro Ruivo
> Fix For: 6.0.0.Final
>
>
> I instrumented KeyAffinityServiceImpl and DefaultConsistentHash to show why KeyAffinityServiceImpl is looping forever when running the AS7 clustered tests with some recent changes that aren't committed yet. We are hoping to get through this failure so we can get clustered tests running again more completely on our continuous test server (lightning).
> We have two nodes running in the AS cluster, node-0/web and node-1/web.
> In my recent test run, I stopped the test after it was stuck for a while. Below is some of the instrumented logging output.
> {quote}
> KeyAffinityServiceImpl interestedInAddress() check, for address: node-1/web, filter.contains(address) returns false, filter contents [node-0/web]
> .
> .
> .
> KeyAffinityServiceImpl.getKeyForAddress() loop # 1455775 will loop again since result is null, queue [], address node-0/web
> {quote}
> We are using address "node-1/web" because that is passed into the DefaultConsistentHash constructor segmentOwners parameter (element zero).
> Later, address=node-1/web is the primary owner of the consistent hash (hash=DefaultConsistentHash{numSegments=1, numOwners=2, members=[node-1/web, node-0/web], segmentOwners={0: 0 1}).
> I'm still collecting information and want to get a little more.
> Let me know if there is anything that you would like to see.
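> For reference, a minimal sketch of the looping call (assuming the KeyAffinityServiceFactory API; the executor, buffer size, and cache/address references are arbitrary):
> {code}
> import java.util.concurrent.Executors;
> import org.infinispan.affinity.KeyAffinityService;
> import org.infinispan.affinity.KeyAffinityServiceFactory;
> import org.infinispan.affinity.RndKeyGenerator;
>
> KeyAffinityService<Object> service = KeyAffinityServiceFactory.newKeyAffinityService(
>       cache, Executors.newSingleThreadExecutor(), new RndKeyGenerator(), 100);
>
> // With numSegments=1 and two members, node-0/web is never the primary owner
> // of the single segment, so no generated key ever maps to it and this call
> // never returns:
> Object key = service.getKeyForAddress(node0Address);
> {code}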
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2418) CLONE - Cache restart doesn't work properly
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2418?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2418:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> CLONE - Cache restart doesn't work properly
> -------------------------------------------
>
> Key: ISPN-2418
> URL: https://issues.jboss.org/browse/ISPN-2418
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.1.7.Final, 5.2.0.Alpha3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: jdg6
> Fix For: 5.1.9.Final, 6.0.0.Final
>
>
> Copied from ISPN-2297:
> {quote}
> If a cache is stopped via {{Cache.stop()}} it will still be returned by {{DefaultCacheManager.getCache()}}. Cache {{start()}} and {{stop()}} are not synchronized in any way, so a {{start()}} call may return before the cache was properly started - just because another thread is in the process of starting it.
> Also, the documentation of {{EmbeddedCacheManager.getCache()}} should say that it will start the cache only if it doesn't exist yet - if the cache is stopped it will return the cache as it was. Alternatively we could change the behaviour of {{getCache()}} to always start the cache.
> {quote}
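> A minimal sketch of the surprising sequence (the cache name is hypothetical; ComponentStatus lives in org.infinispan.lifecycle):
> {code}
> Cache<String, String> cache = cacheManager.getCache("users");
> cache.stop();
>
> // getCache() hands back the same, still-stopped instance instead of
> // (re)starting it:
> Cache<String, String> again = cacheManager.getCache("users");
> assert again.getStatus() == ComponentStatus.TERMINATED;
> {code}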
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2376) KeyAffinityServiceImpl.getKeyForAddress() can loop forever when DefaultConsistentHash has numSegments < numNodes
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2376?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2376:
--------------------------------
Fix Version/s: 6.0.0.Final
(was: 5.3.0.Final)
> KeyAffinityServiceImpl.getKeyForAddress() can loop forever when DefaultConsistentHash has numSegments < numNodes
> ----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2376
> URL: https://issues.jboss.org/browse/ISPN-2376
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.2.0.Beta1
> Reporter: Scott Marlow
> Assignee: Dan Berindei
> Fix For: 6.0.0.Final
>
>
> I instrumented KeyAffinityServiceImpl and DefaultConsistentHash to show why KeyAffinityServiceImpl is looping forever when running the AS7 clustered tests with some recent changes that aren't committed yet. We are hoping to get through this failure so we can get clustered tests running again more completely on our continuous test server (lightning).
> We have two nodes running in the AS cluster, node-0/web and node-1/web.
> In my recent test run, I stopped the test after it was stuck for a while. Below is some of the instrumented logging output.
> {quote}
> KeyAffinityServiceImpl interestedInAddress() check, for address: node-1/web, filter.contains(address) returns false, filter contents [node-0/web]
> .
> .
> .
> KeyAffinityServiceImpl.getKeyForAddress() loop # 1455775 will loop again since result is null, queue [], address node-0/web
> {quote}
> We are using address "node-1/web" because that is passed into the DefaultConsistentHash constructor segmentOwners parameter (element zero).
> Later, address=node-1/web is the primary owner of the consistent hash (hash=DefaultConsistentHash{numSegments=1, numOwners=2, members=[node-1/web, node-0/web], segmentOwners={0: 0 1}).
> I'm still collecting information and want to get a little more.
> Let me know if there is anything that you would like to see.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira