[JBoss JIRA] (ISPN-6046) Server Rolling Upgrade performance improvement
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-6046:
---------------------------------------
Summary: Server Rolling Upgrade performance improvement
Key: ISPN-6046
URL: https://issues.jboss.org/browse/ISPN-6046
Project: Infinispan
Issue Type: Enhancement
Affects Versions: 8.1.0.Final
Reporter: Gustavo Fernandes
Assignee: Gustavo Fernandes
Currently the rolling upgrade requires recording all keys for all entries in the source cluster and then transferring those keys to the target cluster, which in turn iterates over them and fetches the values. With a sufficiently large amount of data, the key recording can cause an OOME.
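A minimal sketch of the memory problem in plain Java, with hypothetical method names (recordAllKeys and transferInBatches are illustrative, not the Infinispan rolling-upgrade API): collecting every key up front grows with the cache size, whereas sending keys in fixed-size batches keeps the footprint bounded.
{code:java}
// Illustrative sketch only, not Infinispan code: the first method materialises every key
// in memory before the transfer, the second only ever holds one page of keys at a time.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class KeyRecordingSketch {

    // Current approach: all keys are collected up front -> memory grows with the entry count.
    static List<String> recordAllKeys(Iterator<String> sourceKeys) {
        List<String> allKeys = new ArrayList<>();
        while (sourceKeys.hasNext()) {
            allKeys.add(sourceKeys.next()); // every key stays on the heap until the upgrade finishes
        }
        return allKeys;                     // with enough entries, this is where the OOME happens
    }

    // Alternative sketch: stream keys in fixed-size batches so memory use stays bounded.
    static void transferInBatches(Iterator<String> sourceKeys, int batchSize,
                                  Consumer<List<String>> sendBatch) {
        List<String> batch = new ArrayList<>(batchSize);
        while (sourceKeys.hasNext()) {
            batch.add(sourceKeys.next());
            if (batch.size() == batchSize) {
                sendBatch.accept(batch);    // hand one page of keys to the target cluster
                batch = new ArrayList<>(batchSize);
            }
        }
        if (!batch.isEmpty()) {
            sendBatch.accept(batch);
        }
    }
}
{code}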
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6045) TransactionAwareKeyCloseableIterator.remove uses previousValue which is never set
by Patrick Ruckstuhl (JIRA)
Patrick Ruckstuhl created ISPN-6045:
---------------------------------------
Summary: TransactionAwareKeyCloseableIterator.remove uses previousValue which is never set
Key: ISPN-6045
URL: https://issues.jboss.org/browse/ISPN-6045
Project: Infinispan
Issue Type: Bug
Affects Versions: 8.1.0.Final
Reporter: Patrick Ruckstuhl
TransactionAwareKeyCloseableIterator.remove is implemented as
{code:java}
cache.remove(previousValue);
{code}
But looking at the code, previousValue never gets set. This then results in:
{code}
java.lang.NullPointerException: Null keys are not supported!
at org.infinispan.cache.impl.CacheImpl.assertKeyNotNull(CacheImpl.java:224)
at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:547)
at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:543)
at org.infinispan.interceptors.TxInterceptor$TransactionAwareKeyCloseableIterator.remove(TxInterceptor.java:568)
{code}
I encountered this when trying to upgrade Infinispan from 7.2 to 8.1 in conjunction with hibernate-infinispan 4.3, which uses the following code to clear the cache:
{code:java}
public static void removeAll(AdvancedCache cache) {
    try {
        Iterator it = cache.keySet().iterator();
        while (it.hasNext()) {
            it.next(); // Necessary to get next element
            it.remove();
        }
    } catch (UnsupportedOperationException e) {
        // Fallback on using clear for older version
        cache.clear();
    }
}
{code}
from https://github.com/hibernate/hibernate-orm/blob/4.3/hibernate-infinispan/...
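For illustration, a minimal sketch (not the actual TxInterceptor code) of how an iterator wrapper can remember the key returned by the last next() call so that remove() has a non-null key to pass to the cache; a plain Map stands in for the cache to keep the example self-contained.
{code:java}
// Illustrative sketch only: the wrapper records the key returned by the last next() call
// so remove() can delegate to the cache instead of calling remove(null).
import java.util.Iterator;
import java.util.Map;

class KeyRemovingIterator<K, V> implements Iterator<K> {
    private final Map<K, V> cache;
    private final Iterator<K> delegate;
    private K previousKey; // set on every next(), consumed by remove()

    KeyRemovingIterator(Map<K, V> cache, Iterator<K> delegate) {
        this.cache = cache;
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext() {
        return delegate.hasNext();
    }

    @Override
    public K next() {
        previousKey = delegate.next(); // the step missing in the reported bug
        return previousKey;
    }

    @Override
    public void remove() {
        if (previousKey == null) {
            throw new IllegalStateException("next() has not been called");
        }
        cache.remove(previousKey); // never called with a null key now
        previousKey = null;
    }
}
{code}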
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-4468) HR client is not able to unmarshall custom class when using AS modules
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4468?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4468:
-----------------------------------------------
Christian Huffman <chuffman(a)redhat.com> changed the Status of [bug 1168262|https://bugzilla.redhat.com/show_bug.cgi?id=1168262] from ASSIGNED to ON_QA
> HR client is not able to unmarshall custom class when using AS modules
> ----------------------------------------------------------------------
>
> Key: ISPN-4468
> URL: https://issues.jboss.org/browse/ISPN-4468
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling, Remote Protocols
> Reporter: Vojtech Juranek
> Assignee: Galder Zamarreño
>
> When using the HR client in JBoss and loading the HR client via JBoss modules, storing custom objects into the remote cache works; however, when a custom object is read back from the remote cache, it fails with a {{ClassNotFoundException}}:
> {noformat}
> testPutGetCustomObject(com.jboss.datagrid.test.hotrod.HotRodRemoteCacheIT) Time elapsed: 1.749 sec <<< ERROR!
> org.infinispan.client.hotrod.exceptions.HotRodClientException: Unable to unmarshall byte stream
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.bytes2obj(RemoteCacheImpl.java:555)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.get(RemoteCacheImpl.java:425)
> at org.infinispan.server.test.client.hotrod.AbstractRemoteCacheIT.testPutGetCustomObject(AbstractRemoteCacheIT.java:746)
> {noformat}
> [...]
> {noformat}
> Caused by: java.lang.ClassNotFoundException: org.infinispan.server.test.client.hotrod.AbstractRemoteCacheIT$Person from [Module "org.infinispan.commons:jdg-6.3" from local module loader @5cbf5bb7 (finder: local module finder @171e7af3 (roots: /opt/test_servers/jboss-eap-6.2.2/modules,/opt/test_servers/jboss-eap-6.2.2/modules/system/layers/base))]
> at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:213)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:459)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:408)
> at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:389)
> at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:134)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at org.jboss.marshalling.AbstractClassResolver.loadClass(AbstractClassResolver.java:131)
> at org.jboss.marshalling.AbstractClassResolver.resolveClass(AbstractClassResolver.java:112)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadClassDescriptor(RiverUnmarshaller.java:943)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1239)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:272)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:135)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromByteBuffer(AbstractJBossMarshaller.java:113)
> at org.infinispan.commons.marshall.AbstractMarshaller.objectFromByteBuffer(AbstractMarshaller.java:82)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.bytes2obj(RemoteCacheImpl.java:553)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.get(RemoteCacheImpl.java:425)
> at org.infinispan.server.test.client.hotrod.AbstractRemoteCacheIT.testPutGetCustomObject(AbstractRemoteCacheIT.java:746)
> {noformat}
> Adding a jar file with {{org.infinispan.server.test.client.hotrod.AbstractRemoteCacheIT$Person}} to jboss-deployment-structure as a module didn't help.
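One workaround sketch, under stated assumptions (the GenericJBossMarshaller constructor taking a ClassLoader must exist in the client version in use, and "datagrid-host"/11222 are placeholders for the real server address): give the Hot Rod client a marshaller that resolves classes through the deployment's class loader instead of the org.infinispan.commons module's loader.
{code:java}
// Sketch only; not a confirmed fix for this issue. Assumptions: GenericJBossMarshaller(ClassLoader)
// is available in the client version in use, and the host/port values are placeholders.
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.commons.marshall.jboss.GenericJBossMarshaller;

public class HotRodClientWithDeploymentClassLoader {
    public static RemoteCacheManager create() {
        // Use the deployment's class loader, which can see the custom Person class.
        ClassLoader deploymentCl = HotRodClientWithDeploymentClassLoader.class.getClassLoader();
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("datagrid-host").port(11222);
        builder.marshaller(new GenericJBossMarshaller(deploymentCl));
        return new RemoteCacheManager(builder.build());
    }
}
{code}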
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-3202) Infinispan cachestores remove entries early when maxIdle used
by Jakub Markos (JIRA)
[ https://issues.jboss.org/browse/ISPN-3202?page=com.atlassian.jira.plugin.... ]
Jakub Markos commented on ISPN-3202:
------------------------------------
It doesn't seem that this issue can be solved without updating the entry's metadata in the cache store on each get (which would be slow). So maybe we could at least warn users in the documentation against using maxIdle with cache stores?
> Infinispan cachestores remove entries early when maxIdle used
> -------------------------------------------------------------
>
> Key: ISPN-3202
> URL: https://issues.jboss.org/browse/ISPN-3202
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.6.Final
> Environment: Linux: debian wheezy
> uname -a: Linux hostname 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux
> JBoss: 7.1.2.Final
> Embedded cache using infinispan 5.2.6.Final jars included in WAR's WEB-INF/lib directory.
> Reporter: Ralph Jennings
> Assignee: Pedro Ruivo
> Labels: cache-store, maxIdle, timeout
>
> When adding an entry to the (embedded) cache with maxIdle specified, the entry goes into the store, but the store removes the entry when the maxIdle time elapses from creation (rather than from last access).
> The cache correctly keeps the entry in memory (unless evicted).
> This leaves the cache and store out of sync.
> I saw this same behavior with both stringKeyedJdbcStore and fileStore.
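A reproduction sketch under stated assumptions (a single-file store instead of the reporter's stringKeyedJdbcStore, a throwaway location, and the 8.x programmatic configuration API): the entry is written with a 10-second maxIdle and read every 5 seconds, which keeps the in-memory copy alive while, per this report, the store copy expires 10 seconds after creation.
{code:java}
// Reproduction sketch only; store type, cache name and location are placeholders.
import java.util.concurrent.TimeUnit;
import org.infinispan.Cache;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class MaxIdleWithStoreRepro {
    public static void main(String[] args) throws Exception {
        DefaultCacheManager cm = new DefaultCacheManager();
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.persistence().addSingleFileStore().location("target/store");
        cm.defineConfiguration("idle-cache", cfg.build());

        Cache<String, String> cache = cm.getCache("idle-cache");
        // Immortal lifespan, 10-second maxIdle.
        cache.put("k", "v", -1, TimeUnit.SECONDS, 10, TimeUnit.SECONDS);

        // Touch the entry every 5 seconds: the in-memory copy stays alive,
        // while the store copy is reported to expire 10 seconds after creation.
        for (int i = 0; i < 5; i++) {
            Thread.sleep(5000);
            System.out.println("get -> " + cache.get("k"));
        }
        cm.stop();
    }
}
{code}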
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5883) Node can apply new topology after sending status response
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-5883?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-5883:
------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Node can apply new topology after sending status response
> ---------------------------------------------------------
>
> Key: ISPN-5883
> URL: https://issues.jboss.org/browse/ISPN-5883
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 8.0.1.Final, 7.2.5.Final, 8.1.0.Alpha2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 8.2.0.Alpha1
>
>
> {{LocalTopologyManagerImpl}} is responsible for sending the {{ClusterTopologyControlCommand(GET_STATUS)}} response, and when it sends the response it doesn't check the current view id against the new coordinator's view id. If the old coordinator already sent a topology update before the merge, that topology update might be processed after sending the status response. The new coordinator will send a topology update with a topology id of {{max(status response topology ids) + 1}}. The node will then process the topology update from the old coordinator, but it will ignore the topology update from the new coordinator with the same topology id.
> This is especially common in the partition handling tests, e.g. {{BasePessimisticTxPartitionAndMergeTest}} subclasses, because the test "injects" the JGroups view on each node serially, and often the 4th node sends the status response before it gets the new view.
> {noformat}
> 22:16:37,776 DEBUG (remote-thread-NodeD-p26-t6:[]) [LocalTopologyManagerImpl] Sending cluster status response for view 10
> // Topology from NodeC
> 22:16:37,778 DEBUG (transport-thread-NodeD-p28-t2:[]) [LocalTopologyManagerImpl] Updating local topology for cache pes-cache: CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns=60, owners = (4)[NodeA-37631: 15+15, NodeB-47846: 15+15, NodeC-46467: 15+15, NodeD-30486: 15+15]}, pendingCH=null, unionCH=null, actualMembers=[NodeC-46467, NodeD-30486]}
> // Later, topology from NodeA
> 22:16:37,827 DEBUG (transport-thread-NodeD-p28-t1:[]) [LocalTopologyManagerImpl] Ignoring late consistent hash update for cache pes-cache, current topology is 8: CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns=60, owners = (4)[NodeA-37631: 15+15, NodeB-47846: 15+15, NodeC-46467: 15+15, NodeD-30486: 15+15]}, pendingCH=null, unionCH=null, actualMembers=[NodeA-37631, NodeB-47846, NodeC-46467, NodeD-30486]}
> {noformat}
> As a solution, we can delay sending the status response until we have the same view as the coordinator (or a later one). We already check that the sender is the current coordinator before applying a topology update, so this will guarantee that we don't apply other topology updates from the old coordinator. Since the status request is only sent after the new view is installed, this will not introduce any delays in the vast majority of cases.
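A sketch of the proposed fix, with illustrative names only (waitForView and currentViewId are not the real LocalTopologyManagerImpl members): delay the GET_STATUS response until the local view id is at least the coordinator's view id, so no later topology update from the old coordinator can slip in after the response is sent.
{code:java}
// Illustrative sketch only, not the actual LocalTopologyManagerImpl code.
import java.util.concurrent.TimeUnit;

class StatusResponseSketch {
    private int currentViewId;
    private final Object viewLock = new Object();

    Object handleGetStatus(int coordinatorViewId, long timeoutMillis) throws InterruptedException {
        waitForView(coordinatorViewId, timeoutMillis);
        return buildStatusResponse(); // only sent once the views are aligned
    }

    private void waitForView(int viewId, long timeoutMillis) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        synchronized (viewLock) {
            while (currentViewId < viewId) {
                long remaining = TimeUnit.NANOSECONDS.toMillis(deadline - System.nanoTime());
                if (remaining <= 0) {
                    throw new IllegalStateException("Timed out waiting for view " + viewId);
                }
                viewLock.wait(remaining);
            }
        }
    }

    void viewInstalled(int viewId) {
        synchronized (viewLock) {
            currentViewId = viewId;
            viewLock.notifyAll();
        }
    }

    private Object buildStatusResponse() {
        return new Object(); // placeholder for the per-cache status map
    }
}
{code}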
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-4845) statetransfer.ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus fails randomly
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-4845?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-4845:
------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> statetransfer.ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus fails randomly
> -------------------------------------------------------------------------------------
>
> Key: ISPN-4845
> URL: https://issues.jboss.org/browse/ISPN-4845
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 5.2.10.Final
> Reporter: Michal Vinkler
> Assignee: Dan Berindei
> Labels: 5.2.x
> Fix For: 8.2.0.Alpha1
>
>
> Seen with EAP 6.3.0.ER10, Infinispan 5.2.10
> Test org.infinispan.statetransfer.ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus randomly fails (seen on Solaris and HP-UX).
> Might be the same as ISPN-4743.
> Stacktraces:
> HP-UX version
> Error Message
> {code}
> Timed out waiting for rebalancing to complete on node ClusterTopologyManagerTest-NodeB-47391, expected member list is [ClusterTopologyManagerTest-NodeB-47391], current member list is [ClusterTopologyManagerTest-NodeB-47391, ClusterTopologyManagerTest-NodeC-55740]!
> {code}
> Stacktrace
> {code}
> java.lang.RuntimeException: Timed out waiting for rebalancing to complete on node ClusterTopologyManagerTest-NodeB-47391, expected member list is [ClusterTopologyManagerTest-NodeB-47391], current member list is [ClusterTopologyManagerTest-NodeB-47391, ClusterTopologyManagerTest-NodeC-55740]!
> at org.infinispan.test.TestingUtil.waitForRehashToComplete(TestingUtil.java:203)
> at org.infinispan.statetransfer.ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus(ClusterTopologyManagerTest.java:353)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> Also see standard output:
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EAP6/view/EAP6-Infi...
> Solaris version
> Error Message
> {code}
> Thread already timed out waiting for event 3 left
> {code}
> Stacktrace
> {code}
> java.lang.IllegalStateException: Thread already timed out waiting for event 3 left
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:150)
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:135)
> at org.infinispan.statetransfer.ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus(ClusterTopologyManagerTest.java:350)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> Also see standard output:
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EAP6/view/EAP6-Infi...
> Might be the same as ISPN-4743.
> Downstream BZ was: https://bugzilla.redhat.com/show_bug.cgi?id=987461
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5044) Intermittent test failure: ClusterTopologyManagerTest.testClusterRecoveryAfterSplitAndCoordLeave
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-5044?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-5044:
------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Intermittent test failure: ClusterTopologyManagerTest.testClusterRecoveryAfterSplitAndCoordLeave
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-5044
> URL: https://issues.jboss.org/browse/ISPN-5044
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 7.1.0.Alpha1
> Reporter: Sanne Grinovero
> Assignee: Dan Berindei
> Priority: Blocker
> Fix For: 8.2.0.Alpha1
>
>
> {noformat}~~~~~~~~~~~~~~~~~~~~~~~~~ ENVIRONMENT INFO ~~~~~~~~~~~~~~~~~~~~~~~~~~
> jgroups.bind_addr = 127.0.0.1
> java.runtime.version = 1.7.0_71-mockbuild_2014_10_03_09_36-b00
> java.runtime.name =OpenJDK Runtime Environment
> java.vm.version = 24.65-b04
> java.vm.vendor = Oracle Corporation
> os.name = Linux
> os.version = 3.10.0-123.9.3.el7.x86_64
> sun.arch.data.model = 64
> sun.cpu.endian = little
> protocol.stack = null
> infinispan.test.jgroups.protocol = tcp
> infinispan.unsafe.allow_jdk8_chm = true
> java.net.preferIPv4Stack = true
> java.net.preferIPv6Stack = null
> log4.configuration = file:/opt/infinispan-log4j.xml
> MAVEN_OPTS = null
> ~~~~~~~~~~~~~~~~~~~~~~~~~ ENVIRONMENT INFO ~~~~~~~~~~~~~~~~~~~~~~~~~~
> Tests run: 5625, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 546.54 sec <<< FAILURE! - in TestSuite
> testClusterRecoveryAfterSplitAndCoordLeave(org.infinispan.statetransfer.ClusterTopologyManagerTest) Time elapsed: 0.3 sec <<< FAILURE!
> org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:217)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:813)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:584)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:539)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:416)
> at org.infinispan.test.MultipleCacheManagersTest.cache(MultipleCacheManagersTest.java:365)
> at org.infinispan.statetransfer.ClusterTopologyManagerTest.testClusterRecoveryAfterSplitAndCoordLeave(ClusterTopologyManagerTest.java:208)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache testCache on ClusterTopologyManagerTest-NodeN-5046
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:218)
> at sun.reflect.GeneratedMethodAccessor159.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> ... 31 more
> Results :
> Failed tests:
> ClusterTopologyManagerTest.testClusterRecoveryAfterSplitAndCoordLeave:208->MultipleCacheManagersTest.cache:365 » Cache
> Tests run: 5625, Failures: 1, Errors: 0, Skipped: 0
> {noformat}
> Also, the execution of ClusterTopologyManagerTest is extremely slow; it takes more than 2 minutes here. Is that really necessary?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5481) ConfigurationOverrideTest random failures
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-5481?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-5481:
------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> ConfigurationOverrideTest random failures
> -----------------------------------------
>
> Key: ISPN-5481
> URL: https://issues.jboss.org/browse/ISPN-5481
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 8.2.0.Alpha1
>
>
> {{ConfigurationOverrideTest}} uses the default global configuration, and it fails when another test has already registered a cache manager MBean in JMX with the same name:
> {noformat}
> org.infinispan.jmx.JmxDomainConflictException: ISPN000034: There's already a JMX MBean instance type=CacheManager,name="DefaultCacheManager" already registered under 'org.infinispan' JMX domain. If you want to allow multiple instances configured with same JMX domain enable 'allowDuplicateDomains' attribute in 'globalJmxStatistics' config element
> at org.infinispan.jmx.JmxUtil.buildJmxDomain(JmxUtil.java:51)
> at org.infinispan.jmx.CacheManagerJmxRegistration.updateDomain(CacheManagerJmxRegistration.java:79)
> at org.infinispan.jmx.CacheManagerJmxRegistration.buildRegistrar(CacheManagerJmxRegistration.java:73)
> at org.infinispan.jmx.AbstractJmxRegistration.registerMBeans(AbstractJmxRegistration.java:37)
> at org.infinispan.jmx.CacheManagerJmxRegistration.start(CacheManagerJmxRegistration.java:41)
> at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:625)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:218)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:199)
> at org.infinispan.configuration.ConfigurationOverrideTest.testOverrideWithStore(ConfigurationOverrideTest.java:80)
> {noformat}
> We should verify the other tests as well, to make sure they all use the {{PerThreadMBeanServerLookup}} and/or a unique JMX domain.
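The exception message itself names the relevant settings; below is a sketch (assuming the 7.x/8.x GlobalConfigurationBuilder API and the PerThreadMBeanServerLookup test-suite helper mentioned above) of a per-test configuration that avoids the clash by giving each cache manager a unique JMX domain and its own MBean server.
{code:java}
// Sketch of a per-test global configuration; the domain naming scheme is illustrative.
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.jmx.PerThreadMBeanServerLookup; // test-suite helper, not a production class
import org.infinispan.manager.DefaultCacheManager;

public class UniqueJmxDomainSketch {
    public static DefaultCacheManager createManagerForTest(String testName) {
        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        global.globalJmxStatistics()
              .enable()
              .jmxDomain("org.infinispan." + testName)           // unique per test
              .mBeanServerLookup(new PerThreadMBeanServerLookup())
              .allowDuplicateDomains(true);                       // the knob named in the error message
        return new DefaultCacheManager(global.build());
    }
}
{code}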
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5459) StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5459?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5459:
-----------------------------------------------
Matej Čimbora <mcimbora(a)redhat.com> changed the Status of [bug 1259418|https://bugzilla.redhat.com/show_bug.cgi?id=1259418] from ON_QA to VERIFIED
> StateTransferManager.waitForInitialTransferToComplete can fail if the coordinator crashes
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-5459
> URL: https://issues.jboss.org/browse/ISPN-5459
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 8.0.0.Alpha2
>
>
> {{LocalTopologyManagerImpl.isRebalancingEnabled()}} will throw a {{SuspectException}} if the coordinator crashes, preventing the cache from starting up.
> This is causing random failures in {{ClusterListenerDistTxAddListenerTest}}:
> {noformat}
> 22:23:59,439 ERROR (testng-ClusterListenerDistTxAddListenerTest:) [UnitTestTestNGListener] Test testNodeJoiningAndStateNodeDiesWithExistingClusterListener(org.infinispan.notifications.cachelistener.cluster.ClusterListenerDistTxAddListenerTest) failed.
> java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:202)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.testNodeJoiningAndStateNodeDiesWithExistingClusterListener(AbstractClusterListenerDistAddListenerTest.java:254)
> ...
> Caused by: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.Exception on object of type StateTransferManagerImpl
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:172)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:218)
> at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:850)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:599)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:554)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:424)
> at org.infinispan.test.MultipleCacheManagersTest.cache(MultipleCacheManagersTest.java:366)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest.access$100(AbstractClusterListenerDistAddListenerTest.java:32)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:237)
> at org.infinispan.notifications.cachelistener.cluster.AbstractClusterListenerDistAddListenerTest$4.call(AbstractClusterListenerDistAddListenerTest.java:234)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:422)
> ... 4 more
> Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: Node NodeM-34961 was suspected
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:245)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:566)
> at org.infinispan.topology.LocalTopologyManagerImpl.executeOnCoordinator(LocalTopologyManagerImpl.java:501)
> at org.infinispan.topology.LocalTopologyManagerImpl.isRebalancingEnabled(LocalTopologyManagerImpl.java:445)
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
> at sun.reflect.GeneratedMethodAccessor165.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> ... 18 more
> Caused by: SuspectedException
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:414)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:427)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:240)
> ... 26 more
> {noformat}
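A sketch of one possible fix direction only (retryOnSuspect and the Supplier-based call are illustrative, not the actual LocalTopologyManagerImpl code): retry the coordinator RPC when a SuspectException signals that the coordinator just left, instead of letting the exception abort cache startup.
{code:java}
// Illustrative retry wrapper; the actual fix in Infinispan may differ.
import java.util.function.Supplier;
import org.infinispan.remoting.transport.jgroups.SuspectException;

class CoordinatorRetrySketch {
    static <T> T retryOnSuspect(Supplier<T> coordinatorCall, int maxAttempts) {
        SuspectException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return coordinatorCall.get(); // e.g. the "is rebalancing enabled?" RPC
            } catch (SuspectException e) {
                last = e;                     // coordinator left; the next attempt reaches the new one
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("maxAttempts must be positive");
        }
        throw last;
    }
}
{code}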
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6044) NumericVersionGenerator stops removed caches from being GCed
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-6044:
-------------------------------------
Summary: NumericVersionGenerator stops removed caches from being GCed
Key: ISPN-6044
URL: https://issues.jboss.org/browse/ISPN-6044
Project: Infinispan
Issue Type: Bug
Reporter: Tristan Tarrant
NumericVersionGenerator is a cache-scoped component which, in its @Start method, registers org.infinispan.container.versioning.NumericVersionGenerator$RankCalculator as a global listener. This prevents the cache registry from being GCed, since the DefaultCacheManager keeps a live reference to this component.
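A sketch of the leak pattern and the usual remedy (PlaceholderGlobalListener stands in for the real RankCalculator): a cache-scoped component that registers a listener on the cache manager in @Start should also remove it in @Stop, otherwise the manager's listener list keeps the cache's component registry reachable after the cache is removed.
{code:java}
// Illustrative sketch only; not the actual NumericVersionGenerator code.
import org.infinispan.factories.annotations.Start;
import org.infinispan.factories.annotations.Stop;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;

public class GlobalListenerLifecycleSketch {

    @Listener
    static class PlaceholderGlobalListener {
        @ViewChanged
        public void onViewChange(ViewChangedEvent event) {
            // the real RankCalculator recomputes the version rank on view changes
        }
    }

    private final EmbeddedCacheManager cacheManager;
    private final PlaceholderGlobalListener listener = new PlaceholderGlobalListener();

    public GlobalListenerLifecycleSketch(EmbeddedCacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @Start
    public void start() {
        cacheManager.addListener(listener);    // global reference from the manager to this component
    }

    @Stop
    public void stop() {
        cacheManager.removeListener(listener); // without this, the cache registry is never GCed
    }
}
{code}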
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)