[JBoss JIRA] (ISPN-2600) CLI rolling upgrades
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-2600:
-------------------------------------
Summary: CLI rolling upgrades
Key: ISPN-2600
URL: https://issues.jboss.org/browse/ISPN-2600
Project: Infinispan
Issue Type: Feature Request
Components: CLI
Affects Versions: 5.2.0.Beta5
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Fix For: 5.2.0.CR1
The upgrade command should be extended with additional steps that synchronize data to the new cluster and disable the RemoteCacheStore at the end.
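For illustration, here is a minimal sketch of the two operations such a CLI step would need to drive on the new cluster, assuming the RollingUpgradeManager component and the "hotrod" migrator name used for Hot Rod rolling upgrades (the CLI syntax itself is what this issue is about and is not specified here):
{code}
// Hypothetical illustration only: the operations the extended "upgrade" command
// would invoke on the target (new) cluster, assuming the RollingUpgradeManager
// component and the "hotrod" migrator name.
import org.infinispan.Cache;
import org.infinispan.upgrade.RollingUpgradeManager;

public class UpgradeSteps {
   public static void finishRollingUpgrade(Cache<?, ?> cache) throws Exception {
      RollingUpgradeManager upgradeManager = cache.getAdvancedCache()
            .getComponentRegistry().getComponent(RollingUpgradeManager.class);
      // 1. Pull the remaining entries from the source cluster into the new cluster
      upgradeManager.synchronizeData("hotrod");
      // 2. Disconnect the RemoteCacheStore that points at the source cluster
      upgradeManager.disconnectSource("hotrod");
   }
}
{code}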
[JBoss JIRA] (ISPN-2566) TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2566?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2566:
-----------------------------------------------
Tomas Sykora <tsykora(a)redhat.com> made a comment on [bug 868832|https://bugzilla.redhat.com/show_bug.cgi?id=868832]
Still no luck. Setting back to ON_DEV. Attaching a new TRACE log from the rack test - ER5. I don't know how I can help more at this point; just let me know and I will do my best.
> TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
> -------------------------------------------------------------------------------
>
> Key: ISPN-2566
> URL: https://issues.jboss.org/browse/ISPN-2566
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.Beta4
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.0.Beta6, 5.2.0.Final
>
>
> Say we have a topology-aware cache with numOwners = 2 and two nodes: A(machine=m1) and B(machine=m1). When node C(machine=m2) joins, it should own every key, either as a primary or as a backup owner. This doesn't happen: node C owns only as many segments as nodes A and B do.
> Example:
> {noformat}
> 19:21:17,295 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (undefined) Updating cache topology topology for rebalance:
> CacheTopology{id=3, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 0 1, 28: 0 1, 29: 0 1, 30: 0 1, 31: 0 1,
> 32: 0 1, 33: 0 1, 34: 0 1, 35: 0 1, 36: 0 1, 37: 0 1, 38: 0 1, 39: 0 1,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 1 0, 68: 1 0, 69: 1 0, 70: 1 0, 71: 1 0,
> 72: 1 0, 73: 1 0, 74: 1 0, 75: 1 0, 76: 1 0, 77: 1 0, 78: 1 0, 79: 1 0},
> pendingCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary), node2/default(secondary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 2 0, 28: 2 0, 29: 2 0, 30: 2 0, 31: 2 0,
> 32: 2 0, 33: 2 0, 34: 2 0, 35: 2 0, 36: 2 0, 37: 2 0, 38: 2 0, 39: 2 0,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 2 1, 68: 2 1, 69: 2 1, 70: 2 1, 71: 2 1,
> 72: 2 1, 73: 2 1, 74: 2 1, 75: 2 1, 76: 2 1, 77: 2 1, 78: 2 1, 79: 2 1}}
> {noformat}
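As a rough illustration of the invariant described above (not Infinispan code): with nodes 0 and 1 on machine m1 and node 2 on machine m2, every segment in the pending CH should list node 2 among its owners, which the owners map above clearly violates for segments 0-26 and 40-66. A minimal check over such a map might look like this, where the map keys are segment ids and the values are the owner node indices:
{code}
// Hypothetical sanity check, not Infinispan source: verifies that after a
// topology-aware rebalance with two machines, the single node on the new
// machine (node 2 here) appears among the owners of every segment.
import java.util.List;
import java.util.Map;

public class TopologyAwareOwnershipCheck {
   static boolean newMachineOwnsEverySegment(Map<Integer, List<Integer>> owners, int nodeOnNewMachine) {
      boolean ok = true;
      for (Map.Entry<Integer, List<Integer>> e : owners.entrySet()) {
         if (!e.getValue().contains(nodeOnNewMachine)) {
            System.out.printf("Segment %d has no owner on the new machine: %s%n", e.getKey(), e.getValue());
            ok = false;
         }
      }
      return ok;
   }
}
{code}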
[JBoss JIRA] (ISPN-2572) "CacheException: Initial state transfer timed out for cache" reliably on AS7 testsuite
by Erik Salter (JIRA)
[ https://issues.jboss.org/browse/ISPN-2572?page=com.atlassian.jira.plugin.... ]
Erik Salter commented on ISPN-2572:
-----------------------------------
Here are trace logs of the initial state transfer timing out in my environment (Beta5). I don't know if it's the same root cause, but it is certainly another data point.
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.146/server.log.tgz
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.147/server.log.tgz
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.148/server.log.tgz
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.149/server.log.tgz
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.150/server.log.tgz
https://dl.dropbox.com/u/50401510/ISPN-2572/10.20.23.208/server.log.tgz
> "CacheException: Initial state transfer timed out for cache" reliably on AS7 testsuite
> --------------------------------------------------------------------------------------
>
> Key: ISPN-2572
> URL: https://issues.jboss.org/browse/ISPN-2572
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.Beta4
> Reporter: Radoslav Husar
> Assignee: Dan Berindei
> Priority: Blocker
> Fix For: 5.2.0.Beta6
>
>
> While running the AS7 testsuite with the speedups implemented in my branch (https://github.com/jbossas/jboss-as/pull/3381), we are constantly seeing the failure below on Windows 2008.
> Run:
> http://teamcity.cafe-babe.org/viewLog.html?buildId=1689&tab=buildResultsD...
> {code}
> 16:34:46,092 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 13) MSC00001: Failed to start service jboss.infinispan.ejb.remote-connector-client-mappings: org.jboss.msc.service.StartException in service jboss.infinispan.ejb.remote-connector-client-mappings: org.infinispan.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> at org.jboss.as.clustering.msc.AsynchronousService$1.run(AsynchronousService.java:87)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_32]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_32]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_32]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA.jar:2.0.0.GA]
> Caused by: org.infinispan.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:205)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:654)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:643)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:546)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:199)
> at org.infinispan.CacheImpl.start(CacheImpl.java:520)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:690)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:653)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:549)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:563)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:107)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:98)
> at org.jboss.as.clustering.infinispan.subsystem.CacheService.start(CacheService.java:78)
> at org.jboss.as.clustering.msc.AsynchronousService$1.run(AsynchronousService.java:82)
> ... 4 more
> Caused by: org.infinispan.CacheException: Initial state transfer timed out for cache remote-connector-client-mappings on node-1/ejb
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.6.0_32]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [rt.jar:1.6.0_32]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_32]
> at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_32]
> at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> ... 18 more
> {code}
> Affected version -- current master (say 7dc531002539b078e429418d8ef204e401beafd1).
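For reference, the timeout that expires in the stack trace above is the per-cache state-transfer timeout. A minimal sketch of where it is configured programmatically, assuming the standard ConfigurationBuilder API of that version (raising it only mitigates the symptom; it does not address the root cause being investigated here):
{code}
// Illustrative only: raise the initial state-transfer timeout for a clustered cache,
// assuming the Infinispan 5.2 programmatic configuration API.
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class StateTransferTimeoutConfig {
   public static Configuration build() {
      return new ConfigurationBuilder()
            .clustering()
               .cacheMode(CacheMode.REPL_SYNC)
               .stateTransfer()
                  .timeout(600000) // 10 minutes, in milliseconds
            .build();
   }
}
{code}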
[JBoss JIRA] (ISPN-2025) NPE in Externalizer on shutdown
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2025?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2025:
-----------------------------------------------
Michal Linhard <mlinhard(a)redhat.com> changed the Status of [bug 818092|https://bugzilla.redhat.com/show_bug.cgi?id=818092] from ON_QA to VERIFIED
> NPE in Externalizer on shutdown
> -------------------------------
>
> Key: ISPN-2025
> URL: https://issues.jboss.org/browse/ISPN-2025
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 5.1.4.FINAL
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 5.1.5.FINAL, 5.2.0.ALPHA1, 5.2.0.Final
>
>
> This is what I get when shutting down one of the clustered nodes (default config standalone-ha.xml) of JDG 6.0.0.ER7:
> {code}
> 09:50:25,505 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: java.lang.NullPointerException
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:222)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37) [jboss-marshalling-1.3.13.GA-redhat-1.jar:1.3.13.GA-redhat-1]
> at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:154)
> at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:114)
> at org.infinispan.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:85)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:50)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:200)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:456) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:363) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:238) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.RSVP.up(RSVP.java:179) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:793) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.stack.Protocol.up(Protocol.java:363) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1180) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1728) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1710) [jgroups-3.0.9.Final-redhat-1.jar:3.0.9.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA-redhat-1.jar:2.0.0.GA-redhat-1]
> {code}
> I ran the instances out of the box with these commands, binding them to virtual IPs on my laptop:
> {code}
> server1/bin/standalone.sh -b 192.168.11.101 -c standalone-ha.xml -Djboss.bind.address.management=192.168.11.101 -Djboss.node.name=node1
> server2/bin/standalone.sh -b 192.168.11.102 -c standalone-ha.xml -Djboss.bind.address.management=192.168.11.102 -Djboss.node.name=node2
> {code}
> The shutdown was graceful, via Ctrl+C from the console.
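The NPE at ExternalizerTable.readObject suggests a shutdown race: a remote command arrives after the externalizer table has already been stopped, so a lookup that is normally non-null returns null. A purely hypothetical sketch of the kind of defensive guard that turns this into a clear error instead of an NPE (the class, field, and method names below are invented for illustration and are not the actual Infinispan fix):
{code}
// Hypothetical illustration, not Infinispan source: an externalizer lookup table
// that fails with a clear message when a remote command arrives after shutdown,
// instead of throwing a NullPointerException.
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GuardedExternalizerTable {
   private final Map<Integer, Object> readers = new ConcurrentHashMap<Integer, Object>();
   private volatile boolean stopped;

   public void stop() {
      stopped = true;
      readers.clear();
   }

   public Object lookup(int externalizerId) throws IOException {
      Object externalizer = readers.get(externalizerId);
      if (externalizer == null) {
         if (stopped) {
            throw new IOException("Marshaller is stopped; ignoring remote command received during shutdown");
         }
         throw new IOException("No externalizer registered for id " + externalizerId);
      }
      return externalizer;
   }
}
{code}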