[JBoss JIRA] (ISPN-2693) ByteArrayKey should print out its hashCode
by Radim Vansa (JIRA)
Radim Vansa created ISPN-2693:
---------------------------------
Summary: ByteArrayKey should print out its hashCode
Key: ISPN-2693
URL: https://issues.jboss.org/browse/ISPN-2693
Project: Infinispan
Issue Type: Bug
Components: Marshalling
Reporter: Radim Vansa
Assignee: Galder Zamarreño
Priority: Minor
When a ByteArrayKey is printed out, the format is {{ByteArrayKey{data=ByteArray{size=..., hashCode=..., array=...}}}}
However, ByteArray computes its hashCode using array.hashCode() instead of Arrays.hashCode(array); as a result, two equal ByteArrayKeys print different hashCodes.
Another way to fix this could be to use Arrays.hashCode(array) in Util.printArray() (although I am not sure whether that could break anything).
As the result is pretty unexpected, I consider this a bug rather than a feature request.
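A minimal JDK-only sketch of the mismatch (this is not Infinispan's ByteArrayKey itself, just the underlying contract):
{noformat}
import java.util.Arrays;

public class HashDemo {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // array.hashCode() is Object.hashCode(): identity-based, so two
        // arrays with equal contents almost always report different values.
        System.out.println(a.hashCode() == b.hashCode());             // almost always false

        // Arrays.hashCode(array) is content-based: equal contents, equal hash.
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // always true
    }
}
{noformat}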
[JBoss JIRA] (ISPN-2410) A command forwarded back to the originator can time out waiting on a key already locked by itself
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2410?page=com.atlassian.jira.plugin.... ]
Dan Berindei resolved ISPN-2410.
--------------------------------
Resolution: Done
Both the transactional case and the non-transactional case are now handled.
> A command forwarded back to the originator can time out waiting on a key already locked by itself
> -------------------------------------------------------------------------------------------------
>
> Key: ISPN-2410
> URL: https://issues.jboss.org/browse/ISPN-2410
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 5.2.0.CR1
>
>
> If a rebalance happens while a prepare command is executing on a remote node, and the originator has become an owner, it makes sense to forward the command back to the originator to lock the keys (or just add them to the backup locks list).
> However, we don't keep the old consistent hashes around, so we don't know whether the originator became an owner after the remote command was invoked or was already an owner. So whenever the topology has changed, we always forward the prepare back to the new owners, including the originator.
> Back on the originator, minTxTopologyId < currentTopologyId, so the prepare command has to wait for all the backup locks from pending transactions to be released. The problem is that we wait for the current transaction as well, causing a deadlock.
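> The self-wait can be reduced to a plain-Java toy (the names and data structures below are hypothetical stand-ins, not Infinispan's actual internals):
> {noformat}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.TimeUnit;
>
> public class SelfWaitDemo {
>     // backup locks held per transaction (stand-in for the backup locks list)
>     static final Map<String, CountDownLatch> backupLocks = new ConcurrentHashMap<>();
>
>     public static void main(String[] args) throws InterruptedException {
>         String myGtx = "GlobalTransaction:<NodeA>:4353";
>         backupLocks.put(myGtx, new CountDownLatch(1)); // held by our own tx
>
>         // The forwarded prepare waits for *all* pending backup locks,
>         // including the one held by its own transaction, so it times out.
>         for (Map.Entry<String, CountDownLatch> e : backupLocks.entrySet()) {
>             // The fix: skip our own transaction instead of waiting on it.
>             // if (e.getKey().equals(myGtx)) continue;
>             boolean released = e.getValue().await(1, TimeUnit.SECONDS);
>             System.out.println(e.getKey() + " released: " + released); // false
>         }
>     }
> }
> {noformat}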
> Seen in OnePhaseXATest:
> {noformat}
> 18:07:46,873 TRACE (testng-OnePhaseXATest:TestCache) [RpcManagerImpl] NodeA-46125 broadcasting call PrepareCommand {modifications=[PutKeyValueCommand{key=key0, value=value, flags=null, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1}], onePhaseCommit=false, gtx=GlobalTransaction:<NodeA-46125>:4353:local, cacheName='TestCache', topologyId=-1} to recipient list null
> 18:07:46,873 DEBUG (transport-thread-2,NodeA:TestCache) [LocalTopologyManagerImpl] Updating local consistent hash(es) for cache TestCache: new topology = CacheTopology{id=2, currentCH=ReplicatedConsistentHash{members=[NodeA-46125, NodeB-49450]}, pendingCH=null}
> 18:07:46,894 TRACE (OOB-1,ISPN,NodeB-49450:TestCache) [StateTransferManagerImpl] Forwarding command PrepareCommand {modifications=[PutKeyValueCommand{key=key0, value=value, flags=null, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1}], onePhaseCommit=false, gtx=GlobalTransaction:<NodeA-46125>:4353:remote, cacheName='TestCache', topologyId=2} to new targets [NodeA-46125]
> 18:07:46,935 TRACE (OOB-3,ISPN,NodeA-46125:TestCache) [StateTransferInterceptor] handleTopologyAffectedCommand for command PrepareCommand {modifications=[PutKeyValueCommand{key=key0, value=value, flags=null, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1}], onePhaseCommit=false, gtx=GlobalTransaction:<NodeA-46125>:4353:remote, cacheName='TestCache', topologyId=2}, originLocal=false
> 18:07:46,935 TRACE (OOB-3,ISPN,NodeA-46125:TestCache) [AbstractCacheTransaction] Transaction gtx=GlobalTransaction:<NodeA-46125>:4353:local potentially locks key key0? true
> 18:08:16,874 TRACE (testng-OnePhaseXATest:TestCache) [RpcManagerImpl] replication exception:
> org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to NodeB-49450
> {noformat}
[JBoss JIRA] (ISPN-800) Infinispan inside OSGI
by Daniel Chapman (JIRA)
[ https://issues.jboss.org/browse/ISPN-800?page=com.atlassian.jira.plugin.s... ]
Daniel Chapman commented on ISPN-800:
-------------------------------------
The Resolution/Fix Version still says 6.0, which I see is scheduled for Oct 2013 for this capability. Is this still accurate?
> Infinispan inside OSGI
> ----------------------
>
> Key: ISPN-800
> URL: https://issues.jboss.org/browse/ISPN-800
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core API
> Reporter: Luca Stancapiano
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> We need to import Infinispan inside an OSGi container. Tests were made with Felix.
> I added the configuration to use Infinispan inside an OSGi container. We need to ignore all the listed dependencies. With this configuration we can install infinispan-core.jar inside OSGi. The result can serve as a base installation, available here: https://github.com/flashboss/infinispan
> I added the Import-Package header because otherwise you are forced to manually install in Felix all dependencies such as JGroups, JBoss Marshalling, JCIP, and all the Apache Commons libraries. I've seen Infinispan core work by default without all those libraries, so the same behaviour should be replicated in OSGi.
> Inside the Import-Package header I excluded those libraries so that Infinispan core can start in default mode without errors. If we want to use replication in OSGi, it is enough to manually add the other packages (jgroups.jar, etc.).
> Currently the core bundle can be installed, but to be used it needs these projects to be installed as OSGi bundles:
> jboss transaction api 1.0.1.GA
> (We patched it. There is a new OSGi version here: https://repository.jboss.org/nexus/content/groups/public/org/jboss/spec/j... )
> jgroups 2.10.1.GA
> (it's an OSGi bundle since the 3.x version)
> river 1.2.3.GA
> (opened an issue for marshalling 1.4.0 in JBMAR-118 and https://github.com/flashboss/jboss-marshalling/blob/master/river/pom.xml )
> marshalling-api 1.2.3.GA
> (opened an issue for marshalling 1.4.0 in JBMAR-118 and https://github.com/flashboss/jboss-marshalling/blob/master/api/pom.xml )
> jboss logging spi 2.0.5.GA
> (added a JIRA issue in JBLOGGING-51. It could be fixed in the 2.2.0.CR2 version. Fixed in the 3.x version)
> rhq plugin annotations 1.4.0.B01
> (opened a feature request in https://bugzilla.redhat.com/show_bug.cgi?id=657754 )
> i18nlog 1.0.9
> (sent a patch in https://sourceforge.net/projects/i18nlog. It could become an OSGi bundle in the 1.0.10 version. Waiting for a response. Fixed in 1.15)
> log4j 1.2.16
> (that's ok... it is an OSGi bundle ;))
> jcip-annotations 1.0
> (I sent a patch via email to brian(a)briangoetz.com and a post on http://tembrel.blogspot.com. I sent the patch to concurrency-interest(a)cs.oswego.edu too. They responded to me. There is an OSGi version with a different artifact name; I changed the dependency in the pom.xml of the parent project)
> We should make sure a proper 'Import-Package' property is specified in the MANIFEST.MF so that:
> 1 - it fails to load with an obvious error when any bundle essential to Infinispan's core functionality is missing;
> 2 - it does not fail due to dependencies that are not really essential (see the sketch after this list).
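> A hypothetical sketch of such a header (package names and version ranges are illustrative, not the real Infinispan manifest): essential packages get strict imports, while optional integrations are marked resolution:=optional so their absence doesn't stop the bundle from resolving:
> {noformat}
> Import-Package: javax.transaction;version="[1.1,2)",
>  org.jgroups;resolution:=optional,
>  org.jboss.marshalling;resolution:=optional,
>  net.jcip.annotations;resolution:=optional
> {noformat}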
[JBoss JIRA] (ISPN-2655) Make HotRod client always read from the main data owner
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2655?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2655:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
Thanks Tristan, integrated.
> Make HotRod client always read from the main data owner
> -------------------------------------------------------
>
> Key: ISPN-2655
> URL: https://issues.jboss.org/browse/ISPN-2655
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.2.0.Beta6
> Reporter: Mircea Markus
> Assignee: Tristan Tarrant
> Fix For: 5.2.0.CR1, 5.2.0.Final
>
>
> ISPN-2643 made the Java Hot Rod client always write to the main owner. ATM the client picks a random owner for reading, though. This read load balancing doesn't really help: assuming the data is evenly spread, reads would be distributed uniformly across the cluster anyway. OTOH, forcing the client to always read from the main owner would guarantee read consistency for *async* replicated caches. Read consistency might still be a problem when a node crashes, but this is a much stronger guarantee and makes async replication usable in many more scenarios.
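> A minimal sketch of the routing change (hypothetical names, not the actual HotRod client API): writes already go to owner index 0, and the change makes reads use the same index instead of a random one.
> {noformat}
> import java.net.InetSocketAddress;
> import java.util.Arrays;
> import java.util.List;
>
> public class PrimaryOwnerRouting {
>     static InetSocketAddress addr(int port) {
>         return new InetSocketAddress("127.0.0.1", port);
>     }
>
>     public static void main(String[] args) {
>         // owner lists per segment; index 0 is the primary owner (illustrative)
>         List<List<InetSocketAddress>> owners = Arrays.asList(
>             Arrays.asList(addr(11222), addr(11223)),
>             Arrays.asList(addr(11223), addr(11224)),
>             Arrays.asList(addr(11224), addr(11222)));
>
>         byte[] key = "key0".getBytes();
>         int segment = (Arrays.hashCode(key) & Integer.MAX_VALUE) % owners.size();
>
>         // Before: pick a random entry of owners.get(segment) for reads.
>         // After: always the primary, so a read observes the write that was
>         // routed to the same node.
>         InetSocketAddress target = owners.get(segment).get(0);
>         System.out.println("read key0 from primary owner " + target);
>     }
> }
> {noformat}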
[JBoss JIRA] (ISPN-2483) State transfer issue with the transactions for which the originator has crashed
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2483?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2483:
--------------------------------
Priority: Critical (was: Blocker)
> State transfer issue with the transactions for which the originator has crashed
> -------------------------------------------------------------------------------
>
> Key: ISPN-2483
> URL: https://issues.jboss.org/browse/ISPN-2483
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer, Transactions
> Affects Versions: 5.1.8.Final, 5.2.0.Beta3
> Reporter: Mircea Markus
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.0.CR1, 5.2.0.Final
>
>
> State transfer migrates and prepares the transactions for which the originator has left. On the receiving node, this results in the transaction being prepared and acquiring backup locks that are never released (without manual intervention).
> This should behave as follows (a sketch follows the list):
> - if recovery is not enabled, the state producer should not send such transactions but drop them
> - if recovery is enabled, these transactions should be sent across; they shouldn't be prepared or acquire backup locks, but should be placed in the recovery cache (see RecoveryManagerImpl.inDoubtTransactions)
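> In toy Java, the intended producer-side behaviour (all names are hypothetical; RecoveryManagerImpl.inDoubtTransactions is the only real reference point):
> {noformat}
> import java.util.ArrayList;
> import java.util.List;
>
> public class OrphanTxTransfer {
>     record Tx(String gtx, boolean originatorLeft) {}
>
>     public static void main(String[] args) {
>         boolean recoveryEnabled = true;
>         List<Tx> toTransfer = List.of(
>             new Tx("gtx:NodeB:17", false),
>             new Tx("gtx:NodeC:42", true)); // originator crashed
>
>         List<Tx> toPrepare = new ArrayList<>(); // prepared normally on the receiver
>         List<Tx> inDoubt = new ArrayList<>();   // recovery-cache stand-in
>
>         for (Tx tx : toTransfer) {
>             if (tx.originatorLeft()) {
>                 if (!recoveryEnabled) continue; // drop: it can never complete
>                 inDoubt.add(tx);                // no prepare, no backup locks
>             } else {
>                 toPrepare.add(tx);
>             }
>         }
>         System.out.println("prepare=" + toPrepare + " inDoubt=" + inDoubt);
>     }
> }
> {noformat}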
[JBoss JIRA] (ISPN-2566) TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2566?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2566:
-----------------------------------------------
Tomas Sykora <tsykora(a)redhat.com> made a comment on [bug 868832|https://bugzilla.redhat.com/show_bug.cgi?id=868832]
Hi Dan,
please see the two latest TRACE logs for more information. In ER6/ER7, server hinting for rack and site was repaired. Machine hinting still seems to not be working.
When I was looking into the logs I found these differences:
for MACHINE (test is not passing):
14:22:16,074 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (OOB-76,null) Installing new cache topology CacheTopology{id=4, currentCH=DefaultConsistentHash{numSegments=1, numOwners=2, members=[node0/default(primary), node1/default(primary), node2/default(primary)], owners={0: 0 2}, pendingCH=null} on cache topology
for SITE (is ok, was fixed in ER6):
14:20:25,315 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (OOB-76,null) Installing new cache topology CacheTopology{id=4, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2, members=[node0/default(primary), node1/default(primary), node2/default(secondary)], owners={0: 0 2, 1: 0 2, 2: 0 2, 3: 0 2, 4: 0 2, 5: 0 2, 6: 0 2, 7: 0 2, 8: 0 2, 9: 0 2, 10: 0 2, 11: 0 2, 12: 0 2, 13: 0 2, 14: 0 2, 15: 0 2, 16: 0 2, 17: 0 2, 18: 0 2, 19: 0 2, 20: 0 2, 21: 0 2, 22: 0 2, 23: 0 2, 24: 0 2, 25: 0 2, 26: 0 2, 27: 2 0, 28: 2 0, 29: 2 0, 30: 2 0, 31: 2 0, 32: 2 0, 33: 2 0, 34: 2 0, 35: 2 0, 36: 2 0, 37: 2 0, 38: 2 0, 39: 2 0, 40: 1 2, 41: 1 2, 42: 1 2, 43: 1 2, 44: 1 2, 45: 1 2, 46: 1 2, 47: 1 2, 48: 1 2, 49: 1 2, 50: 1 2, 51: 1 2, 52: 1 2, 53: 1 2, 54: 1 2, 55: 1 2, 56: 1 2, 57: 1 2, 58: 1 2, 59: 1 2, 60: 1 2, 61: 1 2, 62: 1 2, 63: 1 2, 64: 1 2, 65: 1 2, 66: 1 2, 67: 2 1, 68: 2 1, 69: 2 1, 70: 2 1, 71: 2 1, 72: 2 1, 73: 2 1, 74: 2 1, 75: 2 1, 76: 2 1, 77: 2 1, 78: 2 1, 79: 2 1}, pendingCH=null} on cache topology
Could this be a potential problem?
Thank you very much for your investigation. If you need any other info, let me know.
Setting back to ON_DEV for now, even though 2 of the 3 cases were fixed and verified.
> TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
> -------------------------------------------------------------------------------
>
> Key: ISPN-2566
> URL: https://issues.jboss.org/browse/ISPN-2566
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.Beta4
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.0.Beta6, 5.2.0.Final
>
>
> Say we have a topology-aware cache with numOwners = 2 and two nodes: A(machine=m1) and B(machine=m1). When node C(machine=m2) joins, it should own every key, either as a primary or as a backup owner, since C is the only node on a second machine and every segment needs a copy there. This doesn't happen: node C owns just as many segments as nodes A and B do (see the toy check after the log below).
> Example:
> {noformat}
> 19:21:17,295 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (undefined) Updating cache topology topology for rebalance:
> CacheTopology{id=3, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 0 1, 28: 0 1, 29: 0 1, 30: 0 1, 31: 0 1,
> 32: 0 1, 33: 0 1, 34: 0 1, 35: 0 1, 36: 0 1, 37: 0 1, 38: 0 1, 39: 0 1,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 1 0, 68: 1 0, 69: 1 0, 70: 1 0, 71: 1 0,
> 72: 1 0, 73: 1 0, 74: 1 0, 75: 1 0, 76: 1 0, 77: 1 0, 78: 1 0, 79: 1 0},
> pendingCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary), node2/default(secondary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 2 0, 28: 2 0, 29: 2 0, 30: 2 0, 31: 2 0,
> 32: 2 0, 33: 2 0, 34: 2 0, 35: 2 0, 36: 2 0, 37: 2 0, 38: 2 0, 39: 2 0,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 2 1, 68: 2 1, 69: 2 1, 70: 2 1, 71: 2 1,
> 72: 2 1, 73: 2 1, 74: 2 1, 75: 2 1, 76: 2 1, 77: 2 1, 78: 2 1, 79: 2 1}}
> {noformat}
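> A toy assertion of the expected property (hypothetical, not Infinispan code): with A and B on m1 and C alone on m2, a machine-aware rebalance should leave C in every segment's owner list.
> {noformat}
> import java.util.Arrays;
> import java.util.List;
>
> public class TachOwnershipCheck {
>     public static void main(String[] args) {
>         // owners per segment after a machine-aware rebalance (illustrative)
>         List<List<String>> owners = Arrays.asList(
>             Arrays.asList("A", "C"),
>             Arrays.asList("B", "C"),
>             Arrays.asList("C", "A"),
>             Arrays.asList("C", "B"));
>         boolean cOwnsEverySegment = owners.stream().allMatch(o -> o.contains("C"));
>         System.out.println("C owns every segment: " + cOwnsEverySegment); // true
>     }
> }
> {noformat}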
[JBoss JIRA] (ISPN-2318) Reimplement a Topology-Aware Consistent Hash
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2318?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2318:
-----------------------------------------------
Tomas Sykora <tsykora(a)redhat.com> made a comment on [bug 868832|https://bugzilla.redhat.com/show_bug.cgi?id=868832]
Hi Dan,
please see the two latest TRACE logs for more information. In ER6/ER7, server hinting for rack and site was repaired. Machine hinting still seems to not be working.
When I was looking into the logs I found these differences:
for MACHINE (test is not passing):
14:22:16,074 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (OOB-76,null) Installing new cache topology CacheTopology{id=4, currentCH=DefaultConsistentHash{numSegments=1, numOwners=2, members=[node0/default(primary), node1/default(primary), node2/default(primary)], owners={0: 0 2}, pendingCH=null} on cache topology
for SITE (is ok, was fixed in ER6):
14:20:25,315 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (OOB-76,null) Installing new cache topology CacheTopology{id=4, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2, members=[node0/default(primary), node1/default(primary), node2/default(secondary)], owners={0: 0 2, 1: 0 2, 2: 0 2, 3: 0 2, 4: 0 2, 5: 0 2, 6: 0 2, 7: 0 2, 8: 0 2, 9: 0 2, 10: 0 2, 11: 0 2, 12: 0 2, 13: 0 2, 14: 0 2, 15: 0 2, 16: 0 2, 17: 0 2, 18: 0 2, 19: 0 2, 20: 0 2, 21: 0 2, 22: 0 2, 23: 0 2, 24: 0 2, 25: 0 2, 26: 0 2, 27: 2 0, 28: 2 0, 29: 2 0, 30: 2 0, 31: 2 0, 32: 2 0, 33: 2 0, 34: 2 0, 35: 2 0, 36: 2 0, 37: 2 0, 38: 2 0, 39: 2 0, 40: 1 2, 41: 1 2, 42: 1 2, 43: 1 2, 44: 1 2, 45: 1 2, 46: 1 2, 47: 1 2, 48: 1 2, 49: 1 2, 50: 1 2, 51: 1 2, 52: 1 2, 53: 1 2, 54: 1 2, 55: 1 2, 56: 1 2, 57: 1 2, 58: 1 2, 59: 1 2, 60: 1 2, 61: 1 2, 62: 1 2, 63: 1 2, 64: 1 2, 65: 1 2, 66: 1 2, 67: 2 1, 68: 2 1, 69: 2 1, 70: 2 1, 71: 2 1, 72: 2 1, 73: 2 1, 74: 2 1, 75: 2 1, 76: 2 1, 77: 2 1, 78: 2 1, 79: 2 1}, pendingCH=null} on cache topology
Could this be a potential problem?
Thank you very much for your investigation. If you need any other info, let me know.
Setting back to ON_DEV for now, even though 2 of the 3 cases were fixed and verified.
> Reimplement a Topology-Aware Consistent Hash
> --------------------------------------------
>
> Key: ISPN-2318
> URL: https://issues.jboss.org/browse/ISPN-2318
> Project: Infinispan
> Issue Type: Task
> Components: Core API, Distributed Cache
> Affects Versions: 5.2.0.Alpha3
> Reporter: Erik Salter
> Assignee: Dan Berindei
> Priority: Blocker
> Fix For: 5.2.0.Beta3
>
>
> Even with the advent of x-site replication, the TACH is useful to stripe key ownership across machines and/or racks for resiliency. This feature should be refactored from the 5.1 impl.
[JBoss JIRA] (ISPN-2566) TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2566?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2566:
-----------------------------------------------
Tomas Sykora <tsykora(a)redhat.com> changed the Status of [bug 868832|https://bugzilla.redhat.com/show_bug.cgi?id=868832] from ON_QA to ON_DEV
> TopologyAwareConsistentHashFactory rebalance doesn't redistribute data properly
> -------------------------------------------------------------------------------
>
> Key: ISPN-2566
> URL: https://issues.jboss.org/browse/ISPN-2566
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.Beta4
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.0.Beta6, 5.2.0.Final
>
>
> Say we have a topology-aware cache with numOwners = 2 and two nodes: A(machine=m1) and B(machine=m1). When node C(machine=m2) joins, it should own every key, either as a primary or as a backup owner, since C is the only node on a second machine and every segment needs a copy there. This doesn't happen: node C owns just as many segments as nodes A and B do.
> Example:
> {noformat}
> 19:21:17,295 TRACE [org.infinispan.topology.ClusterTopologyManagerImpl] (undefined) Updating cache topology topology for rebalance:
> CacheTopology{id=3, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 0 1, 28: 0 1, 29: 0 1, 30: 0 1, 31: 0 1,
> 32: 0 1, 33: 0 1, 34: 0 1, 35: 0 1, 36: 0 1, 37: 0 1, 38: 0 1, 39: 0 1,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 1 0, 68: 1 0, 69: 1 0, 70: 1 0, 71: 1 0,
> 72: 1 0, 73: 1 0, 74: 1 0, 75: 1 0, 76: 1 0, 77: 1 0, 78: 1 0, 79: 1 0},
> pendingCH=DefaultConsistentHash{numSegments=80, numOwners=2,
> members=[node0/default(primary), node1/default(primary), node2/default(secondary)],
> owners={0: 0 1, 1: 0 1, 2: 0 1, 3: 0 1, 4: 0 1, 5: 0 1, 6: 0 1, 7: 0 1,
> 8: 0 1, 9: 0 1, 10: 0 1, 11: 0 1, 12: 0 1, 13: 0 1, 14: 0 1, 15: 0 1,
> 16: 0 1, 17: 0 1, 18: 0 1, 19: 0 1, 20: 0 1, 21: 0 1, 22: 0 1, 23: 0 1,
> 24: 0 1, 25: 0 1, 26: 0 1, 27: 2 0, 28: 2 0, 29: 2 0, 30: 2 0, 31: 2 0,
> 32: 2 0, 33: 2 0, 34: 2 0, 35: 2 0, 36: 2 0, 37: 2 0, 38: 2 0, 39: 2 0,
> 40: 1 0, 41: 1 0, 42: 1 0, 43: 1 0, 44: 1 0, 45: 1 0, 46: 1 0, 47: 1 0,
> 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 1 0, 53: 1 0, 54: 1 0, 55: 1 0,
> 56: 1 0, 57: 1 0, 58: 1 0, 59: 1 0, 60: 1 0, 61: 1 0, 62: 1 0, 63: 1 0,
> 64: 1 0, 65: 1 0, 66: 1 0, 67: 2 1, 68: 2 1, 69: 2 1, 70: 2 1, 71: 2 1,
> 72: 2 1, 73: 2 1, 74: 2 1, 75: 2 1, 76: 2 1, 77: 2 1, 78: 2 1, 79: 2 1}}
> {noformat}