[JBoss JIRA] (ISPN-6026) Segment-aware shared cache stores
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6026?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6026:
----------------------------------
Sprint: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> Segment-aware shared cache stores
> ---------------------------------
>
> Key: ISPN-6026
> URL: https://issues.jboss.org/browse/ISPN-6026
> Project: Infinispan
> Issue Type: Enhancement
> Components: Loaders and Stores
> Reporter: Tristan Tarrant
> Assignee: William Burns
>
> Shared cache stores (e.g. JDBC) should be segment-aware so that they only load the keys they own.
> Possible implementation: add a segment id column so that the store can issue {{SELECT * FROM table WHERE segment_id = segment}}.
> Need to think about how to reconstruct segment ids.
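> A minimal sketch of that possible implementation, in Java with illustrative table and column names (not the actual JDBC store schema):
> {noformat}
> import java.sql.Connection;
> import java.sql.PreparedStatement;
> import java.sql.ResultSet;
>
> public class SegmentAwareJdbcLoader {
>    // Load only the segments this node owns, one query per segment.
>    // A real store might batch the owned segments into an IN (...) clause.
>    void loadOwnedSegments(Connection conn, int[] ownedSegments) throws Exception {
>       String sql = "SELECT id, datum FROM ispn_entries WHERE segment_id = ?";
>       try (PreparedStatement ps = conn.prepareStatement(sql)) {
>          for (int segment : ownedSegments) {
>             ps.setInt(1, segment);
>             try (ResultSet rs = ps.executeQuery()) {
>                while (rs.next()) {
>                   // materialize the cache entry for this segment
>                }
>             }
>          }
>       }
>    }
> }
> {noformat}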
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5997) JMX operation ClusterCacheStats.resetStatistics() not working
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5997?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5997:
----------------------------------
Sprint: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> JMX operation ClusterCacheStats.resetStatistics() not working
> -------------------------------------------------------------
>
> Key: ISPN-5997
> URL: https://issues.jboss.org/browse/ISPN-5997
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.1.0.CR1
> Reporter: Jiří Holuša
> Assignee: Vladimir Blagojevic
>
> Environment: 2 servers started in domain mode (ISPN 8.1.0.CR1).
> I have a clustered cache and inserted/read some entries (just to create some statistics).
> When connected via JMX to one of the servers (e.g. through JConsole, see https://issues.jboss.org/browse/ISPN-5983 ), I execute the resetStatistics() operation on the ClusterCacheStats MBean (object name: jboss.datagrid-infinispan:type=Cache,name="<cache-name>",manager="<cache-container-name>",component=ClusterCacheStats) and nothing happens: the cluster statistics are not reset.
> To actually reset the statistics, I have to connect to each server in the domain individually and call resetStatistics() on its Statistics MBean, which clears that particular server's contribution to the cluster stats.
> I would expect ClusterCacheStats.resetStatistics() to do exactly that: call Statistics.resetStatistics() on each server.
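> For reference, a sketch of the manual workaround, assuming hypothetical JMX service URLs and a cache named "default" in container "clustered" (the object name pattern follows the one above, with component=Statistics):
> {noformat}
> import javax.management.MBeanServerConnection;
> import javax.management.ObjectName;
> import javax.management.remote.JMXConnector;
> import javax.management.remote.JMXConnectorFactory;
> import javax.management.remote.JMXServiceURL;
>
> public class ResetStatsWorkaround {
>    public static void main(String[] args) throws Exception {
>       // One JMX service URL per server in the domain (placeholders).
>       String[] urls = { "service:jmx:remote+http://host1:9990",
>                         "service:jmx:remote+http://host2:9990" };
>       ObjectName stats = new ObjectName(
>             "jboss.datagrid-infinispan:type=Cache,name=\"default\","
>                   + "manager=\"clustered\",component=Statistics");
>       for (String url : urls) {
>          JMXConnector c = JMXConnectorFactory.connect(new JMXServiceURL(url));
>          try {
>             MBeanServerConnection conn = c.getMBeanServerConnection();
>             conn.invoke(stats, "resetStatistics", null, null); // per-node reset
>          } finally {
>             c.close();
>          }
>       }
>    }
> }
> {noformat}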
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9036) Parameterized branding for C++
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9036?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9036:
----------------------------------
Sprint: Sprint 9.3.0.Alpha1, Sprint 9.3.0.Beta1, Sprint 9.3.0.CR1, Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.3.0.Alpha1, Sprint 9.3.0.Beta1, Sprint 9.3.0.CR1, Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> Parameterized branding for C++
> ------------------------------
>
> Key: ISPN-9036
> URL: https://issues.jboss.org/browse/ISPN-9036
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Tristan Tarrant
> Assignee: Vittorio Rigamonti
>
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9332) REPL local iteration optimization cannot be used when store has write behind
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9332?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9332:
----------------------------------
Sprint: Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.CR1)
> REPL local iteration optimization cannot be used when store has write behind
> ----------------------------------------------------------------------------
>
> Key: ISPN-9332
> URL: https://issues.jboss.org/browse/ISPN-9332
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores, Streams
> Affects Versions: 9.3.0.Final
> Reporter: William Burns
> Assignee: William Burns
>
> When write behind is enabled, the pending write modification is held only on the primary owner. REPL local iteration can read non-primarily-owned data, causing an inconsistency.
> Distributed streams should therefore always go remote when write behind is enabled and not all segments are primarily owned by the given node.
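> An illustrative sketch of the check that fix implies (names are hypothetical, not Infinispan internals):
> {noformat}
> import java.util.Set;
>
> public class LocalIterationPolicy {
>    // The REPL local-iteration optimization is only safe with write behind
>    // when this node is the primary owner of every segment, because pending
>    // writes are queued on primary owners only.
>    boolean canIterateLocally(Set<Integer> primarilyOwnedSegments,
>                              int numSegments, boolean writeBehindEnabled) {
>       if (!writeBehindEnabled) {
>          return true; // no pending per-owner state, local read is safe
>       }
>       return primarilyOwnedSegments.size() == numSegments;
>    }
> }
> {noformat}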
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8234) Add support for @Spatial, @Latitude, @Longitude annotations in protobuf schema
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8234?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-8234:
----------------------------------
Sprint: Sprint 9.3.0.Beta1, Sprint 9.3.0.CR1, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.3.0.Beta1, Sprint 9.3.0.CR1, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> Add support for @Spatial, @Latitude, @Longitude annotations in protobuf schema
> ------------------------------------------------------------------------------
>
> Key: ISPN-8234
> URL: https://issues.jboss.org/browse/ISPN-8234
> Project: Infinispan
> Issue Type: Feature Request
> Components: Remote Querying
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Fix For: 9.4.0.Final
>
>
> These annotations must behave identically to their Hibernate Search counterparts defined in the org.hibernate.search.annotations package.
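> For reference, the Hibernate Search 5.x counterparts on an indexed Java entity look like this (entity and field names are illustrative):
> {noformat}
> import org.hibernate.search.annotations.Indexed;
> import org.hibernate.search.annotations.Latitude;
> import org.hibernate.search.annotations.Longitude;
> import org.hibernate.search.annotations.Spatial;
>
> @Indexed
> @Spatial(name = "location")
> public class Restaurant {
>    @Latitude(of = "location")
>    private double latitude;
>
>    @Longitude(of = "location")
>    private double longitude;
> }
> {noformat}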
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-2445) Safe shutdown - transfer data to other nodes
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-2445?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-2445:
----------------------------------
Sprint: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> Safe shutdown - transfer data to other nodes
> --------------------------------------------
>
> Key: ISPN-2445
> URL: https://issues.jboss.org/browse/ISPN-2445
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Reporter: Matej Lazar
> Assignee: Dan Berindei
>
> Distributed caches should provide a configuration option to block shutdown until data from the shutting-down node has been transferred to other nodes.
> This feature is useful if Infinispan is used as persistent storage.
> Without this option, in a cloud environment where a server node is deleted on a scale-down event, users are forced to use external storage if they don't want to lose data.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-7624) Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-7624?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-7624:
----------------------------------
Sprint: Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.CR1)
> Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
> --------------------------------------------------------------------------------------------------------
>
> Key: ISPN-7624
> URL: https://issues.jboss.org/browse/ISPN-7624
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 9.4.0.Final
>
>
> CacheLoaderInterceptor currently reads all in-memory entries and then the store for bulk operations. This can be changed to load only from the store when passivation is not used, since without passivation the store should contain all entries, including those that aren't in memory.
> The only issue is when Flag.SKIP_CACHE_STORE was used on a write, since such entries never reach the store. In that case we could store a boolean in the interceptor so that, once this flag has been used, we always read both memory and the store. Very few users use this flag, so the change should improve performance a bit and entries should be seen more consistently.
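> A hypothetical sketch of that idea (names are illustrative, not the real CacheLoaderInterceptor fields):
> {noformat}
> public class BulkReadPolicy {
>    private static final long SKIP_CACHE_STORE = 1L << 0; // illustrative flag bit
>    private volatile boolean skipCacheStoreSeen;
>
>    void onWrite(long flags) {
>       if ((flags & SKIP_CACHE_STORE) != 0) {
>          skipCacheStoreSeen = true; // the store may now be missing entries
>       }
>    }
>
>    boolean mustReadMemoryAndStore(boolean passivationEnabled) {
>       // With passivation, memory and store are disjoint: always read both.
>       // Without it, the store alone suffices unless a write ever skipped it.
>       return passivationEnabled || skipCacheStoreSeen;
>    }
> }
> {noformat}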
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9291) BasePartitionHandlingTest.Partition.installMergeView() doesn't compute the merge digest
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9291?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9291:
----------------------------------
Sprint: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> BasePartitionHandlingTest.Partition.installMergeView() doesn't compute the merge digest
> ---------------------------------------------------------------------------------------
>
> Key: ISPN-9291
> URL: https://issues.jboss.org/browse/ISPN-9291
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.3.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.4.0.Final
>
>
> The partition handling tests use {{BasePartitionHandlingTest.Partition.installMergeView(view1, view2)}} to install the merge view without waiting for {{MERGE3}} to run, making them much faster. Unfortunately, the implementation is incorrect: {{GMS.installView(view)}} only works for regular views; merge views need to be installed with {{GMS.installView(mergeView, digest)}}.
> The result is that the nodes that got isolated from the coordinator request the retransmission of all the {{NAKACK2}} messages (including view updates) since the cluster first started. The isolated nodes cannot install the merge view until they deliver all the older messages (even without knowing whether they're OOB or not). But if {{STABLE}} ran and cleared a range of messages already, the retransmission request cannot be satisfied, so the view updates will never be delivered.
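> A sketch of the implied fix, using the JGroups API named above (helper structure is illustrative, not the exact test-suite code):
> {noformat}
> import java.util.List;
> import org.jgroups.JChannel;
> import org.jgroups.MergeView;
> import org.jgroups.protocols.pbcast.GMS;
> import org.jgroups.protocols.pbcast.NAKACK2;
> import org.jgroups.util.MutableDigest;
>
> public class MergeViewInstaller {
>    static void installMergeView(List<JChannel> channels, MergeView mergeView) {
>       // Combine each member's NAKACK2 digest so the merge view does not
>       // trigger retransmission requests for pre-partition messages.
>       MutableDigest digest = new MutableDigest(mergeView.getMembersRaw());
>       for (JChannel ch : channels) {
>          NAKACK2 nakack = ch.getProtocolStack().findProtocol(NAKACK2.class);
>          digest.merge(nakack.getDigest());
>       }
>       for (JChannel ch : channels) {
>          GMS gms = ch.getProtocolStack().findProtocol(GMS.class);
>          gms.installView(mergeView, digest); // not installView(view) alone
>       }
>    }
> }
> {noformat}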
> This is easily reproducible in {{CrashedNodeDuringConflictResolutionTest}} if we add a delay before updating the topology in {{StateConsumerImpl}}. The test installs the merge view manually, but then kills NodeC and expects the cluster to install the new view automatically. NodeD can't install the new view because it's waiting for earlier messages from NodeA:
> {noformat}
> 18:27:13,054 INFO (testng-test:[]) [TestSuiteProgress] Test starting: org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.testPartitionMergePolicy[DIST_SYNC]
> 18:27:13,640 DEBUG (testng-test:[]) [GMS] test-NodeA-39513: installing view MergeView::[test-NodeA-39513|10] (4) [test-NodeA-39513, test-NodeB-9439, test-NodeC-43706, test-NodeD-59078], 2 subgroups: [test-NodeA-39513|8] (2) [test-NodeA-39513, test-NodeB-9439], [test-NodeC-43706|9] (2) [test-NodeC-43706, test-NodeD-59078]
> 18:27:13,674 DEBUG (testng-test:[]) [GMS] test-NodeD-59078: installing view MergeView::[test-NodeA-39513|10] (4) [test-NodeA-39513, test-NodeB-9439, test-NodeC-43706, test-NodeD-59078], 2 subgroups: [test-NodeA-39513|8] (2) [test-NodeA-39513, test-NodeB-9439], [test-NodeC-43706|9] (2) [test-NodeC-43706, test-NodeD-59078]
> 18:27:13,828 TRACE (jgroups-7,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((1): {50}) to test-NodeA-39513
> 18:27:13,966 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((49): {1-49}) to test-NodeA-39513
> 18:27:14,067 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:14,504 DEBUG (testng-test:[]) [DefaultCacheManager] Stopping cache manager ISPN on test-NodeC-43706
> 18:27:18,642 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: joiners=[], suspected=[test-NodeC-43706], leaving=[], new view: [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,643 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: mcasting view [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,646 DEBUG (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: installing view [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,652 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [TCP_NIO2] test-NodeA-39513: sending msg to null, src=test-NodeA-39513, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=63], TP: [cluster_name=ISPN]
> 18:27:18,656 TRACE (jgroups-20,test-NodeA-39513:[]) [TCP_NIO2] test-NodeA-39513: received [dst: test-NodeA-39513, src: test-NodeB-9439 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=100, TP: [cluster_name=ISPN]
> 18:27:20,554 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:20,653 WARN (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: failed to collect all ACKs (expected=2) for view [test-NodeA-39513|11] after 2000ms, missing 1 ACKs from (1) test-NodeD-59078
> 18:27:20,656 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:20,756 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> ...
> 18:28:14,412 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:28:14,513 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:28:14,589 ERROR (testng-test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.testPartitionMergePolicy[DIST_SYNC]
> java.lang.RuntimeException: Cache ___defaultcache timed out waiting for rebalancing to complete on node test-NodeA-39513, current topology is CacheTopology{id=21, phase=CONFLICT_RESOLUTION, rebalanceId=7, currentCH=PartitionerConsistentHash:DefaultConsistentHash{ns=256, owners = (3)[test-NodeD-59078: 256+0, test-NodeA-39513: 0+256, test-NodeB-9439: 0+256]}, pendingCH=null, unionCH=null, actualMembers=[test-NodeD-59078, test-NodeA-39513, test-NodeB-9439], persistentUUIDs=[828108c4-4251-49fc-9481-ff6392bea9fb, 1d4b6f07-b71b-41a1-adfb-abbe68944a9f, 3a1ece05-c282-433e-9eb5-7b3e0f1932aa]}. rebalanceInProgress=true, currentChIsBalanced=true
> at org.infinispan.test.TestingUtil.waitForNoRebalance(TestingUtil.java:392) ~[test-classes/:?]
> at org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.performMerge(CrashedNodeDuringConflictResolutionTest.java:113) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:137) ~[test-classes/:?]
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9270) ClusteredLockWith2NodesTest.testTryLockAndKillLocking random failures
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9270?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9270:
----------------------------------
Sprint: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> ClusteredLockWith2NodesTest.testTryLockAndKillLocking random failures
> ---------------------------------------------------------------------
>
> Key: ISPN-9270
> URL: https://issues.jboss.org/browse/ISPN-9270
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.3.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.4.0.Final
>
>
> The test stops the coordinator A of a 2-node cluster \[A, B\], and then verifies that B stays available.
> Unfortunately that doesn't always happen: A tries to install topology \[B\] before shutting down, but B might receive it too late and reject it. If B expects members \[A, B\] and sees only B, the cache will enter degraded mode:
> {noformat}
> 19:04:05,863 INFO (jgroups-8,Test-NodeB-21374:[org.infinispan.LOCKS]) [CLUSTER] [Context=org.infinispan.LOCKS] ISPN100008: Updating cache members list [Test-NodeD-51051], topology id 7
> 19:04:05,863 TRACE (jgroups-4,Test-NodeD-51051:[]) [JGroupsTransport] Test-NodeD-51051 received command from Test-NodeB-21374: CacheTopologyControlCommand{cache=org.infinispan.LOCKS, type=CH_UPDATE, sender=Test-NodeB-21374, joinInfo=null, topologyId=7, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[Test-NodeD-51051: 256]}, pendingCH=null, availabilityMode=AVAILABLE, phase=NO_REBALANCE, actualMembers=[Test-NodeD-51051], throwable=null, viewId=1}
> 19:04:05,880 INFO (jgroups-4,Test-NodeD-51051:[]) [CLUSTER] ISPN000094: Received new cluster view for channel ISPN: [Test-NodeD-51051|2] (1) [Test-NodeD-51051]
> 19:04:05,884 DEBUG (stateTransferExecutor-thread-Test-NodeD-p101-t1:[Merge-2]) [ClusterCacheStatus] Recovered 1 partition(s) for cache org.infinispan.LOCKS: [CacheTopology{id=6, phase=NO_REBALANCE, rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[Test-NodeB-21374: 124, Test-NodeD-51051: 132]}, pendingCH=null, unionCH=null, actualMembers=[Test-NodeD-51051], persistentUUIDs=[6ac3a341-5af4-4aa2-ae7b-a4183d02d959]}]
> 19:04:05,884 WARN (stateTransferExecutor-thread-Test-NodeD-p101-t1:[Merge-2]) [CLUSTER] [Context=org.infinispan.LOCKS] ISPN000320: After merge (or coordinator change), cache still hasn't recovered a majority of members and must stay in degraded mode. Current members are [Test-NodeD-51051], lost members are [Test-NodeB-21374], stable members are [Test-NodeB-21374, Test-NodeD-51051]
> 19:04:05,885 INFO (stateTransferExecutor-thread-Test-NodeD-p101-t1:[Merge-2]) [CLUSTER] [Context=org.infinispan.LOCKS] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 8
> 19:04:05,941 DEBUG (transport-thread-Test-NodeD-p100-t2:[Topology-org.infinispan.LOCKS]) [LocalTopologyManagerImpl] Ignoring topology 7 for cache org.infinispan.LOCKS from old coordinator Test-NodeB-21374
> 19:04:06,024 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.lock.ClusteredLockWith2NodesTest.testTryLockAndKillLocking
> java.lang.Error: java.util.concurrent.ExecutionException: java.lang.Error: java.util.concurrent.ExecutionException: org.infinispan.lock.exception.ClusteredLockException: org.infinispan.partitionhandling.AvailabilityException: ISPN000306: Key 'ClusteredLockKey{name=Test}' is not available. Not all owners are in this partition
> at org.infinispan.functional.FunctionalTestUtils.await(FunctionalTestUtils.java:47) ~[infinispan-core-9.3.0-SNAPSHOT-tests.jar:9.3.0-SNAPSHOT]
> at org.infinispan.lock.ClusteredLockWith2NodesTest.testTryLockAndKillLocking(ClusteredLockWith2NodesTest.java:49) ~[test-classes/:?]
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9257) ClustertopologyManagerTest.testAbruptLeaveAfterGetStatus2[SCATTERED_SYNC, tx=false] random failures
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9257?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9257:
----------------------------------
Sprint: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.Final (was: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1)
> ClustertopologyManagerTest.testAbruptLeaveAfterGetStatus2[SCATTERED_SYNC, tx=false] random failures
> ---------------------------------------------------------------------------------------------------
>
> Key: ISPN-9257
> URL: https://issues.jboss.org/browse/ISPN-9257
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.3.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 9.4.0.Final
>
> Attachments: ISPN-8731_wrong_topology_2018-05-18_ClusterTopologyManagerTest-infinispan-core.log.gz
>
>
> The test kills the coordinator NodeA and then, while NodeB is trying to recover the caches, also kills NodeC. It expects NodeB to start a rebalance with 2 nodes, which the test discards, in order to check that NodeB can process the 1-node rebalance first:
> {noformat}
> 00:34:06,582 DEBUG (transport-thread-test-NodeB-p12-t6:[testCache]) [ClusterTopologyManagerTest] Discarding rebalance command CacheTopology{id=8, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (2)[test-NodeB-49590: 85, test-NodeC-58596: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=true, owners = (2)[test-NodeB-49590: 128, test-NodeC-58596: 128]}, unionCH=null, actualMembers=[test-NodeB-49590, test-NodeC-58596], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888, d47dc4a9-2a95-4bb1-a83b-bb8a27c9999f]}
> 00:34:06,609 DEBUG (transport-thread-test-NodeB-p12-t2:[Topology-testCache]) [LocalTopologyManagerImpl] Updating local topology for cache testCache: CacheTopology{id=9, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 128]}, unionCH=null, actualMembers=[test-NodeB-49590], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888]}
> 00:34:06,609 DEBUG (transport-thread-test-NodeB-p12-t2:[Topology-testCache]) [LocalTopologyManagerImpl] Installing fake cache topology CacheTopology{id=8, phase=NO_REBALANCE, rebalanceId=4, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 85]}, pendingCH=null, unionCH=null, actualMembers=[test-NodeB-49590], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888]} for cache testCache
> {noformat}
> Unfortunately {{PreferAvailabilityStrategy}} has changed a bit and the rebalance ids don't always match the test's expectations, so the 1-node rebalance is discarded instead:
> {noformat}
> 09:46:10,530 DEBUG (transport-thread-Test-NodeB-p54539-t3:[testCache]) [Test] Discarding rebalance command CacheTopology{id=9, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[Test-NodeB-62039: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=true, owners = (1)[Test-NodeB-62039: 256]}, unionCH=null, actualMembers=[Test-NodeB-62039], persistentUUIDs=[0ed7be74-4485-489b-baee-28c461c9e5de]}
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)