[JBoss JIRA] (ISPN-8027) TimestampsRegionImplTest.testEvict randomly fails
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8027?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-8027:
-----------------------------------
Affects Version/s: 9.1.0.Final
> TimestampsRegionImplTest.testEvict randomly fails
> -------------------------------------------------
>
> Key: ISPN-8027
> URL: https://issues.jboss.org/browse/ISPN-8027
> Project: Infinispan
> Issue Type: Bug
> Components: Hibernate Cache
> Affects Versions: 9.1.0.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: testsuite_stability
>
> Randomly failing test on CI:
> {{org.infinispan.test.hibernate.cache.timestamp.TimestampsRegionImplTest.testEvict[JTA, INVALIDATION_SYNC,AccessType[transactional]]}}
> {code}
> java.lang.AssertionError: expected:<value1> but was:<null>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.lambda$testEvict$4(AbstractGeneralDataRegionTest.java:146)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.withSessionFactoriesAndRegions(AbstractGeneralDataRegionTest.java:104)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.testEvict(AbstractGeneralDataRegionTest.java:117)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.hibernate.testing.junit4.ExtendedFrameworkMethod.invokeExplosively(ExtendedFrameworkMethod.java:45)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.infinispan.test.hibernate.cache.util.InfinispanTestingSetup$1.evaluate(InfinispanTestingSetup.java:38)
> at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
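The failure above has the classic flaky-test shape: an assertion fires once, immediately after an operation whose effect may still be propagating across nodes, and reads {{null}} before the value becomes visible. The sketch below is a hypothetical illustration in plain Java (not Infinispan or the actual test code; names and timings are assumptions) of a polling helper that tolerates such propagation delay instead of asserting a single snapshot.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class EvictRaceSketch {
    static final Map<String, String> region = new ConcurrentHashMap<>();

    // Hedged fix pattern: poll for the condition until a deadline instead of
    // asserting once. Returns the final evaluation if the deadline passes.
    static boolean eventually(Supplier<Boolean> cond, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.get()) return true;
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return cond.get();
    }

    public static void main(String[] args) {
        // Simulate a write whose visibility is delayed, e.g. an invalidation
        // or replication message still in flight.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            region.put("key", "value1");
        }).start();

        // A single immediate assertEquals here would fail exactly like the CI
        // report ("expected:<value1> but was:<null>"); polling does not.
        boolean ok = eventually(() -> "value1".equals(region.get("key")), 1000);
        System.out.println(ok ? "value1 observed" : "value never arrived");
    }
}
```

A usage note: test suites usually wrap this pattern in a shared utility (Infinispan's test suite has its own eventually-style helpers); the point is only that timing-sensitive assertions after cross-node operations need a retry window.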
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8092) Scattered cache state transfer misses segments
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8092?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-8092:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5253
> Scattered cache state transfer misses segments
> ----------------------------------------------
>
> Key: ISPN-8092
> URL: https://issues.jboss.org/browse/ISPN-8092
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.1.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
>
> I noticed this in the pull request for [ISPN-7997|https://github.com/infinispan/infinispan/pull/5253], which uses a {{ControlledConsistentHashFactory}} to make the stream tests more predictable.
> For simplicity, I used 3 segments, and the ownership is as follows:
> * With a full cluster ABC, A owns segment 0, B owns segment 1, and C owns segment 2
> * With a smaller cluster A, AB, or AC, A owns all the segments.
> {{ScatteredStreamIteratorTest.verifyNodeLeavesAfterSendingBackSomeData[SCATTERED_SYNC, tx=false]}} kills node B, and A immediately becomes the owner of segment 1. Then the rebalance starts and A pushes segment 2 to node C, but it doesn't try to fetch any entries from segment 1 that were backed up on node C.
> {noformat}
> 17:35:09,897 DEBUG (remote-thread-test-NodeA-p2-t5:[testCache]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache testCache, topology = CacheTopology{id=7, rebalanceId=4, currentCH=ScatteredConsistentHash{ns=3, rebalanced=true, owners = (3)[test-NodeA-59810: 1, test-NodeB-37315: 1, test-NodeC-50539: 1]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[test-NodeA-59810, test-NodeB-37315, test-NodeC-50539], persistentUUIDs=[6118c3ba-840e-4838-a0cf-1165d3d5ec4b, 38cc2bd9-0a21-4020-97ab-909a32506fa1, 6a4f1a13-0fbb-4f92-867e-64068d574d4d]}, availability mode = AVAILABLE
> 17:35:10,974 DEBUG (remote-thread-test-NodeA-p2-t5:[testCache]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache testCache, topology = CacheTopology{id=8, rebalanceId=4, currentCH=ScatteredConsistentHash{ns=3, rebalanced=false, owners = (2)[test-NodeA-59810: 2, test-NodeC-50539: 1]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[test-NodeA-59810, test-NodeC-50539], persistentUUIDs=[6118c3ba-840e-4838-a0cf-1165d3d5ec4b, 6a4f1a13-0fbb-4f92-867e-64068d574d4d]}, availability mode = AVAILABLE
> 17:35:10,975 TRACE (transport-thread-test-NodeA-p4-t2:[Topology-testCache]) [StateConsumerImpl] On cache testCache we have: new segments: [0, 1]; old segments: [0]
> 17:35:10,975 TRACE (transport-thread-test-NodeA-p4-t2:[Topology-testCache]) [StateConsumerImpl] On cache testCache we have: added segments: {1}; removed segments: {}
> 17:35:10,975 TRACE (transport-thread-test-NodeA-p4-t2:[Topology-testCache]) [StateConsumerImpl] This is not a rebalance, not doing anything...
> 17:35:10,976 INFO (remote-thread-test-NodeA-p2-t5:[testCache]) [CLUSTER] ISPN000310: Starting cluster-wide rebalance for cache testCache, topology CacheTopology{id=9, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=3, rebalanced=false, owners = (2)[test-NodeA-59810: 2, test-NodeC-50539: 1]}, pendingCH=ScatteredConsistentHash{ns=3, rebalanced=true, owners = (2)[test-NodeA-59810: 3, test-NodeC-50539: 0]}, unionCH=null, phase=TRANSITORY, actualMembers=[test-NodeA-59810, test-NodeC-50539], persistentUUIDs=[6118c3ba-840e-4838-a0cf-1165d3d5ec4b, 6a4f1a13-0fbb-4f92-867e-64068d574d4d]}
> 17:35:10,977 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [CacheTopology] Current consistent hash's routing table: 0: 0, 1: 0, 2: 1
> 17:35:10,977 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [CacheTopology] Pending consistent hash's routing table: 0: 0, 1: 0, 2: 0
> 17:35:10,978 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [ScatteredVersionManager] Node will transfer value for topology 9
> 17:35:10,978 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [StateConsumerImpl] On cache testCache we have: new segments: [0, 1, 2]; old segments: [0, 1]
> 17:35:10,978 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [StateConsumerImpl] On cache testCache we have: added segments: {2}; removed segments: {}
> 17:35:10,979 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [JGroupsTransport] test-NodeA-59810 sending request 9 to all: StateRequestCommand{cache=testCache, origin=test-NodeA-59810, type=CONFIRM_REVOKED_SEGMENTS, topologyId=9, segments=null}
> 17:35:10,989 TRACE (transport-thread-test-NodeA-p4-t3:[Topology-testCache]) [StateProviderImpl] Segments to replicate and invalidate: [0, 1]
> 17:35:10,989 TRACE (transport-thread-test-NodeA-p4-t1:[]) [OutboundTransferTask] Sending last chunk to node test-NodeC-50539 containing 0 cache entries from segments [0, 1]
> 17:35:10,989 TRACE (stateTransferExecutor-thread-test-NodeA-p7-t1:[StateRequest-testCache]) [RpcManagerImpl] test-NodeA-59810 invoking StateRequestCommand{cache=testCache, origin=test-NodeA-59810, type=START_KEYS_TRANSFER, topologyId=9, segments={2}} to recipient list [test-NodeC-50539] with options RpcOptions{timeout=240000, unit=MILLISECONDS, deliverOrder=NONE, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS}
> 17:35:11,017 INFO (transport-thread-test-NodeA-p4-t4:[testCache]) [CLUSTER] ISPN000336: Finished cluster-wide rebalance for cache testCache, topology id = 9
> 17:35:11,017 DEBUG (transport-thread-test-NodeA-p4-t4:[testCache]) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache testCache, topology = CacheTopology{id=10, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=3, rebalanced=true, owners = (2)[test-NodeA-59810: 3, test-NodeC-50539: 0]}, pendingCH=null, unionCH=null, phase=NO_REBALANCE, actualMembers=[test-NodeA-59810, test-NodeC-50539], persistentUUIDs=[6118c3ba-840e-4838-a0cf-1165d3d5ec4b, 6a4f1a13-0fbb-4f92-867e-64068d574d4d]}, availability mode = AVAILABLE
> {noformat}
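The segment arithmetic in the log above can be modelled with a trivial set difference. The sketch below is a simplified model (an assumption for illustration, not Infinispan's actual {{StateConsumerImpl}} code) of the two diffs the log shows: when B leaves, segment 1 is "added" to A but no rebalance is running, so nothing is fetched; when the rebalance for topology 9 starts, only segment 2 is new, so the segment 1 entries backed up on C are never requested.

```java
import java.util.Set;
import java.util.TreeSet;

public class SegmentDiffSketch {
    // Segments present in the new topology but not in the old one.
    static Set<Integer> added(Set<Integer> oldSegs, Set<Integer> newSegs) {
        Set<Integer> diff = new TreeSet<>(newSegs);
        diff.removeAll(oldSegs);
        return diff;
    }

    public static void main(String[] args) {
        // From the log, topology 8: old segments [0], new segments [0, 1]
        // after B leaves. Segment 1 is added, but "this is not a rebalance",
        // so no entries are fetched for it.
        System.out.println("added on leave: " + added(Set.of(0), Set.of(0, 1)));       // prints [1]

        // Rebalance topology 9: old [0, 1], new [0, 1, 2]. Only segment 2 is
        // transferred; segment 1's backup entries on C fall through the gap.
        System.out.println("added on rebalance: " + added(Set.of(0, 1), Set.of(0, 1, 2))); // prints [2]
    }
}
```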
[JBoss JIRA] (ISPN-8105) Multimap - Handle names between regular and multimap cache
by Katia Aresti (JIRA)
Katia Aresti created ISPN-8105:
----------------------------------
Summary: Multimap - Handle names between regular and multimap cache
Key: ISPN-8105
URL: https://issues.jboss.org/browse/ISPN-8105
Project: Infinispan
Issue Type: Enhancement
Reporter: Katia Aresti
Assignee: Katia Aresti
When a multimap cache, embedded or Hot Rod, is created, nothing checks whether a cache with that name already exists, and nothing tags the cache as a multimap cache. If a regular cache with the same name already exists, the two conflict.
Add a new configuration parameter, or a dedicated namespace for multimap caches, to avoid the clash.
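One way to picture the proposed fix is a registry that keeps multimap caches in their own namespace, so a multimap name can never collide with a regular cache name. The sketch below is hypothetical (the prefix, registry, and method names are assumptions for illustration, not Infinispan's implementation).

```java
import java.util.HashMap;
import java.util.Map;

public class MultimapNamespaceSketch {
    // Hypothetical internal prefix keeping multimap names disjoint from
    // regular cache names in the shared registry.
    private static final String MULTIMAP_NS = "___multimap.";

    private final Map<String, Object> caches = new HashMap<>();

    Object getOrCreateRegularCache(String name) {
        return caches.computeIfAbsent(name, n -> new Object());
    }

    Object getOrCreateMultimapCache(String name) {
        // Prefixing keeps "books" (regular) and "books" (multimap) distinct,
        // so creating one never reuses or corrupts the other.
        return caches.computeIfAbsent(MULTIMAP_NS + name, n -> new Object());
    }

    public static void main(String[] args) {
        MultimapNamespaceSketch registry = new MultimapNamespaceSketch();
        Object regular = registry.getOrCreateRegularCache("books");
        Object multimap = registry.getOrCreateMultimapCache("books");
        System.out.println(regular == multimap ? "conflict" : "no conflict");
    }
}
```

The alternative mentioned in the issue, a configuration parameter tagging the cache as a multimap cache, would instead reject creation when an existing cache of the other kind holds the name; either approach removes the silent conflict.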