[JBoss JIRA] (ISPN-7809) Multiplex events for multiple listeners over a single connection in client
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-7809:
--------------------------------------
Summary: Multiplex events for multiple listeners over a single connection in client
Key: ISPN-7809
URL: https://issues.jboss.org/browse/ISPN-7809
Project: Infinispan
Issue Type: Enhancement
Components: Remote Protocols
Affects Versions: 9.0.0.Final
Reporter: Galder Zamarreño
Currently the Java Hot Rod client uses a separate connection for each listener that's added to the server. Each of these connections is allocated when the listener is registered and won't be released until the listener is removed or the client is closed.
To avoid wasting all these connections, each client should create a single connection to be used for all its listeners, and events should be multiplexed through it.
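For illustration only, a minimal sketch of the multiplexing idea, assuming each event carries the ID of the listener it belongs to; the class and method names below are hypothetical, not the actual Hot Rod client API:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical dispatcher: all client listeners share one event connection and
// every incoming event is routed by the listener ID it carries.
public class MultiplexedEventDispatcher {
   private final Map<String, Consumer<Object>> listeners = new ConcurrentHashMap<>();

   public void addListener(String listenerId, Consumer<Object> callback) {
      listeners.put(listenerId, callback);
   }

   public void removeListener(String listenerId) {
      listeners.remove(listenerId);
   }

   // Called by the single event-reader loop for each event read from the shared connection.
   public void dispatch(String listenerId, Object event) {
      Consumer<Object> callback = listeners.get(listenerId);
      if (callback != null) {
         callback.accept(event);
      }
   }
}
{code}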
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7808?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7808:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5125
> Upgrade to mockito 2.7.21
> -------------------------
>
> Key: ISPN-7808
> URL: https://issues.jboss.org/browse/ISPN-7808
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Test Suite - Core, Test Suite - Server
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Alpha1
>
>
> While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
> It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
> The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7808?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7808:
-------------------------------
Status: Open (was: New)
> Upgrade to mockito 2.7.21
> -------------------------
>
> Key: ISPN-7808
> URL: https://issues.jboss.org/browse/ISPN-7808
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Test Suite - Core, Test Suite - Server
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Alpha1
>
>
> While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
> It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
> The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7808:
----------------------------------
Summary: Upgrade to mockito 2.7.21
Key: ISPN-7808
URL: https://issues.jboss.org/browse/ISPN-7808
Project: Infinispan
Issue Type: Component Upgrade
Components: Test Suite - Core, Test Suite - Server
Affects Versions: 9.0.0.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.1.0.Alpha1
While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
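As a side note, moving from mockito-all 1.x to mockito-core 2.x usually also needs small source-level adjustments in the tests; the most common one is that the argument matchers moved from {{org.mockito.Matchers}} to {{org.mockito.ArgumentMatchers}}. A hypothetical example (the {{Listener}} interface is only a placeholder, not an Infinispan class):
{code:java}
import static org.mockito.ArgumentMatchers.any; // Mockito 2.x; was org.mockito.Matchers in 1.x
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class MatcherMigrationExample {
   interface Listener { void onEvent(Object event); }

   void verifyWithMockito2() {
      Listener listener = mock(Listener.class);
      listener.onEvent("event");
      // Note that in Mockito 2.x any() no longer matches null arguments.
      verify(listener).onEvent(any());
   }
}
{code}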
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7801) RehashWithL1Test.testPutWithRehashAndCacheClear random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7801?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7801:
------------------------------------
I think the test is correct after all: the L1 entries on the old nodes should *not* be visible after the joiner became the only owner of all the keys. Writes on the joiner will not send any L1 invalidations, because its {{L1ManagerImpl}} doesn't have any requestors, so the other nodes could see stale values.
So we need to add another requirement for the rebalance/state transfer process: L1 entries should not be visible after the node they were requested from is no longer an owner in the write CH.
[~rvansa] I think the simplest way to do this would be to invalidate L1 entries at the beginning of the {{READ_NEW_WRITE_ALL}} phase. We could either split {{StateConsumerImpl.removeStaleData()}} into two parts, so that invalidation of regular entries still happens after the end of rebalance, or we could invalidate everything during {{READ_NEW_WRITE_ALL}}, but that would require additional logic to skip writing new values to the data container/stores.
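To make the first option a bit more concrete, a rough sketch of what the split could look like, with placeholder types and method names (not the real {{StateConsumerImpl}}/{{DataContainer}} API):
{code:java}
// Rough sketch only - placeholder types, not the actual Infinispan internals.
import java.util.ArrayList;
import java.util.List;

public class StaleDataRemovalSketch {
   interface WriteConsistentHash { boolean isLocalWriteOwner(Object key); }
   interface ContainerView {
      Iterable<Object> l1Keys();     // entries cached in L1 (not owned locally)
      Iterable<Object> ownedKeys();  // entries the node stored as an owner
      void remove(Object key);
   }

   private final ContainerView container;

   public StaleDataRemovalSketch(ContainerView container) {
      this.container = container;
   }

   // Part 1: runs as soon as the READ_NEW_WRITE_ALL phase starts. The node an L1 entry
   // was requested from may no longer be a write owner, so invalidations are no longer
   // guaranteed to reach us and the entry must not stay visible.
   public void invalidateL1OnReadNewWriteAll() {
      List<Object> stale = new ArrayList<>();
      container.l1Keys().forEach(stale::add);
      stale.forEach(container::remove);
   }

   // Part 2: unchanged behaviour - regular entries that the node no longer owns are
   // removed only after the rebalance has finished.
   public void removeNoLongerOwnedAfterRebalance(WriteConsistentHash newWriteCH) {
      List<Object> stale = new ArrayList<>();
      for (Object key : container.ownedKeys()) {
         if (!newWriteCH.isLocalWriteOwner(key)) {
            stale.add(key);
         }
      }
      stale.forEach(container::remove);
   }
}
{code}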
> RehashWithL1Test.testPutWithRehashAndCacheClear random failures
> ---------------------------------------------------------------
>
> Key: ISPN-7801
> URL: https://issues.jboss.org/browse/ISPN-7801
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Final
>
>
> The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and it's deleted from L1 on the old nodes. The problem is that it doesn't wait, it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
> {noformat}
> 03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:26,754 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeA-2588
> 03:24:27,514 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeB-54331
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
> 03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> *** The entry is not removed from NodeB at this point
> 03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
> 03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:28,519 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeB-54331
> 03:24:28,595 DEBUG (testng-Test:[]) [Test] Starting a new joiner
> 03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
> 03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
> java.lang.AssertionError: wrong value for k0
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
> at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
> *** Too late
> 03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7801) RehashWithL1Test.testPutWithRehashAndCacheClear random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7801?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7801:
-------------------------------
Description:
The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and it's deleted from L1 on the old nodes. The problem is that it doesn't wait, it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
{noformat}
03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:26,754 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeA-2588
03:24:27,514 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeB-54331
03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
*** The entry is not removed from NodeB at this point
03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:28,519 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeB-54331
03:24:28,595 DEBUG (testng-Test:[]) [Test] Starting a new joiner
03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
java.lang.AssertionError: wrong value for k0
at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
*** Too late
03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
{noformat}
was:
The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and it's deleted from L1 on the old nodes. The problem is that it doesn't wait, it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
{noformat}
03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
*** The entry is not removed from NodeB at this point
03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
java.lang.AssertionError: wrong value for k0
at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
*** Too late
03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
{noformat}
> RehashWithL1Test.testPutWithRehashAndCacheClear random failures
> ---------------------------------------------------------------
>
> Key: ISPN-7801
> URL: https://issues.jboss.org/browse/ISPN-7801
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Final
>
>
> The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and it's deleted from L1 on the old nodes. The problem is that it doesn't wait, it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
> {noformat}
> 03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:26,754 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeA-2588
> 03:24:27,514 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeB-54331
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
> 03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> *** The entry is not removed from NodeB at this point
> 03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
> 03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:28,519 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeB-54331
> 03:24:28,595 DEBUG (testng-Test:[]) [Test] Starting a new joiner
> 03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
> 03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
> java.lang.AssertionError: wrong value for k0
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
> at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
> *** Too late
> 03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7806) QueryInterceptor should not load entries from DC but context
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-7806?page=com.atlassian.jira.plugin.... ]
Radim Vansa updated ISPN-7806:
------------------------------
Description:
Currently in {{visitPrepareCommand}} the query interceptor loads data directly from the data container. That's wrong - if the entry is passivated/evicted, the previous value is incorrect.
As the data is not loaded (from DC/persistence) at the current QI position, we should move QueryInterceptor after EntryWrappingInterceptor and CacheLoaderInterceptor (before xDistributionInterceptor), and load the previous entry from the context instead. The same approach should be taken for non-tx commands, rather than relying on their return values.
There will still be issues if the command has the SKIP_CACHE_LOAD flag: I suggest a warning message if it doesn't have the SKIP_INDEXING flag as well.
was:
Currently in {{visitPrepareCommand}} the query interceptor is loading data directly from data container. That's wrong - if the entry is passivated/evicted, the previous value is incorrect.
Therefore we should move QueryInterceptor after EntryWrappingInterceptor and CacheLoaderInterceptor (before xDistributionInterceptor), and load the previous entry from context instead. The same approach should be taken for non-tx command, rather than relying on their return value.
There will still be issues if the command has SKIP_CACHE_LOAD flag: I suggest warning message if it doesn't have SKIP_INDEXING flag as well.
> QueryInterceptor should not load entries from DC but context
> ------------------------------------------------------------
>
> Key: ISPN-7806
> URL: https://issues.jboss.org/browse/ISPN-7806
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying
> Affects Versions: 9.0.0.Final
> Reporter: Radim Vansa
> Assignee: Adrian Nistor
>
> Currently in {{visitPrepareCommand}} the query interceptor is loading data directly from data container. That's wrong - if the entry is passivated/evicted, the previous value is incorrect.
> As the data is not loaded (from DC/persistence) at current QI position, we should move QueryInterceptor after EntryWrappingInterceptor and CacheLoaderInterceptor (before xDistributionInterceptor), and load the previous entry from context instead. The same approach should be taken for non-tx command, rather than relying on their return value.
> There will still be issues if the command has SKIP_CACHE_LOAD flag: I suggest warning message if it doesn't have SKIP_INDEXING flag as well.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months
[JBoss JIRA] (ISPN-7806) QueryInterceptor should not load entries from DC but context
by Radim Vansa (JIRA)
Radim Vansa created ISPN-7806:
---------------------------------
Summary: QueryInterceptor should not load entries from DC but context
Key: ISPN-7806
URL: https://issues.jboss.org/browse/ISPN-7806
Project: Infinispan
Issue Type: Bug
Components: Embedded Querying
Affects Versions: 9.0.0.Final
Reporter: Radim Vansa
Assignee: Adrian Nistor
Currently in {{visitPrepareCommand}} the query interceptor loads data directly from the data container. That's wrong - if the entry is passivated/evicted, the previous value is incorrect.
Therefore we should move QueryInterceptor after EntryWrappingInterceptor and CacheLoaderInterceptor (before xDistributionInterceptor), and load the previous entry from the context instead. The same approach should be taken for non-tx commands, rather than relying on their return values.
There will still be issues if the command has the SKIP_CACHE_LOAD flag: I suggest a warning message if it doesn't have the SKIP_INDEXING flag as well.
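To illustrate, a small sketch with placeholder types (not the real interceptor API): the previous value should be taken from the entry that EntryWrappingInterceptor/CacheLoaderInterceptor have already put into the invocation context, instead of from a direct data container read:
{code:java}
// Placeholder types only - the point is where the previous value comes from,
// not the exact Infinispan signatures.
public class PreviousValueLookupSketch {
   interface CacheEntry { Object getKey(); Object getValue(); }
   interface InvocationContext { CacheEntry lookupEntry(Object key); }
   interface DataContainer { CacheEntry get(Object key); }

   // Problematic: a direct data container read returns nothing (or stale data)
   // when the entry has been passivated or evicted.
   Object previousValueFromContainer(DataContainer dc, Object key) {
      CacheEntry e = dc.get(key);
      return e == null ? null : e.getValue();
   }

   // Proposed: once the interceptor sits after EntryWrappingInterceptor and
   // CacheLoaderInterceptor, the previous entry has already been wrapped into the
   // invocation context (loaded from the store if needed) and can be read there.
   Object previousValueFromContext(InvocationContext ctx, Object key) {
      CacheEntry e = ctx.lookupEntry(key);
      return e == null ? null : e.getValue();
   }
}
{code}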
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 10 months