[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Rohit Singh updated ISPN-8480:
------------------------------
Labels: Infinispan JGroups hibernate_2nd_level_cache (was: Infinispan)
> Data Inconsistency in case of Topology change in Infinispan Cluster
> -------------------------------------------------------------------
>
> Key: ISPN-8480
> URL: https://issues.jboss.org/browse/ISPN-8480
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.5.Final
> Reporter: Rohit Singh
> Priority: Blocker
> Labels: Infinispan, JGroups, hibernate_2nd_level_cache
>
> {color:red}*Data Inconsistency in case of Topology change in Infinispan Cluster*{color}
> *Infinispan Version: 8.2.5*
> *Hibernate Version: 5.2.8*
> *JGroups Version: 3.6.7*
> *Clustering Mode: Replication*
> We have also tested the same behaviour with invalidation mode.
> Refer to the cache configuration below for the Hibernate L2 entity types:
> {code:xml}
> <replicated-cache-configuration name="entity" mode="SYNC" remote-timeout="20000" statistics="false" statistics-available="false">
>     <state-transfer enabled="false" timeout="20000000"/>
>     <locking isolation="READ_COMMITTED" concurrency-level="1000" acquire-timeout="15000" striping="false"/>
>     <transaction mode="NONE" auto-commit="false" locking="OPTIMISTIC"/>
>     <eviction size="-1" strategy="NONE"/>
>     <expiration max-idle="-1" interval="5000" lifespan="-1"/>
> </replicated-cache-configuration>
> {code}
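Note: the Infinispan documentation states that a joining node receives the existing replicated state only when state transfer is enabled, and the configuration above sets enabled="false". For comparison (a sketch against the 8.x schema, not a confirmed fix for this issue), a cache whose rejoining members fetch the current state would configure the element as:

```xml
<!-- Sketch only: same cache, but with state transfer enabled so a
     joining/rejoining node receives the current replicated entries
     from the existing members before serving requests. -->
<replicated-cache-configuration name="entity" mode="SYNC" remote-timeout="20000">
    <!-- enabled="true": joiners fetch existing state; timeout bounds
         how long the join waits for that state to arrive. -->
    <state-transfer enabled="true" timeout="240000"/>
</replicated-cache-configuration>
```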
>
> When a disconnected node rejoins the cluster, its data remains inconsistent.
> *This happens for both the Hibernate L2 cache and our custom caches (AdvancedCache).*
> The four scenarios below illustrate the issue.
> *For example:*
> *Scenario (Issue) 1:*
> - Initially the cluster comprises four nodes: {A,B,C,D}.
> - Node D is somehow removed from the cluster view.
> - Updates/inserts to the Hibernate L2 cache are then performed on node B.
> - These updates/inserts are propagated to all nodes in the current cluster view, i.e. {A,B,C}.
> - They are not propagated to node D.
> - Node D now has a stale L2 cache.
> - Expected: node D should receive the updated L2 cache state from {A,B,C}.
>
> *Scenario (Issue) 2:*
> - Initially the cluster comprises four nodes: {A,B,C,D}.
> - Node D is somehow removed from the cluster view.
> - Updates/inserts to the Hibernate L2 cache are then performed on node B.
> - These updates/inserts are propagated to all nodes in the current cluster view, i.e. {A,B,C}.
> - They are not propagated to node D.
> - Node D now has a stale L2 cache.
> - Node D then rejoins the cluster {A,B,C}.
> - The updated cluster view is {A,B,C,D}.
> - Node D still has a stale L2 cache.
> - Expected: node D should receive the updated L2 cache state from {A,B,C}.
>
> *Scenario (Issue) 3:*
> - Initially the cluster comprises four nodes: {A,B,C,D}.
> - Node D is somehow removed from the cluster view.
> - Updates/inserts to the Hibernate L2 cache are then performed on node B.
> - These updates/inserts are propagated to all nodes in the current cluster view, i.e. {A,B,C}.
> - They are not propagated to node D.
> - Subsequently, updates/inserts to the Hibernate L2 cache are also performed on node D, but on different keys than those updated by node B.
> - {A,B,C} and node D now each hold updates the other lacks, and are out of sync.
> - Node D then rejoins the cluster {A,B,C}.
> - The updated cluster view is {A,B,C,D}.
> - {A,B,C} and node D still each hold updates the other lacks, and remain out of sync.
> - Expected: the updates from {A,B,C} and node D should be merged across all nodes in the cluster.
>
> *Scenario (Issue) 4:*
> - Initially the cluster comprises four nodes: {A,B,C,D}.
> - Node D is somehow removed from the cluster view.
> - Updates/inserts to the Hibernate L2 cache are then performed on node B.
> - These updates/inserts are propagated to all nodes in the current cluster view, i.e. {A,B,C}.
> - They are not propagated to node D.
> - Subsequently, updates/inserts to the Hibernate L2 cache are also performed on node D, possibly on the same keys updated by node B.
> - Node D now has the more recent L2 cache state.
> - {A,B,C} have a stale L2 cache.
> - Node D then rejoins the cluster {A,B,C}.
> - The updated cluster view is {A,B,C,D}.
> - {A,B,C} still have a stale L2 cache.
> - Expected: {A,B,C} should receive the updated L2 cache state from node D.
>
>
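The merge behaviour the scenarios above expect can be sketched in plain Java (a hypothetical illustration of last-write-wins merging by per-key version; `Versioned` and `merge` are made-up names, not an Infinispan API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration (not an Infinispan API) of the merge semantics
// the scenarios expect after a partition heals: each side keeps a per-key
// version, and on merge the entry with the higher version wins on every node.
public class ExpectedMerge {
    record Versioned(String value, long version) {}

    static Map<String, Versioned> merge(Map<String, Versioned> a, Map<String, Versioned> b) {
        Map<String, Versioned> out = new HashMap<>(a);
        // For keys present on both sides, keep the higher-versioned entry.
        b.forEach((k, v) -> out.merge(k, v,
                (x, y) -> x.version() >= y.version() ? x : y));
        return out;
    }

    public static void main(String[] args) {
        // {A,B,C} side: node B updated k1 while D was away (version 2).
        Map<String, Versioned> abc = new HashMap<>(Map.of(
                "k1", new Versioned("fromB", 2), "k2", new Versioned("old", 1)));
        // D side: updated k2 (scenario 3) and later k1 again (scenario 4).
        Map<String, Versioned> d = new HashMap<>(Map.of(
                "k1", new Versioned("fromD", 3), "k2", new Versioned("fromD", 2)));
        Map<String, Versioned> merged = merge(abc, d);
        // The newest write for each key survives the merge.
        System.out.println(merged.get("k1").value() + "," + merged.get("k2").value());
    }
}
```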
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Rohit Singh updated ISPN-8480:
------------------------------
Description:
[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Rohit Singh updated ISPN-8480:
------------------------------
Labels: Infinispan (was: )
[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Rohit Singh updated ISPN-8480:
------------------------------
Affects Version/s: 8.2.5.Final
[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
Rohit Singh created ISPN-8480:
---------------------------------
Summary: Data Inconsistency in case of Topology change in Infinispan Cluster
Key: ISPN-8480
URL: https://issues.jboss.org/browse/ISPN-8480
Project: Infinispan
Issue Type: Bug
Reporter: Rohit Singh
Priority: Blocker
[JBoss JIRA] (ISPN-6879) Calculate (and expose) minimum number of nodes for data in Infinispan
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6879?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6879:
-------------------------------------------
{quote}
This min number should it be based on the current memory usage or the configured maximum usage? Do we want both?
{quote}
I think it should be based on *current* memory usage. E.g. a user might spin up a JDG cluster with 10 nodes and later realize that this is too many and want to scale it down a bit. If we implemented this using the initial (configured or user-specified) dataset size, the user would never be able to go below the initial number.
//cc [~NadirX][~epbernard]
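A current-usage-based minimum could be estimated roughly as follows (a sketch; `minNodes`, `numOwners`, and the byte figures are illustrative assumptions, not an existing Infinispan/JDG API):

```java
// Hypothetical back-of-the-envelope estimate (illustrative only): the
// minimum number of nodes needed so the current data set fits, given how
// many copies of each entry the cluster keeps (numOwners).
public class MinNodesEstimate {
    static int minNodes(long dataSetBytes, int numOwners, long usableBytesPerNode) {
        long total = dataSetBytes * numOwners;          // every entry stored numOwners times
        return (int) ((total + usableBytesPerNode - 1)  // ceiling division
                      / usableBytesPerNode);
    }

    public static void main(String[] args) {
        // e.g. 40 GB of *current* data, 2 owners, 10 GB usable per node.
        System.out.println(minNodes(40L << 30, 2, 10L << 30));
    }
}
```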
> Calculate (and expose) minimum number of nodes for data in Infinispan
> ---------------------------------------------------------------------
>
> Key: ISPN-6879
> URL: https://issues.jboss.org/browse/ISPN-6879
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations, Server
> Reporter: Sebastian Łaskawiec
> Assignee: William Burns
>
> With Kubernetes autoscaling we need to be able to tell the minimum number of nodes necessary for hosting the data (probably some estimate based on data size and node count).