[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant closed ISPN-8480.
---------------------------------
Resolution: Out of Date
Infinispan 9.x has partition handling, which covers these scenarios. If you can reproduce the issue with a later version, let us know.
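
For reference, a minimal sketch of what enabling partition handling programmatically could look like on 9.x; the cache mode and merge policy here are illustrative choices, not prescriptions:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.conflict.MergePolicy;
import org.infinispan.partitionhandling.PartitionHandling;

// Sketch: a replicated cache that denies writes while the cluster is split
// and reconciles conflicting entries when the partitions merge again.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering()
       .cacheMode(CacheMode.REPL_SYNC)
       .partitionHandling()
           .whenSplit(PartitionHandling.DENY_READ_WRITES)
           .mergePolicy(MergePolicy.PREFERRED_NON_NULL);
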
> Data Inconsistency in case of Topology change in Infinispan Cluster
> -------------------------------------------------------------------
>
> Key: ISPN-8480
> URL: https://issues.jboss.org/browse/ISPN-8480
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.5.Final
> Reporter: Rohit Singh
> Priority: Blocker
> Labels: Infinispan, JGroups, hibernate_2nd_level_cache
>
> {color:red}*Data Inconsistency in case of Topology change in Infinispan Cluster*{color}
> *Infinispan Version : 8.2.5*
> *Hibernate Version : 5.2.8*
> *JGROUPS Version : 3.6.7*
> *Clustering Mode : Replication*
> We have also tested the same behaviour with invalidation mode.
> Refer to the cache configuration below for Hibernate L2 entity types:
> <replicated-cache-configuration name="entity" mode="SYNC" remote-timeout="20000" statistics="false" statistics-available="false">
>     <state-transfer enabled="false" timeout="20000000"/>
>     <locking isolation="READ_COMMITTED" concurrency-level="1000" acquire-timeout="15000" striping="false"/>
>     <transaction mode="NONE" auto-commit="false" locking="OPTIMISTIC"/>
>     <eviction size="-1" strategy="NONE"/>
>     <expiration max-idle="-1" interval="5000" lifespan="-1"/>
> </replicated-cache-configuration>
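>
> Note that state transfer is disabled here (<state-transfer enabled="false"/>), so a node that joins or rejoins the view does not fetch the existing in-memory state from the other members. For completeness, a rough programmatic equivalent of this configuration, assuming the 8.2.x fluent ConfigurationBuilder API (statistics, eviction and per-entry expiration settings left at their defaults):
>
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.transaction.TransactionMode;
> import org.infinispan.util.concurrent.IsolationLevel;
>
> // Rough equivalent of the "entity" replicated-cache-configuration above.
> ConfigurationBuilder builder = new ConfigurationBuilder();
> builder.clustering()
>        .cacheMode(CacheMode.REPL_SYNC)
>        .remoteTimeout(20000)
>        .stateTransfer().fetchInMemoryState(false).timeout(20000000)
>        .locking()
>            .isolationLevel(IsolationLevel.READ_COMMITTED)
>            .concurrencyLevel(1000)
>            .lockAcquisitionTimeout(15000)
>            .useLockStriping(false)
>        .transaction().transactionMode(TransactionMode.NON_TRANSACTIONAL)
>        .expiration().wakeUpInterval(5000);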
>
> When a disconnected node rejoins the cluster, data remains inconsistent on the reconnected node.
> *This happens for both the Hibernate L2 cache and custom caches (AdvancedCache).*
> The four scenarios below illustrate the issue.
> *For Example:*
> *Scenario (Issue) 1:*
> -Initially the cluster comprises four nodes, namely {A,B,C,D}.
> -Somehow, node D gets removed from the cluster view.
> -Then some updates/inserts to the Hibernate L2 cache are performed on Node B.
> -These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
> -These updates/inserts do not get propagated to Node D.
> -Now D has a stale L2 cache state.
> -We expect Node D to receive the updated L2 cache state from {A,B,C}.
>
> *Scenario (Issue) 2:*
> -Initially the cluster comprises four nodes, namely {A,B,C,D}.
> -Somehow, node D gets removed from the cluster view.
> -Then some updates/inserts to the Hibernate L2 cache are performed on Node B.
> -These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
> -These updates/inserts do not get propagated to Node D.
> -Now D has a stale L2 cache state.
> -Now D rejoins the cluster {A,B,C}.
> -Now the updated cluster view is {A,B,C,D}.
> -D still has a stale L2 cache state.
> -We expect Node D to receive the updated L2 cache state from {A,B,C}.
>
> *Scenario (Issue) 3:*
> -Initially the cluster comprises four nodes, namely {A,B,C,D}.
> -Somehow, node D gets removed from the cluster view.
> -Then some updates/inserts to the Hibernate L2 cache are performed on Node B.
> -These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
> -These updates/inserts do not get propagated to Node D.
> -Subsequently, some updates/inserts to the Hibernate L2 cache are also performed on Node D. These updates touch different keys from the ones updated by Node B.
> -Now {A,B,C} and Node D each hold different updated states of the L2 cache and are not in sync.
> -Now D rejoins the cluster {A,B,C}.
> -Now the updated cluster view is {A,B,C,D}.
> -{A,B,C} and Node D still hold different L2 cache states and are not in sync.
> -We expect the updates from {A,B,C} and Node D to be merged across all nodes in the cluster.
>
> *Scenario (Issue) 4:*
> -Initially the cluster comprises four nodes, namely {A,B,C,D}.
> -Somehow, node D gets removed from the cluster view.
> -Then some updates/inserts to the Hibernate L2 cache are performed on Node B.
> -These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
> -These updates/inserts do not get propagated to Node D.
> -Subsequently, some updates/inserts to the Hibernate L2 cache are also performed on Node D. These updates may touch the same keys that were updated by Node B.
> -Now D has a more recent L2 cache state.
> -{A,B,C} have a stale L2 cache state.
> -Now D rejoins the cluster {A,B,C}.
> -Now the updated cluster view is {A,B,C,D}.
> -{A,B,C} still have a stale L2 cache state.
> -We expect {A,B,C} to receive the updated L2 cache state from Node D.
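>
> As a possible interim workaround (a sketch only, assuming the cache manager listener API; the reconciliation strategy is up to the application), a merge listener could detect the rejoin and, for example, clear the local entity region so it repopulates from the database on the next read:
>
> import org.infinispan.Cache;
> import org.infinispan.context.Flag;
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachemanagerlistener.annotation.Merged;
> import org.infinispan.notifications.cachemanagerlistener.event.MergeEvent;
>
> // Hypothetical listener; register it with cacheManager.addListener(new MergeReconciler(cache)).
> @Listener
> public class MergeReconciler {
>     private final Cache<Object, Object> entityCache;
>
>     public MergeReconciler(Cache<Object, Object> entityCache) {
>         this.entityCache = entityCache;
>     }
>
>     @Merged
>     public void onMerge(MergeEvent event) {
>         // After a merge, drop local entries so the L2 cache is refilled from
>         // the database instead of serving potentially stale data.
>         entityCache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).clear();
>     }
> }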
>
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-10429) Hot Rod client doesn't retry on socket timeout
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10429?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10429:
-----------------------------------
Fix Version/s: 9.4.17.Final
> Hot Rod client doesn't retry on socket timeout
> ----------------------------------------------
>
> Key: ISPN-10429
> URL: https://issues.jboss.org/browse/ISPN-10429
> Project: Infinispan
> Issue Type: Bug
> Components: Hot Rod
> Affects Versions: 10.0.0.Beta4, 9.4.15.Final
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Major
> Fix For: 10.0.0.CR3, 9.4.17.Final
>
>
> The Hot Rod client registers a timeout handler for operations using the socket timeout. However, when this timeout is hit, the retry logic is not invoked, causing requests to dead servers to fail immediately without being retried.
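>
> Until the client-side retry covers socket timeouts, an application-level wrapper can retry the operation itself. A minimal sketch, assuming the operation is idempotent; the helper name, retry count, and exception handling are illustrative, not the client's internal logic:
>
> import java.util.function.Supplier;
> import org.infinispan.client.hotrod.exceptions.TransportException;
>
> // Hypothetical helper: retry a RemoteCache operation a few times when the
> // transport layer fails (e.g. a socket timeout against a dead server).
> public final class HotRodRetry {
>     public static <T> T withRetry(Supplier<T> op, int maxAttempts) {
>         TransportException last = null;
>         for (int i = 0; i < maxAttempts; i++) {
>             try {
>                 return op.get();
>             } catch (TransportException e) {
>                 last = e; // the next attempt may be routed to another server
>             }
>         }
>         throw last;
>     }
> }
>
> // Usage: String v = HotRodRetry.withRetry(() -> remoteCache.get("key"), 3);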
--
This message was sent by Atlassian Jira
(v7.13.8#713008)