[JBoss JIRA] (ISPN-8481) Lazily calculate hashCode for WrappedByteArray
by William Burns (JIRA)
William Burns created ISPN-8481:
-----------------------------------
Summary: Lazily calculate hashCode for WrappedByteArray
Key: ISPN-8481
URL: https://issues.jboss.org/browse/ISPN-8481
Project: Infinispan
Issue Type: Sub-task
Reporter: William Burns
Assignee: William Burns
We don't always use the hashCode of a WrappedByteArray, so calculating it eagerly adds unnecessary cost. We should see whether we can safely defer the calculation until it is first needed.
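A minimal sketch of what lazy calculation could look like (illustrative only, not the actual Infinispan class), mirroring the caching idiom used by java.lang.String: the hash is computed on first use and then stored, with 0 serving as the "not yet computed" sentinel, so arrays that happen to hash to 0 are simply recomputed.

```java
import java.util.Arrays;

public final class WrappedByteArray {
    private final byte[] bytes;
    private transient int hashCode; // 0 => not computed yet

    public WrappedByteArray(byte[] bytes) {
        this.bytes = bytes;
    }

    public byte[] getBytes() {
        return bytes;
    }

    @Override
    public int hashCode() {
        int h = hashCode;
        if (h == 0) {
            h = Arrays.hashCode(bytes);
            hashCode = h; // benign data race: every thread computes the same value
        }
        return h;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof WrappedByteArray)) return false;
        return Arrays.equals(bytes, ((WrappedByteArray) o).bytes);
    }
}
```

The "change this safely" concern is visible in the sketch: the field write is an unsynchronized race, which is acceptable here only because the computed value is deterministic and int writes are atomic.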
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8467) Memory Leak in the Rest server
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8467?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-8467:
------------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5561
> Memory Leak in the Rest server
> ------------------------------
>
> Key: ISPN-8467
> URL: https://issues.jboss.org/browse/ISPN-8467
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 9.2.0.Alpha2
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
>
> Upon each connection, the Netty REST server installs the {{Http20RequestHandler}}, which in turn creates new instances of the {{CacheOperations}} and {{RestCacheManager}} objects.
> On every connection the {{RestCacheManager}}, among other things, creates hashmaps to keep cache instances and tries to register a new {{RestSourceMigrator}}, which accumulates in the {{RollingUpgradeManager}}.
> These objects should be shared across all channels so that they can efficiently cache resources and avoid creating lots of garbage.
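The shape of the leak and of the proposed fix can be sketched with simplified stand-in classes (the RestCacheManager and Http20RequestHandler below are hypothetical stand-ins, not the real Infinispan types): construct the expensive objects once, and hand the shared instances to every per-connection handler instead of building fresh ones.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the real class; counts constructions so the leak is observable.
class RestCacheManager {
    static final AtomicInteger INSTANCES = new AtomicInteger();
    RestCacheManager() { INSTANCES.incrementAndGet(); }
}

class Http20RequestHandler {
    final RestCacheManager cacheManager;

    // Leaky shape: every connection builds its own RestCacheManager.
    Http20RequestHandler() { this(new RestCacheManager()); }

    // Fixed shape: the server passes one shared instance to all channels.
    Http20RequestHandler(RestCacheManager shared) { this.cacheManager = shared; }
}

public class RestServerSketch {
    public static void main(String[] args) {
        RestCacheManager shared = new RestCacheManager();
        for (int i = 0; i < 1000; i++) {
            new Http20RequestHandler(shared); // simulated incoming connections
        }
        // Only the single shared manager was ever created.
        System.out.println(RestCacheManager.INSTANCES.get()); // prints 1
    }
}
```

The real change is in the linked pull request; the sketch only shows why per-connection construction accumulates garbage while a shared instance does not.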
[JBoss JIRA] (ISPN-8480) Data Inconsistency in case of Topology change in Infinispan Cluster
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8480?page=com.atlassian.jira.plugin.... ]
Rohit Singh updated ISPN-8480:
------------------------------
Description:
{color:red}*Data Inconsistency in case of Topology change in Infinispan Cluster*{color}
*Infinispan Version : 8.2.5*
*Hibernate Version : 5.2.8*
*JGROUPS Version : 3.6.7*
*Clustering Mode : Replication*
We have tested the same with invalidation mode too.
Refer to the cache configuration below for the Hibernate L2 entity types:
<replicated-cache-configuration name="entity" mode="SYNC" remote-timeout="20000" statistics="false" statistics-available="false">
<state-transfer enabled="false" timeout="20000000"/>
<locking isolation="READ_COMMITTED" concurrency-level="1000" acquire-timeout="15000" striping="false"/>
<transaction mode="NONE" auto-commit="false" locking="OPTIMISTIC"/>
<eviction size="-1" strategy="NONE"/>
<expiration max-idle="-1" interval="5000" lifespan="-1" />
</replicated-cache-configuration>
When a disconnected node rejoins the cluster, data remains inconsistent on the reconnected node.
*This is happening for both Hibernate L2 Cache and some custom cache (AdvancedCache).*
The four scenarios below explain the issue.
*For Example:*
*Scenario (Issue) 1:*
-Initially the cluster comprises 4 nodes, namely {A,B,C,D}.
-Somehow, node D gets removed from the cluster view.
-Then some updates/inserts in the Hibernate L2 cache are made on Node B.
-These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
-These updates/inserts don't get propagated to Node D.
-Now D has a stale state of the L2 cache.
-We expect Node D to get the updated state of the L2 cache from {A,B,C}.
*Scenario (Issue) 2:*
-Initially the cluster comprises 4 nodes, namely {A,B,C,D}.
-Somehow, node D gets removed from the cluster view.
-Then some updates/inserts in the Hibernate L2 cache are made on Node B.
-These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
-These updates/inserts don't get propagated to Node D.
-Now D has a stale state of the L2 cache.
-Now D rejoins the cluster {A,B,C}.
-The updated cluster view is {A,B,C,D}.
-Still, D has a stale state of the L2 cache.
-We expect Node D to get the updated state of the L2 cache from {A,B,C}.
*Scenario (Issue) 3:*
-Initially the cluster comprises 4 nodes, namely {A,B,C,D}.
-Somehow, node D gets removed from the cluster view.
-Then some updates/inserts in the Hibernate L2 cache are made on Node B.
-These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
-These updates/inserts don't get propagated to Node D.
-Subsequently, some updates/inserts in the Hibernate L2 cache are made on Node D too. These updates are made on other keys, not on the keys that Node B updated.
-Now {A,B,C} and Node D have different but updated states of the L2 cache, and are not in sync.
-Now D rejoins the cluster {A,B,C}.
-The updated cluster view is {A,B,C,D}.
-Still, {A,B,C} and Node D have different but updated states of the L2 cache, and are not in sync.
-We expect the updates from {A,B,C} and Node D to get merged on all the nodes in the cluster.
*Scenario (Issue) 4:*
-Initially the cluster comprises 4 nodes, namely {A,B,C,D}.
-Somehow, node D gets removed from the cluster view.
-Then some updates/inserts in the Hibernate L2 cache are made on Node B.
-These updates/inserts get propagated to all the nodes in the current cluster view, i.e. {A,B,C}.
-These updates/inserts don't get propagated to Node D.
-Subsequently, some updates/inserts in the Hibernate L2 cache are made on Node D too. These updates might be on the same keys that Node B updated.
-Now D has a more up-to-date state of the L2 cache.
-And {A,B,C} have a stale state of the L2 cache.
-Now D rejoins the cluster {A,B,C}.
-The updated cluster view is {A,B,C,D}.
-Still, {A,B,C} have a stale state of the L2 cache.
-We expect {A,B,C} to get the updated state of the L2 cache from Node D.
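One observation (not a confirmed diagnosis): the quoted configuration sets {{<state-transfer enabled="false"/>}}, and with state transfer disabled a joining or rejoining node does not fetch existing state from the other members, which matches what scenarios 1 and 2 describe. A sketch of the same configuration with state transfer enabled, all other attributes kept as quoted:

```xml
<replicated-cache-configuration name="entity" mode="SYNC" remote-timeout="20000" statistics="false" statistics-available="false">
    <!-- enabled="true": a (re)joining node fetches the current state from the existing members -->
    <state-transfer enabled="true" timeout="20000000"/>
    <locking isolation="READ_COMMITTED" concurrency-level="1000" acquire-timeout="15000" striping="false"/>
    <transaction mode="NONE" auto-commit="false" locking="OPTIMISTIC"/>
    <eviction size="-1" strategy="NONE"/>
    <expiration max-idle="-1" interval="5000" lifespan="-1" />
</replicated-cache-configuration>
```

Even with state transfer, scenarios 3 and 4 involve conflicting writes made during the partition; merging those would need partition handling or application-level conflict resolution, which plain state transfer does not provide.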
> Data Inconsistency in case of Topology change in Infinispan Cluster
> -------------------------------------------------------------------
>
> Key: ISPN-8480
> URL: https://issues.jboss.org/browse/ISPN-8480
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.5.Final
> Reporter: Rohit Singh
> Priority: Blocker
> Labels: Infinispan, JGroups, hibernate_2nd_level_cache
>