[JBoss JIRA] (ISPN-2871) All nodes are not replicated when eviction is enabled
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-2871?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-2871:
-------------------------------------
Btw, I see your FileCacheStore has purgeOnStartup="true", so those 100 keys are discarded on the next run, in case you were wondering where they disappeared :).
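For readers following along, here is a minimal programmatic sketch of what that configuration amounts to in the Infinispan 5.2 API; the eviction maxEntries value and the store location are assumptions, not necessarily the quickstart's actual values:
{noformat}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

// Replication plus eviction, backed by a FileCacheStore that purges on startup.
Configuration cfg = new ConfigurationBuilder()
      .clustering().cacheMode(CacheMode.REPL_SYNC)
      .eviction().strategy(EvictionStrategy.LIRS).maxEntries(20) // assumed value
      .loaders()
         .passivation(false) // evicted entries are also kept in the store
         .addFileCacheStore()
            .location(System.getProperty("org.infinispan.CacheDirPath"))
            .purgeOnStartup(true) // the store is wiped on restart
      .build();
{noformat}
With purgeOnStartup enabled, anything persisted in one run is deleted the next time the node starts, which is why the 100 keys do not survive a restart.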
> All nodes are not replicated when eviction is enabled
> -----------------------------------------------------
>
> Key: ISPN-2871
> URL: https://issues.jboss.org/browse/ISPN-2871
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.1.Final
> Reporter: Chris Beer
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.3.0.Final
>
>
> When I enable replication and eviction, it appears that not all nodes are replicated to all hosts. This problem was discovered when clustering ModeShape with eviction, and critical nodes were not being properly replicated.
> I've modified the clustered-cache quick-start to (hopefully) demonstrate this problem:
> https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
> Node1 creates 100 cache entries (key0 -> key99). When eviction is disabled, the final cache size on Node0 is 100. When eviction is enabled, the final cache size is 78.
> This seems suspiciously similar to ISPN-2712.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2871) All nodes are not replicated when eviction is enabled
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-2871?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-2871:
-------------------------------------
[~cbeer] Why do you say this is similar to ISPN-2712? I checked your sample and noticed that the evicted data was properly written to the cache store. In fact, all 100 of your keys are written to the cache store, because you have configured passivation=false. You can check this for yourself by adding the following lines to Node0 to print the size of the cache store after printing the size of the cache:
{noformat}
try {
   // Look up the cache store via the component registry and report how many
   // entries it currently holds.
   log.info("Cache store size: " + cache.getAdvancedCache().getComponentRegistry()
         .getComponent(org.infinispan.loaders.CacheLoaderManager.class)
         .getCacheStore().loadAll().size());
} catch (Exception ex) {
   // ignore: diagnostic output only
}
{noformat}
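With passivation disabled, every write goes to the store as well as to memory, so this should print 100 even while cache.size() reports 78 after eviction.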
Cheers!
> All nodes are not replicated when eviction is enabled
> -----------------------------------------------------
>
> Key: ISPN-2871
> URL: https://issues.jboss.org/browse/ISPN-2871
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.1.Final
> Reporter: Chris Beer
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.3.0.Final
>
>
> When I enable replication and eviction, it appears that not all nodes are replicated to all hosts. This problem was discovered when clustering ModeShape with eviction, and critical nodes were not being properly replicated.
> I've modified the clustered-cache quick-start to (hopefully) demonstrate this problem:
> https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
> Node1 creates 100 cache entries (key0 -> key99). When eviction is disabled, the final cache size on Node0 is 100. When eviction is enabled, the final cache size is 78.
> This seems suspiciously similar to ISPN-2712.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) Thread deadlock in Map/Reduce with 2 nodes
by Alan Field (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Alan Field commented on ISPN-2836:
----------------------------------
I have also confirmed that running the job with UDP shows the same behavior as running with TCP with the send queue disabled and UNICAST2.conn_expiry_timeout=0: the org.jgroups.TimeoutException happens after MapCombineCommand is invoked. (https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/afield-radargun-mapr...)
> Thread deadlock in Map/Reduce with 2 nodes
> ------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Vladimir Blagojevic
> Attachments: afield-tcp-521-final.txt, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys with a value size of 1MB is producing a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2871) All nodes are not replicated when eviction is enabled
by Chris Beer (JIRA)
[ https://issues.jboss.org/browse/ISPN-2871?page=com.atlassian.jira.plugin.... ]
Chris Beer commented on ISPN-2871:
----------------------------------
Mircea Markus
Sorry, I was using the terms from the clustered-cache infinispan-quickstart I modified to demonstrate the problem. In the clustered-cache replication quickstart there are two "nodes", Node0 and Node1 [1]. As modified by me:
- Node0 joins the cluster and prints cache.size() at 5-second intervals.
- Node1 joins the cluster and creates 100 cache entries, with keys named "key0" to "key99".
Both nodes load the same replication configuration [2].
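(For reference, Node1's write phase boils down to a loop like the sketch below; the value strings are an assumption, the branch linked at [1] has the real code.)
{noformat}
// Sketch of Node1's write phase: 100 entries with keys "key0" .. "key99".
// The value format is assumed; see the quickstart branch for the actual code.
for (int i = 0; i < 100; i++) {
   cache.put("key" + i, "value" + i);
}
{noformat}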
I compile the clustered-cache project using:
$ mvn clean compile dependency:copy-dependencies -DstripVersion
I start Node0 with:
$ java -Dorg.infinispan.CacheDirPath=target/storage -cp target/classes:target/dependency/* org.infinispan.quickstart.clusteredcache.replication.Node0
and Node1 with:
$ java -Dorg.infinispan.CacheDirPath=target/storage2 -cp target/classes:target/dependency/* org.infinispan.quickstart.clusteredcache.replication.Node1
When eviction is disabled (by commenting it out), in the Node0 console output (at logging level INFO), I see:
Feb 27, 2013 9:43:18 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport start
INFO: ISPN000078: Starting JGroups Channel
Feb 27, 2013 9:43:21 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [localhost-31095|0] [localhost-31095]
Feb 27, 2013 9:43:21 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport startJGroupsChannelIfNeeded
INFO: ISPN000079: Cache local address is localhost-31095, physical addresses are [127.0.0.1:7800]
Feb 27, 2013 9:43:21 AM org.infinispan.factories.GlobalComponentRegistry start
INFO: ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.1.Final
Feb 27, 2013 9:43:21 AM org.infinispan.transaction.lookup.GenericTransactionManagerLookup useDummyTM
WARN: ISPN000104: Falling back to DummyTransactionManager from Infinispan
Feb 27, 2013 9:43:21 AM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform MBean server.
Feb 27, 2013 9:43:21 AM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform MBean server.
Feb 27, 2013 9:43:24 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [localhost-31095|1] [localhost-31095, localhost-56291]
Feb 27, 2013 9:43:25 AM org.infinispan.quickstart.clusteredcache.util.ClusterValidation checkReplicationSeveralTimes
INFO: Cluster formed successfully!
Feb 27, 2013 9:43:25 AM org.infinispan.quickstart.clusteredcache.replication.Node0 run
INFO: Cache size: 0
Feb 27, 2013 9:43:26 AM org.infinispan.quickstart.clusteredcache.util.LoggingListener observeAdd
INFO: Cache entry with key key0 added in cache Cache 'Demo'@localhost-31095
[ SNIP key1 -> key98 ]
Feb 27, 2013 9:43:27 AM org.infinispan.quickstart.clusteredcache.util.LoggingListener observeAdd
INFO: Cache entry with key key99 added in cache Cache 'Demo'@localhost-31095
Feb 27, 2013 9:43:30 AM org.infinispan.quickstart.clusteredcache.replication.Node0 run
INFO: Cache size: 100
But when I enable eviction in the Infinispan configuration, I see:
Feb 27, 2013 9:40:41 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport start
INFO: ISPN000078: Starting JGroups Channel
Feb 27, 2013 9:40:44 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [localhost-31370|0] [localhost-31370]
Feb 27, 2013 9:40:44 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport startJGroupsChannelIfNeeded
INFO: ISPN000079: Cache local address is localhost-31370, physical addresses are [127.0.0.1:7800]
Feb 27, 2013 9:40:44 AM org.infinispan.factories.GlobalComponentRegistry start
INFO: ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.1.Final
Feb 27, 2013 9:40:44 AM org.infinispan.transaction.lookup.GenericTransactionManagerLookup useDummyTM
WARN: ISPN000104: Falling back to DummyTransactionManager from Infinispan
Feb 27, 2013 9:40:44 AM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform MBean server.
Feb 27, 2013 9:40:45 AM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform MBean server.
Feb 27, 2013 9:40:52 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [localhost-31370|1] [localhost-31370, localhost-24294]
Feb 27, 2013 9:40:55 AM org.infinispan.quickstart.clusteredcache.util.ClusterValidation checkReplicationSeveralTimes
INFO: Cluster formed successfully!
Feb 27, 2013 9:40:55 AM org.infinispan.quickstart.clusteredcache.replication.Node0 run
INFO: Cache size: 0
Feb 27, 2013 9:40:55 AM org.infinispan.quickstart.clusteredcache.util.LoggingListener observeAdd
INFO: Cache entry with key key0 added in cache Cache 'Demo'@localhost-31370
Feb 27, 2013 9:40:55 AM org.infinispan.quickstart.clusteredcache.util.LoggingListener observeAdd
[ SNIP key1 -> key98 ]
Feb 27, 2013 9:40:55 AM org.infinispan.quickstart.clusteredcache.util.LoggingListener observeAdd
INFO: Cache entry with key key99 added in cache Cache 'Demo'@localhost-31370
Feb 27, 2013 9:41:00 AM org.infinispan.quickstart.clusteredcache.replication.Node0 run
INFO: Cache size: 78
[1] https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
[2] https://github.com/cbeer/infinispan-quickstart/blob/replication-eviction-...
> All nodes are not replicated when eviction is enabled
> -----------------------------------------------------
>
> Key: ISPN-2871
> URL: https://issues.jboss.org/browse/ISPN-2871
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.1.Final
> Reporter: Chris Beer
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.3.0.Final
>
>
> When I enable replication and eviction, it appears that not all nodes are replicated to all hosts. This problem was discovered when clustering ModeShape with eviction, and critical nodes were not being properly replicated.
> I've modified the clustered-cache quick-start to (hopefully) demonstrate this problem:
> https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
> Node1 creates 100 cache entries (key0 -> key99). When eviction is disabled, the final cache size on Node0 is 100. When eviction is enabled, the final cache size is 78.
> This seems suspiciously similar to ISPN-2712.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2871) All nodes are not replicated when eviction is enabled
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2871?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-2871:
-------------------------------------
[~cbeer] Can you please narrow the problem down a bit more into Infinispan terms? E.g. I'm not sure what you mean by a node here. What ISPN config do you use, and what do you write to the cache? I assume you're using a cache store as well?
> All nodes are not replicated when eviction is enabled
> -----------------------------------------------------
>
> Key: ISPN-2871
> URL: https://issues.jboss.org/browse/ISPN-2871
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.1.Final
> Reporter: Chris Beer
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.3.0.Final
>
>
> When I enable replication and eviction, it appears that not all nodes are replicated to all hosts. This problem was discovered when clustering ModeShape with eviction, and critical nodes were not being properly replicated.
> I've modified the clustered-cache quick-start to (hopefully) demonstrate this problem:
> https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
> Node1 creates 100 cache entries (key0 -> key99). When eviction is disabled, the final cache size on Node0 is 100. When eviction is enabled, the final cache size is 78.
> This seems suspiciously similar to ISPN-2712.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2836) Thread deadlock in Map/Reduce with 2 nodes
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-2836:
--------------------------------
OK, so you confirmed that the issue doesn't occur with use_send_queues disabled.
For the exceptions above, you'll have to ask one of the Infinispan folks to look into them, as it's probably an Infinispan/RadarGun issue.
I'll run this test (when I get to it) to get to the bottom of the senders-blocking issue. In the meantime, there are two workarounds: disabling use_send_queues or using UDP.
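For the first workaround, here is a sketch of how the setting can be applied programmatically in JGroups 3.x; the stack file and cluster names are assumptions, and the same flag is normally set in the XML stack as use_send_queues="false" on TCP:
{noformat}
import org.jgroups.JChannel;
import org.jgroups.protocols.TCP;

// Disable per-connection send queues on the TCP transport before connecting.
JChannel channel = new JChannel("jgroups-tcp.xml"); // stack file name assumed
TCP tcp = (TCP) channel.getProtocolStack().getTransport();
tcp.setValue("use_send_queues", false);
channel.connect("cluster"); // cluster name assumed
{noformat}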
> Thread deadlock in Map/Reduce with 2 nodes
> ------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Vladimir Blagojevic
> Attachments: afield-tcp-521-final.txt, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys with a value size of 1MB is producing a thread deadlock. The cache is distributed with transactions disabled.
> TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete. The nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2871) All nodes are not replicated when eviction is enabled
by Randall Hauch (JIRA)
[ https://issues.jboss.org/browse/ISPN-2871?page=com.atlassian.jira.plugin.... ]
Randall Hauch updated ISPN-2871:
--------------------------------
Priority: Blocker (was: Major)
I'm bumping this up to BLOCKER because it is a massive issue for ModeShape and its ability to cluster. Obviously the ISPN team will want to prioritize it as they see fit.
> All nodes are not replicated when eviction is enabled
> -----------------------------------------------------
>
> Key: ISPN-2871
> URL: https://issues.jboss.org/browse/ISPN-2871
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.1.Final
> Reporter: Chris Beer
> Assignee: Mircea Markus
> Priority: Blocker
>
> When I enable replication and eviction, it appears that not all nodes are replicated to all hosts. This problem was discovered when clustering ModeShape with eviction, and critical nodes were not being properly replicated.
> I've modified the clustered-cache quick-start to (hopefully) demonstrate this problem:
> https://github.com/cbeer/infinispan-quickstart/tree/replication-eviction-...
> Node1 creates 100 cache entries (key0 -> key99). When eviction is disabled, the final cache size on Node0 is 100. When eviction is enabled, the final cache size is 78.
> This seems suspiciously similar to ISPN-2712.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira