[infinispan-issues] [JBoss JIRA] (ISPN-2712) Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
Dan Berindei (JIRA)
jira-events at lists.jboss.org
Thu Jan 24 06:31:47 EST 2013
[ https://issues.jboss.org/browse/ISPN-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750180#comment-12750180 ]
Dan Berindei commented on ISPN-2712:
------------------------------------
When eviction is enabled, the data container is split into {{concurrencyLevel}} segments, and eviction is performed independently on each segment. So instead of starting to evict entries when the total number of entries in the cache reaches {{maxEntries}}, we start to evict entries when the number of entries in a given segment reaches {{maxEntries/concurrencyLevel}}.
This obviously breaks down when {{concurrencyLevel >= maxEntries}}, so {{concurrencyLevel}} is capped at {{maxEntries/2}}. Still, with {{maxEntries="1000"}} as configured here, that leaves room for just 2 entries in each segment, so it is very easy for keys to cluster in particular segments and be evicted too soon.
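To make the arithmetic concrete, here is a small sketch using the {{maxEntries="1000"}} value from the attached configuration. The {{segmentFor}} function is a hypothetical stand-in for hash-based segment selection, not the actual data container internals:
{code:java}
// Illustration of the per-segment bound described above; not the actual
// data container implementation.
public class PerSegmentEvictionSketch {

    // Hypothetical hash-based segment selection, for illustration only.
    static int segmentFor(Object key, int segments) {
        return (key.hashCode() & Integer.MAX_VALUE) % segments;
    }

    public static void main(String[] args) {
        int maxEntries = 1000;                          // <eviction maxEntries="1000"/>
        int concurrencyLevel = maxEntries / 2;          // the cap described above: 500
        int perSegment = maxEntries / concurrencyLevel; // 1000 / 500 = 2

        // A segment evicts as soon as it holds perSegment entries, regardless
        // of how few entries the cache holds in total. With 293 keys hashed
        // into 500 segments, any segment that happens to receive a third key
        // must evict one of its two residents, even though the cache as a
        // whole is far below 1000 entries.
        System.out.println("entries allowed per segment: " + perSegment);
        System.out.println("segment for \"someKey\": " + segmentFor("someKey", concurrencyLevel));
    }
}
{code}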
I'm not aware of any changes in this area since 5.1, but I suggest lowering {{concurrencyLevel}} to 50 to avoid the problem.
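For example, with the programmatic configuration API this would look roughly like the sketch below (the equivalent declarative change is setting the {{concurrencyLevel}} attribute on the {{<locking/>}} element in the XML configuration):
{code:java}
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

public class LowerConcurrencyLevel {
    public static void main(String[] args) {
        // 1000 / 50 = 20 entries per segment before eviction kicks in,
        // instead of just 2 once the maxEntries/2 cap applies.
        Configuration cfg = new ConfigurationBuilder()
                .locking().concurrencyLevel(50)
                .eviction().strategy(EvictionStrategy.LIRS).maxEntries(1000)
                .build();
    }
}
{code}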
> Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2712
> URL: https://issues.jboss.org/browse/ISPN-2712
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.0.CR1
> Reporter: Randall Hauch
> Assignee: Adrian Nistor
> Priority: Critical
> Fix For: 5.2.0.Final
>
> Attachments: ispn_eviction_log.txt, spectrum-repository-infinispan.xml
>
>
> We are using a clustered cache with 2 nodes, where the cache on each node is configured identically with replication, eviction, and a (non-shared) file cache store. (See the attached configuration.)
> The first (coordinator) process in the cluster is started and populated with 293 entries. The first process then continually adds a few entries every 5 seconds. After a short delay, the second process is started, at which point it joins the cluster and state transfer begins; logging in the first process shows that all 293 entries are transferred to the new cluster member, and the second process's log shows that they are all received. The second process then looks for a specific entry that was created during the initial population in the first process, but fails to find it.
> By enabling trace logging and "IDE breakpoint output messages" around state transfer, it is visible that of the 293 keys, only 218 are placed into the cache; the others are lost.
> (This problem was originally discovered when clustering ModeShape, which behaves roughly in the manner described above. The entries populated upon initialization are content created when a new repository is started. The second process looks for this content; if it finds it, it knows not to recreate the initial content. However, if it does not find it, it assumes the repository has not yet been initialized and creates the initial content itself. The problem described by this bug then manifests in ModeShape as dozens of exceptions, because the repository has been corrupted. See MODE-1745 for details on that problem; ModeShape's known issue corresponding to this Infinispan bug is MODE-1754.)
> The eviction is configured like this:
> {code:xml}
> <eviction strategy="LIRS" maxEntries="1000"/>
> {code}
> The attached log file is from the second process (the "receiver" node) and it contains the following key points:
> * line 40 - the total number of keys & entries to be transferred = 293
> * line 1352 and onward (1358, 1364, and so on every 6 lines) - the data container's size stops growing at 218 even while the remaining entries are still being sent, meaning that in effect they are ignored
> * line 1797 - the loop from {{org.infinispan.statetransfer.StateConsumerImpl#doApplyState}} finishes
> Disabling eviction fixes the problem: all 293 entries are placed in Node2's cache.
> (I initially marked this as CRITICAL priority, though it is in fact a blocker for our use of Infinispan 5.2.)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira