<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hey Bryan,<div><br><div><div>On 2010-01-08, at 2:09 AM, Bryan Thompson wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite">
<div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">Vladimir,</font></span></div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">I have used a similar approach to "batching" evictions with
good success. It reduced contention for the eviction lock by
about twenty percent.</font></span></div></div></blockquote><div><br></div><div>How different, if at all, was your batching implementation from the one I referred to in my previous mail? </div><br><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">For this cache, which was not implemented using Infinispan, I
did NO eviction until the cache reached a target percentage of the JVM heap
(this kept the cache hot). For our case this was easy, as the objects
entered into the cache were database records encoded as byte[]s, so we had a good
measure of their memory footprint. Once adding another record to the
cache would push the cache size over the target memory limit, that thread would
acquire a lock and then evict a batch of records (in an LRU fashion) until the
cache size had been driven down past a threshold (e.g., 5% of the target memory
was recovered).</font></span></div></div></blockquote><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
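</div></blockquote><div><br></div><div>A minimal sketch of the memory-bounded batch eviction described above, with hypothetical names (BatchEvictingCache, targetBytes, recoverFraction) rather than the actual code from that system:</div>

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: no eviction until the byte size of the cache crosses a target
// limit; the inserting thread then holds the lock and evicts a batch of
// LRU records until a threshold (e.g. 5% of the target) is recovered.
class BatchEvictingCache {
    private final long targetBytes;
    private final double recoverFraction; // e.g. 0.05 => free 5% of target
    private long currentBytes = 0;
    // accessOrder = true makes iteration run least-recently-used first
    private final LinkedHashMap<String, byte[]> map =
            new LinkedHashMap<>(16, 0.75f, true);

    BatchEvictingCache(long targetBytes, double recoverFraction) {
        this.targetBytes = targetBytes;
        this.recoverFraction = recoverFraction;
    }

    synchronized void put(String key, byte[] record) {
        if (currentBytes + record.length > targetBytes) {
            evictBatch(); // batch eviction amortizes the lock cost
        }
        byte[] old = map.put(key, record);
        currentBytes += record.length - (old == null ? 0 : old.length);
    }

    synchronized byte[] get(String key) {
        return map.get(key); // a hit refreshes the entry's LRU position
    }

    // Evict LRU-first until recoverFraction of the target is freed.
    private void evictBatch() {
        long goal = targetBytes - (long) (targetBytes * recoverFraction);
        Iterator<Map.Entry<String, byte[]>> it = map.entrySet().iterator();
        while (currentBytes > goal && it.hasNext()) {
            currentBytes -= it.next().getValue().length;
            it.remove();
        }
    }

    synchronized long sizeInBytes() { return currentBytes; }
}
```

<div>The 5% figure from the mail corresponds to recoverFraction = 0.05; a production version would need finer-grained concurrency control than a single monitor lock.</div><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">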
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">Let me suggest an alternative approach. Neither LIRS
nor LRU needs to be a strong ordering. It is enough that a cache eviction
policy be "approximately" LRU (or LIRS). As long as the least desirable
entries wind up getting evicted, it does not matter what the precise eviction
order is.</font></span></div></div></blockquote><div><br></div><div>Can you clarify "need to be strong orderings"? Ah, you are referring to the eviction order? </div><br><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">If you accept "approximate" eviction orders, then striped
locks could be used to reduce thread contention. Evictions could be driven
on each segment of the cache independently. If we hash the cache entry key
to choose the segment, then each cache entry will only appear in one segment, and
all segments should be driven more or less equally by the application workload
(assuming a decent hash function). So with 16 segments/locks, you have
roughly 1/16th the contention.</font></span></div>
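</div></blockquote><div><br></div><div>A sketch of this striped design, using per-segment LRU maps guarded by per-segment locks (the class and parameter names here are illustrative, not an Infinispan API):</div>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: hash each key to one of N independent LRU segments, each with
// its own lock and its own capacity bound, so eviction ordering is only
// "approximately" LRU across the whole cache.
class SegmentedLruCache<K, V> {
    private final LinkedHashMap<K, V>[] segments;
    private final int perSegmentCapacity;

    @SuppressWarnings("unchecked")
    SegmentedLruCache(int numSegments, int totalCapacity) {
        this.perSegmentCapacity = totalCapacity / numSegments;
        this.segments = new LinkedHashMap[numSegments];
        for (int i = 0; i < numSegments; i++) {
            // accessOrder = true gives LRU iteration order within a segment
            segments[i] = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > perSegmentCapacity; // per-segment eviction
                }
            };
        }
    }

    private LinkedHashMap<K, V> segmentFor(K key) {
        int h = key.hashCode();
        h ^= (h >>> 16); // spread the hash bits before picking a segment
        return segments[Math.floorMod(h, segments.length)];
    }

    V put(K key, V value) {
        LinkedHashMap<K, V> seg = segmentFor(key);
        synchronized (seg) { // one lock per segment: ~1/Nth the contention
            return seg.put(key, value);
        }
    }

    V get(K key) {
        LinkedHashMap<K, V> seg = segmentFor(key);
        synchronized (seg) {
            return seg.get(key);
        }
    }

    int size() {
        int n = 0;
        for (LinkedHashMap<K, V> seg : segments) {
            synchronized (seg) { n += seg.size(); }
        }
        return n;
    }
}
```

<div>Each key hashes to exactly one segment, so a put or get only contends on that segment's lock; in exchange, eviction order is only approximate across segments, which is the trade-off accepted above.</div><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">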
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="427114300-08012010"><font face="Courier New" color="#0000ff" size="4">Thoughts?</font></span></div>
</div></blockquote></div><br><div>So what you are basically saying is that we implement LIRS and LRU per Segment of ConcurrentHashMap, is that right? Interesting idea! </div><div><br></div><div>Does your proposal still include batching? If so, how would you fit it into this new design? Would you have to keep a batched record of cache-entry accesses in a queue per segment? </div><div><br></div><div>Regards,</div><div>Vladimir</div><div><br></div><div><br></div></div></body></html>