[infinispan-dev] Implementing LIRS eviction policy [ISPN-299]

Vladimir Blagojevic vblagoje at redhat.com
Fri Jan 8 04:41:58 EST 2010


Hey Bryan,

On 2010-01-08, at 2:09 AM, Bryan Thompson wrote:

> Vladimir,
>  
> I have used a similar approach to "batching" evictions with good success. It reduced contention on the eviction lock by about twenty percent.

How different, if at all, was your batching implementation from the one I referred to in my previous mail? 
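For reference, the rough shape of batching I had in mind is something like the following (a sketch only; the class name, thresholds, and structure are illustrative, not actual Infinispan code). Accesses are recorded in a lock-free queue, and the eviction lock is taken once per batch rather than once per access:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch: reads enqueue keys instead of taking the eviction
// lock; the queue is drained under the lock once per batch.
class BatchedLruCache<K, V> {
    private static final int BATCH_SIZE = 64;
    private static final int MAX_ENTRIES = 1024;

    private final Map<K, V> data = new ConcurrentHashMap<>();
    // Access order is maintained lazily, only when a batch is applied.
    private final LinkedHashMap<K, Boolean> lruOrder = new LinkedHashMap<>(16, 0.75f, true);
    private final Queue<K> accessQueue = new ConcurrentLinkedQueue<>();

    public V get(K key) {
        V value = data.get(key);
        if (value != null) {
            accessQueue.add(key);            // cheap, lock-free record of the hit
            // Note: ConcurrentLinkedQueue.size() is O(n); a real
            // implementation would keep an atomic counter instead.
            if (accessQueue.size() >= BATCH_SIZE) {
                applyBatch();                // amortizes the lock over the batch
            }
        }
        return value;
    }

    public void put(K key, V value) {
        data.put(key, value);
        synchronized (lruOrder) {
            lruOrder.put(key, Boolean.TRUE);
            while (lruOrder.size() > MAX_ENTRIES) {
                K eldest = lruOrder.keySet().iterator().next();
                lruOrder.remove(eldest);     // evict least-recently-used entry
                data.remove(eldest);
            }
        }
    }

    private void applyBatch() {
        synchronized (lruOrder) {
            K key;
            while ((key = accessQueue.poll()) != null) {
                if (lruOrder.containsKey(key)) {
                    lruOrder.get(key);       // touch: moves key to MRU position
                }
            }
        }
    }

    public int size() { return data.size(); }
}
```

The LRU ordering this produces is only approximate, since touches are applied late, but the eviction lock is taken once per batch instead of once per access.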

>  
> For this cache, which was not realized using infinispan, I did NO eviction until the cache reached a target percentage of the JVM heap (this kept the cache hot). For our case, this was easy, as the objects entered into the cache were database records encoded as byte[]s, so we had a good measure of their memory footprint. Once adding another record would push the cache size over the target memory limit, that thread would gain a lock and then evict a batch of records (in an LRU fashion) until the cache size had been driven down by a threshold (e.g., 5% of the target memory was recovered).
>  
> Let me suggest an alternative approach.  Neither LIRS nor LRU need to be strong orderings.  It is enough that a cache eviction policy be "approximately" LRU (or LIRS).  As long as the least desirable entries wind up getting evicted, it does not matter what the precise eviction order is.

Can you clarify what you mean by "need to be strong orderings"? I take it you are referring to the eviction order? 

>  
> If you accept "approximate" eviction orders, then striped locks could be used to reduce thread contention.  Evictions could be driven on each segment of the cache independently.  If we hash the cache entry key to choose the segment, then each cache entry will only appear in one segment and all segments should be driven more or less equally by the application workload (assuming a decent hash function).  So using 16 segments / locks, you have 1/16th the contention.
>  
> Thoughts?

So what you are basically saying is that we implement LIRS and LRU per Segment of ConcurrentHashMap, is that right? Interesting idea! 
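Something along those lines might look roughly like this (purely illustrative names and sizes, not Infinispan code): each segment is an access-ordered LinkedHashMap that evicts independently under its own lock, and the key's hash selects the segment.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of per-segment eviction: the key's hash selects a
// segment, and each segment evicts LRU entries under its own lock.
class SegmentedLruCache<K, V> {
    private static final int SEGMENTS = 16;            // 1/16th the contention
    private static final int ENTRIES_PER_SEGMENT = 64;

    private final List<Map<K, V>> segments = new ArrayList<>();

    SegmentedLruCache() {
        for (int i = 0; i < SEGMENTS; i++) {
            // Access-ordered map with built-in LRU eviction per segment.
            segments.add(new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > ENTRIES_PER_SEGMENT; // evict this segment's LRU entry
                }
            });
        }
    }

    private Map<K, V> segmentFor(K key) {
        // Spread the hash so entries distribute evenly across segments.
        int h = key.hashCode();
        h ^= (h >>> 16);
        return segments.get((h & Integer.MAX_VALUE) % SEGMENTS);
    }

    public V get(K key) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { return seg.get(key); }    // lock only this segment
    }

    public void put(K key, V value) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { seg.put(key, value); }
    }

    public int size() {
        int n = 0;
        for (Map<K, V> seg : segments) {
            synchronized (seg) { n += seg.size(); }
        }
        return n;
    }
}
```

As you say, the resulting eviction order is only LRU within each segment, but if the hash function spreads keys well, the entries evicted globally are still approximately the least desirable ones.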

Does your proposal still include batching? If so, how would you fit it into this new design? Would you keep a batched record of cache entry accesses in a queue per segment? 
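To make that question concrete, one way to lay it out (again, just a sketch with made-up names) would be for each segment to own its entry map, its LRU ordering, and its own pending-access queue, drained only under that segment's lock:

```java
import java.util.LinkedHashMap;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch combining both ideas: every segment owns its entries,
// its LRU ordering, and its own access queue, so batches are recorded and
// drained entirely within one segment.
class Segment<K, V> {
    static final int BATCH = 32;

    final ConcurrentHashMap<K, V> data = new ConcurrentHashMap<>(); // lock-free reads
    final LinkedHashMap<K, Boolean> lru = new LinkedHashMap<>(16, 0.75f, true);
    final Queue<K> pendingAccesses = new ConcurrentLinkedQueue<>();

    V get(K key) {
        V v = data.get(key);
        if (v != null) {
            pendingAccesses.add(key);          // record, don't reorder yet
            if (pendingAccesses.size() >= BATCH) {
                drainBatch();
            }
        }
        return v;
    }

    void drainBatch() {
        synchronized (lru) {                   // this segment's lock only
            K k;
            while ((k = pendingAccesses.poll()) != null) {
                if (lru.containsKey(k)) {
                    lru.get(k);                // touch: refresh LRU position
                }
            }
        }
    }
}
```

Since a batch only ever touches one segment's structures, two threads hitting different segments never contend at all.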

Regards,
Vladimir



