[infinispan-dev] Lock amortization preliminary performance numbers

Vladimir Blagojevic vblagoje at redhat.com
Wed Feb 3 10:00:00 EST 2010


Bryan,

On 2010-02-02, at 7:49 PM, Bryan Thompson wrote:

> Vladimir,
>  
> I now have a workload with a hotspot on our cache which accounts for 8% of the total time.  I am going to use this to test BCHM under a realistic workload.
>  
> One thing which I like about your design is that by placing the array buffering the operations for a thread within the Segment, you are using the lock guarding the Segment to control updates to that array.  While this has less potential throughput than a truly thread-local design (which is non-blocking until the thread-local array is full), it seems that the array buffering the updates cannot "escape" under your design.  Was this intentional?

Yes. Escaping could also potentially be handled by a shared pool of array buffers. Ideally, the shared pool would be some lock-free structure, since there would be a lot of contention among threads for these array buffers (which record accesses).
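As a rough sketch (hypothetical names, not existing Infinispan/BCHM code), such a pool could look like:

import java.util.ArrayDeque;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a shared buffer pool -- not actual BCHM code.
final class AccessBufferPool<K> {
    // Lock-free pool of reusable access-recording buffers; every
    // accessing thread contends on it, hence no locks.
    private final ConcurrentLinkedQueue<ArrayDeque<K>> pool =
            new ConcurrentLinkedQueue<ArrayDeque<K>>();

    ArrayDeque<K> borrow() {
        ArrayDeque<K> buf = pool.poll();
        return buf != null ? buf : new ArrayDeque<K>();
    }

    void giveBack(ArrayDeque<K> buf) {
        buf.clear(); // drop strong references so entries stay evictable
        pool.offer(buf);
    }
}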
 


>  
> The danger with a true thread-local design is that a thread can come in and do some work, get some updates buffered in its thread-local array, and then never visit again.  In this case those updates would remain buffered on the thread and would not in fact cause the access order to be updated in a timely manner.  Worse, if you are relying on WeakReference semantics, the buffered updates would remain strongly reachable and the corresponding objects would be wired into the cache.

Not if you return the updates to the above-mentioned pool as the thread unwinds from its call into the cache (container).
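Continuing the hypothetical pool sketch above (borrow/giveBack and replayIfNeeded are illustrative names only), the unwind pattern would be:

// The buffer is borrowed on entry and returned in a finally block,
// so buffered updates never outlive the thread's call into the container.
void recordedGet(K key) {
    ArrayDeque<K> buf = pool.borrow();
    try {
        buf.add(key);        // buffer the access
        replayIfNeeded(buf); // flush batched accesses into the container
    } finally {
        pool.giveBack(buf);  // nothing stays wired into the cache
    }
}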

>  
> I've worked around this issue in a different context where it made sense to scope the buffers to an owning object (a B+Tree instance).  In that case, when the B+Tree container was closed, all buffered updates were discarded.  This nicely eliminated the problems with "escaping" threads.

Yes, there has to be some container boundary! In Infinispan this would be a cache instance, the DataContainer to be exact.

>  
> However, I like your approach better.
>  
> Bryan
>  
> PS: I see a comment on LRU#onEntryHit() indicating that it is invoked without holding the Segment lock.  Is that true only when BCHM#get() is invoked?

Yes, I'll change the javadoc to say: "is potentially invoked without holding a lock".

>  
> PPS: Also, can you expand on the role of the LRU#accessQueue vs LRU#lruQueue?  Is there anything which corresponds to the total LRU order (after batching updates through the Segment)?


LRU#accessQueue records hits on a Segment. It is the batching FIFO queue from the BP-Wrapper paper, except that it is per Segment rather than per thread. Notice that LRU#accessQueue is lock-free yet thread-safe, since we expect a lot of threads recording accesses per Segment; the ideal collection for this use case is ConcurrentLinkedQueue.
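A minimal sketch of that batching (illustrative class and method names, not the real LRU code; LruOrder is sketched below under lruQueue):

import java.util.concurrent.ConcurrentLinkedQueue;

// Many threads record hits without blocking; one thread, holding the
// Segment lock, later drains the batch in a single pass.
final class AccessRecording<K> {
    private final ConcurrentLinkedQueue<K> accessQueue =
            new ConcurrentLinkedQueue<K>();

    // Hit path -- potentially invoked without holding the Segment lock.
    void onEntryHit(K key) {
        accessQueue.offer(key);
    }

    // Drain path -- invoked under the Segment lock.
    void drain(LruOrder<K> lru) {
        K key;
        while ((key = accessQueue.poll()) != null) {
            lru.touch(key); // replay the batched hits against the LRU order
        }
    }
}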

LRU#lruQueue is simply an LRU stack implementation, with one end of the queue holding the most recently accessed entries and the other end holding the least recently accessed ones.
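Sketched with the same caveats (assumed API, with java.util.ArrayDeque standing in for the real structure):

import java.util.ArrayDeque;
import java.util.Deque;

// Head holds the most recently accessed entries, tail the least.
final class LruOrder<K> {
    private final Deque<K> lruQueue = new ArrayDeque<K>();

    void touch(K key) {
        lruQueue.remove(key);   // linear scan; fine for a sketch
        lruQueue.addFirst(key); // promote to most recently used
    }

    K selectVictim() {
        return lruQueue.pollLast(); // evict the least recently used entry
    }
}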


Regards,
Vladimir

