<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hey Bryan,<div><br><div><div>On 2010-01-08, at 1:35 PM, Bryan Thompson wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite">
<div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Vladimir,</font></span></div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Reading the paper, it appears that my suggestion of
segments is called "distributed locks" in this paper, and the authors
suggest:</font></span></div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font size="2"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font size="2">> For
example, the algorithms that need to detect sequence of accesses cannot retain
their performance advantages when pages in the same sequence have been
distributed into multiple partitions and cannot be identified as a
sequence.</font></span></div></div></blockquote><div><br></div><div>Indeed, but that sequence of accesses seems to be important for a subgroup of algorithms such as SEQ. To be honest, when I first read the paper I was surprised by the authors' claim that batching does not affect the precision of LIRS and LRU. LIRS and LRU need the order of accesses too, but it seems that an approximated access order does not affect precision significantly.</div><div><br></div><div>Along the same lines, we have to figure out whether segmenting, with batching per segment, is going to affect algorithm precision overall. If someone can prove that this is a dead end, please argue your point :) Otherwise, intuitively speaking, I cannot see how batching per segment would suddenly make LIRS imprecise if batching alone did not do so in the first place.</div><br><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
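</div></blockquote><div>To make the idea concrete, here is a minimal, hypothetical sketch (not Infinispan code) of batching per segment: each segment records accesses in a plain buffer and only replays them into its LRU ordering once a threshold is reached, so the ordering structure is updated in bursts rather than on every read. The class name and the batchThreshold parameter are made up for illustration.</div>

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch: one cache segment that batches updates to its LRU
// ordering, BP-Wrapper style. Reads only append the key to a buffer; the
// access-ordered map is touched once per batchThreshold accesses.
final class BatchingSegment<K, V> {
    private final int batchThreshold;
    private final Map<K, V> data = new HashMap<>();
    // accessOrder=true keeps keys ordered least- to most-recently used.
    private final LinkedHashMap<K, Boolean> order = new LinkedHashMap<>(16, 0.75f, true);
    private final Queue<K> pending = new ArrayDeque<>();

    BatchingSegment(int batchThreshold) {
        this.batchThreshold = batchThreshold;
    }

    synchronized V get(K key) {
        V v = data.get(key);
        if (v != null) {
            pending.add(key);
            if (pending.size() >= batchThreshold) {
                flushPending();
            }
        }
        return v;
    }

    synchronized void put(K key, V value) {
        data.put(key, value);
        order.put(key, Boolean.TRUE);
    }

    // Replays the buffered accesses into the LRU ordering in one batch.
    private void flushPending() {
        K k;
        while ((k = pending.poll()) != null) {
            order.get(k); // an access-order "touch" moves k to the MRU end
        }
    }

    // The least-recently-used key, i.e. the next eviction candidate.
    synchronized K evictionCandidate() {
        Iterator<K> it = order.keySet().iterator();
        return it.hasNext() ? it.next() : null;
    }
}
```

<div>Between flushes the ordering is stale; that staleness is exactly the approximation whose effect on LIRS/LRU precision is in question above.</div><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">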
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Overall, I like the approach outlined in the paper.
It takes the concept of batching further than what we did, but in much the same
spirit. In our implementation, we only batched evictions. They are
also batching the updates to the replacement policy ordering, which is still a
bottleneck for us.</font></span></div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Concerning the FIFO queue, the paper
(page 372) considers both per-thread and global queues. They dismiss
global queues for batching updates primarily because certain cache replacement
policies need to detect sequences of operations. Can such sequences of
operations be meaningfully detected within Infinispan? Is there any
thread affinity for a transaction? If not, then segmented queues might be
an alternative for Infinispan.</font></span></div></div></blockquote><div><br></div><div>Yes, they dismiss the global queue because of algorithms we do not plan to use and because of the locking cost of a global queue. I am betting that a global lock-free queue per Infinispan DataContainer is less costly than the shared BP-Wrapper objects that we would have to maintain in a pool and share among threads (they are not shared in the paper, but we would have to share them). As I mentioned, the problem for us is that we can potentially have spikes of up to hundreds of threads in a cache instance. Infinispan's use case differs from the PostgreSQL use case in the paper, but I agree that the potential impact of a global lock-free queue should not be dismissed, and I would switch to the pooling approach if the lock-free queue is proven to be a bottleneck.</div><div><br></div><br><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
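</div></blockquote><div>For reference, a minimal sketch of what I mean by a global lock-free queue per DataContainer (hypothetical code, not the Infinispan API): readers record accesses on a ConcurrentLinkedQueue without blocking, and whichever thread wins a CAS drains the queue and applies the batch to the replacement policy; losers simply move on.</div>

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Hypothetical sketch: a single lock-free access queue shared by all threads.
// recordAccess never blocks; draining is serialized by a CAS on 'draining',
// so the policy callback runs single-threaded without holding a lock on reads.
final class GlobalAccessQueue<K> {
    private static final int DRAIN_LIMIT = 128; // cap work done per drain pass

    private final ConcurrentLinkedQueue<K> queue = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean draining = new AtomicBoolean();
    private final Consumer<K> policyUpdate; // e.g. an LRU/LIRS reorder callback

    GlobalAccessQueue(Consumer<K> policyUpdate) {
        this.policyUpdate = policyUpdate;
    }

    void recordAccess(K key) {
        queue.offer(key);
        tryDrain();
    }

    private void tryDrain() {
        if (!draining.compareAndSet(false, true)) {
            return; // another thread is already draining; do not block
        }
        try {
            K k;
            int n = 0;
            while (n++ < DRAIN_LIMIT && (k = queue.poll()) != null) {
                policyUpdate.accept(k); // single drainer, no further locking needed
            }
        } finally {
            draining.set(false);
        }
    }
}
```

<div>Whether this beats pooled BP-Wrapper objects under hundreds of threads is exactly the open question; it would have to be measured.</div><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">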
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Also, for Java, if the
hash map uses weak values, then any entries on the FIFO queues
will remain pinned in the hash map even if they would otherwise have been
evicted by the cache policy, which is a nice touch. That way you can
continue to access an object that is still reachable in the JVM even if
the cache replacement policy would have discarded the cache
entry.</font></span></div></div></blockquote><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">The experiment in the paper compared several cache
replacement policies, but the data were all hot in cache and the cache was large
enough to avoid all cache misses, so it is not clear to me why they bothered to compare
different cache replacement policies -- none of them did any evictions for their
tests. </font></span><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">While "prefetching" did reduce the lock
contention slightly, it had no meaningful effect on either throughput or
response time. The authors posit that it might have more effect on
machines with many more cores, depending on the CPU architecture. It seems
easy enough to ignore the prefetch dimension for now and just concentrate on
batching.</font></span></div></div></blockquote><div><br></div><div>Exactly the same sentiment here :)</div><div><br></div><br><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">So, I think that the batching approach in this paper makes
sense and could easily be combined with LRU or LIRS, since the cache replacement
policy is encapsulated (which might not be true with prefetch); overall it seems
like a good idea and an improvement on what we were already doing. For our
application, I think that we would use a hash map with weak values (per the above)
to gain the benefit of the actual retention time of the object by the JVM.
I would also like to explore the use of segmented queues as well as per-thread
queues, to control the number of queues in the system separately from the number of
threads. By batching updates to the access order, you are also effectively
batching evictions, since they are driven by updates. When combined with a
weak-value cache, the thread running the updates should also clear from the
cache any entries whose weak references have been enqueued on the reference
queue, thereby batching the clearing of stale cache entries (those with cleared
references) at the same time it is batching updates to the access order and
inserts to the cache.</font></span></div>
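</div></blockquote><div>A minimal sketch of that weak-value idea (hypothetical code; class names are made up): values are held through WeakReferences registered with a ReferenceQueue, and the same code path that performs batched updates also drains the reference queue, removing map entries whose values the JVM has already collected.</div>

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: a cache whose values are weakly referenced. Stale
// entries (values already garbage collected) are removed in batches by
// drainStale(), which here piggybacks on the write path.
final class WeakValueCache<K, V> {
    private final ConcurrentMap<K, KeyedRef<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> refQueue = new ReferenceQueue<>();

    // WeakReference that remembers its key so the map entry can be removed.
    private static final class KeyedRef<K, V> extends WeakReference<V> {
        final K key;
        KeyedRef(K key, V value, ReferenceQueue<V> q) {
            super(value, q);
            this.key = key;
        }
    }

    void put(K key, V value) {
        drainStale(); // batch cleanup piggybacked on the update path
        map.put(key, new KeyedRef<>(key, value, refQueue));
    }

    V get(K key) {
        KeyedRef<K, V> ref = map.get(key);
        return ref == null ? null : ref.get(); // null once the JVM collected it
    }

    // Batch-removes entries whose values were garbage collected.
    @SuppressWarnings("unchecked")
    void drainStale() {
        KeyedRef<K, V> stale;
        while ((stale = (KeyedRef<K, V>) refQueue.poll()) != null) {
            map.remove(stale.key, stale); // only remove if still mapped to this ref
        }
    }

    int size() {
        return map.size();
    }
}
```

<div>A strongly reachable value is never cleared, which is the retention benefit described above; only values the JVM has already collected are swept out of the map.</div><blockquote type="cite"><div style="WORD-WRAP: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space">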
<div dir="ltr" align="left"><span class="661490010-08012010"></span><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4"></font></span> </div>
<div dir="ltr" align="left"><span class="661490010-08012010"><font face="Courier New" color="#0000ff" size="4">Bryan</font></span></div></div></blockquote></div><br><div>Excellent, let's keep cooperating on this one. If you are already at the implementation stage, and it seems that you are, let us know about the progress you make and the performance increases you observe. </div><div><br></div><div><br></div><div>Regards,</div><div>Vladimir</div></div></body></html>