[infinispan-dev] Lock amortization preliminary performance numbers
Manik Surtani
manik at jboss.org
Thu Jan 28 12:59:18 EST 2010
On 27 Jan 2010, at 15:29, Vladimir Blagojevic wrote:
> Hi all,
>
> As you probably recall, I am working on a LIRS eviction algorithm. But before we get there, Manik and I agreed that I would push in the direction of implementing a ConcurrentHashMap (CHM) variant that does lock amortization and eviction per CHM segment. So far I have implemented LRU eviction, so that we can verify the feasibility of this approach rather than the eviction algorithm itself. We are hoping that eviction precision will not suffer if we evict per segment, and you are all familiar with the lock-striping benefits of segments in CHM.
>
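For anyone following along, this is roughly the shape of the idea. Below is a minimal, illustrative sketch of one segment that keeps gets lock-free, records accesses in a queue, and only grabs the segment lock to replay that queue and evict once a batch has accumulated. This is not Vladimir's BufferedConcurrentHashMap code; the names (AmortizedLruSegment, BATCH_THRESHOLD, etc.) and the batching policy are made up purely for illustration.

import java.util.LinkedHashSet;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// One segment: reads stay lock-free, the segment lock is only taken to
// replay a batch of recorded accesses and evict LRU entries.
final class AmortizedLruSegment<K, V> {
    private static final int BATCH_THRESHOLD = 64;   // made-up batch size

    private final int maxEntries;
    private final ConcurrentHashMap<K, V> entries = new ConcurrentHashMap<K, V>();
    private final Queue<K> accessQueue = new ConcurrentLinkedQueue<K>();
    private final AtomicInteger pending = new AtomicInteger();
    private final ReentrantLock lock = new ReentrantLock();
    // Iteration order = least- to most-recently used; only touched under the lock.
    private final LinkedHashSet<K> lruOrder = new LinkedHashSet<K>();

    AmortizedLruSegment(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    V get(K key) {
        V v = entries.get(key);          // no lock on the read path
        if (v != null) recordAccess(key);
        return v;
    }

    V put(K key, V value) {
        V old = entries.put(key, value);
        recordAccess(key);
        return old;
    }

    V remove(K key) {
        V old = entries.remove(key);
        recordAccess(key);               // replay simply drops keys that are gone
        return old;
    }

    private void recordAccess(K key) {
        accessQueue.add(key);
        // Only try the lock once a whole batch has accumulated; if another
        // thread already holds it, skip and let a later operation pay the cost.
        if (pending.incrementAndGet() >= BATCH_THRESHOLD && lock.tryLock()) {
            try {
                drainAndEvict();
            } finally {
                lock.unlock();
            }
        }
    }

    private void drainAndEvict() {       // called with the segment lock held
        K k;
        while ((k = accessQueue.poll()) != null) {
            pending.decrementAndGet();
            lruOrder.remove(k);
            if (entries.containsKey(k)) lruOrder.add(k);   // move to MRU position
        }
        while (entries.size() > maxEntries && !lruOrder.isEmpty()) {
            K eldest = lruOrder.iterator().next();         // LRU key
            lruOrder.remove(eldest);
            entries.remove(eldest);
        }
    }
}

The point is that the ordering and eviction bookkeeping is paid for once per batch, under the lock, rather than once per operation.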
> So, I've cobbled together a first test to compare the eviction- and lock-amortization-enabled CHM (BCHM) with a regular CHM and a synchronized HashMap. The test is based on an already existing test [1] used to measure the performance of various DataContainers; I've changed it slightly to measure the Map directly instead of the DataContainer. The test launches 48 reader, 4 writer and 1 remover threads concurrently. All operations are randomized: readers execute map.get(R.nextInt(NUM_KEYS)), writers map.put(R.nextInt(NUM_KEYS), "value"), and removers map.remove(R.nextInt(NUM_KEYS)). NUM_KEYS was set to 10K, and each thread does 1K ops in a loop.
Can we think of a better way to generate keys here? The problem with having random.nextInt() within the timed loop is that you are effectively measuring how quick your random impl is. :)
Perhaps generate an array of random keys up front, and then have each thread just increment an offset into the array to get its key (see the sketch below)? The offset could be a simple int; there is no need for an AtomicInteger, since the only consequence of a racy read is that you get a different random key, whereas the cost would be an unnecessary CAS on every operation.
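Something along these lines (a sketch only; the class and field names are mine, not the actual test code):

import java.util.Random;

// All randomness happens once, up front; the timed loop only does an array read.
final class PrecomputedKeys {
    private final Integer[] keys;
    private int offset;   // plain int on purpose: a racy read just yields a different key

    PrecomputedKeys(int numKeys, long seed) {
        Random r = new Random(seed);
        keys = new Integer[numKeys];
        for (int i = 0; i < numKeys; i++) {
            keys[i] = Integer.valueOf(r.nextInt(numKeys));   // boxed once, outside the timed loop
        }
    }

    Integer next() {
        int i = (offset++) & 0x7fffffff;   // mask keeps the index non-negative if the counter wraps
        return keys[i % keys.length];
    }
}

Each reader would then call map.get(keys.next()) inside its timed loop (and likewise for put/remove), so neither Random.nextInt() nor boxing shows up in the measured section.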
> The initial capacity for CHM and HashMap was set to 1K, and the max capacity for the eviction- and lock-amortization-enabled CHM was set to 256; therefore BCHM has to do a lot of evictions, which is evident in the final map sizes listed below.
What happens when the BCHM is bounded at something higher, e.g., 1024? Would we have fewer eviction events?
>
> Size = 9999
> Performance for container ConcurrentHashMap
> Average get ops/ms 338
> Average put ops/ms 87
> Average remove ops/ms 171
>
> Size = 193
> Performance for container BufferedConcurrentHashMap
> Average get ops/ms 322
> Average put ops/ms 32
> Average remove ops/ms 74
>
> Size = 8340
> Performance for container SynchronizedMap
> Average get ops/ms 67
> Average put ops/ms 45
> Average remove ops/ms 63
>
> If I remove lock amortization from BufferedConcurrentHashMap, that is, if we attempt eviction on every get/put/remove, put/remove performance for BCHM basically drops to zero! As far as I can interpret these results, BCHM get performance does not suffer at all in comparison with CHM, whereas it clearly does for the single-lock HashMap. Predictably, for the single-lock HashMap each type of operation takes almost the same amount of time on average. We pay a hit on BCHM put/remove operations in comparison with CHM, but the numbers are promising, I think. If you have comments, or suggestions on how to test these map variants in a different, possibly more insightful way, speak up!
>
> Cheers,
> Vladimir
>
>
> [1] http://fisheye.jboss.org/browse/~raw,r=1264/Infinispan/trunk/core/src/test/java/org/infinispan/stress/DataContainerStressTest.java
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Manik Surtani
manik at jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org