[infinispan-dev] DataContainer performance review

Sanne Grinovero sanne at infinispan.org
Sun Jun 26 20:44:19 EDT 2011


Hi Vladimir,
this looks very interesting, I couldn't resist starting some runs.

I noticed the test is quite quick to finish, so I've raised my
LOOP_FACTOR to 200, but it still finishes within a few minutes, which
is not long enough IMHO for these numbers to be really representative.
I've also noticed that the test has a "warmup" boolean, but it's not
being used, while I think it should be.
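
To illustrate what I mean with the warmup: something along these lines,
where the same mixed pass runs once just to warm up the JIT and populate
the map, its result is thrown away, and only the second pass is measured.
This is only a rough single-threaded sketch with made-up names, not the
actual MapStressTest code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: names are made up, this is not the actual MapStressTest code.
public class WarmupSketch {

   // one mixed get/put/remove pass over the map; returns ops/ms
   static long mixedPass(Map<Integer, Integer> map, int iterations) {
      long opsDone = 0;
      long start = System.nanoTime();
      for (int i = 0; i < iterations; i++) {
         int key = i % 1000;
         map.put(key, i);
         map.get(key);
         opsDone += 2;
         if (i % 10 == 0) {
            map.remove(key);
            opsDone++;
         }
      }
      long elapsedMs = Math.max(1, (System.nanoTime() - start) / 1000000);
      return opsDone / elapsedMs;
   }

   public static void main(String[] args) {
      Map<Integer, Integer> map = new ConcurrentHashMap<Integer, Integer>();
      mixedPass(map, 1000000);                  // warmup pass, result thrown away
      long opsPerMs = mixedPass(map, 1000000);  // measured pass
      System.out.println("ops/ms = " + opsPerMs);
   }
}

In the real test the warmup pass would of course use the same thread
counts and containers as the measured one.
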
Also, the three different operations of course need to happen all
together to properly "shuffle" the data, but when interpreting these
numbers we have to consider that some operations will finish before the
others, so part of the results achieved by the remaining operations is
measured without any interference from the other operations. Maybe it
would be more interesting to have the three operations run in a
predictable sequence, or to have them all work as fast as they can for
a given timebox instead of "until the keys are finished"?

Here are my results, in case any comparison is useful. If anything can
be concluded from this data, it looks to me that something is indeed
wrong with the put operations under LIRS? Also, adding more writers
worsens the scenario for LIRS significantly.

When running the test with "doTest(map, 28, 8, 8, true, testName);"
(adding more put and remove operations) the synchronizedMap is
significantly faster than the CacheImpl.

Performance for container BoundedConcurrentHashMap (LIRS)
Average get ops/ms 1711
Average put ops/ms 63
Average remove ops/ms 1108
Size = 480
Performance for container BoundedConcurrentHashMap (LRU)
Average get ops/ms 1851
Average put ops/ms 665
Average remove ops/ms 1199
Size = 463
Performance for container CacheImpl
Average get ops/ms 349
Average put ops/ms 213
Average remove ops/ms 250
Size = 459
Performance for container ConcurrentHashMap
Average get ops/ms 776
Average put ops/ms 611
Average remove ops/ms 606
Size = 562
Performance for container SynchronizedMap
Average get ops/ms 244
Average put ops/ms 222
Average remove ops/ms 236
Size = 50000

Now with doTest(map, 28, 8, 8, true, testName):

Performance for container Infinispan Cache implementation
Average get ops/ms 71
Average put ops/ms 47
Average remove ops/ms 51
Size = 474
Performance for container ConcurrentHashMap
Average get ops/ms 606
Average put ops/ms 227
Average remove ops/ms 246
Size = 49823
Performance for container synchronizedMap
Average get ops/ms 175
Average put ops/ms 141
Average remove ops/ms 160

At first glance it doesn't look very nice, but these runs were not
long enough at all.

Sanne

2011/6/26 Vladimir Blagojevic <vblagoje at redhat.com>:
> Hi,
>
> I would like to review recent DataContainer performance claims and I was
> wondering if any of you have some spare cycles to help me out.
>
> I've added a test[1] to MapStressTest that measures and contrasts single
> node Cache performance to synchronized HashMap, ConcurrentHashMap and
> BCHM variants.
>
>
> Performance for container BoundedConcurrentHashMap (LIRS)
> Average get ops/ms 1063
> Average put ops/ms 101
> Average remove ops/ms 421
> Size = 480
> Performance for container BoundedConcurrentHashMap (LRU)
> Average get ops/ms 976
> Average put ops/ms 306
> Average remove ops/ms 521
> Size = 463
> Performance for container CacheImpl
> Average get ops/ms 94
> Average put ops/ms 61
> Average remove ops/ms 65
> Size = 453
> Performance for container ConcurrentHashMap
> Average get ops/ms 484
> Average put ops/ms 326
> Average remove ops/ms 376
> Size = 49870
> Performance for container SynchronizedMap
> Average get ops/ms 96
> Average put ops/ms 85
> Average remove ops/ms 96
> Size = 49935
>
>
> I ran MapStressTest on my MacBook Air, with 32 threads continually doing
> get/put/remove ops. For more details see [1]. If my measurements are
> correct, a Cache instance seems to be capable of roughly 220 ops per
> millisecond on my crappy hardware setup. As you can see, performance of
> the entire cache structure does not seem to be much worse than that of a
> SynchronizedMap, which is great on the one hand but also leaves us some
> room for potential improvement, since ConcurrentHashMap and BCHM seem to
> be substantially faster. I have not tested the impact of having a cache
> store for passivation; I will do that tomorrow/next week.
>
> Any comments/ideas going forward?
>
> [1] https://github.com/infinispan/infinispan/pull/404
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

