Vladimir,
I think it's better if you run your tests on one of the cluster or perf machines, because
that way everyone has access to the same base system and results can be compared,
particularly when changes are made. It also avoids local apps or CPU usage affecting your
test results.
I agree with Sanne: put ops for LIRS don't look good in comparison with LRU. Did you run
any profiling?
Cheers,
On Jun 27, 2011, at 10:11 PM, Vladimir Blagojevic wrote:
Sanne & others,
I think we might be onto something. I changed the test to run for a
specified period of time, using 10-minute test runs (you need to pull
this change into MapStressTest manually until it is integrated). I noticed
that as we raise the map capacity, BCHM and CacheImpl performance starts to
degrade, while ConcurrentHashMap and SynchronizedMap performance does not.
See the results below.
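The timeboxed measurement can be sketched roughly like this (a hypothetical standalone harness, not the actual MapStressTest code; the method name runTimeboxed, the key range and the thread counts are made up for illustration): each worker thread mixes get/put/remove operations until a deadline passes, and throughput is reported as ops/ms afterwards.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.LongAdder;

public class TimeboxedMapStress {

    // Runs a mixed get/put/remove workload against the given map for
    // durationMs milliseconds and returns the aggregate throughput in ops/ms.
    static long runTimeboxed(Map<Integer, Integer> map, int threads, long durationMs)
            throws InterruptedException {
        LongAdder ops = new LongAdder();
        long deadline = System.currentTimeMillis() + durationMs;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                while (System.currentTimeMillis() < deadline) {
                    int key = rnd.nextInt(100_000);
                    switch (rnd.nextInt(3)) {
                        case 0: map.get(key); break;
                        case 1: map.put(key, key); break;
                        default: map.remove(key); break;
                    }
                    ops.increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return ops.sum() / Math.max(1, durationMs);
    }

    public static void main(String[] args) throws InterruptedException {
        long opsPerMs = runTimeboxed(new ConcurrentHashMap<>(), 4, 500);
        System.out.println("Average ops/ms " + opsPerMs);
    }
}
```

Running every container for the same wall-clock duration (rather than "until the keys run out") is what makes the ops/ms figures below comparable across implementations.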
max capacity = 512
Performance for container BoundedConcurrentHashMap
Average get ops/ms 382
Average put ops/ms 35
Average remove ops/ms 195
Size = 478
Performance for container BoundedConcurrentHashMap
Average get ops/ms 388
Average put ops/ms 54
Average remove ops/ms 203
Size = 462
Performance for container CacheImpl
Average get ops/ms 143
Average put ops/ms 16
Average remove ops/ms 26
Size = 418
Performance for container ConcurrentHashMap
Average get ops/ms 176
Average put ops/ms 67
Average remove ops/ms 74
Size = 43451
Performance for container SynchronizedMap
Average get ops/ms 58
Average put ops/ms 47
Average remove ops/ms 60
Size = 30996
max capacity = 16384
Performance for container BoundedConcurrentHashMap
Average get ops/ms 118
Average put ops/ms 7
Average remove ops/ms 11
Size = 16358
Performance for container BoundedConcurrentHashMap
Average get ops/ms 76
Average put ops/ms 5
Average remove ops/ms 6
Size = 15488
Performance for container CacheImpl
Average get ops/ms 48
Average put ops/ms 4
Average remove ops/ms 16
Size = 12275
Performance for container ConcurrentHashMap
Average get ops/ms 251
Average put ops/ms 107
Average remove ops/ms 122
Size = 17629
Performance for container SynchronizedMap
Average get ops/ms 51
Average put ops/ms 42
Average remove ops/ms 51
Size = 36978
max capacity = 32768
Performance for container BoundedConcurrentHashMap
Average get ops/ms 72
Average put ops/ms 7
Average remove ops/ms 9
Size = 32405
Performance for container BoundedConcurrentHashMap
Average get ops/ms 13
Average put ops/ms 5
Average remove ops/ms 2
Size = 29214
Performance for container CacheImpl
Average get ops/ms 14
Average put ops/ms 2
Average remove ops/ms 4
Size = 23887
Performance for container ConcurrentHashMap
Average get ops/ms 235
Average put ops/ms 102
Average remove ops/ms 115
Size = 27823
Performance for container SynchronizedMap
Average get ops/ms 55
Average put ops/ms 48
Average remove ops/ms 53
Size = 39650
On 11-06-26 8:44 PM, Sanne Grinovero wrote:
> Hi Vladimir,
> this looks very interesting, I couldn't resist starting some runs.
>
> I noticed the test is quite quick to finish, so I raised my
> LOOP_FACTOR to 200, but it still finishes in a few minutes, which is
> not long enough IMHO for these numbers to be really representative.
> I've also noticed that the test has a "warmup" boolean, but it's not
> being used, while I think it should be.
> Also, the three different operations of course need to happen all
> together to properly "shuffle" the data, but when interpreting these
> numbers we have to consider that some operations will finish before
> the others, so some of the results achieved by the remaining
> operations are not disturbed by the competing operations. Maybe it
> would be more interesting to have the three operations run in a
> predictable sequence, or have them all work as fast as they can for
> a given timebox instead of "until the keys are finished"?
>
> Here were my results, in case any comparison is useful. To conclude
> something from this data, it looks to me that something is indeed
> going wrong with the put operations for LIRS. Also, adding more
> writers worsens the scenario for LIRS significantly.
>
> When running the test with "doTest(map, 28, 8, 8, true, testName);"
> (adding more put and remove operations), the synchronizedMap is
> significantly faster than the CacheImpl.
>
> Performance for container BoundedConcurrentHashMap
> Average get ops/ms 1711
> Average put ops/ms 63
> Average remove ops/ms 1108
> Size = 480
> Performance for container BoundedConcurrentHashMap
> Average get ops/ms 1851
> Average put ops/ms 665
> Average remove ops/ms 1199
> Size = 463
> Performance for container CacheImpl
> Average get ops/ms 349
> Average put ops/ms 213
> Average remove ops/ms 250
> Size = 459
> Performance for container ConcurrentHashMap
> Average get ops/ms 776
> Average put ops/ms 611
> Average remove ops/ms 606
> Size = 562
> Performance for container SynchronizedMap
> Average get ops/ms 244
> Average put ops/ms 222
> Average remove ops/ms 236
> Size = 50000
>
> Now with doTest(map, 28, 8, 8, true, testName):
>
> Performance for container Infinispan Cache implementation
> Average get ops/ms 71
> Average put ops/ms 47
> Average remove ops/ms 51
> Size = 474
> Performance for container ConcurrentHashMap
> Average get ops/ms 606
> Average put ops/ms 227
> Average remove ops/ms 246
> Size = 49823
> Performance for container synchronizedMap
> Average get ops/ms 175
> Average put ops/ms 141
> Average remove ops/ms 160
>
> At first glance it doesn't look very nice, but these runs were not
> long enough at all.
>
> Sanne
>
> 2011/6/26 Vladimir Blagojevic<vblagoje(a)redhat.com>:
>> Hi,
>>
>> I would like to review the recent DataContainer performance claims,
>> and I was wondering if any of you have some spare cycles to help me out.
>>
>> I've added a test [1] to MapStressTest that measures and contrasts
>> single-node Cache performance against synchronized HashMap,
>> ConcurrentHashMap and BCHM variants.
>>
>>
>> Performance for container BoundedConcurrentHashMap (LIRS)
>> Average get ops/ms 1063
>> Average put ops/ms 101
>> Average remove ops/ms 421
>> Size = 480
>> Performance for container BoundedConcurrentHashMap (LRU)
>> Average get ops/ms 976
>> Average put ops/ms 306
>> Average remove ops/ms 521
>> Size = 463
>> Performance for container CacheImpl
>> Average get ops/ms 94
>> Average put ops/ms 61
>> Average remove ops/ms 65
>> Size = 453
>> Performance for container ConcurrentHashMap
>> Average get ops/ms 484
>> Average put ops/ms 326
>> Average remove ops/ms 376
>> Size = 49870
>> Performance for container SynchronizedMap
>> Average get ops/ms 96
>> Average put ops/ms 85
>> Average remove ops/ms 96
>> Size = 49935
>>
>>
>> I ran MapStressTest on my MacBook Air, with 32 threads continually
>> doing get/put/remove ops. For more details see [1]. If my
>> measurements are correct, a Cache instance seems to be capable of
>> roughly 220 ops per millisecond on my crappy hardware setup. As you
>> can see, the performance of the entire cache structure does not seem
>> to be much worse than a SynchronizedMap, which is great on one hand
>> but also leaves us some room for potential improvement, since
>> ConcurrentHashMap and BCHM seem to be substantially faster. I have
>> not tested the impact of having a cache store for passivation; I
>> will do that tomorrow/next week.
>>
>> Any comments/ideas going forward?
>>
>> [1] https://github.com/infinispan/infinispan/pull/404
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache