[infinispan-dev] Cachestores performance

Radim Vansa rvansa at redhat.com
Thu Jun 27 02:54:25 EDT 2013


Yep, write-through. LevelDB-JAVA used the FileChannelTable implementation (-Dleveldb.mmap), because mmapping is not implemented very well and causes JVM crashes (I believe that's because it calls non-public API via reflection - I've found a post from the Oracle JVM guys discouraging the particular trick it uses). After writing a record to the log it calls FileChannel.force(true), so the record should really be on disk by that moment.
I have not looked into the JNI implementation, but I expect the same behaviour.
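
For illustration, the durability guarantee on the write path boils down to something like this minimal sketch (a hypothetical class, not the actual LevelDB code):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DurableLog {
    private final FileChannel channel;

    public DurableLog(Path file) throws IOException {
        channel = FileChannel.open(file, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    // Appends one record and forces it to disk before returning.
    public void append(byte[] record) throws IOException {
        channel.write(ByteBuffer.wrap(record));
        channel.force(true); // true = also flush file metadata
    }
}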

By the way, I have updated [1] with numbers from running on more data (2 GB instead of 100 MB). I won't retype them here, so look there. The performance is much lower.
I may also try increasing the JVM heap size and testing with a bit more data.

Radim

[1] https://community.jboss.org/wiki/FileCacheStoreRedesign

----- Original Message -----
| From: "Erik Salter" <an1310 at hotmail.com>
| To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
| Sent: Wednesday, June 26, 2013 7:40:19 PM
| Subject: Re: [infinispan-dev] Cachestores performance
| 
| These were write-through cache stores, right?  And with LevelDB, this was
| through to the database file itself?
| 
| Erik
| 
| -----Original Message-----
| From: infinispan-dev-bounces at lists.jboss.org
| [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Radim Vansa
| Sent: Wednesday, June 26, 2013 11:24 AM
| To: infinispan -Dev List
| Subject: [infinispan-dev] Cachestores performance
| 
| Hi all,
| 
| as described in [1], I've created a comparison of cache store performance
| in stress tests.
| 
| All setups used a local cache, and the benchmark was executed via RadarGun
| (a version not yet merged into master [2]). I used 4 nodes just to get
| more data - each slave was completely independent of the others.
| 
| The first test measured preloading performance - the cache started and
| loaded 1 GB of data from the hard drive. Without a cache store the startup
| takes about 2-4 seconds; the average times for the cache stores are below
| (a sketch of the preload configuration follows the numbers):
| 
| FileCacheStore:        9.8 s
| KarstenFileCacheStore:  14 s
| LevelDB-JAVA impl.:   12.3 s
| LevelDB-JNI impl.:    12.9 s
| 
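| Preloading itself is just a matter of the cache store configuration,
| roughly like this (a sketch assuming the Infinispan 5.x fluent API -
| the exact builder methods are an assumption and may differ per store):
|
| import org.infinispan.configuration.cache.Configuration;
| import org.infinispan.configuration.cache.ConfigurationBuilder;
|
| public class PreloadConfig {
|     public static Configuration create() {
|         return new ConfigurationBuilder()
|             .loaders()
|                 .addFileCacheStore()
|                     .location("/tmp/store") // hypothetical location
|                     .preload(true)          // load all entries on startup
|             .build();
|     }
| }
|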
| IMO nothing special, all the times seem affordable. We didn't benchmark
| storing the data into the cache store as such, but for the record:
| FileCacheStore took about 44 minutes, Karsten about 38 seconds,
| LevelDB-JAVA 4 minutes and LevelDB-JNI 96 seconds. The units are right,
| that's minutes compared to seconds. But we all know that FileCacheStore
| is bloody slow.
| 
| The second test is a stress test (5 minutes, preceded by a 2-minute
| warmup) where each of 10 threads works on 10k entries with 1kB values
| (~100 MB in total). 20 % writes, 80 % reads, as usual. No eviction is
| configured, so the cache store serves only as persistent storage for the
| case of a crash. The stressor logic is sketched below, followed by the
| numbers.
| 
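| Each stressor thread does roughly the following (a minimal sketch with
| hypothetical names, not the actual RadarGun code):
|
| import java.util.Map;
| import java.util.concurrent.ThreadLocalRandom;
|
| public class Stressor implements Runnable {
|     private final Map<String, byte[]> cache; // the local Infinispan cache
|
|     Stressor(Map<String, byte[]> cache) { this.cache = cache; }
|
|     public void run() {
|         ThreadLocalRandom rnd = ThreadLocalRandom.current();
|         while (!Thread.currentThread().isInterrupted()) {
|             String key = "key" + rnd.nextInt(10_000);  // 10k entries
|             if (rnd.nextInt(100) < 20) {
|                 cache.put(key, new byte[1024]);        // 20 % writes, 1kB values
|             } else {
|                 cache.get(key);                        // 80 % reads
|             }
|         }
|     }
| }
|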
| FileCacheStore:         3.1M reads/s   112 writes/s  // one node managed only 2.96M reads/s and 75 writes/s
| KarstenFileCacheStore:  9.2M reads/s  226k writes/s  // yikes!
| LevelDB-JAVA impl.:     3.9M reads/s  5100 writes/s
| LevelDB-JNI impl.:      6.6M reads/s   14k writes/s  // one node managed only 3.9M/8.3k - about half of the others
| Without cache store:   15.5M reads/s  4.4M writes/s
| 
| The Karsten implementation pretty much rules here, for two reasons.
| First, it does not flush the data (it only calls
| RandomAccessFile.write()). The other trick is that it keeps the keys,
| and the offsets of the data values within the database file, in memory.
| That makes it definitely the best choice for this scenario, but it does
| not let the cache store scale, especially when the keys are big and the
| values small. Still, this performance boost is definitely worth
| pursuing - I could imagine caching the disk offsets in memory and
| querying a persistent index only when a record is missing from that
| cache, with part of the persistent index flushed asynchronously (the
| index can always be rebuilt during preloading after a crash). A sketch
| of the idea follows.
| 
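| A minimal sketch of that pair of tricks (a hypothetical class assuming
| an append-only data file; the real Karsten store differs in details):
|
| import java.io.IOException;
| import java.io.RandomAccessFile;
| import java.util.Map;
| import java.util.concurrent.ConcurrentHashMap;
|
| public class OffsetIndexStore {
|     private final RandomAccessFile file;
|     // in-memory index: key -> offset of the value within the data file
|     private final Map<String, Long> offsets = new ConcurrentHashMap<>();
|
|     public OffsetIndexStore(String path) throws IOException {
|         this.file = new RandomAccessFile(path, "rw");
|     }
|
|     public synchronized void put(String key, byte[] value) throws IOException {
|         long offset = file.length();
|         file.seek(offset);
|         file.writeInt(value.length);
|         file.write(value);           // no flush/force - hence the speed
|         offsets.put(key, offset);
|     }
|
|     public synchronized byte[] get(String key) throws IOException {
|         Long offset = offsets.get(key);
|         if (offset == null) return null; // a persistent index could be
|                                          // queried here as a fallback
|         file.seek(offset);
|         byte[] value = new byte[file.readInt()];
|         file.readFully(value);
|         return value;
|     }
| }
|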
| The third test targets the scenario where there is more data to store
| than fits in memory - the stressors operated on 100k entries (~100 MB
| of data) but eviction was capped at 10k entries (9216 entries ended up
| in memory after the test ended). The eviction setup is sketched below,
| followed by the numbers.
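|
| The eviction cap corresponds roughly to this programmatic configuration
| (a sketch assuming the Infinispan 5.x fluent API; the actual test
| configuration may differ):
|
| import org.infinispan.configuration.cache.Configuration;
| import org.infinispan.configuration.cache.ConfigurationBuilder;
| import org.infinispan.eviction.EvictionStrategy;
|
| public class EvictionConfig {
|     public static Configuration create() {
|         return new ConfigurationBuilder()
|             // LRU is an assumption - the strategy used isn't stated above
|             .eviction().strategy(EvictionStrategy.LRU).maxEntries(10_000)
|             .build();
|     }
| }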
| 
| FileCacheStore:            750 reads/s         285 writes/s  // one node had only 524 reads/s and 213 writes/s
| KarstenFileCacheStore:    458k reads/s        137k writes/s
| LevelDB-JAVA impl.:        21k reads/s          9k writes/s  // somewhat varying performance
| LevelDB-JNI impl.:     13k-46k reads/s  6.6k-15.2k writes/s  // the performance varied a lot!
| 
| 100 MB of data is not much, but it takes so long to push into
| FileCacheStore that I won't use more unless we exclude this loser from
| the comparison :)
| 
| Radim
| 
| [1] https://community.jboss.org/wiki/FileCacheStoreRedesign
| [2] https://github.com/rvansa/radargun/tree/t_keygen
| 
| -----------------------------------------------------------
| Radim Vansa
| Quality Assurance Engineer
| JBoss Datagrid
| tel. +420532294559 ext. 62559
| 
| Red Hat Czech, s.r.o.
| Brno, Purkyňova 99/71, PSČ 612 45
| Czech Republic
| 
| 
| _______________________________________________
| infinispan-dev mailing list
| infinispan-dev at lists.jboss.org
| https://lists.jboss.org/mailman/listinfo/infinispan-dev
| 


