[infinispan-dev] Appending to file

Radim Vansa rvansa at redhat.com
Mon Jul 8 02:32:56 EDT 2013



----- Original Message -----
| From: "Galder Zamarreño" <galder at redhat.com>
| To: "infinispan-dev List" <infinispan-dev at lists.jboss.org>
| Sent: Friday, July 5, 2013 7:46:03 AM
| Subject: Re: [infinispan-dev] Appending to file
| 
| 
| On Jul 3, 2013, at 10:20 AM, Radim Vansa <rvansa at redhat.com> wrote:
| 
| > | > 
| > | > I've used three implementations: one simply synchronizing the access
| > | > and calling force(false) after each write (by default 1 kB). In the
| > | > second, threads cooperate: every thread puts its data into a queue
| > | > and waits for a short period of time - if its data is still in the
| > | > queue after that, it writes the whole queue to disk, flushes it and
| > | > wakes up the other waiting threads. The third implementation
| > | > (actually three flavours) used one spooler thread which polls the
| > | > queue, writes as much as it can to disk, flushes and notifies the
| > | > waiting threads.
| > | 
| > | ^ Hmmm, aren't options 2 and 3 different flavours of the async store
| > | rather than the file cache store? IOW, we want the file cache store to
| > | behave in such a way that it provides "reasonable" guarantees when
| > | store() finishes. We are aware that different guarantees are given
| > | depending on whether force is called or not, but adding queues/threads
| > | is the responsibility of the async store. If you really wanna compare
| > | apples with apples, you should be comparing the performance of:
| > | 
| > | A) async store enabled + option 1 (sync force call)
| > | B) option 2 (threads cooperating)
| > | C) option 3 (spooler)
| > | 
| > | We've done a lot of work to improve the performance of the async store
| > | and we're happy with its current performance numbers. Karsten helped
| > | hugely with that, so I really doubt any file cache store implementation
| > | should be reimplementing async store-like logic.
| > | 
| > 
| > Maybe I am missing something, but the async store (aka write-behind) does
| > not wait until the entry is written to the disk (and possibly flushed).
| > The aim of these two strategies is to reduce the number of flushes by
| > collating stores into a batch and flushing only once after several
| > stores, but the storing threads are not released until the flush happens.
| 
| Also, how do the numbers you got with your implementations compare with
| running the same tests with Karsten's cache stores, or LevelDB JNI?

I don't quite understand the question. These were separate microbenchmarks; the strategies were not implemented in the cache stores.
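For what it's worth, the spooler strategy (option 3) can be sketched roughly as below. All class and method names here are hypothetical, made up for illustration - this is not the benchmark code, just a minimal sketch of the idea: storing threads enqueue their data and block, while a single spooler thread drains the queue, writes the whole batch, calls force(false) once, and only then releases them.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the "spooler" strategy: one thread collates
// queued writes into a batch so force(false) is called once per batch
// instead of once per store.
public class SpoolerAppender implements AutoCloseable {

    // Each store request carries its payload and a latch the storing
    // thread blocks on until the batch containing it has been flushed.
    private static final class Request {
        final ByteBuffer data;
        final CountDownLatch flushed = new CountDownLatch(1);
        Request(ByteBuffer data) { this.data = data; }
    }

    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();
    private final FileChannel channel;
    private final Thread spooler;
    private volatile boolean running = true;

    public SpoolerAppender(Path file) throws IOException {
        channel = FileChannel.open(file, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        spooler = new Thread(this::spool, "spooler");
        spooler.start();
    }

    // Called by storing threads: enqueue the data and wait until the
    // spooler has written and flushed the batch containing it.
    public void append(byte[] bytes) throws InterruptedException {
        Request r = new Request(ByteBuffer.wrap(bytes));
        queue.put(r);
        r.flushed.await();
    }

    private void spool() {
        List<Request> batch = new ArrayList<>();
        try {
            while (running || !queue.isEmpty()) {
                Request first = queue.poll(100, TimeUnit.MILLISECONDS);
                if (first == null) continue;
                batch.add(first);
                queue.drainTo(batch);           // grab everything queued so far
                for (Request r : batch) {
                    while (r.data.hasRemaining()) channel.write(r.data);
                }
                channel.force(false);           // one flush for the whole batch
                for (Request r : batch) r.flushed.countDown();
                batch.clear();
            }
        } catch (IOException | InterruptedException e) {
            // a real implementation would propagate the failure to the
            // waiting threads instead of just releasing them
            for (Request r : batch) r.flushed.countDown();
        }
    }

    @Override
    public void close() throws Exception {
        running = false;   // the spooler drains remaining requests, then exits
        spooler.join();
        channel.close();
    }
}
```

A group-commit variant (option 2) differs only in who does the draining: instead of a dedicated thread, whichever storing thread times out first writes and flushes the whole queue on behalf of the others.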

Radim


