L1 consistency for transactional caches.
by Pedro Ruivo
Hi all,
simple question: what are the consistency guarantees that are supposed
to be ensured?
I have the following scenario (happened in a test case):
NonOwner: remote get key
BackupOwner: receives the remote get and replies (with the correct value)
BackupOwner: put in L1 the value
PrimaryOwner: [at the same time] is committing a transaction that will
update the key.
PrimaryOwner: receives the remote get after sending the commit, so the
invalidation for L1 is not sent to NonOwner.
The test finishes and I perform a check for the key value in all the
caches. The NonOwner returns the L1 cached value (==test fail).
IMO, this is a bug (or not) depending on what guarantees we provide.
wdyt?
Pedro
11 years, 3 months
L1 Data Container
by William Burns
All the L1 data for a DIST cache is stored in the same data container as
the actual distributed data itself. I wanted to propose breaking this out
so there is a separate data container for the L1 cache as compared to the
distributed data.
I thought of a few quick benefits/drawbacks:
Benefits:
1. L1 cache can be separately tuned - L1 maxEntries for example
2. L1 values will not cause eviction of real data
3. Would make https://issues.jboss.org/browse/ISPN-3229 an easy fix
4. Could add a new DataContainer implementation specific to L1 with
additional optimizations
5. Help with some concurrency issues with L1 without requiring wider
locking (such as locking a key for an entire ClusteredGet rpc call) -
https://issues.jboss.org/browse/ISPN-3197.
Drawbacks:
1. Would require, depending on configuration, an additional thread for
eviction
2. Users upgrading could end up with double the memory used due to 2 data containers
Both?:
1. Additional configuration available
a. Add maxEntries just like the normal data container (use the data
container size if not configured?)
b. Eviction wakeup timer? We could just reuse the task cleanup
frequency?
c. Eviction strategy? I would think the default data container's would
be sufficient.
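To make benefits 1 and 2 a bit more concrete, here is a minimal, self-contained sketch of the general idea - two independently bounded containers, so that L1 churn can never evict owned entries. It deliberately uses plain JDK types rather than the real DataContainer API, and all names are just for illustration:

import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration only (plain JDK types, not the Infinispan DataContainer API):
// owned entries and L1 entries live in two separately bounded LRU maps, so filling
// the L1 side can never evict owned data, and each side can be tuned on its own.
class SplitContainers<K, V> {
    private final Map<K, V> owned;
    private final Map<K, V> l1;

    SplitContainers(int ownedMaxEntries, int l1MaxEntries) {
        owned = lruMap(ownedMaxEntries);
        l1 = lruMap(l1MaxEntries);               // benefit 1: L1 bound tuned independently
    }

    private static <K, V> Map<K, V> lruMap(int maxEntries) {
        // access-ordered LinkedHashMap with a simple size bound; not thread-safe,
        // which is fine for a sketch but not for a real container
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    void putOwned(K key, V value) { owned.put(key, value); }

    void putL1(K key, V value) { l1.put(key, value); }    // benefit 2: cannot push out owned entries

    V get(K key) {
        V value = owned.get(key);
        return value != null ? value : l1.get(key);
    }
}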
I was wondering what you guys thought.
Thanks,
- Will
11 years, 3 months
Isolation level in Repeatable Read + remote get
by Pedro Ruivo
Hi guys,
re: ISPN-2840, ISPN-3235, ISPN-3236
short: transaction isolation in repeatable read
Dan came up with an idea (a good idea IMO) to slightly change the logic
of how entries are put in the context for transactional caches.
One of the Repeatable Read properties is that after a key is accessed, the
transaction should never see other concurrent transactions' values, even
if they commit first. As a result, we can optimize distributed mode by
only doing a remote get the first time a transaction accesses a key.
My idea (and the one implemented in the Pull Request) was adding a flag
to mark the entry as accessed. All future accesses to that key will not
try to fetch the data from the container nor from a remote location (we
have a small bug with the last one).
Dan's idea is simpler but requires some changes in the EntryFactory
logic. At this stage, we only put an entry in the transaction context if
the following conditions are met:
* the entry exists in the data container (i.e. the node is an owner or it is in L1)
* we put a RepeatableReadEntry with null value in the transaction context if
** the entry does not exist in the container but the node is an owner
** the entry does not exist in the data container but the command has
flags to skip the remote fetch (like IGNORE_RETURN_VALUES or
SKIP_REMOTE_LOOKUP). Of course the conditional commands need special
attention here.
Note: as usual, if the entry exists in the context, then nothing is done.
At the TxDistributionInterceptor level, the check to see if the remote
get should be done is as simple as checking if lookupEntries(k) == null.
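To double-check I understood it right, here is a toy model of the rule. Plain maps and booleans stand in for the real EntryFactory, context and data container - none of the names below are the actual API:

import java.util.Map;

// Toy model of the proposed wrapping rule; plain maps stand in for the transaction
// context and the data container, and NULL_MARKER stands in for a RepeatableReadEntry
// with a null value. None of this is the real EntryFactory code.
final class WrapRule {
    static final Object NULL_MARKER = new Object();

    static void wrapForRead(Map<Object, Object> txContext, Map<Object, Object> dataContainer,
                            Object key, boolean isOwner, boolean skipRemoteFetch) {
        if (txContext.containsKey(key)) {
            return;                              // already wrapped: repeatable read is preserved
        }
        Object local = dataContainer.get(key);   // owned copy or L1 copy
        if (local != null) {
            txContext.put(key, local);           // entry exists locally -> wrap it
        } else if (isOwner || skipRemoteFetch) {
            txContext.put(key, NULL_MARKER);     // wrap a null-valued entry
        }
        // otherwise nothing goes into the context; the distribution interceptor sees
        // "not in context" and performs the remote get exactly once per transaction
    }
}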
Dan, if I said something wrong let me know. If I was not clear at some
point let me know too.
Anyone see any issue with this approach?
Any other suggestions?
Cheers,
Pedro Ruivo
11 years, 3 months
RemoteCache vs BasicCache
by Tristan Tarrant
Dear all,
during my latest destructive commits, I have liberated
infinispan-client-hotrod from infinispan-core.
One of the things I did was remove the inheritance of RemoteCache from
BasicCache, since my argument was that our base API contract was the
ConcurrentMap interface and that would suffice, since it implied that a
remote Cache could match a core Cache in functionality (which is
somewhat true). Now, I'm not convinced it was a good choice after all,
since there are indeed a ton of common methods (the async API for one).
I would like people's opinion on the above, and propose one of the
following:
1. we leave it as it is
2. we move org.infinispan.api to infinispan-commons and make RemoteCache
inherit from BasicCache
3. we go even further and split the concept of BasicCache into multiple
interfaces: AsyncCache, TransactionalCache, QueryableCache, etc. and add
them to RemoteCache as we fill in the blanks, since we are aiming at
feature parity. This could also mix well with the idea of having the
JCache API as our public API.
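As a strawman for option 3, something along these lines - the interface and method names are purely hypothetical, and the real split would need to match whatever methods we actually share:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentMap;

// Hypothetical shape of option 3: capability interfaces that both embedded and remote
// caches can pick up as they reach feature parity. Names and signatures are illustrative.
interface AsyncCache<K, V> {
    CompletableFuture<V> putAsync(K key, V value);
    CompletableFuture<V> getAsync(K key);
}

interface TransactionalCache {
    void beginBatch();
    void endBatch(boolean commit);
}

interface BasicCache<K, V> extends ConcurrentMap<K, V>, AsyncCache<K, V> {
}

// The remote cache would then inherit only the capabilities it actually supports,
// picking up more interfaces over time.
interface RemoteCache<K, V> extends BasicCache<K, V> {
}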
Tristan
11 years, 3 months
MongoDB cachestore 5.3.0.Final now in jboss maven repo
by Tristan Tarrant
Dear all,
I have released to maven the missing MongoDB cachestore for 5.3.0.Final.
I have not re-released the zip distributions to avoid confusion, but
let's make sure it is included in future releases.
Tristan
11 years, 3 months
Cachestores performance
by Radim Vansa
Hi all,
following [1] I've created a comparison of cachestore performance in stress tests.
All setups used local-cache, benchmark was executed via Radargun (actually version not merged into master yet [2]). I've used 4 nodes just to get more data - each slave was absolutely independent of the others.
First test was preloading performance - the cache started and tried to load 1GB of data from the hard drive. Without a cachestore the startup takes about 2 - 4 seconds; average numbers for the cachestores are below:
FileCacheStore: 9.8 s
KarstenFileCacheStore: 14 s
LevelDB-JAVA impl.: 12.3 s
LevelDB-JNI impl.: 12.9 s
IMO nothing special, all times seem affordable. We don't benchmark storing the data into the cachestore exactly, but here FileCacheStore took about 44 minutes, while Karsten took about 38 seconds, LevelDB-JAVA 4 minutes and LevelDB-JNI 96 seconds. The units are right: minutes compared to seconds. But we all know that FileCacheStore is bloody slow.
Second test is stress test (5 minutes, preceded by 2 minute warmup) where each of 10 threads works on 10k entries with 1kB values (~100 MB in total). 20 % writes, 80 % reads, as usual. No eviction is configured, therefore the cache-store works as a persistent storage only for case of crash.
FileCacheStore: 3.1M reads/s 112 writes/s // on one node the performance was only 2.96M reads/s 75 writes/s
KarstenFileCacheStore: 9.2M reads/s 226k writes/s // yikes!
LevelDB-JAVA impl.: 3.9M reads/s 5100 writes/s
LevelDB-JNI impl.: 6.6M reads/s 14k writes/s // on one node the performance was 3.9M/8.3k - about half of the others
Without cache store: 15.5M reads/s 4.4M writes/s
The Karsten implementation pretty much rules here, for two reasons. First of all, it does not flush the data (it calls only RandomAccessFile.write()). The other cheat is that it keeps in memory the keys and the offsets of the data values in the database file. Therefore, it's definitely the best choice for this scenario, but it does not allow the cache-store to scale, especially in cases where the keys are big and the values small. However, this performance boost is definitely worth checking out - I could think of caching the disk offsets in memory and querying a persistent index only in case of a missing record, with part of the persistent index flushed asynchronously (the index can always be rebuilt during preloading in case of a crash).
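For illustration, a bare-bones sketch of that kind of design (an in-memory key -> offset index in front of an append-only value file). This is not the Karsten code, just the shape of the idea, and it skips flushing, the persistent index and compaction entirely:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of the "keys and offsets in memory" approach: values live in an append-only
// file, reads seek directly to the remembered offset. No force(), no persistent index,
// no compaction - those are exactly the parts that make the real thing hard.
class OffsetIndexedStore {
    private final RandomAccessFile file;
    private final Map<String, long[]> index = new ConcurrentHashMap<>(); // key -> {offset, length}

    OffsetIndexedStore(String path) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
    }

    synchronized void put(String key, byte[] value) throws IOException {
        long offset = file.length();
        file.seek(offset);
        file.write(value);                        // note: no flush - one reason it is so fast
        index.put(key, new long[] { offset, value.length });
    }

    synchronized byte[] get(String key) throws IOException {
        long[] location = index.get(key);
        if (location == null) return null;        // a persistent index could be consulted here
        byte[] buffer = new byte[(int) location[1]];
        file.seek(location[0]);
        file.readFully(buffer);
        return buffer;
    }
}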
The third test should have tested the scenario with more data to be stored than fits in memory - therefore, the stressors operated on 100k entries (~100 MB of data) but eviction was set to 10k entries (9216 entries ended up in memory after the test ended).
FileCacheStore: 750 reads/s 285 writes/s // one node had only 524 reads and 213 writes per second
KarstenFileCacheStore: 458k reads/s 137k writes/s
LevelDB-JAVA impl.: 21k reads/s 9k writes/s // a bit varying performance
LevelDB-JNI impl.: 13k-46k reads/s 6.6k-15.2k writes/s // the performance varied a lot!
100 MB of data is not much, but it takes so long to push it into FileCacheStore that I won't use more unless we exclude this loser from the comparison :)
Radim
[1] https://community.jboss.org/wiki/FileCacheStoreRedesign
[2] https://github.com/rvansa/radargun/tree/t_keygen
-----------------------------------------------------------
Radim Vansa
Quality Assurance Engineer
JBoss Datagrid
tel. +420532294559 ext. 62559
Red Hat Czech, s.r.o.
Brno, Purkyňova 99/71, PSČ 612 45
Czech Republic
11 years, 3 months
New bundler performance
by Radim Vansa
Hi,
I was going through the commits (running tests on each of them) to find the performance regression we've recently discovered, and it seems that our test (replicated udp non-transactional stress test on 4 nodes) experiences a serious regression at the commit
ISPN-2848 Use the new bundling mechanism from JGroups 3.3.0 (73da108cdcf9db4f3edbcd6dbda6938d6e45d148)
The performance drops from about 7800 writes/s to 4800 writes/s, and from 1.5M reads/s to 1.2M reads/s (having slower reads in replicated mode is really odd).
It seems that the bundler is not really as good as we hoped - it may be the bottleneck. I have tried to create another bundler which shares the queue between 4 instances of TransportQueueBundler (so 4 threads are actually sending the messages, which all go into one queue) and the performance mildly improved - to 5200 writes/s - but that's not enough.
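For clarity, roughly what I mean by sharing the queue (a simplified stand-alone model, not the actual JGroups TransportQueueBundler code; the message type and the transport send are faked):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the experiment: N sender threads drain a single shared queue and
// send batches, instead of one bundler thread doing all the sending. byte[] and sendBatch
// are stand-ins for the real message type and transport call.
class SharedQueueBundler {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    SharedQueueBundler(int senderThreads) {
        for (int i = 0; i < senderThreads; i++) {
            Thread sender = new Thread(this::sendLoop, "bundler-sender-" + i);
            sender.setDaemon(true);
            sender.start();
        }
    }

    void queueMessage(byte[] message) throws InterruptedException {
        queue.put(message);
    }

    private void sendLoop() {
        List<byte[]> batch = new ArrayList<>();
        try {
            while (true) {
                batch.add(queue.take());          // block for at least one message
                queue.drainTo(batch);             // then grab whatever else is queued
                sendBatch(batch);                 // stand-in for the real bundled send
                batch.clear();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void sendBatch(List<byte[]> batch) { /* transport send would go here */ }
}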
Radim
Note: you may have seen my conversation with Pedro Ruivo on IRC about the bundler several days ago; at that time our configuration still used the old bundler. This was fixed, but as I had not built Infinispan properly (something got cached), I did not notice the regression between these builds.
-----------------------------------------------------------
Radim Vansa
Quality Assurance Engineer
JBoss Datagrid
tel. +420532294559 ext. 62559
Red Hat Czech, s.r.o.
Brno, Purkyňova 99/71, PSČ 612 45
Czech Republic
11 years, 3 months
Re: [infinispan-dev] [infinispan-internal] LevelDB performance testing
by Galder Zamarreño
Putting the Infinispan-Development list back on to get others' thoughts...
On Jun 24, 2013, at 11:57 AM, Radim Vansa <rvansa(a)redhat.com> wrote:
>
>
> ----- Original Message -----
> | From: "Galder Zamarreño" <galder(a)redhat.com>
> | To: "Radim Vansa" <rvansa(a)redhat.com>
> | Sent: Monday, June 24, 2013 10:45:32 AM
> | Subject: Re: [infinispan-internal] LevelDB performance testing
> |
> | Hey Radim,
> |
> | Thanks a lot for running these tests. Comments inline...
> |
> | On Jun 21, 2013, at 3:33 PM, Radim Vansa <rvansa(a)redhat.com> wrote:
> |
> | > Hi all,
> | >
> | > I've got some results from LevelDB performance testing.
> | >
> | > Use case description:
> | > - writes with unique keys about 100 bytes in size, no value.
> | > - reads rare
> | >
> | > Infinispan configuration:
> | > - eviction enabled (no passivation), 10000 entries is memory allowed (we
> | > don't need to occupy the memory if almost all operations are writes to
> | > unique keys)
> | > - synchronous mode (test have not revealed any significant performance gain
> | > using async mode) - distributed cache with 2 owners and 40 segments
> | > - write-behind cache store with 4 threads
> |
> | ^ Hmmmm, why write-behind? If you really wanna test the performance of
> | LevelDB, and you want to check how fast it writes or deletes, how are you
> | gonna measure that if you're using an async store? Read operations, assuming
> | that data is evicted, will go all the way to the cache store, and that's
> | really the only numbers you can get out this test…
>
> Because I want to handle stuff ASAP. I don't really care how long each operation takes, I am just interested in the throughput I can reach. And this should be best with write-behind, because we're not waiting for the data to be written down, right?
^ Indeed, but then I'm not sure you're really getting throughput numbers based on what the cache store implementation can do. In other words, with this test you're testing the async store logic, to see how much throughput you can handle queuing up those modifications… unless you have a validation phase that times how long it takes for the cache store to contain all the data it is supposed to have. Then yes, you'd be measuring how long it takes for all those queued-up modifications to be applied, which is a factor of many things, one of which is the speed of the cache store implementation in applying those changes at the persistence layer.
>
> |
> | Assuming the levelDB cache store is not shared, why test a 4-node cluster? If
> | what you're trying to figure out is how fast LevelDB is as a unshared cache
> | store (assuming this is going to be used instead of the stock FCS…), then
> | you could have just the tests against a local cache, right?
> |
> | > Test description:
> | > - we use 100-byte keys with key-id encoded in the first 8 bytes and the
> | > rest is filled with random bytes (equal for all keys).
> | > - value is an empty byte array
> |
> | ^ Empty? That's not very reallistic, and you'd hardly stress how the cache
> | store deals with differently sized data. I'd say at least 1kb would be
> | minimum?
>
> That's exactly what the customer asked for - usually we really use this 1kB, but his scenario probably uses just some dummy tombstones for most of data.
Fair enough.
>
> |
> | > - 99 % of requests are PUTs (without any flags such as
> | > IGNORE_RETURN_VALUES), 1 % are GETs
> | > - we do not pre-load any keys to the cache, everything starts empty and
> | > each PUT operation adds a new key
> | > - GET operation reads one of the keys that is already in the cache, with
> | > uniform distribution of probability
> | > - 4-node JDG cluster stressed from 3 machines
> | >
> | >> From client-stress test (200 new clients added every 3 minutes) we have
> | >> found out optimal value of 3500 clients (in total) which led to about 33k
> | >> operations per second.
> | > Soak test was executed to test higher amount of entries - the test was
> | > running with all 3500 clients right after the startup, executing requests
> | > and therefore increasing the size of DB.
> | >
> | > During first 15 minutes a stable throughput of 33k operations per second
> | > was experienced, adding 2M new keys (200 MB) every minute. iostat shows
> | > about 22 MB/s reads and 11 MB/s writes, 550 read and 130 write (2600
> | > merged) requests to disk.
> | > Then the throughput started decreasing, during following 35 minutes it
> | > dropped to 25k operations per second. No exceptions have occurred in this
> | > period. By the end iostat shows 29 MB/s reads, 10 MB/s writes (up to 750
> | > w/s (2400 wrqm/s), 115 r/s.
> | >
> | >
> | > After these 50 minutes of working test error messages started to appear on
> | > one of the nodes:
> | >
> | > 08:20:07,009 WARN [org.infinispan.loaders.decorators.AsyncStore]
> | > (AsyncStoreProcessor-testCache-0) ISPN000053: Unable to process some async
> | > modifications after 3 retries!
> | > and
> | > 08:20:09,720 ERROR
> | > [org.infinispan.interceptors.InvocationContextInterceptor]
> | > (HotRodServerWorker-84) ISPN000136: Execution error:
> | > org.infinispan.loaders.CacheLoaderException:
> | > org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException:
> | > java.io.FileNotFoundException: /tmp/leveldb/testCache/data/026662.sst (No
> | > space left on device)
> | >
> | > At this moment LevelDB was occupying about 2 GB of 16 GB available space,
> | > there's plenty of free inodes and db directory has about 4000 files inside
> | > (1100 with some content, rest of them empty).
> | > 2 minutes later JVM with one of the nodes crashed (see
> | > https://bugzilla.redhat.com/show_bug.cgi?id=976664). The test was
> | > terminated as the cluster was practically not responding and did not met
> | > performance criteria.
> |
> | ^ What was the performance criteria?
>
> Sorry for bottling this up - 90 % of requests should have response time < 500 ms. This is a very liberal setting, stopping the test only when things really screw up.
>
> Radim
>
> |
> | >
> | > I believe that exact performance results after things got wrong are not
> | > necessary to be presented.
> | >
> | > Radim
> | >
> | > -----------------------------------------------------------
> | > Radim Vansa
> | > Quality Assurance Engineer
> | > JBoss Datagrid
> | > tel. +420532294559 ext. 62559
> | >
> | > Red Hat Czech, s.r.o.
> | > Brno, Purkyňova 99/71, PSČ 612 45
> | > Czech Republic
> | >
> | >
> |
> |
> | --
> | Galder Zamarreño
> | galder(a)redhat.com
> | twitter.com/galderz
> |
> | Project Lead, Escalante
> | http://escalante.io
> |
> | Engineer, Infinispan
> | http://infinispan.org
> |
> |
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 3 months
Re: [infinispan-dev] [infinispan-internal] PutMapCommand is ineffective
by Manik Surtani
Agreed. It does sound pretty heavy. We should investigate a better implementation - the two approaches you suggest both sound good, could you create a JIRA for this?
Adding infinispan-dev, that's the correct place to discuss this.
Cheers
Manik
On 7 Jun 2013, at 13:39, Radim Vansa <rvansa(a)redhat.com> wrote:
> Hi,
>
> recently I was looking into the performance of PutMapCommand and what's in fact going on under the hood. From what I've seen (not from the code but from message flow analysis), in non-transactional synchronous mode this happens:
>
> A wants to execute PutMapCommand with many keys - let's assume that in fact the keys span all nodes in the cluster.
>
> 1. A locks all local keys and sends via unicast a message to each primary owner of some of the keys in the map
> 2. A sends unicast message to each node, requesting the operation
> 3. Each node locks its keys and sends multicast message to ALL other nodes in the cluster
> This happens N - 1 times:
> 4. Each node receives the multicast message, (updates the non-primary segments) and sends reply back to the sender of mcast message.
> 5. The primary owners send confirmation back to A.
>
> Let's compute how many messages are here received - it's
> N - 1 // A's request
> (N - 1) * (N - 1) // multicast message received
> (N - 1) * (N - 1) // reply to the multicast message received
> N - 1 // response to A
> That's 2*N^2 - 2*N messages. In case nobody needs flow control replenishments, nothing is lost etc. I don't like that ^2 exponent - does not look like the cluster is really scaling. It could be fun to see execute it on 64-node cluster, spawning thousands of messages just for one putAll (with, say 100 key-value pairs - I don't want to compute the exact probability on how many nodes would such set of keys have primary segments).
>
> Could the requestor orchestrate the whole operation? The idea is that all messages are sent only between requestor and other nodes, never between the other nodes. The requestor would lock the primary keys by one set of messages (waiting for reply), updating the non-primaries by another set of messages and then again unlocking all primaries by last message.
> The set of messages could be either unicast with selected keys only for the recipient, or multicast with whole map - rationalization which one is actually better is subject to performance test.
> This results in 6*N - 6 messages (or 5*N - 5 if the last message wouldn't require the reply). You can easily see when 5*(N - 1) is better than 2*N*(N - 1).
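> (To put numbers on it: for N = 64 that is 2*64^2 - 2*64 = 8064 messages for a single putAll, versus 6*64 - 6 = 378, or 5*64 - 5 = 315, with the requestor-orchestrated variant.)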
> Or is this too similar to transactions with multiple keys?
>
> I think that with current implementation, the putAll operation should be discouraged as it does not provide better performance than multiple put (and in terms of atomicity it's probably not much better either).
>
> WDYT?
>
> Radim
>
> -----------------------------------------------------------
> Radim Vansa
> Quality Assurance Engineer
> JBoss Datagrid
> tel. +420532294559 ext. 62559
>
> Red Hat Czech, s.r.o.
> Brno, Purkyňova 99/71, PSČ 612 45
> Czech Republic
>
>
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
11 years, 3 months
Appending to file
by Radim Vansa
Hi,
I was playing with efficient ways to write to an append-only log and I have some results below.
I've used three implementations: the first simply synchronizes the access and calls force(false) after each write (by default 1kB). The second has the threads cooperate - every thread puts its data into a queue and waits for a short period of time; if its data is still in the queue after that, it writes the whole queue to disk, flushes it and wakes up the other waiting threads. The third implementation (actually three flavours) uses one spooler thread which polls the queue, writes as much as it can to disk, flushes and notifies the waiting threads. The flavours differ in how a thread waits for the spooler to write its data - either spinning on a volatile counter, checking this counter while yielding, or waiting via Object.wait()/Object.notify().
According to the results, the spooler variant with yielding proved to be the best, with 20 threads even 10x faster than the basic variant (which simply does not scale). As I've experimented more, it scales even further - with 80 threads I have achieved 435k appends, that's 14.16 MB/s. For better context, the basic variant without flushing on 80 threads would handle 1.67M appends = 54.26 MB/s, the spooler 660k appends = 21.48 MB/s (curiously, the non-flushing spooler with wait/notify would do 1.19M appends). I do not present the whole table for higher numbers as I don't think that many threads would really access the disk concurrently.
Note: the limit in WaitingAppendLog says how large the queue must be for a thread not to wait but to begin writing the data immediately. Setting this to the thread count obviously provides the best performance, but in practice the thread number cannot be guessed so easily - therefore, I provide results for an exact guess, half the number and twice the number of threads.
Note2: in SingleWriterAppendLog the limit is not that important - it just limits the amount of data written in one go before a flush.
Note3: on my local machine the spinning version was the slowest of the three, but still faster than the simple variant.
Source code of the test is in attachment.
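The yielding spooler variant works roughly like this (a simplified sketch of the idea, not the attached test code; class and field names are made up):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified sketch: appenders enqueue a record and yield until the spooler has made it
// durable; a single spooler thread writes everything currently queued, calls force(false)
// once per batch, and then marks all records of the batch as durable.
class YieldingSpoolerLog {
    private static final class Record {
        final ByteBuffer data;
        volatile boolean durable;
        Record(byte[] bytes) { data = ByteBuffer.wrap(bytes); }
    }

    private final FileChannel channel;
    private final ConcurrentLinkedQueue<Record> queue = new ConcurrentLinkedQueue<>();

    YieldingSpoolerLog(String path) throws IOException {
        channel = FileChannel.open(Paths.get(path),
              StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        Thread spooler = new Thread(this::spool, "spooler");
        spooler.setDaemon(true);
        spooler.start();
    }

    void append(byte[] bytes) {
        Record record = new Record(bytes);
        queue.add(record);
        while (!record.durable) {
            Thread.yield();                      // the "yielding" flavour of waiting
        }
    }

    private void spool() {
        List<Record> batch = new ArrayList<>();
        try {
            while (true) {
                Record record = queue.poll();
                if (record == null) { Thread.yield(); continue; }
                while (record != null) {         // write everything currently in the queue
                    channel.write(record.data);
                    batch.add(record);
                    record = queue.poll();
                }
                channel.force(false);            // one flush covers the whole batch
                for (Record written : batch) {
                    written.durable = true;      // release the waiting appenders
                }
                batch.clear();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}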
Radim
Executed on 8-core AMD Opteron(tm) Processor 6128
4 threads, 16988 appends ( 16988 flushes) in 30006.30 ms to SimpleAppendLog
8 threads, 17108 appends ( 17108 flushes) in 30018.71 ms to SimpleAppendLog
12 threads, 17139 appends ( 17139 flushes) in 30029.29 ms to SimpleAppendLog
16 threads, 17280 appends ( 17280 flushes) in 30033.49 ms to SimpleAppendLog
20 threads, 17511 appends ( 17511 flushes) in 30043.29 ms to SimpleAppendLog
4 threads, 37956 appends ( 9489 flushes) in 30002.46 ms to WaitingAppendLog{limit=4, waitTime=1}
4 threads, 29536 appends ( 7384 flushes) in 30001.88 ms to WaitingAppendLog{limit=8, waitTime=1}
4 threads, 29530 appends ( 14765 flushes) in 30001.27 ms to WaitingAppendLog{limit=2, waitTime=1}
4 threads, 39280 appends ( 9820 flushes) in 30001.71 ms to WaitingAppendLog{limit=4, waitTime=2}
4 threads, 23720 appends ( 5930 flushes) in 30002.85 ms to WaitingAppendLog{limit=8, waitTime=2}
4 threads, 29302 appends ( 14651 flushes) in 30001.53 ms to WaitingAppendLog{limit=2, waitTime=2}
4 threads, 39620 appends ( 9905 flushes) in 30004.34 ms to WaitingAppendLog{limit=4, waitTime=4}
4 threads, 16600 appends ( 4150 flushes) in 30002.58 ms to WaitingAppendLog{limit=8, waitTime=4}
4 threads, 29234 appends ( 14617 flushes) in 30002.93 ms to WaitingAppendLog{limit=2, waitTime=4}
8 threads, 71457 appends ( 8933 flushes) in 30004.31 ms to WaitingAppendLog{limit=8, waitTime=1}
8 threads, 54152 appends ( 6769 flushes) in 30003.77 ms to WaitingAppendLog{limit=16, waitTime=1}
8 threads, 38104 appends ( 9526 flushes) in 30003.66 ms to WaitingAppendLog{limit=4, waitTime=1}
8 threads, 70248 appends ( 8781 flushes) in 30002.34 ms to WaitingAppendLog{limit=8, waitTime=2}
8 threads, 44240 appends ( 5530 flushes) in 30003.16 ms to WaitingAppendLog{limit=16, waitTime=2}
8 threads, 38788 appends ( 9697 flushes) in 30002.56 ms to WaitingAppendLog{limit=4, waitTime=2}
8 threads, 68456 appends ( 8557 flushes) in 30002.56 ms to WaitingAppendLog{limit=8, waitTime=4}
8 threads, 32432 appends ( 4054 flushes) in 30008.19 ms to WaitingAppendLog{limit=16, waitTime=4}
8 threads, 38164 appends ( 9541 flushes) in 30003.94 ms to WaitingAppendLog{limit=4, waitTime=4}
12 threads, 97084 appends ( 8091 flushes) in 30005.87 ms to WaitingAppendLog{limit=12, waitTime=1}
12 threads, 78684 appends ( 6557 flushes) in 30005.24 ms to WaitingAppendLog{limit=24, waitTime=1}
12 threads, 51462 appends ( 8577 flushes) in 30002.70 ms to WaitingAppendLog{limit=6, waitTime=1}
12 threads, 100200 appends ( 8350 flushes) in 30004.20 ms to WaitingAppendLog{limit=12, waitTime=2}
12 threads, 66283 appends ( 5524 flushes) in 30005.82 ms to WaitingAppendLog{limit=24, waitTime=2}
12 threads, 52134 appends ( 8689 flushes) in 30003.80 ms to WaitingAppendLog{limit=6, waitTime=2}
12 threads, 95885 appends ( 7991 flushes) in 30007.94 ms to WaitingAppendLog{limit=12, waitTime=4}
12 threads, 47700 appends ( 3975 flushes) in 30009.75 ms to WaitingAppendLog{limit=24, waitTime=4}
12 threads, 51822 appends ( 8637 flushes) in 30003.71 ms to WaitingAppendLog{limit=6, waitTime=4}
16 threads, 126192 appends ( 7887 flushes) in 30005.76 ms to WaitingAppendLog{limit=16, waitTime=1}
16 threads, 104800 appends ( 6550 flushes) in 30002.69 ms to WaitingAppendLog{limit=32, waitTime=1}
16 threads, 68168 appends ( 8521 flushes) in 30008.17 ms to WaitingAppendLog{limit=8, waitTime=1}
16 threads, 119643 appends ( 7478 flushes) in 30005.76 ms to WaitingAppendLog{limit=16, waitTime=2}
16 threads, 84576 appends ( 5286 flushes) in 30004.32 ms to WaitingAppendLog{limit=32, waitTime=2}
16 threads, 70368 appends ( 8796 flushes) in 30003.51 ms to WaitingAppendLog{limit=8, waitTime=2}
16 threads, 119486 appends ( 7468 flushes) in 30005.89 ms to WaitingAppendLog{limit=16, waitTime=4}
16 threads, 62452 appends ( 3904 flushes) in 30007.32 ms to WaitingAppendLog{limit=32, waitTime=4}
16 threads, 67912 appends ( 8489 flushes) in 30004.71 ms to WaitingAppendLog{limit=8, waitTime=4}
20 threads, 139788 appends ( 6990 flushes) in 30005.29 ms to WaitingAppendLog{limit=20, waitTime=1}
20 threads, 120999 appends ( 6050 flushes) in 30007.57 ms to WaitingAppendLog{limit=40, waitTime=1}
20 threads, 80130 appends ( 8013 flushes) in 30004.85 ms to WaitingAppendLog{limit=10, waitTime=1}
20 threads, 140020 appends ( 7001 flushes) in 30008.80 ms to WaitingAppendLog{limit=20, waitTime=2}
20 threads, 101877 appends ( 5094 flushes) in 30006.94 ms to WaitingAppendLog{limit=40, waitTime=2}
20 threads, 78710 appends ( 7871 flushes) in 30005.05 ms to WaitingAppendLog{limit=10, waitTime=2}
20 threads, 150128 appends ( 7507 flushes) in 30009.89 ms to WaitingAppendLog{limit=20, waitTime=4}
20 threads, 77120 appends ( 3856 flushes) in 30009.32 ms to WaitingAppendLog{limit=40, waitTime=4}
20 threads, 78450 appends ( 7845 flushes) in 30007.46 ms to WaitingAppendLog{limit=10, waitTime=4}
4 threads, 40707 appends ( 17046 flushes) in 30002.08 ms to SingleWriterAppendLog.Spinning{limit=4}
4 threads, 42595 appends ( 20804 flushes) in 30000.96 ms to SingleWriterAppendLog.Yielding{limit=4}
4 threads, 28979 appends ( 14492 flushes) in 30001.37 ms to SingleWriterAppendLog.Waiting{limit=4}
8 threads, 6252 appends ( 1255 flushes) in 30011.34 ms to SingleWriterAppendLog.Spinning{limit=8}
8 threads, 85144 appends ( 17859 flushes) in 30002.90 ms to SingleWriterAppendLog.Yielding{limit=8}
8 threads, 38323 appends ( 9583 flushes) in 30006.94 ms to SingleWriterAppendLog.Waiting{limit=8}
12 threads, 7241 appends ( 880 flushes) in 30026.81 ms to SingleWriterAppendLog.Spinning{limit=12}
12 threads, 126233 appends ( 17027 flushes) in 30003.59 ms to SingleWriterAppendLog.Yielding{limit=12}
12 threads, 55867 appends ( 9314 flushes) in 30004.56 ms to SingleWriterAppendLog.Waiting{limit=12}
16 threads, 8956 appends ( 895 flushes) in 30021.69 ms to SingleWriterAppendLog.Spinning{limit=16}
16 threads, 158579 appends ( 15380 flushes) in 30002.46 ms to SingleWriterAppendLog.Yielding{limit=16}
16 threads, 82333 appends ( 10293 flushes) in 30007.82 ms to SingleWriterAppendLog.Waiting{limit=16}
20 threads, 10037 appends ( 862 flushes) in 30008.67 ms to SingleWriterAppendLog.Spinning{limit=20}
20 threads, 187211 appends ( 14644 flushes) in 30006.72 ms to SingleWriterAppendLog.Yielding{limit=20}
20 threads, 116620 appends ( 11664 flushes) in 30007.56 ms to SingleWriterAppendLog.Waiting{limit=20}
-----------------------------------------------------------
Radim Vansa
Quality Assurance Engineer
JBoss Datagrid
tel. +420532294559 ext. 62559
Red Hat Czech, s.r.o.
Brno, Purkyňova 99/71, PSČ 612 45
Czech Republic
11 years, 3 months