Re: [infinispan-dev] [infinispan-internal] LevelDB performance testing
by Galder Zamarreño
Putting the Infinispan development list back on to get others' thoughts...
On Jun 24, 2013, at 11:57 AM, Radim Vansa <rvansa(a)redhat.com> wrote:
>
>
> ----- Original Message -----
> | From: "Galder Zamarreño" <galder(a)redhat.com>
> | To: "Radim Vansa" <rvansa(a)redhat.com>
> | Sent: Monday, June 24, 2013 10:45:32 AM
> | Subject: Re: [infinispan-internal] LevelDB performance testing
> |
> | Hey Radim,
> |
> | Thanks a lot for running these tests. Comments inline...
> |
> | On Jun 21, 2013, at 3:33 PM, Radim Vansa <rvansa(a)redhat.com> wrote:
> |
> | > Hi all,
> | >
> | > I've got some results from LevelDB performance testing.
> | >
> | > Use case description:
> | > - writes with unique keys about 100 bytes in size, no value.
> | > - reads rare
> | >
> | > Infinispan configuration:
> | > - eviction enabled (no passivation), 10000 entries in memory allowed (we
> | > don't need to occupy the memory if almost all operations are writes to
> | > unique keys)
> | > - synchronous mode (tests have not revealed any significant performance gain using async mode)
> | > - distributed cache with 2 owners and 40 segments
> | > - write-behind cache store with 4 threads
> |
> | ^ Hmmmm, why write-behind? If you really wanna test the performance of
> | LevelDB, and you want to check how fast it writes or deletes, how are you
> | gonna measure that if you're using an async store? Read operations, assuming
> | that data is evicted, will go all the way to the cache store, and those are
> | really the only numbers you can get out of this test…
>
> Because I want to handle stuff ASAP. I don't really care how long each operation takes, I am just interested in the throughput I can reach. And this should be best with write-behind, because we're not waiting for the data to be written down, right?
^ Indeed, but then I'm not sure you're really getting throughput numbers for the cache store implementation itself. In other words, with this test you're testing the async store logic, to see how much throughput you can handle queuing up those modifications… unless you have a validation phase that times how long it takes for the cache store to contain all the data it is supposed to have. Then yes, you'd be measuring how long it takes for all those queued-up modifications to be applied, which is a factor of many things, one of which is the speed of the cache store implementation in applying those changes at the persistence layer.
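Just to make that validation phase idea concrete, I mean something along these lines - a rough sketch only; how you'd obtain the CacheStore instance from the running cache is left out, and the String key type is an assumption of mine:

import java.util.Set;

import org.infinispan.loaders.CacheLoaderException;
import org.infinispan.loaders.CacheStore;

public class AsyncStoreValidation {

    // Polls the store until every written key is present, returning the elapsed millis.
    // How the CacheStore instance is obtained from the running cache is left out here.
    public static long timeUntilStoreContains(CacheStore store, Set<String> writtenKeys)
            throws CacheLoaderException, InterruptedException {
        long start = System.nanoTime();
        for (String key : writtenKeys) {
            while (!store.containsKey(key)) {
                Thread.sleep(10);   // the async store hasn't applied this queued modification yet
            }
        }
        return (System.nanoTime() - start) / 1000000;
    }
}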
>
> |
> | Assuming the levelDB cache store is not shared, why test a 4-node cluster? If
> | what you're trying to figure out is how fast LevelDB is as an unshared cache
> | store (assuming this is going to be used instead of the stock FCS…), then
> | you could have just run the tests against a local cache, right?
> |
> | > Test description:
> | > - we use 100-byte keys with key-id encoded in the first 8 bytes and the
> | > rest is filled with random bytes (equal for all keys).
> | > - value is an empty byte array
> |
> | ^ Empty? That's not very realistic, and you'd hardly stress how the cache
> | store deals with differently sized data. I'd say at least 1 kB would be
> | the minimum?
>
> That's exactly what the customer asked for - usually we really do use 1 kB, but his scenario probably uses just some dummy tombstones for most of the data.
Fair enough.
>
> |
> | > - 99 % of requests are PUTs (without any flags such as
> | > IGNORE_RETURN_VALUES), 1 % are GETs
> | > - we do not pre-load any keys to the cache, everything starts empty and
> | > each PUT operation adds a new key
> | > - GET operation reads one of the keys that is already in the cache, with
> | > uniform distribution of probability
> | > - 4-node JDG cluster stressed from 3 machines
> | >
> | >> From the client-stress test (200 new clients added every 3 minutes) we
> | >> found an optimal value of 3500 clients (in total), which led to about 33k
> | >> operations per second.
> | > A soak test was executed to test a higher number of entries - the test was
> | > running with all 3500 clients right after startup, executing requests
> | > and therefore increasing the size of the DB.
> | >
> | > During the first 15 minutes a stable throughput of 33k operations per second
> | > was observed, adding 2M new keys (200 MB) every minute. iostat showed
> | > about 22 MB/s reads and 11 MB/s writes, with 550 read and 130 write (2600
> | > merged) requests to disk.
> | > Then the throughput started decreasing; during the following 35 minutes it
> | > dropped to 25k operations per second. No exceptions occurred in this
> | > period. By the end iostat showed 29 MB/s reads and 10 MB/s writes (up to
> | > 750 w/s (2400 wrqm/s), 115 r/s).
> | >
> | >
> | > After these 50 minutes of the test running, error messages started to appear
> | > on one of the nodes:
> | >
> | > 08:20:07,009 WARN [org.infinispan.loaders.decorators.AsyncStore]
> | > (AsyncStoreProcessor-testCache-0) ISPN000053: Unable to process some async
> | > modifications after 3 retries!
> | > and
> | > 08:20:09,720 ERROR
> | > [org.infinispan.interceptors.InvocationContextInterceptor]
> | > (HotRodServerWorker-84) ISPN000136: Execution error:
> | > org.infinispan.loaders.CacheLoaderException:
> | > org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException:
> | > java.io.FileNotFoundException: /tmp/leveldb/testCache/data/026662.sst (No
> | > space left on device)
> | >
> | > At this moment LevelDB was occupying about 2 GB of the 16 GB of available space,
> | > there were plenty of free inodes and the db directory had about 4000 files inside
> | > (1100 with some content, the rest of them empty).
> | > 2 minutes later the JVM of one of the nodes crashed (see
> | > https://bugzilla.redhat.com/show_bug.cgi?id=976664). The test was
> | > terminated as the cluster was practically not responding and did not meet
> | > the performance criteria.
> |
> | ^ What was the performance criteria?
>
> Sorry for not mentioning this earlier - 90 % of requests should have a response time < 500 ms. This is a very liberal setting, stopping the test only when things really screw up.
>
> Radim
>
> |
> | >
> | > I believe the exact performance results after things went wrong do not
> | > need to be presented.
> | >
> | > Radim
> | >
> | > -----------------------------------------------------------
> | > Radim Vansa
> | > Quality Assurance Engineer
> | > JBoss Datagrid
> | > tel. +420532294559 ext. 62559
> | >
> | > Red Hat Czech, s.r.o.
> | > Brno, Purkyňova 99/71, PSČ 612 45
> | > Czech Republic
> | >
> | >
> |
> |
> | --
> | Galder Zamarreño
> | galder(a)redhat.com
> | twitter.com/galderz
> |
> | Project Lead, Escalante
> | http://escalante.io
> |
> | Engineer, Infinispan
> | http://infinispan.org
> |
> |
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
Re: [infinispan-dev] [infinispan-internal] PutMapCommand is ineffective
by Manik Surtani
Agreed. It does sound pretty heavy. We should investigate a better implementation - the two approaches you suggest both sound good, could you create a JIRA for this?
Adding infinispan-dev, that's the correct place to discuss this.
Cheers
Manik
On 7 Jun 2013, at 13:39, Radim Vansa <rvansa(a)redhat.com> wrote:
> Hi,
>
> recently I was looking into the performance of PutMapCommand and what's in fact going on under the hood. From what I've seen (not from the code but from message flow analysis), in non-transactional synchronous mode this happens:
>
> A wants to execute PutMapCommand with many keys - let's assume that in fact the keys span all nodes in the cluster.
>
> 1. A locks all local keys and sends a unicast message to each primary owner of some of the keys in the map
> 2. A sends a unicast message to each node, requesting the operation
> 3. Each node locks its keys and sends a multicast message to ALL other nodes in the cluster
> This happens N - 1 times:
> 4. Each node receives the multicast message, (updates the non-primary segments) and sends a reply back to the sender of the mcast message.
> 5. The primary owners send confirmation back to A.
>
> Let's compute how many messages are received here - it's
> N - 1 // A's request
> (N - 1) * (N - 1) // multicast message received
> (N - 1) * (N - 1) // reply to the multicast message received
> N - 1 // response to A
> That's 2*N^2 - 2*N messages, assuming nobody needs flow control replenishments, nothing is lost, etc. I don't like that ^2 exponent - it does not look like the cluster is really scaling. It could be fun to see it executed on a 64-node cluster, spawning thousands of messages just for one putAll (with, say, 100 key-value pairs - I don't want to compute the exact probability of how many nodes such a set of keys would have primary segments on).
>
> Could the requestor orchestrate the whole operation? The idea is that all messages are sent only between the requestor and the other nodes, never among the other nodes. The requestor would lock the primary keys with one set of messages (waiting for replies), update the non-primaries with another set of messages and then unlock all primaries with a last message.
> The set of messages could be either unicasts carrying only the keys relevant to each recipient, or a multicast with the whole map - which one is actually better is a matter for a performance test.
> This results in 6*N - 6 messages (or 5*N - 5 if the last message didn't require a reply). You can easily see when 5*(N - 1) is better than 2*N*(N - 1).
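(A quick sanity check on those formulas, just plugging in numbers: for N = 64 the current scheme gives 2*64^2 - 2*64 = 8064 messages for a single putAll, while the orchestrated variant gives 6*64 - 6 = 378, or 5*63 = 315 without the final replies. And 5*(N - 1) < 2*N*(N - 1) reduces to 5 < 2*N, so the orchestrated variant needs fewer messages on any cluster of 3 or more nodes.)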
> Or is this too similar to transactions with multiple keys?
>
> I think that with the current implementation, the putAll operation should be discouraged, as it does not provide better performance than multiple puts (and in terms of atomicity it's probably not much better either).
>
> WDYT?
>
> Radim
>
> -----------------------------------------------------------
> Radim Vansa
> Quality Assurance Engineer
> JBoss Datagrid
> tel. +420532294559 ext. 62559
>
> Red Hat Czech, s.r.o.
> Brno, Purkyňova 99/71, PSČ 612 45
> Czech Republic
>
>
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid
Appending to file
by Radim Vansa
Hi,
I was playing with efficient ways to write to an append-only log and I have some results below.
I've used three implementations. The first simply synchronizes the access and calls force(false) after each write (by default 1 kB). The second has the threads cooperating - every thread puts its data into a queue and waits for a short period of time; if its data is still in the queue after that, it writes the whole queue to disk, flushes it and wakes up the other waiting threads. The third implementation (actually in three flavours) uses one spooler thread which polls the queue, writes as much as it can to disk, flushes and notifies the waiting threads. The flavours differ in how the threads wait for the spooler to write their data - either spinning on a volatile counter, checking this counter while yielding, or waiting via Object.wait()/Object.notify().
According to the results, the spooler variant with yielding proved to be the best, with 20 threads even 10x faster than the basic variant (which simply does not scale). As I experimented more, it scaled even further - with 80 threads I have achieved 435k appends, that's 14.16 MB/s. For better context, the basic variant without flushing would handle 1.67M appends = 54.26 MB/s on 80 threads, and the spooler 660k appends = 21.48 MB/s (curiously, the non-flushing spooler with wait/notify would do 1.19M appends). I do not present the whole table for higher thread counts as I don't think that many threads would really access the disk concurrently.
Note: the limit in WaitingAppendLog says how big the queue has to be for a thread to skip waiting and start writing the data immediately. Setting this to the thread count obviously provides the best performance, but in practice the number of threads cannot be guessed that easily - therefore, I provide results for an exact guess, half the number and twice the number of threads.
Note2: in SingleWriterAppendLog the limit is not that important - it just limits the amount of data written in one batch before a flush.
Note3: on my local machine the spinning version was the slowest of the three, but still faster than the simple variant.
Source code of the test is in the attachment.
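For a quick idea of the structure, here is a simplified sketch of the yielding spooler variant - it is not the exact attached code (it uses a per-record flag instead of the counter), just an illustration of the idea:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class YieldingSpoolerAppendLog {

    // One queued append: the data plus a flag the spooler sets once the batch is forced to disk.
    private static final class Record {
        final ByteBuffer data;
        volatile boolean flushed;
        Record(byte[] bytes) { this.data = ByteBuffer.wrap(bytes); }
    }

    private final FileChannel channel;
    private final ConcurrentLinkedQueue<Record> queue = new ConcurrentLinkedQueue<Record>();

    public YieldingSpoolerAppendLog(String path) throws IOException {
        channel = FileChannel.open(Paths.get(path),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        Thread spooler = new Thread(new Runnable() {
            public void run() { spool(); }
        }, "append-log-spooler");
        spooler.setDaemon(true);
        spooler.start();
    }

    // Called by many threads: enqueue the record and yield until the spooler has flushed it.
    public void append(byte[] bytes) {
        Record r = new Record(bytes);
        queue.add(r);
        while (!r.flushed) {
            Thread.yield();
        }
    }

    // Single spooler thread: drain whatever is queued, write it all, force once per batch.
    private void spool() {
        List<Record> batch = new ArrayList<Record>();
        try {
            while (true) {
                Record r = queue.poll();
                if (r == null) {
                    Thread.yield();
                    continue;
                }
                batch.clear();
                while (r != null) {
                    channel.write(r.data);
                    batch.add(r);
                    r = queue.poll();
                }
                channel.force(false);        // one fsync for the whole batch
                for (Record done : batch) {
                    done.flushed = true;     // release the waiting writer threads
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}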
Radim
Executed on 8-core AMD Opteron(tm) Processor 6128
4 threads, 16988 appends ( 16988 flushes) in 30006.30 ms to SimpleAppendLog
8 threads, 17108 appends ( 17108 flushes) in 30018.71 ms to SimpleAppendLog
12 threads, 17139 appends ( 17139 flushes) in 30029.29 ms to SimpleAppendLog
16 threads, 17280 appends ( 17280 flushes) in 30033.49 ms to SimpleAppendLog
20 threads, 17511 appends ( 17511 flushes) in 30043.29 ms to SimpleAppendLog
4 threads, 37956 appends ( 9489 flushes) in 30002.46 ms to WaitingAppendLog{limit=4, waitTime=1}
4 threads, 29536 appends ( 7384 flushes) in 30001.88 ms to WaitingAppendLog{limit=8, waitTime=1}
4 threads, 29530 appends ( 14765 flushes) in 30001.27 ms to WaitingAppendLog{limit=2, waitTime=1}
4 threads, 39280 appends ( 9820 flushes) in 30001.71 ms to WaitingAppendLog{limit=4, waitTime=2}
4 threads, 23720 appends ( 5930 flushes) in 30002.85 ms to WaitingAppendLog{limit=8, waitTime=2}
4 threads, 29302 appends ( 14651 flushes) in 30001.53 ms to WaitingAppendLog{limit=2, waitTime=2}
4 threads, 39620 appends ( 9905 flushes) in 30004.34 ms to WaitingAppendLog{limit=4, waitTime=4}
4 threads, 16600 appends ( 4150 flushes) in 30002.58 ms to WaitingAppendLog{limit=8, waitTime=4}
4 threads, 29234 appends ( 14617 flushes) in 30002.93 ms to WaitingAppendLog{limit=2, waitTime=4}
8 threads, 71457 appends ( 8933 flushes) in 30004.31 ms to WaitingAppendLog{limit=8, waitTime=1}
8 threads, 54152 appends ( 6769 flushes) in 30003.77 ms to WaitingAppendLog{limit=16, waitTime=1}
8 threads, 38104 appends ( 9526 flushes) in 30003.66 ms to WaitingAppendLog{limit=4, waitTime=1}
8 threads, 70248 appends ( 8781 flushes) in 30002.34 ms to WaitingAppendLog{limit=8, waitTime=2}
8 threads, 44240 appends ( 5530 flushes) in 30003.16 ms to WaitingAppendLog{limit=16, waitTime=2}
8 threads, 38788 appends ( 9697 flushes) in 30002.56 ms to WaitingAppendLog{limit=4, waitTime=2}
8 threads, 68456 appends ( 8557 flushes) in 30002.56 ms to WaitingAppendLog{limit=8, waitTime=4}
8 threads, 32432 appends ( 4054 flushes) in 30008.19 ms to WaitingAppendLog{limit=16, waitTime=4}
8 threads, 38164 appends ( 9541 flushes) in 30003.94 ms to WaitingAppendLog{limit=4, waitTime=4}
12 threads, 97084 appends ( 8091 flushes) in 30005.87 ms to WaitingAppendLog{limit=12, waitTime=1}
12 threads, 78684 appends ( 6557 flushes) in 30005.24 ms to WaitingAppendLog{limit=24, waitTime=1}
12 threads, 51462 appends ( 8577 flushes) in 30002.70 ms to WaitingAppendLog{limit=6, waitTime=1}
12 threads, 100200 appends ( 8350 flushes) in 30004.20 ms to WaitingAppendLog{limit=12, waitTime=2}
12 threads, 66283 appends ( 5524 flushes) in 30005.82 ms to WaitingAppendLog{limit=24, waitTime=2}
12 threads, 52134 appends ( 8689 flushes) in 30003.80 ms to WaitingAppendLog{limit=6, waitTime=2}
12 threads, 95885 appends ( 7991 flushes) in 30007.94 ms to WaitingAppendLog{limit=12, waitTime=4}
12 threads, 47700 appends ( 3975 flushes) in 30009.75 ms to WaitingAppendLog{limit=24, waitTime=4}
12 threads, 51822 appends ( 8637 flushes) in 30003.71 ms to WaitingAppendLog{limit=6, waitTime=4}
16 threads, 126192 appends ( 7887 flushes) in 30005.76 ms to WaitingAppendLog{limit=16, waitTime=1}
16 threads, 104800 appends ( 6550 flushes) in 30002.69 ms to WaitingAppendLog{limit=32, waitTime=1}
16 threads, 68168 appends ( 8521 flushes) in 30008.17 ms to WaitingAppendLog{limit=8, waitTime=1}
16 threads, 119643 appends ( 7478 flushes) in 30005.76 ms to WaitingAppendLog{limit=16, waitTime=2}
16 threads, 84576 appends ( 5286 flushes) in 30004.32 ms to WaitingAppendLog{limit=32, waitTime=2}
16 threads, 70368 appends ( 8796 flushes) in 30003.51 ms to WaitingAppendLog{limit=8, waitTime=2}
16 threads, 119486 appends ( 7468 flushes) in 30005.89 ms to WaitingAppendLog{limit=16, waitTime=4}
16 threads, 62452 appends ( 3904 flushes) in 30007.32 ms to WaitingAppendLog{limit=32, waitTime=4}
16 threads, 67912 appends ( 8489 flushes) in 30004.71 ms to WaitingAppendLog{limit=8, waitTime=4}
20 threads, 139788 appends ( 6990 flushes) in 30005.29 ms to WaitingAppendLog{limit=20, waitTime=1}
20 threads, 120999 appends ( 6050 flushes) in 30007.57 ms to WaitingAppendLog{limit=40, waitTime=1}
20 threads, 80130 appends ( 8013 flushes) in 30004.85 ms to WaitingAppendLog{limit=10, waitTime=1}
20 threads, 140020 appends ( 7001 flushes) in 30008.80 ms to WaitingAppendLog{limit=20, waitTime=2}
20 threads, 101877 appends ( 5094 flushes) in 30006.94 ms to WaitingAppendLog{limit=40, waitTime=2}
20 threads, 78710 appends ( 7871 flushes) in 30005.05 ms to WaitingAppendLog{limit=10, waitTime=2}
20 threads, 150128 appends ( 7507 flushes) in 30009.89 ms to WaitingAppendLog{limit=20, waitTime=4}
20 threads, 77120 appends ( 3856 flushes) in 30009.32 ms to WaitingAppendLog{limit=40, waitTime=4}
20 threads, 78450 appends ( 7845 flushes) in 30007.46 ms to WaitingAppendLog{limit=10, waitTime=4}
4 threads, 40707 appends ( 17046 flushes) in 30002.08 ms to SingleWriterAppendLog.Spinning{limit=4}
4 threads, 42595 appends ( 20804 flushes) in 30000.96 ms to SingleWriterAppendLog.Yielding{limit=4}
4 threads, 28979 appends ( 14492 flushes) in 30001.37 ms to SingleWriterAppendLog.Waiting{limit=4}
8 threads, 6252 appends ( 1255 flushes) in 30011.34 ms to SingleWriterAppendLog.Spinning{limit=8}
8 threads, 85144 appends ( 17859 flushes) in 30002.90 ms to SingleWriterAppendLog.Yielding{limit=8}
8 threads, 38323 appends ( 9583 flushes) in 30006.94 ms to SingleWriterAppendLog.Waiting{limit=8}
12 threads, 7241 appends ( 880 flushes) in 30026.81 ms to SingleWriterAppendLog.Spinning{limit=12}
12 threads, 126233 appends ( 17027 flushes) in 30003.59 ms to SingleWriterAppendLog.Yielding{limit=12}
12 threads, 55867 appends ( 9314 flushes) in 30004.56 ms to SingleWriterAppendLog.Waiting{limit=12}
16 threads, 8956 appends ( 895 flushes) in 30021.69 ms to SingleWriterAppendLog.Spinning{limit=16}
16 threads, 158579 appends ( 15380 flushes) in 30002.46 ms to SingleWriterAppendLog.Yielding{limit=16}
16 threads, 82333 appends ( 10293 flushes) in 30007.82 ms to SingleWriterAppendLog.Waiting{limit=16}
20 threads, 10037 appends ( 862 flushes) in 30008.67 ms to SingleWriterAppendLog.Spinning{limit=20}
20 threads, 187211 appends ( 14644 flushes) in 30006.72 ms to SingleWriterAppendLog.Yielding{limit=20}
20 threads, 116620 appends ( 11664 flushes) in 30007.56 ms to SingleWriterAppendLog.Waiting{limit=20}
-----------------------------------------------------------
Radim Vansa
Quality Assurance Engineer
JBoss Datagrid
tel. +420532294559 ext. 62559
Red Hat Czech, s.r.o.
Brno, Purkyňova 99/71, PSČ 612 45
Czech Republic
Fw: Infinispan DEF - Custom parameters upon a failover
by Strahinja Lazetic
Hi,
Sorry for a bit late response, I was very busy with some other stuff. I have some doubts and comments on your suggestions:
> Well, but you wanted to have a setEnvironment that takes extra params, so I guess you'll be calling it at some point, so can't you just cast the callable to your own and call whatever setters you want? I'm very wary of adding random parameters to setEnvironment for "random" stuff :)
I am not sure where I should cast my Callable and call the setter from, upon a failover. I mean, can I use some of the existing DEF interfaces for it, or do I have to do it deeper in the code?
> Maybe your distributed callable could have a reference to your own failover policy and query it?
In the case that my Callable is running on a remote node, to achieve this I guess I would have to make the failover policy Serializable and hold a remote reference to it, right? And from the design point of view I am not sure whether it is good that the Callable is aware of its failover policy.
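To make it concrete, this is roughly what I understand under the casting approach - only DistributedCallable is a real Infinispan interface below; MyCallable, the markRestarted() setter and the place where the cast would happen are hypothetical:

import java.io.Serializable;
import java.util.Set;

import org.infinispan.Cache;
import org.infinispan.distexec.DistributedCallable;

public class MyCallable<K, V> implements DistributedCallable<K, V, String>, Serializable {

    private boolean restarted;            // the extra bit of state I'd like to set on failover
    private transient Cache<K, V> cache;  // provided by the framework via setEnvironment
    private transient Set<K> keys;

    // Extra setter, called by my own code after casting the callable - not by the framework.
    public void markRestarted() {
        this.restarted = true;
    }

    @Override
    public void setEnvironment(Cache<K, V> cache, Set<K> inputKeys) {
        this.cache = cache;
        this.keys = inputKeys;
    }

    @Override
    public String call() throws Exception {
        return (restarted ? "restarted" : "fresh start") + " with " + keys.size() + " keys";
    }
}

// Somewhere where I still hold a reference to the callable (e.g. my own failover policy,
// assuming it can see the callable at all - that is exactly the part I am unsure about):
//
//   DistributedCallable<String, String, String> c = ...;
//   ((MyCallable<String, String>) c).markRestarted();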
I would appreciate any comment on this.
Best regards,
Strahinja Lazetic
________________________________
From: Galder Zamarreño <galder(a)redhat.com>
To: Strahinja Lazetic <lazetics(a)yahoo.com>
Cc: infinispan -Dev List <infinispan-dev(a)lists.jboss.org>
Sent: Wednesday, May 29, 2013 9:59 AM
Subject: Re: [infinispan-dev] Infinispan DEF - Custom parameters upon a failover
On May 28, 2013, at 11:56 AM, Strahinja Lazetic <lazetics(a)yahoo.com> wrote:
> Hi Galder,
>
> thank you for your response. I can serialize the parameters that I passed in the constructor upon the first start, but they remain the same upon a failover and recreation of the Callable, so I cannot get any new info based on the failover.
Well, but you wanted to have a setEnvironment that takes extra params, so I guess you'll be
calling it at some point, so can't you just cast the callable to your own and call whatever setters you want? I'm very wary of adding random parameters to setEnvironment for "random" stuff :)
> As I mentioned in my example (though maybe not a very smart one), how can I tell my Callable that it is not starting for the first time but rather was restarted? Or give it any other data that depends on the specific conditions of the failover (e.g. the Callable needs to know a new IP or port that was determined in the FailoverPolicy based on the available targets...).
^ Maybe your distributed callable could have a reference to your own failover policy and query it?
>
> Thanks,
> Strahinja Lazetic
>
>
> From: Galder Zamarreño <galder(a)redhat.com>
> To: Strahinja Lazetic <lazetics(a)yahoo.com>; infinispan -Dev List <infinispan-dev(a)lists.jboss.org>
> Sent: Tuesday, May 28, 2013 10:58 AM
> Subject: Re: [infinispan-dev] Infinispan DEF - Custom parameters upon a failover
>
> Hi Strahinja,
>
> Glad to hear that you're using Infinispan in your master thesis :)
>
> I have some reservations about adding these methods.
>
> You can create a custom distributed callable, so why not pass in this info on construction of the callable, and make sure it's serialized so these parameters are available when these classes are sent to other nodes?
>
> Cheers,
>
> On May 27, 2013, at 6:51 PM, Strahinja Lazetic <lazetics(a)yahoo.com> wrote:
>
> > Hi,
> >
> > I am currently using Infinispan Distributed Execution Framework for my Master thesis project and I am wondering if it is possible that a DistributedCallable receives some custom parameters from the framework upon a failover, besides Cache instance and Set of keys passed in the setEnvironment method. As a simplified example, a DistributedCallable wants to know whether it was restarted or freshly started. If this is not possible, does it make sense to add a new method to the DistributedTaskFailoverPolicy which will return a user supplied parameters Map and then read it from the setEnvironment method in the DistributedCallable? Here are the signatures of the methods I was thinking off:
> >
> > In the DistributedTaskFailoverPolicy class:
> >
> > Map<Object, Object> getEnvironmentParameters()
> >
> > In the DistributedCallable class:
> >
> > setEnvironment(Cache<K, V> cache, Set<K> inputKeys, Map<Object, Object> params)
> >
> > I was free to try to implement this and did not take to much time, so now I am wondering if there was already a way to do something like this in the Infinispan DEF.
> >
> > Thank you in advance for your comments and suggestions.
> > Strahinja Lazetic
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev(a)lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> --
> Galder Zamarreño
> galder(a)redhat.com
> twitter.com/galderz
>
> Project Lead, Escalante
> http://escalante.io
>
> Engineer, Infinispan
> http://infinispan.org
>
>
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
Re: [infinispan-dev] File cache store comparison tests
by Mircea Markus
Adding infinispan dev.
Sent from my iPhone
On 27 Jun 2013, at 22:59, Divya Mehra <dmehra(a)redhat.com> wrote:
> Fyi -
>
> Input from Hiram on #jdg regarding levelDB configuration for better performance [1]
>
> chirino
> 2:48 was reviewing the leveldb bits and it seems like compression is off by default. Can anyone comment?
> dmehra
> 3:29 chirino: which leveldb bits are you referring to?
> chirino
> 3:30 https://github.com/infinispan/infinispan/blob/master/cachestore/leveldb/s...
> dmehra
> 3:31 chirino: what is the impact? and what should the value be?
> chirino
> 3:32 set it to CompressionType.SNAPPY
> 3:32 you get better perf.
>
> chirino
> 3:45 yeah if you guys are doing perf comparisons you really need to run /w SNAPPY enabled.
> 3:46 it can be a totally different beast.
> dmehra
> 3:46 chirino: how much expected speedup?
> chirino
> 3:46 depends on the data, but if the data compresses well, it can be HUGE.
> 3:47 there are JNI snappy libs too.. use that if you can as those too are faster.
> just add the following jars to your classpath:
> 3:49 https://gist.github.com/chirino/5879733
> 3:49 that way you can get JNI speed if it's available and fallback to pure java snappy if it's not.
>
> [1] https://community.jboss.org/wiki/FileCacheStoreRedesign
>
> Thanks,
> Divya
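For reference, enabling Snappy with the plain iq80 leveldb API looks roughly like the sketch below - this is not the Infinispan store's own configuration, which would need to expose the equivalent option through its builder:

import java.io.File;
import java.io.IOException;

import org.iq80.leveldb.CompressionType;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import org.iq80.leveldb.impl.Iq80DBFactory;

public class SnappyLevelDbExample {
    public static void main(String[] args) throws IOException {
        Options options = new Options()
                .createIfMissing(true)
                .compressionType(CompressionType.SNAPPY);   // per the chat above, the store leaves compression off
        DB db = Iq80DBFactory.factory.open(new File("/tmp/leveldb-snappy-test"), options);
        try {
            db.put("key".getBytes(), "value".getBytes());
        } finally {
            db.close();
        }
    }
}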
Introducing builder subpackages
by Navin Surtani
While working through ISPN-2463 and its sub-tasks, I was wondering about the organisation of the ConfigurationBuilder classes.
Currently, they are located in org.infinispan.configuration.cache.etc or org.infinispan.configuration.global.etc. The actual Configuration classes are already in the same packages as well. To me, this seems a little cluttered and perhaps not very intuitive, and I was wondering if it might be a better idea to have something like:
org.infinispan.configuration.cache.builders.ConfigurationBuilder (and others)
org.infinispan.configuration.global.builders.GlobalConfigurationBuilder (etc etc)
Another suggestion could be:
org.infinispan.configuration.builders.cache.etc
org.infinispan.configuration.builders.global.etc
The only problem with that would be breaking backward compatibility, but from ISPN 6.x onwards I think a fair few classes and packages are being moved around anyway. It's just an idea that might make the API a little cleaner in terms of where things are located.
Thoughts?
------------------------
Navin Surtani
Software Engineer
JBoss SET
JBoss EAP
Twitter: @navssurtani
Blog: navssurtani.blogspot.com
Big commit coming up
by Manik Surtani
Guys,
I'm about to put in a big commit on master, to switch to the Apache license. The patch will contain:
* Updated README.mkdn
* Updated LICENSE.txt.vm
* A new COPYRIGHT.mkdn in the project root
* Removal of *all* copyright headers on all source code files
Point #4 will mean that all outstanding pull requests will need to be rebased before being merged in.
Also, as a note to reviewers, please ensure there are NO copyright headers on any src code files after this patch is in.
Now before I submit the patch - what does everyone think of this?
Cheers
Manik
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Platform Architect, JBoss Data Grid
http://red.ht/data-grid