[infinispan-dev] Performance gap between different value sizes and between key locations

Dan Berindei dan.berindei at gmail.com
Mon Dec 15 10:43:41 EST 2014


JR, could you share your test, or at least the configuration and the
key/value types you used?

Like Radim said, in your 1+0 scenario with storeAsBinary disabled and
no cache store attached, I would expect the latency to be exactly the
same for all value sizes.
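
If it helps, this is the kind of minimal check I have in mind - just a
sketch against the 7.0 embedded API, with byte[] values and the key
names/sizes being my assumptions, not necessarily what your test does:

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class PutLatencyCheck {
        public static void main(String[] args) {
            ConfigurationBuilder cfg = new ConfigurationBuilder();
            // DIST_SYNC with a single owner, i.e. the "1+0" scenario
            // whenever the key's owner is the local node
            cfg.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(1);
            // storeAsBinary is off by default; disabled explicitly here.
            // With it off, a local put stores the value by reference and
            // never serializes it, so value size should not matter.
            cfg.storeAsBinary().disable();
            DefaultCacheManager cm = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(),
                cfg.build());
            try {
                Cache<String, byte[]> cache = cm.getCache();
                // no warm-up - this is a smoke test, not a benchmark
                for (int size : new int[] { 250, 2500, 25000, 250000 }) {
                    long start = System.nanoTime();
                    cache.put("key-" + size, new byte[size]);
                    long us = (System.nanoTime() - start) / 1000;
                    System.out.println(size + " B put took " + us + " us");
                }
            } finally {
                cm.stop();
            }
        }
    }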

Cheers
Dan


On Mon, Dec 15, 2014 at 3:34 PM, Radim Vansa <rvansa at redhat.com> wrote:
> Hi JR,
>
> thanks for those findings! I benchmarked how throughput depends on
> entry size in the past, and found the sweet spot at 8k values (likely
> because our machines had a 9k MTU). Regrettably, we were focusing on
> throughput rather than latency.
>
> I think the increased latency could be due to:
> a) marshalling - this is the top suspect (a quick way to check it in
> isolation is sketched below)
> b) the network receive path - in JGroups, data received from the
> socket is copied into a buffer
> c) general GC activity - with a larger data flow you trigger GC
> sooner
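>
> For (a), a crude check of the raw serialization cost might look like
> this (plain JDK serialization as a stand-in - Infinispan's marshaller
> is different, but if the trend matches your latencies, marshalling is
> a good bet):
>
>     import java.io.ByteArrayOutputStream;
>     import java.io.ObjectOutputStream;
>
>     public class MarshallingCost {
>         public static void main(String[] args) throws Exception {
>             for (int size : new int[] { 250, 2500, 25000, 250000, 2500000 }) {
>                 byte[] value = new byte[size];
>                 long start = System.nanoTime();
>                 // no JIT warm-up - only a rough smoke test
>                 ByteArrayOutputStream bos = new ByteArrayOutputStream();
>                 ObjectOutputStream oos = new ObjectOutputStream(bos);
>                 oos.writeObject(value);
>                 oos.close();
>                 long us = (System.nanoTime() - start) / 1000;
>                 System.out.println(size + " bytes -> " + us + " us");
>             }
>         }
>     }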
>
> That said, I am quite surprised by such linear scaling; usually RPC
> latency or waiting for locks is the villain. Unless you enable
> storeAsBinary in the cache configuration, Infinispan treats values as
> references, so there should be no size-dependent overhead.
>
> Could you run a sampling-mode profiler and check what it reports? All
> of the above are just slightly educated guesses.
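>
> If you don't have a profiler at hand, even a crude in-process sampler
> can point at the hot path - a minimal sketch (run it on a spare thread
> inside the benchmark JVM while the load is running, then look at the
> most frequent top frames):
>
>     import java.lang.management.ManagementFactory;
>     import java.lang.management.ThreadInfo;
>     import java.util.HashMap;
>     import java.util.Map;
>
>     public class CrudeSampler implements Runnable {
>         public void run() {
>             Map<String, Integer> hits = new HashMap<String, Integer>();
>             try {
>                 for (int i = 0; i < 1000; i++) { // ~10 s, one pass per 10 ms
>                     for (ThreadInfo ti : ManagementFactory.getThreadMXBean()
>                             .dumpAllThreads(false, false)) {
>                         StackTraceElement[] st = ti.getStackTrace();
>                         if (st.length > 0) {
>                             String top = st[0].toString();
>                             Integer n = hits.get(top);
>                             hits.put(top, n == null ? 1 : n + 1);
>                         }
>                     }
>                     Thread.sleep(10);
>                 }
>             } catch (InterruptedException e) {
>                 Thread.currentThread().interrupt();
>             }
>             for (Map.Entry<String, Integer> e : hits.entrySet()) {
>                 if (e.getValue() >= 100) { // frames seen in >=10% of passes
>                     System.out.println(e.getValue() + "  " + e.getKey());
>                 }
>             }
>         }
>     }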
>
> Radim
>
> On 12/15/2014 01:54 PM, linjunru wrote:
>>
>> Hi, all:
>>
>> I have tested Infinispan in distributed mode, measuring the latency
>> of the put(k,v) operation. The number of owners (numOwners) is 1,
>> and the key we put/write is located on the same node where the put
>> operation occurs (in the table, "1+0" denotes this scenario). The
>> results indicate that latency increases with the size of the value.
>> However, the increase seems a little "unreasonable" to me, because
>> the bandwidth of the memory system is huge and the number of keys
>> (10000) stays the same throughout the experiment. So here is the
>> question: which operations inside Infinispan depend strongly on the
>> value size, and why do they cost so much as the size grows?
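>>
>> (Illustrative only - whether a key falls in the "1+0" or the "0+1"
>> scenario on a given node can be checked via its primary owner, e.g.
>> with a helper along these lines:)
>>
>>     import org.infinispan.Cache;
>>     import org.infinispan.remoting.transport.Address;
>>
>>     // true if this node is the primary owner of the key ("1+0"),
>>     // false if the primary owner is another node ("0+1")
>>     static boolean isLocalOwner(Cache<String, byte[]> cache, String key) {
>>         Address primary = cache.getAdvancedCache()
>>                 .getDistributionManager().getPrimaryLocation(key);
>>         return primary.equals(cache.getCacheManager().getAddress());
>>     }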
>>
>> We have also tested Infinispan in the scenario where the key and the
>> put/write(key,value) operation reside on different nodes (we denote
>> this as "0+1"). Compared with "1+0", "0+1" triggers network
>> communication; however, the network latency is much smaller than the
>> performance gap between the two scenarios. Why does this happen? For
>> example, with a 25K-byte ping packet the RTT is about 0.713 ms,
>> while the gap between the two scenarios for 25K values is about
>> 8.4 ms (11,635 us - 3,236 us). What operations inside Infinispan
>> account for the remaining ~7.7 ms?
>>
>> UDP is used as the transport protocol and the Infinispan version is
>> 7.0. There are 4 nodes in the cluster, each holding 10000 keys;
>> every node has more than 32 GB of memory and two Xeon E5-2407 CPUs.
>>
>> Value size      250B     2.5K      25K      250K       2.5M         25M
>> 1+0 (us)         463      726    3,236    26,560    354,454   3,979,830
>> 0+1 (us)       1,807    2,829   11,635    87,540  1,035,133  11,653,389
>>
>> Thanks!
>>
>> Best Regards,
>>
>> JR
>>
>
>
> --
> Radim Vansa <rvansa at redhat.com>
> JBoss DataGrid QA
>