[infinispan-dev] Performance gap between different value sizes and between key locations

Radim Vansa rvansa at redhat.com
Wed Dec 17 10:17:58 EST 2014


I think that you're configuring storeAsBinary correctly - however, one 
marshalling call is required anyway. The setting just means that you 
store the marshalled form in the cluster (more suitable when the value 
is read remotely, but it requires unmarshalling every time you read it 
locally).
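
If you want to rule out the XML parsing, the programmatic equivalent is 
roughly the following - just a sketch, I'm writing the builder calls 
from memory, so double-check the names against the 7.x API:

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.clustering().cacheMode(CacheMode.DIST_SYNC)
           .hash().numOwners(1);
    // disabled: the owner keeps the value as an object reference, so a
    // local read needs no unmarshalling; a remote read still marshals
    // once to cross the wire
    builder.storeAsBinary().disable();
    Configuration config = builder.build();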

Could you try to use a profiler? Especially with larger values (and 
response times on the order of hundreds of milliseconds) it has quite a 
good chance of hitting the hot spot. Just stick to sampling; 
instrumentation would skew the results too much (at least I was never 
able to get any reasonable readings when using instrumentation).
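
If you don't have a profiler at hand, even a crude in-process sampler 
illustrates the idea - a sketch using only plain JDK calls (start it 
inside the JVM under test; the top-of-stack frames that show up most 
often across snapshots approximate the hot spot):

    import java.util.HashMap;
    import java.util.Map;

    public class PoorMansSampler {
        /** Start inside the JVM under test before the benchmark runs. */
        public static void start() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    Map<String, Integer> counts = new HashMap<String, Integer>();
                    try {
                        // ~500 snapshots, 10 ms apart, of all live thread stacks
                        for (int i = 0; i < 500; i++) {
                            for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                                if (stack.length > 0) {
                                    String top = stack[0].toString();
                                    Integer c = counts.get(top);
                                    counts.put(top, c == null ? 1 : c + 1);
                                }
                            }
                            Thread.sleep(10);
                        }
                    } catch (InterruptedException ignored) {
                    }
                    // frames seen most often approximate where the CPU time goes
                    for (Map.Entry<String, Integer> e : counts.entrySet()) {
                        if (e.getValue() > 50) {
                            System.out.println(e.getValue() + "  " + e.getKey());
                        }
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }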

Radim

On 12/17/2014 03:17 PM, linjunru wrote:
> Hi, all:
> 	I tested Infinispan again with a 10GbE L2 network. Write latency in the '1+0' scenario remains almost the same, and latency in the '0+1' scenario shows little improvement, especially as the size of the value increases. Do these results indicate that the network has little impact on Infinispan's write latency in these scenarios?
> 	As Radim mentioned, marshalling and general GC activity may be the other two causes of the high latency. I have tried to disable storeAsBinary, but there is not much difference. I'm not sure whether I configured Infinispan the right way, so I list my configuration at the end of this email again (^-^); if there is anything wrong with it, please point it out.
> 	Regarding GC activity, are there any configurations that can optimize it?
> 	Finally, does anybody (or any company) use Infinispan to store media such as images, videos or big files?
>
>
> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:7.0 http://www.infinispan.org/schemas/infinispan-config-7.0.xsd" xmlns="urn:infinispan:config:7.0">
>   <jgroups>
>     <stack-file name="udp" path="jgroups.xml" />
>   </jgroups>
>   <cache-container default-cache="default">
>     <transport stack="udp" node-name="${nodeName}" />
>     <replicated-cache name="repl" mode="SYNC" />
>     <local-cache name="local" />
>     <distributed-cache name="dist" mode="SYNC" owners="1">
>       <store-as-binary keys="false" values="false" />
>     </distributed-cache>
>   </cache-container>
> </infinispan>
> PS: only the distributed cache "dist" is utilized and tested.
>
> 	Thanks!
>
> Best Regards,
> JR
>
>
>> -----Original Message-----
>> From: infinispan-dev-bounces at lists.jboss.org
>> [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of linjunru
>> Sent: Tuesday, December 16, 2014 8:37 PM
>> To: infinispan -Dev List
>> Subject: Re: [infinispan-dev] Performance gap between different value sizes and
>> between key locations
>>
>> Dan & Radim, Thanks!
>>
>> 	I have attempted to disable storeAsBinary with the following Infinispan
>> configuration, but the results don't show much difference.
>>
>> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> xsi:schemaLocation="urn:infinispan:config:7.0
>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd"
>> xmlns="urn:infinispan:config:7.0">
>>   <jgroups>
>>     <stack-file name="udp" path="jgroups.xml" />
>>   </jgroups>
>>   <cache-container default-cache="default">
>>     <transport stack="udp" node-name="${nodeName}" />
>>     <replicated-cache name="repl" mode="SYNC" />
>>     <distributed-cache name="dist" mode="SYNC" owners="1">
>>       <store-as-binary keys="false" values="false" />
>>     </distributed-cache>
>>   </cache-container>
>> </infinispan>
>>
>>
>> 	The Infinispan configuration utilized by the previous experiments was:
>> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> xsi:schemaLocation="urn:infinispan:config:7.0
>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd"
>> xmlns="urn:infinispan:config:7.0">
>>   <jgroups>
>>     <stack-file name="udp" path="jgroups.xml" />
>>   </jgroups>
>>   <cache-container default-cache="default">
>>     <transport stack="udp" node-name="${nodeName}" />
>>     <replicated-cache name="repl" mode="SYNC" />
>>     <local-cache name="local" />
>>     <distributed-cache name="dist" mode="SYNC" owners="1" />
>>   </cache-container>
>> </infinispan>
>>
>> Best Regards,
>> JR
>>
>>
>>> -----Original Message-----
>>> From: infinispan-dev-bounces at lists.jboss.org
>>> [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Dan
>>> Berindei
>>> Sent: Monday, December 15, 2014 11:44 PM
>>> To: infinispan -Dev List
>>> Subject: Re: [infinispan-dev] Performance gap between different value
>>> sizes and between key locations
>>>
>>> JR, could you share your test, or at least the configuration you used
>>> and what key/value types you used?
>>>
>>> Like Radim said, in your 1+0 scenario with storeAsBinary disabled and
>>> no cache store attached, I would expect the latency to be exactly the
>>> same for all value sizes.
>>>
>>> Cheers
>>> Dan
>>>
>>>
>>> On Mon, Dec 15, 2014 at 3:34 PM, Radim Vansa <rvansa at redhat.com> wrote:
>>>> Hi JR,
>>>>
>>>> thanks for those findings! I benchmarked how the achieved throughput
>>>> depends on entry size in the past, and I found the sweet spot at 8k
>>>> values (likely because our machines had a 9k MTU). Regrettably, we
>>>> were focusing on throughput rather than on latency.
>>>>
>>>> I think that the increased latency could be on account of:
>>>> a) marshalling - this is the top suspect (see the sketch below)
>>>> b) copying - when data is received from the network (in JGroups), it
>>>> is copied from the socket into a buffer
>>>> c) general GC activity - with a larger data flow you trigger GC sooner
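>>>>
>>>> For (a), a quick way to see how raw serialization cost scales with
>>>> value size is to time plain JDK serialization as a rough stand-in for
>>>> the marshaller (just a sketch, not the actual Infinispan marshalling
>>>> path); for (c), running with -verbose:gc -XX:+PrintGCDetails shows
>>>> whether collections line up with the latency spikes:
>>>>
>>>>     import java.io.ByteArrayOutputStream;
>>>>     import java.io.IOException;
>>>>     import java.io.ObjectOutputStream;
>>>>
>>>>     public class MarshallingCost {
>>>>         public static void main(String[] args) throws IOException {
>>>>             // serialize values of growing size and time each one;
>>>>             // the sizes mirror the 250B..25M range from your table
>>>>             for (int size = 250; size <= 25_000_000; size *= 10) {
>>>>                 byte[] value = new byte[size];
>>>>                 long start = System.nanoTime();
>>>>                 ByteArrayOutputStream bos = new ByteArrayOutputStream();
>>>>                 ObjectOutputStream oos = new ObjectOutputStream(bos);
>>>>                 oos.writeObject(value);
>>>>                 oos.close();
>>>>                 System.out.println(size + " bytes -> "
>>>>                         + (System.nanoTime() - start) / 1000 + " us");
>>>>             }
>>>>         }
>>>>     }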
>>>>
>>>> Still, I am quite surprised by such linear scaling; usually RPC
>>>> latency or waiting for locks is the villain. Unless you enable
>>>> storeAsBinary in the cache configuration, Infinispan treats values as
>>>> references and there should be no overhead involved.
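>>>>
>>>> One way to convince yourself of the by-reference behaviour might be a
>>>> check like this (a sketch; it assumes storeAsBinary is disabled and
>>>> the key is owned by the local node):
>>>>
>>>>     // given some Cache<String, byte[]> cache whose key "k" maps locally
>>>>     byte[] value = new byte[1024];
>>>>     cache.put("k", value);
>>>>     value[0] = 42;
>>>>     // stored by reference, so the mutation is visible on read
>>>>     System.out.println(cache.get("k")[0]); // prints 42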
>>>>
>>>> Could you set up a profiler in sampling mode and check what it
>>>> reports? All of the above are just slightly educated guesses.
>>>>
>>>> Radim
>>>>
>>>> On 12/15/2014 01:54 PM, linjunru wrote:
>>>>> Hi, all:
>>>>>
>>>>> I have tested Infinispan in distributed mode in terms of the latency
>>>>> of the put(k,v) operation. The own_num (number of owners) is 1 and
>>>>> the key we put/write is located on the same node where the put
>>>>> operation occurs (in the table, "1+0" represents this scenario). The
>>>>> results indicate that the latency increases as the size of the value
>>>>> increases. However, the increments seem a little "unreasonable" to
>>>>> me, because the bandwidth of the memory system is quite large and the
>>>>> number of keys (10000) remains the same during the experiment. So,
>>>>> here are the questions: which operations inside Infinispan depend
>>>>> strongly on the size of the value, and why do they cost so much as
>>>>> the size increases?
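>>>>>
>>>>> For reference, the put-latency loop has roughly this shape (a
>>>>> simplified sketch, not the exact harness; the cache name and the
>>>>> configuration file name are placeholders):
>>>>>
>>>>>     import org.infinispan.Cache;
>>>>>     import org.infinispan.manager.DefaultCacheManager;
>>>>>
>>>>>     public class PutLatency {
>>>>>         public static void main(String[] args) throws Exception {
>>>>>             DefaultCacheManager cm = new DefaultCacheManager("infinispan.xml");
>>>>>             Cache<String, byte[]> cache = cm.getCache("dist");
>>>>>             byte[] value = new byte[25 * 1024]; // one of the tested sizes
>>>>>             long start = System.nanoTime();
>>>>>             // 10000 keys, as in the experiment
>>>>>             for (int i = 0; i < 10000; i++) {
>>>>>                 cache.put("key-" + i, value);
>>>>>             }
>>>>>             System.out.println("average put latency: "
>>>>>                     + (System.nanoTime() - start) / 10000 / 1000 + " us");
>>>>>             cm.stop();
>>>>>         }
>>>>>     }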
>>>>>
>>>>> We have also tested Infinispan in the scenario where the key and the
>>>>> put/write(key,value) operation reside on different nodes (we denote
>>>>> it as "0+1"). Compared with "1+0", "0+1" triggers network
>>>>> communication; however, the network latency is much smaller than the
>>>>> performance gap between the two scenarios. Why does this happen? For
>>>>> example, with a 25K-byte ping packet the RTT is about 0.713ms, while
>>>>> the performance gap between the two scenarios is about 8.4ms. What
>>>>> operations inside Infinispan consume the other 7.6ms?
>>>>>
>>>>> UDP is utilized as the transport protocol, the Infinispan version we
>>>>> used is 7.0, and there are 4 nodes in the cluster, each holding 10000
>>>>> keys. Every node has more than 32GB of memory and two Xeon E5-2407
>>>>> CPUs.
>>>>>
>>>>> Value size | 250B (us) | 2.5K (us) | 25K (us) | 250K (us) | 2.5M (us) | 25M (us)
>>>>> 1+0        |       463 |       726 |    3 236 |    26 560 |   354 454 |  3 979 830
>>>>> 0+1        |     1 807 |     2 829 |   11 635 |    87 540 | 1 035 133 | 11 653 389
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Best Regards,
>>>>>
>>>>> JR
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> Radim Vansa <rvansa at redhat.com>
>>>> JBoss DataGrid QA
>>>>


-- 
Radim Vansa <rvansa at redhat.com>
JBoss DataGrid QA


