[infinispan-dev] Performance gap between different value sizes and between key locations

linjunru linjunru at huawei.com
Tue Dec 16 07:37:29 EST 2014


Dan & Radim, Thanks! 

	I have attempted to disable storeAsBinary with the following Infinispan configuration, but the results do not show much difference.

<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:7.0 http://www.infinispan.org/schemas/infinispan-config-7.0.xsd"
            xmlns="urn:infinispan:config:7.0">
  <jgroups>
    <stack-file name="udp" path="jgroups.xml" />
  </jgroups>
  <cache-container default-cache="default">
    <transport stack="udp" node-name="${nodeName}" />
    <replicated-cache name="repl" mode="SYNC" />
    <distributed-cache name="dist" mode="SYNC" owners="1">
      <store-as-binary keys="false" values="false" />
    </distributed-cache>
  </cache-container>
</infinispan>
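
	For reference, the same "dist" cache can also be defined programmatically. This is only a minimal sketch: the cache name, owners="1" and the store-as-binary flags mirror the XML above, everything else (class name, jgroups property) is an illustrative assumption:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class DistCacheConfigSketch {
    public static void main(String[] args) {
        // Clustered cache manager using the external jgroups.xml stack file
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().addProperty("configurationFile", "jgroups.xml");

        // DIST_SYNC cache with a single owner, storeAsBinary explicitly disabled
        ConfigurationBuilder dist = new ConfigurationBuilder();
        dist.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(1);
        dist.storeAsBinary().disable();

        DefaultCacheManager cm = new DefaultCacheManager(global.build());
        cm.defineConfiguration("dist", dist.build());
        cm.getCache("dist"); // starts the cache
    }
}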


	The Infinispan configuration used in the previous experiments was:
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:7.0 http://www.infinispan.org/schemas/infinispan-config-7.0.xsd"
            xmlns="urn:infinispan:config:7.0">
  <jgroups>
    <stack-file name="udp" path="jgroups.xml" />
  </jgroups>
  <cache-container default-cache="default">
    <transport stack="udp" node-name="${nodeName}" />
    <replicated-cache name="repl" mode="SYNC" />
    <local-cache name="local" />
    <distributed-cache name="dist" mode="SYNC" owners="1" />
  </cache-container>
</infinispan>
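
	A put-latency loop of the kind discussed in this thread might look roughly like the sketch below. It is only an illustration, not the actual test code: the byte[] values, key naming, warm-up omission and single value size per run are assumptions.

import java.util.Random;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class PutLatencySketch {
    public static void main(String[] args) throws Exception {
        // Start the cache manager from the XML configuration shown above
        DefaultCacheManager cm = new DefaultCacheManager("infinispan.xml");
        Cache<String, byte[]> cache = cm.getCache("dist");

        int keys = 10_000;                    // same key count as in the experiments
        byte[] value = new byte[25 * 1024];   // e.g. the 25K case; change per run
        new Random(42).nextBytes(value);

        long start = System.nanoTime();
        for (int i = 0; i < keys; i++) {
            cache.put("key-" + i, value);     // with owners=1 a put is local or one RPC away
        }
        long avgUs = (System.nanoTime() - start) / keys / 1_000L;
        System.out.println("average put latency: " + avgUs + " us");

        cm.stop();
    }
}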

Best Regards,
JR


> -----Original Message-----
> From: infinispan-dev-bounces at lists.jboss.org
> [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Dan Berindei
> Sent: Monday, December 15, 2014 11:44 PM
> To: infinispan-dev List
> Subject: Re: [infinispan-dev] Performance gap between different value sizes and
> between key locations
> 
> JR, could you share your test, or at least the configuration you used and what
> key/value types you used?
> 
> Like Radim said, in your 1+0 scenario with storeAsBinary disabled and no cache
> store attached, I would expect the latency to be exactly the same for all value
> sizes.
> 
> Cheers
> Dan
> 
> 
> On Mon, Dec 15, 2014 at 3:34 PM, Radim Vansa <rvansa at redhat.com> wrote:
> > Hi JR,
> >
> > thanks for those findings! I benchmarked the dependency of achieved
> > throughput on entry size in the past, and I found the sweet spot at 8k
> > values (likely because our machines had 9k MTU). Regrettably, we were
> > focusing on throughput rather than on latency.
> >
> > I think that the increased latency could be on account of:
> > a) marshalling - this is the top suspect
> > b) when data is received from the network (in JGroups), it is copied
> > from the socket into a buffer
> > c) general GC activity - with a larger data flow you trigger GC sooner
> >
> > Though I am quite surprised by such linear scaling; usually RPC
> > latency or waiting for locks is the villain. Unless you enable
> > storeAsBinary in the cache configuration, Infinispan treats values as
> > references and there should be no overhead involved.
> >
> > Could you set up a sampling-mode profiler and check what it reports?
> > All of the above are just slightly educated guesses.
> >
> > Radim
> >
> > On 12/15/2014 01:54 PM, linjunru wrote:
> >>
> >> Hi, all:
> >>
> >> I have tested Infinispan in distributed mode, measuring the latency of
> >> the put(k,v) operation. The number of owners is 1 and the key we
> >> put/write is located on the same node where the put operation occurs
> >> (in the table, "1+0" represents this scenario). The results indicate
> >> that the latency increases as the value size increases. However, the
> >> increase seems a little "unreasonable" to me, because the bandwidth of
> >> the memory system is quite large and the number of keys (10000) remains
> >> the same during the experiment. So here is the question: which
> >> operations inside Infinispan depend strongly on the value size, and why
> >> do they cost so much as the size increases?
> >>
> >> We have also tested Infinispan in the scenario where the key and the
> >> put/write(key, value) operation reside on different nodes (we denote it
> >> as "0+1"). Compared with "1+0", "0+1" triggers network communication;
> >> however, the network latency is much smaller than the performance gap
> >> between the two scenarios. Why does this happen? For example, with a
> >> 25K-byte ping packet the RTT is about 0.713 ms, while the performance
> >> gap between the two scenarios is about 8.4 ms. What operations inside
> >> Infinispan use the other 7.6 ms?
> >>
> >> UDP is used as the transport protocol, the Infinispan version is 7.0,
> >> and there are 4 nodes in the cluster, each holding 10000 keys. All of
> >> them have more than 32 GB of memory and two Xeon E5-2407 CPUs.
> >>
> >> Value size    250B (us)    2.5K (us)    25K (us)    250K (us)    2.5M (us)    25M (us)
> >> 1+0                 463          726      3 236       26 560      354 454    3 979 830
> >> 0+1               1 807        2 829     11 635       87 540    1 035 133   11 653 389
> >>
> >> Thanks!
> >>
> >> Best Regards,
> >>
> >> JR
> >>
> >>
> >>
> >> _______________________________________________
> >> infinispan-dev mailing list
> >> infinispan-dev at lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> > --
> > Radim Vansa <rvansa at redhat.com>
> > JBoss DataGrid QA
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jgroups.xml
Type: application/xml
Size: 4393 bytes
Desc: jgroups.xml
Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20141216/7d87de4e/attachment-0001.rdf 
