The benchmark in question runs simple insertions and retrievals via sockets. These benchmarks return better results when run on a local machine; however, the testing here is being done on the Amazon EC2 cloud.
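To make the methodology concrete, the measurement loop is essentially the following. This is a simplified sketch rather than my actual harness: the real test drives the cache through our socket front end, whereas this uses an embedded org.infinispan.Cache, and the key/value sizes and iteration counts are illustrative.

import java.util.Random;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class PutGetBench {
    public static void main(String[] args) {
        // An embedded cache stands in for the socket-fronted cache here;
        // the timing methodology is the same either way.
        Cache<byte[], byte[]> cache = new DefaultCacheManager().getCache();

        Random rnd = new Random();
        byte[] key = new byte[100];   // 100-byte key, matching the reported test
        byte[] value = new byte[100]; // 100-byte value
        rnd.nextBytes(key);
        rnd.nextBytes(value);

        // Warm up so that one-off costs (JIT compilation, connection setup,
        // discovery) are not attributed to the steady-state numbers.
        for (int i = 0; i < 1000; i++) {
            cache.put(key, value);
            cache.get(key);
        }

        int iterations = 10000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) cache.put(key, value);
        long putNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) cache.get(key);
        long getNanos = System.nanoTime() - start;

        System.out.printf("avg put: %.3f ms, avg get: %.3f ms%n",
                putNanos / 1e6 / iterations, getNanos / 1e6 / iterations);
    }
}

The warm-up pass is there so that channel connect and discovery costs do not leak into the averages.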

<br><pre>&lt;config xmlns=&quot;urn:org:jgroups&quot; xmlns:xsi=&quot;<a href="http://www.w3.org/2001/XMLSchema-instance" class="external free" title="http://www.w3.org/2001/XMLSchema-instance" rel="nofollow">http://www.w3.org/2001/XMLSchema-instance</a>&quot;  xsi:schemaLocation=&quot;urn:org:jgroups file:schema/JGroups-2.8.xsd&quot;&gt;<br>

<br>        &lt;TCP bind_port=&quot;7800&quot; /&gt;<br>        &lt;TCPPING timeout=&quot;3000&quot;<br>                 initial_hosts=&quot;${jgroups.tcpping.initial_hosts:10.209.166.79[7800],10.209.198.176[7800],10.208.199.223[7800],10.208.190.224[7800],10.208.70.112[7800]}&quot;<br>

                port_range=&quot;1&quot;<br>                num_initial_members=&quot;3&quot;/&gt;<br>         &lt;MERGE2 max_interval=&quot;30000&quot;  min_interval=&quot;10000&quot;/&gt;<br>         &lt;FD_SOCK/&gt;<br>

         &lt;FD timeout=&quot;10000&quot; max_tries=&quot;5&quot; /&gt;<br>         &lt;VERIFY_SUSPECT timeout=&quot;1500&quot;  /&gt;<br>        &lt;pbcast.NAKACK<br>                 use_mcast_xmit=&quot;false&quot; gc_lag=&quot;0&quot;<br>

                 retransmit_timeout=&quot;300,600,1200,2400,4800&quot;<br>                discard_delivered_msgs=&quot;true&quot;/&gt;<br>        &lt;UNICAST timeout=&quot;300,600,1200&quot; /&gt;<br>        &lt;pbcast.STABLE stability_delay=&quot;1000&quot; desired_avg_gossip=&quot;50000&quot;  max_bytes=&quot;400000&quot;/&gt;<br>

         &lt;pbcast.GMS print_local_addr=&quot;true&quot; join_timeout=&quot;3000&quot;   view_bundling=&quot;true&quot;/&gt;<br>        &lt;FC max_credits=&quot;2000000&quot;  min_threshold=&quot;0.10&quot;/&gt;<br>        &lt;FRAG2 frag_size=&quot;60000&quot;  /&gt;<br>

        &lt;pbcast.STREAMING_STATE_TRANSFER/&gt;<br>&lt;/config&gt;</pre>I have a theory, that perhaps the introduction of TCPPING in the jgroups file is resulting in some form of polling before the actual get request is processed and returned. Could this be the case ?<br>

<br><div class="gmail_quote">On Tue, Nov 17, 2009 at 12:03 AM, Manik Surtani <span dir="ltr">&lt;<a href="mailto:manik@jboss.org">manik@jboss.org</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">

> Hi Michael
>
> Could you please detail your benchmark test a bit more? We have done some internal benchmarks as well and things do look significantly different. Could you also tell us which version you have been benchmarking? We've made some significant changes to DIST between CR1 and CR2 with regards to performance.
>
> FYI, we use the CacheBenchFwk [1] to help benchmark stuff; you may find this useful too.
>
> Cheers
> Manik
>
> [1] http://cachebenchfwk.sourceforge.net
<div><div></div><div class="h5"><br>
<br>
On 15 Nov 2009, at 22:00, Michael Lawson (mshindo) wrote:<br>
<br>
&gt; Hi,<br>
&gt; I have been performing some benchmark testing on Infinispan Running in Distributed mode, with some unexpected results.<br>
&gt;<br>
&gt; For an insertion with a Key size of 100 Bytes, and Value size 100 Bytes, the insertion time was 0.13ms and retrieval was 128.06ms.<br>
&gt;<br>
&gt; Communication with the infinispan nodes is being done via a socket interface, using standard java serialization.<br>
&gt;<br>
&gt; The retrieval time is consistently high in comparison to other systems, and I am wondering whether there are some other benchmark reports floating around that I can compare results with.<br>
&gt;<br>
&gt; --<br>
&gt; Michael Lawson<br>
&gt;<br>
</div></div>&gt; _______________________________________________<br>
&gt; infinispan-dev mailing list<br>
&gt; <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
&gt; <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
<br>
--<br>
Manik Surtani<br>
<a href="mailto:manik@jboss.org">manik@jboss.org</a><br>
Lead, Infinispan<br>
Lead, JBoss Cache<br>
<a href="http://www.infinispan.org" target="_blank">http://www.infinispan.org</a><br>
<a href="http://www.jbosscache.org" target="_blank">http://www.jbosscache.org</a><br>
<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
</blockquote></div><br><br clear="all"><br>-- <br>Michael Lawson<br><br>