Sadly no. I spent some time playing around with the client/server code to try to increase efficiency, but the best round-trip time I could get for a get request was 44ms. I believe this is where Hot Rod should come into play.<br>
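<br>
The sort of thing I tried on the client side looks roughly like this (a simplified sketch, not the exact code; host and port are placeholders, and the socket options are generic latency tweaks rather than anything Infinispan-specific):<br>
<br>
Socket sock = new Socket(host, port);<br>
sock.setTcpNoDelay(true);  // disable Nagle's algorithm, which can add tens of ms to small request/response exchanges<br>
DataOutputStream out = new DataOutputStream(new BufferedOutputStream(sock.getOutputStream()));<br>
DataInputStream in = new DataInputStream(new BufferedInputStream(sock.getInputStream()));<br>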
<br><div class="gmail_quote">On Tue, Nov 24, 2009 at 11:18 PM, Manik Surtani <span dir="ltr"><<a href="mailto:manik@jboss.org">manik@jboss.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Did we ever get to the bottom of this?<br>
<div><div></div><div class="h5"><br>
On 19 Nov 2009, at 07:36, Bela Ban wrote:<br>
<br>
> I looked at your test and noticed you're using<br>
> Object{Output,Input}Streams. These are very inefficient!<br>
><br>
> I suggest using simple data types for now, e.g. only ints for keys and<br>
> values. This way you could send a put as CMD | KEY | VAL, which is 3<br>
> ints. This would allow you to simply use a Data{Output,Input}Stream.<br>
><br>
> This is not the real world, I know, but we should focus on round-trip<br>
> times and latency before we get into the overhead of marshalling complex data types.<br>
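><br>
> In code, the framing would look something like this (an untested sketch; the<br>
> PUT/GET constants and the echo-back behaviour are just assumptions for illustration):<br>
><br>
> // client side: send a put as CMD | KEY | VAL, three ints on the wire<br>
> DataOutputStream out = new DataOutputStream(<br>
>         new BufferedOutputStream(sock.getOutputStream()));<br>
> out.writeInt(PUT);    // CMD, e.g. static final int PUT = 1<br>
> out.writeInt(key);    // KEY as a plain int<br>
> out.writeInt(value);  // VAL as a plain int<br>
> out.flush();<br>
><br>
> // a get is CMD | KEY, and the server writes the int value straight back<br>
> DataInputStream in = new DataInputStream(<br>
>         new BufferedInputStream(sock.getInputStream()));<br>
> out.writeInt(GET);    // e.g. static final int GET = 2<br>
> out.writeInt(key);<br>
> out.flush();<br>
> int val = in.readInt();<br>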
><br>
><br>
> Michael Lawson (mshindo) wrote:<br>
>> We have ruled out the problem being related to JGroups, as the same<br>
>> configuration performs perfectly when run locally (i.e. not on Amazon EC2).<br>
>><br>
>> *Let me outline the testing more specifically:*<br>
>><br>
>> I have created a very simple socket client and server to communicate with<br>
>> Infinispan nodes. This provides a mechanism to connect to the targeted nodes<br>
>> and send get and insert commands along with the required data.<br>
>> These insertions and retrievals are then timed from the client. As it stands,<br>
>> this system works perfectly in a local environment on my own network.<br>
>> However, as soon as we attempt to test on the Amazon EC2 cloud, which is<br>
>> required for benchmarking against other products, retrieval times jump from<br>
>> under a millisecond to around 160ms, depending on the value size and the<br>
>> number of nodes in the cluster.<br>
>><br>
>> The reason we are testing using this client -> server model is that we are<br>
>> also testing concurrency, to see what happens when we send thousands of<br>
>> requests from different sources.<br>
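>><br>
>> In essence, the load driver does something like this (a simplified sketch of<br>
>> the pastebin code; THREADS, REQUESTS, client.get() and record() are<br>
>> illustrative names, not the actual identifiers):<br>
>><br>
>> ExecutorService pool = Executors.newFixedThreadPool(THREADS);<br>
>> for (int i = 0; i < REQUESTS; i++) {<br>
>>     pool.submit(new Runnable() {<br>
>>         public void run() {<br>
>>             long start = System.nanoTime();<br>
>>             client.get(key);                                 // one round trip to a node<br>
>>             record((System.nanoTime() - start) / 1000000.0); // latency in ms<br>
>>         }<br>
>>     });<br>
>> }<br>
>> pool.shutdown();<br>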
>><br>
>> I have used TCPPING both locally and on the Amazon cloud (as multicast is not<br>
>> available in that environment), and the results are exactly the same as before:<br>
>> perfect numbers locally, bad numbers remotely. This is proving to be quite a<br>
>> mystery.<br>
>><br>
>> I have uploaded my client and server code here:<br>
>> <a href="http://pastebin.org/54960" target="_blank">http://pastebin.org/54960</a>.<br>
>><br>
>> Any clues?<br>
>><br>
>> On Wed, Nov 18, 2009 at 4:34 PM, Michael Lawson (mshindo) <<br>
>> <a href="mailto:michael@sphinix.com">michael@sphinix.com</a>> wrote:<br>
>><br>
>><br>
>>> Are there any official socket clients available?<br>
>>><br>
>>><br>
>>> On Tue, Nov 17, 2009 at 11:40 PM, Manik Surtani <<a href="mailto:manik@jboss.org">manik@jboss.org</a>> wrote:<br>
>>><br>
>>><br>
>>>> On 17 Nov 2009, at 04:54, Michael Lawson (mshindo) wrote:<br>
>>>><br>
>>>> The benchmarking in question is simple insertions and retrievals run via<br>
>>>> sockets. These benchmarks return better results when run on a local machine;<br>
>>>> however, the testing in question is being done on the Amazon EC2 cloud.<br>
>>>> Running on EC2 was a problem in itself, but I followed the instructions<br>
>>>> in a blog post and used an XML file to configure the transport properties:<br>
>>>><br>
>>>> <config xmlns="urn:org:jgroups"<br>
>>>>         xmlns:xsi="<a href="http://www.w3.org/2001/XMLSchema-instance" target="_blank">http://www.w3.org/2001/XMLSchema-instance</a>"<br>
>>>>         xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-2.8.xsd"><br>
>>>>    <TCP bind_port="7800" /><br>
>>>>    <TCPPING timeout="3000"<br>
>>>>             initial_hosts="${jgroups.tcpping.initial_hosts:10.209.166.79[7800],10.209.198.176[7800],10.208.199.223[7800],10.208.190.224[7800],10.208.70.112[7800]}"<br>
>>>>             port_range="1"<br>
>>>>             num_initial_members="3"/><br>
>>>>    <MERGE2 max_interval="30000" min_interval="10000"/><br>
>>>>    <FD_SOCK/><br>
>>>>    <FD timeout="10000" max_tries="5" /><br>
>>>>    <VERIFY_SUSPECT timeout="1500" /><br>
>>>>    <pbcast.NAKACK use_mcast_xmit="false" gc_lag="0"<br>
>>>>                   retransmit_timeout="300,600,1200,2400,4800"<br>
>>>>                   discard_delivered_msgs="true"/><br>
>>>>    <UNICAST timeout="300,600,1200" /><br>
>>>>    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/><br>
>>>>    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/><br>
>>>>    <FC max_credits="2000000" min_threshold="0.10"/><br>
>>>>    <FRAG2 frag_size="60000" /><br>
>>>>    <pbcast.STREAMING_STATE_TRANSFER/><br>
>>>> </config><br>
>>>><br>
>>>> I have a theory that perhaps the introduction of TCPPING in the JGroups<br>
>>>> configuration is resulting in some form of polling before the actual get<br>
>>>> request is processed and returned. Could this be the case?<br>
>>>><br>
>>>><br>
>>>> It could be - JGroups also has an experimental protocol called S3_PING<br>
>>>> which could help.<br>
>>>><br>
>>>><br>
>>>> <a href="http://javagroups.cvs.sourceforge.net/viewvc/javagroups/JGroups/src/org/jgroups/protocols/S3_PING.java?revision=1.2&view=markup" target="_blank">http://javagroups.cvs.sourceforge.net/viewvc/javagroups/JGroups/src/org/jgroups/protocols/S3_PING.java?revision=1.2&view=markup</a><br>
>>>><br>
>>>> Another approach for discovery in an EC2 environment is to use a<br>
>>>> GossipRouter, but I'd give S3_PING a try first.<br>
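>>>><br>
>>>> Wiring it in would just mean replacing TCPPING in the stack above with<br>
>>>> something along these lines (the bucket name and AWS keys are placeholders,<br>
>>>> and you should check the attribute names against your JGroups version):<br>
>>>><br>
>>>>    <S3_PING location="my-discovery-bucket"<br>
>>>>             access_key="MY_AWS_ACCESS_KEY"<br>
>>>>             secret_access_key="MY_AWS_SECRET_KEY"<br>
>>>>             timeout="3000"<br>
>>>>             num_initial_members="3"/><br>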
>>>><br>
>>>> Cheers<br>
>>>> Manik<br>
>>>><br>
>>>> On Tue, Nov 17, 2009 at 12:03 AM, Manik Surtani <<a href="mailto:manik@jboss.org">manik@jboss.org</a>> wrote:<br>
>>>><br>
>>>><br>
>>>>> Hi Michael<br>
>>>>><br>
>>>>> Could you please detail your benchmark test a bit more? We have done<br>
>>>>> some internal benchmarks as well and things do look significantly different.<br>
>>>>> Could you also tell us which version you have been benchmarking? We've<br>
>>>>> made some significant changes to DIST between CR1 and CR2 with regard to<br>
>>>>> performance.<br>
>>>>><br>
>>>>> FYI, we use the CacheBenchFwk [1] to help benchmark stuff; you may find<br>
>>>>> this useful too.<br>
>>>>><br>
>>>>> Cheers<br>
>>>>> Manik<br>
>>>>><br>
>>>>> [1] <a href="http://cachebenchfwk.sourceforge.net" target="_blank">http://cachebenchfwk.sourceforge.net</a><br>
>>>>><br>
>>>>><br>
>>>>> On 15 Nov 2009, at 22:00, Michael Lawson (mshindo) wrote:<br>
>>>>><br>
>>>>><br>
>>>>>> Hi,<br>
>>>>>> I have been performing some benchmark testing on Infinispan running in<br>
>>>>>> distributed mode, with some unexpected results.<br>
>>>>>><br>
>>>>>> For an insertion with a key size of 100 bytes and a value size of 100<br>
>>>>>> bytes, the insertion time was 0.13ms and retrieval was 128.06ms.<br>
>>>>>><br>
>>>>>> Communication with the Infinispan nodes is being done via a socket<br>
>>>>>> interface, using standard Java serialization.<br>
>>>>>><br>
>>>>>> The retrieval time is consistently high in comparison to other systems,<br>
>>>>>> and I am wondering whether there are some other benchmark reports<br>
>>>>>> floating around that I can compare results with.<br>
>>>>><br>
>>>>>> --<br>
>>>>>> Michael Lawson<br>
>>>>>><br>
>>>>>> _______________________________________________<br>
>>>>>> infinispan-dev mailing list<br>
>>>>>> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
>>>>>> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
>>>>>><br>
>>>>> --<br>
>>>>> Manik Surtani<br>
>>>>> <a href="mailto:manik@jboss.org">manik@jboss.org</a><br>
>>>>> Lead, Infinispan<br>
>>>>> Lead, JBoss Cache<br>
>>>>> <a href="http://www.infinispan.org" target="_blank">http://www.infinispan.org</a><br>
>>>>> <a href="http://www.jbosscache.org" target="_blank">http://www.jbosscache.org</a><br>
>>>>><br>
>>>>> _______________________________________________<br>
>>>>> infinispan-dev mailing list<br>
>>>>> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
>>>>> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
>>>>><br>
>>>>><br>
>>>><br>
>>>> --<br>
>>>> Michael Lawson<br>
>>>><br>
>>>> _______________________________________________<br>
>>>> infinispan-dev mailing list<br>
>>>> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
>>>> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
>>>><br>
>>>><br>
>>>> --<br>
>>>> Manik Surtani<br>
>>>> <a href="mailto:manik@jboss.org">manik@jboss.org</a><br>
>>>> Lead, Infinispan<br>
>>>> Lead, JBoss Cache<br>
>>>> <a href="http://www.infinispan.org" target="_blank">http://www.infinispan.org</a><br>
>>>> <a href="http://www.jbosscache.org" target="_blank">http://www.jbosscache.org</a><br>
>>>><br>
>>>> _______________________________________________<br>
>>>> infinispan-dev mailing list<br>
>>>> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
>>>> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
>>>><br>
>>>><br>
>>><br>
>>> --<br>
>>> Michael Lawson (mshindo)<br>
>>><br>
>><br>
>><br>
>> _______________________________________________<br>
>> infinispan-dev mailing list<br>
>> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
>> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
><br>
> --<br>
> Bela Ban<br>
> Lead JGroups / Clustering Team<br>
> JBoss<br>
><br>
> _______________________________________________<br>
> infinispan-dev mailing list<br>
> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
<br>
--<br>
Manik Surtani<br>
<a href="mailto:manik@jboss.org">manik@jboss.org</a><br>
Lead, Infinispan<br>
Lead, JBoss Cache<br>
<a href="http://www.infinispan.org" target="_blank">http://www.infinispan.org</a><br>
<a href="http://www.jbosscache.org" target="_blank">http://www.jbosscache.org</a><br>
<br>
_______________________________________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Michael Lawson (mshindo)<br><br>