[infinispan-dev] MFC/UFC credits in default config

Dan Berindei dan.berindei at gmail.com
Tue Dec 18 02:57:08 EST 2012


Hi Radim

If you run the test with only 2 nodes and FC disabled, it's going to
perform even better. But as you increase the number of nodes, throughput
without FC drops dramatically (before we enabled RSVP, with only 3 nodes
it didn't manage to send a single 10MB message in 10 minutes).

Please run the tests with as many nodes as possible and just 1 x 10MB
message. If 500k still performs better, create a JIRA to change the default.
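
For reference, here is roughly what the change under discussion would look
like in the default stacks (core/src/main/resources/jgroups-(tcp|udp).xml).
This is only a sketch: 500K is just the proposal from this thread, not a
decided default, and the min_threshold value is illustrative. As a rough
back-of-the-envelope figure, with max_credits=200K a single 10MB message
needs on the order of 50 credit replenishment rounds per receiver, which is
where the blocking described below comes from.

    <!-- flow control: current default is max_credits="200K" on both;
         the proposal is 500K, keeping MFC and UFC in sync -->
    <MFC max_credits="500K"
         min_threshold="0.40"/>   <!-- multicast flow control -->
    <UFC max_credits="500K"
         min_threshold="0.40"/>   <!-- unicast flow control -->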

Cheers
Dan



On Mon, Dec 17, 2012 at 4:55 PM, Radim Vansa <rvansa at redhat.com> wrote:

> Sorry, I didn't specify the amount, silly me... my tests are
> working with 500k credits.
>
> UUPerf (JGroups 3.2.4.Final-redhat-1), run from one perflab machine to
> another, 2 threads (default), 1000 sends of a 10MB message (the default
> chunkSize of 10000 times our usual entry size of ~1kB), each setting executed 3x:
>
> 200k: Average of 6.02 requests/sec (60.19 MB/sec), 166.13 ms/request (prot=UNICAST2)
>       Average of 5.61 requests/sec (56.09 MB/sec), 178.30 ms/request (prot=UNICAST2)
>       Average of 5.49 requests/sec (54.94 MB/sec), 182.03 ms/request (prot=UNICAST2)
>
> 500k: Average of 7.93 requests/sec (79.34 MB/sec), 126.04 ms/request (prot=UNICAST2)
>       Average of 8.18 requests/sec (81.82 MB/sec), 122.23 ms/request (prot=UNICAST2)
>       Average of 8.41 requests/sec (84.09 MB/sec), 118.92 ms/request (prot=UNICAST2)
>
> Can you reproduce results like these as well? I think this suggests that
> 500k performs noticeably better.
>
> Radim
>
>
> ----- Original Message -----
> | From: "Dan Berindei" <dan.berindei at gmail.com>
> | To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
> | Sent: Monday, December 17, 2012 12:43:37 PM
> | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
> |
> | On Mon, Dec 17, 2012 at 1:28 PM, Bela Ban <bban at redhat.com> wrote:
> |
> | Dan reduced those values to 200K; IIRC it was for UUPerf, which behaved
> | best with 200K. I don't know if this is still needed. Dan?
> |
> | I haven't run UUPerf in a while...
> |
> | On 12/17/12 12:19 PM, Radim Vansa wrote:
> | > Hi,
> | >
> | > recently I synchronized our JGroups configuration with the
> | > default one shipped with Infinispan
> | > (core/src/main/resources/jgroups-(tcp|udp).xml), and it turned out
> | > that 200k credits in UFC/MFC (I keep the two values in sync) are
> | > not enough even for our smallest resilience test (killing one of
> | > four nodes). State transfer was often blocked waiting for more
> | > credits, which prevented it from completing within the time limit.
> | > Therefore, I'd like to suggest increasing the amount of credits in
> | > the default configuration as well, because we simply cannot use the
> | > lower setting and it's preferable to keep the configurations as
> | > close as possible. The only settings we need to keep different are
> | > the thread pool sizes, addresses and ports.
> | >
> |
> |
> | What value would you like to use instead?
> |
> | Can you try UUPerf with 200k and your proposed configuration and
> | compare the results?
> |
> | Cheers
> | Dan
> |
> |
>