[infinispan-dev] MFC/UFC credits in default config

Dan Berindei dan.berindei at gmail.com
Thu Jan 3 10:46:37 EST 2013


On Thu, Jan 3, 2013 at 5:26 PM, Radim Vansa <rvansa at redhat.com> wrote:

>
> |
> | |
> | |
> | | Bela, I'm pretty sure these tests use UDP. I'd be really surprised
> | | if we could improve TCP performance by lowering max_credits.
> |
> | True, they do.
> |
> | So you are running the tests with TCP?
> |
>
> No, I have confirmed the first sentence. The tests use UDP.
>
>
Cool, I wasn't sure which sentence you were replying to.
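
(As an aside, for re-running these tests with different credit values, here
is a minimal sketch of bumping max_credits programmatically instead of
keeping several XML copies around. It assumes the JGroups 3.x
Protocol.setValue API for @Property attributes; the names CreditsTweak and
channelWithCredits below are just illustrative. The XML equivalent is the
max_credits attribute on <UFC/> and <MFC/> in jgroups-udp.xml.)

    import org.jgroups.JChannel;
    import org.jgroups.protocols.MFC;
    import org.jgroups.protocols.UFC;
    import org.jgroups.stack.Protocol;

    public class CreditsTweak {
        // Sketch: load the stock UDP stack and raise the flow-control
        // credits before connecting -- the programmatic equivalent of
        // editing max_credits on <UFC/> and <MFC/> in jgroups-udp.xml.
        public static JChannel channelWithCredits(long maxCredits) throws Exception {
            JChannel ch = new JChannel("jgroups-udp.xml");
            Protocol ufc = ch.getProtocolStack().findProtocol(UFC.class);
            Protocol mfc = ch.getProtocolStack().findProtocol(MFC.class);
            if (ufc != null) ufc.setValue("max_credits", maxCredits);
            if (mfc != null) mfc.setValue("max_credits", maxCredits);
            return ch; // caller connects
        }
    }

E.g. channelWithCredits(500000L) for the 500k runs vs. channelWithCredits(200000L)
for the current default.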


> |
> | |
> | | We do have a JIRA to change the state transfer behaviour to request
> | | state from only a few nodes at a time (perhaps only 1):
> | | https://issues.jboss.org/browse/ISPN-2580 . Adrian is working on it
> | | ATM, and once it's integrated it would make UUPerf performance
> | | largely irrelevant.
> |
> | I don't think so. I expect that e.g. pulling ST from 3 nodes is a
> | perfectly reasonable scenario, and as these tests are run with 4
> | nodes, that is exactly the case.
> |
> |
> |
> | Based on the test results we have so far, I think it will be very
> | hard to come up with a configuration that performs better with 3
> | state transfer sources than with 2. That's even without
> | considering the effects on performance when there isn't a state
> | transfer in progress.
> |
> |
> | So we could spend a lot of time on improving the performance with 3
> | sources, and never quite get to the 2-sources performance, or we
> | could just make 2 the default and "not recommend" changing the
> | value. (We could also hard-code the number of sources, but exposing
> | the setting will make it easier to test different values and confirm
> | which one is best).
> |
>
> I must agree; or rather, the results are the only real judge here, not any
> of our (~my) assumptions.
>
> |
> | |
> | | Even if Adrian's fix doesn't make it into Final, I think a
> | | max_credits of only 20k would impact performance in the "stable
> | | state" (i.e. what UPerf is testing). So maybe we can find a
> | | workaround, like lowering Infinispan's stateTransfer.chunkSize.
> |
> | Yeah, I have used 10MB messages for testing, I should do that for
> | smaller ones as well.
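
(For reference, a minimal sketch of the chunk-size workaround mentioned
above, i.e. lowering stateTransfer.chunkSize so state-transfer messages stay
small. The builder calls follow Infinispan's programmatic configuration; the
class name ChunkSizeConfig and the value 512 are just illustrative, and the
exact builder methods may differ between versions. The XML equivalent is the
chunkSize attribute on the stateTransfer element.)

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class ChunkSizeConfig {
        public static Configuration smallChunks() {
            // Sketch: with ~1kB entries, 512 entries per chunk keeps each
            // state-transfer message around 0.5MB instead of the ~10MB that
            // the default chunkSize of 10000 produces.
            return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    .stateTransfer()
                        .fetchInMemoryState(true)
                        .chunkSize(512)
                .build();
        }
    }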
> |
> |
> | |
> | | I wonder if we could automate UPerf and UUPerf, like RadarGun does
> | | (or maybe make them RadarGun test scenarios?), so we can gather
> | | more data points. At the moment there's a lot of manual work involved
> | | in running the tests with all the possible configurations
> | | (TCP/UNICAST2, TCP/UNICAST2/UFC, UDP/UNICAST, UDP/UNICAST/UFC,
> | | UDP/UNICAST2/UFC, UDP/UNICAST2/UFC/RSVP, each protocol with several
> | | tweak-able attributes) and figuring out which configuration is
> | | "best".
> |
> | This sounds good: using the JGroups cachewrapper I could just do GET on
> | one slave in a loop, right? The only modification required is that
> | JGroupsWrapper.get should do dispatcher.callRemoteMethods(...) with
> | all members instead of just a single invocation. I think I could grab
> | some time for this next week.
> |
> |
> | I think to make it really like state transfer you'd have to keep one
> | GET target, but make all nodes pick the same target (e.g. the first
> | node) and make the key really big. Making all nodes targets would
> | work as well, but you'd have to do that on only one node to mimic a
> | single joiner asking for state.
> |
>
> A single joiner flooded by data was the problem, wasn't it? We could of
> course test both: a single joiner joining a big cluster, and "superelasticity"
> where many nodes request data from a single node. Still, the second one is
> not problematic for flow control, because the source supplies the data only
> as fast as it can, and each node only has to handle its fraction of it.
>
>
1-to-n GET requests with huge values or n-to-1 GET requests with huge keys
should be roughly equivalent, as they'd both test many nodes sending
messages to a single node (i.e. the joiner). Given that, I don't think it's
worth testing the many-joiners case separately.
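
(To make the suggested wrapper change concrete, here is a rough sketch of a
broadcast GET over an RpcDispatcher, i.e. calling get() on all members
instead of a single unicast. BroadcastGetWrapper and broadcastGet are
made-up names standing in for RadarGun's JGroupsWrapper, and the 10MB dummy
value is only there so the responses resemble state-transfer chunks.)

    import org.jgroups.JChannel;
    import org.jgroups.blocks.MethodCall;
    import org.jgroups.blocks.RequestOptions;
    import org.jgroups.blocks.RpcDispatcher;
    import org.jgroups.util.RspList;

    // Hypothetical stand-in for the RadarGun JGroups wrapper discussed above.
    public class BroadcastGetWrapper {

        private final JChannel channel;
        private final RpcDispatcher dispatcher;

        public BroadcastGetWrapper(String configFile, String clusterName) throws Exception {
            channel = new JChannel(configFile);            // e.g. "jgroups-udp.xml"
            dispatcher = new RpcDispatcher(channel, this); // exposes get() to remote callers
            channel.connect(clusterName);
        }

        // Invoked remotely on every member; returns a large dummy value so
        // the responses flowing back to the caller resemble state chunks.
        public byte[] get(Object key) {
            return new byte[10 * 1024 * 1024];
        }

        // The change discussed above: instead of a single unicast, invoke
        // get() on all members (dests == null means "everyone") and wait for
        // all responses, so n nodes push data to this one node at once.
        public RspList<byte[]> broadcastGet(Object key) throws Exception {
            MethodCall call = new MethodCall("get", new Object[]{key}, new Class[]{Object.class});
            return dispatcher.callRemoteMethods(null, call, RequestOptions.SYNC().setTimeout(60000));
        }
    }

Running broadcastGet() in a loop on a single node would then mimic a joiner
pulling state from every other member at the same time.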


> Radim
>
> | |
> | |
> | |
> | | On Thu, Jan 3, 2013 at 12:42 PM, Bela Ban < bban at redhat.com >
> | | wrote:
> | |
> | |
> | | Let's make sure though that we have a meaningful default that's not
> | | optimized for an edge case. Also, if we use TCP, we can remove UFC
> | | from the config, as TCP already performs point-to-point flow control.
> | |
> | |
> | |
> | | On 1/3/13 11:29 AM, Radim Vansa wrote:
> | | > 20k credits seems to be the best choice for this test:
> | | >
> | | > 10k: bad performance
> | | > 20k: Average of 2.79 requests / sec (27.87MB / sec), 358.81 ms
> | | > /request (prot=UNICAST2)
> | | > 30k: Average of 2.52 requests / sec (25.18MB / sec), 397.15 ms
> | | > /request (prot=UNICAST2)
> | | > 50k: Average of 2.35 requests / sec (23.47MB / sec), 426.10 ms
> | | > /request (prot=UNICAST2)
> | | > 80k: Average of 1.29 requests / sec (12.94MB / sec), 772.78 ms
> | | > /request (prot=UNICAST2)
> | | > 200k: bad performance
> | | >
> | | > (as a reminder: 4 nodes in hyperion; for these results I've set an
> | | > 8k frag_size)
> | | >
> | | > I held the dot key down for the duration of the test, so you can see
> | | > how long each state application took, as the dots were inserted into
> | | > the console at a constant rate (lame ASCII chart). See attachments.
> | | >
> | | > Radim
> | | >
> | | > ----- Original Message -----
> | | > | From: "Dan Berindei" < dan.berindei at gmail.com >
> | | > | To: "infinispan -Dev List" < infinispan-dev at lists.jboss.org >
> | | > | Sent: Monday, December 24, 2012 8:01:26 AM
> | | > | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
> | | > |
> | | > |
> | | > |
> | | > |
> | | > | This is weird, I would have expected problems with the last
> | | > | message, but not in the middle of the sequence (that's why I
> | | > | suggested sending only 1 message). Maybe we need an even lower
> | | > | max_credits...
> | | > |
> | | > | Merry Christmas to you, too!
> | | > |
> | | > | Dan
> | | > | On 21 Dec 2012 16:41, "Radim Vansa" < rvansa at redhat.com >
> | | > | wrote:
> | | > |
> | | > |
> | | > | Hi Dan,
> | | > |
> | | > | I have run the test on 4 nodes in hyperion (just for a start, to
> | | > | see how it will behave) but with 100 messages (1 message is nothing
> | | > | for a statistician), each 10MB, and I see a weird behaviour: there
> | | > | are about 5-10 messages received in fast succession and then
> | | > | nothing is received for several seconds. I experience this
> | | > | behaviour for both 200k and 500k credits. Is this really how it
> | | > | should perform?
> | | > |
> | | > | Merry Christmas and tons of snow :)
> | | > |
> | | > | Radim
> | | > |
> | | > | ☃
> | | > |
> | | > | ----- Original Message -----
> | | > | | From: "Dan Berindei" < dan.berindei at gmail.com >
> | | > | | To: "infinispan -Dev List" < infinispan-dev at lists.jboss.org >
> | | > | | Sent: Tuesday, December 18, 2012 8:57:08 AM
> | | > | | Subject: Re: [infinispan-dev] MFC/UFC credits in default
> | | > | | config
> | | > | |
> | | > | |
> | | > | | Hi Radim
> | | > | |
> | | > | | If you run the test with only 2 nodes and FC disabled, it's
> | | > | | going to perform even better. But then as you increase the number
> | | > | | of nodes, the speed with no FC will drop dramatically (when we
> | | > | | didn't have RSVP enabled, with only 3 nodes, it didn't manage to
> | | > | | send 1 x 10MB message in 10 minutes).
> | | > | |
> | | > | | Please run the tests with as many nodes as possible and just
> | | > | | 1 message x 10MB. If 500k still performs better, create a JIRA
> | | > | | to change the default.
> | | > | |
> | | > | | Cheers
> | | > | | Dan
> | | > | |
> | | > | |
> | | > | |
> | | > | |
> | | > | |
> | | > | | On Mon, Dec 17, 2012 at 4:55 PM, Radim Vansa <
> | | > | | rvansa at redhat.com >
> | | > | | wrote:
> | | > | |
> | | > | |
> | | > | | Sorry I haven't specified the amount, I am a stupido... my
> | | > | | tests are working with 500k credits.
> | | > | |
> | | > | | UUPerf (JGroups 3.2.4.Final-redhat-1) from one computer in
> | | > | | perflab to another, 2 threads (default), 1000 sends of a 10MB
> | | > | | message (default chunkSize = 10000 entries * our usual entry
> | | > | | size of 1kB), executed 3x:
> | | > | |
> | | > | | 200k: Average of 6.02 requests / sec (60.19MB / sec), 166.13 ms /request (prot=UNICAST2)
> | | > | |       Average of 5.61 requests / sec (56.09MB / sec), 178.30 ms /request (prot=UNICAST2)
> | | > | |       Average of 5.49 requests / sec (54.94MB / sec), 182.03 ms /request (prot=UNICAST2)
> | | > | |
> | | > | | 500k: Average of 7.93 requests / sec (79.34MB / sec), 126.04 ms /request (prot=UNICAST2)
> | | > | |       Average of 8.18 requests / sec (81.82MB / sec), 122.23 ms /request (prot=UNICAST2)
> | | > | |       Average of 8.41 requests / sec (84.09MB / sec), 118.92 ms /request (prot=UNICAST2)
> | | > | |
> | | > | | Can you reproduce such results as well? I think this suggests
> | | > | | that 500k really does perform better.
> | | > | |
> | | > | | Radim
> | | > | |
> | | > | |
> | | > | |
> | | > | |
> | | > | | ----- Original Message -----
> | | > | | | From: "Dan Berindei" < dan.berindei at gmail.com >
> | | > | | | To: "infinispan -Dev List" < infinispan-dev at lists.jboss.org
> | | > | | | >
> | | > | | | Sent: Monday, December 17, 2012 12:43:37 PM
> | | > | | | Subject: Re: [infinispan-dev] MFC/UFC credits in default
> | | > | | | config
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | | On Mon, Dec 17, 2012 at 1:28 PM, Bela Ban < bban at redhat.com
> | | > | | | >
> | | > | | | wrote:
> | | > | | |
> | | > | | |
> | | > | | | Dan reduced those values to 200K, IIRC it was for UUPerf,
> | | > | | | which behaved best with 200K. I don't know if this is still
> | | > | | | needed. Dan?
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | | I haven't run UUPerf in a while...
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | |
> | | > | | | On 12/17/12 12:19 PM, Radim Vansa wrote:
> | | > | | | > Hi,
> | | > | | | >
> | | > | | | > recently I have synchronized our jgroups configuration with
> | | > | | | > the default one shipped with Infinispan
> | | > | | | > (core/src/main/resources/jgroups-(tcp|udp).xml), and it has
> | | > | | | > shown that 200k credits in UFC/MFC (I keep the two values in
> | | > | | | > sync) is not enough even for our smallest resilience test
> | | > | | | > (killing one of four nodes). The state transfer was often
> | | > | | | > blocked when requesting more credits, which resulted in it
> | | > | | | > not completing within the time limit.
> | | > | | | > Therefore, I'd like to suggest increasing the amount of
> | | > | | | > credits in the default configuration as well, because we
> | | > | | | > simply cannot use the lower setting, and it's preferable to
> | | > | | | > have the configurations as close as possible. The only
> | | > | | | > settings we need to keep different are the thread pool sizes,
> | | > | | | > addresses and ports.
> | | > | | | >
> | | > | | |
> | | > | | |
> | | > | | | What value would you like to use instead?
> | | > | | |
> | | > | | | Can you try UUPerf with 200k and your proposed
> | | > | | | configuration
> | | > | | | and
> | | > | | | compare the results?
> | | > | | |
> | | > | | | Cheers
> | | > | | | Dan
> | | > | | |
> | | > | | |
> | | > | |
> | | > | |
> | | > | |
> | | > | |
> | | > |
> | | >
> | | >
> | |
> | |
> | | --
> | | Bela Ban, JGroups lead ( http://www.jgroups.org )
> | |
> | |
> | |
> | |
> |
> |
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

