[infinispan-dev] Staggering remote GET calls

Manik Surtani msurtani at redhat.com
Tue Feb 26 05:56:29 EST 2013


I'm not surprised that read performance suffers a bit, actually; that's why we broadcast the GETs originally.  But once the staggering timeout becomes configurable, this should be something people can tune.
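
(To make the trade-off concrete, here is a minimal sketch of the general staggering idea - not the actual t_825 code, and all names below are illustrative. The delay parameter is the "staggering timeout" in question: a small value behaves almost like the old broadcast, a large value behaves like asking the primary owner only.)

    import java.util.List;
    import java.util.concurrent.*;

    // Sketch of a staggered remote GET: ask the first owner immediately and
    // only contact the next owner if no reply arrives within the stagger delay.
    public class StaggeredGetSketch {
       private static final ScheduledExecutorService TIMER =
             Executors.newSingleThreadScheduledExecutor();

       interface RemoteNode<V> {
          CompletableFuture<V> getAsync(Object key);   // stand-in for the real RPC
       }

       static <V> CompletableFuture<V> get(List<RemoteNode<V>> owners, Object key,
                                           long staggerDelayMillis) {
          CompletableFuture<V> result = new CompletableFuture<>();
          // Ask the first owner straight away.
          owners.get(0).getAsync(key)
                .whenComplete((v, t) -> { if (t == null) result.complete(v); });
          // If it has not answered within the stagger delay, ask the backup too;
          // whichever reply arrives first wins. Error handling omitted for brevity.
          TIMER.schedule(() -> {
             if (!result.isDone() && owners.size() > 1) {
                owners.get(1).getAsync(key)
                      .whenComplete((v, t) -> { if (t == null) result.complete(v); });
             }
          }, staggerDelayMillis, TimeUnit.MILLISECONDS);
          return result;
       }
    }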

- M

On 21 Feb 2013, at 11:40, Radim Vansa <rvansa at redhat.com> wrote:

> Hi,
> 
> so I have re-run (and checked this time!) both tests with the same settings.
> The result is not as miraculous as the previous one, but it is still 500 compared to 200, which is good.
> However, read performance has dropped by 25%, while write performance has increased by 57%!
> 
> See the charts (mainly the distribution of maximum OOB thread pool sizes) in the attachment.
> 
> Config: library mode, non-transactional distributed cache with 2 owners and 512 segments, sync replication.
> Test: 10m warmup, 20m test, 10 stressor threads per node.
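> 
> (For reference, a rough programmatic equivalent of that configuration - this is my own sketch of the ConfigurationBuilder calls, not taken from the test harness, and the exact fluent API may differ a little between versions:)
> 
>     import org.infinispan.configuration.cache.*;
>     import org.infinispan.transaction.TransactionMode;
> 
>     // Non-transactional distributed cache with sync replication,
>     // 2 owners per key, 512 hash segments - library (embedded) mode.
>     Configuration cfg = new ConfigurationBuilder()
>           .clustering().cacheMode(CacheMode.DIST_SYNC)
>              .hash().numOwners(2).numSegments(512)
>           .transaction().transactionMode(TransactionMode.NON_TRANSACTIONAL)
>           .build();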
> 
> Radim
> 
> ----- Original Message -----
> | From: "Radim Vansa" <rvansa at redhat.com>
> | To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
> | Sent: Wednesday, February 20, 2013 3:38:49 PM
> | Subject: Re: [infinispan-dev] Staggering remote GET calls
> | 
> | Ouch, call me a dumbass... I hadn't checked the test results.
> | Something revoked my cluster allocation and the test was stopped
> | prematurely.
> | 
> | I'll rerun it (and check!), and show performance numbers as well.
> | 
> | Radim (the dumbass)
> | 
> | ----- Original Message -----
> | | From: "Dan Berindei" <dan.berindei at gmail.com>
> | | To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
> | | Cc: "Manik Surtani" <msurtani at redhat.com>
> | | Sent: Wednesday, February 20, 2013 3:24:46 PM
> | | Subject: Re: [infinispan-dev] Staggering remote GET calls
> | |  
> | | 
> | | Radim, just to be sure, you are testing embedded mode with RadarGun, right?
> | | With HotRod most of the get operations should be initiated from the main
> | | owner, so Manik's changes shouldn't make a big difference in the number of
> | | active threads.
> | | 
> | | How about throughput, has it also improved compared to 5.2.0.CR3, or is it
> | | the same?
> | | 
> | | On Wed, Feb 20, 2013 at 2:15 PM, Radim Vansa < rvansa at redhat.com > wrote:
> | | 
> | | Hi Manik,
> | | 
> | | so I have tried to compile this branch and ran a 20-minute stress test
> | | (preceded by a 10-minute warmup) on 128 nodes, where each node has 10
> | | stressor threads.
> | | While in 5.2.0.CR3 the maximum OOB threadpool size was 553 with this
> | | configuration, with t_825 it was 219. This looks good, but it's actually
> | | even better :). When I looked at the per-node maxima, in t_825 there was
> | | only one node with 219 threads (the overall max); the others were usually
> | | around 25, with a few around 40. By contrast, in 5.2.0.CR3 all the nodes
> | | had maxima around 500!
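> | | 
> | | (A back-of-envelope sanity check, purely my own estimate assuming uniform
> | | keys and steady load, not something measured in the test:
> | | 
> | |     reads in flight             = 128 nodes x 10 stressors         = 1280
> | |     GETs on the wire, broadcast = 1280 reads x 2 owners            = ~2560, i.e. ~20 handled per node on average
> | |     GETs on the wire, staggered = 1280 reads x 1 owner (common case) = ~1280, i.e. ~10 per node
> | | 
> | | The measured per-node maxima sit well above these averages because of
> | | bursts and because OOB threads can block, but halving the number of GET
> | | messages per read is the mechanism behind the drop in peak pool size.)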
> | | 
> | | Glad to bring good news :)
> | | 
> | | Radim
> | | 
> | | ----- Original Message -----
> | | | From: "Manik Surtani" < msurtani at redhat.com >
> | | | To: "infinispan -Dev List" < infinispan-dev at lists.jboss.org >,
> | | | "Radim Vansa" < rvansa at redhat.com >
> | | | Sent: Tuesday, February 19, 2013 6:33:04 PM
> | | | Subject: Staggering remote GET calls
> | | | 
> | | | Guys,
> | | | 
> | | | I have a topic branch with a fix for ISPN-825, to stagger remote GET
> | | | calls. (See the JIRA for details on this patch.)
> | | | 
> | | | This should have an interesting effect, greatly reducing the pressure
> | | | on the OOB thread pool. This isn't a *real* fix for the problem that
> | | | Radim reported (Pedro is working on that with Bela), but reducing
> | | | pressure on the OOB thread pool is a side effect of this fix.
> | | | 
> | | | It should generally make things faster too, with less traffic on the
> | | | network. I'd be curious for you to give this branch a try, Radim - see
> | | | how it impacts your tests.
> | | | 
> | | | https://github.com/maniksurtani/infinispan/tree/t_825
> | | | 
> | | | Cheers
> | | | Manik
> | | | --
> | | | Manik Surtani
> | | | manik at jboss.org
> | | | twitter.com/maniksurtani
> | | | 
> | | | Platform Architect, JBoss Data Grid
> | | | http://red.ht/data-grid
> | | | 
> | | | 
> <t825.pdf>

--
Manik Surtani
manik at jboss.org
twitter.com/maniksurtani

Platform Architect, JBoss Data Grid
http://red.ht/data-grid



