Hi Manik,
so I compiled this branch and ran a 20-minute stress test (preceded by a 10-minute warmup) on 128 nodes, with 10 stressor threads on each node.
While in 5.2.0.CR3 the maximum OOB thread pool size with this configuration was 553, with t_825 it was 219. That already looks good, but it's actually even better :). When I looked at the per-node maxima, in t_825 only a single node peaked at 219 threads; the others were usually around 25, with a few around 40. In 5.2.0.CR3, by contrast, every node peaked at around 500!
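
For anyone reading the thread without the JIRA handy, below is a minimal sketch of what the staggering idea roughly means; the class, the rpc callback and the delay value are invented for illustration and are not the actual t_825 code. Instead of sending the remote GET to every owner at once, we ask the first owner and only fall back to the next one after a short delay if no reply has arrived yet.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;

public class StaggeredGet<K, V> {

    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final long staggerDelayMillis;                         // assumed tunable, e.g. 50 ms
    private final BiFunction<String, K, CompletableFuture<V>> rpc; // hypothetical "send GET to this owner" call

    public StaggeredGet(long staggerDelayMillis, BiFunction<String, K, CompletableFuture<V>> rpc) {
        this.staggerDelayMillis = staggerDelayMillis;
        this.rpc = rpc;
    }

    /** Ask owners one after another; a later owner is only contacted if earlier ones have not answered yet. */
    public CompletableFuture<V> get(K key, List<String> owners) {
        CompletableFuture<V> result = new CompletableFuture<>();
        askOwner(key, owners, 0, result);
        return result;
    }

    private void askOwner(K key, List<String> owners, int index, CompletableFuture<V> result) {
        if (index >= owners.size() || result.isDone()) {
            return;
        }
        // Fire the remote GET at this owner; the first successful response wins.
        rpc.apply(owners.get(index), key).whenComplete((value, error) -> {
            if (error == null) {
                result.complete(value);
            }
        });
        // If no reply arrives within the stagger delay, also try the next owner.
        timer.schedule(() -> askOwner(key, owners, index + 1, result),
                       staggerDelayMillis, TimeUnit.MILLISECONDS);
    }
}

If that is roughly what the patch does, it would explain the numbers above: most GETs end up occupying a single OOB thread on a single owner, so the pool pressure no longer scales with the number of owners contacted per read.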
Glad to bring good news :)
Radim
----- Original Message -----
| From: "Manik Surtani" <msurtani@redhat.com>
| To: "infinispan -Dev List" <infinispan-dev@lists.jboss.org>, "Radim Vansa" <rvansa@redhat.com>
| Sent: Tuesday, February 19, 2013 6:33:04 PM
| Subject: Staggering remote GET calls
|
| Guys,
|
| I have a topic branch with a fix for ISPN-825, to stagger remote GET
| calls. (See the JIRA for details on this patch).
|
| This should have an interesting side effect: it greatly reduces the
| pressure on the OOB thread pool. It isn't a *real* fix for the
| problem that Radim reported (Pedro is working on that with Bela),
| but the reduced pressure on the OOB thread pool comes for free with
| this patch.
|
| It should generally make things faster too, with less traffic on the
| network. I'd be curious for you to give this branch a try, Radim -
| see how it impacts your tests.
|
| https://github.com/maniksurtani/infinispan/tree/t_825
|
| Cheers
| Manik
| --
| Manik Surtani
| manik@jboss.org
| twitter.com/maniksurtani
|
| Platform Architect, JBoss Data Grid
| http://red.ht/data-grid
|
|
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev