On 20 Jan 2012, at 17:57, Sanne Grinovero wrote:
inline:
On 20 January 2012 08:43, Bela Ban <bban(a)redhat.com> wrote:
> Hi Sanne,
>
> (redirected back to infinispan-dev)
>
>> Hello,
>> I've run the same Infinispan benchmark mentioned today on the
>> Infinispan mailing list, but having the goal to test NAKACK2
>> development.
>>
>> Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :
>>
>> Done 844,952,883 transactional operations in 22.08 minutes using
>> 5.1.0-SNAPSHOT
>> 839,810,425 reads and 5,142,458 writes
>> Reads / second: 634,028
>> Writes/ second: 3,882
>>
>> Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):
>>
>>
>> Done 807,220,989 transactional operations in 18.15 minutes using
>> 5.1.0-SNAPSHOT
>> 804,162,247 reads and 3,058,742 writes
>> Reads / second: 738,454
>> Writes/ second: 2,808
>>
>> Same versions and configuration, run again, as I was too surprised:
>>
>> Done 490,928,700 transactional operations in 10.94 minutes using
>> 5.1.0-SNAPSHOT
>> 489,488,339 reads and 1,440,361 writes
>> Reads / second: 745,521
>> Writes/ second: 2,193
>>
>> So the figures aren't very stable and I might need to run longer tests,
>> but there seems to be a trend of this new protocol speeding up read
>> operations at the cost of writes.
>
>
>
> This is really strange!
>
> In my own tests with 2 members on the same box (using MPerf), I found that
> the blockings on Table.add() and Table.removeMany() were much smaller than
> in the previous tests, and now the TP.TransferQueueBundler.send() method was
> the #1 culprit by far! Of course, it was still much smaller than the
> previous highest blockings!
I totally believe you; I'm wondering whether the fact that JGroups is
more efficient is making Infinispan writes slower. Consider as well
that these read figures are stellar; it's never been this fast before
(in this test, on my laptop). It makes me think of some unfair lock
acquired by readers, so that writers are not getting a chance to make
any progress.
Manik, Dan, any such lock around? If I profile monitors, these
figures change dramatically.
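As a toy illustration of the kind of reader bias being suspected here (this is hypothetical JDK-level code, not Infinispan's actual locking): with `java.util.concurrent.locks.ReentrantReadWriteLock`, an untimed `tryLock()` lets a reader "barge" past an already-queued writer, even when the lock was constructed in fair mode.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch (not Infinispan code): even on a *fair*
// ReentrantReadWriteLock, an untimed readLock().tryLock() jumps ahead
// of a writer that is already queued; a plain lock() would queue
// behind the writer instead. Enough barging readers starve the writer.
public class Barging {
    public static boolean readerBargesPastQueuedWriter() {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true); // fair
        rwl.readLock().lock();                       // reader 1 holds the lock
        Thread writer = new Thread(() -> {
            rwl.writeLock().lock();                  // blocks behind reader 1
            rwl.writeLock().unlock();
        });
        writer.start();
        while (!rwl.hasQueuedThreads()) {            // wait until the writer is queued
            Thread.yield();
        }
        boolean barged = rwl.readLock().tryLock();   // barges past the queued writer
        if (barged) {
            rwl.readLock().unlock();
        }
        rwl.readLock().unlock();                     // release reader 1; writer proceeds
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return barged;                               // true: the reader jumped the queue
    }
}
```

If readers keep arriving and acquiring the lock this way, the queued writer never gets a turn, which would match the symptom of stellar reads and collapsing writes.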
Yes, our (transactional) reads are phenomenally fast now. I think it has to do with
contention on the CHMs in the transaction table being optimised. In terms of JGroups,
perhaps writer threads being faster reduces the contention on these CHMs, so more reads
can be squeezed through. This is REPL mode, though; in DIST our reads are about the same
as 5.0.
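The kind of CHM contention being discussed can be sketched with a toy example (hypothetical code, not Infinispan's actual transaction table): many threads updating the same ConcurrentHashMap bucket serialize on that bin, while spreading updates across keys lets them proceed in parallel.

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch (not Infinispan code): time concurrent updates against a
// ConcurrentHashMap, either all hammering one hot key (contended) or
// each thread using its own key (mostly uncontended).
public class ChmContention {
    public static long update(ConcurrentHashMap<Integer, Long> map,
                              int threads, int opsPerThread, boolean sameKey) {
        long start = System.nanoTime();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            ts[t] = new Thread(() -> {
                for (int i = 0; i < opsPerThread; i++) {
                    int key = sameKey ? 0 : id;        // hot key vs per-thread key
                    map.merge(key, 1L, Long::sum);     // atomic read-modify-write
                }
            });
            ts[t].start();
        }
        for (Thread th : ts) {
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return System.nanoTime() - start;              // elapsed nanos
    }
}
```

Comparing the elapsed time for `sameKey = true` vs `false` at a fixed thread count gives a rough feel for how much a hot map entry costs, which is the effect an optimised transaction table would reduce.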
We could be in a situation in which the faster JGroups gets, the worse
the write numbers I get
That's the fault of the test. In a real-world scenario, faster reads will always be
good, since the reads (per timeslice) are finite. Once they are done, they are done, and
the writes can proceed. To model this in your test, fix the number of reads and writes
that will be performed, maybe even per timeslice (per minute or so), and then
measure the average time per read or write operation.
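The fixed-workload measurement suggested above could be sketched like this (a minimal illustration against any `Map`, with names of my own invention, not the actual benchmark code):

```java
import java.util.Map;

// Minimal sketch of a fixed-workload benchmark: perform a fixed number
// of writes and reads and report the average latency per operation,
// instead of raw throughput, so that faster reads cannot crowd the
// writes out of the measurement window.
public class FixedWorkloadBench {
    // Assumes reads > 0 and writes > 0. Returns {avg ns/read, avg ns/write}.
    public static double[] run(Map<Integer, Integer> cache, int reads, int writes) {
        long readNanos = 0, writeNanos = 0;
        for (int i = 0; i < writes; i++) {
            long t0 = System.nanoTime();
            cache.put(i, i);                         // timed write
            writeNanos += System.nanoTime() - t0;
        }
        for (int i = 0; i < reads; i++) {
            long t0 = System.nanoTime();
            cache.get(i % writes);                   // timed read of a written key
            readNanos += System.nanoTime() - t0;
        }
        return new double[] {
            (double) readNanos / reads,
            (double) writeNanos / writes
        };
    }
}
```

With a fixed read/write count per run, a JGroups change that makes reads faster shows up as lower read latency rather than as fewer completed writes.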
- not sure why, but I suspect that we
shouldn't use this test to evaluate JGroups code effectiveness;
there is too much other stuff going on.
>
> I'll run Transactional on my own box today, to see the diffs between various
> versions of JGroups.
> Can you send me your bench.sh? If I don't change the values, the test takes
> forever!
This is exactly how I'm running it:
https://github.com/Sanne/InfinispanStartupBenchmark/commit/c4efbc66abb6bf...
Pay attention to:
CFG="-Dbench.loops=100000000 $CFG"
(sets the maximum number of iterations to perform)
And:
versions="infinispan-5.1.SNAPSHOT"
to avoid running the previous versions, which are significantly slower.
Set some good MAVEN_OPTS as well. Be careful with the paths specific
to my system, which are the reason not to merge this patch into
master.
>
>
>
>
> --
> Bela Ban
> Lead JGroups (http://www.jgroups.org)
> JBoss / Red Hat
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Manik Surtani
manik(a)jboss.org
twitter.com/maniksurtani
Lead, Infinispan
http://www.infinispan.org