[infinispan-dev] New bundler performance
Mircea Markus
mmarkus at redhat.com
Wed Jun 12 10:57:58 EDT 2013
On 12 Jun 2013, at 15:38, Bela Ban <bban at redhat.com> wrote:
>>>> I was going through the commits (running tests on each of them) to
>>>> track down the performance regression we recently discovered, and it
>>>> seems that our test (a replicated, UDP, non-transactional stress test
>>>> on 4 nodes) shows a serious regression starting at this commit:
>>>>
>>>> ISPN-2848 Use the new bundling mechanism from JGroups 3.3.0
>>>> (73da108cdcf9db4f3edbcd6dbda6938d6e45d148)
>>>>
>>>> The performance drops from about 7800 writes/s to 4800 writes/s, and
>>>> from 1.5M reads/s to 1.2M reads/s (having slower reads in replicated
>>>> mode is really odd).
>>>
>>>
>>> Is this using sync or async replication ?
>>>
>>> You could set UDP.bundler_type="old" to see if the old bundler makes a
>>> difference.
>> Right now we've removed the DONT_BUNDLE flag from the sync calls, so putting the old bundler back in place would require some code changes.
>> Is there anything wrong with using DONT_BUNDLE on sync calls with the new bundler? I think we should support both for now, as the new bundling still needs to prove itself.
>
>
> I think we should treat this as a bug and I'll look into this, but I
> won't have much time until after JBW. If you want to write code that can
> handle both old and new bundler, that would be good, although that code
> should be removed after the bug has been fixed.
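For reference, switching back to the old bundler is a transport-level configuration change. A sketch of the UDP stanza, assuming a JGroups 3.3 XML stack (attribute values other than bundler_type are illustrative):

```xml
<!-- Illustrative fragment of a JGroups UDP transport configuration.
     bundler_type="old" selects the pre-3.3 bundler implementation;
     3.3.0 defaults to the new transfer-queue based bundler. -->
<UDP bundler_type="old"
     mcast_port="45588"
     max_bundle_size="64K"/>
```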
The only thing I'm aware of that's needed to support the old bundling is adding DONT_BUNDLE to sync messages, which seems pretty straightforward.
I remember discussing that DONT_BUNDLE + sync messages + the new bundler is suboptimal for some reason (batching?) - am I wrong about that?
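To make the trade-off concrete, here is a toy sketch (not JGroups code; the class, method names and batch size are made up) of the bundling decision: a message flagged DONT_BUNDLE bypasses the queue and goes out immediately, which is what a sync RPC wants for latency, while unflagged messages wait to be sent together as one batch, which is what gives async traffic its throughput:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (NOT JGroups internals) of a bundler's send path.
public class BundlerSketch {
    static final int BATCH_SIZE = 3;

    final List<String> queue = new ArrayList<>();        // pending bundled messages
    final List<List<String>> wire = new ArrayList<>();   // packets that hit the network

    void send(String msg, boolean dontBundle) {
        if (dontBundle) {
            wire.add(List.of(msg));   // sync call: one packet, sent at once
            return;
        }
        queue.add(msg);               // async call: batched for throughput
        if (queue.size() >= BATCH_SIZE) {
            wire.add(new ArrayList<>(queue));
            queue.clear();
        }
    }

    public static void main(String[] args) {
        BundlerSketch b = new BundlerSketch();
        b.send("async-1", false);
        b.send("sync-1", true);    // goes out immediately, ahead of async-1
        b.send("async-2", false);
        b.send("async-3", false);  // batch full -> flushed as one packet
        System.out.println(b.wire); // [[sync-1], [async-1, async-2, async-3]]
    }
}
```

The cost is visible in the sketch: every DONT_BUNDLE message becomes its own packet, so a sync-heavy workload loses the batching the new bundler was designed to exploit.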
We're currently holding the 5.3.0.CR2 release for this. I'm thinking to:
- support the old bundling as well
- depending on when the JGroups performance problem is fixed, either release a CR3 or include the fix in 5.3.1.Final
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)