Lots of CPU going in Channel.write() (Yourkit profile attached)
Utkarsh Srivastava
utkarsh at gmail.com
Mon Sep 21 16:01:52 EDT 2009
Hi Trustin,
I tested the new build and the results are awesome. Throughput is way up (at
least a 30% jump for 1K messages) and CPU utilization is way down. People
who have benchmarked Netty previously should re-run their benchmarks.
One note of caution: I noticed that in the patch you simply omitted the call
to channel.isConnected(). Is that a long-term solution or just a stop-gap?
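For reference, here is a minimal sketch of the kind of change I would expect
such a patch to make: cache the connected state in a volatile flag that the
connect/close events flip, instead of querying the underlying socket on every
write. The names below are my own illustration, not the actual Netty
internals:

    // Hypothetical sketch -- not the actual Netty patch.  The idea is to
    // keep the connection state in a volatile flag that connect/close
    // events update, so the hot write path never touches the socket.
    final class CachedStateChannel {

        private volatile boolean connected;

        void onConnect()    { connected = true;  }  // from the connect event
        void onDisconnect() { connected = false; }  // from the close event

        // Hot path: a plain volatile read instead of querying
        // java.net.Socket on every write.
        boolean isConnected() {
            return connected;
        }
    }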
When are you planning to put this into Maven so that our builds can pick it
up?
Thanks
Utkarsh
On Sun, Sep 20, 2009 at 7:11 PM, Trustin Lee (이희승) <trustin at gmail.com> wrote:
> Hi Utkarsh,
>
> Thanks for reporting this issue.
>
> I've just checked in a tiny modification that reduces the number of
> Channel.isConnected() calls. Could you test the latest build and let
> me know how it looks now?
>
> http://trustin.dyndns.org/hudson/job/netty-trunk-deploy/111/
>
> — Trustin Lee, http://gleamynode.net/
>
>
>
> On Mon, Sep 14, 2009 at 1:30 AM, Utkarsh Srivastava <utkarsh at gmail.com>
> wrote:
> > Retrying ...
> > Did anyone get a chance to look at this profile?
> > I think there are real opportunities to optimize this code path in Netty
> > based on the attached profile. Channel.write() should not be taking up so
> > much CPU.
> > Utkarsh
> >
> > On Thu, Sep 10, 2009 at 11:37 PM, Utkarsh Srivastava <utkarsh at gmail.com>
> > wrote:
> >>
> >> Hi,
> >> I recently rewrote an application that had not been using Netty (or even
> >> NIO) to use Netty. Netty has been great to use: the API, the docs,
> >> everything. Thanks for the good work.
> >> Using Netty 3.1.2.GA, my application basically blasts out messages to two
> >> servers. I would expect to be able to saturate the network pipe, and I am
> >> able to do so at larger message sizes (e.g. 16K).
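> >> For context, the write path is essentially the stock Netty 3.x client
> >> pattern; this is a simplified sketch rather than my actual code (the
> >> host, port, and empty pipeline are placeholders):
> >>
> >>     import java.net.InetSocketAddress;
> >>     import java.util.concurrent.Executors;
> >>     import org.jboss.netty.bootstrap.ClientBootstrap;
> >>     import org.jboss.netty.buffer.ChannelBuffers;
> >>     import org.jboss.netty.channel.Channel;
> >>     import org.jboss.netty.channel.Channels;
> >>     import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
> >>
> >>     public class BlastClient {
> >>         public static void main(String[] args) {
> >>             ClientBootstrap bootstrap = new ClientBootstrap(
> >>                     new NioClientSocketChannelFactory(
> >>                             Executors.newCachedThreadPool(),
> >>                             Executors.newCachedThreadPool()));
> >>             bootstrap.setPipelineFactory(
> >>                     Channels.pipelineFactory(Channels.pipeline()));
> >>
> >>             // "host" and 8080 are placeholders.
> >>             Channel channel = bootstrap
> >>                     .connect(new InetSocketAddress("host", 8080))
> >>                     .awaitUninterruptibly().getChannel();
> >>
> >>             // Blast out fixed-size 1K messages as fast as possible.
> >>             // (No flow control here; real code would respect
> >>             // channel.isWritable().)
> >>             byte[] payload = new byte[1024];
> >>             while (channel.isConnected()) {
> >>                 channel.write(ChannelBuffers.wrappedBuffer(payload));
> >>             }
> >>         }
> >>     }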
> >> But for small messages (e.g. 1K), the application doesn't saturate the
> >> network and instead becomes CPU bound. I profiled the application with
> >> Yourkit, and one place that showed high CPU consumption but shouldn't have
> >> was channel.write().
> >> The breakdown of that method is attached. What is strange is that 19% of
> >> the time goes into NioClientSocketPipelineSink.eventSunk(), and its own
> >> time is nontrivial (170 ms = 7%). Looking at the code, I don't see any
> >> place where that CPU can be spent, since NioWorker.write() and
> >> NioSocketChannel$WriteBuffer.offer() have their own separate contributions
> >> of 9% and 3%, respectively. So this is extremely puzzling. Anybody have
> >> thoughts on this?
> >> Also, is it necessary to make that system call every time to check if the
> >> channel is connected? That alone is taking up 6%.
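> >> One thing I may try on my side, independent of any Netty change, is to
> >> coalesce several small messages into one larger buffer before calling
> >> write(), so that the per-write overhead is amortized. A rough sketch
> >> (channel, messagesPerBatch, and nextPayload() are placeholders):
> >>
> >>     // Rough sketch: batch N small payloads into a single write so the
> >>     // per-write() cost (pipeline traversal, connectedness check) is
> >>     // paid once per batch instead of once per 1K message.
> >>     ChannelBuffer batch = ChannelBuffers.dynamicBuffer(16 * 1024);
> >>     for (int i = 0; i < messagesPerBatch; i++) {
> >>         batch.writeBytes(nextPayload());  // nextPayload() returns byte[]
> >>     }
> >>     channel.write(batch);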
> >> Thanks
> >> Utkarsh
> >>