Lots of CPU going in Channel.write() (Yourkit profile attached)

Alan Wolff fear2tread at gmail.com
Sat Dec 5 09:26:57 EST 2009


Hi

Is there any update regarding this issue?

Trustin, have you committed an attempted fix yet?

Thanks

On Wed, Sep 30, 2009 at 9:15 AM, Trustin Lee (이희승) <trustin at gmail.com> wrote:
> Hi Utkarsh,
>
> I found some complications due to this change, so unfortunately I had
> to revert it in the trunk.  I removed an indirect call on the NIO
> socket to save some time, but I'm not sure whether it will improve
> performance or not.  Here's the build:
>
>    http://trustin.dyndns.org/hudson/job/netty-trunk-deploy/135/
>
> Could you please rerun the test and share what has changed?
>
> I'm particularly interested in what NioSocketChannel.isConnected() is
> spending its time on.  Your previous screenshot didn't show that
> because the tree was collapsed - please expand it before capturing.
> Alternatively, you could just send me the snapshot directly if you
> don't mind, as I have a YourKit profiler license.
>
> Thanks
>
> -- Trustin Lee, http://gleamynode.net/
>
>
>
> On Tue, Sep 22, 2009 at 9:33 AM, Trustin Lee (이희승) <trustin at gmail.com> wrote:
>> Hi Utkarsh,
>>
>> The modification I made is permanent.  I skipped calling isConnected()
>> because trying to write something on a closed channel raises
>> ClosedChannelException anyway.  There was no need to double-check - the
>> OS (or the JDK) already checks before actually writing.
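>>
>> Plain java.nio behaves this way on its own.  A minimal standalone
>> sketch (not Netty code; the host and port are arbitrary):
>>
>>     import java.net.InetSocketAddress;
>>     import java.nio.ByteBuffer;
>>     import java.nio.channels.ClosedChannelException;
>>     import java.nio.channels.SocketChannel;
>>
>>     public class ClosedWriteDemo {
>>         public static void main(String[] args) throws Exception {
>>             SocketChannel ch = SocketChannel.open();
>>             ch.connect(new InetSocketAddress("example.com", 80));
>>             ch.close();
>>             try {
>>                 // No isConnected() guard needed: the write itself
>>                 // reports the closed state.
>>                 ch.write(ByteBuffer.wrap(new byte[] { 1 }));
>>             } catch (ClosedChannelException e) {
>>                 // Expected: the JDK rejects writes on a closed channel.
>>             }
>>         }
>>     }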
>>
>> Thanks for the suggestion, and it's nice to see better performance. :)
>>
>> Cheers
>>
>> -- Trustin Lee, http://gleamynode.net/
>>
>>
>> On Tue, Sep 22, 2009 at 5:01 AM, Utkarsh Srivastava <utkarsh at gmail.com> wrote:
>>> Hi Trustin,
>>>
>>> I tested the new build and the results are awesome.  Throughput is way
>>> up (at least a 30% jump for 1K messages) and CPU utilization is way
>>> down.  People who have benchmarked Netty previously should re-run
>>> their benchmarks.
>>>
>>> On the cautious side, I noticed that in the patch, you just omitted
>>> the call to channel.isConnected().  Is that a long-term solution or
>>> just a stop-gap?  When are you planning to put this into Maven so that
>>> our builds can pick it up?
>>>
>>> Thanks
>>> Utkarsh
>>>
>>> On Sun, Sep 20, 2009 at 7:11 PM, Trustin Lee (이희승) <trustin at gmail.com>
>>> wrote:
>>>>
>>>> Hi Utkarsh,
>>>>
>>>> Thanks for reporting this issue.
>>>>
>>>> I've just checked in a tiny modification that reduces the number of
>>>> Channel.isConnected() calls.  Could you test the latest build and let
>>>> me know how it looks now?
>>>>
>>>>   http://trustin.dyndns.org/hudson/job/netty-trunk-deploy/111/
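>>>>
>>>> The general idea is to avoid asking the channel on every write - for
>>>> instance, checking once per flush of a write queue instead of once
>>>> per queued message.  A hypothetical sketch of that idea (not the
>>>> actual commit; the HoistedCheck class is made up for illustration):
>>>>
>>>>     import java.util.Queue;
>>>>     import org.jboss.netty.channel.Channel;
>>>>
>>>>     public class HoistedCheck {
>>>>         // Check isConnected() once per flush instead of once per
>>>>         // queued message.
>>>>         public static void flush(Channel ch, Queue<Object> queue) {
>>>>             if (!ch.isConnected()) {  // single up-front check
>>>>                 queue.clear();
>>>>                 return;
>>>>             }
>>>>             Object msg;
>>>>             while ((msg = queue.poll()) != null) {
>>>>                 ch.write(msg);  // no per-message isConnected() call
>>>>             }
>>>>         }
>>>>     }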
>>>>
>>>> -- Trustin Lee, http://gleamynode.net/
>>>>
>>>>
>>>>
>>>> On Mon, Sep 14, 2009 at 1:30 AM, Utkarsh Srivastava <utkarsh at gmail.com>
>>>> wrote:
>>>> > Retrying ...
>>>> >
>>>> > Did anyone get a chance to look at this profile?  I think there are
>>>> > real opportunities to optimize this code path in Netty based on the
>>>> > attached profile.  Channel.write() should not be taking up so much
>>>> > CPU.
>>>> >
>>>> > Utkarsh
>>>> >
>>>> > On Thu, Sep 10, 2009 at 11:37 PM, Utkarsh Srivastava <utkarsh at gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Hi,
>>>> >>
>>>> >> I recently rewrote an application that was not using Netty (or even
>>>> >> NIO) to use Netty.  Netty has been great to use - the API, the
>>>> >> docs, everything.  Thanks for the good work.
>>>> >>
>>>> >> Using Netty 3.1.2.GA, my application basically blasts out messages
>>>> >> to 2 servers.  I would expect to be able to saturate the network
>>>> >> pipe, and I am able to do so at larger message sizes (e.g. 16K).
>>>> >> But for small messages (e.g. 1K), the application doesn't saturate
>>>> >> the network but becomes CPU bound.  I profiled the application
>>>> >> using YourKit, and one place that I noticed had high CPU
>>>> >> consumption, but shouldn't, was channel.write().
>>>> >>
>>>> >> The breakdown of that method is attached.  What is strange is that
>>>> >> 19% of the time goes into NioClientSocketPipelineSink.eventSunk(),
>>>> >> and its own time is nontrivial (170 ms = 7%).  Looking at the code,
>>>> >> I don't see any place where that CPU can be spent, since
>>>> >> NioWorker.write() and NioSocketChannel$WriteBuffer.offer() have
>>>> >> their own separate contributions of 9% and 3% respectively.  So
>>>> >> this is extremely puzzling.  Anybody have thoughts on this?
>>>> >>
>>>> >> Also, is it necessary to make that system call every time to check
>>>> >> whether the channel is connected?  That alone is taking up 6%.
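>>>> >>
>>>> >> One way I could imagine cutting the per-write overhead for small
>>>> >> messages is to coalesce several 1K payloads into a single write, so
>>>> >> the pipeline/sink cost is paid once per batch.  A rough sketch
>>>> >> against the 3.1 API (the batch size and the BatchingWriter class
>>>> >> are made up for illustration):
>>>> >>
>>>> >>     import java.util.ArrayList;
>>>> >>     import java.util.List;
>>>> >>     import org.jboss.netty.buffer.ChannelBuffer;
>>>> >>     import org.jboss.netty.buffer.ChannelBuffers;
>>>> >>     import org.jboss.netty.channel.Channel;
>>>> >>
>>>> >>     public class BatchingWriter {
>>>> >>         private final Channel channel;
>>>> >>         private final List<ChannelBuffer> pending =
>>>> >>                 new ArrayList<ChannelBuffer>();
>>>> >>
>>>> >>         public BatchingWriter(Channel channel) {
>>>> >>             this.channel = channel;
>>>> >>         }
>>>> >>
>>>> >>         // Queue a small message instead of writing immediately.
>>>> >>         public void write(ChannelBuffer msg) {
>>>> >>             pending.add(msg);
>>>> >>             if (pending.size() >= 16) {  // arbitrary batch size
>>>> >>                 flush();
>>>> >>             }
>>>> >>         }
>>>> >>
>>>> >>         // Wrap the queued buffers into one composite buffer and
>>>> >>         // issue a single Channel.write() for the whole batch.
>>>> >>         public void flush() {
>>>> >>             if (pending.isEmpty()) {
>>>> >>                 return;
>>>> >>             }
>>>> >>             ChannelBuffer batch = ChannelBuffers.wrappedBuffer(
>>>> >>                     pending.toArray(new ChannelBuffer[pending.size()]));
>>>> >>             pending.clear();
>>>> >>             channel.write(batch);
>>>> >>         }
>>>> >>     }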
>>>> >> Thanks
>>>> >> Utkarsh
>>>> >>
>
> _______________________________________________
> netty-users mailing list
> netty-users at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/netty-users
>


