Optimizing Netty for fast socket accept/creation
"이희승 (Trustin Lee)"
trustin at gmail.com
Wed Jul 8 06:31:41 EDT 2009
What's more interesting is that the accept throughput doesn't seem to
get better or worse when non-blocking connection attempts are made.
This is probably because the NIO ServerSocketChannelImpl acquires a
channel-wide lock in accept(), so concurrent accepts on the same
channel serialize:
public SocketChannel accept() throws IOException {
    synchronized (lock) {
        if (!isOpen())
            throw new ClosedChannelException();
        if (!isBound())
            throw new NotYetBoundException();
        SocketChannel sc = null;
        ...
So, unfortunately, it seems like the only way to improve accept
performance with NIO is to open more ports.
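
For example, binding the same bootstrap to multiple ports gives you one
boss (accept) thread per port. Here's a minimal sketch against the 3.1
API; the port range and the no-op handler are just placeholders:

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class MultiPortServer {
    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss (accept) threads
                        Executors.newCachedThreadPool()));  // worker (I/O) threads

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                // Placeholder; real handlers go here.
                return Channels.pipeline(new SimpleChannelHandler());
            }
        });

        // Each bind() gets its own boss thread, so accepts on the four
        // ports proceed in parallel instead of serializing on one lock.
        for (int port = 8080; port < 8084; port ++) {
            bootstrap.bind(new InetSocketAddress(port));
        }
    }
}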
Please let me know if you find other interesting stuff regarding this
issue.
Thanks,
Trustin
On 07/08/2009 04:47 PM, 이희승 (Trustin Lee) wrote:
> Hi Carl,
>
> One more question. :)
>
> Does the accept throughput increase linearly as the number of accept
> threads (the number of bound ports in your case) increases?
>
> If you take a look at NioServerSocketPipelineSink.java:161, you will see
> the following code:
>
> bossExecutor.execute(new IoWorkerRunnable(
>         new ThreadRenamingRunnable(
>                 new Boss(channel),
>                 "New I/O server boss #" + id +
>                 " (channelId: " + channel.getId() +
>                 ", " + channel.getLocalAddress() + ')')));
>
> I increased the number of acceptor threads like the following:
>
> for (int i = 0; i < 2; i ++) {
>     bossExecutor.execute(new IoWorkerRunnable(
>             new ThreadRenamingRunnable(
>                     new Boss(channel),
>                     "New I/O server boss #" + id + '-' + i +
>                     " (channelId: " + channel.getId() +
>                     ", " + channel.getLocalAddress() + ')')));
> }
>
> and was indeed able to get a much better result. However, increasing
> the number of acceptor threads to 3 or 4 did not help much. Do you
> have the same experience with your workaround? Could you apply this
> modification to see how the modified Netty performs compared to your
> workaround?
>
> Thanks,
> Trustin
>
> On 07/07/2009 06:48 PM, Carl Byström wrote:
>> Hi Trustin!
>>
>> I wouldn't call it bad performance; it's just that I need more
>> performance :)
>> While it's understandable to use one acceptor (boss) thread for
>> accepting connections, I can't help wondering whether things could be
>> improved with multiple accept threads.
>> When using the forking model on Unix and forking after you start
>> listening on your port, you can get the effect of having multiple
>> processes listening on the same socket (file descriptor).
>> However, the JVM is inherently single-process, so achieving the same
>> thing there is harder (impossible?). Maybe there are some OS-specific
>> "hacks" that can be used, relying on certain behaviors such as
>> fork/accept on Unix, etc. Also, I know that combining these things
>> with epoll can be tricky; I've had some problems in Python with epoll
>> + multiple accept processes.
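>>
>> (The closest in-process analog I can think of is plain blocking I/O
>> with several threads parked in accept() on one ServerSocket. A rough,
>> untested sketch, nothing Netty-specific:)
>>
>> import java.io.IOException;
>> import java.net.ServerSocket;
>> import java.net.Socket;
>>
>> public class PreThreadedAcceptor {
>>     public static void main(String[] args) throws IOException {
>>         final ServerSocket ss = new ServerSocket(8080);
>>         for (int i = 0; i < 4; i ++) {
>>             new Thread(new Runnable() {
>>                 public void run() {
>>                     while (true) {
>>                         try {
>>                             // All four threads block here; the OS
>>                             // hands each new connection to one of them.
>>                             Socket s = ss.accept();
>>                             s.close(); // stub; a real server would service it
>>                         } catch (IOException e) {
>>                             return;
>>                         }
>>                     }
>>                 }
>>             }).start();
>>         }
>>     }
>> }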
>>
>> I think I'll resort to listening on multiple ports (if nothing else
>> comes along) to achieve better socket open/close performance.
>> I've been using 3.1.0.CR1 for my tests.
>>
>> - Carl
>>
>>
>> On Mon, Jul 6, 2009 at 8:59 AM, "이희승 (Trustin Lee)"
>> <trustin at gmail.com> wrote:
>>
>> Hi Carl,
>>
>> Sorry for the late response first of all.
>>
>> On 06/30/2009 11:05 PM, Carl Byström wrote:
>> > I've been experimenting with Netty for the last couple of days. As
>> > an avid MINA fan, Netty looks great!
>>
>> Thanks! :)
>>
>> > What's been bothering me is the "slow" socket accept, which limits
>> > the throughput of my application (request/response based à la
>> > HTTP 1.0, so no keep-alive).
>> > (The socket accept operation isn't actually "slow"; I just feel it
>> > could be improved, thus increasing my throughput.)
>> >
>> > After profiling my application I found that accept is taking up
>> > most of the time. While this is probably expected, I can't help
>> > feeling limited by that fact.
>> > I've tried having the same server listen on multiple ports, thus
>> > getting a separate boss thread doing the accept. This actually
>> > doubles the number of req/s while retaining the same response time.
>> > I realize this isn't anything peculiar (a multi-core machine with
>> > another accept thread to chew on), but it gets me thinking whether
>> > there's more one can do to optimize this.
>> >
>> > Is it possible to speed up the socket accept operation
>> > a) by tuning Netty?
>> > b) by tuning the JVM?
>> > c) by tuning the OS/kernel?
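>> >
>> > For (a) and (c), one knob I can see is the listen backlog (untested
>> > guesswork on my side; "bootstrap" is the ServerBootstrap from my
>> > setup and the 1024 is arbitrary):
>> >
>> > // Ask Netty for a deeper accept queue on the server socket.
>> > bootstrap.setOption("backlog", 1024);
>> > // On Linux the effective value is capped by net.core.somaxconn
>> > // (often 128 by default), so the sysctl may need raising too.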
>> >
>> > While I fear things like this might be very sensitive to how the OS
>> > implements the TCP stack, I hope at least there is some general
>> > advice or places to start looking.
>> > I'd appreciate any hints or tips regarding this. (Running on RHEL5
>> > 64-bit, Java 1.6.0_10-b33, Intel Quad-Core Q6600.)
>>
>> Which Netty version are you using? Netty uses only one thread per
>> port for accepting incoming channels, and that could be why it's not
>> performing well enough. If you observe the accept performance issue
>> with 3.1.0.CR1, I'd like to fix it before GA.
>>
>> Thanks!
>> Trustin
>> >
>> > - Carl