Netty/HTTP dropping connections with httperf
Sébastien Pierre
sebastien.pierre at gmail.com
Fri Jan 8 08:50:35 EST 2010
Hello!
So I did the comparison between Netty and HttpCore/NIO, and I can confirm
the behaviour I saw before:
- *Netty produces more errors than HttpCore/NIO*, dropping connections when
concurrency is between 3000 and 5000
- HttpCore/NIO is slightly less performant, but produces significantly
fewer errors overall
I used the stock Netty HTTP snoop example (see my previous message in this
thread) and a modified version of the NHttpServer example from HttpCore
(attached). To compile and run it, do the following from HttpCore's
repository:
javac -cp httpcore/target/httpcore-4.1-alpha2-SNAPSHOT.jar:./httpcore-nio/target/httpcore-nio-4.1-alpha2-SNAPSHOT.jar NHttpServer.java
java -cp .:httpcore/target/httpcore-4.1-alpha2-SNAPSHOT.jar:./httpcore-nio/target/httpcore-nio-4.1-alpha2-SNAPSHOT.jar NHttpServer
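A quick sanity check, assuming the modified server listens on port 8080
like the Netty example does:

curl -v http://127.0.0.1:8080/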
The test I ran is the httperf-based benchmark, as presented earlier. I also
started by doing multiple 'ab' benchmarks to see whether connection errors
would appear, but I didn't get any. I still don't know whether the errors
are related to improper system configuration (the "error 98" that httperf
reports is EADDRINUSE on Linux, which would point at port exhaustion on the
client side), but across the various tests I've run, I've seen Netty drop
connections and become unresponsive more often than HttpCore/NIO.
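One server-side knob I have not ruled out is the accept backlog: when the
listen queue overflows under a burst of connections, the kernel drops them
before Netty ever sees them. Below is an untested sketch of the snoop
bootstrap with a deeper backlog (Netty 3.x API, reusing the example's
HttpServerPipelineFactory; 8192 is just an illustrative value):

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.example.http.snoop.HttpServerPipelineFactory;

public class HttpServerWithBacklog {
    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));
        bootstrap.setPipelineFactory(new HttpServerPipelineFactory());
        // Ask the kernel for a deeper listen queue; the effective value
        // is still capped by net.core.somaxconn on Linux.
        bootstrap.setOption("backlog", 8192);
        bootstrap.setOption("reuseAddress", true);
        bootstrap.bind(new InetSocketAddress(8080));
    }
}

Of course, raising the backlog would only absorb bursts if the server
eventually catches up; it would not explain why Netty falls behind where
HttpCore/NIO does not.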
I also attached the HttpCore/NIO test case, as well as graphs illustrating
the results of running httperf against both servers.
Cheers,
-- Sébastien
On 7 January 2010 at 13:16, Sébastien Pierre <sebastien.pierre at gmail.com>
wrote:
> Hi Trustin,
>
> So I checked the number of sockets in *_WAIT states on both the client
> and server machines, using the following commands:
>
> server> watch -n1 'lsof -nl | egrep "TCP|UDP" | grep "java" | wc -l '
> client> watch -n1 'lsof -nl | grep WAIT | wc -l'
>
> The number of open sockets on the server oscillates between 0 and 500,
> while the number of *_WAIT sockets on the client is always 0, so I doubt
> it's related to an overflow of TIME_WAIT or CLOSE_WAIT sockets.
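>
> For a broader view, the following one-liner (untested here) summarizes
> all TCP socket states at once, instead of grepping for each one:
>
> netstat -tan | awk 'NR>2 {print $6}' | sort | uniq -c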
>
> I'll prepare more detailed tests with the HttpCore/NIO version.
>
> -- Sébastien
>
> 2010/1/6 "Trustin Lee (이희승)" <trustin at gmail.com>
>
>> Hi Sébastien,
>>
>>
>> You might be running out of available ports due to many sockets stuck in
>> TIME_WAIT on either the client or the server side. Could you confirm
>> that? If so, you need to wait until the TIME_WAIT sockets are cleared
>> between each run.
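>>
>> For reference, on Linux you can check the usable ephemeral port range
>> and the current number of TIME_WAIT sockets roughly like this (untested
>> here; widening the range is one way to buy headroom between runs):
>>
>> cat /proc/sys/net/ipv4/ip_local_port_range
>> netstat -tan | grep -c TIME_WAIT
>> sysctl -w net.ipv4.ip_local_port_range="1024 65535"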
>>
>> If the HttpCore/NIO version works fine where Netty fails, I'd like to
>> compare the two to see what the differences are. Please paste or attach
>> the source code.
>>
>> Thanks,
>> Trustin
>>
>> Sébastien Pierre wrote:
>> > Hi there!
>> >
>> > I just moved a test HTTP service from HttpCore/NIO to Netty, based on
>> > the HTTP snoop example. Despite good performance at first, I found
>> > that the server drops connections when the load is heavy. To reproduce
>> > this, simply start the snoop example:
>> >
>> > java -cp ./src/main/java/:./jar/netty-3.2.0.ALPHA2.jar
>> > org.jboss.netty.example.http.snoop.HttpServer
>> >
>> > and then do
>> >
>> > python -c'for r in range(500,10500,500): import os ; os.system("httperf
>> > --hog --timeout=60 --client=0/1 --server=127.0.0.1 --port=8080 --uri=/
>> > --rate=%s --send-buffer=4096 --recv-buffer=16384 --num-conns=10000
>> > --num-calls=1" % (r))'
>> >
>> > which is the same as running the following command with --rate growing
>> > from 500 to 10000 in increments of 500:
>> >
>> > httperf --hog --timeout=60 --client=0/1 --server=127.0.0.1 --port=8080
>> > --uri=/ --rate=500 --send-buffer=4096 --recv-buffer=16384
>> > --num-conns=10000 --num-calls=1
>> >
>> > So in this example, Netty often drops connections between rates of
>> > 3000 and 5000, which you'll see in httperf's log with lines like the
>> > following:
>> >
>> > httperf: connection failed with unexpected error 98
>> > ...
>> > Errors: total 2236 client-timo 0 socket-timo 0 connrefused 0 connreset 0
>> > Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 2236
>> >
>> > I attached a graph where you'll see that for rate=4000 and rate=4500
>> > the server just dropped all the connections (from httperf's
>> > perspective at least). Do you know of any way to prevent, or at least
>> > detect, this?
>> >
>> > Thanks!
>> >
>> > -- Sébastien
>> >
>> > PS: I should add that I've run the test a couple of times, and quite
>> > often have trouble in the rate=3000-5000 range.
>> >
>>
>> --
>> what we call human nature in actuality is human habit
>> http://gleamynode.net/
>
Attachments:
- NHttpServer.java (text/x-java, 7215 bytes):
  http://lists.jboss.org/pipermail/netty-users/attachments/20100108/a7334f60/attachment-0001.bin
- netty-scalability-2.png (image/png, 25962 bytes):
  http://lists.jboss.org/pipermail/netty-users/attachments/20100108/a7334f60/attachment-0002.png
- httpcore-scalability.png (image/png, 30129 bytes):
  http://lists.jboss.org/pipermail/netty-users/attachments/20100108/a7334f60/attachment-0003.png