An existing connection was forcibly closed by the remote host

"이희승 (Trustin Lee)" trustin at gmail.com
Tue Jul 14 23:17:18 EDT 2009


Hi Luis and Neil,

On 07/05/2009 10:17 PM, Luis Neves wrote:
> neilson9 wrote:
>> Hi,
>> I've been testing Netty (CR2 nightly snapshot) for the last few weeks
>> after integrating it into our environment. At the stage where we
>> experience a burst of traffic - 70-100 machines all establishing
>> connections to stream results - we end up with many of them
>> experiencing 'existing connection was forcibly closed by the remote
>> host' (as below); it's only during this short burst that the exception
>> occurs. We also have retry logic that throws away the existing
>> connection and establishes a new one - they do eventually get through,
>> but also hit subsequent exceptions.
>>
>> The environment is a series of Windows machines - the main server
>> nodes are Windows Server 2003 (I've upped the Tcpip.sys limit to
>> 2000) - and while monitoring the network we peak at 1095 connections.
>>
>> I would have thought this number of connections wouldn't be a problem -
>> is this a Windows OS problem, or something I can work around or tune in
>> Netty?
> 
> Hi! Misery loves company. I'm facing the exact same issue while testing
> the Netty HTTP server on Windows 2003.
> 
> I get a bunch of
> "java.io.IOException: An established connection was aborted by the
> software in your host machine"
> and
> "java.io.IOException: An existing connection was forcibly closed by the
> remote host"
> 
> It works flawlessly (with amazing performance) on Linux, but on Windows,
> as the number of connected clients increases, the above errors start to
> pop up.
> 
> What helps somewhat is to increase the socket backlog.
> 
> ChannelFactory factory = new NioServerSocketChannelFactory(
>         Executors.newCachedThreadPool(),
>         Executors.newCachedThreadPool());
> 
> ServerBootstrap bootstrap = new ServerBootstrap(factory);
> 
> bootstrap.setOption("backlog", 1024);
> 
> <aside>
> Getting and setting socket options in Netty is not straightforward: it's
> not obvious from the API what can be changed, and you must know beforehand
> the names of the properties you want to get or set.
> </aside>
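> 
> A rough sketch of what I mean (the option names - "backlog",
> "reuseAddress" and the "child."-prefixed per-connection options - are
> just examples of the string keys, and the values are only examples too):
> 
> import java.util.concurrent.Executors;
> import org.jboss.netty.bootstrap.ServerBootstrap;
> import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
> 
> ServerBootstrap bootstrap = new ServerBootstrap(
>         new NioServerSocketChannelFactory(
>                 Executors.newCachedThreadPool(),
>                 Executors.newCachedThreadPool()));
> 
> // Unprefixed options apply to the listening socket itself.
> bootstrap.setOption("backlog", 1024);        // accept-queue hint
> bootstrap.setOption("reuseAddress", true);   // SO_REUSEADDR on the listener
> 
> // "child.*" options are applied to every accepted connection.
> bootstrap.setOption("child.tcpNoDelay", true);  // disable Nagle
> bootstrap.setOption("child.keepAlive", true);   // SO_KEEPALIVE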
> 
> 
> Increasing the backlog helps but doesn't solve the problem; it only
> slightly raises the bar for the number of connected clients... I read
> somewhere that the maximum value for the socket backlog on Windows is 200.
> 
> It may very well be an issue with the JVM network stack on Windows, but
> Grizzly and MINA don't appear to suffer from it (at least in my initial
> testing).
> Like you, I'm also tuning the Windows TCP parameters, but with no luck so
> far.
> 

I'm not sure if there's something that can be tuned.  I recently
improved the acceptance throughput in the Netty NIO transport, so it
might help to some extent.  The maximum backlog on Linux is 128 by
default (the kernel's somaxconn limit), and I don't see such an
exception there at all, so I don't think the backlog is the problem
here.  Please keep me informed as you learn more about this issue.
I'd be glad to help you.
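
Regarding the backlog: if I remember the bind path correctly, the
"backlog" option is simply passed as the backlog hint to the underlying
server socket when the channel is bound, and the OS is free to clamp it.
A minimal sketch of the plain-JDK equivalent (this is not Netty's actual
code, and the port number is just an example):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BacklogSketch {
    public static void main(String[] args) throws Exception {
        // The second argument to bind() is the backlog hint: the number
        // of pending (not yet accepted) connections the OS is asked to
        // queue on the listener's behalf.
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(8080), 1024);

        // If the accept loop cannot keep up during a connection burst and
        // the queue overflows, new connections may be refused or reset
        // (the exact behaviour is OS-specific), which the client sees as
        // "connection forcibly closed by the remote host".
        while (true) {
            server.accept().close();  // accept and drop, just for the sketch
        }
    }
}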

Thanks,
Trustin


