Recommendations for server process with thousands of listening ports?

Jordan Sissel jls at semicomplete.com
Wed Jun 22 18:40:24 EDT 2011


On Wed, Jun 22, 2011 at 3:33 PM, Vassilis Bekiaris <v.bekiaris at saicon.gr> wrote:

> Hi!
>
> recently (on another unrelated thread) an issue came up with failure to
> open sockets due to the limit of file descriptors available per user per
> process (this was on Ubuntu linux IIRC), could this be your case? Check this
> article by Trustin for further reading:
>
> http://gleamynode.net/articles/1557/
>
> Cheers,
> Vassilis
>

Turns out my issue was that Fedora ships with a default 'user process' limit
of 1024 - and since Linux threads count against that limit, it roughly caps the
number of threads a Java process can create.
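
For anyone who hits the same wall, here is a tiny standalone repro (a minimal
sketch, not my actual service code): with the default limit in place it should
die with "java.lang.OutOfMemoryError: unable to create new native thread" well
before reaching a few thousand threads.

    public class ThreadLimitRepro {
        public static void main(String[] args) {
            for (int i = 0; ; i++) {
                // Each thread parks forever, so it stays alive and keeps
                // counting against the per-user process limit.
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE);
                        } catch (InterruptedException e) {
                            // exit quietly
                        }
                    }
                }).start();
                System.out.println("threads created: " + (i + 1));
            }
        }
    }

Raising the per-user process limit (ulimit -u) before starting the JVM avoids
the crash.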

So that solves the crashing OOM/no-more-threads problem. My next question is
this:

Why are so many threads created? My experience with other event-driven
networking systems (eventmachine, libev, libevent, etc.) is that only one
thread (or only a few) is used to process network IO; in Netty, with
Executors.newCachedThreadPool(), it seems to create one thread per server
socket. If I switch to newFixedThreadPool(N), only the first N sockets are
usable.
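
For context, here is roughly what the setup boils down to (a trimmed sketch -
pipeline setup is omitted and the port range is a placeholder, not the real
channel list):

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

    public class ManyPortsSketch {
        public static void main(String[] args) {
            // One factory shared across all TCP server ports; swapping these
            // cached pools for Executors.newFixedThreadPool(N) is the variant
            // where only the first N ports end up usable.
            NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),   // boss threads
                    Executors.newCachedThreadPool());  // worker threads

            for (int port = 20000; port < 23000; port++) {
                ServerBootstrap bootstrap = new ServerBootstrap(factory);
                // bootstrap.setPipelineFactory(...);  // per-channel handlers go here
                bootstrap.bind(new InetSocketAddress(port));
            }
        }
    }

If it matters, my working guess is that each bound server channel holds on to
one thread from the boss executor for its accept loop - but I may be misreading
how the factory hands out threads.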

I feel like I'm doing something wrong here; should I be expecting a 1:1
thread:socket mapping? That seems odd.

-Jordan



>
> On 23/06/2011 1:14, Jordan Sissel wrote:
>
> Howdy :)
>
> I have a service that currently listens on thousands of TCP and UDP ports
> (for different channels of data); due to various problems, I am rewriting it
> in Java and decided to use Netty.
>
> I've tried a few different ways of doing things:
>
> First, one NioServerSocketChannelFactory (or NioDatagramChannelFactory)
> per listening port. The problem is that after a few thousand server channels
> (via ConnectionlessBootstrap and ServerBootstrap), I get an OutOfMemoryError
> while creating threads.
>
> Second, one NioServerSocketChannelFactory shared across all TCP server
> ports and one NioDatagramChannelFactory shared across all UDP server ports.
> The problem here is that after a few hundred open sockets, binds start failing
> with "org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:15725"
> - watching strace shows bind(2) succeeding on this port, for example, so the
> cause is not a bind-specific failure. Trolling the logs, I see about 911
> successful TCP server sockets before TCP fails with this message.
>
> I am using NioServerSocketChannelFactory with
> Executors.newCachedThreadPool().
>
> I can try to make some standalone code that reproduces this behavior for
> further study, but if there's anything obvious I'm missing, please let me
> know.
>
> -Jordan
>
>
> _______________________________________________
> netty-users mailing list
> netty-users at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/netty-users
>