We have a JBM test which quickly creates and closes connections in a loop which eventually
fails with:
| New I/O server boss #1 (channelId: 28338721, /127.0.0.1:5445) 11:41:29,851 WARN [NioServerSocketPipelineSink] Failed to accept a connection.
| java.io.IOException: Too many open files
| at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
| at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
| at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:205)
| at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
| at java.lang.Thread.run(Thread.java:595)
|
Creating a JBM JMS connection and closing it involves creating a Netty bootstrap, creating a
channel, closing the channel, and closing the bootstrap.
This is done in quick succession.
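For reference, the connect/close loop can be approximated with plain java.nio (a sketch only — the real test goes through the JBM client and Netty's bootstrap/channel API, and the class and method names here are my own):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ConnectCloseLoop {

    // Open and immediately close n client channels against a local
    // listener; returns the number of completed create/close cycles.
    static int run(int n) throws Exception {
        // Local listener standing in for the JBM acceptor (really on 5445).
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        int cycles = 0;
        for (int i = 0; i < n; i++) {
            // "create a channel"...
            SocketChannel channel =
                    SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
            // ..."close the channel": this returns as soon as the local
            // descriptor is released; the server finds out later.
            channel.close();
            // Drain the accept queue so this toy server keeps up --
            // the point of the original test is what happens when it can't.
            server.accept().close();
            cycles++;
        }
        server.close();
        return cycles;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("cycles: " + run(100));
    }
}
```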
Debugging this a bit, it seems that the Netty channel-disconnected event can arrive at the
server significantly later than when the channel is closed on the client side.
So over time the rate at which channels are created exceeds the rate at which
channel-disconnected events arrive at the server, and the server soon runs out of file handles.
I would have thought disconnection would be synchronous; otherwise a client could easily
mount a DoS attack.
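That the server only observes the disconnect asynchronously can be shown even with plain blocking sockets (again a stand-in sketch, not the JBM/Netty code): the client's close() returns as soon as its own descriptor is released, while the server-side descriptor stays open, consuming a file handle, until the server actually reads the EOF.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class AsyncDisconnectDemo {

    // Returns {serverSideStillOpenAfterClientClose, serverSawEof}.
    static boolean[] demo() throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        Socket accepted = server.accept();

        // The client closes and moves on immediately.
        client.close();

        // The server-side socket is untouched: it still holds a file
        // descriptor even though the peer is already gone.
        boolean stillOpen = !accepted.isClosed();

        // Only when the server gets around to reading does it observe
        // the disconnect, as EOF (read() == -1).
        InputStream in = accepted.getInputStream();
        boolean sawEof = (in.read() == -1);

        accepted.close();
        server.close();
        return new boolean[] { stillOpen, sawEof };
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = demo();
        System.out.println("still open after client close: " + r[0]);
        System.out.println("server saw EOF: " + r[1]);
    }
}
```

If channels are opened faster than the server processes those EOFs, the open server-side descriptors pile up, which matches the "Too many open files" failure above.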
Any ideas?
View the original post :
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4227770#...