Deadlock with OrderedMemoryAwareThreadPoolExecutor

Lev Shock dein at hackerdom.ru
Sun Aug 28 02:15:39 EDT 2011


Hello.

Recently my server hung again. I've analyzed the thread dump and this is what
I found:

"pool-3-thread-10" - Thread t at 67
java.lang.Thread.State: WAITING on
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$Limiter at 1742e6c
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:485)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$Limiter.increase(MemoryAwareThreadPoolExecutor.java:546)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor.increaseCounter(MemoryAwareThreadPoolExecutor.java:410)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor.execute(MemoryAwareThreadPoolExecutor.java:344)
at
org.jboss.netty.handler.execution.ExecutionHandler.handleUpstream(ExecutionHandler.java:146)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireChannelUnbound(Channels.java:382)
at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:596)
at
org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:119)
at
org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76)
at
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:771)
at
org.jboss.netty.handler.execution.ExecutionHandler.handleDownstream(ExecutionHandler.java:167)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
at org.jboss.netty.channel.Channels.close(Channels.java:720)
at org.jboss.netty.channel.AbstractChannel.close(AbstractChannel.java:200)
 at com.blabla.NettySession.close(NettySession.java:84)
<omitted my classes stack trace>
at
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
at
org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
at
org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor$ChildExecutor.run(OrderedMemoryAwareThreadPoolExecutor.java:316)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

This is only one example: all of the OrderedMemoryAwareThreadPoolExecutor's
threads (the ones named "pool-3-thread-x") were waiting on themselves! All of
the Netty worker threads were also waiting on the OrderedMemoryAwareThreadPoolExecutor:

"New I/O server worker #1-8" - Thread t at 66
java.lang.Thread.State: WAITING on
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$Limiter at 1742e6c
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:485)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$Limiter.increase(MemoryAwareThreadPoolExecutor.java:546)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor.increaseCounter(MemoryAwareThreadPoolExecutor.java:410)
at
org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor.execute(MemoryAwareThreadPoolExecutor.java:344)
at
org.jboss.netty.handler.execution.ExecutionHandler.handleUpstream(ExecutionHandler.java:146)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
at
org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
at
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

As a result, the server stopped processing all network I/O.

The main problem is that I am closing the Netty channel at this point in the
stack trace: "at com.blabla.NettySession.close(NettySession.java:84)".

Closing the channel raises a ChannelEvent, but that event cannot be processed
because all of the executor's threads are busy. So the
OrderedMemoryAwareThreadPoolExecutor is waiting on itself and we have a deadlock.
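
For context, here is a minimal sketch (not my real code; the handler and the
sizes are placeholders I made up) of the kind of setup that seems to produce
this, using the Netty 3.2.x API: an ExecutionHandler backed by an
OrderedMemoryAwareThreadPoolExecutor with a bounded maxTotalMemorySize, and a
business handler that closes the channel from the pool thread.

import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class DeadlockSketch {

    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));

        // Bounded total memory: once the estimated size of queued events
        // reaches this limit, execute() blocks in Limiter.increase() until
        // enough memory is released.
        final ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(
                        16,            // corePoolSize
                        1048576,       // maxChannelMemorySize
                        16777216));    // maxTotalMemorySize (the limit that blocks)

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                ChannelPipeline pipeline = Channels.pipeline();
                pipeline.addLast("execution", executionHandler);
                pipeline.addLast("handler", new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // Business logic runs on a "pool-3-thread-x" thread.
                        // At some point it decides to close the channel, like
                        // my NettySession.close() does:
                        e.getChannel().close();
                        // close() eventually fires channelUnbound, which
                        // travels back up through the same ExecutionHandler.
                        // If the executor is already at maxTotalMemorySize and
                        // every pool thread is stuck here, nobody can drain
                        // the queue, so it deadlocks.
                    }
                });
                return pipeline;
            }
        });

        bootstrap.bind(new java.net.InetSocketAddress(8080));
    }
}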

I need some advice. If this is a problem in my code, how can I close a channel
properly without causing this deadlock when all of the
OrderedMemoryAwareThreadPoolExecutor's threads are busy?
Or is this some kind of bug in Netty?
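
To make the question concrete, here is a rough sketch of the kind of
workaround I am wondering about (the DeferredCloser/closeExecutor names are
hypothetical, not from my code): handing the close() off to a thread that is
outside the OrderedMemoryAwareThreadPoolExecutor, so the pool thread returns
immediately and can keep draining its queue even if the close itself has to
wait on the Limiter. Would something like this be the proper way to do it, or
is there a better one?

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jboss.netty.channel.Channel;

public class DeferredCloser {

    // A dedicated thread used only for closing channels (name and size are
    // guesses, not from my real code).
    private static final ExecutorService closeExecutor =
            Executors.newSingleThreadExecutor();

    public static void closeLater(final Channel channel) {
        closeExecutor.execute(new Runnable() {
            public void run() {
                // Runs outside the OrderedMemoryAwareThreadPoolExecutor, so
                // even if close() ends up blocked in Limiter.increase(), the
                // pool threads themselves are not blocked.
                channel.close();
            }
        });
    }
}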

Any help will be appreciated.

