[ https://issues.jboss.org/browse/WFCORE-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991477#comment-12991477 ]
Stefan Schueffler commented on WFCORE-40:
-----------------------------------------
Every time this happens to us, one thread is blocked in {code}Throwable.fillInStackTrace(){code}.
I'll try to get a reproducible deployment / configuration, and I'll check whether this only happens on our JDK 1.7.0_45 or also on newer JDK releases.
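In the meantime, here is a minimal probe I can run inside the server (e.g. from a servlet) to check whether the JVM itself already reports a monitor deadlock at that moment. ThreadMXBean has been standard since Java 6, so this is a generic diagnostic sketch, not WildFly-specific code, and the class and method names are my own:
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class DeadlockProbe {

    // Returns a description of any deadlock cycle the JVM detects.
    // The result is returned as a String rather than printed, because
    // System.out itself is the suspect stream in this issue.
    public static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads(); // covers monitors and j.u.c. locks
        if (ids == null) {
            return "JVM reports no deadlocked threads";
        }
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : mx.getThreadInfo(ids, true, true)) {
            sb.append(info); // name, state, owner of the awaited lock, stack
        }
        return sb.toString();
    }

    private DeadlockProbe() {
    }
}
{code}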
> Deadlock while logging
> ----------------------
>
> Key: WFCORE-40
> URL: https://issues.jboss.org/browse/WFCORE-40
> Project: WildFly Core
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Logging
> Environment: CentOS 6.5 64bit, java7u45 64bit (and 32 bit, the same behavior)
> Reporter: Stefan Schueffler
> Assignee: James Perkins
> Priority: Critical
>
> We quite often hit what appears to be a deadlock in org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474).
> Even though the "StdioContext" belongs to JBoss Logging, the problem occurs in our production WildFly installation two to four times a day - all threads deadlock while trying to call log.debug, log.error, or (sometimes) System.out.println from our application code, and WildFly no longer responds...
> The partial stack trace is always similar to this one:
> {code}
> "default task-64" prio=10 tid=0x4c539c00 nid=0x5ef9 waiting for monitor entry [0x495e0000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at java.io.PrintStream.println(PrintStream.java:806)
> - waiting to lock <0x5ee0adf8> (a java.io.PrintStream)
> at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
> at jsp.communications.statuschange.selectStatus_jsp._jspService(selectStatus_jsp.java:413)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:69)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
> at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82)
> {code}
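> For illustration, here is a self-contained sketch (entirely my own, not WildFly code; names like LOGGER_LOCK are made up) of the classic lock-ordering inversion that produces exactly this "waiting to lock ... (a java.io.PrintStream)" state, since PrintStream.println() synchronizes on the stream instance:
> {code}
> import java.io.PrintStream;
>
> public class PrintStreamDeadlock {
>     static final Object LOGGER_LOCK = new Object();
>     static final PrintStream OUT = System.out;
>
>     public static void main(String[] args) {
>         new Thread(new Runnable() {
>             public void run() {
>                 synchronized (LOGGER_LOCK) {
>                     pause();
>                     OUT.println("A"); // blocks: needs the PrintStream monitor
>                 }
>             }
>         }).start();
>         new Thread(new Runnable() {
>             public void run() {
>                 synchronized (OUT) { // the same monitor println() takes
>                     pause();
>                     synchronized (LOGGER_LOCK) { // blocks: held by thread one
>                         OUT.println("B");
>                     }
>                 }
>             }
>         }).start();
>     }
>
>     // Sleep long enough that both threads take their first lock before
>     // attempting the second one, making the cycle deterministic.
>     static void pause() {
>         try { Thread.sleep(100); } catch (InterruptedException ignored) { }
>     }
> }
> {code}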
> While investigating the StdioContext class, I really wondered whether the "locking/checking by using a ThreadLocal" it uses could ever work correctly in a multi-threaded environment (it should have the very same problems as every "double-checked locking" algorithm without proper synchronization).
> Once all threads are hanging on this particular lock, only a full WildFly restart recovers the server in our case.
> My preferred solution would be a rework of the org.jboss.stdio classes, as the idiom of using ThreadLocals for reentrancy checks is at least highly unusual.
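> To make the concern concrete, here is a minimal reconstruction of the idiom I mean (my own sketch with made-up names, not the actual org.jboss.stdio source):
> {code}
> import java.io.PrintStream;
>
> final class DelegatingPrintStreamSketch {
>     // A per-thread flag used as a reentrancy guard. It only sees the
>     // current thread, so it prevents recursion on one thread but adds
>     // no synchronization whatsoever between *different* threads.
>     private static final ThreadLocal<Boolean> REENTERED = new ThreadLocal<Boolean>() {
>         @Override
>         protected Boolean initialValue() {
>             return Boolean.FALSE;
>         }
>     };
>
>     private final PrintStream delegate;
>
>     DelegatingPrintStreamSketch(PrintStream delegate) {
>         this.delegate = delegate;
>     }
>
>     void println(String s) {
>         if (REENTERED.get()) {
>             return; // break recursion when the logging path writes back here
>         }
>         REENTERED.set(Boolean.TRUE);
>         try {
>             // println() still synchronizes on the delegate's monitor; the
>             // ThreadLocal check cannot order two threads contending on it
>             // and on the logging locks, so it cannot prevent a deadlock.
>             delegate.println(s);
>         } finally {
>             REENTERED.set(Boolean.FALSE);
>         }
>     }
> }
> {code}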
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)