[jboss-jira] [JBoss JIRA] (WFLY-3620) Deadlock while logging

James Perkins (JIRA) issues at jboss.org
Fri Aug 1 13:54:30 EDT 2014


    [ https://issues.jboss.org/browse/WFLY-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12989940#comment-12989940 ] 

James Perkins commented on WFLY-3620:
-------------------------------------

This is a tough one. The thread seems to be stuck on {{Throwable.fillInStackTrace()}} in a native method. I'm not sure whether the native method has a bug or the thread just happened to be there when the thread dump was taken.

If it happens again I'd like to see whether the thread is blocked on the {{Throwable.fillInStackTrace()}} native method again. Or, if you have a way to reproduce it, that would be even better and I can test it :)
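Not part of the original ticket, but if a reproducer turns up, the blocked-thread state can also be checked in-process with the standard {{java.lang.management}} API instead of an external jstack run. A minimal sketch (the class name is my own, not anything in WildFly):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedThreadDump {

    // Print every thread that is currently BLOCKED, together with the
    // monitor it is waiting for - the same information the reporter's
    // thread dump shows for "default task-64".
    public static void printBlockedThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.printf("%s BLOCKED waiting to lock %s%n",
                        info.getThreadName(), info.getLockName());
                for (StackTraceElement e : info.getStackTrace()) {
                    System.out.println("    at " + e);
                }
            }
        }
    }

    // findDeadlockedThreads() returns null when no threads are in a
    // monitor/synchronizer deadlock cycle.
    public static String deadlockSummary() {
        long[] ids = ManagementFactory.getThreadMXBean().findDeadlockedThreads();
        return ids == null ? "No deadlock detected"
                           : ids.length + " deadlocked thread(s)";
    }

    public static void main(String[] args) {
        printBlockedThreads();
        System.out.println(deadlockSummary());
    }
}
```

Running this from a watchdog thread while the hang is in progress would tell us whether it is a true monitor deadlock (cycle detected) or all threads merely piling up behind one stuck lock holder.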

> Deadlock while logging
> ----------------------
>
>                 Key: WFLY-3620
>                 URL: https://issues.jboss.org/browse/WFLY-3620
>             Project: WildFly
>          Issue Type: Bug
>      Security Level: Public(Everyone can see) 
>          Components: Logging
>    Affects Versions: 8.1.0.Final
>         Environment: CentOS 6.5 64bit, java7u45 64bit (and 32 bit, the same behavior)
>            Reporter: Stefan Schueffler
>            Assignee: James Perkins
>            Priority: Critical
>
> We quite often hit what appears to be a deadlock in org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474).
> Even though the {{StdioContext}} belongs to JBoss Logging, the problem occurs in our production WildFly installation two to four times a day: all threads deadlock while trying to call log.debug, log.error, or (sometimes) System.out.println from our application code, and WildFly no longer responds...
> The partial stacktrace always is similar to this one:
> {code}
>  "default task-64" prio=10 tid=0x4c539c00 nid=0x5ef9 waiting for monitor entry [0x495e0000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>         at java.io.PrintStream.println(PrintStream.java:806)
>         - waiting to lock <0x5ee0adf8> (a java.io.PrintStream)
>         at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
>         at jsp.communications.statuschange.selectStatus_jsp._jspService(selectStatus_jsp.java:413)
>         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:69)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>         at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
>         at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82)
> {code}
> While investigating the StdioContext class, I really wondered whether the "locking/checking by using a ThreadLocal" it employs could ever work in a multi-threaded environment (it should suffer from the very same problems as every "double-checked" algorithm without proper synchronization).
> If all threads are hanging on this particular lock, only a full WildFly restart recovers in our case.
> My preferred solution would be a rework of the org.jboss.stdio classes, as the idiom of using ThreadLocals for reentrancy checks is at least highly unusual.
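For illustration only (this is a simplified sketch, not the actual org.jboss.stdio code): the ThreadLocal reentrancy-guard idiom the reporter is describing looks roughly like the following. Note that the per-thread flag only stops a thread from recursively re-entering itself (e.g. a logger that writes to a stream that logs); it provides no mutual exclusion between threads, which all still contend for the shared {{java.io.PrintStream}} monitor seen in the stack trace.

```java
// Sketch of a ThreadLocal-based reentrancy guard (hypothetical class,
// written for the Java 7 environment the reporter describes).
public class ReentrancyGuard {

    // Each thread sees its own copy of this flag, so the guard is
    // strictly per-thread: it detects recursion, not contention.
    private static final ThreadLocal<Boolean> ENTERED =
            new ThreadLocal<Boolean>() {
                @Override
                protected Boolean initialValue() {
                    return Boolean.FALSE;
                }
            };

    // Returns true on first entry; false if this thread is already inside.
    public static boolean tryEnter() {
        if (ENTERED.get()) {
            return false; // reentrant call on the same thread: reject
        }
        ENTERED.set(Boolean.TRUE);
        return true;
    }

    public static void exit() {
        ENTERED.set(Boolean.FALSE);
    }

    public static void main(String[] args) {
        boolean first = tryEnter();   // true: first entry on this thread
        boolean second = tryEnter();  // false: recursion is blocked
        exit();
        System.out.println(first + " " + second);
    }
}
```

Because nothing here synchronizes across threads, a hang like the one in the stack trace would have to come from the underlying PrintStream monitor (or a lock-ordering cycle around it), not from the ThreadLocal check itself; that distinction is worth confirming with a full thread dump if the issue recurs.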



--
This message was sent by Atlassian JIRA
(v6.2.6#6264)

