[jboss-jira] [JBoss JIRA] (WFLY-3620) Deadlock while logging

James Perkins (JIRA) issues at jboss.org
Mon Jul 14 20:06:29 EDT 2014


    [ https://issues.jboss.org/browse/WFLY-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12984946#comment-12984946 ] 

James Perkins commented on WFLY-3620:
-------------------------------------

The use of the {{ThreadLocal}} isn't for thread safety; it's there to ensure there is no stack overflow if the delegate stream in turn invokes a {{printxx()}}. The {{StdioContext$DelegatingPrintStream}} doesn't do any locking, so the locking is happening in the delegate stream. I would imagine it's a {{org.jboss.stdio.LoggingOutputStream}}.

From this I can't really see what would be causing the deadlock; it looks like it's coming from the delegate stream.
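
For reference, here is a minimal sketch of that kind of {{ThreadLocal}} reentrancy guard around a delegate stream. It is an illustration only, not the actual {{org.jboss.stdio}} source; the class and field names are hypothetical:

{code}
import java.io.PrintStream;

// Hypothetical sketch of a delegating PrintStream whose ThreadLocal flag acts
// only as a reentrancy guard: if the delegate ends up calling println() again
// on the same thread, the nested call is dropped instead of recursing until
// the stack overflows. The flag provides no mutual exclusion between threads;
// any locking happens inside the delegate (PrintStream.println() synchronizes
// on the delegate instance).
class ReentrancyGuardedPrintStream extends PrintStream {

    private static final ThreadLocal<Boolean> inDelegate = new ThreadLocal<Boolean>() {
        @Override
        protected Boolean initialValue() {
            return Boolean.FALSE;
        }
    };

    private final PrintStream delegate;

    ReentrancyGuardedPrintStream(PrintStream delegate) {
        super(delegate); // satisfy the PrintStream constructor
        this.delegate = delegate;
    }

    @Override
    public void println(String line) {
        if (inDelegate.get()) {
            return; // nested call from the delegate on this thread: skip it
        }
        inDelegate.set(Boolean.TRUE);
        try {
            // Blocks here if another thread holds the delegate's monitor,
            // which matches the BLOCKED state in the quoted stack trace below.
            delegate.println(line);
        } finally {
            inDelegate.set(Boolean.FALSE);
        }
    }
}
{code}

The guard only protects against same-thread recursion; contention between threads is decided entirely by the monitor inside the delegate, so the {{ThreadLocal}} itself can't be the source of a cross-thread deadlock.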

> Deadlock while logging
> ----------------------
>
>                 Key: WFLY-3620
>                 URL: https://issues.jboss.org/browse/WFLY-3620
>             Project: WildFly
>          Issue Type: Bug
>      Security Level: Public (Everyone can see)
>          Components: Logging
>    Affects Versions: 8.1.0.Final
>         Environment: CentOS 6.5 64bit, java7u45 64bit (and 32 bit, the same behavior)
>            Reporter: Stefan Schueffler
>            Assignee: James Perkins
>            Priority: Critical
>
> We quite often hit a (possible) deadlock in org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474).
> Even though the "StdioContext" belongs to JBoss Logging, the problem occurs in our production WildFly installation two to four times a day: all threads deadlock while trying to call log.debug, log.error, or (sometimes) System.out.println from our application code, and WildFly does not respond anymore.
> The partial stack trace is always similar to this one:
> {code}
>  "default task-64" prio=10 tid=0x4c539c00 nid=0x5ef9 waiting for monitor entry [0x495e0000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>         at java.io.PrintStream.println(PrintStream.java:806)
>         - waiting to lock <0x5ee0adf8> (a java.io.PrintStream)
>         at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
>         at jsp.communications.statuschange.selectStatus_jsp._jspService(selectStatus_jsp.java:413)
>         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:69)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>         at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
>         at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82)
> {code}
> While investigating the StdioContext class, I really wondered whether the "locking/checking by using a ThreadLocal" could work in a multi-threaded environment (it should have the very same problems as every "double checking" algorithm without proper synchronization).
> If all threads are hanging on this particular lock, only a full WildFly restart recovers the server in our case.
> My preferred solution would be a rework of the org.jboss.stdio classes, as the idiom of using ThreadLocals for reentrancy checks is at least highly unusual.



--
This message was sent by Atlassian JIRA
(v6.2.6#6264)

