[ https://issues.jboss.org/browse/WFCORE-40?page=com.atlassian.jira.plugin.... ]
Stefan Schueffler commented on WFCORE-40:
-----------------------------------------
Unfortunately I'm not able to provide an easily reproducible starter project. None of
my test projects (using JMeter for load, different logging setups, System.out, stack
traces, and so on) leads to the same locking/blocking of threads.
Apart from this, in our live production environment the error is always the same: either
we reduce logging to an absolute minimum (i.e. no logging at all), or sooner or later we
run into the locking/waiting-for-lock situation (at least once a day).
I found this issue in JIRA, which sounds the same in terms of problem description and
behavior: an endless loop in fillInStackTrace. (Un)fortunately, that bug is fixed
(according to its ticket), and the fixed version is shipped in WildFly - so we do not
have exactly the same problem here, but a very similar one:
https://issues.jboss.org/browse/XNIO-215
Deadlock while logging
----------------------
Key: WFCORE-40
URL: https://issues.jboss.org/browse/WFCORE-40
Project: WildFly Core
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Logging
Environment: CentOS 6.5 64bit, java7u45 64bit (and 32 bit, the same behavior)
Reporter: Stefan Schueffler
Assignee: James Perkins
Priority: Critical
We quite often hit what looks like a deadlock in
org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
Even though StdioContext belongs to JBoss Logging, the problem occurs in our
production WildFly installation two to four times a day: all threads deadlock while
trying to call log.debug, log.error, or (sometimes) System.out.println from our
application code, and WildFly no longer responds.
The partial stack trace is always similar to this one:
{code}
"default task-64" prio=10 tid=0x4c539c00 nid=0x5ef9 waiting for monitor entry
[0x495e0000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(PrintStream.java:806)
- waiting to lock <0x5ee0adf8> (a java.io.PrintStream)
at
org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
at
jsp.communications.statuschange.selectStatus_jsp._jspService(selectStatus_jsp.java:413)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:69)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82)
{code}
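For illustration, the following minimal sketch (all names hypothetical; this is not code
from WildFly or from our application) reproduces the contention pattern visible in those
frames: PrintStream.println() synchronizes on the stream instance, so as soon as one
writer stalls inside a shared stream, every other thread parks as BLOCKED on that monitor.
{code}
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

// Hypothetical sketch: many threads funneled through one shared PrintStream.
public class SharedStreamContention {
    public static void main(String[] args) {
        // A deliberately slow sink stands in for a stalled writer.
        final PrintStream shared = new PrintStream(new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        for (int i = 0; i < 8; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    // println() synchronizes on 'shared'; while the first
                    // writer is stalled, the rest block on its monitor.
                    shared.println("hello from task-" + id);
                }
            }, "task-" + id).start();
        }
        // A jstack dump taken now shows one RUNNABLE writer and seven
        // threads BLOCKED on the PrintStream monitor, as in the trace above.
    }
}
{code}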
While investigating the StdioContext class, I really wondered whether the
locking/checking via a ThreadLocal used there can work at all in a multi-threaded
environment (it should suffer from the very same problems as every double-checking
algorithm without proper synchronization).
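To make concrete what I mean by that idiom, here is a simplified, assumed sketch of a
ThreadLocal-based reentrancy check (not the actual org.jboss.stdio source): the
per-thread flag can only catch same-thread recursion; it does nothing about contention
between threads on the delegate's monitor.
{code}
import java.io.PrintStream;

// Assumed, simplified illustration of a ThreadLocal reentrancy check;
// not the actual org.jboss.stdio code.
final class ReentrancyGuardedStream {
    // Per-thread flag: true while this thread is inside println().
    private static final ThreadLocal<Boolean> IN_WRITE = new ThreadLocal<Boolean>() {
        @Override
        protected Boolean initialValue() {
            return Boolean.FALSE;
        }
    };

    private final PrintStream delegate;

    ReentrancyGuardedStream(PrintStream delegate) {
        this.delegate = delegate;
    }

    void println(String s) {
        if (IN_WRITE.get()) {
            return; // same-thread re-entry, e.g. logging from within logging
        }
        IN_WRITE.set(Boolean.TRUE);
        try {
            // The per-thread flag cannot help here: the delegate still
            // synchronizes on its own shared monitor, so every thread
            // funnels through that one lock regardless of the check.
            delegate.println(s);
        } finally {
            IN_WRITE.set(Boolean.FALSE);
        }
    }
}
{code}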
If all threads are hanging on this particular lock, in our case only a full WildFly
restart recovers the server.
My preferred solution would be a rework of the org.jboss.stdio.* classes, as the idiom
of using ThreadLocals for reentrancy checks is, at the very least, highly unusual.
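One conceivable direction for such a rework (purely illustrative, and assuming every
writer goes through the wrapper) would be an explicit ReentrantLock with a bounded
tryLock, so that a stuck writer degrades to dropped output instead of parking every
other thread indefinitely:
{code}
import java.io.PrintStream;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only, not a proposed patch for org.jboss.stdio.
final class NonBlockingDelegatingStream {
    private final ReentrantLock lock = new ReentrantLock();
    private final PrintStream delegate;

    NonBlockingDelegatingStream(PrintStream delegate) {
        this.delegate = delegate;
    }

    void println(String s) {
        if (lock.isHeldByCurrentThread()) {
            return; // same-thread re-entry is trivially detectable here
        }
        boolean acquired = false;
        try {
            // Bounded wait: if the stream is wedged, give up instead of
            // blocking forever on a monitor as in the stack trace above.
            acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!acquired) {
            return; // drop the line rather than deadlock the server
        }
        try {
            delegate.println(s);
        } finally {
            lock.unlock();
        }
    }
}
{code}
Whether dropping output is acceptable is of course a trade-off; the point is only that
the reentrancy check and the cross-thread locking are two separate problems.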
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)