[ https://issues.jboss.org/browse/WFLY-3652?page=com.atlassian.jira.plugin.... ]
Seth Miller edited comment on WFLY-3652 at 9/25/14 6:45 PM:
------------------------------------------------------------
I think there were 3 separate leaks in this case:
* Client keeping the connection open - fixed by setting the read-timeout available in the 8.x
branch (I used Tom's branch until it was committed to wildfly/8.x recently). This caused
continuously growing TCP ESTABLISHED connections from clients that did not close their
sockets cleanly. It is also reproducible without async: launch JMeter and force-kill it
during an HTTP GET load test with no read-timeout set. See the CLI sketch after this list.
* With versions of the CometD async servlet prior to the latest, file handle growth until
the server undeploys itself, related to ServletOutputStream.print() being called (a sketch
of the spec rule behind this exception follows the list):
{code}
2014-09-19 14:42:41,677 ERROR[stderr] Exception in thread "default task-371" java.lang.IllegalStateException: UT010003: Cannot call getInputStream(), getReader() already called
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.servlet.spec.HttpServletRequestImpl.getInputStream(HttpServletRequestImpl.java:576)
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.servlet.core.ServletBlockingHttpExchange.getInputStream(ServletBlockingHttpExchange.java:51)
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.server.HttpServerExchange.getInputStream(HttpServerExchange.java:1325)
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.servlet.spec.AsyncContextImpl$3.run(AsyncContextImpl.java:311)
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.servlet.spec.AsyncContextImpl$6.run(AsyncContextImpl.java:450)
2014-09-19 14:42:41,678 ERROR[stderr]     at io.undertow.servlet.spec.AsyncContextImpl$TaskDispatchRunnable.run(AsyncContextImpl.java:567)
2014-09-19 14:42:41,679 ERROR[stderr]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
2014-09-19 14:42:41,679 ERROR[stderr]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
2014-09-19 14:42:41,679 ERROR[stderr]     at java.lang.Thread.run(Thread.java:745)
{code}
That can be fixed by updating CometD, which I believe works around the issue (or it may
simply have been a CometD bug; I am not sure what the servlet spec requires here).
* For the last leak - memory retained via AsyncContext - I filed
https://issues.jboss.org/browse/UNDERTOW-316. I am looking into how the latest CometD
3.0.1 release handles AsyncContext.complete() and will follow up on that part in that
ticket.
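For the first bullet, a rough CLI sketch of how the read-timeout could be applied once the attribute is exposed on the Undertow listener; the listener name and the 30-second value are placeholders, not the settings used here:
{code}
# Hypothetical jboss-cli.sh sketch, assuming a build where read-timeout is
# available on the Undertow HTTP listener (value in milliseconds).
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=read-timeout, value=30000)
reload
{code}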
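The stack trace in the second bullet comes down to the servlet-spec rule that a request body can be consumed through getReader() or getInputStream(), but not both. A minimal, hypothetical servlet showing just that rule (it does not reproduce the async dispatch path in the trace):
{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet, only to illustrate the rule behind UT010003.
@WebServlet("/either-or")
public class EitherOrServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        req.getReader();      // request body is now bound to the Reader
        req.getInputStream(); // throws IllegalStateException: getReader() already called
    }
}
{code}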
Thanks!
Network connection leak
-----------------------
Key: WFLY-3652
URL: https://issues.jboss.org/browse/WFLY-3652
Project: WildFly
Issue Type: Bug
Components: Web (Undertow)
Affects Versions: 8.1.0.Final
Environment: Linux 2.6.38-16-server
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Reporter: Jan Vanhercke
Assignee: Stuart Douglas
Fix For: 9.0.0.Alpha1
When using asynchronous servlets and AsyncListeners for long polling, we observe a
connection leak in the Undertow subsystem.
Heap dumps show a large number of org.xnio.nio.NioSocketConduit,
io.undertow.server.protocol.http.HttpServerConnection and related objects.
However, the effective number of client connections is far lower: nearly all AsyncContext
instances we find are in a completed state, and lsof output returns a large number of
sockets with 'can't identify protocol' entries, indicating that the sockets are kept open
by the JVM but have in fact been half-closed by the network stack.
Not all connections appear to be leaking, but over time, depending on the load, the
server instance fills up.
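For context, a rough sketch of the long-polling pattern described above - startAsync() plus an AsyncListener, with complete() called when the poll is answered or times out. The servlet name, timeout and response body are made up for illustration; this is neither the reporter's code nor CometD's:
{code}
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical long-polling servlet illustrating the AsyncContext lifecycle.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30000); // long-poll window in milliseconds

        ctx.addListener(new AsyncListener() {
            @Override public void onComplete(AsyncEvent event) { /* release per-request state */ }
            @Override public void onTimeout(AsyncEvent event) throws IOException {
                // Nothing to deliver in time: answer the poll and complete the exchange.
                event.getAsyncContext().getResponse().getWriter().write("{}");
                event.getAsyncContext().complete();
            }
            @Override public void onError(AsyncEvent event) { event.getAsyncContext().complete(); }
            @Override public void onStartAsync(AsyncEvent event) { }
        });
        // Elsewhere, when data arrives: write to ctx.getResponse() and call ctx.complete().
    }
}
{code}
Note that the description above reports most AsyncContext instances already in a completed state, so the retention appears to occur even after complete() returns - that part is tracked in UNDERTOW-316, mentioned in the comment above.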