[jboss-jira] [JBoss JIRA] (WFLY-4696) OutOfMemory DirectByteBuffer XNIO
Carlos Rodríguez Aguado (JIRA)
issues at jboss.org
Mon Jul 13 13:08:05 EDT 2015
[ https://issues.jboss.org/browse/WFLY-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13088985#comment-13088985 ]
Carlos Rodríguez Aguado edited comment on WFLY-4696 at 7/13/15 1:07 PM:
------------------------------------------------------------------------
The problem has not been solved.
The memory leak continues. *We have attached a video showing the problem: !wfly.mp4!*
In the video we have just started WildFly 8.2 Final. We also opened jvisualvm to monitor the direct buffer pools, where memory is not being released. We accessed our web application (servlets, JSPs, AJAX, jQuery) and began reproducing the problem simply by refreshing/reloading (F5) the browser very fast. The server responds, but the channel is broken before the response reaches the client (browser). After a few seconds the direct buffer pool memory had increased, and even after forcing a GC that memory is not released from the direct buffer pool (it is only released from the Java heap).
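The kind of client loop that provokes the same broken-channel condition looks roughly like this (a minimal sketch, not our actual test harness; the host, port, and path are placeholders, and it assumes the server certificate is trusted by the default trust store):
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class AbortingClient
{
    public static void main(String[] args) throws Exception
    {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        for (int i = 0; i < 10000; i++)
        {
            // open a TLS connection, send a request, then close without
            // reading the response: the server sees a broken channel while
            // it is still writing, much like a fast F5 in the browser
            try (SSLSocket socket = (SSLSocket) factory.createSocket("localhost", 8443))
            {
                OutputStream out = socket.getOutputStream();
                out.write("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
                out.flush();
            }
        }
    }
}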
We have spent a lot of time on this problem, investigating where the memory was not being released. We found that three pooled direct buffers of class org.xnio.ssl.JsseSslConduitEngine were never freed:
/** The buffer into which incoming SSL data is written. */
private final Pooled<ByteBuffer> receiveBuffer;
/** The buffer from which outbound SSL data is sent. */
private final Pooled<ByteBuffer> sendBuffer;
/** The buffer into which inbound clear data is written. */
private final Pooled<ByteBuffer> readBuffer;
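These fields are Pooled<ByteBuffer> handles taken from XNIO's ByteBufferSlicePool; the underlying direct-memory slice only returns to the pool when free() is called on the handle, so a handle that is never freed pins its slice forever. A minimal sketch of that lifecycle (the pool sizes are illustrative):
import org.xnio.BufferAllocator;
import org.xnio.ByteBufferSlicePool;
import org.xnio.Pool;
import org.xnio.Pooled;
import java.nio.ByteBuffer;

public class PooledLifecycle
{
    public static void main(String[] args)
    {
        // a direct-memory slice pool: 16 KB slices carved out of 1 MB regions
        Pool<ByteBuffer> pool = new ByteBufferSlicePool(
                BufferAllocator.DIRECT_BYTE_BUFFER_ALLOCATOR, 16 * 1024, 1024 * 1024);
        Pooled<ByteBuffer> pooled = pool.allocate();
        try
        {
            ByteBuffer buffer = pooled.getResource();
            buffer.putInt(42); // use the slice
        }
        finally
        {
            // without this call the slice never returns to the pool;
            // this is exactly what we see leaking on broken channels
            pooled.free();
        }
    }
}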
We investigated how to release these buffers and finally changed the code of the write() methods in class io.undertow.server.protocol.http.HttpResponseConduit, adding a try/catch (highlighted in red) that calls generic cleanup logic to free the HttpServerExchange:
public int write(final ByteBuffer src) throws IOException
{
    int oldState = this.state;
    int state = oldState & MASK_STATE;
    int alreadyWritten = 0;
    int originalRemaining = -1;
    {color:red}try
    {{color}
        if (state != 0)
        {
            originalRemaining = src.remaining();
            state = processWrite(state, src, -1, -1);
            if (state != 0)
            {
                return 0;
            }
            alreadyWritten = originalRemaining - src.remaining();
            if (allAreSet(oldState, FLAG_SHUTDOWN))
            {
                next.terminateWrites();
                throw new ClosedChannelException();
            }
        }
        if (alreadyWritten != originalRemaining)
        {
            return next.write(src) + alreadyWritten;
        }
        return alreadyWritten;
    {color:red}}
    catch (IOException | RuntimeException re)
    {
        try
        {
            exchange.getRequestChannel().suspendReads();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        try
        {
            exchange.getResponseChannel().shutdownWrites();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        try
        {
            ((HttpServerConnection) exchange.getConnection()).getChannel().getSinkChannel().shutdownWrites();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        IoUtils.safeClose(exchange.getConnection());
        throw re;
    }{color}
    finally
    {
        // keep the flag bits from oldState, merge in the updated state bits
        this.state = oldState & ~MASK_STATE | state;
    }
}
public long write(final ByteBuffer[] srcs) throws IOException
{
    return write(srcs, 0, srcs.length);
}

public long write(final ByteBuffer[] srcs, final int offset, final int length) throws IOException
{
    if (length == 0)
    {
        return 0L;
    }
    int oldVal = state;
    int state = oldVal & MASK_STATE;
    {color:red}try
    {{color}
        if (state != 0)
        {
            long rem = Buffers.remaining(srcs, offset, length);
            state = processWrite(state, srcs, offset, length);
            long ret = rem - Buffers.remaining(srcs, offset, length);
            if (state != 0)
            {
                return ret;
            }
            if (allAreSet(oldVal, FLAG_SHUTDOWN))
            {
                next.terminateWrites();
                throw new ClosedChannelException();
            }
            // we don't attempt to write again
            return ret;
        }
        return length == 1 ? next.write(srcs[offset]) : next.write(srcs, offset, length);
    {color:red}}
    catch (IOException | RuntimeException re)
    {
        try
        {
            exchange.getRequestChannel().suspendReads();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        try
        {
            exchange.getResponseChannel().shutdownWrites();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        try
        {
            ((HttpServerConnection) exchange.getConnection()).getChannel().getSinkChannel().shutdownWrites();
        }
        catch (Exception e)
        {
            // ignored: best-effort cleanup
        }
        IoUtils.safeClose(exchange.getConnection());
        throw re;
    }{color}
    finally
    {
        // keep the flag bits from oldVal, merge in the updated state bits
        this.state = oldVal & ~MASK_STATE | state;
    }
}
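The catch bodies above are duplicated between the two write() methods; the generic cleanup logic we mention could be factored out like this (a sketch only; the name freeExchange is ours, not an Undertow API):
private void freeExchange(final HttpServerExchange exchange)
{
    // hypothetical helper: best-effort teardown of a broken exchange
    try
    {
        exchange.getRequestChannel().suspendReads();
    }
    catch (Exception ignored)
    {
    }
    try
    {
        exchange.getResponseChannel().shutdownWrites();
    }
    catch (Exception ignored)
    {
    }
    try
    {
        ((HttpServerConnection) exchange.getConnection()).getChannel().getSinkChannel().shutdownWrites();
    }
    catch (Exception ignored)
    {
    }
    IoUtils.safeClose(exchange.getConnection());
}
Each catch block would then reduce to: catch (IOException | RuntimeException re) { freeExchange(exchange); throw re; }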
We tested these changes and obtained an improvement:
* Before: the OutOfMemory was reproduced in 4 or 5 days.
* After: more than 2 weeks running and the OutOfMemory has not been reproduced. *But the memory is still increasing!!* Maybe another memory leak exists, or the fix we applied is not the right one; the monitoring sketch below is how we track the growth.
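To watch that growth without jvisualvm, the JVM's "direct" buffer pool can be polled in-process; a minimal sketch using the standard BufferPoolMXBean (the 10-second interval is arbitrary):
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DirectPoolMonitor
{
    public static void main(String[] args)
    {
        // the "direct" pool is the same one jvisualvm displays
        BufferPoolMXBean direct = ManagementFactory
                .getPlatformMXBeans(BufferPoolMXBean.class).stream()
                .filter(b -> "direct".equals(b.getName()))
                .findFirst().orElseThrow(IllegalStateException::new);
        // steadily growing numbers under a constant load point to
        // buffers that are never freed
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                () -> System.out.printf("direct pool: count=%d, used=%d bytes%n",
                        direct.getCount(), direct.getMemoryUsed()),
                0, 10, TimeUnit.SECONDS);
    }
}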
We also tried WildFly 9 CR1, but the same problem occurs.
We don't understand how this problem does not affect more WildFly users.
Please, can you help us?
> OutOfMemory DirectByteBuffer XNIO
> ---------------------------------
>
> Key: WFLY-4696
> URL: https://issues.jboss.org/browse/WFLY-4696
> Project: WildFly
> Issue Type: Bug
> Components: IO, Web (Undertow)
> Affects Versions: 8.1.0.Final, 8.2.0.Final
> Reporter: Carlos Rodríguez Aguado
> Assignee: Stuart Douglas
> Priority: Blocker
> Fix For: 9.0.0.CR2, 10.0.0.Alpha3
>
> Attachments: wlfy.mp4
>
>
> I get these errors constantly in my server when a web connection is interrupted from the browser, for instance:
> 11:50:45,301 ERROR [stderr] (default task-339) Exception in thread "default task-339" java.nio.BufferOverflowException
> 11:50:45,301 ERROR [stderr] (default task-339) at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:363)
> 11:50:45,301 ERROR [stderr] (default task-339) at java.nio.ByteBuffer.put(ByteBuffer.java:859)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.util.HttpString.appendTo(HttpString.java:204)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.server.protocol.http.HttpResponseConduit.processWrite(HttpResponseConduit.java:150)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.server.protocol.http.HttpResponseConduit.flush(HttpResponseConduit.java:629)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.conduits.AbstractFixedLengthStreamSinkConduit.flush(AbstractFixedLengthStreamSinkConduit.java:205)
> 11:50:45,301 ERROR [stderr] (default task-339) at org.xnio.conduits.ConduitStreamSinkChannel.flush(ConduitStreamSinkChannel.java:162)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.channels.DetachableStreamSinkChannel.flush(DetachableStreamSinkChannel.java:100)
> 11:50:45,301 ERROR [stderr] (default task-339) at io.undertow.server.HttpServerExchange.closeAndFlushResponse(HttpServerExchange.java:1489)
> 11:50:45,317 ERROR [stderr] (default task-339) at io.undertow.server.HttpServerExchange.endExchange(HttpServerExchange.java:1470)
> 11:50:45,317 ERROR [stderr] (default task-339) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:201)
> 11:50:45,317 ERROR [stderr] (default task-339) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727)
> 11:50:45,317 ERROR [stderr] (default task-339) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 11:50:45,317 ERROR [stderr] (default task-339) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 11:50:45,317 ERROR [stderr] (default task-339) at java.lang.Thread.run(Thread.java:745)
> And then, I think these errors lead to an OutOfMemory crash:
> 14:23:09,592 ERROR [org.xnio.listener] (default I/O-3) XNIO001007: A channel event listener threw an exception: java.lang.OutOfMemoryError
> at sun.misc.Unsafe.allocateMemory(Native Method) [rt.jar:1.8.0_20]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:127) [rt.jar:1.8.0_20]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) [rt.jar:1.8.0_20]
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:149) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ssl.JsseSslConduitEngine.<init>(JsseSslConduitEngine.java:143) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ssl.JsseSslStreamConnection.<init>(JsseSslStreamConnection.java:71) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ssl.JsseAcceptingSslStreamConnection.accept(JsseAcceptingSslStreamConnection.java:45) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ssl.JsseAcceptingSslStreamConnection.accept(JsseAcceptingSslStreamConnection.java:37) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ssl.AbstractAcceptingSslChannel.accept(AbstractAcceptingSslChannel.java:187) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:289) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ChannelListeners$DelegatingChannelListener.handleEvent(ChannelListeners.java:1092) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.nio.NioTcpServerHandle.handleReady(NioTcpServerHandle.java:53) [xnio-nio-3.2.2.Final.jar:3.2.2.Final]
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:539) [xnio-nio-3.2.2.Final.jar:3.2.2.Final]
> I have also found that if I perform a full GC manually the server recovers, but it cannot recover by itself through other types of GC.
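> This matches how direct buffer memory is reclaimed: the native memory behind a DirectByteBuffer is released only after the buffer object itself is garbage collected, which a full GC forces. A small demo of that behaviour (a sketch; the 64 MB allocation is arbitrary):
> import java.lang.management.BufferPoolMXBean;
> import java.lang.management.ManagementFactory;
> import java.nio.ByteBuffer;
>
> public class DirectGcDemo
> {
>     public static void main(String[] args) throws Exception
>     {
>         BufferPoolMXBean direct = ManagementFactory
>                 .getPlatformMXBeans(BufferPoolMXBean.class).stream()
>                 .filter(b -> "direct".equals(b.getName()))
>                 .findFirst().orElseThrow(IllegalStateException::new);
>         ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
>         System.out.println("after allocate: " + direct.getMemoryUsed());
>         buf = null;   // drop the only reference to the direct buffer
>         System.gc();  // a full GC runs the buffer's cleaner, releasing native memory
>         Thread.sleep(500);
>         System.out.println("after GC: " + direct.getMemoryUsed());
>     }
> }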
> This is the trace for version 8.2 of WildFly:
> 17:17:16,957 ERROR [io.undertow.request] (default task-49) Undertow request failed HttpServerExchange{ GET /modulab/servlet/ShowPDFReportServlet}: java.nio.BufferOverflowException
> at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:363) [rt.jar:1.8.0_20]
> at java.nio.ByteBuffer.put(ByteBuffer.java:859) [rt.jar:1.8.0_20]
> at io.undertow.util.HttpString.appendTo(HttpString.java:204)
> at io.undertow.server.protocol.http.HttpResponseConduit.processWrite(HttpResponseConduit.java:166)
> at io.undertow.server.protocol.http.HttpResponseConduit.write(HttpResponseConduit.java:564)
> at io.undertow.conduits.AbstractFixedLengthStreamSinkConduit.write(AbstractFixedLengthStreamSinkConduit.java:106)
> at org.xnio.conduits.Conduits.writeFinalBasic(Conduits.java:132) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at io.undertow.conduits.AbstractFixedLengthStreamSinkConduit.writeFinal(AbstractFixedLengthStreamSinkConduit.java:175)
> at org.xnio.conduits.ConduitStreamSinkChannel.writeFinal(ConduitStreamSinkChannel.java:104) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at io.undertow.channels.DetachableStreamSinkChannel.writeFinal(DetachableStreamSinkChannel.java:194)
> at io.undertow.server.HttpServerExchange$WriteDispatchChannel.writeFinal(HttpServerExchange.java:1829)
> at io.undertow.servlet.spec.ServletOutputStreamImpl.writeBufferBlocking(ServletOutputStreamImpl.java:565)
> at io.undertow.servlet.spec.ServletOutputStreamImpl.close(ServletOutputStreamImpl.java:600)
> at io.undertow.servlet.spec.HttpServletResponseImpl.closeStreamAndWriter(HttpServletResponseImpl.java:497)
> at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:581)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:308)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:247)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:76)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:166)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:197)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:759)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_20]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_20]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_20]
> 10:57:12,389 ERROR [org.xnio.listener] (default I/O-4) XNIO001007: A channel event listener threw an exception: java.lang.OutOfMemoryError
> at sun.misc.Unsafe.allocateMemory(Native Method) [rt.jar:1.8.0_20]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:127) [rt.jar:1.8.0_20]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) [rt.jar:1.8.0_20]
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:143) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ssl.JsseSslConduitEngine.<init>(JsseSslConduitEngine.java:146) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ssl.JsseSslStreamConnection.<init>(JsseSslStreamConnection.java:71) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ssl.JsseAcceptingSslStreamConnection.accept(JsseAcceptingSslStreamConnection.java:45) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ssl.JsseAcceptingSslStreamConnection.accept(JsseAcceptingSslStreamConnection.java:37) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ssl.AbstractAcceptingSslChannel.accept(AbstractAcceptingSslChannel.java:187) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:289) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ChannelListeners$DelegatingChannelListener.handleEvent(ChannelListeners.java:1092) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) [xnio-api-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.nio.NioTcpServerHandle.handleReady(NioTcpServerHandle.java:53) [xnio-nio-3.3.0.Final.jar:3.3.0.Final]
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:539) [xnio-nio-3.3.0.Final.jar:3.3.0.Final]