[JBoss JIRA] Commented: (NETTY-315) Memory Leak when HttpContentCompressor is in server pipeline

Greg Haines (JIRA) jira-events at lists.jboss.org
Wed May 19 09:53:06 EDT 2010


    [ https://jira.jboss.org/browse/NETTY-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12531473#action_12531473 ] 

Greg Haines commented on NETTY-315:
-----------------------------------

Memory is holding steady under a barrage of requests with GZIP enabled.
Looks like you nailed it, Trustin! Thanks!
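
(For anyone reproducing this: a driver along these lines is enough to exercise the compressor path under load; the URL, port, and request count below are placeholders, not values from the original test.)

	import java.io.InputStream;
	import java.net.HttpURLConnection;
	import java.net.URL;

	// Hedged sketch of a gzip load driver: request the same large file
	// repeatedly with "Accept-Encoding: gzip" and watch heap usage.
	public final class GzipLoadDriver
	{
		public static void main(final String[] args) throws Exception
		{
			final URL url = new URL("http://localhost:8080/largefile"); // placeholder URL
			for (int i = 0; i < 100000; i++)
			{
				final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
				conn.setRequestProperty("Accept-Encoding", "gzip");
				final InputStream in = conn.getInputStream();
				final byte[] buf = new byte[8192];
				while (in.read(buf) != -1)
				{
					// drain the response body
				}
				in.close();
			}
		}
	}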

> Memory Leak when HttpContentCompressor is in server pipeline
> ------------------------------------------------------------
>
>                 Key: NETTY-315
>                 URL: https://jira.jboss.org/browse/NETTY-315
>             Project: Netty
>          Issue Type: Bug
>          Components: Handler
>    Affects Versions: 3.2.0.CR1
>         Environment: Linux version 2.6.18-164.el5 (CentOS)
> Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)
>            Reporter: Greg Haines
>            Assignee: Trustin Lee
>            Priority: Critical
>             Fix For: 3.2.0.Final
>
>
> I'm encountering a crippling memory leak when a client request carries the "ACCEPT-ENCODING: gzip" header and requests numerous large files from my Netty-based HTTP server. It works for a few thousand requests, then inevitably bombs with a java.lang.OutOfMemoryError: Java heap space. I tried playing with the GC tuning options to no avail.
> I ran the resulting heap dump through the Eclipse Memory Analyzer Tool, and this was the result:
> The class "org.jboss.netty.channel.AbstractChannel", loaded by "sun.misc.Launcher$AppClassLoader @ 0xb6c345f0", occupies 1,555,736,280 (99.95%) bytes. The memory is accumulated in one instance of "org.jboss.netty.util.internal.ConcurrentHashMap$Segment[]" loaded by "sun.misc.Launcher$AppClassLoader @ 0xb6c345f0".
> Keywords:
> org.jboss.netty.util.internal.ConcurrentHashMap$Segment[]
> sun.misc.Launcher$AppClassLoader @ 0xb6c345f0
> org.jboss.netty.channel.AbstractChannel
> It appears that the ConcurrentHashMap variable called allChannels is the culprit. All of the values are EmbeddedChannel instances.
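>
> To make the failure mode concrete (an illustrative sketch of the pattern, not Netty's actual source): if every channel registers itself in a static map that is cleaned up only when the channel closes, embedded channels that never close accumulate forever:
> 	import java.util.concurrent.ConcurrentHashMap;
> 	import java.util.concurrent.ConcurrentMap;
>
> 	// Illustrative sketch of the leak pattern (not Netty's actual code):
> 	// a global registry that forgets a channel only when it is closed.
> 	final class ChannelRegistry
> 	{
> 		static final ConcurrentMap<Integer, Object> allChannels =
> 				new ConcurrentHashMap<Integer, Object>();
>
> 		static Integer allocateId(final Object channel)
> 		{
> 			Integer id = Integer.valueOf(System.identityHashCode(channel));
> 			while (allChannels.putIfAbsent(id, channel) != null)
> 			{
> 				id = Integer.valueOf(id.intValue() + 1);
> 			}
> 			return id; // the entry is removed only when the channel is closed
> 		}
> 	}
> 	// HttpContentCompressor's internal embedded channels are never closed,
> 	// so their entries are never removed and the heap fills with channels.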
> Here is my setup:
> Server startup command:
> java -server -Xmx1536M -Xms1536M -Xmn512M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -jar uvs-1.0-SNAPSHOT.jar
> Server pipeline factory:
> 	public ChannelPipeline getPipeline() throws Exception
> 	{
> 		final ChannelPipeline pipeline = Channels.pipeline();
> 		pipeline.addLast("decoder", new HttpRequestDecoder());
> 		pipeline.addLast("encoder", new HttpResponseEncoder());
> 		pipeline.addLast("deflater", new HttpContentCompressor());
> 		pipeline.addLast("uvsHandler", this.uvsRequestHandler); // My @Sharable handler
> 		return pipeline;
> 	}
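>
> For context, this is how such a factory is typically wired into a Netty 3 server bootstrap (the factory class name and port here are illustrative, not from the report):
> 	final ServerBootstrap bootstrap = new ServerBootstrap(
> 			new NioServerSocketChannelFactory(
> 					Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
> 	bootstrap.setPipelineFactory(new UvsPipelineFactory()); // illustrative name
> 	bootstrap.bind(new InetSocketAddress(8080)); // illustrative port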
> Server handler code:
> 	final RandomAccessFile raf = new RandomAccessFile(file, "r");
> 	final HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
> 	response.setHeader(LAST_MODIFIED, HttpUtils.getLastModifiedString(file));
> 	response.setHeader(CONTENT_TYPE, TECH_BINARY_CONTENT_TYPE);
> 	HttpHeaders.setContentLength(response, file.length());
> 	final FileChannel fc = raf.getChannel();
> 	response.setContent(ChannelBuffers.wrappedBuffer(fc.map(MapMode.READ_ONLY, 0, fc.size())));
> 	final ChannelFuture future = channel.write(response);
> 	future.addListener(new FileChannelCloseListener(fc)); // simply calls fc.close() in its operationComplete()
> 	if (!keepAlive)
> 	{
> 		future.addListener(ChannelFutureListener.CLOSE);
> 	}
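>
> The FileChannelCloseListener is not included in the report; from its description above ("simply calls fc.close() in its operationComplete()"), it would look roughly like this:
> 	import java.nio.channels.FileChannel;
> 	import org.jboss.netty.channel.ChannelFuture;
> 	import org.jboss.netty.channel.ChannelFutureListener;
>
> 	final class FileChannelCloseListener implements ChannelFutureListener
> 	{
> 		private final FileChannel fc;
>
> 		FileChannelCloseListener(final FileChannel fc)
> 		{
> 			this.fc = fc;
> 		}
>
> 		public void operationComplete(final ChannelFuture future) throws Exception
> 		{
> 			this.fc.close(); // release the mapped file once the write completes
> 		}
> 	}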
> Client pipeline:
> 	public ChannelPipeline getPipeline() throws Exception
> 	{
> 		final ChannelPipeline pipeline = Channels.pipeline();
> 		pipeline.addLast("codec", new HttpClientCodec());
> 		pipeline.addLast("inflater", new HttpContentDecompressor());
> 		pipeline.addLast("handler", this.httpResponseHandler); // My @Sharable handler
> 		return pipeline;
> 	}
> Client handler code:
> 	public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e)
> 	{
> 		final TDFState tdfState = (TDFState) ctx.getAttachment();
> 		if (!tdfState.isReadingChunks())
> 		{
> 			final HttpResponse response = (HttpResponse) e.getMessage();
> 			tdfState.setResponse(response);
> 			if (response.isChunked())
> 			{
> 				tdfState.setReadingChunks(true);
> 			}
> 			else
> 			{
> 				final int b = response.getContent().readableBytes();
> 				if (b > 0)
> 				{
> 					final ByteBuffer buf = ByteBuffer.allocate(b);
> 					response.getContent().readBytes(buf); // drain the content; only the byte count is kept
> 				}
> 				tdfState.addBytes(b);
> 				tdfState.requestFulfilled();
> 			}
> 		}
> 		else
> 		{
> 			final HttpChunk chunk = (HttpChunk) e.getMessage();
> 			if (chunk.isLast())
> 			{
> 				tdfState.setReadingChunks(false);
> 				tdfState.requestFulfilled();
> 			}
> 			else
> 			{
> 				final int b = chunk.getContent().readableBytes();
> 				final ByteBuffer buf = ByteBuffer.allocate(b);
> 				chunk.getContent().readBytes(buf); // drain the chunk; only the byte count is kept
> 				tdfState.addBytes(b);
> 			}
> 		}
> 	}
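>
> TDFState is likewise not shown; judging only from the calls above, a minimal version would track roughly this (field names and the bookkeeping are guesses):
> 	import java.util.concurrent.atomic.AtomicLong;
> 	import org.jboss.netty.handler.codec.http.HttpResponse;
>
> 	final class TDFState
> 	{
> 		private volatile boolean readingChunks;
> 		private volatile HttpResponse response;
> 		private final AtomicLong bytesReceived = new AtomicLong();
>
> 		boolean isReadingChunks() { return this.readingChunks; }
> 		void setReadingChunks(final boolean reading) { this.readingChunks = reading; }
> 		void setResponse(final HttpResponse response) { this.response = response; }
> 		void addBytes(final int b) { this.bytesReceived.addAndGet(b); }
> 		void requestFulfilled() { /* e.g. count down a latch or fire the next request */ }
> 	}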
