Pooling of direct ByteBuffer

Derek deminix at gmail.com
Thu May 13 17:10:52 EDT 2010


I see that DirectChannelBufferFactory.java reduces the number of direct
ByteBuffer allocations and deallocations, and I can see how that helps,
but I don't see how it addresses all of the shortcomings of JVM direct
memory.  Given the number of references into these large direct buffers
(all the small slices), it seems likely that a large buffer will survive
young-generation collection simply because one of its small slices does,
and will be kept around until an old-generation collection.
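
For concreteness, here's roughly the pattern I have in mind; I haven't
traced Netty's exact code paths, so treat this as an illustration of the
slicing behavior rather than a claim about the implementation:

import java.nio.ByteBuffer;

public class SliceRetention {
    public static void main(String[] args) {
        // One large direct buffer, carved into small slices on demand.
        ByteBuffer big = ByteBuffer.allocateDirect(1 << 20); // 1 MiB parent
        big.position(0);
        big.limit(256);
        ByteBuffer slice = big.slice(); // shares the parent's native memory

        // Even after 'big' itself becomes unreachable, 'slice' keeps an
        // internal reference to it, so the full 1 MiB of native memory
        // cannot be freed until the slice is collected -- which, once the
        // parent has been promoted, usually means an old-gen collection.
        big = null;
        System.out.println(slice.capacity()); // parent memory still pinned
    }
}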

If old-generation collections are infrequent, there may be quite a few of
these buffers around, so this approach doesn't seem to guard against
hitting the max direct memory limit or against limiting C heap use.  Is
there a reason that normalizing allocation request sizes and pooling
direct ByteBuffers isn't done?  Was it previously done but abandoned in
favor of the current approach?  I'm asking because we are moving from an
internal I/O library to Netty, and this is how we currently handle direct
buffers.  Netty is in many ways superior, but I want to make sure we
don't regress on what was a hard-learned lesson.
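
In case it helps clarify the question, here is a minimal sketch of the
kind of scheme I mean (the class and method names are made up; a real
pool would also need thread safety and a cap on retained memory):

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

final class DirectBufferPool {
    // One free list per power-of-two size bucket.
    private final Deque<ByteBuffer>[] freeLists;

    @SuppressWarnings("unchecked")
    DirectBufferPool() {
        freeLists = new Deque[31];
        for (int i = 0; i < freeLists.length; i++) {
            freeLists[i] = new ArrayDeque<ByteBuffer>();
        }
    }

    // Normalize the request up to the next power of two so that
    // released buffers can satisfy later requests of similar size.
    ByteBuffer acquire(int size) {
        int bucket = 32 - Integer.numberOfLeadingZeros(size - 1);
        ByteBuffer buf = freeLists[bucket].pollFirst();
        if (buf == null) {
            buf = ByteBuffer.allocateDirect(1 << bucket);
        }
        buf.clear();
        return buf;
    }

    // Return a buffer to its bucket instead of waiting for the GC
    // to reclaim the native memory.
    void release(ByteBuffer buf) {
        int bucket = 32 - Integer.numberOfLeadingZeros(buf.capacity() - 1);
        freeLists[bucket].addFirst(buf);
    }
}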

Thanks.