I see that DirectChannelBufferFactory.java reduces the number of direct ByteBuffer allocations and deallocations, and I can see how that helps, but I don't see how it addresses all of the shortcomings of JVM direct memory. Given the number of references into each large direct buffer (all the small slices), it seems likely that the large buffer will survive young-generation collection simply because one of its small slices does, and will be kept around until an old-generation collection.

If old-generation collections are infrequent, there may be quite a few of these buffers around at once, so this doesn't seem to guard against hitting the max direct memory limit, nor to bound C-heap use.
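To make the retention pattern I mean concrete, here is a minimal sketch (the class name and sizes are mine, purely for illustration): a slice of a direct buffer shares the parent's backing memory rather than copying it, so the full allocation cannot be freed while any slice remains reachable.

    import java.nio.ByteBuffer;

    public class SliceRetention {
        public static void main(String[] args) {
            // One large direct allocation (1 MiB here; sizes are illustrative).
            ByteBuffer large = ByteBuffer.allocateDirect(1024 * 1024);

            // Carve off a small view; the slice shares the parent's
            // backing memory rather than copying it.
            large.limit(64);
            ByteBuffer smallSlice = large.slice();

            // Dropping the parent reference frees nothing: the slice still
            // reaches the parent's backing memory, so the whole 1 MiB stays
            // allocated until the slice itself becomes unreachable and its
            // Cleaner runs -- which, if the slice has been promoted, won't
            // happen before an old-generation collection.
            large = null;

            System.out.println("slice capacity = " + smallSlice.capacity());
        }
    }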
Are there reasons that normalizing allocation request sizes and pooling direct ByteBuffers isn't done? Was it done previously but abandoned in favor of this approach? I ask because we are moving from an internal I/O library to Netty, and this is how we currently handle direct buffers. Netty is superior in many ways, but I want to make sure we don't regress on what was a hard-learned lesson.
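For concreteness, the scheme I'm describing is roughly the following (a simplified sketch, not our actual code; the class name and the power-of-two rounding are illustrative): requests are rounded up to a small set of normalized sizes and served from per-size free lists, so a bounded set of direct buffers gets recycled instead of being continually allocated and collected.

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public final class PooledDirectBuffers {
        private final ConcurrentMap<Integer, Deque<ByteBuffer>> pools =
                new ConcurrentHashMap<>();

        public ByteBuffer acquire(int size) {
            // Round the request up to the next power of two so requests of
            // similar size share a pool.
            int normalized = Integer.highestOneBit(Math.max(size, 1));
            if (normalized < size) {
                normalized <<= 1;
            }
            Deque<ByteBuffer> pool = pools.computeIfAbsent(
                    normalized, n -> new ArrayDeque<>());
            ByteBuffer buf;
            synchronized (pool) {
                buf = pool.pollFirst();
            }
            if (buf == null) {
                buf = ByteBuffer.allocateDirect(normalized);
            }
            buf.clear();
            buf.limit(size);
            return buf;
        }

        public void release(ByteBuffer buf) {
            Deque<ByteBuffer> pool = pools.computeIfAbsent(
                    buf.capacity(), n -> new ArrayDeque<>());
            synchronized (pool) {
                // Unbounded here for brevity; a real pool would cap each list
                // so the pool itself can't exhaust direct memory.
                pool.addFirst(buf);
            }
        }
    }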
<div id=":8w" class="ii gt">
<br>Thanks.</div>