direct buffer memory issues

"Trustin Lee (이희승)" trustin at gmail.com
Mon Mar 8 03:34:16 EST 2010


That's great news.  SocketSendBufferPool uses a SoftReference to hold
its direct buffers (the JDK does the same), so I was wondering what the
cause might be.
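
For reference, the pattern I am referring to looks roughly like this --
a minimal sketch with illustrative names only, not the actual
SocketSendBufferPool source:

    import java.lang.ref.SoftReference;
    import java.nio.ByteBuffer;

    // Sketch: hold a preallocated direct buffer behind a SoftReference so
    // the GC is free to reclaim it under memory pressure.
    final class DirectBufferHolder {

        private static final int PREALLOCATION_SIZE = 65536;

        private SoftReference<ByteBuffer> preallocationRef =
                new SoftReference<ByteBuffer>(
                        ByteBuffer.allocateDirect(PREALLOCATION_SIZE));

        ByteBuffer acquire() {
            ByteBuffer buf = preallocationRef.get();
            if (buf == null) {
                // The softly referenced buffer was collected; allocate anew.
                buf = ByteBuffer.allocateDirect(PREALLOCATION_SIZE);
                preallocationRef = new SoftReference<ByteBuffer>(buf);
            }
            return buf;
        }
    }

So under memory pressure the collector should be able to drop those
buffers rather than let the process run out of direct memory.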

Thanks for the quick follow-up,
Trustin

Adam Fisk wrote:
> I think I got it, Trustin -- I wasn't calling releaseExternalResources
> in all cases. It's a little tricky to handle in the proxy scenario,
> where the clients themselves have "handlers" in the pipeline. It looks
> like *I should not call it* from channelClosed event handlers, is that
> correct? Doing so generated CPU spikes for me.
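>
> To illustrate the distinction (a sketch only -- the class and field
> names here are made up, not my actual code):
>
>     import org.jboss.netty.bootstrap.ServerBootstrap;
>
>     public final class ProxyLifecycle {
>
>         private final ServerBootstrap bootstrap;
>
>         public ProxyLifecycle(ServerBootstrap bootstrap) {
>             this.bootstrap = bootstrap;
>         }
>
>         // Called once when the whole proxy shuts down -- NOT from a
>         // per-channel channelClosed() handler.
>         public void stop() {
>             // Releases the channel factory's external resources
>             // (its boss/worker thread pools).
>             bootstrap.releaseExternalResources();
>         }
>     }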
> 
> All looking good now, though -- total OS memory is hovering around
> 65 MB even when opening lots of connections.
> 
> Thanks.
> 
> -Adam
> 
> On Sun, Mar 7, 2010 at 8:20 PM, "Trustin Lee (이희승)" <trustin at gmail.com> wrote:
>> Hi Adam,
>>
>> Thanks for reporting the problem.
>>
>> Could you send me the heap dump?  I need it to determine if it's a JDK
>> issue or Netty SocketSendBufferPool issue.
>>
>> I would also like to know whether you are getting many
>> ClosedChannelExceptions or NotYetConnectedExceptions.  If so, try this
>> build (wait a while if you get a 404):
>>
>>    http://hudson.jboss.org/hudson/view/Netty/job/netty/3416/
>>
>> Decreasing the preallocation size will not fix the problem, because
>> each preallocation is already shared by many write requests from
>> different connections in order to maximize its utilization.
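>>
>> To make that concrete, each write request effectively takes a slice of
>> the current shared preallocation, roughly like this (an illustrative
>> sketch, not the real implementation):
>>
>>     import java.nio.ByteBuffer;
>>
>>     // Sketch: one 64 KiB direct buffer serves many small writes from
>>     // different connections by handing out slices until it runs out.
>>     final class SharedPreallocation {
>>
>>         private ByteBuffer preallocation = ByteBuffer.allocateDirect(65536);
>>
>>         synchronized ByteBuffer slice(int length) {
>>             if (preallocation.remaining() < length) {
>>                 // The current preallocation is used up; start a new one.
>>                 preallocation = ByteBuffer.allocateDirect(65536);
>>             }
>>             int oldLimit = preallocation.limit();
>>             preallocation.limit(preallocation.position() + length);
>>             ByteBuffer slice = preallocation.slice();
>>             preallocation.position(preallocation.limit());
>>             preallocation.limit(oldLimit);
>>             return slice;
>>         }
>>     }
>>
>> So a smaller preallocation would just be refilled more often; the total
>> amount of direct memory in flight would stay roughly the same.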
>>
>> HTH,
>> Trustin
>>
>> Adam Fisk wrote:
>>> I'm experiencing an issue where LittleProxy eventually runs out of
>>> memory, but in direct memory allocated through allocateDirect, not in
>>> the object heap. Here's a link to the issue with the stack trace:
>>>
>>> http://dev.littleshoot.org:8081/browse/LP-16
>>>
>>> I tried using -XX:MaxDirectMemorySize=256M, but that just seems to
>>> delay the issue. If I look at the actual OS-level memory allocated,
>>> once it gets up to around 300 MB, I can't allocate any more
>>> (basically the 256 MB plus 40 MB or so for the object heap and
>>> whatever else).
>>>
>>> It fails in SocketSendBufferPool when creating the "Preallocation"
>>> with DEFAULT_PREALLOCATION_SIZE = 65536. In LittleProxy's case, that's
>>> when we're allocating new HTTP clients to go out to servers to fetch
>>> data. Many of those connections don't end up passing much data
>>> upstream (although some POST requests clearly will). Any way to set
>>> that buffer size on a per-client basis?
>>>
>>> I'm not exactly sure what to do other than boosting
>>> -XX:MaxDirectMemorySize even further.
>>>
>>> The other odd thing is that the memory *never seems to go down*, even
>>> after all the threads are done. Is it possible I'm not cleaning up
>>> those client connections properly? It seems like no buffer allocated
>>> with allocateDirect ever gets freed.
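>>>
>>> A tiny example of what I mean by "never gets freed" (just an
>>> illustration, not LittleProxy code):
>>>
>>>     import java.nio.ByteBuffer;
>>>     import java.util.ArrayList;
>>>     import java.util.List;
>>>
>>>     // The native memory behind allocateDirect() is only reclaimed
>>>     // when the ByteBuffer object itself becomes unreachable and is
>>>     // garbage collected, so any lingering strong reference pins it.
>>>     public class DirectPinDemo {
>>>         public static void main(String[] args) {
>>>             List<ByteBuffer> pinned = new ArrayList<ByteBuffer>();
>>>             for (int i = 0; i < 1024; i++) {
>>>                 // 1024 x 64 KiB = 64 MiB of native memory that the OS
>>>                 // keeps reporting as long as 'pinned' is reachable.
>>>                 pinned.add(ByteBuffer.allocateDirect(65536));
>>>             }
>>>             System.out.println(pinned.size() + " direct buffers pinned");
>>>         }
>>>     }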
>>>
>>> Any suggestions you may have would be much appreciated.
>>>
>>> Outside of this issue, things are looking great!!
>>>
>>> Thanks Trustin.
>>>
>>> -Adam
>>>
>> --
>> what we call human nature in actuality is human habit
>> http://gleamynode.net/
>>
>>
>>
>> _______________________________________________
>> netty-users mailing list
>> netty-users at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/netty-users
>>
> 
> 
> 

-- 
what we call human nature in actuality is human habit
http://gleamynode.net/

