blocking when sending lots of data
"이희승 (Trustin Lee)"
trustin at gmail.com
Mon Jul 6 12:42:39 EDT 2009
On 07/06/2009 09:37 PM, mra wrote:
> In my system I often want to send out LOTS of data in a burst, maybe hundreds
> or thousands of packets (Google protobufs) of about 1,000,000 bytes each. I
> know that in a typical use of OIO, writing to Socket.getOutputStream(), the O/S
> will block if TCP and its buffers can't immediately absorb the data. But
> with Netty, I'm worried that an unlimited number of bytes could wind up
> absorbed into the JVM even if the underlying TCP connection blocks, leading to
> OutOfMemory exceptions. I think I may have observed this in practice with
> Netty (but I'm not 100% sure that's the root cause), so I'm wondering if
> I should throttle output manually instead.
Channel has an 'isWritable()' method, which returns false when the channel's
internal buffer is so filled up with write requests that writing more
data into the channel could lead to an OOME. So, if you are going to write
a large amount of data, you need to check that isWritable() returns true
before writing anything. If it returns false, you need to wait until the
channelInterestChanged event is fired. In your handler's
channelInterestChanged method, you can continue writing the remaining
data (after making sure isWritable() returns true again).
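Something along these lines, as a minimal sketch (the ThrottledWriteHandler
name, the writeThrottled() helper, and the pending queue are made up for
illustration; only isWritable(), write(), and channelInterestChanged are part
of the Netty 3 API):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.ChannelStateEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

    /**
     * Hypothetical sketch: queue outbound messages and only write while the
     * channel reports it is writable, resuming when channelInterestChanged
     * signals that the internal buffer has drained.
     */
    public class ThrottledWriteHandler extends SimpleChannelUpstreamHandler {

        private final Queue<Object> pending = new ConcurrentLinkedQueue<Object>();

        /** Call this instead of channel.write() when sending a large burst. */
        public void writeThrottled(Channel channel, Object msg) {
            pending.offer(msg);
            flushPending(channel);
        }

        @Override
        public void channelInterestChanged(ChannelHandlerContext ctx,
                                           ChannelStateEvent e) throws Exception {
            // Fired when writability flips; try to drain whatever is queued.
            flushPending(e.getChannel());
            super.channelInterestChanged(ctx, e);
        }

        private void flushPending(Channel channel) {
            // Stop as soon as the channel says its buffer is full again.
            Object msg;
            while (channel.isWritable() && (msg = pending.poll()) != null) {
                channel.write(msg);
            }
        }
    }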
Actually, there's a built-in handler called ChunkedWriteHandler which
allows you to write a stream or a file very easily without experiencing
memory pressure. Please refer to the HTTP file server example. You
might also want to review the source code of ChunkedWriteHandler to
understand how I implemented it.
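Roughly like this (a sketch rather than the exact file server example; the
class and method names here are placeholders, the file name and chunk size
are arbitrary):

    import java.io.File;

    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.handler.stream.ChunkedFile;
    import org.jboss.netty.handler.stream.ChunkedWriteHandler;

    public class ChunkedWriteExample {

        /** Add the handler once when building the pipeline. */
        public static void addChunkedWriter(ChannelPipeline pipeline) {
            // ChunkedWriteHandler fetches the next chunk only when the channel
            // is writable, so the amount buffered in the JVM stays bounded.
            pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
        }

        /** Write a large file as a stream of chunks instead of one huge buffer. */
        public static void writeLargeFile(Channel channel, File file) throws Exception {
            channel.write(new ChunkedFile(file, 8192));
        }
    }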
HTH,
Trustin