Ensuring all pending writes complete before close
Iain McGinniss
iainmcgin at gmail.com
Wed Aug 12 04:52:46 EDT 2009
On 12 Aug 2009, at 03:20, 이희승 (Trustin Lee) wrote:
> Hi Iain,
>
> On 08/06/2009 11:47 PM, Iain McGinniss wrote:
>>
>> 1. Allowing all pending writes to be completed before I close the
>> channel.
>
> You can achieve this pretty simply:
>
> channel.write(ChannelBuffers.EMPTY_BUFFER)
> .addListener(ChannelFutureListener.CLOSE);
Netty is hidden beneath several layers of abstraction, so it is
difficult for me to expose this kind of write-then-close semantics
where it is needed. The code right now is more akin to:

    pipe.write(message);
    pipe.close();

where pipe.close() should not return until all pending writes have
completed and the channel is closed, or until some error condition
occurs and the pipe is closed.
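If the channel were visible at the point where pipe.close() is called, I
imagine the blocking close could be written roughly as below, combining
the empty-write idiom you suggested with the channel's close future (the
channel field here is purely illustrative):

    public void close() {
        // queue an empty write behind everything already written, close the
        // channel once it has flushed, then block until the close completes
        channel.write(ChannelBuffers.EMPTY_BUFFER)
               .addListener(ChannelFutureListener.CLOSE);
        channel.getCloseFuture().awaitUninterruptibly();
    }

but with Netty hidden behind those abstraction layers, there is no
Channel to hand at that point.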
>
>> 2. Preventing new writes from being requested during shutdown.
>
> This is more tricky. You could maintain a boolean flag as you
> mentioned
> above. An alternative is to write a handler that discards or rejects
> any write requests and insert the handler into the pipeline right
> after
> you write an EMPTY_BUFFER.
>
>> 3. Allow for the entity that requested the close to wait for the
>> channel to close using the standard mechanism.
>
> I don't follow this. Could you elaborate?
All I meant here was that I could call
channel.close().awaitUninterruptibly(), and that this wouldn't be
affected by whatever mechanism I chose to ensure all writes were
performed before the close occurred.
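In other words, the calling side stays completely standard:

    ChannelFuture closed = channel.close();
    closed.awaitUninterruptibly();   // completes once the channel has really closed

and whatever flushing happens inside the pipeline is invisible to it.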
The technique I went for in the end involved handling closeRequested()
in my handler and flipping a boolean to indicate that we were closing
the channel. For every write handled via writeRequested(), I placed the
write's ChannelFuture into a set of pending writes and added a listener
to it. Once the closing flag is set, all further write requests are
simply ignored (with a WARNING log). Each call to operationComplete()
on my ChannelFutureListener removes that future from the pending set,
and once the pending set is empty, I send the close request downstream.
In the simple case, with a quiet channel, closeRequested() can see
directly that the pending set is empty and immediately send the close
request downstream.
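A rough sketch of that handler (class and field names are just
illustrative here, not my actual code) looks something like this:

    import java.util.HashSet;
    import java.util.Set;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.logging.Logger;

    import org.jboss.netty.channel.*;

    public class FlushBeforeCloseHandler extends SimpleChannelHandler {

        private static final Logger log =
            Logger.getLogger(FlushBeforeCloseHandler.class.getName());

        private final ReentrantLock lock = new ReentrantLock();
        private final Set<ChannelFuture> pendingWrites =
            new HashSet<ChannelFuture>();
        private boolean closing = false;
        private ChannelHandlerContext closeCtx;
        private ChannelStateEvent closeEvent;

        @Override
        public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            lock.lock();
            try {
                if (closing) {
                    // writes requested after close are simply dropped
                    log.warning("write requested after close; ignoring");
                    return;
                }
                ChannelFuture writeFuture = e.getFuture();
                pendingWrites.add(writeFuture);
                writeFuture.addListener(new ChannelFutureListener() {
                    public void operationComplete(ChannelFuture f) {
                        writeFinished(f);
                    }
                });
            } finally {
                lock.unlock();
            }
            super.writeRequested(ctx, e);   // forward the write downstream
        }

        @Override
        public void closeRequested(ChannelHandlerContext ctx, ChannelStateEvent e)
                throws Exception {
            boolean closeNow;
            lock.lock();
            try {
                closing = true;
                closeNow = pendingWrites.isEmpty();
                if (!closeNow) {
                    closeCtx = ctx;     // defer the close until writes drain
                    closeEvent = e;
                }
            } finally {
                lock.unlock();
            }
            if (closeNow) {
                super.closeRequested(ctx, e);   // quiet channel: close at once
            }
        }

        private void writeFinished(ChannelFuture f) {
            ChannelHandlerContext ctx;
            ChannelStateEvent e;
            lock.lock();
            try {
                pendingWrites.remove(f);
                if (!closing || !pendingWrites.isEmpty() || closeEvent == null) {
                    return;
                }
                ctx = closeCtx;
                e = closeEvent;
                closeEvent = null;      // make sure we only close once
            } finally {
                lock.unlock();
            }
            ctx.sendDownstream(e);      // all writes flushed: now close
        }
    }

The close event is held back until the pending set drains and is then
forwarded downstream unchanged, so the ChannelFuture returned by
channel.close() still completes as normal.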
This appears to work. I'm using a ReentrantLock for concurrency
control, but I think there is a way to do it with less explicit locking
(an AtomicInteger count of pending writes, for instance).
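Something along these lines is what I have in mind for that variant.
This is purely a sketch and untested: the handler wiring is assumed to
be the same as above, and -1 is used as a "closed" sentinel so the race
between a late write and the close is settled with compareAndSet rather
than a lock:

    import java.util.concurrent.atomic.AtomicInteger;

    final class PendingWriteCounter {

        private final AtomicInteger pending = new AtomicInteger(0);
        private volatile boolean closing = false;
        private final Runnable closeAction;   // forwards the deferred close downstream

        PendingWriteCounter(Runnable closeAction) {
            this.closeAction = closeAction;
        }

        // called from writeRequested(); false means the write should be dropped
        boolean tryReserveWrite() {
            for (;;) {
                int n = pending.get();
                if (n < 0 || closing) {
                    return false;                      // close already requested
                }
                if (pending.compareAndSet(n, n + 1)) {
                    return true;                       // write counted as pending
                }
            }
        }

        // called from each write's ChannelFutureListener
        void writeCompleted() {
            if (pending.decrementAndGet() == 0 && closing
                    && pending.compareAndSet(0, -1)) {
                closeAction.run();                     // last pending write drained
            }
        }

        // called from closeRequested()
        void closeRequested() {
            closing = true;
            if (pending.compareAndSet(0, -1)) {
                closeAction.run();                     // quiet channel: close immediately
            }
        }
    }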
There are a lot of interesting problems like this in network
programming, and Netty could really benefit from having good examples
of them in the documentation. I believe this would help illuminate the
problems that people will face with a fully asynchronous, concurrent
networking stack.
Iain