Concurrency within ChannelHandlers?
"이희승 (Trustin Lee)"
trustin at gmail.com
Thu Jul 23 23:07:26 EDT 2009
Hello Iain,
On 07/20/2009 10:43 PM, Iain McGinniss wrote:
> Hello all,
>
> I am currently trying to put together my own HTTP tunnel. So far, I've
> been writing the client side. What I want to ensure is that there will
> always be at least one request sent to the server end of the tunnel,
> which it can use to stream responses back to me (with a maximum 16KB
> payload per request / response). As sending is happening
> asynchronously to receiving, I don't know what the best approach is in
> Netty when it comes to concurrency in the handlers. The send handler
> may, for instance, choose to halt the transmission of the current
> request so that it becomes available for the server to send messages
> back sooner - this can be achieved by sending fewer chunks for the
> current request, so that more requests are generated (higher overhead,
> but ensures the server is not blocked waiting for another response).
>
> So in this kind of situation, where the send behaviour is dependent on
> the timing of received messages, how should I orchestrate this? Is
> there a way to get a handle on the thread pool used for the channel,
> and schedule tasks for later execution? Or is there some more elegant
> way of doing things like this in Netty? Ideally, I'd like to avoid
> creating my own pools, to prevent a situation where the number of
> threads in use grows with the number of HTTP tunnels.
It's an interesting idea. So, you want to suspend the transmission of
the current HTTP request content when the server starts to send a
response, and then resume the transmission of the request content as a
new HTTP request once the response has been fully received. Did I
understand correctly?
IIUC, it can be done without introducing more threads. The client could
split a large write request into many chunks and write them one by one.
Writing each chunk only after the previous write completes, instead of
putting them all into the Netty write queue at once, requires some state
management in your handler. The source code of ChunkedWriteHandler in
org.jboss.netty.handler.stream is a good reference for implementing this
behavior.
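To make the pattern concrete, here is a minimal sketch of chunk-by-chunk writing in plain Java. It simulates the Netty behavior rather than using the real API: the write-completion callback (which in Netty would be a ChannelFutureListener on the ChannelFuture returned by write()) is invoked synchronously, and the class and field names are illustrative, not from any library.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch: the next chunk is written only from the completion callback
// of the previous write, so at most one chunk sits in the write queue
// at a time. This mirrors what ChunkedWriteHandler does internally.
public class ChunkWriter {
    static final int CHUNK_SIZE = 4; // 16KB in the real tunnel

    final Queue<String> pending = new ArrayDeque<String>();
    final List<String> written = new ArrayList<String>();

    // Split the payload into chunks and start writing them one by one.
    void send(String payload) {
        for (int i = 0; i < payload.length(); i += CHUNK_SIZE) {
            pending.add(payload.substring(
                    i, Math.min(i + CHUNK_SIZE, payload.length())));
        }
        writeNext();
    }

    // Called once to start, then again from each completion callback.
    void writeNext() {
        String chunk = pending.poll();
        if (chunk == null) {
            return; // nothing left to write
        }
        written.add(chunk);  // stand-in for channel.write(chunk)
        onWriteComplete();   // in Netty: future.addListener(...)
    }

    // Stand-in for ChannelFutureListener.operationComplete().
    void onWriteComplete() {
        writeNext();
    }
}
```

The important property is that the handler, not the caller, decides when the next chunk goes out, which is what makes it possible to pause between chunks later.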
Then, when a messageReceived event delivers the first response chunk,
you could set an AtomicBoolean (or volatile boolean) flag to true so
that channelInterestChanged() or your ChannelFutureListener stops
issuing write requests. When you receive the last chunk (i.e. the
zero-length chunk in HTTP chunked encoding), set the flag back to false
and continue writing the remaining chunks as a new HTTP request.
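The suspend/resume flag could look something like the sketch below. Again this is a simulation of the idea, not Netty code: responseChunkReceived() and responseComplete() stand in for the two cases your messageReceived handler would distinguish, and writeNext() stands in for the write issued from a ChannelFutureListener.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the suspend/resume flag: while 'suspended' is true, the
// write-completion path refuses to emit further chunks; when the
// response finishes, writing resumes into a new request.
public class TunnelSender {
    final Queue<String> pending = new ArrayDeque<String>();
    final List<String> sentRequests = new ArrayList<String>();
    final AtomicBoolean suspended = new AtomicBoolean(false);
    StringBuilder currentRequest = new StringBuilder();

    // Called initially and from each write-completion callback.
    void writeNext() {
        if (suspended.get()) {
            return; // server is streaming a response; hold back chunks
        }
        String chunk = pending.poll();
        if (chunk == null) {
            return;
        }
        currentRequest.append(chunk); // stand-in for channel.write(chunk)
        writeNext();                  // in Netty: from the listener
    }

    // messageReceived: first chunk of a response arrived.
    void responseChunkReceived() {
        if (suspended.compareAndSet(false, true)) {
            finishRequest(); // close the current request so the
                             // server is free to keep responding
        }
    }

    // messageReceived: zero-length chunk => response is complete.
    void responseComplete() {
        suspended.set(false);
        writeNext(); // remaining chunks go out as a new request
    }

    void finishRequest() {
        if (currentRequest.length() > 0) {
            sentRequests.add(currentRequest.toString());
            currentRequest = new StringBuilder();
        }
    }
}
```

AtomicBoolean.compareAndSet() is used so that a burst of response chunks only closes the current request once, even if several messageReceived events race with the write path.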
I'm not sure I explained this well enough. You know, this sort of thing
is better explained with working code. ;) Let me know if you have
further questions. I'd be glad to help you.
HTH,
Trustin