synchronous client send/rcv help

Trustin Lee (이희승) trustin at gmail.com
Sat Nov 21 00:03:36 EST 2009


Hi Jason,

I would not suggest using a lock or semaphore.  The most efficient way
is to design the state machine carefully so that the premature write
request is never made at all.  If that is not easy to implement, you can
store the write requests in a queue while the handshake is in progress,
and then flush them all once the handshake completes.
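
For example, a downstream "gate" handler along these lines (a minimal
sketch of the idea; HandshakeGate is not an existing Netty class, and
isHandshakeResponse() is a placeholder for your protocol's own check):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelHandler;

    public class HandshakeGate extends SimpleChannelHandler {

        private final Queue<MessageEvent> pending =
                new ConcurrentLinkedQueue<MessageEvent>();
        private volatile boolean handshaken;

        @Override
        public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            if (handshaken) {
                ctx.sendDownstream(e);   // handshake done; pass writes through
            } else {
                pending.offer(e);        // park premature writes
                if (handshaken) {        // lost a race with the flush below
                    flushPending(ctx);
                }
            }
        }

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            if (!handshaken && isHandshakeResponse(e.getMessage())) {
                handshaken = true;
                flushPending(ctx);       // release everything queued so far
                return;                  // consume the handshake message
            }
            ctx.sendUpstream(e);
        }

        private void flushPending(ChannelHandlerContext ctx) {
            MessageEvent queued;
            while ((queued = pending.poll()) != null) {
                ctx.sendDownstream(queued);
            }
        }

        private boolean isHandshakeResponse(Object msg) {
            return true;                 // placeholder: protocol-specific check
        }
    }

Added between your encoder and your business handler, it keeps the
pipeline fully non-blocking while still guaranteeing that nothing
reaches the wire before the handshake has completed.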

HTH

— Trustin Lee, http://gleamynode.net/



On Thu, Nov 19, 2009 at 7:24 AM, Jason Ward <jward.dww at gmail.com> wrote:
> I'm looking for some advice on a good design for the following problem.
>
> I need to perform an app-level handshake once the channel is
> connected. I've implemented this via a handshake method fired from
> channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) in my
> IoHandler (which extends SimpleChannelUpstreamHandler).
>
> The crux of the issue is that this app-level handshake needs to occur
> synchronously, blocking each subsequent send (in the handshake) until the
> previous response from the peer is received and validated. I've found that
> blocking inside my handshake method in fact blocks the I/O thread
> altogether. I can see that the peer actually receives the request and sends
> a response, but the corresponding messageReceived(ChannelHandlerContext ctx,
> MessageEvent e) isn't fired until after my originating caller times
> out... which just so happens to initiate a disconnect and disposal of
> resources as well.
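>
> For what it's worth, the blocking version boils down to this (a minimal
> sketch of my own; "HELLO" stands in for the real first handshake
> message):
>
>     import java.util.concurrent.BlockingQueue;
>     import java.util.concurrent.LinkedBlockingQueue;
>
>     import org.jboss.netty.channel.ChannelHandlerContext;
>     import org.jboss.netty.channel.ChannelStateEvent;
>     import org.jboss.netty.channel.MessageEvent;
>     import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
>
>     public class BlockingHandshakeHandler extends SimpleChannelUpstreamHandler {
>
>         private final BlockingQueue<Object> responses =
>                 new LinkedBlockingQueue<Object>();
>
>         @Override
>         public void channelConnected(ChannelHandlerContext ctx,
>                 ChannelStateEvent e) throws Exception {
>             e.getChannel().write("HELLO");
>             // Deadlocks: this blocks the very thread that would deliver
>             // messageReceived for this channel, so take() never returns.
>             Object response = responses.take();
>         }
>
>         @Override
>         public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
>             responses.offer(e.getMessage());
>         }
>     }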
>
> I can easily work around this issue by chaining messageReceived
> callbacks, but I'm looking for a more elegant solution. Does anyone have
> advice on how I can pull this off? Is there an example of how to make
> only some requests blocking? I've seen the thread on the forum where
> Trustin suggests adding a LinkedBlockingQueue as a solution, but the
> original poster's use case differs from mine in that all of his requests
> needed to be blocking. As best I understand it, I'm effectively doing
> the same thing but creating a problem because I'm calling my blocking
> method from channelConnected()... although I might be totally wrong.
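>
> By "chaining" I mean turning the handshake into a small state machine
> driven entirely from the callbacks, roughly like this (a sketch; the
> "HELLO"/"AUTH" steps and the validation are placeholders for my real
> protocol):
>
>     import java.util.concurrent.CountDownLatch;
>     import java.util.concurrent.TimeUnit;
>
>     import org.jboss.netty.channel.ChannelHandlerContext;
>     import org.jboss.netty.channel.ChannelStateEvent;
>     import org.jboss.netty.channel.MessageEvent;
>     import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
>
>     public class ChainedHandshakeHandler extends SimpleChannelUpstreamHandler {
>
>         private enum State { HELLO_SENT, AUTH_SENT, DONE }
>
>         private volatile State state;
>         private final CountDownLatch done = new CountDownLatch(1);
>
>         @Override
>         public void channelConnected(ChannelHandlerContext ctx,
>                 ChannelStateEvent e) {
>             state = State.HELLO_SENT;
>             e.getChannel().write("HELLO");       // non-blocking write
>         }
>
>         @Override
>         public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
>                 throws Exception {
>             switch (state) {
>             case HELLO_SENT:
>                 state = State.AUTH_SENT;         // validate ack, then next step
>                 e.getChannel().write("AUTH");
>                 break;
>             case AUTH_SENT:
>                 state = State.DONE;              // validate final ack
>                 done.countDown();                // handshake complete
>                 break;
>             default:
>                 ctx.sendUpstream(e);             // normal traffic
>             }
>         }
>
>         // Called from the application thread, never from the I/O thread.
>         public boolean awaitHandshake(long timeout, TimeUnit unit)
>                 throws InterruptedException {
>             return done.await(timeout, unit);
>         }
>     }
>
> It works, but it feels clumsy spread across callbacks like that.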
>
>
> Any advice is greatly appreciated.
>
>
> Note: Because this might simply be a problem with my I/O thread pools,
> I'll include my client code here (only slightly redacted to hide
> protected info).
>
>            ClientSocketChannelFactory factory = new NioClientSocketChannelFactory(
>                    Executors.newCachedThreadPool(
>                            new NamedThreadFactory("NioClientSocketChannelFactoryBoss_" + name)),
>                    Executors.newCachedThreadPool(
>                            new NamedThreadFactory("NioClientSocketChannelFactory_" + name)));
>            bootstrap = new ClientBootstrap(factory);
>            bootstrap.setOption("tcpNoDelay", true);
>            bootstrap.setOption("keepAlive", true);
>
>            OrderedMemoryAwareThreadPoolExecutor threadPool =
>                    new OrderedMemoryAwareThreadPoolExecutor(
>                            corePoolSize, 0, 0, keepAliveTime, TimeUnit.SECONDS,
>                            new NamedThreadFactory("XXX" + name));
>
>            ChannelPipeline pipeline = bootstrap.getPipeline();
>            pipeline.addLast("exec", new ExecutionHandler(threadPool));
>            pipeline.addLast("decoder", new AsyncMSGIoDecoder());
>            pipeline.addLast("encoder", new AsyncMSGIoEncoder());
>            pipeline.addLast("handler", new MSGIoHandler(this));
>
>            channelFuture = bootstrap.connect(new InetSocketAddress(host, port));
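>
> (The connect future can then be awaited from the application thread,
> e.g.
>
>            Channel channel = channelFuture.awaitUninterruptibly().getChannel();
>
> so only the caller blocks, never the I/O thread.)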
>
>
> Thanks in advance,
> JW


