One message at a time in each handler?

"이희승 (Trustin Lee)" trustin at gmail.com
Thu Jun 17 07:55:16 EDT 2010


Not all applications require an ExecutionHandler in their pipeline.
Some perform only CPU-intensive tasks in their handlers.  In such a
case, it's often a better idea not to use an ExecutionHandler.
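[Editor's note: the trade-off above can be sketched with the plain JDK.  This is not Netty API — `handle`, `runDirect`, and `runOffloaded` are hypothetical names — but it models the choice: for pure CPU-bound work, handing events to a separate pool (what an ExecutionHandler-style stage does) produces the same results while only adding queueing and hand-off overhead.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DirectVsOffloaded {

    // Stand-in for a CPU-bound handler: pure computation, no blocking calls.
    static int handle(int event) {
        return event * event;
    }

    // The I/O thread runs the handler inline -- no extra thread pool.
    static List<Integer> runDirect(List<Integer> events) {
        List<Integer> out = new ArrayList<>();
        for (int e : events) {
            out.add(handle(e));
        }
        return out;
    }

    // The same work handed off to a separate pool, roughly what an
    // ExecutionHandler-style stage does.  For pure CPU work the result
    // is identical; the hand-off only adds queueing overhead.
    static List<Integer> runOffloaded(List<Integer> events) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int e : events) {
            futures.add(pool.submit(() -> handle(e)));
        }
        List<Integer> out = new ArrayList<>();
        for (Future<Integer> f : futures) {
            out.add(f.get());
        }
        pool.shutdown();
        return out;
    }
}
```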

HTH,
Trustin

On 06/16/2010 11:26 PM, Marc-André Laverdière wrote:
> You're right, adding the execution handler is pretty simple. I put it
> in my pipeline, and it looks like it's vastly improving the performance
> (~3x).
> If that's the case, why not make it a default?
> Marc-André LAVERDIÈRE
> "Perseverance must finish its work so that you may be mature and
> complete, not lacking anything." -James 1:4
> mlaverd.theunixplace.com/blog
> 
>  /"\
>  \ /    ASCII Ribbon Campaign
>   X      against HTML e-mail
>  / \
> 
> 
> 
> 2010/6/12 "이희승 (Trustin Lee)" <trustin at gmail.com>:
>> Yes, you should handle the events in a different thread pool than the
>> Netty I/O threads, typically using ExecutionHandler.
>>
>> There is no way to combine SimpleChannelHandler and ExecutionHandler.
>> It's actually very simple to add two handlers into the pipeline, so I'm
>> not sure introducing such a class is a good idea.
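[Editor's note: for reference, wiring the two handlers together is only a couple of lines in the Netty 3 API.  This is a pipeline-configuration fragment, not a complete program; `MyBusinessHandler` is a hypothetical SimpleChannelHandler subclass, and the executor's pool size and memory limits are arbitrary placeholders.]

```java
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

ChannelPipeline pipeline = Channels.pipeline();
// ExecutionHandler hands events downstream of it to its own thread pool;
// OrderedMemoryAwareThreadPoolExecutor preserves per-channel event order.
pipeline.addLast("execution", new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));
pipeline.addLast("business", new MyBusinessHandler());
```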
>>
>> HTH,
>> Trustin
>>
>> On 06/07/2010 01:49 PM, Marc-André Laverdière wrote:
>>> I was wondering which option would be the best for my app, SEDA or not?
>>>
>>> I have a server that will need to handle a big load, and the average
>>> execution time of each request handler is ~4-5 (but it can stretch
>>> beyond that sometimes).
>>> In that kind of application, is having yet another thread pool going to
>>> be useful?
>>>
>>> I was wondering, is there a way to combine SimpleChannelHandler with
>>> ExecutionHandler in the standard Netty API, for convenience's sake? I
>>> suggest the name of DelegatedChannelHandler.
>>>
>>> Regards,
>>>
>>> Marc-André LAVERDIÈRE
>>> "Perseverance must finish its work so that you may be mature and
>>> complete, not lacking anything." -James 1:4
>>> mlaverd.theunixplace.com/blog
>>>
>>> /"\
>>> \ /    ASCII Ribbon Campaign
>>>   X      against HTML e-mail
>>> / \
>>>
>>>
>>> 2010/6/7 "Trustin Lee (이희승)" <trustin at gmail.com>
>>>
>>>     Yes, only one event at a time will flow through an individual channel
>>>     handler in the same pipeline in Netty.  However, this is somewhat
>>>     different from SEDA.  It's not because of queues but because of how
>>>     the pipeline is implemented.
>>>
>>>     When an event is triggered, Netty calls the handler directly.  This
>>>     means the Netty I/O thread waits for your handler to return control.
>>>     Until Netty gets control back, it will not generate any further
>>>     events.  The same applies to event propagation in the pipeline:
>>>     forwarding an event to the next handler is simply calling that
>>>     handler directly.
>>>
>>>     To let users implement SEDA, Netty provides ExecutionHandler, which
>>>     has a queue and a thread pool, in org.jboss.netty.handler.execution.
>>>     If you do not like it, you can always write your own handler that
>>>     decouples event execution from the Netty I/O threads.
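[Editor's note: the two styles described above can be modelled with the plain JDK.  `directPipeline`, `queuedPipeline`, `handle1`, and `handle2` are hypothetical names, not Netty API: the first runs both handlers by direct method calls on the caller's thread, the second puts a BlockingQueue between the stages and lets a worker thread run the second handler, which is the essence of a SEDA-style decoupling stage.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StageSketch {

    static String handle1(String e) { return e + ">h1"; }
    static String handle2(String e) { return e + ">h2"; }

    // Direct style: h1 forwards to h2 by a plain method call, so the
    // calling (I/O) thread does all the work and blocks until h2 returns.
    static List<String> directPipeline(List<String> events) {
        List<String> out = new ArrayList<>();
        for (String e : events) {
            out.add(handle2(handle1(e)));
        }
        return out;
    }

    // SEDA style: a queue sits between the stages; the caller runs h1 and
    // enqueues, while a single worker thread drains the queue and runs h2.
    static List<String> queuedPipeline(List<String> events) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> out = new ArrayList<>();
        final String poison = "POISON";  // shutdown marker for the worker

        Thread worker = new Thread(() -> {
            try {
                for (;;) {
                    String e = queue.take();
                    if (e.equals(poison)) return;
                    out.add(handle2(e));  // only the worker touches h2's stage
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        for (String e : events) {
            queue.put(handle1(e));  // h1 on the caller, h2 on the worker
        }
        queue.put(poison);
        worker.join();  // join makes the worker's writes visible here
        return out;
    }
}
```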
>>>
>>>     HTH,
>>>     Trustin
>>>
>>>     falconair wrote:
>>>      > Does Netty's ChannelHandler semantics include a guarantee that
>>>      > only one message at a time will flow through an individual channel
>>>      > handler?
>>>      >
>>>      > I didn't read any such thing in the docs, but I understand Netty
>>>      > is based on the SEDA architecture, and I understand SEDA includes
>>>      > such a guarantee.
>>>      >
>>>      > In other words, which is a more accurate representation of Netty:
>>>      >
>>>      > [h1]->[h2]->[h3]
>>>      >
>>>      > or
>>>      >
>>>      > {q1}==>[h1]->{q2}==>[h2]->{q3}==>[h3]
>>>      >
>>>      > In the first scenario, the [h2] handler might be invoked by
>>>      > multiple threads for the SAME pipeline, which means instance
>>>      > variables are prone to threading issues.
>>>      >
>>>      > In the second scenario, the interleaved queues make sure only one
>>>      > message at a time is processed by each handler (for the same
>>>      > pipeline), so instance variables are safe from deadlocks and race
>>>      > conditions.
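[Editor's note: the second scenario's safety property can be demonstrated with the plain JDK.  `CountingHandler` and `run` are hypothetical names, not Netty API: many producer threads enqueue messages, but because a single consumer thread invokes the handler, its completely unsynchronized instance variable still ends up correct — the queue serializes access, as the diagram with {q}s suggests.]

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleConsumer {

    // Handler with unsynchronized instance state, as in scenario two:
    // safe only because one thread at a time ever invokes it.
    static class CountingHandler {
        int count;  // plain field, no locks, no volatile
        void messageReceived(int msg) { count += msg; }
    }

    static int run(int producers, int messagesPerProducer) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        CountingHandler handler = new CountingHandler();
        int total = producers * messagesPerProducer;

        // Single consumer: the only thread that ever calls the handler.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < total; i++) {
                    handler.messageReceived(queue.take());
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Many producers feed the queue concurrently.
        Thread[] ps = new Thread[producers];
        for (int p = 0; p < producers; p++) {
            ps[p] = new Thread(() -> {
                try {
                    for (int i = 0; i < messagesPerProducer; i++) {
                        queue.put(1);
                    }
                } catch (InterruptedException ignored) { }
            });
            ps[p].start();
        }
        for (Thread t : ps) t.join();
        consumer.join();  // join makes the consumer's writes visible here
        return handler.count;  // equals total: the queue serialized access
    }
}
```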
>>>
>>>     --
>>>     what we call human nature in actuality is human habit
>>>     http://gleamynode.net/
>>>
>>>     _______________________________________________
>>>     netty-users mailing list
>>>     netty-users at lists.jboss.org
>>>     https://lists.jboss.org/mailman/listinfo/netty-users
>>>
>>>
>>>
>>>
>>
> 

-- 
what we call human nature in actuality is human habit
http://gleamynode.net/


