It's pretty simple. Every message is associated with an Executor. You have a
"main" Executor, typically a plain thread pool, which handles any work that can
be done in an unordered fashion. New messages that aren't related to any
previous message are processed by this executor directly. Then you create a single
OrderedExecutorFactory with your "main" Executor as its parent.
Now if you know that all messages tagged with some ID must be processed in order, you get
a new Executor from the factory and associate it with this ID. Then all your message
processing for that ID is done sequentially. If there are some asynchronous things that
can be done, you can always submit another task to the main Executor.
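To make the per-ID dispatch concrete, here's a simplified stand-in (all names illustrative, not the actual JBoss API): each message ID maps to its own serial executor via `computeIfAbsent`, so messages sharing an ID are processed in order while different IDs proceed in parallel. For brevity this sketch dedicates a single-thread executor to each ID; the real factory instead multiplexes its ordered executors onto the shared parent pool.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

// Simplified sketch of per-ID ordered processing. Messages with the same
// ID go to the same serial executor; unrelated IDs run concurrently.
public class PerIdOrdering {
    static final Map<String, ExecutorService> byId = new ConcurrentHashMap<>();
    static final List<String> processed = new CopyOnWriteArrayList<>();

    static void submit(String id, String msg) {
        // One serial executor per ID, created on first use.
        byId.computeIfAbsent(id, k -> Executors.newSingleThreadExecutor())
            .execute(() -> processed.add(id + ":" + msg));
    }

    public static void main(String[] args) throws Exception {
        submit("order-1", "a");
        submit("order-2", "x");
        submit("order-1", "b");
        for (ExecutorService e : byId.values()) {
            e.shutdown();
            e.awaitTermination(5, TimeUnit.SECONDS);
        }
        // "order-1:a" is guaranteed to be processed before "order-1:b".
        System.out.println(processed.indexOf("order-1:a")
                < processed.indexOf("order-1:b"));
    }
}
```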
anonymous wrote : Also I'd caution against the "execute on current thread if
queue is empty" optimisation since if the current thread is the thread that does IO
from the selector this can keep that thread tied up for a long time, during which time it
can't service any more IO requests.
Not sure what you're getting at here. There's no such optimization - basically
the code says, "if this is the first time a task was added to the queue, then this
executor is not running, so run it", and on removal, "if this was the last task
removed from the queue, no need to run anymore".
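The "run when the first task is added, stop when the last is removed" rule can be sketched like this (an illustrative reconstruction, not the actual JBoss source): the executor holds a lock-guarded queue and a `running` flag; adding to an empty queue schedules a drain loop on the parent pool, and the loop exits once the queue empties.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.*;

// Sketch of the scheduling rule described above: first task added to an
// empty queue starts the drain loop on the parent pool; removing the last
// task stops it. The lock exists to serialize access to the queue.
public class SerialExecutorDemo {
    static final class SerialExecutor implements Executor {
        private final Executor parent;
        private final Queue<Runnable> tasks = new ArrayDeque<>();
        private boolean running;

        SerialExecutor(Executor parent) { this.parent = parent; }

        @Override
        public synchronized void execute(Runnable task) {
            tasks.add(task);
            if (!running) {         // first task added: not running, so start
                running = true;
                parent.execute(this::drain);
            }
        }

        private void drain() {
            for (;;) {
                Runnable next;
                synchronized (this) {
                    next = tasks.poll();
                    if (next == null) { // last task removed: stop running
                        running = false;
                        return;
                    }
                }
                next.run();         // run outside the lock
            }
        }
    }

    static final List<Integer> seen = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        SerialExecutor serial = new SerialExecutor(pool);
        for (int i = 0; i < 5; i++) {
            final int n = i;
            serial.execute(() -> seen.add(n));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(seen);   // tasks ran in submission order
    }
}
```

Note that even though the parent is a multi-threaded pool, at most one drain loop is ever scheduled at a time, which is what preserves the ordering.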
As far as using a concurrent queue goes - perhaps you could squeeze out a tiny
bit more performance, but unless it shows up as a blip in a profiler, I'm not
going to bother, because frankly I doubt there will be significant (unwanted)
contention on the lock. (Remember that the purpose is to serialize access, after all.)