"bill.burke(a)jboss.com" wrote : anonymous wrote :
| | The ServerInvoker has a bounded pool of "ServerThread"s; the number
of simultaneous invocations that can be handled at a time is limited by the invoker's
maxPoolSize configuration parameter. Once a "ServerThread" is allocated to a
client connection, it stays associated with that connection until the client closes the
socket or a socket timeout occurs. This could lead to thread starvation in the presence of a
large number of clients. Increasing the number of server threads would degrade
performance because of the large number of context switches.
| |
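The starvation scenario described in the quote can be sketched with java.util.concurrent. This is an illustrative simulation, not JBoss Remoting code: the class name, the latch standing in for "the client closes the socket", and the pool size of 2 are all made up.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a bounded pool whose workers stay bound to
// long-lived "connections", so a third client is starved until a
// worker frees up.
public class StarvationDemo {

    // Returns {invocations served while both workers were held,
    //          invocations served after the holders disconnected}.
    static int[] run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // maxPoolSize = 2
        AtomicInteger served = new AtomicInteger();
        CountDownLatch disconnect = new CountDownLatch(1);

        // Two connections each pin a worker thread until the client
        // "closes the socket" (simulated here by the latch).
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                try { disconnect.await(); } catch (InterruptedException e) { }
            });
        }
        // A third client's invocation just sits in the queue: starvation.
        Future<?> third = pool.submit(served::incrementAndGet);
        Thread.sleep(200);
        int whileHeld = served.get();      // still 0, both workers are held

        disconnect.countDown();            // the first two clients go away
        third.get();                       // now the third client is served
        pool.shutdown();
        return new int[] { whileHeld, served.get() };
    }

    public static void main(String[] args) throws Exception {
        int[] r = run();
        System.out.println("served while workers held: " + r[0]);
        System.out.println("served after disconnect:   " + r[1]);
    }
}
```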
|
| IIRC, Tom took a lot from the PooledInvoker. The PooledInvoker had an LRU queue that
closed idle connections when one was needed, so there was no starvation. Also, the
PooledInvoker tried to avoid a large number of context switches by associating each client
connection with a specific server thread. Therefore, no thread context switches were
required. Of course, this design was predicated on "keep alive" clients. So if
the use case was very "keep alive" oriented, it performed really well.
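The LRU idea could look roughly like this; a hypothetical sketch built on an access-ordered LinkedHashMap, where `LruConnectionPool`, `connect`, and `touch` are invented names for illustration, not the PooledInvoker's actual API.

```java
import java.util.LinkedHashMap;

// Hypothetical sketch of the LRU idea: when all worker slots are
// taken, the least recently used connection is closed so a new
// client is never starved.
public class LruConnectionPool {
    private final int maxPoolSize;
    // access-order map: iteration order = least recently used first
    private final LinkedHashMap<String, Long> active;

    public LruConnectionPool(int maxPoolSize) {
        this.maxPoolSize = maxPoolSize;
        this.active = new LinkedHashMap<>(16, 0.75f, true);
    }

    // A new client connects. Returns the id of the idle connection
    // that was evicted (closed) to make room, or null if none was.
    public synchronized String connect(String clientId) {
        String evicted = null;
        if (!active.containsKey(clientId) && active.size() >= maxPoolSize) {
            // close the least recently used connection instead of starving
            String lru = active.keySet().iterator().next();
            active.remove(lru);
            evicted = lru;
        }
        active.put(clientId, System.nanoTime()); // now the most recently used
        return evicted;
    }

    // An invocation on an existing connection refreshes its recency,
    // so busy "keep alive" clients are never the ones evicted.
    public synchronized void touch(String clientId) {
        active.get(clientId);
    }
}
```

With a pool of 2, connecting "a" then "b", touching "a", and then connecting "c" evicts "b", the least recently used of the two.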
|
| For the TransactionManager recovery prototype I did, I experimented with a batch queue
for writing to a log. Obviously, having one thread exclusively write to the file was a lot
faster and more scalable than individual threads each writing to the one file. It would be cool to
combine the concepts of pooling and this type of multiplexing. Never did the
benchmarks on that type of design.
|
|
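A single-writer batch queue of this kind might be sketched as follows. This is an assumed design, not the prototype's actual code: many threads enqueue records, one writer thread drains them in batches, and a StringBuilder stands in for the log file.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a batch queue for log writing: only the
// single writer thread ever touches the "file", so writes are
// sequential and uncontended.
public class BatchLogWriter implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final StringBuilder file = new StringBuilder(); // stand-in for the log file
    private volatile boolean running = true;

    // Called by any number of application threads.
    public void append(String record) { queue.add(record); }

    @Override public void run() {
        List<String> batch = new ArrayList<>();
        while (running || !queue.isEmpty()) {
            batch.clear();
            String first;
            try { first = queue.poll(100, TimeUnit.MILLISECONDS); }
            catch (InterruptedException e) { break; }
            if (first == null) continue;
            batch.add(first);
            queue.drainTo(batch);           // grab everything queued so far
            for (String r : batch) {        // one sequential batched write
                file.append(r).append('\n');
            }
        }
    }

    public void shutdown() { running = false; } // drains the queue, then exits
    public String contents() { return file.toString(); }
}
```

Combining this with pooling would mean the pooled server threads call append() and return immediately, while the dedicated writer multiplexes all their records into the one file.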
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3978924#...
Reply to the post :
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&a...