So having separate pools allows HTTP requests to be parsed and queued for processing even when all workers are blocked on JDBC calls, though their processing still waits until a worker thread becomes available. With just a large IO pool, even HTTP parsing would be blocked once all threads are blocked on JDBC.

Have I got that right or am I missing other drawbacks?

Yes, I understand that having more threads than the maximum number of active connections in the pool is futile.

--
Chandra Sekar.S

On Fri, Jul 8, 2016 at 6:28 PM, Bill O'Neil <bill@dartalley.com> wrote:
Basically the IO threads will open and manage the socket, then push the request into a queue for the workers. You can have 1000 HTTP requests queued up to a worker thread pool that only has 10 threads (plus the 4-8 IO threads). With the thread-per-request model you would need 1000 threads. How you configure it all depends on the type of work that needs to be done. Maybe you want a larger worker thread pool, maybe you want a smaller one. The benefit of splitting them out is that you get more control.
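For reference, here is a minimal sketch of how those two pool sizes might be set when building an Undertow server (the counts and port below are just illustrative, not recommendations):

    import io.undertow.Undertow;
    import io.undertow.util.Headers;

    public class PoolSizesExample {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0")
                    .setIoThreads(8)       // small pool: accepts connections and parses requests
                    .setWorkerThreads(10)  // pool that runs dispatched (blocking) work
                    .setHandler(exchange -> {
                        exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                        exchange.getResponseSender().send("hello");
                    })
                    .build();
            server.start();
        }
    }

Requests beyond the 10 in flight on worker threads simply wait in the worker queue; the IO threads keep accepting and parsing new connections in the meantime.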

Also keep in mind you should probably be using a JDBC connection pool. If you have 1000 threads fighting over a limited number of JDBC connections, it will cause more contention than a much smaller number of worker threads would. JDBC connection pools generally have a much lower maximum number of connections than the number of requests your web server accepts.
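As an illustration, assuming HikariCP as the connection pool (any pooled DataSource behaves similarly), the maximum pool size is normally kept in the same range as the worker thread count rather than the number of connections the server accepts; the URL and credentials below are hypothetical:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class JdbcPoolExample {
        public static HikariDataSource createDataSource() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // hypothetical database
            config.setUsername("app");
            config.setPassword("secret");
            // Keep this close to the worker thread count, not the number
            // of HTTP connections the server can accept.
            config.setMaximumPoolSize(10);
            return new HikariDataSource(config);
        }
    }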

On Fri, Jul 8, 2016 at 8:46 AM, Chandru <chandru.in@gmail.com> wrote:
Given every request requires a blocking IO call through JDBC, doesn't using a worker pool still boil down to thread-per-request, since requests will be queued when all threads in the worker pool are blocked on JDBC?

--
Chandra Sekar.S

On Fri, Jul 8, 2016 at 6:10 PM, Bill O'Neil <bill@dartalley.com> wrote:
The cost of switching threads is fairly negligible. If you are concerned about that level of performance, then you are more likely to run into issues using the one-thread-per-connection model. If you use the one-thread-per-connection model, you can only have that number of HTTP clients connected to your web server at any given time. However, with the IO/worker thread model you can have a low number of IO threads that accept a very large number of concurrent connections, which then queue up all of the blocking requests for the workers to handle.

The thread-per-request model is like going back in time to the Tomcat days, where every web server gets configured with hundreds to thousands of threads.

However, if you don't have a high number of concurrent users, it realistically shouldn't matter which way you choose. Just keep in mind that the thread-per-request model doesn't scale as far when you have blocking work.

On Fri, Jul 8, 2016 at 8:31 AM, Chandru <chandru.in@gmail.com> wrote:
My line of thought was: if every request requires a blocking DB call, why incur the cost of switching threads within a request if I can instead simply increase the number of IO threads without any adverse effect?

--
Chandra Sekar.S

On Fri, Jul 8, 2016 at 5:54 PM, Bill O'Neil <bill@dartalley.com> wrote:
This is exactly what the worker thread pool is built for; why would you want to use the IO threads instead? The IO threads are for reading from and writing to the socket of the HTTP request. All blocking operations SHOULD be dispatched to worker threads.
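A sketch of a handler written that way, following Undertow's usual dispatch pattern; runQuery() here is a hypothetical stand-in for the blocking JDBC call:

    import io.undertow.server.HttpHandler;
    import io.undertow.server.HttpServerExchange;

    public class BlockingJdbcHandler implements HttpHandler {

        @Override
        public void handleRequest(HttpServerExchange exchange) throws Exception {
            if (exchange.isInIoThread()) {
                // Re-run this handler on a worker thread; the IO thread returns
                // immediately and keeps servicing other connections.
                exchange.dispatch(this);
                return;
            }
            // Now on a worker thread, so blocking is acceptable.
            String result = runQuery(); // hypothetical blocking JDBC call
            exchange.getResponseSender().send(result);
        }

        private String runQuery() {
            return "result"; // placeholder for real JDBC work
        }
    }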

On Fri, Jul 8, 2016 at 8:20 AM, Chandru <chandru.in@gmail.com> wrote:
If I have an HTTP service where every request requires a blocking JDBC call, is it acceptable to increase the number of IO threads to a large value (say, 10*cores) instead of dispatching to the worker thread pool on each request?

Will configuring such a large number of IO threads lead to any adverse effect on throughput or latency?

--
Chandra Sekar.S

_______________________________________________
undertow-dev mailing list
undertow-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/undertow-dev