That's the TCP backlog queue, which is only for connections that are being
initially established. Once the connection is accepted (which happens
quickly, since Undertow's I/O handling is non-blocking) the queue entry is
removed. Worker pool size does not affect Undertow's ability to handle
incoming connections.
The reason applications set a limit is just as a safeguard against some
types of flood/DoS attack.
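For illustration (this sketch is not from the thread), the backlog is the second argument to a plain `java.net.ServerSocket`; the kernel treats it as a hint for how many fully established, not-yet-accepted connections may wait. The class and method names here are made up for the demo:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {

    // Opens a listener with a backlog of 1000 (a hint to the OS for how many
    // completed-handshake connections may queue before accept() is called),
    // connects a client, then accepts it. Returns whether the accepted
    // socket is connected. Port 0 asks the OS for any free port.
    public static boolean connectAndAccept() throws IOException {
        try (ServerSocket server =
                     new ServerSocket(0, 1000, InetAddress.getLoopbackAddress())) {
            // The client connect completes before accept() is even called:
            // the kernel finishes the handshake and parks the connection in
            // the backlog queue until the application accepts it.
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                            server.getLocalPort());
                 Socket accepted = server.accept()) {
                return accepted.isConnected();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("accepted from backlog: " + connectAndAccept());
    }
}
```

Note that most operating systems silently cap the backlog value (e.g. via `net.core.somaxconn` on Linux), so the number passed is an upper bound request, not a guarantee.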
On Feb 22, 2018, at 9:38 AM, Brad Wood <bdw429s(a)gmail.com> wrote:
Thanks guys. I seem to have gotten a couple of conflicting replies :)
it defaults to a queue size of 1000
It is unbounded
What exactly is the 1000 backlog setting doing?
Thanks!
~Brad
*Developer Advocate*
*Ortus Solutions, Corp *
E-mail: brad(a)coldbox.org
ColdBox Platform:
http://www.coldbox.org
Blog:
http://www.codersrevolution.com
On Thu, Feb 22, 2018 at 4:32 AM, Stuart Douglas <sdouglas(a)redhat.com> wrote:
On Thu, Feb 22, 2018 at 4:04 PM, Brad Wood <bdw429s(a)gmail.com> wrote:
> I'm looking for a bit of understanding on just how Undertow handles large
> numbers of requests coming into a server. Specifically when more requests
> are being sent in than are being completed. I've been doing some load
> testing on a CFML app (Lucee Server) where I happen to have my io-threads
> set to 4 and my worker-threads set to 150. I'm using a monitoring tool
> (FusionReactor) that shows me the number of currently executing threads at
> server, and under heavy load I see exactly 150 running HTTP threads in my app
> server, which makes sense since I have 150 worker threads. I'm assuming
> here that I can't simultaneously process more requests than I have worker
> threads (please correct if I'm off there)
>
> So assuming I'm thinking that through correctly, what happens to
> additional requests that are coming into the server at that point? I
> assume they are being queued somewhere, but
>
> - What is this queue?
>
The queue is in the XNIO worker (although, if you want, you could use
a different executor, which has its own queue).
>
> - How do I monitor it?
>
XNIO binds an MBean that you can use to inspect the worker queue size.
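As a sketch of how to reach that MBean with nothing but the JDK's JMX API: the `org.xnio` domain pattern and the exact attribute names are assumptions here and vary by XNIO version, so verify them against your own deployment with a JMX browser such as JConsole:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class WorkerQueueProbe {

    // Returns the ObjectNames of all MBeans registered under the org.xnio
    // domain. The domain name is an assumption; check it with JConsole.
    public static Set<ObjectName> findXnioMBeans() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName("org.xnio:*"), null);
    }

    public static void main(String[] args) throws Exception {
        for (ObjectName name : findXnioMBeans()) {
            // Read whatever queue-size attribute the worker MBean exposes,
            // e.g. server.getAttribute(name, "WorkerQueueSize") -- the
            // attribute name here is an assumption and varies by version.
            System.out.println(name);
        }
    }
}
```

In a JVM without XNIO loaded the query simply returns an empty set, so the probe is safe to run anywhere.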
>
> - How big can it get?
>
It is unbounded, as rejecting tasks from the worker is very problematic
in some circumstances. If you want to limit the number of concurrent
requests, use io.undertow.server.handlers.RequestLimitingHandler.
>
> - Where do I change the size?
> - How long do things stay in there before sending back an error to
> the HTTP client?
>
If you use the RequestLimitingHandler, this is configurable. It has its
own queue with a fixed size, and a configurable timeout.
>
> - Can I control what error comes back to the HTTP client in that
> scenario?
>
You can, using io.undertow.server.handlers.RequestLimit#setFailureHandler.
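The mechanics described here (a fixed concurrency limit, a bounded queue, and a pluggable rejection handler) can be sketched with plain JDK primitives. This is an illustrative analog only, not Undertow's actual RequestLimitingHandler implementation, and all names in it are invented for the demo:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LimiterSketch {

    // Submits 5 "requests" to a limiter allowing 2 concurrent tasks with a
    // queue bounded at 1, counting refusals in a custom rejection handler.
    // These three knobs are analogous to the handler's max concurrent
    // requests, its queue size, and RequestLimit#setFailureHandler.
    public static int runScenario() throws InterruptedException {
        AtomicInteger rejected = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                // Rejection handler: instead of throwing, just count -- an
                // HTTP server would send back an error response here.
                (task, executor) -> rejected.incrementAndGet());
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Two tasks run, one queues, the remaining two are refused.
        return rejected.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("rejected: " + runScenario());
    }
}
```

The design point being made in the thread falls out of the sketch: an unbounded worker queue never rejects, so if you want back-pressure you must bound the queue yourself and decide what the caller sees when it is full.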
>
> - If I'm using an HTTP/S and AJP listener, do they all share the same
> settings? Do they share the same queues?
>
In general, yes. You could give each listener its own limiting handler
with its own queue by explicitly setting the listener's root handler;
in general, however, they will all just use the same handler chain.
Stuart
> I've done a bit of Googling and reviewed some docs but haven't quite
> found any definitive information on this, and a lot of info I found was
> about Wildfly specifically so I wasn't sure how much of it applied.
>
> Thanks!
>
> ~Brad
>
>
>
> _______________________________________________
> undertow-dev mailing list
> undertow-dev(a)lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/undertow-dev
>