You can use jconsole to check the org.xnio MBeans. (By the way, if you use
WildFly 11, you can also check the same metrics under the io subsystem.)
If you use Undertow with XNIO 3.4.3 or later, you can also obtain an
XnioWorkerMXBean from the XnioWorker through the Undertow API. For example:
- From io.undertow.server.HttpServerExchange: exchange.getConnection().getWorker().getMXBean()
- From io.undertow.Undertow: undertow.getWorker().getMXBean()
Then, you can access the following metrics from the MXBean:
- getBusyWorkerThreadCount() = current busy worker thread count
- getWorkerQueueSize() = current queue size of worker thread pool
- getCoreWorkerPoolSize() = core worker thread pool size which you configured
- getMaxWorkerPoolSize() = max worker thread pool size which you configured
- getIoThreadCount() = io thread pool size which you configured
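Putting the list above together, the metrics could be read in application code roughly like this (a minimal sketch, assuming Undertow 1.4.x with XNIO 3.4.3+ on the classpath; the `dump` helper and the `server` variable are illustrative, not part of the Undertow API):

```java
// Sketch: dumping XNIO worker pool metrics via the Undertow API.
// Assumes a running io.undertow.Undertow instance ("server" below).
import io.undertow.Undertow;
import org.xnio.management.XnioWorkerMXBean;

public class WorkerMetricsDump {
    public static void dump(Undertow server) {
        XnioWorkerMXBean mxBean = server.getWorker().getMXBean();
        System.out.println("busy worker threads: " + mxBean.getBusyWorkerThreadCount());
        System.out.println("worker queue size:   " + mxBean.getWorkerQueueSize());
        System.out.println("core pool size:      " + mxBean.getCoreWorkerPoolSize());
        System.out.println("max pool size:       " + mxBean.getMaxWorkerPoolSize());
        System.out.println("io thread count:     " + mxBean.getIoThreadCount());
    }
}
```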
Though the latest undertow pom.xml specifies XNIO 3.3.8.Final as a
dependency, I believe it's compatible and works with newer versions of
XNIO. (Actually, WildFly 11 comes with Undertow 1.4.18.Final and a newer
XNIO.)
On Fri, Feb 23, 2018 at 1:03 AM, Brad Wood <bdw429s(a)gmail.com> wrote:
Perfect. Thanks everyone! One final question: how do I monitor the
queued requests? A tutorial link would be fine.
Ortus Solutions, Corp
ColdBox Platform: http://www.coldbox.org
On Thu, Feb 22, 2018 at 9:54 AM, Jason Greene <jason.greene(a)redhat.com>
> That's the TCP backlog queue, which is only for connections that are being
> initially established. Once the connection is accepted (which happens
> quickly, since Undertow's I/O handling is non-blocking), the connection
> is removed from the backlog. Worker size does not affect Undertow's
> ability to accept connections. The reason applications set a limit is
> just a safeguard against some types of flood/DoS.
> On Feb 22, 2018, at 9:38 AM, Brad Wood <bdw429s(a)gmail.com> wrote:
> Thanks guys. I seem to have gotten a couple of conflicting replies :)
> > it defaults to a queue size of 1000
> > It is unbounded
> What exactly is the 1000 backlog setting doing?
> Developer Advocate
> Ortus Solutions, Corp
> E-mail: brad(a)coldbox.org
> ColdBox Platform: http://www.coldbox.org
> Blog: http://www.codersrevolution.com
> On Thu, Feb 22, 2018 at 4:32 AM, Stuart Douglas <sdouglas(a)redhat.com>
>> On Thu, Feb 22, 2018 at 4:04 PM, Brad Wood <bdw429s(a)gmail.com> wrote:
>>> I'm looking for a bit of understanding on just how Undertow handles
>>> large numbers of requests coming into a server, specifically when more
>>> requests are being sent in than are being completed. I've been doing
>>> load testing on a CFML app (Lucee Server) where I happen to have my
>>> io-threads set to 4 and my worker-threads set to 150. I'm using a
>>> monitoring tool (FusionReactor) that shows me the number of currently
>>> executing threads in my app, and under heavy load I see exactly 150
>>> HTTP threads in my app server, which makes sense since I have 150 worker
>>> threads. I'm assuming here that I can't simultaneously process more
>>> requests than I have worker threads (please correct me if I'm off there).
>>> So assuming I'm thinking that through correctly, what happens to
>>> additional requests that are coming into the server at that point? I
>>> assume they are being queued somewhere, but:
>>> What is this queue?
>> The queue is in the XNIO worker (although if you want you could just use
>> a different executor which has its own queue).
>>> How do I monitor it?
>> XNIO binds an MBean that you can use to inspect the worker queue size.
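For monitoring from outside the request path, that MBean can be read over JMX with only JDK APIs (a sketch; the exact ObjectName properties vary by worker name, so this queries the org.xnio domain with a wildcard rather than hard-coding a name):

```java
// Sketch: finding XNIO worker MBeans on the platform MBean server and
// reading their queue size. On a JVM without XNIO running, the query
// simply returns an empty set.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class XnioMBeanLookup {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // XNIO registers worker MBeans under the org.xnio domain.
        for (ObjectName name : server.queryNames(new ObjectName("org.xnio:*"), null)) {
            Object queueSize = server.getAttribute(name, "WorkerQueueSize");
            System.out.println(name + " WorkerQueueSize=" + queueSize);
        }
    }
}
```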
>>> How big can it get?
>> It is unbounded, as rejecting tasks from the worker is very problematic
>> in some circumstances. If you want to limit the number of concurrent
>> requests, use io.undertow.server.handlers.RequestLimitingHandler.
>>> Where do I change the size?
>>> How long do things stay in there before sending back an error to the
>>> HTTP client?
>> If you use the RequestLimitingHandler this is configurable. It has its
>> own queue with a fixed size, and a configurable timeout.
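A minimal sketch of wiring this up (assuming Undertow 1.4.x; the limit of 150 and queue size of 1000 are illustrative values matching the thread's discussion, not defaults):

```java
// Sketch: capping concurrent requests with RequestLimitingHandler.
// Requests beyond the concurrency limit wait in the handler's own
// fixed-size queue; beyond that, the failure handler runs (503 by default).
import io.undertow.Undertow;
import io.undertow.server.handlers.RequestLimitingHandler;
import io.undertow.util.Headers;

public class LimitedServer {
    public static void main(String[] args) {
        // At most 150 in-flight requests; up to 1000 queued beyond that.
        RequestLimitingHandler limiter = new RequestLimitingHandler(150, 1000,
                exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("hello");
                });
        Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(limiter)
                .build()
                .start();
    }
}
```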
>>> Can I control what error comes back to the HTTP client in that case?
>> You can, using io.undertow.server.handlers.RequestLimit#setFailureHandler.
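For instance, a custom failure handler might look like this (a sketch; the `withCustomFailure` helper is hypothetical, and the 503 status mirrors what the default failure handler sends):

```java
// Sketch: replacing the limiter's default rejection response with a
// custom status and body via RequestLimit#setFailureHandler.
import io.undertow.server.HttpHandler;
import io.undertow.server.handlers.RequestLimitingHandler;

public class CustomRejection {
    static RequestLimitingHandler withCustomFailure(HttpHandler next) {
        RequestLimitingHandler limiter = new RequestLimitingHandler(150, 1000, next);
        // Invoked when a request can neither run nor be queued.
        limiter.getRequestLimit().setFailureHandler(exchange -> {
            exchange.setStatusCode(503);
            exchange.getResponseSender().send("Server busy, please retry later");
        });
        return limiter;
    }
}
```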
>>> If I'm using an HTTP/S and AJP listener, do they all share the same
>>> settings? Do they share the same queues?
>> In general, yes. You could give each listener its own limiting handler
>> with its own queue if you wanted, by explicitly setting the listener's
>> handler; however, in general they will all just use the same handler chain.
>>> I've done a bit of Googling and reviewed some docs but haven't quite
>>> found any definitive information on this, and a lot of the info I found was
>>> about WildFly specifically, so I wasn't sure how much of it applied.
>>> undertow-dev mailing list