I'm not positive, but I believe you need to look into the AcceptingChannel.
See where the AcceptingChannel for HTTP is bound here:
https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io...
Digging a little further, you find NioXnioWorker.createTcpConnectionServer:
https://github.com/xnio/xnio/blob/3.x/nio-impl/src/main/java/org/xnio/nio...
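To make that concrete, here's a rough sketch (not Undertow's actual code, just the
XNIO 3.x API those classes sit on) of binding an accepting channel with an OptionMap;
the port, worker settings and empty accept listener are made up for illustration:

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import org.xnio.ChannelListener;
    import org.xnio.OptionMap;
    import org.xnio.Options;
    import org.xnio.StreamConnection;
    import org.xnio.Xnio;
    import org.xnio.XnioWorker;
    import org.xnio.channels.AcceptingChannel;

    public class AcceptingChannelSketch {
        public static void main(String[] args) throws IOException {
            // Undertow creates a worker like this for you under the hood.
            XnioWorker worker = Xnio.getInstance().createWorker(
                    OptionMap.create(Options.WORKER_IO_THREADS, 4));

            // Bind the accepting channel; the BACKLOG option in the OptionMap
            // ends up as the backlog argument to ServerSocket.bind().
            AcceptingChannel<StreamConnection> server = worker.createStreamConnectionServer(
                    new InetSocketAddress("0.0.0.0", 8080),
                    (ChannelListener<AcceptingChannel<StreamConnection>>) channel -> {
                        // accept and hand off the connection here
                    },
                    OptionMap.create(Options.BACKLOG, 1000));
            server.resumeAccepts();
        }
    }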
Notice the backlog option. The Javadoc for ServerSocket.bind reads:
* The {@code backlog} argument is the requested maximum number of
* pending connections on the socket. Its exact semantics are implementation
* specific. In particular, an implementation may impose a maximum length
* or may choose to ignore the parameter altogether. The value provided
* should be greater than {@code 0}. If it is less than or equal to
* {@code 0}, then an implementation specific default will be used.
*
* @param endpoint The IP address and port number to bind to.
* @param backlog  requested maximum length of the queue of
*                 incoming connections.
So, depending on the implementation, it may or may not queue.
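In plain java.net terms, that Javadoc is describing this call (a minimal sketch with
an arbitrary port; the backlog is only a hint, and on Linux, for example, it is capped
by net.core.somaxconn):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class BindSketch {
        public static void main(String[] args) throws IOException {
            ServerSocket socket = new ServerSocket();
            // Second argument is the requested backlog of not-yet-accepted
            // connections; the OS may clamp or ignore it.
            socket.bind(new InetSocketAddress(8080), 1000);
        }
    }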
Coming back to Undertow, you can find:
.set(Options.BACKLOG, 1000)
https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io...
So Undertow defaults to a backlog queue size of 1000, as long as the underlying
implementation supports it, and you can configure it with the BACKLOG option.
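If you're building the server yourself, overriding that default looks roughly like
this (just a sketch; the listener address and the value 2000 are arbitrary examples):

    import io.undertow.Undertow;
    import org.xnio.Options;

    public class BacklogConfigSketch {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0")
                    // Socket options are applied to the listen socket, so this
                    // replaces the default BACKLOG of 1000.
                    .setSocketOption(Options.BACKLOG, 2000)
                    .setHandler(exchange -> exchange.getResponseSender().send("ok"))
                    .build();
            server.start();
        }
    }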
Bill
On Thu, Feb 22, 2018 at 12:04 AM, Brad Wood <bdw429s(a)gmail.com> wrote:
I'm looking for a bit of understanding on just how Undertow
handles large
numbers of requests coming into a server. Specifically when more requests
are being sent in than are being completed. I've been doing some load
testing on a CFML app (Lucee Server) where I happen to have my io-threads
set to 4 and my worker-threads set to 150. I'm using a monitoring tool
(FusionReactor) that shows me the number of currently executing threads at
my app, and under heavy load I see exactly 150 running HTTP threads in my app
server, which makes sense since I have 150 worker threads. I'm assuming
here that I can't simultaneously process more requests than I have worker
threads (please correct me if I'm off there).
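(For reference, in embedded-Undertow terms the setup I'm describing would look
roughly like this; the values are the ones above and the handler is just a placeholder:)

    import io.undertow.Undertow;

    public class ThreadConfigSketch {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8888, "localhost")
                    .setIoThreads(4)        // NIO/selector threads doing non-blocking I/O
                    .setWorkerThreads(150)  // blocking request work; caps concurrent requests
                    .setHandler(exchange -> exchange.getResponseSender().send("hello"))
                    .build();
            server.start();
        }
    }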
So assuming I'm thinking that through correctly, what happens to
additional requests that are coming into the server at that point? I
assume they are being queued somewhere, but
- What is this queue?
- How do I monitor it?
- How big can it get?
- Where do I change the size?
- How long do things stay in there before an error is sent back to the
HTTP client?
- Can I control what error comes back to the HTTP client in that
scenario?
- If I'm using HTTP/S and AJP listeners, do they all share the same
settings? Do they share the same queues?
I've done a bit of Googling and reviewed some docs but haven't quite found
any definitive information on this, and a lot of info I found was about
Wildfly specifically, so I wasn't sure how much of it applied.
Thanks!
~Brad
*Developer Advocate*
*Ortus Solutions, Corp *
E-mail: brad(a)coldbox.org
ColdBox Platform: http://www.coldbox.org
Blog: http://www.codersrevolution.com
_______________________________________________
undertow-dev mailing list
undertow-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/undertow-dev