dynamic URIs and overlapping method requests on URIs
by David Robinson
I believe these two questions are fairly basic, based on the posts I've read
so far on the mailing list. I'd appreciate your insights into how to handle
them correctly using Undertow handlers (I was thinking custom handlers).
First topic: a portion of the URI for my app is "dynamic" and changes as my
program runs.
For example, with this URI:
/xd/cohort1/healthevents/examdata
the "cohort1" portion "appears" after my program starts.
Each cohort may have distinct processing - so this URI:
/xd/cohort0/healthevents/examdata
needs to be handled by code that may be entirely different from the first
URI.
As a first approach, I configured a single custom handler for the path /xd/*:
rh0 = new RoutingHandler()
        .post("/xd/*", new UndertowEventHttpHandler(config, posthandlers));
Undertow server = Undertow.builder()
        .addHttpListener(ipPort, ipAddress)
        .setHandler(rh0)
        .build();
That handler examined the URI, validated the "cohort" segment, and then
called a separate helper class directly (no dispatch or async) to process
the request.
Is this the best approach or is there a better way to handle this with
Undertow?
(The out-of-the-box Path Template handler, for example, did not seem
dynamic enough to handle this sort of URI.)
Second topic: the application can receive GETs and POSTs to the same URI.
A GET to /xd/cohort1/healthevents/examdata would return data, whereas a
POST to /xd/cohort1/healthevents/examdata would expect the message body to
contain JSON to be stored.
How best to handle both GET and POST messages coming to the same URI with
Undertow? I started to add an extra layer of abstraction: a generic
handler that looks at the request method
HttpString hString = exchange.getRequestMethod();
but I was not sure where to go from there. Do I then call the specialized
POST and GET handlers with .dispatch (deprecated unless I provide an
Executor?), or is there a way to register two handlers for the same
path, just for different request methods?
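For illustration, here is the kind of thing I was hoping exists: a sketch assuming RoutingHandler keys its routes on method plus template, so the same path can carry one handler for GET and another for POST with no manual method check. The handler bodies are placeholders.

```java
import io.undertow.Handlers;
import io.undertow.Undertow;
import io.undertow.server.RoutingHandler;
import io.undertow.util.Headers;

public class MethodRouting {
    public static void main(String[] args) {
        // RoutingHandler registers each route under (method, template),
        // so GET and POST on the same path get independent handlers.
        RoutingHandler routing = Handlers.routing()
                .get("/xd/{cohort}/healthevents/examdata", exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "application/json");
                    exchange.getResponseSender().send("{\"exam\":\"data\"}");
                })
                .post("/xd/{cohort}/healthevents/examdata", exchange -> {
                    // receiveFullString reads the body asynchronously and
                    // invokes the callback when the full body has arrived.
                    exchange.getRequestReceiver().receiveFullString((ex, body) -> {
                        // store the JSON body here
                        ex.setStatusCode(201);
                        ex.endExchange();
                    });
                });

        Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                .setHandler(routing)
                .build()
                .start();
    }
}
```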
Thank you,
11 months, 1 week
Re: [undertow-dev] Too many open files: Exception accepting request, closing server channel TCP server (NIO)
by Nishant Kumar
I agree that it's a load-balancing issue but we can't do much about it at
this moment.
I still see issues after using the latest XNIO (3.7.7) with Undertow. What
I have observed is that when there is a spike in requests
and CONNECTION_HIGH_WATER is reached, the server stops accepting new
connections as expected. The clients then start to close their connections
because of the delay (we have a strict low-latency requirement, < 100 ms)
and try to create new connections again (which will also not be accepted),
but the server has not yet closed the old connections (NO_REQUEST_TIMEOUT =
6000), so there is a high number of CLOSE_WAIT connections at that moment.
My understanding is that the server counts CLOSE_WAIT as well as
ESTABLISHED connections against CONNECTION_HIGH_WATER.
Is there a way to close all CLOSE_WAIT connections at that moment, so that
the connection count drops under CONNECTION_HIGH_WATER and we start
responding to newly established connections? Or any other suggestions? I
have tried removing CONNECTION_HIGH_WATER and relying on the FD limit
instead, but that didn't work.
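For reference, this is the kind of tuning I am experimenting with: reaping idle and half-closed connections much faster, so abandoned connections stop counting against CONNECTION_HIGH_WATER sooner. A minimal sketch; the specific timeout values are guesses for our workload, not tested recommendations.

```java
import io.undertow.Undertow;
import io.undertow.UndertowOptions;
import org.xnio.Options;

public class FastReap {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                // Close a connection if no request arrives on it within 1000 ms
                // (down from our current 6000 ms), so connections the client
                // has already abandoned are released sooner.
                .setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 1000)
                // Also bound total idle time per connection.
                .setServerOption(UndertowOptions.IDLE_TIMEOUT, 2000)
                // A gap between high and low water lets accepting resume
                // before the count climbs straight back to the ceiling.
                .setSocketOption(Options.CONNECTION_HIGH_WATER, 20000)
                .setSocketOption(Options.CONNECTION_LOW_WATER, 18000)
                .setHandler(exchange -> exchange.getResponseSender().send("ok"))
                .build();
        server.start();
    }
}
```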
On Sun, Mar 1, 2020 at 7:47 AM Stan Rosenberg <stan.rosenberg(a)gmail.com>
wrote:
> On Sat, Feb 29, 2020 at 8:18 PM Nishant Kumar <nishantkumar35(a)gmail.com>
> wrote:
>
>> Thanks for the reply. I am running it under supervisord and I have
>> updated the open file limit in the supervisord config. The problem seems to be
>> the same as what @Carter has mentioned. It happens mostly during a sudden
>> traffic spike and then a sudden increase (~30k-300k) in TIME_WAIT sockets.
>>
>
> The changes in
> https://github.com/xnio/xnio/pull/206/files#diff-23a6a7997705ea72e4016c11... are
> likely to improve the exceptional case of exceeding the file descriptor
> limit. However, if you're already setting the limit that high (e.g., in our
> case it was 795588), then exceeding it is a symptom of not properly
> load-balancing your traffic; with that many connections, you'd better have
> a ton of free RAM available.
>
--
Nishant Kumar
Bangalore, India
Mob: +91 80088 42030
Email: nishantkumar35(a)gmail.com
11 months, 2 weeks
Proxy and chunking.
by Jocke Eriksson
Hi devs and followers, we have a problem and maybe someone on this list has some input.
We use Undertow as an API gateway in front of JBoss application servers.
One of our APIs responds with a chunked response, and the receiving side expects each chunk to be a valid JSON object.
When we analyze the TCP dumps, it looks like some chunks are not the full JSON response we are expecting.
So the question is: could this be a problem in the gateway?
I found a flag called PRE_CHUNKED_RESPONSE; could this be the answer?
If so, how do we apply it in a proxy handler?
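In case it helps, here is the kind of wrapper I imagine, assuming the flag I found is the HttpAttachments.PRE_CHUNKED_RESPONSE attachment key (an assumption on my part); PreChunkedWrapper is a made-up name, and next would be our ProxyHandler.

```java
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.HttpAttachments;

public class PreChunkedWrapper implements HttpHandler {
    private final HttpHandler next; // e.g. the ProxyHandler for the gateway

    public PreChunkedWrapper(HttpHandler next) {
        this.next = next;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        // Mark the response body as already chunk-encoded, so the chunk
        // framing should be passed through rather than re-encoded.
        exchange.putAttachment(HttpAttachments.PRE_CHUNKED_RESPONSE, Boolean.TRUE);
        next.handleRequest(exchange);
    }
}
```

One thing that worries me: HTTP intermediaries are generally allowed to re-frame chunked bodies (chunking is hop-by-hop), so relying on chunk boundaries to delimit JSON objects may be fragile regardless.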
Regards Joakim.
11 months, 4 weeks
Worker queue timeout
by James Howe
Under load, when the worker threads are saturated, new requests are queued.
Is there a way to time out these requests, so that the server doesn't waste time serving them later when the client has already given up?
I saw a reference to RequestLimitingHandler having a configurable timeout, but that doesn't appear to be true.
I also don't want any limit on how many requests can be waiting, only on how long they can wait.
This doesn't seem to be a feature offered by any of the standard Java ExecutorService implementations,
but perhaps Undertow has added it (or an appropriate extension point for it) somewhere?
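To make the idea concrete, here is a rough sketch of what I have in mind (QueueWaitTimeoutHandler is a made-up name, not an existing Undertow feature as far as I know): stamp the time on the I/O thread before dispatching, and when the worker finally dequeues the task, fail fast if it waited too long.

```java
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.StatusCodes;

/**
 * Sketch: reject requests that sat too long in the worker queue.
 * The timestamp is taken on the I/O thread before dispatch; the age
 * check runs on the worker thread once the task is finally dequeued.
 */
public class QueueWaitTimeoutHandler implements HttpHandler {
    private final HttpHandler next;
    private final long maxQueueWaitNanos;

    public QueueWaitTimeoutHandler(HttpHandler next, long maxQueueWaitMillis) {
        this.next = next;
        this.maxQueueWaitNanos = maxQueueWaitMillis * 1_000_000L;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        final long enqueued = System.nanoTime();
        exchange.dispatch(() -> {
            try {
                if (System.nanoTime() - enqueued > maxQueueWaitNanos) {
                    // The client has likely given up; skip the work entirely.
                    exchange.setStatusCode(StatusCodes.SERVICE_UNAVAILABLE);
                    exchange.endExchange();
                } else {
                    next.handleRequest(exchange);
                }
            } catch (Exception e) {
                exchange.setStatusCode(StatusCodes.INTERNAL_SERVER_ERROR);
                exchange.endExchange();
            }
        });
    }
}
```

This imposes no limit on queue depth, only on waiting time, which is the behavior I'm after; the open question is whether Undertow already offers something equivalent.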
Thanks
James
12 months
Too many open files: Exception accepting request, closing server channel TCP server (NIO)
by Nishant Kumar
Hi,
I am using undertow-core (2.0.29.Final) with Java 11. Every once in a while
(every 1-2 days) I get a "Too many open files" error on my high-load server
(30k JSON requests/sec), and then it stops responding. I have noticed that it
happens during full GC (but not during every full GC). Could you please help
me fix this? I have also noticed that Undertow is using a very old
version of XNIO (3.3.8.Final). Is there any particular reason for that?
*System info:*
*RAM:* 64 GB
*Core:* 24 with hyperthreading (2 Thread per core)
*Java version: *11
*VM option: *-server -Xms12g -Xmx16g -verbose:gc
-Xloggc:/home/platform/platform-java/logs/platform-gc.log
-Dorg.wildfly.openssl.path=/usr/local/ssl/lib
-Dlog4j.configurationFile=/home/platform/platform-java/src/main/resources/log4j2.xml
*$] netstat -nalp | grep java | grep -E ":80 |:443 " | awk '{print $6}'|
sort | uniq -c*
7916 ESTABLISHED
2 LISTEN
*$] ulimit -a*
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256172
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 800000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 15360
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
*$] openssl version*
OpenSSL 1.1.1d 10 Sep 2019
*$] cat /etc/redhat-release*
CentOS Linux release 7.6.1810 (Core)
*$] uname -a*
Linux web2.ams.bidstalk.com 5.1.6-1.el7.elrepo.x86_64 #1 SMP Fri May 31
11:04:58 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
*Undertow code:*
server = Undertow.builder()
        .addHttpListener(80, "0.0.0.0")
        .addHttpsListener(443, "0.0.0.0", sslContext)
        .setWorkerThreads(32)
        .setServerOption(UndertowOptions.ENABLE_HTTP2, true)
        .setServerOption(UndertowOptions.IDLE_TIMEOUT, 120000) // 120000 ms = 2 minutes
        .setServerOption(org.xnio.Options.SSL_SERVER_SESSION_CACHE_SIZE,
                1024 * 20) // 20480 sessions
        .setServerOption(org.xnio.Options.SSL_SERVER_SESSION_TIMEOUT,
                3000) // 3000 s = 50 minutes
        .setIoThreads(20)
        .setWorkerOption(org.xnio.Options.TCP_NODELAY, true)
        .setSocketOption(org.xnio.Options.TCP_NODELAY, true)
        .setSocketOption(org.xnio.Options.KEEP_ALIVE, true)
        .setSocketOption(org.xnio.Options.REUSE_ADDRESSES, true)
        .setSocketOption(org.xnio.Options.CONNECTION_HIGH_WATER, 20000)
        .setSocketOption(org.xnio.Options.CONNECTION_LOW_WATER, 20000)
        .setWorkerOption(org.xnio.Options.THREAD_AFFINITY, false)
        .setHandler(Handlers.routing().post("/", new RequestHandler(appContext)))
        .build();
server.start();
*error.log*
2020-02-28 23:19:05.997 [ERROR] [XNIO-1 Accept]
org.xnio.nio.QueuedNioTcpServerHandle:handleReady(-1): Exception
accepting request, closing server channel TCP server (NIO) <910f2428>
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:533)
~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:285)
~[?:?]
at org.xnio.nio.QueuedNioTcpServer.handleReady(QueuedNioTcpServer.java:467)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
at org.xnio.nio.QueuedNioTcpServerHandle.handleReady(QueuedNioTcpServerHandle.java:38)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
at org.xnio.nio.WorkerThread.run(WorkerThread.java:561)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
2020-02-28 23:19:06.201 [ERROR] [XNIO-1 Accept]
org.xnio.nio.QueuedNioTcpServerHandle:handleReady(-1): Exception
accepting request, closing server channel TCP server (NIO) <9834817a>
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:533)
~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:285)
~[?:?]
at org.xnio.nio.QueuedNioTcpServer.handleReady(QueuedNioTcpServer.java:467)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
at org.xnio.nio.QueuedNioTcpServerHandle.handleReady(QueuedNioTcpServerHandle.java:38)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
at org.xnio.nio.WorkerThread.run(WorkerThread.java:561)
[xnio-nio-3.3.8.Final.jar:3.3.8.Final]
*/var/log/messages:*
Feb 28 23:18:47 web2 kernel: nf_conntrack: nf_conntrack: table full, dropping packet
(the line above repeats 10 times at 23:18:47)
Feb 28 23:18:52 web2 kernel: net_ratelimit: 21918 callbacks suppressed
Feb 28 23:18:52 web2 kernel: nf_conntrack: nf_conntrack: table full, dropping packet
(the line above repeats 10 times at 23:18:52)
Feb 28 23:18:57 web2 kernel: net_ratelimit: 60720 callbacks suppressed
Feb 28 23:18:57 web2 kernel: nf_conntrack: nf_conntrack: table full, dropping packet
(repeats continue)
..
--
Nishant Kumar
Bangalore, India
Mob: +91 80088 42030
Email: nishantkumar35(a)gmail.com
12 months