Have you checked the logs?
Your log had these entries:
Feb 28 23:18:47 web2 kernel: nf_conntrack: nf_conntrack: table full, dropping packet
What is the output of:
sysctl -a | grep conntrack | grep timeout
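(For reference, assuming a stock Linux netfilter setup, current usage versus the ceiling can be compared with:

sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

If the table really is filling up, raising net.netfilter.nf_conntrack_max, e.g. via sysctl -w, is the usual mitigation; the right value depends on traffic and available RAM.)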
Generally, clients also close the connection after a few thousand requests, aside from the normal fatal conditions. There might be other cases too, but I am not aware of them. They keep initiating new connections if we do not respond within the threshold time frame. This is a server-to-server communication system.
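For reference, the client behaviour described above looks roughly like this (a minimal sketch using java.net.http; the URL, 100ms budget and retry count are illustrative, not taken from the real clients):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.net.http.HttpTimeoutException;
    import java.time.Duration;

    public class TimeoutRetryClient {
        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://peer:8080/"))
                    .timeout(Duration.ofMillis(100)) // strict latency budget
                    .build();
            for (int attempt = 0; attempt < 3; attempt++) {
                // A fresh client means a fresh connection pool, so each retry
                // arrives on a brand-new TCP connection.
                HttpClient client = HttpClient.newHttpClient();
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(response.statusCode());
                    break;
                } catch (HttpTimeoutException e) {
                    // Server did not answer in time: drop this connection and retry.
                }
            }
        }
    }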
On Mon, Mar 2, 2020 at 10:26 AM Stuart Douglas <firstname.lastname@example.org> wrote:
This sounds like a bug: when the client closes the connection, it should wake up the read listener, which will read -1 and then cleanly close the socket.
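For context, that expected path looks roughly like this as an XNIO read listener (a minimal sketch; the class name and buffer size are illustrative):

    import java.io.IOException;
    import java.nio.ByteBuffer;

    import org.xnio.ChannelListener;
    import org.xnio.IoUtils;
    import org.xnio.channels.StreamSourceChannel;

    public class EofReadListener implements ChannelListener<StreamSourceChannel> {
        private final ByteBuffer buffer = ByteBuffer.allocate(1024);

        @Override
        public void handleEvent(StreamSourceChannel channel) {
            try {
                buffer.clear();
                int res = channel.read(buffer);
                if (res == -1) {
                    // Peer sent FIN: read() returns -1, so we close our side
                    // and the socket does not linger in CLOSE_WAIT.
                    IoUtils.safeClose(channel);
                }
                // res == 0 is a spurious wakeup; res > 0 means data to consume.
            } catch (IOException e) {
                IoUtils.safeClose(channel);
            }
        }
    }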
Are the clients closing idle connections or connections processing a request?
On Mon, 2 Mar 2020 at 14:31, Nishant Kumar <email@example.com> wrote:
I agree that it's a load-balancing issue, but we can't do much about it at the moment.
I still see issues after using the latest XNIO (3.7.7) with Undertow. What I have observed is that when there is a spike in requests and CONNECTION_HIGH_WATER is reached, the server stops accepting new connections, as expected. The clients then start closing their connections because of the delay (we have a strict low-latency requirement of < 100ms) and try to create new connections again (which will also not be accepted), but the server has not closed its side of those connections (NO_REQUEST_TIMEOUT = 6000), so at that moment there is a high number of connections in CLOSE_WAIT. My understanding is that the server counts CLOSE_WAIT + ESTABLISHED connections against CONNECTION_HIGH_WATER.
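For reference, a minimal sketch of how those options are wired up on the server (the port, host, handler and high-water values are placeholders; only NO_REQUEST_TIMEOUT = 6000 comes from this thread):

    import org.xnio.Options;

    import io.undertow.Undertow;
    import io.undertow.UndertowOptions;

    public class ServerSetup {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0",
                            exchange -> exchange.getResponseSender().send("ok"))
                    // Stop accepting new connections once this many are open...
                    .setSocketOption(Options.CONNECTION_HIGH_WATER, 100_000)
                    // ...and resume accepting once the count drops back below this.
                    .setSocketOption(Options.CONNECTION_LOW_WATER, 100_000)
                    // Close a connection that has had no request for 6000 ms.
                    .setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 6000)
                    .build();
            server.start();
        }
    }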
Is there a way that I can close all CLOSE_WAIT connections at that moment, so that the connection count drops below CONNECTION_HIGH_WATER and we start responding on newly established connections? Or any other suggestions? I have tried removing CONNECTION_HIGH_WATER and relying on the FD limit, but that didn't work.
On Sun, Mar 1, 2020 at 7:47 AM Stan Rosenberg <firstname.lastname@example.org> wrote:
On Sat, Feb 29, 2020 at 8:18 PM Nishant Kumar <email@example.com> wrote:
Thanks for the reply. I am running it under supervisord and I have updated the open file limit in the supervisord config. The problem seems to be the same as what @Carter has mentioned. It happens mostly during a sudden traffic spike, followed by a sudden increase (~30k-300k) in TIME_WAIT sockets.
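(For reference, the relevant supervisord setting is minfds in the [supervisord] section of supervisord.conf, e.g. minfds=100000; the value shown here is illustrative, not taken from this thread.)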
The changes in https://github.com/xnio/xnio/pull/206/files#diff-23a6a7997705ea72e4016c11bf9d214bR453 are likely to improve the exceptional case of exceeding the file descriptor limit. However, if you're already setting the limit very high (e.g., in our case it was 795588), then exceeding it is a symptom of not properly load-balancing your traffic; with that many connections, you'd better have a ton of free RAM available.
_______________________________________________
undertow-dev mailing list
firstname.lastname@example.org
https://lists.jboss.org/mailman/listinfo/undertow-dev