Have you checked the logs again?
Your log had these entries:
/var/log/messages:
Feb 28 23:18:47 web2 kernel: nf_conntrack: nf_conntrack: table full, dropping packet
What is the output of:
sysctl -a | grep conntrack | grep timeout
Please read:
https://security.stackexchange.com/questions/43205/nf-conntrack-table-full-dropping-packet
On 2-3-2020 09:59, Nishant Kumar wrote:
Generally, clients also close the connection after a few thousand
requests, apart from the normal fatal conditions. There might be
other cases too, but I am not aware of them. They keep initiating new
connections if we do not respond within the threshold time frame.
This is a server to server communication system.
On Mon, Mar 2, 2020 at 10:26 AM Stuart Douglas <sdouglas@redhat.com> wrote:
This sounds like a bug: when the client closes the connection, it
should wake up the read listener, which will read -1 and then
cleanly close the socket.
Are the clients closing idle connections or connections processing
a request?
Stuart
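For context, the clean-close path described above can be sketched with plain java.nio (this is an illustration of the expected behavior, not Undertow's actual internals; the class and method names are hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Sketch: when the peer closes the connection, read() returns -1 and we
// close our side, so the socket does not linger in CLOSE_WAIT.
public class CleanClose {
    static void onReadable(SocketChannel channel) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        int res = channel.read(buf);
        if (res == -1) {
            channel.close(); // skipping this leaves the fd stuck in CLOSE_WAIT
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            SocketChannel client = SocketChannel.open(server.getLocalAddress());
            SocketChannel accepted = server.accept();
            client.close();       // peer closes the connection
            onReadable(accepted); // read() sees -1 and closes cleanly
            System.out.println(accepted.isOpen()); // false
        }
    }
}
```

If that close never happens, each half-closed socket keeps counting against the connection limit, which matches the CLOSE_WAIT build-up described below in the thread's earlier messages.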
On Mon, 2 Mar 2020 at 14:31, Nishant Kumar <nishantkumar35@gmail.com> wrote:
I agree that it's a load-balancing issue but we can't do much
about it at this moment.
I still see issues after using the latest XNIO (3.7.7) with
Undertow. What I have observed is that when there is a spike in
requests and CONNECTION_HIGH_WATER is reached, the server stops
accepting new connections as expected, and the clients start closing
their connections because of the delay (we have a strict low-latency
requirement of < 100 ms) and try to create new connections again
(which will also not be accepted), but the server has not closed
those connections (NO_REQUEST_TIMEOUT = 6000), so at that point
there is a high number of CLOSE_WAIT connections. The server is
counting CLOSE_WAIT + ESTABLISHED connections toward
CONNECTION_HIGH_WATER (my understanding).
Is there a way to close all CLOSE_WAIT connections at that point, so
that the connection count drops below CONNECTION_HIGH_WATER and we
start responding to newly established connections? Or any other
suggestions? I have tried removing CONNECTION_HIGH_WATER and relying
on the FD limit, but that didn't work.
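For reference, the two options discussed above are set on the Undertow builder roughly like this (a sketch, not the poster's actual configuration; the port, handler, and high-water value of 10000 are assumptions for illustration):

```java
import io.undertow.Undertow;
import io.undertow.UndertowOptions;
import org.xnio.Options;

// Sketch of the options under discussion. CONNECTION_HIGH_WATER caps how
// many connections may be open at once (and, per the observation above,
// CLOSE_WAIT sockets still count toward it until the server side closes
// them); NO_REQUEST_TIMEOUT is how long, in milliseconds, an idle
// connection may sit without receiving a request before it is closed.
public class ServerConfigSketch {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                .setSocketOption(Options.CONNECTION_HIGH_WATER, 10000)        // hypothetical cap
                .setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 6000)    // value from this thread
                .setHandler(exchange -> exchange.getResponseSender().send("ok"))
                .build();
        server.start();
    }
}
```

Note that with NO_REQUEST_TIMEOUT = 6000, an idle (including CLOSE_WAIT) connection should only be reclaimed after 6 seconds, which is long compared with the < 100 ms latency budget mentioned above.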
On Sun, Mar 1, 2020 at 7:47 AM Stan Rosenberg <stan.rosenberg@gmail.com> wrote:
On Sat, Feb 29, 2020 at 8:18 PM Nishant Kumar <nishantkumar35@gmail.com> wrote:
Thanks for the reply. I am running it under supervisord, and I have
updated the open-file limit in the supervisord config. The problem
seems to be the same as what @Carter has mentioned. It happens
mostly during a sudden traffic spike, followed by a sudden increase
(~30k-300k) in TIME_WAIT sockets.
The changes in
https://github.com/xnio/xnio/pull/206/files#diff-23a6a7997705ea72e4016c11...
are likely to improve the exceptional case of exceeding the
file-descriptor limit. However, if you're already setting the limit
that high (e.g., in our case it was 795588), then exceeding it is a
symptom of not properly load-balancing your traffic; with that many
connections, you'd better have a ton of free RAM available.
--
Nishant Kumar
Bangalore, India
Mob: +91 80088 42030
Email: nishantkumar35@gmail.com
_______________________________________________
undertow-dev mailing list
undertow-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/undertow-dev