That sounds like the same issue as mine (see other thread this month) - sockets not being
closed while waiting for a worker.
On 2 Mar 2020, at 04:57, Stuart Douglas <sdouglas@redhat.com> wrote:
This sounds like a bug: when the client closes the connection, it should wake up the read
listener, which will read -1 and then cleanly close the socket.
Are the clients closing idle connections or connections processing a request?
Stuart
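[Editor's note: the expected EOF behavior Stuart describes can be sketched with plain JDK NIO. This is a minimal, self-contained illustration, not Undertow's actual listener code; the class name and buffer size are arbitrary.]

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class EofDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // Client connects and immediately closes, like a client giving up
            // on a stalled request.
            SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
            SocketChannel accepted = server.accept();
            client.close();

            // The server-side read sees -1 (EOF) and must close its end;
            // if it never does, the socket lingers in CLOSE_WAIT.
            ByteBuffer buf = ByteBuffer.allocate(64);
            int n = accepted.read(buf);
            if (n == -1) {
                accepted.close(); // releases the FD, leaving CLOSE_WAIT
            }
            System.out.println("read returned " + n + ", accepted open=" + accepted.isOpen());
        }
    }
}
```

If the read listener is never woken (or the -1 is never acted on), the server half of the socket stays open and the connection sits in CLOSE_WAIT, which matches the symptom described below.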
On Mon, 2 Mar 2020 at 14:31, Nishant Kumar <nishantkumar35@gmail.com> wrote:
I agree that it's a load-balancing issue but we can't do much about it at this
moment.
I still see issues after upgrading to the latest XNIO (3.7.7) with Undertow. What I have
observed is that when there is a spike in requests and CONNECTION_HIGH_WATER is reached,
the server stops accepting new connections, as expected. The clients then start closing
their connections because of the delay (we have a strict low-latency requirement of
< 100 ms) and try to open new connections, which are also not accepted. But the server has
not yet closed the abandoned connections (NO_REQUEST_TIMEOUT = 6000), so at that point
there is a high number of connections in CLOSE_WAIT. My understanding is that the server
counts both CLOSE_WAIT and ESTABLISHED connections toward CONNECTION_HIGH_WATER.
Is there a way I can close all CLOSE_WAIT connections at that moment, so that the
connection count drops below CONNECTION_HIGH_WATER and we start responding to newly
established connections? Or any other suggestions? I have tried removing
CONNECTION_HIGH_WATER and relying on the FD limit alone, but that didn't work.
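[Editor's note: for readers following along, the options discussed in this thread are set roughly as below. This is a hedged sketch: the option names come from the thread (XNIO's `Options.CONNECTION_HIGH_WATER`/`CONNECTION_LOW_WATER` and Undertow's `UndertowOptions.NO_REQUEST_TIMEOUT`), but the port, handler, and values are placeholders, not the poster's configuration.]

```java
import io.undertow.Undertow;
import io.undertow.UndertowOptions;
import org.xnio.Options;

Undertow server = Undertow.builder()
        .addHttpListener(8080, "0.0.0.0")
        // Stop accepting once this many connections are open. Note that
        // CLOSE_WAIT sockets still count toward this limit until closed.
        .setSocketOption(Options.CONNECTION_HIGH_WATER, 10_000)
        // Resume accepting once the count drops back below this.
        .setSocketOption(Options.CONNECTION_LOW_WATER, 10_000)
        // Close connections that carry no request for this many milliseconds.
        .setServerOption(UndertowOptions.NO_REQUEST_TIMEOUT, 6000)
        .setHandler(exchange -> exchange.getResponseSender().send("ok"))
        .build();
server.start();
```

With this setup, connections abandoned by impatient clients are only reaped after NO_REQUEST_TIMEOUT elapses, which is why a burst of client-side closes can pin the count at the high-water mark.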
On Sun, Mar 1, 2020 at 7:47 AM Stan Rosenberg <stan.rosenberg@gmail.com> wrote:
On Sat, Feb 29, 2020 at 8:18 PM Nishant Kumar <nishantkumar35@gmail.com> wrote:
Thanks for the reply. I am running it under supervisord, and I have updated the open file
limit in the supervisord config. The problem seems to be the same as what @Carter
mentioned. It happens mostly during a sudden traffic spike, followed by a sudden increase
(~30k-300k) in TIME_WAIT sockets.
The changes in
https://github.com/xnio/xnio/pull/206/files#diff-23a6a7997705ea72e4016c11... are
likely to improve the exceptional case of exceeding the file descriptor limit. However, if
you're already setting the limit that high (e.g., in our case it was 795588), then
exceeding it is a symptom of not properly load-balancing your traffic; with that many
connections, you'd better have a ton of free RAM available.
--
Nishant Kumar
Bangalore, India
Mob: +91 80088 42030
Email: nishantkumar35@gmail.com
_______________________________________________
undertow-dev mailing list
undertow-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/undertow-dev