On Thu, Dec 1, 2016 at 8:37 PM, Pere Ferrera <ferrerabertran(a)gmail.com> wrote:
Hi Stuart,
Thanks for the prompt reply. I am developing a critical high-performance web
server and I am unsure whether I could hit the issue of many connections
sitting in the TIME_WAIT state (I came across it through a badly designed
benchmark). I know this can happen under various obscure scenarios but it
mostly doesn't if things are set up more or less correctly... I just want to
be on the safe side and be able to set SO_LINGER to 0 if I ever need to.
It looks like SO_LINGER support is not implemented in XNIO, so it is
not possible to set at the moment. It would be easy to add though.
For 2), I gave it a try and it seems to work. I see the semantics are
similar to those of the DoSFilter in Jetty: it rate-limits the number of
requests per active connection, not overall, except that there is a
"pending" queue. What I like about the DoSFilter in Jetty is that it can also
kill active requests if they take more than a certain number of
milliseconds. Is there something like that in Undertow, or how would you
implement it?
It is a global rate limit, not a per connection one. If you are using
HTTP/2 and have multiple requests per connection you can use the
MAX_CONCURRENT_STREAMS option to limit the number of requests per
connection.
Looks like there is a bug in the implementation with regard to the queue:
it should support a queue size of zero, meaning no queuing, but this is
currently interpreted as an unbounded queue instead. For now you could
set the size to 1.
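For example, wiring both limits together could look roughly like this (a
rough sketch, not tested; treat the HTTP2_SETTINGS_MAX_CONCURRENT_STREAMS
option name and the RequestLimitingHandler(maxRequests, queueSize, handler)
constructor as assumptions to check against your Undertow version):

    import io.undertow.Undertow;
    import io.undertow.UndertowOptions;
    import io.undertow.server.handlers.RequestLimitingHandler;

    public class RateLimitExample {
        public static void main(String[] args) {
            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0")
                    // per-connection limit on concurrent HTTP/2 streams
                    .setServerOption(UndertowOptions.HTTP2_SETTINGS_MAX_CONCURRENT_STREAMS, 20)
                    // global limit: at most 100 in-flight requests, queue size 1 (see note above)
                    .setHandler(new RequestLimitingHandler(100, 1,
                            exchange -> exchange.getResponseSender().send("hello")))
                    .build();
            server.start();
        }
    }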
Killing requests that take more than X ms is somewhat more
problematic. You can close the socket easily enough, but that is no
guarantee that the thread will stop what it is doing (you can interrupt
the thread, but if you are doing anything async that is very problematic,
as the thread may no longer be associated with the request).
A simple implementation might look like:
    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        // Schedule a task on this exchange's IO thread; if the exchange is
        // still not complete after 1 second, forcibly close the connection.
        exchange.getIoThread().executeAfter(new Runnable() {
            @Override
            public void run() {
                if (!exchange.isComplete()) {
                    IoUtils.safeClose(exchange.getConnection());
                }
            }
        }, 1000, TimeUnit.MILLISECONDS);
        // 'next' is the wrapped handler that actually processes the request
        next.handleRequest(exchange);
    }
From a performance point of view this is not great, as timers can be
relatively expensive. A better approach may be to have a task that
executes every X/10 ms, and just add all the requests to a queue
(along with their start time). The timer thread can then poll the queue
and kill any that have run for too long.
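A rough sketch of that approach (nothing here is an existing Undertow
handler; the class and field names are just illustrative):

    import io.undertow.server.HttpHandler;
    import io.undertow.server.HttpServerExchange;
    import org.xnio.IoUtils;

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TimeoutSweepingHandler implements HttpHandler {

        private static final long TIMEOUT_MS = 1000;

        private final HttpHandler next;
        private final ConcurrentLinkedQueue<TimedExchange> queue = new ConcurrentLinkedQueue<>();
        private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

        public TimeoutSweepingHandler(HttpHandler next) {
            this.next = next;
            // Sweep roughly ten times per timeout period.
            sweeper.scheduleAtFixedRate(this::sweep, TIMEOUT_MS / 10, TIMEOUT_MS / 10, TimeUnit.MILLISECONDS);
        }

        @Override
        public void handleRequest(HttpServerExchange exchange) throws Exception {
            // Record the deadline for this request, then continue the chain.
            queue.add(new TimedExchange(exchange, System.currentTimeMillis() + TIMEOUT_MS));
            next.handleRequest(exchange);
        }

        private void sweep() {
            long now = System.currentTimeMillis();
            TimedExchange head;
            // Entries are added in deadline order, so only the head can be overdue.
            while ((head = queue.peek()) != null && head.deadline <= now) {
                queue.poll();
                if (!head.exchange.isComplete()) {
                    IoUtils.safeClose(head.exchange.getConnection());
                }
            }
        }

        private static final class TimedExchange {
            final HttpServerExchange exchange;
            final long deadline;

            TimedExchange(HttpServerExchange exchange, long deadline) {
                this.exchange = exchange;
                this.deadline = deadline;
            }
        }
    }

Closing the connection from the sweeper thread is easy enough, but as noted
above it won't stop a worker thread that is still busy with the request.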
Stuart
Thanks,
On Wed, Nov 30, 2016 at 10:23 PM, Stuart Douglas <sdouglas(a)redhat.com>
wrote:
>
> 1) Why do you need to set this? In general Undertow won't close the
> socket until all messages have been sent anyway.
>
> 2) Do you mean limit the number of active requests, or limit the
> number of bytes per second that can be sent?
>
> io.undertow.server.handlers.RequestLimitingHandler can limit the
> number of active requests, while
> io.undertow.server.handlers.ResponseRateLimitingHandler can be used to
> limit the rate data is sent on a connection.
>
> Stuart
>
> On Wed, Nov 30, 2016 at 9:53 PM, Pere Ferrera <ferrerabertran(a)gmail.com>
> wrote:
> > Hi there,
> > I have two questions: 1) How can I configure the underlying socket
> > parameter
> > SO_LINGER using the Undertow API ? and 2) Is there something that I can
> > use
> > to rate-limit requests issued to an Undertow server ? (something similar
> > to
> > the DoS Filter in Jetty)
> >
> > Thanks,
> >
> > _______________________________________________
> > undertow-dev mailing list
> > undertow-dev(a)lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/undertow-dev