So just to be clear, am I correct in my understanding that it is safe to invoke any method on the Sender instance returned by getResponseSender() from a worker thread without any extra threading considerations?  Can you confirm this is also true for the HeaderMap instance returned by getResponseHeaders(), and for invocations of setStatusCode(...)?

Thanks.


On 7/25/2018 6:31 PM, Stuart Douglas wrote:


On Thu, Jul 26, 2018 at 5:27 AM R. Matt Barnett <barnett@rice.edu> wrote:

I've been able to observe 1-8 printed on Red Hat by adding the following statements to my handler (and setting the worker thread pool size to 8):


@Override
public void handleRequest(HttpServerExchange httpServerExchange) throws Exception
{
    if (httpServerExchange.isInIoThread()) {
        httpServerExchange.dispatch(this);
        return;
    }
...
}
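(For context, the worker pool size was set through the Undertow builder, roughly as in the sketch below; the listener details and handler name are placeholders rather than the exact code from the gist.)

Undertow server = Undertow.builder()
        .addHttpListener(8080, "localhost")   // placeholder listener
        .setIoThreads(8)                      // IO (selector) threads
        .setWorkerThreads(8)                  // worker pool used by dispatch()
        .setHandler(new MyHandler())          // hypothetical handler containing handleRequest above
        .build();
server.start();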
I have a few questions about this technique though:

1.) How are dispatch actions mapped onto worker threads? New connections were not mapped to available idle IO threads, so is it possible that dispatches also won't be mapped to available idle worker threads, but will instead be queued for currently busy threads?

IO threads are tied to the connection. Once a connection has been accepted, only that IO thread will be used to service it. This avoids contention from having a larger number of IO threads waiting on a single selector. The worker thread pool is basically just a normal executor that will run tasks in a FIFO manner.
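So a dispatch is essentially just a task submission. As a sketch (the executor here is a hypothetical caller-supplied one, not something Undertow provides), you can even pass your own executor to dispatch() instead of relying on the default worker pool:

private final ExecutorService myExecutor = Executors.newFixedThreadPool(8); // hypothetical executor

@Override
public void handleRequest(HttpServerExchange exchange) throws Exception {
    if (exchange.isInIoThread()) {
        // dispatch() just hands the rest of the request to an executor;
        // with no executor argument it would use the worker pool instead
        exchange.dispatch(myExecutor, this);
        return;
    }
    // from here on we are running on a thread from myExecutor
}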
 
2.) The Undertow documentation states that HttpServerExchange is not thread-safe. However, it also states that dispatch(...) has happens-before semantics with respect to the worker thread accessing httpServerExchange, which would seem to make it OK to read from httpServerExchange in the worker thread.  Assuming that an IO thread will be responsible for writing the HTTP response back to the client, what steps do I need to take in the body of handleRequest to ensure that my writes to httpServerExchange in the worker thread are observed by the IO thread responsible for transmitting the response to the client? Is invoking httpServerExchange.endExchange() in the worker thread as the final statement sufficient?

Not all writes are done from the IO thread. For instance, if you use blocking IO and write to a stream, then the writes are done from the worker.
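As a sketch (assuming the exchange has already been dispatched to a worker thread), the blocking variant looks like this, and all of the writes happen on the worker:

exchange.startBlocking();                       // switch the exchange to blocking mode
OutputStream out = exchange.getOutputStream();  // only valid after startBlocking()
out.write("hello from the worker".getBytes(StandardCharsets.UTF_8));
out.close();                                    // flush and close when done writing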

If you use the Sender to perform async IO then the initial write is done from the original thread, and the IO thread is only involved if the response is too large to just write out immediately. Even in that case, the Sender will take care of the thread safety aspects, as the underlying SelectionKey will not have its interest ops set until after the current stack has returned.
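As a sketch, the usual Sender pattern is just:

// send() writes from the current thread when it can, and only falls back to the
// IO thread if the response cannot be written out immediately
exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
exchange.getResponseSender().send("hello", IoCallback.END_EXCHANGE);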

Basically, if you call dispatch() or perform an action that requires async IO, nothing happens immediately; it just sets a flag in the HttpServerExchange. Once the call stack returns (i.e. the current thread is done), one of three things will happen:
- If dispatch was called, the dispatch task will be run in an executor
- If async IO was required, the underlying SelectionKey will have its interest ops modified, so the IO thread can perform the IO
- If neither of the above happened, the exchange is ended (a small sketch of this case follows below)
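For example (just a sketch, and the handler name is made up), a handler that does neither simply ends the exchange when handleRequest returns:

public class StatusOnlyHandler implements HttpHandler {
    @Override
    public void handleRequest(HttpServerExchange exchange) {
        exchange.setStatusCode(204); // no dispatch(), no async IO, no body:
                                     // the exchange is ended when this method returns
    }
}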

Stuart

 

-- Matt

On 7/25/2018 11:26 AM, R. Matt Barnett wrote:
Corrected test to resolve test/set race.


https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa


I've also discovered this morning that I *can* see 1-8 printed on Red 
Hat when I generate load using ab from Windows, but only 1-4 when 
running ab on Red Hat (both locally and from a remote server).  I'm 
wondering if perhaps there is some sort of connection reuse shenanigans 
going on.  My assumption was that the -c 8 parameter means "make 8 
sockets", but maybe not.  I'll dig in and report back.


-- Matt


On 7/24/2018 6:56 PM, R. Matt Barnett wrote:
Hello,

I'm experiencing an Undertow performance issue I fail to understand.  I
am able to reproduce the issue with the code linked below. The problem
is that on Red Hat (and not Windows) I'm unable to concurrently process
more than 4 overlapping requests, even with 8 configured IO threads.
For example, if I run the following program (1 file, 55 lines):

https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5

... on Red Hat and then send requests to the server using Apache
Benchmark...

      > ab -n 1000 -c 8 localhost:8080/

I see the following output from the Undertow process:

      Server started on port 8080

      1
      2
      3
      4

I believe this demonstrates that only 4 requests are ever processed in
parallel.  I would expect 8.  In fact, when I run the same experiment on
Windows I see the expected output of

      Server started on port 8080
      1
      2
      3
      4
      5
      6
      7
      8

Any thoughts as to what might explain this behavior?

Best,

Matt


_______________________________________________
undertow-dev mailing list
undertow-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/undertow-dev