<div dir="ltr"><div dir="ltr"><div>Hi,</div><div>I am trying to tune Undertow for our
high-traffic application. As part of that, I am running a load test
from a client host (different from the host where Undertow is running)
that calls endpoints which write multiple 1-1.5 MB files to
disk.</div><div><br></div><div>These are the two scenarios in which I run the load test:</div><div>1)
Single client with 100 concurrent threads; each thread uploads files
of size > 1 MB and repeats the process for 2 minutes.</div><div>2) Two
clients, each with 50 concurrent threads; each thread uploads files of
size > 1 MB and repeats the process for 2 minutes.</div><div><br></div><div>Observations:</div><div>In the first scenario, I am seeing a latency of about 5 secs per upload.</div><div>In the second scenario, I am seeing a latency of only about 2.5 secs per upload.</div><div><br></div><div>I am calculating the latency at the client.<br></div><div>I've checked the number of connections open at the server in both cases, and it is around 100.</div><div><br></div><div>Is
there any setting on the server side that would explain this behavior?
Since the number of connections is the same, shouldn't the latency be
more or less the same in both scenarios? <br></div><div><br></div><div>I have 32 IO threads and 160 worker threads, with a backlog of 1000, configured on the server.</div><div>Thank you for all the help.</div><div><br></div><div>-Ravi<div class="gmail-adL"><br></div></div></div></div>
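For reference, the thread and backlog settings described above can be expressed with Undertow's builder API. This is a minimal sketch under the assumption that the server is configured programmatically; the port, host, and handler are placeholders, not part of the original setup:

```java
import io.undertow.Undertow;
import io.undertow.util.Headers;
import org.xnio.Options;

public class ServerConfigSketch {
    public static void main(String[] args) {
        // Assumed settings from the description: 32 IO threads,
        // 160 worker threads, and a TCP accept backlog of 1000.
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")        // placeholder port/host
                .setIoThreads(32)
                .setWorkerThreads(160)
                .setSocketOption(Options.BACKLOG, 1000)  // listen-queue depth
                .setHandler(exchange -> {                // placeholder handler
                    exchange.getResponseHeaders()
                            .put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("ok");
                })
                .build();
        server.start();
    }
}
```

Note that the backlog only bounds how many pending TCP connections the kernel queues before accept; once a connection is accepted, upload throughput is governed by the IO and worker thread pools.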