[undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat
R. Matt Barnett
barnett at rice.edu
Wed Aug 15 12:06:44 EDT 2018
Cool, thanks.
On 8/14/2018 11:27 PM, Stuart Douglas wrote:
> I have created https://issues.jboss.org/browse/XNIO-328.
>
> Stuart
>
> On Tue, Aug 14, 2018 at 6:49 AM R. Matt Barnett <barnett at rice.edu> wrote:
>
> Did you all ever open a ticket for this? If so, could you link it here
> so I can follow along?
>
>
> Thanks,
>
> Matt
>
>
> On 7/26/2018 9:11 PM, Jason Greene wrote:
>> Looks like we need to tweak the hash:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=07f4c90062f8fc7c8c26f8f95324cbe8fa3145a5
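>>
>> That commit makes connect() prefer even-numbered local ports (leaving
>> the odd ones for bind()), which explains why every source port in the
>> netstat dump is even.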
>>
>>
>>
>>
>> On Jul 26, 2018, at 7:13 PM, Stuart Douglas <sdouglas at redhat.com> wrote:
>>
>>> They are all even numbers :-(
>>>
>>> This does not play well with our hash if C1 is also even:
>>>
>>> (((C1 * 23) + P) * 23 + C2) % 8
>>>
>>> If C1 is even, then C1 * 23 is even, and since every remote port P
>>> here is also even, ((C1 * 23) + P) * 23 is always even. The parity of
>>> the whole expression is then fixed by C2 alone, so with an even number
>>> of IO threads the modulo can only ever land on half of them.
>>>
>>> The good news is that this can be worked around simply by using an odd
>>> number of IO threads, but we should probably revisit the hash itself.
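>>>
>>> A quick sketch of the simplified formula (not the actual XNIO code;
>>> the C1 and C2 values here are made up, and the port range is taken
>>> from the netstat dump below):
>>>
>>>     import java.util.Set;
>>>     import java.util.TreeSet;
>>>
>>>     // Demonstrates the parity trap: with an even C1 and only even
>>>     // remote ports, the hash reaches just half of an even-sized pool.
>>>     public class HashParityDemo {
>>>         public static void main(String[] args) {
>>>             int c1 = 123456;  // hypothetical (even) hash of the remote IP
>>>             int c2 = 424242;  // hypothetical hash of the local IP+port
>>>             Set<Integer> used = new TreeSet<>();
>>>             for (int p = 51556; p <= 51662; p += 2) {  // even ports only
>>>                 // floorMod in case a real hash overflows to negative
>>>                 used.add(Math.floorMod(((c1 * 23) + p) * 23 + c2, 8));
>>>             }
>>>             System.out.println(used);  // prints [0, 2, 4, 6]
>>>         }
>>>     }
>>>
>>> Run the same loop with % 7 instead of % 8 and every residue 0-6 shows
>>> up, which is why an odd IO thread count sidesteps the problem.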
>>>
>>> Stuart
>>>
>>> On Fri, Jul 27, 2018 at 4:34 AM R. Matt Barnett <barnett at rice.edu> wrote:
>>>
>>> Backlog setting is 1000.
>>>
>>> Is this what you are interested in from netstat? This was
>>> for ab with a -c of 50.
>>>
>>>
>>> [barnett at apigateway_test ~]$ java -jar undertow-test-0.1.0-jar-with-dependencies.jar &
>>> [1] 7329
>>> [barnett at apigateway_test ~]$ Jul 26, 2018 1:30:22 PM org.xnio.Xnio <clinit>
>>> INFO: XNIO version 3.3.8.Final
>>> Jul 26, 2018 1:30:23 PM org.xnio.nio.NioXnio <clinit>
>>> INFO: XNIO NIO Implementation Version 3.3.8.Final
>>>
>>> Server started on port 8080
>>> 1
>>> 2
>>> 3
>>> 4
>>> [barnett at apigateway_test ~]$ netstat -t | grep apigateway_loadge | grep ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51580 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51614 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51622 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51626 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51612 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51578 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51636 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51616 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51582 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51556 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51588 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51558 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51586 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51648 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51632 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51652 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51654 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51574 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51640 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51564 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51590 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51610 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51594 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51592 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51568 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51620 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51598 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51600 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51584 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51630 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51596 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51566 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51650 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51656 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51624 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51662 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51642 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51604 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51608 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51634 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51658 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51628 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51660 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51572 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51606 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51602 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51638 ESTABLISHED
>>> tcp6       0      0 apigateway_tes:webcache apigateway_loadge:51570 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51618 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51646 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51644 ESTABLISHED
>>> tcp6      97      0 apigateway_tes:webcache apigateway_loadge:51576 ESTABLISHED
>>>
>>>
>>> On 7/25/2018 9:23 PM, Jason Greene wrote:
>>>> Could you post a netstat output so we can see what port
>>>> numbers your host is picking?
>>>>
>>>> Also, is your backlog setting low, by chance?
>>>>
>>>> On Jul 25, 2018, at 6:24 PM, Stuart Douglas <sdouglas at redhat.com> wrote:
>>>>
>>>>> The mapping is done by a hash of the remote IP+port. It
>>>>> sounds like maybe this machine is allocating ports in a
>>>>> way that does not map well to our hash.
>>>>>
>>>>> Because the remote IP is always the same, it is really only the
>>>>> port that comes into play. The algorithm is in
>>>>> org.xnio.nio.QueuedNioTcpServer#handleReady, and in this case it
>>>>> simplifies to:
>>>>>
>>>>> (((C1 * 23) + P) * 23 + C2) % 8
>>>>>
>>>>> Where C1 is a hash of the remote IP, and C2 is a hash of
>>>>> the local IP+port combo.
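>>>>>
>>>>> In other words, conceptually something like this (a sketch of the
>>>>> simplified formula, not the actual handleReady code):
>>>>>
>>>>>     import java.net.InetSocketAddress;
>>>>>
>>>>>     class IoThreadPicker {
>>>>>         // Pick an IO-thread index from the accepted connection's
>>>>>         // remote and local addresses.
>>>>>         static int pick(InetSocketAddress remote, InetSocketAddress local,
>>>>>                         int ioThreads) {
>>>>>             int c1 = remote.getAddress().hashCode(); // C1: remote IP hash
>>>>>             int c2 = local.hashCode();               // C2: local IP+port hash
>>>>>             int p  = remote.getPort();               // P: remote port
>>>>>             return Math.floorMod(((c1 * 23) + p) * 23 + c2, ioThreads);
>>>>>         }
>>>>>     }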
>>>>>
>>>>> Stuart
>>>>>
>>>>> On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett <barnett at rice.edu> wrote:
>>>>>
>>>>> I did. I set the concurrency level of ab to 128. I
>>>>> still see only 4 overlaps:
>>>>>
>>>>> $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar &
>>>>>
>>>>> Server started on port 8080
>>>>> 1
>>>>> 2
>>>>> 3
>>>>> 4
>>>>>
>>>>> $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED
>>>>> 126
>>>>>
>>>>>
>>>>> What is the algorithm for mapping connections to IO
>>>>> threads? As a new Undertow user I had assumed round-robin,
>>>>> but it sounds like that is not the case.
>>>>>
>>>>>
>>>>> -- Matt
>>>>>
>>>>>
>>>>> On 7/25/2018 11:49 AM, Bill O'Neil wrote:
>>>>>> Did you try setting the concurrency level much higher
>>>>>> than 8, as I suggested earlier? You probably have
>>>>>> multiple connections assigned to the same IO thread.
>>>>>>
>>>>>> On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett <barnett at rice.edu> wrote:
>>>>>>
>>>>>> Corrected test to resolve test/set race.
>>>>>>
>>>>>>
>>>>>> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa
>>>>>>
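>>>>>> For anyone reading along without opening the gist, the test is
>>>>>> roughly shaped like this (a hypothetical reconstruction, not the
>>>>>> actual file; the sleep duration is a guess):
>>>>>>
>>>>>>     import io.undertow.Undertow;
>>>>>>     import java.util.concurrent.atomic.AtomicInteger;
>>>>>>
>>>>>>     // Each request briefly blocks its IO thread, and every new
>>>>>>     // high-water mark of in-flight requests is printed, so the
>>>>>>     // output shows how many IO threads ever run handlers at once.
>>>>>>     public class ConcurrencyProbe {
>>>>>>         static final AtomicInteger inFlight = new AtomicInteger();
>>>>>>         static final AtomicInteger highWater = new AtomicInteger();
>>>>>>
>>>>>>         public static void main(String[] args) {
>>>>>>             Undertow server = Undertow.builder()
>>>>>>                     .addHttpListener(8080, "0.0.0.0")
>>>>>>                     .setIoThreads(8)
>>>>>>                     .setHandler(exchange -> {
>>>>>>                         int now = inFlight.incrementAndGet();
>>>>>>                         if (highWater.getAndAccumulate(now, Math::max) < now) {
>>>>>>                             System.out.println(now); // new maximum overlap
>>>>>>                         }
>>>>>>                         Thread.sleep(100); // hold the IO thread to force overlap
>>>>>>                         inFlight.decrementAndGet();
>>>>>>                         exchange.getResponseSender().send("ok");
>>>>>>                     })
>>>>>>                     .build();
>>>>>>             server.start();
>>>>>>             System.out.println("Server started on port 8080");
>>>>>>         }
>>>>>>     }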
>>>>>>
>>>>>> I've also discovered this morning that I *can* see 1-8
>>>>>> printed on Red Hat when I generate load using ab from
>>>>>> Windows, but only 1-4 when running ab on Red Hat (both
>>>>>> locally and from a remote server). I'm wondering if there
>>>>>> are some connection-reuse shenanigans going on. My
>>>>>> assumption was that the -c 8 parameter means "make 8
>>>>>> sockets", but maybe not. I'll dig in and report back.
>>>>>>
>>>>>>
>>>>>> -- Matt
>>>>>>
>>>>>>
>>>>>> On 7/24/2018 6:56 PM, R. Matt Barnett wrote:
>>>>>> > Hello,
>>>>>> >
>>>>>> > I'm experiencing an Undertow performance issue that I fail to
>>>>>> > understand. I am able to reproduce the issue with the code linked
>>>>>> > below. The problem is that on Red Hat (and not Windows) I'm unable
>>>>>> > to concurrently process more than 4 overlapping requests, even
>>>>>> > with 8 configured IO threads. For example, if I run the following
>>>>>> > program (1 file, 55 lines):
>>>>>> >
>>>>>> > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5
>>>>>> >
>>>>>> > ... on Red Hat and then send requests to the server using Apache
>>>>>> > Benchmark...
>>>>>> >
>>>>>> > > ab -n 1000 -c 8 localhost:8080/
>>>>>> >
>>>>>> > I see the following output from the Undertow process:
>>>>>> >
>>>>>> > Server started on port 8080
>>>>>> >
>>>>>> > 1
>>>>>> > 2
>>>>>> > 3
>>>>>> > 4
>>>>>> >
>>>>>> > I believe this demonstrates that only 4 requests are ever
>>>>>> > processed in parallel. I would expect 8. In fact, when I run the
>>>>>> > same experiment on Windows I see the expected output of
>>>>>> >
>>>>>> > Server started on port 8080
>>>>>> > 1
>>>>>> > 2
>>>>>> > 3
>>>>>> > 4
>>>>>> > 5
>>>>>> > 6
>>>>>> > 7
>>>>>> > 8
>>>>>> >
>>>>>> > Any thoughts as to what might explain this behavior?
>>>>>> >
>>>>>> > Best,
>>>>>> >
>>>>>> > Matt
>>>>>> >