<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>Looks like we need to tweak the hash:</div><div><br></div><a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=07f4c90062f8fc7c8c26f8f95324cbe8fa3145a5">https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=07f4c90062f8fc7c8c26f8f95324cbe8fa3145a5</a><div><br></div><div><br><div><br>On Jul 26, 2018, at 7:13 PM, Stuart Douglas &lt;<a href="mailto:sdouglas@redhat.com">sdouglas@redhat.com</a>&gt; wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr">They are all even numbers :-(<div><br></div><div>This does not play well with our hash if C1 is also even:<div><br></div><div>(((C1 * 23) + P) * 23 + C2) % 8<br></div><div><br></div><div>If C1 is even then C1 * 23 is even, and since every port P is also even, ((C1 * 23) + P) * 23 is even as well. Depending on the value of C2 the result is then always even or always odd, so with an even number of IO threads you will only ever allocate connections to half of them.</div><div><br></div><div>The good news is that this can easily be worked around by using an odd number of IO threads, but we should probably revisit the hash itself.</div><div><br></div><div>Stuart</div><div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Jul 27, 2018 at 4:34 AM R. Matt Barnett &lt;<a href="mailto:barnett@rice.edu">barnett@rice.edu</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
    <p>Backlog setting is 1000.</p>
    <p>Is this what you are interested in from netstat?  This was for ab
      with a -c of 50.<br>
    </p>
    <p><br>
    </p>
    <p>[barnett@apigateway_test ~]$ java -jar
      undertow-test-0.1.0-jar-with-dependencies.jar &amp;<br>
      [1] 7329<br>
      [barnett@apigateway_test ~]$ Jul 26, 2018 1:30:22 PM org.xnio.Xnio
      &lt;clinit&gt;<br>
      INFO: XNIO version 3.3.8.Final<br>
      Jul 26, 2018 1:30:23 PM org.xnio.nio.NioXnio &lt;clinit&gt;<br>
      INFO: XNIO NIO Implementation Version 3.3.8.Final<br>
      <br>
      <br>
      Server started on port 8080<br>
      1<br>
      2<br>
      3<br>
      4<br>
      [barnett@apigateway_test ~]$ netstat -t | grep apigateway_loadge |
      grep ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51580 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51614 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51622 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51626 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51612 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51578 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51636 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51616 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51582 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51556 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51588 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51558 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51586 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51648 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51632 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51652 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51654 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51574 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51640 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51564 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51590 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51610 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51594 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51592 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51568 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51620 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51598 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51600 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51584 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51630 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51596 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51566 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51650 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51656 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51624 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51662 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51642 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51604 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51608 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51634 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51658 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51628 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51660 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51572 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51606 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51602 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51638 ESTABLISHED<br>
      tcp6       0      0 apigateway_tes:webcache
      apigateway_loadge:51570 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51618 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51646 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51644 ESTABLISHED<br>
      tcp6      97      0 apigateway_tes:webcache
      apigateway_loadge:51576 ESTABLISHED<br>
    </p>
    <br>
    <div class="m_-2588482696608169146moz-cite-prefix">On 7/25/2018 9:23 PM, Jason Greene
      wrote:<br>
    </div>
    <blockquote type="cite">
      
      <div><span></span></div>
      <div>Could you post a netstat output so we can see what port
        numbers your host is picking?
        <div><br>
        </div>
        <div>
          <div>Also is your backlog setting low by chance? </div>
          <div><br>
          </div>
          <div>On Jul 25, 2018, at 6:24 PM, Stuart Douglas &lt;<a href="mailto:sdouglas@redhat.com" target="_blank">sdouglas@redhat.com</a>&gt;
            wrote:<br>
            <br>
          </div>
          <blockquote type="cite">
            <div>
              <div dir="ltr">The mapping is done by a hash of the remote
                IP+port. It sounds like maybe this machine is allocating
                ports in a way that does not map well to our hash. 
                <div><br>
                </div>
                <div>Because the remote IP is the same, it is really only
                  the port that has any effect. The algorithm is
                  in org.xnio.nio.QueuedNioTcpServer#handleReady and in
                  this case simplifies to:</div>
                <div><br>
                </div>
                <div>(((C1 * 23) + P) * 23 + C2) % 8</div>
                <div><br>
                </div>
                <div>Where C1 is a hash of the remote IP, and C2 is a
                  hash of the local IP+port combo. </div>
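                <div><br>
                </div>
                <div>A quick sketch of the effect (the C1 and C2 values here are hypothetical stand-ins, and the even client ports mirror the netstat dump; this illustrates the simplified formula above, not the actual XNIO source):</div>

```java
// Illustration only: hypothetical C1/C2 values plugged into the
// simplified hash (((C1 * 23) + P) * 23 + C2) % 8 from the thread.
public class HashParityDemo {
    static int ioThread(int c1, int port, int c2) {
        // Math.floorMod keeps the result in 0..7 even if the
        // intermediate hash ever went negative.
        return Math.floorMod(((c1 * 23) + port) * 23 + c2, 8);
    }

    public static void main(String[] args) {
        int c1 = 51342; // assumed even hash of the remote IP (hypothetical)
        int c2 = 7;     // assumed hash of the local IP+port combo (hypothetical)
        boolean[] seen = new boolean[8];
        for (int i = 0; i != 54; i++) {
            int port = 51556 + 2 * i; // all-even client ports, as in the netstat output
            seen[ioThread(c1, port, c2)] = true;
        }
        StringBuilder hit = new StringBuilder();
        for (int t = 0; t != 8; t++) {
            if (seen[t]) hit.append(t).append(' ');
        }
        // With C1 even and every port even, the parity of the hash is
        // fixed by C2, so only half of the 8 slots are ever selected.
        System.out.println("threads hit: " + hit.toString().trim()); // threads hit: 1 3 5 7
    }
}
```

                <div>Flipping the parity of C2 shifts the selection to the four even-numbered slots instead; either way, half the IO threads never receive a connection.</div>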
                <div>
                  <div>
                    <div><br>
                    </div>
                    <div>Stuart</div>
                  </div>
                </div>
              </div>
              <br>
              <div class="gmail_quote">
                <div dir="ltr">On Thu, Jul 26, 2018 at 3:52 AM R. Matt
                  Barnett &lt;<a href="mailto:barnett@rice.edu" target="_blank">barnett@rice.edu</a>&gt;
                  wrote:<br>
                </div>
                <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                  <div text="#000000" bgcolor="#FFFFFF">
                    <p>I did. I set the concurrency level of ab to 128.
                      I still see only 4 overlaps:</p>
                    <blockquote>
                      <p>$ java -jar
                        undertow-test-0.1.0-jar-with-dependencies.jar
                        &amp;<br>
                        <br>
                        Server started on port 8080<br>
                        1<br>
                        2<br>
                        3<br>
                        4<br>
                      </p>
                      <p>$ netstat -t | grep apigateway_loadge | grep -c
                        ESTABLISHED<br>
                        126</p>
                    </blockquote>
                    <p><br>
                    </p>
                    <p>What is the algorithm for mapping connections to
                      IO threads?  As a new Undertow user I had assumed
                      round robin, but it sounds like this is not the
                      case.</p>
                    <p><br>
                    </p>
                    <p>-- Matt<br>
                    </p>
                    <br>
                    <div class="m_-2588482696608169146m_-3278563139687851367moz-cite-prefix">On
                      7/25/2018 11:49 AM, Bill O&#39;Neil wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr">Did you try setting the concurrency
                        level much higher than 8 like I suggested
                        earlier? You are probably having multiple
                        connections assigned to the same IO threads.
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote">On Wed, Jul 25, 2018
                            at 12:26 PM, R. Matt Barnett <span dir="ltr">&lt;<a href="mailto:barnett@rice.edu" target="_blank">barnett@rice.edu</a>&gt;</span>
                            wrote:<br>
                            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Corrected
                              test to resolve test/set race.<br>
                              <br>
                              <br>
                              <a href="https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa" rel="noreferrer" target="_blank">https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa</a><br>
                              <br>
                              <br>
                              I&#39;ve also discovered this morning that I
                              *can* see 1-8 printed on Red <br>
                              Hat when I generate load using ab from
                              Windows, but only 1-4 when <br>
                              running ab on Red Hat (both locally and
                              from a remote server).  I&#39;m <br>
                              wondering if perhaps there is some sort of
                              connection reuse shenanigans <br>
                              going on.  My assumption of the use of the
                              -c 8 parameter was &quot;make 8 <br>
                              sockets&quot; but maybe not.  I&#39;ll dig in and
                              report back.<br>
                              <span class="m_-2588482696608169146m_-3278563139687851367HOEnZb"><font color="#888888"><br>
                                  <br>
                                  -- Matt<br>
                                </font></span>
                              <div class="m_-2588482696608169146m_-3278563139687851367HOEnZb">
                                <div class="m_-2588482696608169146m_-3278563139687851367h5"><br>
                                  <br>
                                  On 7/24/2018 6:56 PM, R. Matt Barnett
                                  wrote:<br>
                                  &gt; Hello,<br>
                                  &gt;<br>
                                  &gt; I&#39;m experiencing an Undertow
                                  performance issue I fail to
                                  understand.  I<br>
                                  &gt; am able to reproduce the issue
                                   with the code linked below. The
                                  problem<br>
                                  &gt; is that on Red Hat (and not
                                  Windows) I&#39;m unable to concurrently
                                  process<br>
                                  &gt; more than 4 overlapping requests
                                  even with 8 configured IO Threads.<br>
                                  &gt; For example, if I run the
                                  following program (1 file, 55 lines):<br>
                                  &gt;<br>
                                  &gt; <a href="https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5" rel="noreferrer" target="_blank">https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5</a><br>
                                  &gt;<br>
                                  &gt; ... on Red Hat and then send
                                  requests to the server using Apache<br>
                                  &gt; Benchmark...<br>
                                  &gt;<br>
                                  &gt;       &gt; ab -n 1000 -c 8
                                  localhost:8080/<br>
                                  &gt;<br>
                                  &gt; I see the following output from
                                  the Undertow process:<br>
                                  &gt;<br>
                                  &gt;       Server started on port 8080<br>
                                  &gt;<br>
                                  &gt;       1<br>
                                  &gt;       2<br>
                                  &gt;       3<br>
                                  &gt;       4<br>
                                  &gt;<br>
                                  &gt; I believe this demonstrates that
                                  only 4 requests are ever processed in<br>
                                  &gt; parallel.  I would expect 8.  In
                                  fact, when I run the same experiment
                                  on<br>
                                  &gt; Windows I see the expected output
                                  of<br>
                                  &gt;<br>
                                  &gt;       Server started on port 8080<br>
                                  &gt;       1<br>
                                  &gt;       2<br>
                                  &gt;       3<br>
                                  &gt;       4<br>
                                  &gt;       5<br>
                                  &gt;       6<br>
                                  &gt;       7<br>
                                  &gt;       8<br>
                                  &gt;<br>
                                  &gt; Any thoughts as to what might
                                  explain this behavior?<br>
                                  &gt;<br>
                                  &gt; Best,<br>
                                  &gt;<br>
                                  &gt; Matt<br>
                                  &gt;<br>
                                  &gt;
                                  _______________________________________________<br>
                                  &gt; undertow-dev mailing list<br>
                                  &gt; <a href="mailto:undertow-dev@lists.jboss.org" target="_blank">undertow-dev@lists.jboss.org</a><br>
                                  &gt; <a href="https://lists.jboss.org/mailman/listinfo/undertow-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/mailman/listinfo/undertow-dev</a><br>
                                  <br>
                                  </div>
                              </div>
                            </blockquote>
                          </div>
                          <br>
                        </div>
                      </div>
                    </blockquote>
                    <br>
                  </div>
                  </blockquote>
              </div>
            </div>
          </blockquote>
          <blockquote type="cite">
            <div><span>_______________________________________________</span><br>
              <span>undertow-dev mailing list</span><br>
              <span><a href="mailto:undertow-dev@lists.jboss.org" target="_blank">undertow-dev@lists.jboss.org</a></span><br>
              <span><a href="https://lists.jboss.org/mailman/listinfo/undertow-dev" target="_blank">https://lists.jboss.org/mailman/listinfo/undertow-dev</a></span></div>
          </blockquote>
        </div>
      </div>
    </blockquote>
    <br>
  </div>

</blockquote></div>
</div></blockquote></div></body></html>