<div dir="ltr">The mapping is done by a hash of the remote IP+port. It sounds like maybe this machine is allocating ports in a way that does not map well to our hash. <div><br></div><div>Because the remote IP is the same it is really only the port that comes into effect. The algorithm is in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would simplify down to:</div><div><br></div><div>(((C1 * 23) + P) * 23 + C2) % 8</div><div><br></div><div>Where C1 is a hash of the remote IP, and C2 is a hash of the local IP+port combo. </div><div><div><div><br></div><div>Stuart</div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett &lt;<a href="mailto:barnett@rice.edu">barnett@rice.edu</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
    <p>I did. I set the concurrency level of ab to 128. I still see only
      4 overlaps:</p>
    <blockquote>
      <p>$ java -jar undertow-test-0.1.0-jar-with-dependencies.jar &amp;<br>
        <br>
        Server started on port 8080<br>
        1<br>
        2<br>
        3<br>
        4<br>
      </p>
      <p>$ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED<br>
        126</p>
    </blockquote>
    <p><br>
    </p>
    <p>What is the algorithm for mapping connections to IO threads?  As
      a new Undertow user I had assumed round robin, but it sounds like
      this is not the case.</p>
    <p><br>
    </p>
    <p>-- Matt<br>
    </p>
    <br>
    <div class="m_-3278563139687851367moz-cite-prefix">On 7/25/2018 11:49 AM, Bill O&#39;Neil
      wrote:<br>
    </div>
    <blockquote type="cite">
      
      <div dir="ltr">Did you try setting the concurrency level much
        higher than 8 like I suggested earlier? You are probably having
        multiple connections assigned to the same IO threads.<input name="virtru-metadata" value="{&quot;email-policy&quot;:{&quot;state&quot;:&quot;closed&quot;,&quot;expirationUnit&quot;:&quot;days&quot;,&quot;disableCopyPaste&quot;:false,&quot;disablePrint&quot;:false,&quot;disableForwarding&quot;:false,&quot;expires&quot;:false,&quot;isManaged&quot;:false},&quot;attachments&quot;:{},&quot;compose-window&quot;:{&quot;secure&quot;:false}}" type="hidden">
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Wed, Jul 25, 2018 at
            12:26 PM, R. Matt Barnett <span dir="ltr">&lt;<a href="mailto:barnett@rice.edu" target="_blank">barnett@rice.edu</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Corrected
              test to resolve test/set race.<br>
              <br>
              <br>
              <a href="https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa" rel="noreferrer" target="_blank">https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa</a><br>
              <br>
              <br>
              I&#39;ve also discovered this morning that I *can* see 1-8
              printed on Red <br>
              Hat when I generate load using ab from Windows, but only
              1-4 when <br>
              running ab on Red Hat (both locally and from a remote
              server).  I&#39;m <br>
              wondering if perhaps there is some sort of connection
              reuse shenanigans <br>
              going on.  My assumption was that the -c 8 parameter meant &quot;make 8 <br>
              sockets&quot;, but maybe not.  I&#39;ll dig in and report back.<br>
              <span class="m_-3278563139687851367HOEnZb"><font color="#888888"><br>
                  <br>
                  -- Matt<br>
                </font></span>
              <div class="m_-3278563139687851367HOEnZb">
                <div class="m_-3278563139687851367h5"><br>
                  <br>
                  On 7/24/2018 6:56 PM, R. Matt Barnett wrote:<br>
                  &gt; Hello,<br>
                  &gt;<br>
                  &gt; I&#39;m experiencing an Undertow performance issue I
                  fail to understand.  I<br>
                  &gt; am able to reproduce the issue with the code
                   linked below. The problem<br>
                  &gt; is that on Red Hat (and not Windows) I&#39;m unable
                  to concurrently process<br>
                  &gt; more than 4 overlapping requests even with 8
                  configured IO Threads.<br>
                  &gt; For example, if I run the following program (1
                  file, 55 lines):<br>
                  &gt;<br>
                  &gt; <a href="https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5" rel="noreferrer" target="_blank">https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5</a><br>
                  &gt;<br>
                  &gt; ... on Red Hat and then send requests to the
                  server using Apache<br>
                  &gt; Benchmark...<br>
                  &gt;<br>
                  &gt;       &gt; ab -n 1000 -c 8 localhost:8080/<br>
                  &gt;<br>
                  &gt; I see the following output from the Undertow
                  process:<br>
                  &gt;<br>
                  &gt;       Server started on port 8080<br>
                  &gt;<br>
                  &gt;       1<br>
                  &gt;       2<br>
                  &gt;       3<br>
                  &gt;       4<br>
                  &gt;<br>
                  &gt; I believe this demonstrates that only 4 requests
                  are ever processed in<br>
                  &gt; parallel.  I would expect 8.  In fact, when I run
                  the same experiment on<br>
                  &gt; Windows I see the expected output of<br>
                  &gt;<br>
                  &gt;       Server started on port 8080<br>
                  &gt;       1<br>
                  &gt;       2<br>
                  &gt;       3<br>
                  &gt;       4<br>
                  &gt;       5<br>
                  &gt;       6<br>
                  &gt;       7<br>
                  &gt;       8<br>
                  &gt;<br>
                  &gt; Any thoughts as to what might explain this
                  behavior?<br>
                  &gt;<br>
                  &gt; Best,<br>
                  &gt;<br>
                  &gt; Matt<br>
                  &gt;<br>
                  <br>
                  </div>
              </div>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
    </blockquote>
    <br>
  </div>

_______________________________________________<br>
undertow-dev mailing list<br>
<a href="mailto:undertow-dev@lists.jboss.org" target="_blank">undertow-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/undertow-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/mailman/listinfo/undertow-dev</a></blockquote></div>