From scrapmachines at gmail.com Mon Jul 2 06:52:46 2018 From: scrapmachines at gmail.com (Girish Sharma) Date: Mon, 2 Jul 2018 16:22:46 +0530 Subject: [undertow-dev] Reading request body parsed/buffered by RequestBufferingHandler Message-ID: Hi, I tried searching around in github/stackexchange but could not find anything related to RequestBufferingHandler. I understand that RequestBufferingHandler would read the request body for me and then call the handler assigned as the next handler. But how does the next handler read the buffered request body? Looking around the code of RequestBufferingHandler, I see that it adds an attachment to the exchange, but the key for that attachment is protected to the undertow package. How can I read the value of that attachment? Or is there some other way to read the buffered request body? PS: I know the alternate approach of reading the request body, i.e. startBlocking + getInputStream and getRequestReceiver().receiveFullString, but I am interested in using the RequestBufferingHandler in particular. -- Girish Sharma -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180702/0dcf8f6a/attachment.html From sdouglas at redhat.com Mon Jul 2 21:04:01 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 3 Jul 2018 11:04:01 +1000 Subject: [undertow-dev] Reading request body parsed/buffered by RequestBufferingHandler In-Reply-To: References: Message-ID: You just read it as normal. The advantage is that if you are going to dispatch to a worker thread then the dispatch does not happen until the request has been read, thus reducing the amount of time a worker spends processing the request. Essentially this allows you to take advantage of non-blocking IO even for applications that use blocking IO, but at the expense of memory for buffering. 
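[Editor's note: a minimal sketch of the pattern described above — RequestBufferingHandler in front of a BlockingHandler, with the body then read "as normal". It assumes Undertow is on the classpath; the port, buffer count, and handler body are illustrative, not taken from the thread.]

```java
import io.undertow.Undertow;
import io.undertow.server.handlers.BlockingHandler;
import io.undertow.server.handlers.RequestBufferingHandler;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BufferedBodyExample {
    public static void main(String[] args) {
        // RequestBufferingHandler reads the whole body on the IO thread first;
        // only then is the exchange passed on, so the worker never blocks on
        // a slow client while reading the request.
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(new RequestBufferingHandler(
                        // BlockingHandler dispatches to a worker thread and
                        // calls startBlocking(), enabling getInputStream().
                        new BlockingHandler(exchange -> {
                            InputStream in = exchange.getInputStream();
                            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                            exchange.getResponseSender().send("read " + body.length() + " bytes");
                        }),
                        1)) // max buffers per request (illustrative value)
                .build();
        server.start();
    }
}
```

There is no need to touch the package-private attachment key: the buffered data is replayed transparently when the next handler reads the request channel or stream.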
Stuart On Mon, Jul 2, 2018 at 8:55 PM Girish Sharma wrote: > Hi, > > I tried searching around in github/stackexchange but could not find > anything related to RequestBufferingHandler. > > I understand that RequestBufferingHandler would read the request body for > me and then call the handler assigned as the next handler. But how does the > next handler read the buffered request body? > > Looking around the code of RequestBufferingHandler, I see that it adds an > attachment to the exchange, but the key for that attachment is protected to > the undertow package. > > How can I read the value of that attachment? Or is there some other way to > read the buffered request body? > > PS: I know the alternate approach of reading request body i.e > startBlocking + getInputStream and getRequestReceiver().receiveFullString , > but I am interested in using the RequestBufferingHandler in particular. > > -- > Girish Sharma > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180703/46a39843/attachment-0001.html From scrapmachines at gmail.com Tue Jul 3 06:14:37 2018 From: scrapmachines at gmail.com (Girish Sharma) Date: Tue, 3 Jul 2018 15:44:37 +0530 Subject: [undertow-dev] Reading request body parsed/buffered by RequestBufferingHandler In-Reply-To: References: Message-ID: Hi Stuart, Thanks for getting back to me. I have some comments below: You just read it as normal > I am assuming that by normal, you mean the exchange.startBlocking() + (InputStream stream = exchange.getInputStream()) approach to read the input stream? But then what benefit do we get here compared to not using the RequestBufferingHandler and directly using this approach? 
Is the following loop (typically used in reading input stream) going to be much more efficient?: while ((line = bufferedReader.readLine()) != null) { stringBuffer.append(line); } Regards On Tue, Jul 3, 2018 at 6:34 AM Stuart Douglas wrote: > You just read it as normal. The advantage is that if you are going to > dispatch to a worker thread then the dispatch does not happen until the > request has been read, thus reducing the amount of time a worker spends > processing the request. Essentially this allows you to take advantage of > non-blocking IO even for applications that use blocking IO, but at the > expense of memory for buffering. > > Stuart > > On Mon, Jul 2, 2018 at 8:55 PM Girish Sharma > wrote: > >> Hi, >> >> I tried searching around in github/stackexchange but could not find >> anything related to RequestBufferingHandler. >> >> I understand that RequestBufferingHandler would read the request body for >> me and then call the handler assigned as the next handler. But how does the >> next handler read the buffered request body? >> >> Looking around the code of RequestBufferingHandler, I see that it adds an >> attachment to the exchange, but the key for that attachment is protected to >> the undertow package. >> >> How can I read the value of that attachment? Or is there some other way >> to read the buffered request body? >> >> PS: I know the alternate approach of reading request body i.e >> startBlocking + getInputStream and getRequestReceiver().receiveFullString , >> but I am interested in using the RequestBufferingHandler in particular. >> >> -- >> Girish Sharma >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > -- Girish Sharma B.Tech(H), Civil Engineering, Indian Institute of Technology, Kharagpur -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180703/c193a322/attachment.html From sdouglas at redhat.com Tue Jul 3 19:35:43 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Wed, 4 Jul 2018 09:35:43 +1000 Subject: [undertow-dev] Reading request body parsed/buffered by RequestBufferingHandler In-Reply-To: References: Message-ID: On Tue, Jul 3, 2018 at 8:15 PM Girish Sharma wrote: > Hi Stuart, > > Thanks for getting back to me. I have some comments below: > > You just read it as normal >> > I am assuming that by normal, you mean the > > exchange.startBlocking() + (InputStream stream = > exchange.getInputStream()) > > approach to read the input stream? But then what benefit we get here as > compared to not using the RequestBufferingHandler and directly using this > approach? Is the following loop (typically used in reading input stream) > going to be much more efficient?: > Say you have a slow mobile client that takes 2s to upload its request. Without the request buffering handler the worker thread will be blocked for 2s. If you have lots of these slow clients you could exhaust the worker thread pool, causing requests to queue and generally decreasing performance. If you use the request buffering handler the worker thread will not start work until after the data has been read, so it does not block on IO. Stuart > > while ((line = bufferedReader.readLine()) != null) { > stringBuffer.append(line); > } > > Regards > > > On Tue, Jul 3, 2018 at 6:34 AM Stuart Douglas wrote: > >> You just read it as normal. The advantage is that if you are going to >> dispatch to a worker thread then the dispatch does not happen until the >> request has been read, thus reducing the amount of time a worker spends >> processing the request. Essentially this allows you to take advantage of >> non-blocking IO even for applications that use blocking IO, but at the >> expense of memory for buffering. 
>> >> Stuart >> >> On Mon, Jul 2, 2018 at 8:55 PM Girish Sharma >> wrote: >> >>> Hi, >>> >>> I tried searching around in github/stackexchange but could not find >>> anything related to RequestBufferingHandler. >>> >>> I understand that RequestBufferingHandler would read the request body >>> for me and then call the handler assigned as the next handler. But how does >>> the next handler read the buffered request body? >>> >>> Looking around the code of RequestBufferingHandler, I see that it adds >>> an attachment to the exchange, but the key for that attachment is protected >>> to the undertow package. >>> >>> How can I read the value of that attachment? Or is there some other way >>> to read the buffered request body? >>> >>> PS: I know the alternate approach of reading request body i.e >>> startBlocking + getInputStream and getRequestReceiver().receiveFullString , >>> but I am interested in using the RequestBufferingHandler in particular. >>> >>> -- >>> Girish Sharma >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> > > -- > Girish Sharma > B.Tech(H), Civil Engineering, > Indian Institute of Technology, Kharagpur > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180704/f04b1ca0/attachment.html From kr at asseco.dk Wed Jul 4 04:07:51 2018 From: kr at asseco.dk (Kim Rasmussen) Date: Wed, 4 Jul 2018 10:07:51 +0200 Subject: [undertow-dev] UTF8 characters in response headers when using HTTP/2 Message-ID: Hi, I have a setup where I have my own variant of a ProxyHandler within undertow. In one case, I proxy requests towards an IIS running MailEnable - if I try to download a webmail attachment where the filename contains non-ascii characters, MailEnable sends the filename in UTF-8 characters in the HTTP header. 
I guess this is kinda a violation of the HTTP protocol, but that's how it is. When I run my undertow proxy using HTTP/1.1 between the browser and undertow, everything works as expected - the browser detects and supports UTF-8 characters in the filename in the HTTP headers. But, if I run HTTP/2 between the browser and undertow, using Chrome I am getting an SPDY_PROTOCOL_ERROR displayed within Chrome. So, I guess that it is because Chrome chokes on the UTF-8 characters in the HTTP/2 headers - I tried digging into the spec but I cannot really find anything mentioned there regarding restrictions on header content - just on header naming. Any suggestions? I could of course strip the invalid characters from the response header before forwarding them but wanted to check if there is a better way first.... -- Med venlig hilsen / Best regards *Kim Rasmussen* Partner, IT Architect *Asseco Denmark A/S* Kronprinsessegade 54 DK-1306 Copenhagen K Mobile: +45 26 16 40 23 Ph.: +45 33 36 46 60 Fax: +45 33 36 46 61 https://ceptor.io https://asseco.dk -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180704/2a9bfa45/attachment.html From scrapmachines at gmail.com Wed Jul 4 12:46:10 2018 From: scrapmachines at gmail.com (Girish Sharma) Date: Wed, 4 Jul 2018 22:16:10 +0530 Subject: [undertow-dev] Reading request body parsed/buffered by RequestBufferingHandler In-Reply-To: References: Message-ID: Got it, thanks! I implemented and tested out all three approaches and found that the RequestBufferingHandler is the most efficient (even though it consumes 2-3% higher CPU usage than the request receiver approach). Regards. On Wed, Jul 4, 2018 at 5:05 AM Stuart Douglas wrote: > > > On Tue, Jul 3, 2018 at 8:15 PM Girish Sharma > wrote: > >> Hi Stuart, >> >> Thanks for getting back to me. 
I have some comments below: >> >> You just read it as normal >>> >> I am assuming that by normal, you mean the >> >> exchange.startBlocking() + (InputStream stream = >> exchange.getInputStream()) >> >> approach to read the input stream? But then what benefit we get here as >> compared to not using the RequestBufferingHandler and directly using this >> approach? Is the following loop (typically used in reading input stream) >> going to be much more efficient?: >> > > Say you have a slow mobile client that takes 2s to upload its request. > Without the request buffering handler the worker thread will be blocked for > 2s. If you have lots of these slow clients you could exhaust the worker > thread pool, causing request to queue and generally decreasing performance. > If you use the request buffering handler the worker thread will not start > work until after the data has been read, so it does not block on IO. > > Stuart > > > >> >> while ((line = bufferedReader.readLine()) != null) { >> stringBuffer.append(line); >> } >> >> Regards >> >> >> On Tue, Jul 3, 2018 at 6:34 AM Stuart Douglas >> wrote: >> >>> You just read it as normal. The advantage is that if you are going to >>> dispatch to a worker thread then the dispatch does not happen until the >>> request has been read, thus reducing the amount of time a worker spends >>> processing the request. Essentially this allows you to take advantage of >>> non-blocking IO even for applications that use blocking IO, but at the >>> expense of memory for buffering. >>> >>> Stuart >>> >>> On Mon, Jul 2, 2018 at 8:55 PM Girish Sharma >>> wrote: >>> >>>> Hi, >>>> >>>> I tried searching around in github/stackexchange but could not find >>>> anything related to RequestBufferingHandler. >>>> >>>> I understand that RequestBufferingHandler would read the request body >>>> for me and then call the handler assigned as the next handler. But how does >>>> the next handler read the buffered request body? 
>>>> >>>> Looking around the code of RequestBufferingHandler, I see that it adds >>>> an attachment to the exchange, but the key for that attachment is protected >>>> to the undertow package. >>>> >>>> How can I read the value of that attachment? Or is there some other way >>>> to read the buffered request body? >>>> >>>> PS: I know the alternate approach of reading request body i.e >>>> startBlocking + getInputStream and getRequestReceiver().receiveFullString , >>>> but I am interested in using the RequestBufferingHandler in particular. >>>> >>>> -- >>>> Girish Sharma >>>> _______________________________________________ >>>> undertow-dev mailing list >>>> undertow-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> >> >> -- >> Girish Sharma >> B.Tech(H), Civil Engineering, >> Indian Institute of Technology, Kharagpur >> > -- Girish Sharma B.Tech(H), Civil Engineering, Indian Institute of Technology, Kharagpur -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180704/3070da72/attachment-0001.html From sdouglas at redhat.com Wed Jul 4 21:10:27 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 5 Jul 2018 11:10:27 +1000 Subject: [undertow-dev] UTF8 characters in response headers when using HTTP/2 In-Reply-To: References: Message-ID: On Wed, Jul 4, 2018 at 6:15 PM Kim Rasmussen wrote: > Hi, > > I have a setup where I have my own variant of a ProxyHandler within > undertow. > In one case, I proxy requests towards an IIS running MailEnable - if I try > to download a webmail attachment where the filename contains non-ascii > characters, MailEnable sends the filename in UTF-8 characters in the HTTP > header. > > I guess this is kinda a violation of the HTTP protocol, but thats how it > is. 
> > When I run my undertow proxy using HTTP1.1 between the browser and > undertow, everything works as expected - the browser detects and supports > UTF-8 characters in the filename in the HTTP headers. > But, if I run HTTP/2 between the browser and undertow, using Chrome I am > getting an SPDY_PROTOCOL_ERROR displayed within chrome. > Does it work with other browsers? Its possible that we have a bug in how we handle this, but I think it is more likely that chrome is just being more strict with HTTP/2 and enforcing the spec. > > So, I guess that it is because Chrome chokes on the UTF-8 characters in > the HTTP/2 headers - I tried digging into the spec but I cannot really find > anything mentioned there regarding restrictions on header content - just on > header naming. > https://tools.ietf.org/html/rfc7230 " Historically, HTTP has allowed field content with text in the ISO-8859-1 charset [ISO-8859-1], supporting other charsets only through use of [RFC2047] encoding. In practice, most HTTP header field values use only a subset of the US-ASCII charset [USASCII]. Newly defined header fields SHOULD limit their field values to US-ASCII octets. A recipient SHOULD treat other octets in field content (obs-text) as opaque data." > > Any suggestions ? I could of course strip the invalid characters from the > response header before forwarding them but wanted to check if there is a > better way first.... > Maybe you could use RFC2047 encoding, although I don't think it is particularly widely used, but I guess chrome probably supports it. Looking at our HTTP/2 encoder it does not attempt to deal with UTF-8 at all, it just casts the character to a byte, so we would not be encoding the characters properly anyway, but I am not sure if it matters as I don't think the browser would treat them as UTF-8 anyway. 
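[Editor's note: the RFC 2047 encoding suggested above can be sketched with plain JDK classes. The class and method names below are illustrative, not from any library; note that RFC 2047 encoded-words come from MIME mail headers, are limited to 76 characters each, and — as noted above — are not widely honored in HTTP, where RFC 5987/6266 `filename*=` is the usual route for Content-Disposition filenames.]

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Rfc2047 {
    /**
     * Wraps a header value as an RFC 2047 "encoded-word" using the
     * B (Base64) encoding: =?UTF-8?B?<base64 of UTF-8 bytes>?=
     * Values longer than 76 characters would need splitting into
     * multiple encoded-words, which this sketch does not do.
     */
    static String encodeB(String value) {
        String b64 = Base64.getEncoder()
                .encodeToString(value.getBytes(StandardCharsets.UTF_8));
        return "=?UTF-8?B?" + b64 + "?=";
    }

    public static void main(String[] args) {
        // A filename with the Danish letter "å", like the failing header
        System.out.println(encodeB("bilag-å.pdf"));
    }
}
```

The resulting value is pure US-ASCII, so it is safe to place in an HTTP/2 header; whether the receiving browser decodes it is a separate question.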
Stuart > > -- > Med venlig hilsen / Best regards > > *Kim Rasmussen* > Partner, IT Architect > > *Asseco Denmark A/S* > Kronprinsessegade 54 > DK-1306 Copenhagen K > Mobile: +45 26 16 40 23 > Ph.: +45 33 36 46 60 > Fax: +45 33 36 46 61 > > https://ceptor.io > https://asseco.dk > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180705/3ac822be/attachment.html From kr at asseco.dk Thu Jul 5 03:17:02 2018 From: kr at asseco.dk (Kim Rasmussen) Date: Thu, 5 Jul 2018 09:17:02 +0200 Subject: [undertow-dev] UTF8 characters in response headers when using HTTP/2 In-Reply-To: References: Message-ID: Thanks for your fast response. It fails in Edge and Firefox too, so unfortunately not just Chrome. I agree with you that the IIS server is violating the spec - it should never attempt to send UTF-8 encoded characters in the header, but it does so anyway and it works with Chrome, Firefox and Edge when using HTTP/1.1 (they must autodetect the UTF-8 encoding) - but with HTTP/2 it fails on the transport level somewhere. I guess that undertow should not really try to guess a codepage and encode it differently, so there is probably not anything else to do - I just wanted to check if there could be another bug that caused the protocol violation error, or if you know of the HTTP/2 spec mentioning anything about this, or if undertow did something special with non-ASCII characters. I'll try encoding the header if I detect non-ASCII characters in it and see how it goes... Below is the information from a trace in my proxy - sorry for the image, but it doesn't cut'n paste well - the odd characters in the Content-Disposition field are the UTF-8 encoded version of the Danish letter "å" 
(å in html) 2018-07-05 3:10 GMT+02:00 Stuart Douglas : > > > On Wed, Jul 4, 2018 at 6:15 PM Kim Rasmussen wrote: > >> Hi, >> >> I have a setup where I have my own variant of a ProxyHandler within >> undertow. >> In one case, I proxy requests towards an IIS running MailEnable - if I >> try to download a webmail attachment where the filename contains non-ascii >> characters, MailEnable sends the filename in UTF-8 characters in the HTTP >> header. >> >> I guess this is kinda a violation of the HTTP protocol, but thats how it >> is. >> > >> When I run my undertow proxy using HTTP1.1 between the browser and >> undertow, everything works as expected - the browser detects and supports >> UTF-8 characters in the filename in the HTTP headers. >> But, if I run HTTP/2 between the browser and undertow, using Chrome I am >> getting an SPDY_PROTOCOL_ERROR displayed within chrome. >> > > Does it work with other browsers? Its possible that we have a bug in how > we handle this, but I think it is more likely that chrome is just being > more strict with HTTP/2 and enforcing the spec. > > >> >> So, I guess that it is because Chrome chokes on the UTF-8 characters in >> the HTTP/2 headers - I tried digging into the spec but I cannot really find >> anything mentioned there regarding restrictions on header content - just on >> header naming. >> > > https://tools.ietf.org/html/rfc7230 > > " Historically, HTTP has allowed field content with text in the > ISO-8859-1 charset [ISO-8859-1], supporting other charsets only > through use of [RFC2047] encoding. In practice, most HTTP header > field values use only a subset of the US-ASCII charset [USASCII]. > Newly defined header fields SHOULD limit their field values to > US-ASCII octets. A recipient SHOULD treat other octets in field > content (obs-text) as opaque data." > > >> >> Any suggestions ? 
I could of course strip the invalid characters from the >> response header before forwarding them but wanted to check if there is a >> better way first.... >> > > Maybe you could use RFC2047 encoding, although I don't think it is > particularly widely used, but I guess chrome probably supports it. > > Looking at our HTTP/2 encoder it does not attempt to deal with UTF-8 at > all, it just casts the character to a byte, so we would not be encoding the > characters properly anyway, but I am not sure if it matters as I don't > think the browser would treat them as UTF-8 anyway. > > Stuart > > >> >> -- >> Med venlig hilsen / Best regards >> >> *Kim Rasmussen* >> Partner, IT Architect >> >> *Asseco Denmark A/S* >> Kronprinsessegade 54 >> >> DK-1306 Copenhagen >> >> K >> Mobile: +45 26 16 40 23 >> Ph.: +45 33 36 46 60 >> Fax: +45 33 36 46 61 >> >> https://ceptor.io >> https://asseco.dk >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > -- Med venlig hilsen / Best regards *Kim Rasmussen* Partner, IT Architect *Asseco Denmark A/S* Kronprinsessegade 54 DK-1306 Copenhagen K Mobile: +45 26 16 40 23 Ph.: +45 33 36 46 60 Fax: +45 33 36 46 61 https://ceptor.io https://asseco.dk -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180705/15e078f2/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 244734 bytes Desc: not available Url : http://lists.jboss.org/pipermail/undertow-dev/attachments/20180705/15e078f2/attachment-0001.png From ploffay at redhat.com Thu Jul 12 04:46:35 2018 From: ploffay at redhat.com (Pavol Loffay) Date: Thu, 12 Jul 2018 10:46:35 +0200 Subject: [undertow-dev] Microprofile-OpenTracing Async issues in TCK when running on Wildfly In-Reply-To: References: <0978492b-f5be-7fb3-a96e-1893ebd62dcf@redhat.com> Message-ID: Adding this also to undertow-dev ML. On Wed, Jul 11, 2018 at 7:09 PM Pavol Loffay wrote: > > > On Wed, Jul 11, 2018 at 6:51 PM Alessio Soldano > wrote: > >> I've pushed a fix for the ResteasyServletInitializer to master >> https://github.com/resteasy/Resteasy/commit/1b3870b0b7210a8b4ab70eb51fc903abdaac9b41 >> ; we'll double check it with tck before next release, anyway. >> >> Going back to the issue, 3.0.24 basically removes any doubt I had >> regarding recent asyn changes, it's quite an old version. >> The AsyncListener ... is this >> https://github.com/pavolloffay/smallrye-opentracing/blob/00f6c4dced4990b4e7ec0af57671399b6877f8a4/tck/src/test/java/io/smallrye/opentracing/ServletContextTracingInstaller.java#L27-L31 >> what you're referring to and which you believe is not working? it's really >> a servlet/undertow thing btw... >> > > Yes the AsyncListener is added in the filter which your link references. > > Martin also commented here > https://issues.jboss.org/browse/UNDERTOW-1258?focusedCommentId=13603809&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13603809 > on this issues. > > >> >> On Wed, Jul 11, 2018 at 2:40 PM, Martin Kouba wrote: >> >>> Also note that we use a modified ResteasyServletInitializer to init >>> resteasy in the TCK: >>> https://github.com/smallrye/smallrye-opentracing/pull/4/files#diff-ec8fa59dbdd6534b47de691e766aff61 >>> >>> M >>> >>> Dne 11.7.2018 v 13:29 Martin Kouba napsal(a): >>> >>> You're right, 3.0.24.Final. 
>>>> >>>> Dne 11.7.2018 v 13:09 Pavol Loffay napsal(a): >>>> >>>>> Hi Alessio, >>>>> >>>>> resteasy version in Thorntail and SmallRye should be the same - >>>>> 3.0.24.Final. I have added Ken and Martin in case I am wrong. >>>>> >>>>> Regards, >>>>> >>>>> On Wed, Jul 11, 2018 at 11:30 AM Alessio Soldano >>>> > wrote: >>>>> >>>>> CC-ing Pavol, not sure he's subscribed to the list >>>>> >>>>> On Wed, Jul 11, 2018 at 11:29 AM, Alessio Soldano >>>>> > wrote: >>>>> >>>>> Hi Pavol, >>>>> I'm forwarding this to the dev-list, so that the whole team can >>>>> read and help. >>>>> Can you start by telling which version of RESTEasy was used in >>>>> the previous and current integration? >>>>> There's been a bunch of changes around async lately, which >>>>> might >>>>> possibly be related to the issue you're seeing. >>>>> >>>>> Cheers >>>>> >>>>> ---------- Forwarded message ---------- >>>>> From: *Pavol Loffay* >>>> > >>>>> Date: Tue, Jul 10, 2018 at 6:15 PM >>>>> Subject: Microprofile-OpenTracing Async issues in TCK when >>>>> running on Wildfly >>>>> To: Alessio Soldano >>>> > >>>>> Cc: jean-frederic clere >>>> > >>>>> >>>>> >>>>> Hi Alessio, >>>>> >>>>> Jean Frederic pointed me to you as the contact for issues >>>>> related to Resteasy/undertow in Wildfly. >>>>> >>>>> I am migrating Microprofile-OpenTracing implementation from >>>>> Thorntail [1] to SmallRye [2]. TCK in Thorntail was passing >>>>> fine. Now when it's deployed on Wildfly a test for async >>>>> endpoint is failing. Basically, the AsyncListener (added in >>>>> filter) which reports some data is never called. >>>>> >>>>> The issue is described on the PR >>>>> >>>>> https://github.com/smallrye/smallrye-opentracing/pull/4#issuecomment-403847333. >>>>> >>>>> >>>>> >>>>> Could you please have a look and comment on the PR? Is it safe >>>>> to rely on AsyncListener. Can it happen that the listener added >>>>> in the filter will not be invoked? 
>>>>> >>>>> [1]: >>>>> >>>>> https://github.com/thorntail/thorntail/tree/master/fractions/microprofile/microprofile-opentracing >>>>> [2]: https://github.com/smallrye/smallrye-opentracing/pull/4 >>>>> >>>>> Regards, >>>>> -- >>>>> PAVOL LOFFAY >>>>> >>>>> SOFTWARE ENGINEER >>>>> >>>>> Red Hat >>>>> >>>>> M: +41791562647 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Alessio Soldano >>>>> >>>>> Associate Manager >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Alessio Soldano >>>>> >>>>> Associate Manager >>>>> >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> >>>>> PAVOL LOFFAY >>>>> >>>>> SOFTWARE ENGINEER >>>>> >>>>> Red Hat >>>>> >>>>> M: +41791562647 >>>>> >>>>> >>>>> >>>>> >>>> >>> -- >>> Martin Kouba >>> Senior Software Engineer >>> Red Hat, Czech Republic >>> >> >> >> >> -- >> >> Alessio Soldano >> >> Associate Manager >> >> Red Hat >> >> >> >> > > > -- > > PAVOL LOFFAY > > SOFTWARE ENGINEER > > Red Hat > > M: +41791562647 > > -- PAVOL LOFFAY SOFTWARE ENGINEER Red Hat M: +41791562647 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180712/57fcd6c1/attachment.html From bdw429s at gmail.com Thu Jul 12 12:15:38 2018 From: bdw429s at gmail.com (Brad Wood) Date: Thu, 12 Jul 2018 11:15:38 -0500 Subject: [undertow-dev] [1.4.23.Final] Invalid character | in request-target Message-ID: I just had a user who updated to the latest version of my Undertow-powered server report an error when his query string contained unencoded pipe characters. (error at the bottom) This didn't happen in older versions but appears to be a valid check. In this case, my user has no control over the URL that's being sent to his site as it comes from a Microsoft Office365 app that opens a popup window to one of his URLs for authentication. 
It looks like this: https://127.0.0.1:1443/index.cfm/login:main/index?_host_Info=outlook|web|16.01|en-us|89b212f8-4618-9ca2-bcf7-f1e8cb0969be|isDialog I have a feeling this is "working as designed" but is there a way to relax the validation here as he has no control over this URL and it is a hard stop for him? [DEBUG] io.undertow.request.io: UT005014: Failed to parse request io.undertow.util.BadRequestException: UT000165: Invalid character | in request-target at io.undertow.server.protocol.http.HttpRequestParser.handleQueryParameters(HttpRequestParser.java:523) at io.undertow.server.protocol.http.HttpRequestParser.beginQueryParameters(HttpRequestParser.java:486) at io.undertow.server.protocol.http.HttpRequestParser.handlePath(HttpRequestParser.java:410) at io.undertow.server.protocol.http.HttpRequestParser.handle(HttpRequestParser.java:248) at io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:187) at io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:136) at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:151) at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:92) at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:51) at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:291) at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286) at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) at org.xnio.nio.QueuedNioTcpServer$1.run(QueuedNioTcpServer.java:129) at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:582) at org.xnio.nio.WorkerThread.run(WorkerThread.java:466) Thanks! 
~Brad *Developer Advocate* *Ortus Solutions, Corp * E-mail: brad at coldbox.org ColdBox Platform: http://www.coldbox.org Blog: http://www.codersrevolution.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180712/9ecdac78/attachment-0001.html From sdouglas at redhat.com Thu Jul 12 19:23:51 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Fri, 13 Jul 2018 09:23:51 +1000 Subject: [undertow-dev] [1.4.23.Final] Invalid character | in request-target In-Reply-To: References: Message-ID: The io.undertow.UndertowOptions#ALLOW_UNESCAPED_CHARACTERS_IN_URL option allows you to control this. Stuart On Fri, Jul 13, 2018 at 2:23 AM Brad Wood wrote: > I just had a user who updated to the latest version of my Undertow-powered > server report an error when his query string contained unencoded pipe > characters. (error at the bottom) This didn't happen in older versions but > appears to be a valid check. In this case, my user has no control over the > URL that's being sent to his site as it comes from a Microsoft Office365 > app that opens a popup window to one of his URLs for authentication. It > looks like this: > > > https://127.0.0.1:1443/index.cfm/login:main/index?_host_Info=outlook|web|16.01|en-us|89b212f8-4618-9ca2-bcf7-f1e8cb0969be|isDialog > > I have a feeling this is "working as designed" but is there a way to relax > the validation here as he has no control over this URL and it is a hard > stop for him? 
> > [DEBUG] io.undertow.request.io: UT005014: Failed to parse request > io.undertow.util.BadRequestException: UT000165: Invalid character | in > request-target > at > io.undertow.server.protocol.http.HttpRequestParser.handleQueryParameters(HttpRequestParser.java:523) > at > io.undertow.server.protocol.http.HttpRequestParser.beginQueryParameters(HttpRequestParser.java:486) > at > io.undertow.server.protocol.http.HttpRequestParser.handlePath(HttpRequestParser.java:410) > at > io.undertow.server.protocol.http.HttpRequestParser.handle(HttpRequestParser.java:248) > at > io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:187) > at > io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:136) > at > io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:151) > at > io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:92) > at > io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:51) > at > org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) > at > org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:291) > at > org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286) > at > org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) > at > org.xnio.nio.QueuedNioTcpServer$1.run(QueuedNioTcpServer.java:129) > at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:582) > at org.xnio.nio.WorkerThread.run(WorkerThread.java:466) > > Thanks! 
> > ~Brad > > *Developer Advocate* > *Ortus Solutions, Corp * > > E-mail: brad at coldbox.org > ColdBox Platform: http://www.coldbox.org > Blog: http://www.codersrevolution.com > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180713/4754bd16/attachment.html From bdw429s at gmail.com Thu Jul 12 20:07:09 2018 From: bdw429s at gmail.com (Brad Wood) Date: Thu, 12 Jul 2018 19:07:09 -0500 Subject: [undertow-dev] [1.4.23.Final] Invalid character | in request-target In-Reply-To: References: Message-ID: Cool, thanks. I actually found that online after posting but for the life of me couldn't figure out how to reply to my own topic on the mailing list since you don't get emailed for your own post and the web site doesn't seem to have posting capabilities. On a related note, have you considered switching to Google Groups or something? The JBoss lists are seriously outdated. Like in an embarrassing way :) Thanks! ~Brad *Developer Advocate* *Ortus Solutions, Corp * E-mail: brad at coldbox.org ColdBox Platform: http://www.coldbox.org Blog: http://www.codersrevolution.com On Thu, Jul 12, 2018 at 6:24 PM Stuart Douglas wrote: > The io.undertow.UndertowOptions#ALLOW_UNESCAPED_CHARACTERS_IN_URL option > allows you to control this. > > Stuart > > On Fri, Jul 13, 2018 at 2:23 AM Brad Wood wrote: > >> I just had a user who updated to the latest version of my >> Undertow-powered server report an error when his query string contained >> unencoded pipe characters. (error at the bottom) This didn't happen in >> older versions but appears to be a valid check.
In this case, my user has >> no control over the URL that's being sent to his site as it comes from a >> Microsoft Office365 app that opens a popup window to one of his URLs for >> authentication. It looks like this: >> >> >> https://127.0.0.1:1443/index.cfm/login:main/index?_host_Info=outlook|web|16.01|en-us|89b212f8-4618-9ca2-bcf7-f1e8cb0969be|isDialog >> >> I have a feeling this is "working as designed" but is there a way to >> relax the validation here as he has no control over this URL and it is a >> hard stop for him? >> >> [DEBUG] io.undertow.request.io: UT005014: Failed to parse request >> io.undertow.util.BadRequestException: UT000165: Invalid character | in >> request-target >> at >> io.undertow.server.protocol.http.HttpRequestParser.handleQueryParameters(HttpRequestParser.java:523) >> at >> io.undertow.server.protocol.http.HttpRequestParser.beginQueryParameters(HttpRequestParser.java:486) >> at >> io.undertow.server.protocol.http.HttpRequestParser.handlePath(HttpRequestParser.java:410) >> at >> io.undertow.server.protocol.http.HttpRequestParser.handle(HttpRequestParser.java:248) >> at >> io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:187) >> at >> io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:136) >> at >> io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:151) >> at >> io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:92) >> at >> io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:51) >> at >> org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) >> at >> org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:291) >> at >> org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286) >> at >> org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) >> at >> 
org.xnio.nio.QueuedNioTcpServer$1.run(QueuedNioTcpServer.java:129) >> at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:582) >> at org.xnio.nio.WorkerThread.run(WorkerThread.java:466) >> >> Thanks! >> >> ~Brad >> >> *Developer Advocate* >> *Ortus Solutions, Corp * >> >> E-mail: brad at coldbox.org >> ColdBox Platform: http://www.coldbox.org >> Blog: http://www.codersrevolution.com >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180712/b26b35cf/attachment.html From stan.rosenberg at acm.org Sat Jul 14 21:49:43 2018 From: stan.rosenberg at acm.org (Stan Rosenberg) Date: Sat, 14 Jul 2018 21:49:43 -0400 Subject: [undertow-dev] how to implement request processing timeout Message-ID: Apologies if this question has already been answered elsewhere; closest I could find is this thread: http://lists.jboss.org/pipermail/undertow-dev/2014-August/000898.html HttpServerExchange cannot be manipulated from multiple threads (without locking). Thus, dispatch and executeAfter wouldn't work if the goal is to end the exchange after the max. time to process (request) has been exceeded. I can implement this timeout mechanism using out-of-band thread executor but was hoping there is a more efficient way provided by the framework. Thanks. Best, stan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180714/3c6f6473/attachment.html From sdouglas at redhat.com Sun Jul 15 21:48:04 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Mon, 16 Jul 2018 11:48:04 +1000 Subject: [undertow-dev] how to implement request processing timeout In-Reply-To: References: Message-ID: So there are a few options, but if you actually want to generate a HTTP response instead of just dropping the connection then your application code is going to have to take some responsibility. If all you want to do is drop the connection then just scheduling a task that does exchange.getConnection().close() is fine, no HTTP response will be returned to the client. If you want to actually send a response to the client then you are going to have to have some kind of lock/CAS that prevents your application from writing once the timeout has taken effect. Are you using the Servlet API or the HttpServerExchange API? The best way to approach this is a bit different depending on what you are doing. Stuart On Sun, Jul 15, 2018 at 11:50 AM Stan Rosenberg wrote: > Apologies if this question has already been answered elsewhere; closest I > could find is this thread: > http://lists.jboss.org/pipermail/undertow-dev/2014-August/000898.html > > HttpServerExchange cannot be manipulated from multiple threads (without > locking). Thus, dispatch and executeAfter wouldn't work if the goal is to > end the exchange after the max. time to process (request) has been exceeded. > > I can implement this timeout mechanism using out-of-band thread executor > but was hoping there is a more efficient way provided by the framework. > Thanks. > > Best, > > stan > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180716/bdc426c7/attachment-0001.html From sdouglas at redhat.com Sun Jul 15 22:45:26 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Mon, 16 Jul 2018 12:45:26 +1000 Subject: [undertow-dev] how to implement request processing timeout In-Reply-To: References: Message-ID: On Mon, Jul 16, 2018 at 12:16 PM Stan Rosenberg wrote: > On Sun, Jul 15, 2018 at 9:48 PM, Stuart Douglas > wrote: > >> If all you want to do is drop the connection then just scheduling a task >> that does exchange.getConnection().close() is fine, no HTTP response will >> be returned to the client. >> > > That would imply exchange.getConnection().close() is thread-safe; just > double-checking. > Yes. > > >> >> If you want to actually send a response to the client then you are going >> to have to have some kind of lock/CAS that prevents your application from >> writing once the timeout has taken effect. >> >> > Makes sense, but that's custom logic; i.e., not available in the API, > right? > Yes. The issue with including something like this in the core API is that every request has to pay the thread safety price even if they don't use it. > > Are you using the Servlet API or the HttpServerExchange API? The best way >> to approach this is a bit different depending on what you are doing. >> > > HttpServerExchange API > Thanks! > This is a bit harder to do in the general case. With Servlet you could just create a thread safe wrapper, where the wrapper basically disconnects from the underlying request on timeout. The Undertow native API is not designed around wrapping though, so it needs cooperation from the application to manage this. If you know the application is only going to be writing data (and not setting headers) then you should be able to make this work via a ConduitFactory implementation that handles the locking, although if this is not the case then you are going to need some kind of external lock.
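Stuart's lock/CAS suggestion can be sketched without any Undertow types: a hypothetical guard in which the application thread and a scheduled timeout task race on an `AtomicBoolean`, and only the winner of the `compareAndSet` may touch the exchange. The class name, the 100 ms deadline, and the println stand-ins for the real response/close calls are all invented for illustration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class RequestTimeoutSketch {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // One flag per in-flight exchange: the first successful CAS wins.
        AtomicBoolean completed = new AtomicBoolean(false);

        // Timeout side. In Undertow this task would be scheduled with
        // something like exchange.getIoThread().executeAfter(...) and would
        // end the exchange or close the connection; here it only claims
        // the flag and prints what it would do.
        timer.schedule(() -> {
            if (completed.compareAndSet(false, true)) {
                System.out.println("timed out: send 503 / close connection");
            }
        }, 100, TimeUnit.MILLISECONDS);

        Thread.sleep(200); // simulate slow application processing
        if (completed.compareAndSet(false, true)) {
            System.out.println("completed: send the normal response");
        } else {
            System.out.println("lost the race: do not touch the exchange");
        }
        timer.shutdown();
    }
}
```

Because the deadline (100 ms) expires before the simulated work (200 ms), the timeout branch wins and the application branch must back off without writing.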
Stuart -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180716/801bd311/attachment.html From ploffay at redhat.com Mon Jul 16 05:19:33 2018 From: ploffay at redhat.com (Pavol Loffay) Date: Mon, 16 Jul 2018 11:19:33 +0200 Subject: [undertow-dev] MicroProfile-OpenTracing failing async TCK - AsyncListener not being invoked Message-ID: Hi, I am migrating the MicroProfile-OpenTracing implementation from Thorntail/WF-Swarm to SmallRye [1]. We had all TCKs passing in Thorntail; however, in SmallRye one test for an async endpoint is now failing. The issue is that the AsyncListener which reports tracing data is not being invoked. The listener is added in a servlet filter, see https://github.com/opentracing-contrib/java-jaxrs/blob/master/opentracing-jaxrs2/src/main/java/io/opentracing/contrib/jaxrs2/server/SpanFinishingFilter.java#L68 . Should the AsyncListener always be invoked for async requests? Or is it just an Undertow bug/behavior? If the listener is not invoked, tracing data is not reported and hence the TCK fails. Tests in SmallRye are based on WF13 and resteasy 3.0.24.Final; Thorntail uses the same resteasy version. The issue is also described on pull request [1]. [1]: https://github.com/smallrye/smallrye-opentracing/pull/4 Regards, -- PAVOL LOFFAY SOFTWARE ENGINEER Red Hat M: +41791562647 -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180716/cea16f75/attachment.html From ploffay at redhat.com Mon Jul 16 05:36:07 2018 From: ploffay at redhat.com (Pavol Loffay) Date: Mon, 16 Jul 2018 11:36:07 +0200 Subject: [undertow-dev] MicroProfile-OpenTracing failing async TCK - AsyncListener not being invoked In-Reply-To: References: Message-ID: Adding a pointer to a related issue: https://issues.jboss.org/browse/UNDERTOW-1258 On Mon, Jul 16, 2018 at 11:19 AM Pavol Loffay wrote: > Hi, > > I am migrating MicroProfile-OpenTracing implementation from > Thorntaln/WF-Swarm to SmallRye [1]. We had all TCKs passing in Thorntail, > however now in SmallRye one test for async endpoint is failing. > > The issue is that AsyncListener which reports tracing data is not being > invoked. The listener is added in a servlet filter see > https://github.com/opentracing-contrib/java-jaxrs/blob/master/opentracing-jaxrs2/src/main/java/io/opentracing/contrib/jaxrs2/server/SpanFinishingFilter.java#L68 > . > > Should be AsyncListener always invoked for async requests? Or is it just > undertow bug/ behavior? If the listener is not invoked tracing data is not > reported and hence failing TCK. > > Tests in SmallRye are based on WF13 and resteasy 3.0.24.Final the > Thorntail uses the same resteasy version. The issue is also described on > pull request [1]. > > [1]: https://github.com/smallrye/smallrye-opentracing/pull/4 > > Regards, > > -- > > PAVOL LOFFAY > > SOFTWARE ENGINEER > > Red Hat > > M: +41791562647 > > -- PAVOL LOFFAY SOFTWARE ENGINEER Red Hat M: +41791562647 -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180716/1da568ed/attachment.html From sdouglas at redhat.com Mon Jul 16 21:13:51 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 17 Jul 2018 11:13:51 +1000 Subject: [undertow-dev] MicroProfile-OpenTracing failing async TCK - AsyncListener not being invoked In-Reply-To: References: Message-ID: On Mon, Jul 16, 2018 at 7:23 PM Pavol Loffay wrote: > Hi, > > I am migrating MicroProfile-OpenTracing implementation from > Thorntaln/WF-Swarm to SmallRye [1]. We had all TCKs passing in Thorntail, > however now in SmallRye one test for async endpoint is failing. > > The issue is that AsyncListener which reports tracing data is not being > invoked. The listener is added in a servlet filter see > https://github.com/opentracing-contrib/java-jaxrs/blob/master/opentracing-jaxrs2/src/main/java/io/opentracing/contrib/jaxrs2/server/SpanFinishingFilter.java#L68 > . > > Should be AsyncListener always invoked for async requests? Or is it just > undertow bug/ behavior? If the listener is not invoked tracing data is not > reported and hence failing TCK. > Looking at this again I think this is a bug in Undertow. At the moment we don't really delay the async complete() call properly, I have re-opened to attached JIRA and will work on a fix. Stuart > > Tests in SmallRye are based on WF13 and resteasy 3.0.24.Final the > Thorntail uses the same resteasy version. The issue is also described on > pull request [1]. > > [1]: https://github.com/smallrye/smallrye-opentracing/pull/4 > > Regards, > > -- > > PAVOL LOFFAY > > SOFTWARE ENGINEER > > Red Hat > > M: +41791562647 > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180717/9cbf3937/attachment-0001.html From barnett at rice.edu Tue Jul 24 19:56:53 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Tue, 24 Jul 2018 18:56:53 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat Message-ID: Hello, I'm experiencing an Undertow performance issue I fail to understand. I am able to reproduce the issue with the code linked below. The problem is that on Red Hat (and not Windows) I'm unable to concurrently process more than 4 overlapping requests even with 8 configured IO Threads. For example, if I run the following program (1 file, 55 lines): https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 ... on Red Hat and then send requests to the server using Apache Benchmark... > ab -n 1000 -c 8 localhost:8080/ I see the following output from the Undertow process: Server started on port 8080 1 2 3 4 I believe this demonstrates that only 4 requests are ever processed in parallel. I would expect 8. In fact, when I run the same experiment on Windows I see the expected output of Server started on port 8080 1 2 3 4 5 6 7 8 Any thoughts as to what might explain this behavior? Best, Matt From sdouglas at redhat.com Tue Jul 24 20:13:31 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Wed, 25 Jul 2018 10:13:31 +1000 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: There is no guarantee that connections will be evenly distributed between IO threads. Once a client has connected the connection is tied to that IO thread, so it may be that you are just ending up with 2 connections on 4 threads. Stuart On Wed, Jul 25, 2018 at 10:02 AM R. Matt Barnett wrote: > Hello, > > I'm experiencing an Undertow performance issue I fail to understand.
I > am able to reproduce the issue with the code linked bellow. The problem > is that on Red Hat (and not Windows) I'm unable to concurrently process > more than 4 overlapping requests even with 8 configured IO Threads. > For example, if I run the following program (1 file, 55 lines): > > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > > ... on Red Hat and then send requests to the server using Apache > Benchmark... > > > ab -n 1000 -c 8 localhost:8080/ > > I see the following output from the Undertow process: > > Server started on port 8080 > > 1 > 2 > 3 > 4 > > I believe this demonstrates that only 4 requests are ever processed in > parallel. I would expect 8. In fact, when I run the same experiment on > Windows I see the expected output of > > Server started on port 8080 > 1 > 2 > 3 > 4 > 5 > 6 > 7 > 8 > > Any thoughts as to what might explain this behavior? > > Best, > > Matt > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/57d05f5d/attachment.html From barnett at rice.edu Tue Jul 24 20:56:20 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Tue, 24 Jul 2018 19:56:20 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: <20180724195620.Horde.m_mPoqHaU56HEUKObe0gSw2@webmail.rice.edu> My test is a little bogus anyway because I realized on the drive home I did an invalid test/set on max seen. But I think the conclusion still stands because we only see 4 printfs. I'm not very experienced with nio, but the way I assumed Undertow worked, at a high level, was as follows: 1.) Each incoming socket connection generated a channel. 2.) Each channel created by step 1.) 
was associated with a singleton selector. 3.) All IO threads polled the singleton selector waiting for requests to process. Sort of a multi-producer/multi-consumer with a singleton queue model. Is this not the case? Is it the case that only one thread can poll from a selector? -- Matt Quoting Stuart Douglas : > There is no guarantee that connections will be evenly distributed between > IO threads. Once a client has connected the connection is tied to that IO > thread, so it may be that you are just ending up with 2 connections on 4 > threads. > > Stuart > > > > > On Wed, Jul 25, 2018 at 10:02 AM R. Matt Barnett wrote: > >> Hello, >> >> I'm experiencing an Undertow performance issue I fail to understand. I >> am able to reproduce the issue with the code linked bellow. The problem >> is that on Red Hat (and not Windows) I'm unable to concurrently process >> more than 4 overlapping requests even with 8 configured IO Threads. >> For example, if I run the following program (1 file, 55 lines): >> >> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> >> ... on Red Hat and then send requests to the server using Apache >> Benchmark... >> >> > ab -n 1000 -c 8 localhost:8080/ >> >> I see the following output from the Undertow process: >> >> Server started on port 8080 >> >> 1 >> 2 >> 3 >> 4 >> >> I believe this demonstrates that only 4 requests are ever processed in >> parallel. I would expect 8. In fact, when I run the same experiment on >> Windows I see the expected output of >> >> Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> 5 >> 6 >> 7 >> 8 >> >> Any thoughts as to what might explain this behavior? 
>> >> Best, >> >> Matt >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev From bill at dartalley.com Tue Jul 24 20:59:40 2018 From: bill at dartalley.com (Bill O'Neil) Date: Tue, 24 Jul 2018 20:59:40 -0400 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: <20180724195620.Horde.m_mPoqHaU56HEUKObe0gSw2@webmail.rice.edu> References: <20180724195620.Horde.m_mPoqHaU56HEUKObe0gSw2@webmail.rice.edu> Message-ID: I believe one of the selector implementations buckets the connections to IO threads based on the port the socket is open on. This means with bad luck you could have 4 connections waiting on a single IO thread while other IO threads are idle especially if the connections are keepalive. I would try boosting the concurrency level quite a bit and see if you notice more IO threads being used. On Tue, Jul 24, 2018 at 8:56 PM, R. Matt Barnett wrote: > My test is a little bogus anyway because I realized on the drive home > I did an invalid test/set on max seen. But I think the conclusion > still stands because we only see 4 printfs. > > I'm not very experienced with nio, but the way I assumed Undertow > worked, at a high level, was as follows: > > 1.) Each incoming socket connection generated a channel. > 2.) Each channel created by step 1.) was associated with a singleton > selector. > 3.) All IO threads polled the singleton selector waiting for requests > to process. > > Sort of a multi-producer/multi-consumer with a singleton queue model. > > Is this not the case? Is it the case that only one thread can poll > from a selector? > > -- Matt > > Quoting Stuart Douglas : > > > There is no guarantee that connections will be evenly distributed between > > IO threads. 
Once a client has connected the connection is tied to that IO > > thread, so it may be that you are just ending up with 2 connections on 4 > > threads. > > > > Stuart > > > > > > > > > > On Wed, Jul 25, 2018 at 10:02 AM R. Matt Barnett > wrote: > > > >> Hello, > >> > >> I'm experiencing an Undertow performance issue I fail to understand. I > >> am able to reproduce the issue with the code linked bellow. The problem > >> is that on Red Hat (and not Windows) I'm unable to concurrently process > >> more than 4 overlapping requests even with 8 configured IO Threads. > >> For example, if I run the following program (1 file, 55 lines): > >> > >> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > >> > >> ... on Red Hat and then send requests to the server using Apache > >> Benchmark... > >> > >> > ab -n 1000 -c 8 localhost:8080/ > >> > >> I see the following output from the Undertow process: > >> > >> Server started on port 8080 > >> > >> 1 > >> 2 > >> 3 > >> 4 > >> > >> I believe this demonstrates that only 4 requests are ever processed in > >> parallel. I would expect 8. In fact, when I run the same experiment on > >> Windows I see the expected output of > >> > >> Server started on port 8080 > >> 1 > >> 2 > >> 3 > >> 4 > >> 5 > >> 6 > >> 7 > >> 8 > >> > >> Any thoughts as to what might explain this behavior? > >> > >> Best, > >> > >> Matt > >> > >> _______________________________________________ > >> undertow-dev mailing list > >> undertow-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180724/d6f3c4d6/attachment.html From ckozak at apache.org Tue Jul 24 21:06:49 2018 From: ckozak at apache.org (Carter Kozak) Date: Tue, 24 Jul 2018 21:06:49 -0400 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: <20180724195620.Horde.m_mPoqHaU56HEUKObe0gSw2@webmail.rice.edu> Message-ID: When I run the test case locally I immediately see 1-8 printed. Perhaps there aren't many cores available on your test machine? Also note that writing the response is handled asynchronously, so it's likely not a very large percentage of the total request handling time spent inside your HttpHandler implementation. -ck On Tue, Jul 24, 2018 at 8:59 PM, Bill O'Neil wrote: > I believe one of the selector implementations buckets the connections to IO > threads based on the port the socket is open on. This means with bad luck > you could have 4 connections waiting on a single IO thread while other IO > threads are idle especially if the connections are keepalive. I would try > boosting the concurrency level quite a bit and see if you notice more IO > threads being used. > > > On Tue, Jul 24, 2018 at 8:56 PM, R. Matt Barnett wrote: >> >> My test is a little bogus anyway because I realized on the drive home >> I did an invalid test/set on max seen. But I think the conclusion >> still stands because we only see 4 printfs. >> >> I'm not very experienced with nio, but the way I assumed Undertow >> worked, at a high level, was as follows: >> >> 1.) Each incoming socket connection generated a channel. >> 2.) Each channel created by step 1.) was associated with a singleton >> selector. >> 3.) All IO threads polled the singleton selector waiting for requests >> to process. >> >> Sort of a multi-producer/multi-consumer with a singleton queue model. >> >> Is this not the case? Is it the case that only one thread can poll >> from a selector? 
>> >> -- Matt >> >> Quoting Stuart Douglas : >> >> > There is no guarantee that connections will be evenly distributed >> > between >> > IO threads. Once a client has connected the connection is tied to that >> > IO >> > thread, so it may be that you are just ending up with 2 connections on 4 >> > threads. >> > >> > Stuart >> > >> > >> > >> > >> > On Wed, Jul 25, 2018 at 10:02 AM R. Matt Barnett >> > wrote: >> > >> >> Hello, >> >> >> >> I'm experiencing an Undertow performance issue I fail to understand. I >> >> am able to reproduce the issue with the code linked bellow. The problem >> >> is that on Red Hat (and not Windows) I'm unable to concurrently process >> >> more than 4 overlapping requests even with 8 configured IO Threads. >> >> For example, if I run the following program (1 file, 55 lines): >> >> >> >> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> >> >> >> ... on Red Hat and then send requests to the server using Apache >> >> Benchmark... >> >> >> >> > ab -n 1000 -c 8 localhost:8080/ >> >> >> >> I see the following output from the Undertow process: >> >> >> >> Server started on port 8080 >> >> >> >> 1 >> >> 2 >> >> 3 >> >> 4 >> >> >> >> I believe this demonstrates that only 4 requests are ever processed in >> >> parallel. I would expect 8. In fact, when I run the same experiment >> >> on >> >> Windows I see the expected output of >> >> >> >> Server started on port 8080 >> >> 1 >> >> 2 >> >> 3 >> >> 4 >> >> 5 >> >> 6 >> >> 7 >> >> 8 >> >> >> >> Any thoughts as to what might explain this behavior? 
>> >> >> >> Best, >> >> >> >> Matt >> >> >> >> _______________________________________________ >> >> undertow-dev mailing list >> >> undertow-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From barnett at rice.edu Tue Jul 24 21:08:07 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Tue, 24 Jul 2018 20:08:07 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: <20180724195620.Horde.m_mPoqHaU56HEUKObe0gSw2@webmail.rice.edu> Message-ID: <20180724200807.Horde.RW6OTmu7ZEIcXsHSF77YOg1@webmail.rice.edu> Quoting Carter Kozak : > When I run the test case locally I immediately see 1-8 printed. > Perhaps there aren't many cores available on your test machine? The test machine is an 8 core machine. > > Also note that writing the response is handled asynchronously, so it's > likely not a very large percentage of the total request handling time > spent inside your HttpHandler implementation. > Do you think the 1 second sleep in the body is not sufficient? -- Matt From barnett at rice.edu Wed Jul 25 12:26:49 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Wed, 25 Jul 2018 11:26:49 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: Corrected test to resolve test/set race. 
https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa I've also discovered this morning that I *can* see 1-8 printed on Red Hat when I generate load using ab from Windows, but only 1-4 when running ab on Red Hat (both locally and from a remote server). I'm wondering if perhaps there is some sort of connection reuse shenanigans going on. My assumption of the use of the -c 8 parameter was "make 8 sockets" but maybe not. I'll dig in and report back. -- Matt On 7/24/2018 6:56 PM, R. Matt Barnett wrote: > Hello, > > I'm experiencing an Undertow performance issue I fail to understand. I > am able to reproduce the issue with the code linked bellow. The problem > is that on Red Hat (and not Windows) I'm unable to concurrently process > more than 4 overlapping requests even with 8 configured IO Threads. > For example, if I run the following program (1 file, 55 lines): > > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > > ... on Red Hat and then send requests to the server using Apache > Benchmark... > > > ab -n 1000 -c 8 localhost:8080/ > > I see the following output from the Undertow process: > > Server started on port 8080 > > 1 > 2 > 3 > 4 > > I believe this demonstrates that only 4 requests are ever processed in > parallel. I would expect 8. In fact, when I run the same experiment on > Windows I see the expected output of > > Server started on port 8080 > 1 > 2 > 3 > 4 > 5 > 6 > 7 > 8 > > Any thoughts as to what might explain this behavior?
> > Best, > > Matt > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From bill at dartalley.com Wed Jul 25 12:49:18 2018 From: bill at dartalley.com (Bill O'Neil) Date: Wed, 25 Jul 2018 12:49:18 -0400 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: Did you try setting the concurrency level much higher than 8 like I suggested earlier? You are probably having multiple connections assigned to the same IO threads. On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett wrote: > Corrected test to resolve test/set race. > > > https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa > > > I've also discovered this morning that I *can* see 1-8 printed on Red > Hat when I generate load using ab from Windows, but only 1-4 when > running ab on Red Hat (both locally and from a remote server). I'm > wondering if perhaps there is some sort of connection reuse shenanigans > going on. My assumption of the use of the -c 8 parameter was "make 8 > sockets" but maybe not. I'll dig in and report back. > > > -- Matt > > > On 7/24/2018 6:56 PM, R. Matt Barnett wrote: > > Hello, > > > > I'm experiencing an Undertow performance issue I fail to understand. I > > am able to reproduce the issue with the code linked bellow. The problem > > is that on Red Hat (and not Windows) I'm unable to concurrently process > > more than 4 overlapping requests even with 8 configured IO Threads. > > For example, if I run the following program (1 file, 55 lines): > > > > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > > > > ... on Red Hat and then send requests to the server using Apache > > Benchmark... 
> > > > > ab -n 1000 -c 8 localhost:8080/ > > > > I see the following output from the Undertow process: > > > > Server started on port 8080 > > > > 1 > > 2 > > 3 > > 4 > > > > I believe this demonstrates that only 4 requests are ever processed in > > parallel. I would expect 8. In fact, when I run the same experiment on > > Windows I see the expected output of > > > > Server started on port 8080 > > 1 > > 2 > > 3 > > 4 > > 5 > > 6 > > 7 > > 8 > > > > Any thoughts as to what might explain this behavior? > > > > Best, > > > > Matt > > > > _______________________________________________ > > undertow-dev mailing list > > undertow-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/undertow-dev > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/2b2dd406/attachment.html From barnett at rice.edu Wed Jul 25 13:40:13 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Wed, 25 Jul 2018 12:40:13 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: I did. I set the concurrency level of ab to 128. I still see only 4 overlaps: $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & Server started on port 8080 1 2 3 4 $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED 126 What is the algorithm for mapping connections to IO threads? As a new Undertow user I had assumed round robin, but it sounds like this is not the case. -- Matt On 7/25/2018 11:49 AM, Bill O'Neil wrote: > Did you try setting the concurrency level much higher than 8 like I > suggested earlier? You are probably having multiple connections > assigned to the same IO threads. > > On Wed, Jul 25, 2018 at 12:26 PM, R.
Matt Barnett > wrote: > > Corrected test to resolve test/set race. > > > https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa > > > > I've also discovered this morning that I *can* see 1-8 printed on Red > Hat when I generate load using ab from Windows, but only 1-4 when > running ab on Red Hat (both locally and from a remote server). I'm > wondering if perhaps there is some sort of connection reuse > shenanigans > going on. My assumption of the use of the -c 8 parameter was "make 8 > sockets" but maybe not. I'll dig in and report back. > > > -- Matt > > > On 7/24/2018 6:56 PM, R. Matt Barnett wrote: > > Hello, > > > > I'm experiencing an Undertow performance issue I fail to > understand. I > > am able to reproduce the issue with the code linked bellow. The > problem > > is that on Red Hat (and not Windows) I'm unable to concurrently > process > > more than 4 overlapping requests even with 8 configured IO Threads. > > For example, if I run the following program (1 file, 55 lines): > > > > > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > > > > > ... on Red Hat and then send requests to the server using Apache > > Benchmark... > > > > > ab -n 1000 -c 8 localhost:8080/ > > > > I see the following output from the Undertow process: > > > > Server started on port 8080 > > > > 1 > > 2 > > 3 > > 4 > > > > I believe this demonstrates that only 4 requests are ever > processed in > > parallel. I would expect 8. In fact, when I run the same > experiment on > > Windows I see the expected output of > > > > Server started on port 8080 > > 1 > > 2 > > 3 > > 4 > > 5 > > 6 > > 7 > > 8 > > > > Any thoughts as to what might explain this behavior?
> > > > Best, > > > > Matt > > > > _______________________________________________ > > undertow-dev mailing list > > undertow-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/undertow-dev > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/82e45fd1/attachment-0001.html From barnett at rice.edu Wed Jul 25 15:23:06 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Wed, 25 Jul 2018 14:23:06 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: I've been able to observe 1...8 on Red Hat by adding the following statements to my handler (and setting the worker thread pool size to 8): @Override public void handleRequest(HttpServerExchange httpServerExchange) throws Exception { if (httpServerExchange.isInIoThread()) { httpServerExchange.dispatch(this); return; } ... } I have a few questions about this technique though: 1.) How are dispatch actions mapped onto worker threads? New connections were not mapped to available idle IO threads, so is it possible dispatches also won't be mapped to available idle worker threads but instead queued for currently busy threads? 2.) The Undertow documentation states that HttpServerExchange is not thread-safe. However the documentation states that dispatch(...) has happens-before semantics with respect to the worker thread accessing httpServerExchange.
Assuming that an IO thread will be responsible for writing the http response back to the client, what steps do I need to take in the body of handleRequest to ensure that my writes to httpServerExchange in the worker thread are observed by the IO thread responsible for transmitting the response to the client? Is invoking httpServerExchange.endExchange(); in the worker thread as the final statement sufficient? -- Matt On 7/25/2018 11:26 AM, R. Matt Barnett wrote: > Corrected test to resolve test/set race. > > > https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa > > > I've also discovered this morning that I *can* see 1-8 printed on Red > Hat when I generate load using ab from Windows, but only 1-4 when > running ab on Red Hat (both locally and from a remote server). I'm > wondering if perhaps there is some sort of connection reuse shenanigans > going on. My assumption of the use of the -c 8 parameter was "make 8 > sockets" but maybe not. I'll dig in and report back. > > > -- Matt > > > On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >> Hello, >> >> I'm experiencing an Undertow performance issue I fail to understand. I >> am able to reproduce the issue with the code linked bellow. The problem >> is that on Red Hat (and not Windows) I'm unable to concurrently process >> more than 4 overlapping requests even with 8 configured IO Threads. >> For example, if I run the following program (1 file, 55 lines): >> >> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> >> ... on Red Hat and then send requests to the server using Apache >> Benchmark... >> >> > ab -n 1000 -c 8 localhost:8080/ >> >> I see the following output from the Undertow process: >> >> Server started on port 8080 >> >> 1 >> 2 >> 3 >> 4 >> >> I believe this demonstrates that only 4 requests are ever processed in >> parallel. I would expect 8. In fact, when I run the same experiment on >> Windows I see the expected output of >> >>
Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> 5 >> 6 >> 7 >> 8 >> >> Any thoughts as to what might explain this behavior? >> >> Best, >> >> Matt >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/b48ea6a1/attachment.html From sdouglas at redhat.com Wed Jul 25 19:24:48 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 26 Jul 2018 09:24:48 +1000 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: The mapping is done by a hash of the remote IP+port. It sounds like maybe this machine is allocating ports in a way that does not map well to our hash. Because the remote IP is the same it is really only the port that comes into effect. The algorithm is in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would simplify down to: (((C1 * 23) + P) * 23 + C2) % 8 Where C1 is a hash of the remote IP, and C2 is a hash of the local IP+port combo. Stuart On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett wrote: > I did. I set the concurrency level of ab to 128. I still see only 4 > overlaps: > > $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & > > Server started on port 8080 > 1 > 2 > 3 > 4 > > $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED > 126 > > > What is the algorithm for mapping connections to IO threads? As a new > Undertow user I had assumed round robin, but it sounds like this is not the > case.
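[Editor's note: Stuart's simplified formula can be simulated in isolation to see why the client's ephemeral-port allocation pattern matters so much here. The sketch below is not XNIO code; C1 and C2 are arbitrary stand-ins for the IP hashes, and distinctBuckets is an invented helper. It shows that the stride between successive client ports decides how many of the 8 IO-thread buckets are reachable at all.]

```java
// Sketch (not Undertow/XNIO source): simulates the simplified
// connection-to-IO-thread hash (((C1 * 23) + P) * 23 + C2) % 8
// for a run of client ports, and counts how many distinct IO
// threads those connections would land on.
import java.util.HashSet;
import java.util.Set;

public class ThreadHashDemo {

    static int bucket(int c1, int port, int c2, int ioThreads) {
        return Math.floorMod(((c1 * 23) + port) * 23 + c2, ioThreads);
    }

    // Distinct IO threads hit by `count` connections whose client ports
    // start at `firstPort` and increase by `stride` each time.
    static int distinctBuckets(int firstPort, int stride, int count) {
        Set<Integer> buckets = new HashSet<>();
        for (int k = 0; k < count; k++) {
            // 12345 / 54321 are arbitrary stand-ins for the IP hashes;
            // they only shift the result, they don't change the spread.
            buckets.add(bucket(12345, firstPort + k * stride, 54321, 8));
        }
        return buckets.size();
    }

    public static void main(String[] args) {
        // Ports allocated one apart reach all 8 IO threads...
        System.out.println("stride 1: " + distinctBuckets(50000, 1, 100)); // 8
        // ...but ports allocated two apart only ever reach 4 of them.
        System.out.println("stride 2: " + distinctBuckets(50000, 2, 100)); // 4
    }
}
```

With this hash, a port stride of 1 contributes +23 per connection, and since gcd(23, 8) = 1 all 8 buckets are reachable; a stride of 2 contributes +46, and since 46 mod 8 = 6 and gcd(6, 8) = 2 only 4 buckets are ever reachable, which would match seeing exactly 4 of 8 IO threads used.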
> > > -- Matt > > On 7/25/2018 11:49 AM, Bill O'Neil wrote: > > Did you try setting the concurrency level much higher than 8 like I > suggested earlier? You are probably having multiple connections assigned to > the same IO threads. > > On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett > wrote: > >> Corrected test to resolve test/set race. >> >> >> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >> >> >> I've also discovered this morning that I *can* see 1-8 printed on Red >> Hat when I generate load using ab from Windows, but only 1-4 when >> running ab on Red Hat (both locally and from a remote server). I'm >> wondering if perhaps there is some sort of connection reuse shenanigans >> going on. My assumption of the use of the -c 8 parameter was "make 8 >> sockets" but maybe not. I'll dig in and report back. >> >> >> -- Matt >> >> >> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >> > Hello, >> > >> > I'm experiencing an Undertow performance issue I fail to understand. I >> > am able to reproduce the issue with the code linked bellow. The problem >> > is that on Red Hat (and not Windows) I'm unable to concurrently process >> > more than 4 overlapping requests even with 8 configured IO Threads. >> > For example, if I run the following program (1 file, 55 lines): >> > >> > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> > >> > ... on Red Hat and then send requests to the server using Apache >> > Benchmark... >> > >> > > ab -n 1000 -c 8 localhost:8080/ >> > >> > I see the following output from the Undertow process: >> > >> > Server started on port 8080 >> > >> > 1 >> > 2 >> > 3 >> > 4 >> > >> > I believe this demonstrates that only 4 requests are ever processed in >> > parallel. I would expect 8. 
In fact, when I run the same experiment on >> > Windows I see the expected output of >> > >> > Server started on port 8080 >> > 1 >> > 2 >> > 3 >> > 4 >> > 5 >> > 6 >> > 7 >> > 8 >> > >> > Any thoughts as to what might explain this behavior? >> > >> > Best, >> > >> > Matt >> > >> > _______________________________________________ >> > undertow-dev mailing list >> > undertow-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180726/56cc0502/attachment.html From sdouglas at redhat.com Wed Jul 25 19:31:46 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 26 Jul 2018 09:31:46 +1000 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: On Thu, Jul 26, 2018 at 5:27 AM R. Matt Barnett wrote: > I've been able to observe 1...8 on Red Hat by adding the following > statements to my handler (and setting the worker thread pool size to 8): > > > @Override public void handleRequest(HttpServerExchange httpServerExchange) throws Exception > { > if (httpServerExchange.isInIoThread()) { > httpServerExchange.dispatch(this); > return; > } > ... > } > > I have a few questions about this technique though: > > 1.) How are dispatch actions mapped onto worker threads? New connections were not mapped to available idle IO threads, so is it possible dispatches also won't be mapped to available idle worker threads but instead queued for currently > busy threads?
> > IO threads are tied to the connection. Once a connection has been accepted only that IO thread will be used to service it. This avoids contention from having a larger number of IO threads waiting on a single selector. The worker thread pool is basically just a normal executor, that will run tasks in a FIFO manner. > > 2.) The Undertow documentation states that HttpServerExchange is not thread-safe. However the documentation states that dispatch(...) has happens-before semantics with respect to the worker thread accessing httpServerExchange. > That would seem to make it ok to read from httpServerExchange in the worker thread. Assuming that an IO thread will be responsible for writing the http response back to the client, what steps do I need to take in the body > of handleRequest to ensure that my writes to httpServerExchange in the worker thread are observed by the IO thread responsible for transmitting the response to the client? Is invoking httpServerExchange.endExchange(); in the > worker thread as the final statement sufficient? > > Not all writes are done from the IO thread. For instance if you use blocking IO and are using a Stream then the writes are done from the worker. If you use the Sender to perform async IO then the initial write is done from the original thread, and the IO thread is only involved if the response is too large to just write out immediately. In this case though the Sender will take care of the thread safety aspects, as the underlying SelectionKey will not have its interest ops set until after the current stack has returned. Basically if you call dispatch(), or perform an action that requires async IO nothing happens immediately, it just sets a flag in the HttpServerExchange. Once the call stack returns (i.e.
the current thread is done) one of three things will happen: - If dispatch was called the dispatch task will be run in an executor - If async IO was required the underlying SelectionKey will have its interest ops modified, so the IO thread can perform the IO - If neither of the above happened then the exchange is ended. Stuart > > -- Matt > > On 7/25/2018 11:26 AM, R. Matt Barnett wrote: > > Corrected test to resolve test/set race. > > https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa > > > I've also discovered this morning that I *can* see 1-8 printed on Red > Hat when I generate load using ab from Windows, but only 1-4 when > running ab on Red Hat (both locally and from a remote server). I'm > wondering if perhaps there is some sort of connection reuse shenanigans > going on. My assumption of the use of the -c 8 parameter was "make 8 > sockets" but maybe not. I'll dig in and report back. > > > -- Matt > > > On 7/24/2018 6:56 PM, R. Matt Barnett wrote: > > Hello, > > I'm experiencing an Undertow performance issue I fail to understand. I > am able to reproduce the issue with the code linked bellow. The problem > is that on Red Hat (and not Windows) I'm unable to concurrently process > more than 4 overlapping requests even with 8 configured IO Threads. > For example, if I run the following program (1 file, 55 lines): > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 > > ... on Red Hat and then send requests to the server using Apache > Benchmark... > > > ab -n 1000 -c 8 localhost:8080/ > > I see the following output from the Undertow process: > > Server started on port 8080 > > 1 > 2 > 3 > 4 > > I believe this demonstrates that only 4 requests are ever processed in > parallel. I would expect 8. In fact, when I run the same experiment on > Windows I see the expected output of > > Server started on port 8080 > 1 > 2 > 3 > 4 > 5 > 6 > 7 > 8 > > Any thoughts as to what might explain this behavior? 
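[Editor's note: the three outcomes Stuart lists can be sketched as a toy state machine in plain Java. This is an illustration of the contract only, not Undertow's implementation; every name here (MiniExchange, runHandler, and so on) is invented. The key point it demonstrates is that dispatch() does nothing immediately except record a task, and only after the handler's call stack unwinds does the caller either hand off to the worker executor or end the exchange.]

```java
// Illustration only -- a minimal model of the dispatch contract described
// above, not Undertow source. All names are invented for this sketch.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class DispatchSketch {

    static class MiniExchange {
        volatile Runnable dispatchTask; // set by dispatch(), run only after the handler returns
        volatile boolean ended;

        // dispatch() does nothing immediately; it just records what to run later
        void dispatch(Runnable task) { this.dispatchTask = task; }

        void endExchange() { ended = true; }
    }

    // Models the IO thread invoking a handler: only once the handler has
    // returned is the dispatch task submitted (or the exchange ended).
    static void runHandler(MiniExchange exchange, ExecutorService worker,
                           Consumer<MiniExchange> handler) {
        handler.accept(exchange);            // handler runs; may call dispatch()
        if (exchange.dispatchTask != null) { // dispatch requested -> hand off to worker
            worker.submit(exchange.dispatchTask);
        } else {                             // nothing requested -> exchange is ended
            exchange.endExchange();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService worker = Executors.newFixedThreadPool(1);
        MiniExchange ex = new MiniExchange();
        // Handler requests a dispatch; the task runs on the worker afterwards.
        runHandler(ex, worker, e -> e.dispatch(e::endExchange));
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("ended=" + ex.ended); // ended=true
    }
}
```

The real exchange also covers the async-IO case (interest ops modified so the IO thread performs the IO), which this sketch omits for brevity.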
> > Best, > > Matt > > _______________________________________________ > undertow-dev mailing list undertow-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/undertow-dev > > _______________________________________________ > undertow-dev mailing list undertow-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/undertow-dev > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180726/8b8c108a/attachment-0001.html From barnett at rice.edu Wed Jul 25 19:54:12 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Wed, 25 Jul 2018 18:54:12 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: <8825690f-22a0-ec07-c335-069c894b0831@rice.edu> So just to be clear, am I correct in my understanding that it is safe to invoke any method on the Sender instance returned by getResponseSender() from a Worker Thread without any extra threading considerations? Can you confirm this is true as well for the HeaderMap instance returned by getResponseHeaders() and also for invocations of setStatusCode(...)? Thanks. On 7/25/2018 6:31 PM, Stuart Douglas wrote: > > > On Thu, Jul 26, 2018 at 5:27 AM R. Matt Barnett > wrote: > > I've been able to observe 1...8 on Red Hat by adding the following > statements to my handler (and setting the worker thread pool size > to 8): > > > @Override public void handleRequest(HttpServerExchange httpServerExchange) throws Exception > { > if (httpServerExchange.isInIoThread()) { > httpServerExchange.dispatch(this); > return; > } > ... > } > > I have a few questions about this technique though: > > 1.) How are dispatch actions mapped onto worker threads?
New connections were not mapped to available idle IO threads, so is it possible dispatches also won't be mapped to available idle worker threads but instead queued for currently > busy threads? > > > IO threads are tied to the connection. Once a connection has been > accepted only that IO thread will be used to service it. This avoids > contention from having a larger number of IO threads waiting on a > single selector. The worker thread pool is basically just a normal > executor, that will run tasks in a FIFO manner. > > 2.) The Undertow documentation states that HttpServerExchange is not thread-safe. However the documentation states that dispatch(...) has happens-before semantics with respect to the worker thread accessing httpServerExchange. > That would seem to make it ok to read from httpServerExchange in the worker thread. Assuming that an IO thread will be responsible for writing the http response back to the client, what steps do I need to take in the body > of handleRequest to ensure that my writes to httpServerExchange in the worker thread are observed by the IO thread responsible for transmitting the response to the client? Is invoking httpServerExchange.endExchange(); in the > worker thread as the final statement sufficient? > > > Not all writes are done from the IO thread. For instance if you use > blocking IO and are using a Stream then the writes are done from the > worker. > > If you use the Sender to perform async IO then the initial write is > done from the original thread, and the IO thread is only involved if > the response is too large to just write out immediately. In this case > though the Sender will take care of the thread safety aspects, as the > underlying SelectionKey will not have its interest ops set until after > the current stack has returned. > > Basically if you call dispatch(), or perform an action that requires > async IO nothing happens immediately, it just sets a flag in the > HttpServerExchange.
Once the call stack returns (i.e. the current > thread is done) one of three things will happen: > - If dispatch was called the dispatch task will be run in an executor > - If async IO was required the underlying SelectionKey will have its > interest ops modified, so the IO thread can perform the IO > - If neither of the above happened then the exchange is ended. > > Stuart > > > -- Matt > > On 7/25/2018 11:26 AM, R. Matt Barnett wrote: >> Corrected test to resolve test/set race. >> >> >> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >> >> >> I've also discovered this morning that I *can* see 1-8 printed on Red >> Hat when I generate load using ab from Windows, but only 1-4 when >> running ab on Red Hat (both locally and from a remote server). I'm >> wondering if perhaps there is some sort of connection reuse shenanigans >> going on. My assumption of the use of the -c 8 parameter was "make 8 >> sockets" but maybe not. I'll dig in and report back. >> >> >> -- Matt >> >> >> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >>> Hello, >>> >>> I'm experiencing an Undertow performance issue I fail to understand. I >>> am able to reproduce the issue with the code linked bellow. The problem >>> is that on Red Hat (and not Windows) I'm unable to concurrently process >>> more than 4 overlapping requests even with 8 configured IO Threads. >>> For example, if I run the following program (1 file, 55 lines): >>> >>> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >>> >>> ... on Red Hat and then send requests to the server using Apache >>> Benchmark... >>> >>> > ab -n 1000 -c 8 localhost:8080/ >>> >>> I see the following output from the Undertow process: >>> >>> Server started on port 8080 >>> >>> 1 >>> 2 >>> 3 >>> 4 >>> >>> I believe this demonstrates that only 4 requests are ever processed in >>> parallel. I would expect 8.
In fact, when I run the same experiment on >>> Windows I see the expected output of >>> >>> Server started on port 8080 >>> 1 >>> 2 >>> 3 >>> 4 >>> 5 >>> 6 >>> 7 >>> 8 >>> >>> Any thoughts as to what might explain this behavior? >>> >>> Best, >>> >>> Matt >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/8582ef69/attachment.html From sdouglas at redhat.com Wed Jul 25 21:36:34 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 26 Jul 2018 11:36:34 +1000 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: <8825690f-22a0-ec07-c335-069c894b0831@rice.edu> References: <8825690f-22a0-ec07-c335-069c894b0831@rice.edu> Message-ID: Yes. The worker thread will not start executing the handler until the current stack has returned, at which point the IO thread will no longer have anything to do with the exchange (it has handed it off to the worker thread). You can then invoke any method, as anything that will involve the IO thread is deferred until after the worker thread has returned. Stuart On Thu, Jul 26, 2018 at 9:54 AM R. Matt Barnett wrote: > So just to be clear, am I correct in my understanding that it is safe to > invoke any method on the Sender instance returned by getResponseSender() > from a Worker Thread without any extra threading considerations?
Can you > confirm this is true as well for the HeaderMap instance returned by > getResponseHeaders() and also for invocations of setStatusCode(...) ? > > Thanks. > > On 7/25/2018 6:31 PM, Stuart Douglas wrote: > > > > On Thu, Jul 26, 2018 at 5:27 AM R. Matt Barnett wrote: > >> I've been able to observe 1...8 on Red Hat by adding the following >> statements to my handler (and setting the worker thread pool size to 8): >> >> >> @Override public void handleRequest(HttpServerExchange httpServerExchange) throws Exception >> { >> if (httpServerExchange.isInIoThread()) { >> httpServerExchange.dispatch(this); >> return; >> } >> ... >> } >> >> I have a few questions about this technique though: >> >> 1.) How are dispatch actions mapped onto worker threads? New connections were not mapped to available idle IO threads, so is it possible dispatches also won't be mapped to available idle worker threads but instead queued for currently >> busy threads? >> >> > IO threads are tied to the connection. Once a connection has been accepted > only that IO thread will be used to service it. This avoids contention from > having a larger number of IO threads waiting on a single selector. The > worker thread pool is basically just a normal executor, that will run tasks > in a FIFO manner. > > >> 2.) The Undertow documentation states that HttpServerExchange is not thread-safe. However the documentation states that dispatch(...) has happens-before semantics with respect to the worker thread accessing httpServerExchange. >> That would seem to make it ok to read from httpServerExchange in the worker thread. Assuming that an IO thread will be responsible for writing the http response back to the client, what steps do I need to take in the body >> of handleRequest to ensure that my writes to httpServerExchange in the worker thread are observed by the IO thread responsible for transmitting the response to the client?
Is invoking httpServerExchange.endExchange(); in the >> worker thread as the final statement sufficient? >> >> > Not all writes are done from the IO thread. For instance if you use > blocking IO and are using a Stream then the writes are done from the worker. > > If you use the Sender to perform async IO then the initial write is done > from the original thread, and the IO thread is only involved if the > response is too larger to just write out immediately. In this case though > the Sender will take care of the thread safety aspects, as the underlying > SelectionKey will not have its interest ops set until after the current > stack has returned. > > Basically if you call dispatch(), or perform an action that requires async > IO nothing happens immediately, it just sets a flag in the > HttpServerExchange. Once the call stack returns (i.e. the current thread is > done) one of three things will happen: > - If dispatch was called the dispatch task will be run in an executor > - If async IO was required the underlying SelectionKey will have its > interest ops modified, so the IO thread can perform the IO > - If neither of the above happened then the exchange is ended. > > Stuart > > > >> >> -- Matt >> >> On 7/25/2018 11:26 AM, R. Matt Barnett wrote: >> >> Corrected test to resolve test/set race. >> >> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >> >> >> I've also discovered this morning that I *can* see 1-8 printed on Red >> Hat when I generate load using ab from Windows, but only 1-4 when >> running ab on Red Hat (both locally and from a remote server). I'm >> wondering if perhaps there is some sort of connection reuse shenanigans >> going on. My assumption of the use of the -c 8 parameter was "make 8 >> sockets" but maybe not. I'll dig in and report back. >> >> >> -- Matt >> >> >> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >> >> Hello, >> >> I'm experiencing an Undertow performance issue I fail to understand. 
I >> am able to reproduce the issue with the code linked bellow. The problem >> is that on Red Hat (and not Windows) I'm unable to concurrently process >> more than 4 overlapping requests even with 8 configured IO Threads. >> For example, if I run the following program (1 file, 55 lines): >> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> >> ... on Red Hat and then send requests to the server using Apache >> Benchmark... >> >> > ab -n 1000 -c 8 localhost:8080/ >> >> I see the following output from the Undertow process: >> >> Server started on port 8080 >> >> 1 >> 2 >> 3 >> 4 >> >> I believe this demonstrates that only 4 requests are ever processed in >> parallel. I would expect 8. In fact, when I run the same experiment on >> Windows I see the expected output of >> >> Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> 5 >> 6 >> 7 >> 8 >> >> Any thoughts as to what might explain this behavior? >> >> Best, >> >> Matt >> >> _______________________________________________ >> undertow-dev mailing list undertow-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> _______________________________________________ >> undertow-dev mailing list undertow-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180726/3743517b/attachment-0001.html From jason.greene at redhat.com Wed Jul 25 22:23:15 2018 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 25 Jul 2018 19:23:15 -0700 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: Could you post a netstat output so we can see what port numbers your host is picking? Also is your backlog setting low by chance? On Jul 25, 2018, at 6:24 PM, Stuart Douglas wrote: The mapping is done by a hash of the remote IP+port. It sounds like maybe this machine is allocating ports in a way that does not map well to our hash. Because the remote IP is the same it is really only the port that comes into effect. The algorithm is in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would simplify down to: (((C1 * 23) + P) * 23 + C2) % 8 Where C1 is a hash of the remote IP, and C2 is a hash of the local IP+port combo. Stuart On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett wrote: > I did. I set the concurrency level of ab to 128. I still see only 4 > overlaps: > > $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & > > Server started on port 8080 > 1 > 2 > 3 > 4 > > $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED > 126 > > > What is the algorithm for mapping connections to IO threads? As a new > Undertow user I had assumed round robin, but it sounds like this is not the > case. > > > -- Matt > > On 7/25/2018 11:49 AM, Bill O'Neil wrote: > > Did you try setting the concurrency level much higher than 8 like I > suggested earlier? You are probably having multiple connections assigned to > the same IO threads. > > On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett > wrote: > >> Corrected test to resolve test/set race. 
>> >> >> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >> >> >> I've also discovered this morning that I *can* see 1-8 printed on Red >> Hat when I generate load using ab from Windows, but only 1-4 when >> running ab on Red Hat (both locally and from a remote server). I'm >> wondering if perhaps there is some sort of connection reuse shenanigans >> going on. My assumption of the use of the -c 8 parameter was "make 8 >> sockets" but maybe not. I'll dig in and report back. >> >> >> -- Matt >> >> >> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >> > Hello, >> > >> > I'm experiencing an Undertow performance issue I fail to understand. I >> > am able to reproduce the issue with the code linked bellow. The problem >> > is that on Red Hat (and not Windows) I'm unable to concurrently process >> > more than 4 overlapping requests even with 8 configured IO Threads. >> > For example, if I run the following program (1 file, 55 lines): >> > >> > https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >> > >> > ... on Red Hat and then send requests to the server using Apache >> > Benchmark... >> > >> > > ab -n 1000 -c 8 localhost:8080/ >> > >> > I see the following output from the Undertow process: >> > >> > Server started on port 8080 >> > >> > 1 >> > 2 >> > 3 >> > 4 >> > >> > I believe this demonstrates that only 4 requests are ever processed in >> > parallel. I would expect 8. In fact, when I run the same experiment on >> > Windows I see the expected output of >> > >> > Server started on port 8080 >> > 1 >> > 2 >> > 3 >> > 4 >> > 5 >> > 6 >> > 7 >> > 8 >> > >> > Any thoughts as to what might explain this behavior? 
>> > >> > Best, >> > >> > Matt >> > >> > _______________________________________________ >> > undertow-dev mailing list >> > undertow-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev _______________________________________________ undertow-dev mailing list undertow-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180725/8ff04624/attachment.html From barnett at rice.edu Thu Jul 26 14:33:51 2018 From: barnett at rice.edu (R. Matt Barnett) Date: Thu, 26 Jul 2018 13:33:51 -0500 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: Backlog setting is 1000. Is this what you are interested in from netstat? This was for ab with a -c of 50. [barnett at apigateway_test ~]$ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & [1] 7329 [barnett at apigateway_test ~]$ Jul 26, 2018 1:30:22 PM org.xnio.Xnio INFO: XNIO version 3.3.8.Final Jul 26, 2018 1:30:23 PM org.xnio.nio.NioXnio INFO: XNIO NIO Implementation Version 3.3.8.Final Server started on port 8080 1 2 3 4 [barnett at apigateway_test ~]$ netstat -t | grep apigateway_loadge | grep ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51580 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51614 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51622 ESTABLISHED tcp6 97
0 apigateway_tes:webcache apigateway_loadge:51626 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51612 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51578 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51636 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51616 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51582 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51556 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51588 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51558 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51586 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51648 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51632 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51652 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51654 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51574 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51640 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51564 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51590 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51610 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51594 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51592 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51568 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51620 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51598 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51600 ESTABLISHED tcp6 97
0 apigateway_tes:webcache apigateway_loadge:51584 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51630 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51596 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51566 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51650 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51656 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51624 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51662 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51642 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51604 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51608 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51634 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51658 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51628 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51660 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51572 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51606 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51602 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51638 ESTABLISHED tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51570 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51618 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51646 ESTABLISHED tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51644 ESTABLISHED tcp6 97
0 apigateway_tes:webcache apigateway_loadge:51576 ESTABLISHED On 7/25/2018 9:23 PM, Jason Greene wrote: > Could you post a netstat output so we can see what port numbers your > host is picking? > > Also is your backlog setting low by chance? > > On Jul 25, 2018, at 6:24 PM, Stuart Douglas > wrote: > >> The mapping is done by a hash of the remote IP+port. It sounds like >> maybe this machine is allocating ports in a way that does not map >> well to our hash. >> >> Because the remote IP is the same it is really only the port that >> comes into effect. The algorithm is >> in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would >> simplify down to: >> >> (((C1 * 23) + P) * 23 + C2) % 8 >> >> Where C1 is a hash of the remote IP, and C2 is a hash of the local >> IP+port combo. >> >> Stuart >> >> On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett > > wrote: >> >> I did. I set the concurrency level of ab to 128. I still see only >> 4 overlaps: >> >> $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & >> >> Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> >> $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED >> 126 >> >> >> What is the algorithm for mapping connections to IO threads? As >> a new Undertow user I had assumed round robin, but it sounds like >> this is not the case. >> >> >> -- Matt >> >> >> On 7/25/2018 11:49 AM, Bill O'Neil wrote: >>> Did you try setting the concurrency level much higher than 8 >>> like I suggested earlier? You are probably having multiple >>> connections assigned to the same IO threads. >>> >>> On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett >>> > wrote: >>> >>> Corrected test to resolve test/set race.
>>> >>> >>> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >>> >>> >>> I've also discovered this morning that I *can* see 1-8 >>> printed on Red >>> Hat when I generate load using ab from Windows, but only 1-4 >>> when >>> running ab on Red Hat (both locally and from a remote >>> server). I'm >>> wondering if perhaps there is some sort of connection reuse >>> shenanigans >>> going on. My assumption of the use of the -c 8 parameter >>> was "make 8 >>> sockets" but maybe not. I'll dig in and report back. >>> >>> >>> -- Matt >>> >>> >>> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >>> > Hello, >>> > >>> > I'm experiencing an Undertow performance issue I fail to >>> understand. I >>> > am able to reproduce the issue with the code linked >>> below. The problem >>> > is that on Red Hat (and not Windows) I'm unable to >>> concurrently process >>> > more than 4 overlapping requests even with 8 configured IO >>> Threads. >>> > For example, if I run the following program (1 file, 55 >>> lines): >>> > >>> > >>> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >>> > >>> > ... on Red Hat and then send requests to the server using >>> Apache >>> > Benchmark... >>> > >>> > > ab -n 1000 -c 8 localhost:8080/ >>> > >>> > I see the following output from the Undertow process: >>> > >>> > Server started on port 8080 >>> > >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > >>> > I believe this demonstrates that only 4 requests are ever >>> processed in >>> > parallel. I would expect 8. In fact, when I run the same >>> experiment on >>> > Windows I see the expected output of >>> > >>> > Server started on port 8080 >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > 5 >>> > 6 >>> > 7 >>> > 8 >>> > >>> > Any thoughts as to what might explain this behavior?
>>> > >>> > Best, >>> > >>> > Matt >>> > >>> > _______________________________________________ >>> > undertow-dev mailing list >>> > undertow-dev at lists.jboss.org >>> >>> > https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180726/56648a20/attachment-0001.html From sdouglas at redhat.com Thu Jul 26 20:13:57 2018 From: sdouglas at redhat.com (Stuart Douglas) Date: Fri, 27 Jul 2018 10:13:57 +1000 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: They are all even numbers :-( This does not play well with our hash if C1 is also even: (((C1 * 23) + P) * 23 + C2) % 8 If C1 is even then C1 * 23 is even, and since every port P above is also even, (C1 * 23) + P is even, which means ((C1 * 23) + P) * 23 is even. Depending on the parity of C2 this means the result is always even or always odd, so with an even number of threads you are only ever going to allocate to half of them. The good news is this should be easily fixed by using an odd number of IO threads, but we probably should revisit this. Stuart On Fri, Jul 27, 2018 at 4:34 AM R. Matt Barnett wrote: > Backlog setting is 1000. > > Is this what you are interested in from netstat? This was for ab with a > -c of 50.
> > > [barnett at apigateway_test ~]$ java -jar > undertow-test-0.1.0-jar-with-dependencies.jar & > [1] 7329 > [barnett at apigateway_test ~]$ Jul 26, 2018 1:30:22 PM org.xnio.Xnio > > INFO: XNIO version 3.3.8.Final > Jul 26, 2018 1:30:23 PM org.xnio.nio.NioXnio > INFO: XNIO NIO Implementation Version 3.3.8.Final > > > Server started on port 8080 > 1 > 2 > 3 > 4 > [barnett at apigateway_test ~]$ netstat -t | grep apigateway_loadge | grep > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51580 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51614 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51622 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51626 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51612 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51578 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51636 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51616 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51582 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51556 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51588 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51558 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51586 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51648 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51632 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51652 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51654 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51574 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51640 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51564 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51590 > 
ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51610 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51594 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51592 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51568 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51620 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51598 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51600 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51584 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51630 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51596 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51566 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51650 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51656 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51624 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51662 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51642 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51604 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51608 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51634 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51658 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51628 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51660 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51572 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51606 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51602 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51638 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51570 > 
ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51618 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51646 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51644 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51576 > ESTABLISHED > > On 7/25/2018 9:23 PM, Jason Greene wrote: > > Could you post a netstat output so we can see what port numbers your host > is picking? > > Also is your backlog setting low by chance? > > On Jul 25, 2018, at 6:24 PM, Stuart Douglas wrote: > > The mapping is done by a hash of the remote IP+port. It sounds like maybe > this machine is allocating ports in a way that does not map well to our > hash. > > Because the remote IP is the same it is really only the port that comes > into effect. The algorithm is > in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would > simplify down to: > > (((C1 * 23) + P) * 23 + C2) % 8 > > Where C1 is a hash of the remote IP, and C2 is a hash of the local IP+port > combo. > > Stuart > > On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett wrote: > >> I did. I set the concurrency level of ab to 128. I still see only 4 >> overlaps: >> >> $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & >> >> Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> >> $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED >> 126 >> >> >> What is the algorithm for mapping connections to IO threads? As a new >> Undertow user I had assumed round robin, but it sounds like this is not the >> case. >> >> >> -- Matt >> >> On 7/25/2018 11:49 AM, Bill O'Neil wrote: >> >> Did you try setting the concurrency level much higher than 8 like I >> suggested earlier? You are probably having multiple connections assigned to >> the same IO threads. >> >> On Wed, Jul 25, 2018 at 12:26 PM, R. Matt Barnett >> wrote: >> >>> Corrected test to resolve test/set race. 
>>> >>> >>> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >>> >>> >>> I've also discovered this morning that I *can* see 1-8 printed on Red >>> Hat when I generate load using ab from Windows, but only 1-4 when >>> running ab on Red Hat (both locally and from a remote server). I'm >>> wondering if perhaps there is some sort of connection reuse shenanigans >>> going on. My assumption of the use of the -c 8 parameter was "make 8 >>> sockets" but maybe not. I'll dig in and report back. >>> >>> >>> -- Matt >>> >>> >>> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >>> > Hello, >>> > >>> > I'm experiencing an Undertow performance issue I fail to understand. I >>> > am able to reproduce the issue with the code linked bellow. The problem >>> > is that on Red Hat (and not Windows) I'm unable to concurrently process >>> > more than 4 overlapping requests even with 8 configured IO Threads. >>> > For example, if I run the following program (1 file, 55 lines): >>> > >>> > >>> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >>> > >>> > ... on Red Hat and then send requests to the server using Apache >>> > Benchmark... >>> > >>> > > ab -n 1000 -c 8 localhost:8080/ >>> > >>> > I see the following output from the Undertow process: >>> > >>> > Server started on port 8080 >>> > >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > >>> > I believe this demonstrates that only 4 requests are ever processed in >>> > parallel. I would expect 8. In fact, when I run the same experiment >>> on >>> > Windows I see the expected output of >>> > >>> > Server started on port 8080 >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > 5 >>> > 6 >>> > 7 >>> > 8 >>> > >>> > Any thoughts as to what might explain this behavior? 
>>> > >>> > Best, >>> > >>> > Matt >>> > >>> > _______________________________________________ >>> > undertow-dev mailing list >>> > undertow-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180727/d69fb8c0/attachment-0001.html From jason.greene at redhat.com Thu Jul 26 22:11:05 2018 From: jason.greene at redhat.com (Jason Greene) Date: Thu, 26 Jul 2018 19:11:05 -0700 Subject: [undertow-dev] Unable to concurrently use all available IO Threads under load on Red Hat In-Reply-To: References: Message-ID: Looks like we need to tweak the hash: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=07f4c90062f8fc7c8c26f8f95324cbe8fa3145a5 On Jul 26, 2018, at 7:13 PM, Stuart Douglas wrote: They are all even numbers :-( This does not play well with our hash if C1 is also even: (((C1 * 23) + P) * 23 + C2) % 8 If C1 is even the C1 * 23 is even. This means ((C1 * 23) + P) * 23 is even. Depending on the value of C2 this means the result is always even or always odd, so with an evenly divisible number of threads you are only ever going to allocate to half of them. The good news is this should be easily fixed by using an odd number of IO threads, but we probably should revisit this. Stuart On Fri, Jul 27, 2018 at 4:34 AM R. 
Matt Barnett wrote: > Backlog setting is 1000. > > Is this what you are interested in from netstat? This was for ab with a > -c of 50. > > > [barnett at apigateway_test ~]$ java -jar > undertow-test-0.1.0-jar-with-dependencies.jar & > [1] 7329 > [barnett at apigateway_test ~]$ Jul 26, 2018 1:30:22 PM org.xnio.Xnio > > INFO: XNIO version 3.3.8.Final > Jul 26, 2018 1:30:23 PM org.xnio.nio.NioXnio > INFO: XNIO NIO Implementation Version 3.3.8.Final > > > Server started on port 8080 > 1 > 2 > 3 > 4 > [barnett at apigateway_test ~]$ netstat -t | grep apigateway_loadge | grep > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51580 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51614 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51622 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51626 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51612 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51578 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51636 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51616 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51582 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51556 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51588 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51558 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51586 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51648 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51632 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51652 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51654 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51574 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51640 > ESTABLISHED > 
tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51564 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51590 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51610 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51594 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51592 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51568 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51620 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51598 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51600 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51584 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51630 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51596 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51566 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51650 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51656 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51624 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51662 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51642 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51604 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51608 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51634 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51658 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51628 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51660 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51572 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51606 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51602 > ESTABLISHED > tcp6 
97 0 apigateway_tes:webcache apigateway_loadge:51638 > ESTABLISHED > tcp6 0 0 apigateway_tes:webcache apigateway_loadge:51570 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51618 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51646 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51644 > ESTABLISHED > tcp6 97 0 apigateway_tes:webcache apigateway_loadge:51576 > ESTABLISHED > > On 7/25/2018 9:23 PM, Jason Greene wrote: > > Could you post a netstat output so we can see what port numbers your host > is picking? > > Also is your backlog setting low by chance? > > On Jul 25, 2018, at 6:24 PM, Stuart Douglas wrote: > > The mapping is done by a hash of the remote IP+port. It sounds like maybe > this machine is allocating ports in a way that does not map well to our > hash. > > Because the remote IP is the same it is really only the port that comes > into effect. The algorithm is > in org.xnio.nio.QueuedNioTcpServer#handleReady and in this case would > simplify down to: > > (((C1 * 23) + P) * 23 + C2) % 8 > > Where C1 is a hash of the remote IP, and C2 is a hash of the local IP+port > combo. > > Stuart > > On Thu, Jul 26, 2018 at 3:52 AM R. Matt Barnett wrote: > >> I did. I set the concurrency level of ab to 128. I still see only 4 >> overlaps: >> >> $ java -jar undertow-test-0.1.0-jar-with-dependencies.jar & >> >> Server started on port 8080 >> 1 >> 2 >> 3 >> 4 >> >> $ netstat -t | grep apigateway_loadge | grep -c ESTABLISHED >> 126 >> >> >> What is the algorithm for mapping connections to IO threads? As a new >> Undertow user I had assumed round robin, but it sounds like this is not the >> case. >> >> >> -- Matt >> >> On 7/25/2018 11:49 AM, Bill O'Neil wrote: >> >> Did you try setting the concurrency level much higher than 8 like I >> suggested earlier? You are probably having multiple connections assigned to >> the same IO threads. >> >> On Wed, Jul 25, 2018 at 12:26 PM, R. 
Matt Barnett >> wrote: >> >>> Corrected test to resolve test/set race. >>> >>> >>> https://gist.github.com/rmbarnett-rice/1179c4ad1d3344bb247c8b8daed3e4fa >>> >>> >>> I've also discovered this morning that I *can* see 1-8 printed on Red >>> Hat when I generate load using ab from Windows, but only 1-4 when >>> running ab on Red Hat (both locally and from a remote server). I'm >>> wondering if perhaps there is some sort of connection reuse shenanigans >>> going on. My assumption of the use of the -c 8 parameter was "make 8 >>> sockets" but maybe not. I'll dig in and report back. >>> >>> >>> -- Matt >>> >>> >>> On 7/24/2018 6:56 PM, R. Matt Barnett wrote: >>> > Hello, >>> > >>> > I'm experiencing an Undertow performance issue I fail to understand. I >>> > am able to reproduce the issue with the code linked bellow. The problem >>> > is that on Red Hat (and not Windows) I'm unable to concurrently process >>> > more than 4 overlapping requests even with 8 configured IO Threads. >>> > For example, if I run the following program (1 file, 55 lines): >>> > >>> > >>> https://gist.github.com/rmbarnett-rice/668db6b4e9f8f8da7093a3659b6ae2b5 >>> > >>> > ... on Red Hat and then send requests to the server using Apache >>> > Benchmark... >>> > >>> > > ab -n 1000 -c 8 localhost:8080/ >>> > >>> > I see the following output from the Undertow process: >>> > >>> > Server started on port 8080 >>> > >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > >>> > I believe this demonstrates that only 4 requests are ever processed in >>> > parallel. I would expect 8. In fact, when I run the same experiment >>> on >>> > Windows I see the expected output of >>> > >>> > Server started on port 8080 >>> > 1 >>> > 2 >>> > 3 >>> > 4 >>> > 5 >>> > 6 >>> > 7 >>> > 8 >>> > >>> > Any thoughts as to what might explain this behavior? 
>>> > >>> > Best, >>> > >>> > Matt >>> > >>> > _______________________________________________ >>> > undertow-dev mailing list >>> > undertow-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20180726/0dbc52b9/attachment-0001.html
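[Editor's note] The parity argument in this thread is easy to check numerically. Below is a minimal standalone sketch, not XNIO source: the class name is made up, and the constants standing in for C1 (remote-IP hash, chosen even to match the failing case) and C2 (local IP+port hash) are arbitrary assumptions. It evaluates the simplified expression (((C1 * 23) + P) * 23 + C2) % threads over a run of even ephemeral ports like the ones netstat showed:

```java
import java.util.Set;
import java.util.TreeSet;

public class IoThreadHashDemo {
    // Simplified form of the XNIO mapping discussed in the thread:
    // (((C1 * 23) + P) * 23 + C2) % threads
    static int ioThread(int c1, int port, int c2, int threads) {
        return Math.floorMod(((c1 * 23) + port) * 23 + c2, threads);
    }

    public static void main(String[] args) {
        int c1 = 100; // stand-in remote-IP hash (even, as in the failing case)
        int c2 = 42;  // stand-in hash of the local IP+port combo

        // ab's client sockets were allocated only even ephemeral ports
        // (51556, 51558, ..., 51662 in the netstat output).
        Set<Integer> used = new TreeSet<>();
        for (int port = 51556; port <= 51662; port += 2) {
            used.add(ioThread(c1, port, c2, 8));
        }
        // Every result has the same parity, so at most half of 8 threads appear.
        System.out.println("IO threads used (8 threads): " + used);

        // With an odd thread count, step-2 ports cycle through all residues.
        Set<Integer> usedOdd = new TreeSet<>();
        for (int port = 51556; port <= 51662; port += 2) {
            usedOdd.add(ioThread(c1, port, c2, 7));
        }
        System.out.println("IO threads used (7 threads): " + usedOdd);
    }
}
```

With these stand-in values the 8-thread run lands on only the even-numbered workers, while the 7-thread run reaches all seven, which is consistent with the suggested workaround of configuring an odd number of IO threads.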