From kid at bitkid.com Fri Jun 9 08:14:35 2017 From: kid at bitkid.com (Sascha Sadat-Guscheh) Date: Fri, 9 Jun 2017 14:14:35 +0200 Subject: [undertow-dev] how to disable io exception logging Message-ID: <14324060-7567-410F-8483-E7DDBBC79859@bitkid.com> hello undertow developers! we are trying to disable logging of io exceptions like the comment in UndertowLogger.java suggests: /** * Logger used for IO exceptions. Generally these should be suppressed, because they are of little interest, and it is easy for an * attacker to fill up the logs by intentionally causing IO exceptions. */ UndertowLogger REQUEST_IO_LOGGER = Logger.getMessageLogger(UndertowLogger.class, UndertowLogger.class.getPackage().getName() + ".request.io"); so we disable io.undertow.request.io but it seems that there are still io exceptions logged as io.undertow.request, i see log entries like this: io.undertow.request 2017-06-08 23:44:14,898 UT005003: IOException reading from channel java.io.IOException: UT000128: Remote peer closed connection before all data could be read at io.undertow.conduits.FixedLengthStreamSourceConduit.exitRead(FixedLengthStreamSourceConduit.java:338) at io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:255) at org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127) at io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209) at io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2287) at io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.doParse(FormEncodedDataDefinition.java:134) at io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.handleEvent(FormEncodedDataDefinition.java:115 to me it seems that the handler for form encoded post data uses the wrong logger for its io exceptions. best, sascha -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170609/2eb08f89/attachment-0001.html From milesg78 at gmail.com Mon Jun 12 05:06:21 2017 From: milesg78 at gmail.com (Violeta Georgieva) Date: Mon, 12 Jun 2017 12:06:21 +0300 Subject: [undertow-dev] Questions about Undertow's ByteBufferPool Message-ID: Hi, I have few questions about Undertow's ByteBufferPool. Are there any best practices how it should be used? Also are the ByteBuffers always fixed in size? Thanks, Violeta -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170612/a4ee6e38/attachment.html From me at christophsturm.com Mon Jun 12 12:37:17 2017 From: me at christophsturm.com (Christoph Sturm) Date: Mon, 12 Jun 2017 18:37:17 +0200 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection Message-ID: hello undertow developers! We see this exception UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection in our log files, and looking at the undertow source that should never happen. is this something related to our code or is it just some strange behaviour from the client? if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? 
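(for the io.undertow.request.io category we already switch it off
wholesale in our logging config. roughly the programmatic equivalent,
just a sketch and only valid if jboss-logging ends up delegating to
java.util.logging here - with logback/log4j it would be the matching
logger entry instead, and the class/field names below are made up:

import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceUndertowIoLogger {

    // keep a strong reference, JUL only holds configured loggers weakly
    private static final Logger REQUEST_IO = Logger.getLogger("io.undertow.request.io");

    public static void main(String[] args) {
        // assuming the JUL backend: suppresses everything that goes
        // through UndertowLogger.REQUEST_IO_LOGGER
        REQUEST_IO.setLevel(Level.OFF);
    }
}

that silences the whole category though, not a single message id.)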
or is there a jboss-logging way to disable logging for a single error code? thanks chris From sdouglas at redhat.com Tue Jun 13 20:11:43 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Wed, 14 Jun 2017 02:11:43 +0200 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection In-Reply-To: References: Message-ID: Which version of Undertow? There was a recent bug that could potentially cause this (UNDERTOW-1068) , that should be fixed in the 1.4.16.Final release. Stuart On Mon, Jun 12, 2017 at 6:37 PM, Christoph Sturm wrote: > hello undertow developers! > > We see this exception > > UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection > > in our log files, and looking at the undertow source that should never happen. is this something related to our code or is it just some strange behaviour from the client? > if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? > > or is there a jboss-logging way to disable logging for a single error code? > > thanks > chris > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From sdouglas at redhat.com Tue Jun 13 20:21:02 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Wed, 14 Jun 2017 02:21:02 +0200 Subject: [undertow-dev] Questions about Undertow's ByteBufferPool In-Reply-To: References: Message-ID: It should be used anytime you require a large direct buffer (generally for IO operations). Depending on your app you may not need to use them directly at all, although Undertow will always use them internally. The buffers are fixed size which is determined dynamically (unless explicitly specified), generally they will be 16kb as this seems to give optimal IO performance. Stuart On Mon, Jun 12, 2017 at 11:06 AM, Violeta Georgieva wrote: > Hi, > > I have few questions about Undertow's ByteBufferPool. > Are there any best practices how it should be used? > Also are the ByteBuffers always fixed in size? > > Thanks, > Violeta > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From sdouglas at redhat.com Tue Jun 13 20:21:44 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Wed, 14 Jun 2017 02:21:44 +0200 Subject: [undertow-dev] how to disable io exception logging In-Reply-To: <14324060-7567-410F-8483-E7DDBBC79859@bitkid.com> References: <14324060-7567-410F-8483-E7DDBBC79859@bitkid.com> Message-ID: If you file a JIRA I will fix this. Stuart On Fri, Jun 9, 2017 at 2:14 PM, Sascha Sadat-Guscheh wrote: > hello undertow developers! > > we are trying to disable logging of io exceptions like the comment in > UndertowLogger.java suggests: > > /** > * Logger used for IO exceptions. Generally these should be suppressed, > because they are of little interest, and it is easy for an > * attacker to fill up the logs by intentionally causing IO exceptions. 
> */ > UndertowLogger REQUEST_IO_LOGGER = > Logger.getMessageLogger(UndertowLogger.class, > UndertowLogger.class.getPackage().getName() + ".request.io"); > > so we disable io.undertow.request.io > > but it seems that there are still io exceptions logged as > io.undertow.request, i see log entries like this: > io.undertow.request 2017-06-08 23:44:14,898 UT005003: IOException reading > from channel java.io.IOException: UT000128: Remote peer closed connection > before all data could be read > at > io.undertow.conduits.FixedLengthStreamSourceConduit.exitRead(FixedLengthStreamSourceConduit.java:338) > at > io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:255) > at > org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127) > at > io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209) > at > io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2287) > at > io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.doParse(FormEncodedDataDefinition.java:134) > at > io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.handleEvent(FormEncodedDataDefinition.java:115 > > to me it seems that the handler for form encoded post data uses the wrong > logger for its io exceptions. > > best, sascha > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From me at christophsturm.com Wed Jun 14 03:10:42 2017 From: me at christophsturm.com (Christoph Sturm) Date: Wed, 14 Jun 2017 09:10:42 +0200 Subject: [undertow-dev] how to disable io exception logging In-Reply-To: References: <14324060-7567-410F-8483-E7DDBBC79859@bitkid.com> Message-ID: <426BC83F-8E79-4D17-8007-561F6BBFA517@christophsturm.com> I?m a coworker of sascha?s and i filed a pr (#525) for this -chris > On 14 Jun 2017, at 02:21, Stuart Douglas wrote: > > If you file a JIRA I will fix this. > > Stuart > > On Fri, Jun 9, 2017 at 2:14 PM, Sascha Sadat-Guscheh wrote: >> hello undertow developers! >> >> we are trying to disable logging of io exceptions like the comment in >> UndertowLogger.java suggests: >> >> /** >> * Logger used for IO exceptions. Generally these should be suppressed, >> because they are of little interest, and it is easy for an >> * attacker to fill up the logs by intentionally causing IO exceptions. 
>> */ >> UndertowLogger REQUEST_IO_LOGGER = >> Logger.getMessageLogger(UndertowLogger.class, >> UndertowLogger.class.getPackage().getName() + ".request.io"); >> >> so we disable io.undertow.request.io >> >> but it seems that there are still io exceptions logged as >> io.undertow.request, i see log entries like this: >> io.undertow.request 2017-06-08 23:44:14,898 UT005003: IOException reading >> from channel java.io.IOException: UT000128: Remote peer closed connection >> before all data could be read >> at >> io.undertow.conduits.FixedLengthStreamSourceConduit.exitRead(FixedLengthStreamSourceConduit.java:338) >> at >> io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:255) >> at >> org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127) >> at >> io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209) >> at >> io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2287) >> at >> io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.doParse(FormEncodedDataDefinition.java:134) >> at >> io.undertow.server.handlers.form.FormEncodedDataDefinition$FormEncodedDataParser.handleEvent(FormEncodedDataDefinition.java:115 >> >> to me it seems that the handler for form encoded post data uses the wrong >> logger for its io exceptions. >> >> best, sascha >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From me at christophsturm.com Wed Jun 14 03:13:15 2017 From: me at christophsturm.com (Christoph Sturm) Date: Wed, 14 Jun 2017 09:13:15 +0200 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection In-Reply-To: References: Message-ID: <42F39B99-00CD-4B33-94EC-41F079AAACFF@christophsturm.com> this was with the latest version built from the 1.4.x branch. > On 14 Jun 2017, at 02:11, Stuart Douglas wrote: > > Which version of Undertow? There was a recent bug that could > potentially cause this (UNDERTOW-1068) , that should be fixed in the > 1.4.16.Final release. > > Stuart > > On Mon, Jun 12, 2017 at 6:37 PM, Christoph Sturm wrote: >> hello undertow developers! >> >> We see this exception >> >> UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection >> >> in our log files, and looking at the undertow source that should never happen. is this something related to our code or is it just some strange behaviour from the client? >> if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? >> >> or is there a jboss-logging way to disable logging for a single error code? 
>> >> thanks >> chris >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev From sdouglas at redhat.com Wed Jun 14 21:04:40 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 15 Jun 2017 03:04:40 +0200 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection In-Reply-To: <42F39B99-00CD-4B33-94EC-41F079AAACFF@christophsturm.com> References: <42F39B99-00CD-4B33-94EC-41F079AAACFF@christophsturm.com> Message-ID: I had a quick look into this, and I can't really see how this could be generated. Is your application registering its own conduits by any chance? The other thing that seems a bit odd is that the connection is being closed, which is not the default. Are you explicitly setting the close header? Stuart On Wed, Jun 14, 2017 at 9:13 AM, Christoph Sturm wrote: > this was with the latest version built from the 1.4.x branch. >> On 14 Jun 2017, at 02:11, Stuart Douglas wrote: >> >> Which version of Undertow? There was a recent bug that could >> potentially cause this (UNDERTOW-1068) , that should be fixed in the >> 1.4.16.Final release. >> >> Stuart >> >> On Mon, Jun 12, 2017 at 6:37 PM, Christoph Sturm wrote: >>> hello undertow developers! >>> >>> We see this exception >>> >>> UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection >>> >>> in our log files, and looking at the undertow source that should never happen. is this something related to our code or is it just some strange behaviour from the client? >>> if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? >>> >>> or is there a jboss-logging way to disable logging for a single error code? >>> >>> thanks >>> chris >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From milesg78 at gmail.com Fri Jun 16 10:35:55 2017 From: milesg78 at gmail.com (Violeta Georgieva) Date: Fri, 16 Jun 2017 17:35:55 +0300 Subject: [undertow-dev] Questions about Undertow's ByteBufferPool In-Reply-To: References: Message-ID: Hi Stuart, Spring Framework 5 uses Undertow APIs directly, i.e. not going through the Servlet API. When reading the request the ByteBufferPool is obtained from the connection and then the PooledByteBuffer is allocated and used [1]. Also the PooledByteBuffer is closed in ExchangeCompletionListener [2]. So currently the usage is limited only to the request reading. In order to extend the usage of the ByteBufferPool we need to be able to allocate buffers with a concrete size. At the moment (what I saw in the sources) the ByteBufferPool is created with a specific buffer size and the pooled buffers are created with exactly this size. For comparison see the io.netty.buffer.PooledByteBufAllocator in Netty which provides a functionality to allocated buffers with a specific size [3], [4], [5]. So is it possible to achieve the same with the Undertow's ByteBufferPool? 
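To illustrate what I mean, here is just a sketch (the class and method
names and the 8 KB capacity are made up for the example; the Undertow
calls are roughly what we already do in [1], the Netty call is the
allocator from [3]):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.undertow.connector.ByteBufferPool;
import io.undertow.connector.PooledByteBuffer;
import io.undertow.server.HttpServerExchange;

public class PoolSizeComparison {

    // Netty: the caller picks the capacity of each pooled buffer
    // (8 KB is an arbitrary value for the example)
    static ByteBuf nettyBuffer() {
        return PooledByteBufAllocator.DEFAULT.directBuffer(8 * 1024);
    }

    // Undertow: allocate() takes no size argument, every buffer comes
    // in the fixed size the pool was created with; the PooledByteBuffer
    // is closed later, as in [2]
    static PooledByteBuffer undertowBuffer(HttpServerExchange exchange) {
        ByteBufferPool pool = exchange.getConnection().getByteBufferPool();
        return pool.allocate();
    }
}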
If such functionality is not existing do you think it is feasible to add such API? Thanks, Violeta [1] https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/http/server/reactive/UndertowServerHttpRequest.java#L144-L148 [2] https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/http/server/reactive/UndertowServerHttpRequest.java#L127-L133 [3] https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java#L107-L112 [4] https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java#L301 [5] https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java#L318 2017-06-14 3:21 GMT+03:00 Stuart Douglas : > It should be used anytime you require a large direct buffer (generally > for IO operations). Depending on your app you may not need to use them > directly at all, although Undertow will always use them internally. > > The buffers are fixed size which is determined dynamically (unless > explicitly specified), generally they will be 16kb as this seems to > give optimal IO performance. > > Stuart > > On Mon, Jun 12, 2017 at 11:06 AM, Violeta Georgieva > wrote: > > Hi, > > > > I have few questions about Undertow's ByteBufferPool. > > Are there any best practices how it should be used? > > Also are the ByteBuffers always fixed in size? > > > > Thanks, > > Violeta > > > > _______________________________________________ > > undertow-dev mailing list > > undertow-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170616/e255f729/attachment.html From sdouglas at redhat.com Sun Jun 18 19:24:54 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Mon, 19 Jun 2017 01:24:54 +0200 Subject: [undertow-dev] Questions about Undertow's ByteBufferPool In-Reply-To: References: Message-ID: Looking at the Netty code it does not pool arbitrary sized buffers, but has a few predefined sizes, and if a buffer that is larger than supported is requested it will be allocated directly. Why do you need arbitrary sized buffers? Anything smaller than the pool size is obviously not a problem, but for anything larger you can just allocate multiple buffers (generally up to some sort of limit). An example of Undertow code that does this is in ServletOutputStream, which if the provided byte[] array is too large we attempt to allocate more buffers, and if it still does not fit utilise multiple gathering writes: https://github.com/undertow-io/undertow/blob/master/servlet/src/main/java/io/undertow/servlet/spec/ServletOutputStreamImpl.java#L154 (although without knowing exactly what your use case is it is hard to say how relevant this is to you). Stuart On Fri, Jun 16, 2017 at 4:35 PM, Violeta Georgieva wrote: > Hi Stuart, > > Spring Framework 5 uses Undertow APIs directly, i.e. not going through the > Servlet API. > When reading the request the ByteBufferPool is obtained from the connection > and then the PooledByteBuffer is allocated and used [1]. > Also the PooledByteBuffer is closed in ExchangeCompletionListener [2]. > > So currently the usage is limited only to the request reading. > In order to extend the usage of the ByteBufferPool we need to be able to > allocate buffers with a concrete size. 
> At the moment (what I saw in the sources) the ByteBufferPool is created with > a specific buffer size > and the pooled buffers are created with exactly this size. > > For comparison see the io.netty.buffer.PooledByteBufAllocator in Netty > which provides a functionality to allocated buffers with a specific size > [3], [4], [5]. > > So is it possible to achieve the same with the Undertow's ByteBufferPool? > If such functionality is not existing do you think it is feasible to add > such API? > > Thanks, > Violeta > > [1] > https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/http/server/reactive/UndertowServerHttpRequest.java#L144-L148 > [2] > https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/http/server/reactive/UndertowServerHttpRequest.java#L127-L133 > [3] > https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java#L107-L112 > [4] > https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java#L301 > [5] > https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java#L318 > > > 2017-06-14 3:21 GMT+03:00 Stuart Douglas : >> >> It should be used anytime you require a large direct buffer (generally >> for IO operations). Depending on your app you may not need to use them >> directly at all, although Undertow will always use them internally. >> >> The buffers are fixed size which is determined dynamically (unless >> explicitly specified), generally they will be 16kb as this seems to >> give optimal IO performance. >> >> Stuart >> >> On Mon, Jun 12, 2017 at 11:06 AM, Violeta Georgieva >> wrote: >> > Hi, >> > >> > I have few questions about Undertow's ByteBufferPool. >> > Are there any best practices how it should be used? >> > Also are the ByteBuffers always fixed in size? >> > >> > Thanks, >> > Violeta >> > >> > _______________________________________________ >> > undertow-dev mailing list >> > undertow-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/undertow-dev > > From me at christophsturm.com Mon Jun 19 05:18:39 2017 From: me at christophsturm.com (Christoph Sturm) Date: Mon, 19 Jun 2017 11:18:39 +0200 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection In-Reply-To: References: <42F39B99-00CD-4B33-94EC-41F079AAACFF@christophsturm.com> Message-ID: <869C1858-B121-47B2-A4EB-E8E8B4E80E6E@christophsturm.com> Hello Stuart! the handler that generates this does not do anything special. It does some processing and then it calls endExchange without writing a response. it also also does not set any special headers. we do register our own conduits but in a separate xnio server where we listen on a raw socket but thats probably not related. thanks chris > On 15 Jun 2017, at 03:04, Stuart Douglas wrote: > > I had a quick look into this, and I can't really see how this could be > generated. Is your application registering its own conduits by any > chance? > > The other thing that seems a bit odd is that the connection is being > closed, which is not the default. Are you explicitly setting the close > header? > > Stuart > > On Wed, Jun 14, 2017 at 9:13 AM, Christoph Sturm wrote: >> this was with the latest version built from the 1.4.x branch. >>> On 14 Jun 2017, at 02:11, Stuart Douglas wrote: >>> >>> Which version of Undertow? 
There was a recent bug that could >>> potentially cause this (UNDERTOW-1068) , that should be fixed in the >>> 1.4.16.Final release. >>> >>> Stuart >>> >>> On Mon, Jun 12, 2017 at 6:37 PM, Christoph Sturm wrote: >>>> hello undertow developers! >>>> >>>> We see this exception >>>> >>>> UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection >>>> >>>> in our log files, and looking at the undertow source that should never happen. is this something related to our code or is it just some strange behaviour from the client? >>>> if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? >>>> >>>> or is there a jboss-logging way to disable logging for a single error code? >>>> >>>> thanks >>>> chris >>>> _______________________________________________ >>>> undertow-dev mailing list >>>> undertow-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev From sdouglas at redhat.com Tue Jun 20 00:03:37 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 20 Jun 2017 14:03:37 +1000 Subject: [undertow-dev] UT005085 connection was not closed cleanly, forcibly closing connection In-Reply-To: <869C1858-B121-47B2-A4EB-E8E8B4E80E6E@christophsturm.com> References: <42F39B99-00CD-4B33-94EC-41F079AAACFF@christophsturm.com> <869C1858-B121-47B2-A4EB-E8E8B4E80E6E@christophsturm.com> Message-ID: Should be fixed in 1.4.17.Final Stuart On Mon, Jun 19, 2017 at 7:18 PM, Christoph Sturm wrote: > Hello Stuart! > > the handler that generates this does not do anything special. It does some processing and then it calls endExchange without writing a response. > > it also also does not set any special headers. > > we do register our own conduits but in a separate xnio server where we listen on a raw socket but thats probably not related. > > thanks > chris > > >> On 15 Jun 2017, at 03:04, Stuart Douglas wrote: >> >> I had a quick look into this, and I can't really see how this could be >> generated. Is your application registering its own conduits by any >> chance? >> >> The other thing that seems a bit odd is that the connection is being >> closed, which is not the default. Are you explicitly setting the close >> header? >> >> Stuart >> >> On Wed, Jun 14, 2017 at 9:13 AM, Christoph Sturm wrote: >>> this was with the latest version built from the 1.4.x branch. >>>> On 14 Jun 2017, at 02:11, Stuart Douglas wrote: >>>> >>>> Which version of Undertow? There was a recent bug that could >>>> potentially cause this (UNDERTOW-1068) , that should be fixed in the >>>> 1.4.16.Final release. >>>> >>>> Stuart >>>> >>>> On Mon, Jun 12, 2017 at 6:37 PM, Christoph Sturm wrote: >>>>> hello undertow developers! >>>>> >>>>> We see this exception >>>>> >>>>> UT005085: Connection io.undertow.server.protocol.http.HttpServerConnection at 7ba04d76 for exchange HttpServerExchange{ POST /pixel ?.. response {Connection=[close], Content-Length=[0], Date=[Mon, 12 Jun 2017 16:31:06 GMT]}} was not closed cleanly, forcibly closing connection >>>>> >>>>> in our log files, and looking at the undertow source that should never happen. 
is this something related to our code or is it just some strange behaviour from the client? >>>>> if it?s something that we cannot fix, maybe it can be logged by the io logger instead to so we can turn it off easily? >>>>> >>>>> or is there a jboss-logging way to disable logging for a single error code? >>>>> >>>>> thanks >>>>> chris >>>>> _______________________________________________ >>>>> undertow-dev mailing list >>>>> undertow-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev > From senthilec566 at gmail.com Thu Jun 22 14:00:07 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Thu, 22 Jun 2017 23:30:07 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance Message-ID: Hello Undertow Dev Team , I have been working on the use case where i should create simple http server to serve 1.5 Million Requests per Second per Instance .. Here is the benchmark result of Undertow : Running 1m test @ http://127.0.0.1:8009/ 20 threads and 40 connections Thread Stats Avg Stdev Max +/- Stdev Latency 2.51ms 10.75ms 282.22ms 99.28% Req/Sec 1.12k 316.65 1.96k 54.50% Latency Distribution 50% 1.43ms 75% 2.38ms 90% 2.90ms 99% 10.45ms 1328133 requests in 1.00m, 167.19MB read Requests/sec: *22127*.92 Transfer/sec: 2.79MB This is less compared to other frameworks like Jetty and Netty .. But originally Undertow is high performant http server .. Hardware details: Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) , Memory : 32 G , Available memory 31 G. I would need Undertow experts to review the server code below and advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). Server : Undertow server = Undertow.builder() .addHttpListener(8009, "localhost") .setHandler(new Handler()).build(); server.start(); Handler.Java final Pooled pooledByteBuffer = exchange.getConnection().getBufferPool().allocate(); final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); byteBuffer.clear(); exchange.getRequestChannel().read(byteBuffer); int pos = byteBuffer.position(); byteBuffer.rewind(); byte[] bytes = new byte[pos]; byteBuffer.get(bytes); String requestBody = new String(bytes, Charset.forName("UTF-8") ); byteBuffer.clear(); pooledByteBuffer.free(); final PostToKafka post2Kafka = new PostToKafka(); try { *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions events per sec }* } catch (Exception e) { e.printStackTrace(); } exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("SUCCESS"); --Senthil -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/2a3028f4/attachment-0001.html From senthilec566 at gmail.com Thu Jun 22 14:27:34 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Thu, 22 Jun 2017 23:57:34 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Seems to Reading Request body is wrong , So what is the efficient way of reading request body in undertow ? 
--Senthil On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K wrote: > Hello Undertow Dev Team , > > I have been working on the use case where i should create simple > http server to serve 1.5 Million Requests per Second per Instance .. > > > Here is the benchmark result of Undertow : > > Running 1m test @ http://127.0.0.1:8009/ > 20 threads and 40 connections > Thread Stats Avg Stdev Max +/- Stdev > Latency 2.51ms 10.75ms 282.22ms 99.28% > Req/Sec 1.12k 316.65 1.96k 54.50% > Latency Distribution > 50% 1.43ms > 75% 2.38ms > 90% 2.90ms > 99% 10.45ms > 1328133 requests in 1.00m, 167.19MB read > Requests/sec: *22127*.92 > Transfer/sec: 2.79MB > > This is less compared to other frameworks like Jetty and Netty .. But > originally Undertow is high performant http server .. > > Hardware details: > Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) > , Memory : 32 G , Available memory 31 G. > > I would need Undertow experts to review the server code below and advice > me on tuning to achieve my goal( ~1.5 Million requests/sec ). > > Server : > > Undertow server = Undertow.builder() > .addHttpListener(8009, "localhost") > .setHandler(new Handler()).build(); > server.start(); > > > Handler.Java > > final Pooled pooledByteBuffer = > exchange.getConnection().getBufferPool().allocate(); > final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); > byteBuffer.clear(); > exchange.getRequestChannel().read(byteBuffer); > int pos = byteBuffer.position(); > byteBuffer.rewind(); > byte[] bytes = new byte[pos]; > byteBuffer.get(bytes); > String requestBody = new String(bytes, Charset.forName("UTF-8") ); > byteBuffer.clear(); > pooledByteBuffer.free(); > final PostToKafka post2Kafka = new PostToKafka(); > try { > *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions > events per sec }* > } catch (Exception e) { > e.printStackTrace(); > } > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("SUCCESS"); > > > --Senthil > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/8a1fcade/attachment.html From senthilec566 at gmail.com Thu Jun 22 15:13:03 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Fri, 23 Jun 2017 00:43:03 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: At last i modified the code as below and still i see ~50K requests/sec .. 
public class HelloWorldServer { public static void main(final String[] args) { Undertow server = Undertow.builder().addHttpListener(8009, "localhost").setHandler(new HttpHandler() { @Override public void handleRequest(final HttpServerExchange exchange) throws Exception { if (exchange.isInIoThread()) { exchange.dispatch(this); return; } if (exchange.getRequestMethod().equals(Methods.POST)) { BufferedReader reader = null; StringBuilder builder = new StringBuilder(); try { exchange.startBlocking(); reader = new BufferedReader(new InputStreamReader(exchange.getInputStream())); String line; while ((line = reader.readLine()) != null) { builder.append(line); } } catch (IOException e) { e.printStackTrace(); } finally { if (reader != null) { try { reader.close(); } catch (IOException e) { e.printStackTrace(); } } } String body = builder.toString(); System.out.println("Req Body ==> " + body); exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("SUCCESS"); } else { exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("FAILURE"); } } }).build(); server.start(); } } On Thu, Jun 22, 2017 at 11:57 PM, SenthilKumar K wrote: > Seems to Reading Request body is wrong , So what is the efficient way of > reading request body in undertow ? > > --Senthil > > On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K > wrote: > >> Hello Undertow Dev Team , >> >> I have been working on the use case where i should create simple >> http server to serve 1.5 Million Requests per Second per Instance .. >> >> >> Here is the benchmark result of Undertow : >> >> Running 1m test @ http://127.0.0.1:8009/ >> 20 threads and 40 connections >> Thread Stats Avg Stdev Max +/- Stdev >> Latency 2.51ms 10.75ms 282.22ms 99.28% >> Req/Sec 1.12k 316.65 1.96k 54.50% >> Latency Distribution >> 50% 1.43ms >> 75% 2.38ms >> 90% 2.90ms >> 99% 10.45ms >> 1328133 requests in 1.00m, 167.19MB read >> Requests/sec: *22127*.92 >> Transfer/sec: 2.79MB >> >> This is less compared to other frameworks like Jetty and Netty .. But >> originally Undertow is high performant http server .. >> >> Hardware details: >> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) >> , Memory : 32 G , Available memory 31 G. >> >> I would need Undertow experts to review the server code below and advice >> me on tuning to achieve my goal( ~1.5 Million requests/sec ). >> >> Server : >> >> Undertow server = Undertow.builder() >> .addHttpListener(8009, "localhost") >> .setHandler(new Handler()).build(); >> server.start(); >> >> >> Handler.Java >> >> final Pooled pooledByteBuffer = >> exchange.getConnection().getBufferPool().allocate(); >> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >> byteBuffer.clear(); >> exchange.getRequestChannel().read(byteBuffer); >> int pos = byteBuffer.position(); >> byteBuffer.rewind(); >> byte[] bytes = new byte[pos]; >> byteBuffer.get(bytes); >> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >> byteBuffer.clear(); >> pooledByteBuffer.free(); >> final PostToKafka post2Kafka = new PostToKafka(); >> try { >> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions >> events per sec }* >> } catch (Exception e) { >> e.printStackTrace(); >> } >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> "text/plain"); >> exchange.getResponseSender().send("SUCCESS"); >> >> >> --Senthil >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/72881def/attachment.html From ecki at zusammenkunft.net Thu Jun 22 15:28:53 2017 From: ecki at zusammenkunft.net (Bernd Eckenfels) Date: Thu, 22 Jun 2017 19:28:53 +0000 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: I would start with eliminating the byte array, String and the new PostToKafka instance. But I seriously doubt you will come close to your performance goal. Can you show the Jetty and Nett versions which are faster, and if yes how much? I am not sure what the characteristic of your Kafka client is, but it is likely that it should run on a worker thread. Gruss Bernd -- http://bernd.eckenfels.net _____________________________ From: SenthilKumar K > Sent: Donnerstag, Juni 22, 2017 9:21 PM Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance To: >, Senthil kumar > Hello Undertow Dev Team , I have been working on the use case where i should create simple http server to serve 1.5 Million Requests per Second per Instance .. Here is the benchmark result of Undertow : Running 1m test @ http://127.0.0.1:8009/ 20 threads and 40 connections Thread Stats Avg Stdev Max +/- Stdev Latency 2.51ms 10.75ms 282.22ms 99.28% Req/Sec 1.12k 316.65 1.96k 54.50% Latency Distribution 50% 1.43ms 75% 2.38ms 90% 2.90ms 99% 10.45ms 1328133 requests in 1.00m, 167.19MB read Requests/sec: 22127.92 Transfer/sec: 2.79MB This is less compared to other frameworks like Jetty and Netty .. But originally Undertow is high performant http server .. Hardware details: Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) , Memory : 32 G , Available memory 31 G. I would need Undertow experts to review the server code below and advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). Server : Undertow server = Undertow.builder() .addHttpListener(8009, "localhost") .setHandler(new Handler()).build(); server.start(); Handler.Java final Pooled pooledByteBuffer = exchange.getConnection().getBufferPool().allocate(); final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); byteBuffer.clear(); exchange.getRequestChannel().read(byteBuffer); int pos = byteBuffer.position(); byteBuffer.rewind(); byte[] bytes = new byte[pos]; byteBuffer.get(bytes); String requestBody = new String(bytes, Charset.forName("UTF-8") ); byteBuffer.clear(); pooledByteBuffer.free(); final PostToKafka post2Kafka = new PostToKafka(); try { post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions events per sec } } catch (Exception e) { e.printStackTrace(); } exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("SUCCESS"); --Senthil -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/660e04a4/attachment-0001.html From bill at dartalley.com Thu Jun 22 15:34:10 2017 From: bill at dartalley.com (Bill O'Neil) Date: Thu, 22 Jun 2017 15:34:10 -0400 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: I'm not an expert on the high performance part but heres my thoughts. 1. Is the Kafka API non blocking? I would assume not. If it's not you are basically running on only IO threads (roughly 1 per CPU) and could be blocking incoming requests. 
Try wrapping your HttpHandler in the BlockingHandler which will give it more threads to work with the network IO and shouldn't block / refuse incoming connections. 2. The HttpServerExchange should be handling all of the buffers under the hood for you. I believe you are just doing extra work here. If you use a blocking handler you can simply call exchange.getInputStream() to get the request body. I'm not sure the best way to handle the read in a non blocking manner. On Thu, Jun 22, 2017 at 2:00 PM, SenthilKumar K wrote: > Hello Undertow Dev Team , > > I have been working on the use case where i should create simple > http server to serve 1.5 Million Requests per Second per Instance .. > > > Here is the benchmark result of Undertow : > > Running 1m test @ http://127.0.0.1:8009/ > 20 threads and 40 connections > Thread Stats Avg Stdev Max +/- Stdev > Latency 2.51ms 10.75ms 282.22ms 99.28% > Req/Sec 1.12k 316.65 1.96k 54.50% > Latency Distribution > 50% 1.43ms > 75% 2.38ms > 90% 2.90ms > 99% 10.45ms > 1328133 requests in 1.00m, 167.19MB read > Requests/sec: *22127*.92 > Transfer/sec: 2.79MB > > This is less compared to other frameworks like Jetty and Netty .. But > originally Undertow is high performant http server .. > > Hardware details: > Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) > , Memory : 32 G , Available memory 31 G. > > I would need Undertow experts to review the server code below and advice > me on tuning to achieve my goal( ~1.5 Million requests/sec ). > > Server : > > Undertow server = Undertow.builder() > .addHttpListener(8009, "localhost") > .setHandler(new Handler()).build(); > server.start(); > > > Handler.Java > > final Pooled pooledByteBuffer = > exchange.getConnection().getBufferPool().allocate(); > final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); > byteBuffer.clear(); > exchange.getRequestChannel().read(byteBuffer); > int pos = byteBuffer.position(); > byteBuffer.rewind(); > byte[] bytes = new byte[pos]; > byteBuffer.get(bytes); > String requestBody = new String(bytes, Charset.forName("UTF-8") ); > byteBuffer.clear(); > pooledByteBuffer.free(); > final PostToKafka post2Kafka = new PostToKafka(); > try { > *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions > events per sec }* > } catch (Exception e) { > e.printStackTrace(); > } > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("SUCCESS"); > > > --Senthil > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/c3926c7d/attachment.html From senthilec566 at gmail.com Thu Jun 22 15:46:22 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Fri, 23 Jun 2017 01:16:22 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Thanks Bill .. Yes Kafka is Non Blocking ( We use Async feature of Kafka Producer ). I've changed code as below i.e Wrapped BlockingHandler .. Is this Correct Implementation ? 
public class HelloWorldServer { public static void main(final String[] args) { Undertow server = Undertow.builder().addHttpListener(8009, "localhost").setHandler( new BlockingHandler( new HttpHandler() { @Override public void handleRequest(final HttpServerExchange exchange) throws Exception { if (exchange.getRequestMethod().equals(Methods.POST)) { BufferedReader reader = null; StringBuilder builder = new StringBuilder(); try { exchange.startBlocking(); reader = new BufferedReader(new InputStreamReader(exchange.getInputStream())); String line; while ((line = reader.readLine()) != null) { builder.append(line); } } catch (IOException e) { e.printStackTrace(); } finally { if (reader != null) { try { reader.close(); } catch (IOException e) { e.printStackTrace(); } } } String body = builder.toString(); System.out.println("Req Body ==> " + body); exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("SUCCESS"); } else { exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("FAILURE"); } } })).build(); server.start(); } } Without using Kafka API here , i still see only ~50K per sec.. Running 1m test @ http://localhost:8009/ 100 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 20.48ms 11.98ms 328.07ms 91.59% Req/Sec 510.03 104.80 4.63k 91.33% Latency Distribution 50% 18.23ms 75% 20.48ms 90% 24.35ms 99% 69.33ms 3014791 requests in 1.00m, 379.52MB read Requests/sec: *50163*.73 Transfer/sec: 6.31MB --Senthil On Fri, Jun 23, 2017 at 1:04 AM, Bill O'Neil wrote: > I'm not an expert on the high performance part but heres my thoughts. > > > 1. Is the Kafka API non blocking? I would assume not. If it's not you are > basically running on only IO threads (roughly 1 per CPU) and could be > blocking incoming requests. Try wrapping your HttpHandler in the > BlockingHandler which will give it more threads to work with the network IO > and shouldn't block / refuse incoming connections. > 2. The HttpServerExchange should be handling all of the buffers under the > hood for you. I believe you are just doing extra work here. If you use a > blocking handler you can simply call exchange.getInputStream() to get the > request body. I'm not sure the best way to handle the read in a non > blocking manner. > > On Thu, Jun 22, 2017 at 2:00 PM, SenthilKumar K > wrote: > >> Hello Undertow Dev Team , >> >> I have been working on the use case where i should create simple >> http server to serve 1.5 Million Requests per Second per Instance .. >> >> >> Here is the benchmark result of Undertow : >> >> Running 1m test @ http://127.0.0.1:8009/ >> 20 threads and 40 connections >> Thread Stats Avg Stdev Max +/- Stdev >> Latency 2.51ms 10.75ms 282.22ms 99.28% >> Req/Sec 1.12k 316.65 1.96k 54.50% >> Latency Distribution >> 50% 1.43ms >> 75% 2.38ms >> 90% 2.90ms >> 99% 10.45ms >> 1328133 requests in 1.00m, 167.19MB read >> Requests/sec: *22127*.92 >> Transfer/sec: 2.79MB >> >> This is less compared to other frameworks like Jetty and Netty .. But >> originally Undertow is high performant http server .. >> >> Hardware details: >> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) >> , Memory : 32 G , Available memory 31 G. >> >> I would need Undertow experts to review the server code below and advice >> me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>> >> Server : >> >> Undertow server = Undertow.builder() >> .addHttpListener(8009, "localhost") >> .setHandler(new Handler()).build(); >> server.start(); >> >> >> Handler.Java >> >> final Pooled pooledByteBuffer = >> exchange.getConnection().getBufferPool().allocate(); >> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >> byteBuffer.clear(); >> exchange.getRequestChannel().read(byteBuffer); >> int pos = byteBuffer.position(); >> byteBuffer.rewind(); >> byte[] bytes = new byte[pos]; >> byteBuffer.get(bytes); >> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >> byteBuffer.clear(); >> pooledByteBuffer.free(); >> final PostToKafka post2Kafka = new PostToKafka(); >> try { >> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions >> events per sec }* >> } catch (Exception e) { >> e.printStackTrace(); >> } >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> "text/plain"); >> exchange.getResponseSender().send("SUCCESS"); >> >> >> --Senthil >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/13f4f1ab/attachment-0001.html From bill at dartalley.com Thu Jun 22 16:04:02 2017 From: bill at dartalley.com (Bill O'Neil) Date: Thu, 22 Jun 2017 16:04:02 -0400 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: If kafka is non blocking and writing async then you might want to remove the blocking handler. This might be helpful for the non blocking read http://lists.jboss.org/pipermail/undertow-dev/2016-February/001503.html HttpServerExchange.getRequestReceiver().receiveFullBytes((exchange, data) -> { //do stuff with the data }, (exchange, exception) -> { //optional error handler } ); On Thu, Jun 22, 2017 at 2:27 PM, SenthilKumar K wrote: > Seems to Reading Request body is wrong , So what is the efficient way of > reading request body in undertow ? > > --Senthil > > On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K > wrote: > >> Hello Undertow Dev Team , >> >> I have been working on the use case where i should create simple >> http server to serve 1.5 Million Requests per Second per Instance .. >> >> >> Here is the benchmark result of Undertow : >> >> Running 1m test @ http://127.0.0.1:8009/ >> 20 threads and 40 connections >> Thread Stats Avg Stdev Max +/- Stdev >> Latency 2.51ms 10.75ms 282.22ms 99.28% >> Req/Sec 1.12k 316.65 1.96k 54.50% >> Latency Distribution >> 50% 1.43ms >> 75% 2.38ms >> 90% 2.90ms >> 99% 10.45ms >> 1328133 requests in 1.00m, 167.19MB read >> Requests/sec: *22127*.92 >> Transfer/sec: 2.79MB >> >> This is less compared to other frameworks like Jetty and Netty .. But >> originally Undertow is high performant http server .. >> >> Hardware details: >> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) >> , Memory : 32 G , Available memory 31 G. >> >> I would need Undertow experts to review the server code below and advice >> me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>> >> Server : >> >> Undertow server = Undertow.builder() >> .addHttpListener(8009, "localhost") >> .setHandler(new Handler()).build(); >> server.start(); >> >> >> Handler.Java >> >> final Pooled pooledByteBuffer = >> exchange.getConnection().getBufferPool().allocate(); >> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >> byteBuffer.clear(); >> exchange.getRequestChannel().read(byteBuffer); >> int pos = byteBuffer.position(); >> byteBuffer.rewind(); >> byte[] bytes = new byte[pos]; >> byteBuffer.get(bytes); >> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >> byteBuffer.clear(); >> pooledByteBuffer.free(); >> final PostToKafka post2Kafka = new PostToKafka(); >> try { >> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions >> events per sec }* >> } catch (Exception e) { >> e.printStackTrace(); >> } >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> "text/plain"); >> exchange.getResponseSender().send("SUCCESS"); >> >> >> --Senthil >> > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/ce81eca4/attachment.html From antoine.girard at ymail.com Thu Jun 22 16:17:49 2017 From: antoine.girard at ymail.com (Antoine Girard) Date: Thu, 22 Jun 2017 22:17:49 +0200 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: You can use the Receiver API, specifically for that purpose. On the exchange, call: getRequestReceiver(); You will get a receiver object: https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java On the receiver you can call: receiveFullString, you have to pass it a callback that will be called when the whole body has been read. Please share your results when you test this further! Cheers, Antoine On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K wrote: > Seems to Reading Request body is wrong , So what is the efficient way of > reading request body in undertow ? > > --Senthil > > On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K > wrote: > >> Hello Undertow Dev Team , >> >> I have been working on the use case where i should create simple >> http server to serve 1.5 Million Requests per Second per Instance .. >> >> >> Here is the benchmark result of Undertow : >> >> Running 1m test @ http://127.0.0.1:8009/ >> 20 threads and 40 connections >> Thread Stats Avg Stdev Max +/- Stdev >> Latency 2.51ms 10.75ms 282.22ms 99.28% >> Req/Sec 1.12k 316.65 1.96k 54.50% >> Latency Distribution >> 50% 1.43ms >> 75% 2.38ms >> 90% 2.90ms >> 99% 10.45ms >> 1328133 requests in 1.00m, 167.19MB read >> Requests/sec: *22127*.92 >> Transfer/sec: 2.79MB >> >> This is less compared to other frameworks like Jetty and Netty .. But >> originally Undertow is high performant http server .. >> >> Hardware details: >> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 GHz) >> , Memory : 32 G , Available memory 31 G. >> >> I would need Undertow experts to review the server code below and advice >> me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>> >> Server : >> >> Undertow server = Undertow.builder() >> .addHttpListener(8009, "localhost") >> .setHandler(new Handler()).build(); >> server.start(); >> >> >> Handler.Java >> >> final Pooled pooledByteBuffer = >> exchange.getConnection().getBufferPool().allocate(); >> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >> byteBuffer.clear(); >> exchange.getRequestChannel().read(byteBuffer); >> int pos = byteBuffer.position(); >> byteBuffer.rewind(); >> byte[] bytes = new byte[pos]; >> byteBuffer.get(bytes); >> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >> byteBuffer.clear(); >> pooledByteBuffer.free(); >> final PostToKafka post2Kafka = new PostToKafka(); >> try { >> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 Millions >> events per sec }* >> } catch (Exception e) { >> e.printStackTrace(); >> } >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> "text/plain"); >> exchange.getResponseSender().send("SUCCESS"); >> >> >> --Senthil >> > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/eb5eb081/attachment.html From senthilec566 at gmail.com Thu Jun 22 16:42:40 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Fri, 23 Jun 2017 02:12:40 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Thanks Bill n Antoine .. Here is the updated one : ( tried without Kafka API ) . public class HelloWorldServer { public static void main(final String[] args) { Undertow server = Undertow.builder().addHttpListener(8009, "localhost").setHandler(new HttpHandler() { @Override public void handleRequest(final HttpServerExchange exchange) throws Exception { if (exchange.getRequestMethod().equals(Methods.POST)) { exchange.getRequestReceiver().receiveFullString(new Receiver.FullStringCallback() { @Override public void handle(HttpServerExchange exchange, String message) { System.out.println(" Received String ==> "+message); exchange.getResponseSender().send(message); } }); } else { exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("FAILURE"); } } }).build(); server.start(); } } Oops seems to no improvement : Running 1m test @ http://localhost:8009/ 100 threads and 1000 connections Thread Stats Avg Stdev Max +/- Stdev Latency 25.79ms 22.18ms 289.48ms 67.66% Req/Sec 437.76 61.71 2.30k 80.26% Latency Distribution 50% 22.60ms 75% 37.83ms 90% 55.32ms 99% 90.47ms 2625607 requests in 1.00m, 2.76GB read *Requests/sec: 43688.42* Transfer/sec: 47.08MB :-( :-( .. --Senthil On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard wrote: > You can use the Receiver API, specifically for that purpose. > On the exchange, call: getRequestReceiver(); > > You will get a receiver object: > https://github.com/undertow-io/undertow/blob/master/core/ > src/main/java/io/undertow/io/Receiver.java > > On the receiver you can call: receiveFullString, you have to pass it a > callback that will be called when the whole body has been read. > > Please share your results when you test this further! 
> > Cheers, > Antoine > > > On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K > wrote: > >> Seems to Reading Request body is wrong , So what is the efficient way of >> reading request body in undertow ? >> >> --Senthil >> >> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> wrote: >> >>> Hello Undertow Dev Team , >>> >>> I have been working on the use case where i should create simple >>> http server to serve 1.5 Million Requests per Second per Instance .. >>> >>> >>> Here is the benchmark result of Undertow : >>> >>> Running 1m test @ http://127.0.0.1:8009/ >>> 20 threads and 40 connections >>> Thread Stats Avg Stdev Max +/- Stdev >>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>> Req/Sec 1.12k 316.65 1.96k 54.50% >>> Latency Distribution >>> 50% 1.43ms >>> 75% 2.38ms >>> 90% 2.90ms >>> 99% 10.45ms >>> 1328133 requests in 1.00m, 167.19MB read >>> Requests/sec: *22127*.92 >>> Transfer/sec: 2.79MB >>> >>> This is less compared to other frameworks like Jetty and Netty .. But >>> originally Undertow is high performant http server .. >>> >>> Hardware details: >>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>> GHz) , Memory : 32 G , Available memory 31 G. >>> >>> I would need Undertow experts to review the server code below and advice >>> me on tuning to achieve my goal( ~1.5 Million requests/sec ). >>> >>> Server : >>> >>> Undertow server = Undertow.builder() >>> .addHttpListener(8009, "localhost") >>> .setHandler(new Handler()).build(); >>> server.start(); >>> >>> >>> Handler.Java >>> >>> final Pooled pooledByteBuffer = >>> exchange.getConnection().getBufferPool().allocate(); >>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>> byteBuffer.clear(); >>> exchange.getRequestChannel().read(byteBuffer); >>> int pos = byteBuffer.position(); >>> byteBuffer.rewind(); >>> byte[] bytes = new byte[pos]; >>> byteBuffer.get(bytes); >>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>> byteBuffer.clear(); >>> pooledByteBuffer.free(); >>> final PostToKafka post2Kafka = new PostToKafka(); >>> try { >>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>> Millions events per sec }* >>> } catch (Exception e) { >>> e.printStackTrace(); >>> } >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>> "text/plain"); >>> exchange.getResponseSender().send("SUCCESS"); >>> >>> >>> --Senthil >>> >> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/8ba236e9/attachment-0001.html From antoine.girard at ymail.com Thu Jun 22 16:53:05 2017 From: antoine.girard at ymail.com (Antoine Girard) Date: Thu, 22 Jun 2017 22:53:05 +0200 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Do you warm up your jvm prior to the testing? Cheers, Antoine On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K wrote: > Thanks Bill n Antoine .. > > > Here is the updated one : ( tried without Kafka API ) . 
> > public class HelloWorldServer { > > public static void main(final String[] args) { > Undertow server = Undertow.builder().addHttpListener(8009, > "localhost").setHandler(new HttpHandler() { > @Override > public void handleRequest(final HttpServerExchange exchange) throws > Exception { > if (exchange.getRequestMethod().equals(Methods.POST)) { > exchange.getRequestReceiver().receiveFullString(new > Receiver.FullStringCallback() { > @Override > public void handle(HttpServerExchange exchange, String > message) { > System.out.println(" Received String ==> "+message); > exchange.getResponseSender().send(message); > } > }); > } else { > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("FAILURE"); > } > } > }).build(); > server.start(); > } > } > > > Oops seems to no improvement : > > Running 1m test @ http://localhost:8009/ > 100 threads and 1000 connections > Thread Stats Avg Stdev Max +/- Stdev > Latency 25.79ms 22.18ms 289.48ms 67.66% > Req/Sec 437.76 61.71 2.30k 80.26% > Latency Distribution > 50% 22.60ms > 75% 37.83ms > 90% 55.32ms > 99% 90.47ms > 2625607 requests in 1.00m, 2.76GB read > *Requests/sec: 43688.42* > Transfer/sec: 47.08MB > > > :-( :-( .. > > > --Senthil > > > On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard > wrote: > >> You can use the Receiver API, specifically for that purpose. >> On the exchange, call: getRequestReceiver(); >> >> You will get a receiver object: >> https://github.com/undertow-io/undertow/blob/master/core/src >> /main/java/io/undertow/io/Receiver.java >> >> On the receiver you can call: receiveFullString, you have to pass it a >> callback that will be called when the whole body has been read. >> >> Please share your results when you test this further! >> >> Cheers, >> Antoine >> >> >> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K >> wrote: >> >>> Seems to Reading Request body is wrong , So what is the efficient way of >>> reading request body in undertow ? >>> >>> --Senthil >>> >>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> > wrote: >>> >>>> Hello Undertow Dev Team , >>>> >>>> I have been working on the use case where i should create simple >>>> http server to serve 1.5 Million Requests per Second per Instance .. >>>> >>>> >>>> Here is the benchmark result of Undertow : >>>> >>>> Running 1m test @ http://127.0.0.1:8009/ >>>> 20 threads and 40 connections >>>> Thread Stats Avg Stdev Max +/- Stdev >>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>>> Req/Sec 1.12k 316.65 1.96k 54.50% >>>> Latency Distribution >>>> 50% 1.43ms >>>> 75% 2.38ms >>>> 90% 2.90ms >>>> 99% 10.45ms >>>> 1328133 requests in 1.00m, 167.19MB read >>>> Requests/sec: *22127*.92 >>>> Transfer/sec: 2.79MB >>>> >>>> This is less compared to other frameworks like Jetty and Netty .. But >>>> originally Undertow is high performant http server .. >>>> >>>> Hardware details: >>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>>> GHz) , Memory : 32 G , Available memory 31 G. >>>> >>>> I would need Undertow experts to review the server code below and >>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>>>> >>>> Server : >>>> >>>> Undertow server = Undertow.builder() >>>> .addHttpListener(8009, "localhost") >>>> .setHandler(new Handler()).build(); >>>> server.start(); >>>> >>>> >>>> Handler.Java >>>> >>>> final Pooled pooledByteBuffer = >>>> exchange.getConnection().getBufferPool().allocate(); >>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>>> byteBuffer.clear(); >>>> exchange.getRequestChannel().read(byteBuffer); >>>> int pos = byteBuffer.position(); >>>> byteBuffer.rewind(); >>>> byte[] bytes = new byte[pos]; >>>> byteBuffer.get(bytes); >>>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>>> byteBuffer.clear(); >>>> pooledByteBuffer.free(); >>>> final PostToKafka post2Kafka = new PostToKafka(); >>>> try { >>>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>>> Millions events per sec }* >>>> } catch (Exception e) { >>>> e.printStackTrace(); >>>> } >>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>>> "text/plain"); >>>> exchange.getResponseSender().send("SUCCESS"); >>>> >>>> >>>> --Senthil >>>> >>> >>> >>> _______________________________________________ >>> undertow-dev mailing list >>> undertow-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/d5fb7ba8/attachment.html From senthilec566 at gmail.com Thu Jun 22 17:01:02 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Fri, 23 Jun 2017 02:31:02 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ? Here is the JVM args to Server: nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer --Senthil On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard wrote: > Do you warm up your jvm prior to the testing? > > Cheers, > Antoine > > On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K > wrote: > >> Thanks Bill n Antoine .. >> >> >> Here is the updated one : ( tried without Kafka API ) . 
>> >> public class HelloWorldServer { >> >> public static void main(final String[] args) { >> Undertow server = Undertow.builder().addHttpListener(8009, >> "localhost").setHandler(new HttpHandler() { >> @Override >> public void handleRequest(final HttpServerExchange exchange) throws >> Exception { >> if (exchange.getRequestMethod().equals(Methods.POST)) { >> exchange.getRequestReceiver().receiveFullString(new >> Receiver.FullStringCallback() { >> @Override >> public void handle(HttpServerExchange exchange, String >> message) { >> System.out.println(" Received String ==> "+message); >> exchange.getResponseSender().send(message); >> } >> }); >> } else { >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); >> exchange.getResponseSender().send("FAILURE"); >> } >> } >> }).build(); >> server.start(); >> } >> } >> >> >> Oops seems to no improvement : >> >> Running 1m test @ http://localhost:8009/ >> 100 threads and 1000 connections >> Thread Stats Avg Stdev Max +/- Stdev >> Latency 25.79ms 22.18ms 289.48ms 67.66% >> Req/Sec 437.76 61.71 2.30k 80.26% >> Latency Distribution >> 50% 22.60ms >> 75% 37.83ms >> 90% 55.32ms >> 99% 90.47ms >> 2625607 requests in 1.00m, 2.76GB read >> *Requests/sec: 43688.42* >> Transfer/sec: 47.08MB >> >> >> :-( :-( .. >> >> >> --Senthil >> >> >> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard > > wrote: >> >>> You can use the Receiver API, specifically for that purpose. >>> On the exchange, call: getRequestReceiver(); >>> >>> You will get a receiver object: >>> https://github.com/undertow-io/undertow/blob/master/core/src >>> /main/java/io/undertow/io/Receiver.java >>> >>> On the receiver you can call: receiveFullString, you have to pass it a >>> callback that will be called when the whole body has been read. >>> >>> Please share your results when you test this further! >>> >>> Cheers, >>> Antoine >>> >>> >>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K >>> wrote: >>> >>>> Seems to Reading Request body is wrong , So what is the efficient way >>>> of reading request body in undertow ? >>>> >>>> --Senthil >>>> >>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K < >>>> senthilec566 at gmail.com> wrote: >>>> >>>>> Hello Undertow Dev Team , >>>>> >>>>> I have been working on the use case where i should create simple >>>>> http server to serve 1.5 Million Requests per Second per Instance .. >>>>> >>>>> >>>>> Here is the benchmark result of Undertow : >>>>> >>>>> Running 1m test @ http://127.0.0.1:8009/ >>>>> 20 threads and 40 connections >>>>> Thread Stats Avg Stdev Max +/- Stdev >>>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>>>> Req/Sec 1.12k 316.65 1.96k 54.50% >>>>> Latency Distribution >>>>> 50% 1.43ms >>>>> 75% 2.38ms >>>>> 90% 2.90ms >>>>> 99% 10.45ms >>>>> 1328133 requests in 1.00m, 167.19MB read >>>>> Requests/sec: *22127*.92 >>>>> Transfer/sec: 2.79MB >>>>> >>>>> This is less compared to other frameworks like Jetty and Netty .. But >>>>> originally Undertow is high performant http server .. >>>>> >>>>> Hardware details: >>>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>>>> GHz) , Memory : 32 G , Available memory 31 G. >>>>> >>>>> I would need Undertow experts to review the server code below and >>>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>>>>> >>>>> Server : >>>>> >>>>> Undertow server = Undertow.builder() >>>>> .addHttpListener(8009, "localhost") >>>>> .setHandler(new Handler()).build(); >>>>> server.start(); >>>>> >>>>> >>>>> Handler.Java >>>>> >>>>> final Pooled pooledByteBuffer = >>>>> exchange.getConnection().getBufferPool().allocate(); >>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>>>> byteBuffer.clear(); >>>>> exchange.getRequestChannel().read(byteBuffer); >>>>> int pos = byteBuffer.position(); >>>>> byteBuffer.rewind(); >>>>> byte[] bytes = new byte[pos]; >>>>> byteBuffer.get(bytes); >>>>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>>>> byteBuffer.clear(); >>>>> pooledByteBuffer.free(); >>>>> final PostToKafka post2Kafka = new PostToKafka(); >>>>> try { >>>>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>>>> Millions events per sec }* >>>>> } catch (Exception e) { >>>>> e.printStackTrace(); >>>>> } >>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>>>> "text/plain"); >>>>> exchange.getResponseSender().send("SUCCESS"); >>>>> >>>>> >>>>> --Senthil >>>>> >>>> >>>> >>>> _______________________________________________ >>>> undertow-dev mailing list >>>> undertow-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/8ada9b40/attachment-0001.html From bill at dartalley.com Thu Jun 22 17:21:50 2017 From: bill at dartalley.com (Bill O'Neil) Date: Thu, 22 Jun 2017 17:21:50 -0400 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Take out the if (exchange.isInIoThread()) { exchange.dispatch(this); return; } How are you testing it? Are you sure you are sending enough requests a second? Also try commenting out the kafka part just to see the speed. If it does increase then we knows in on the undertow side. It also doesn't look like you used the method receiveFullBytes I suggested from the link. On Thu, Jun 22, 2017 at 3:13 PM, SenthilKumar K wrote: > At last i modified the code as below and still i see ~50K requests/sec .. > > public class HelloWorldServer { > > public static void main(final String[] args) { > Undertow server = Undertow.builder().addHttpListener(8009, > "localhost").setHandler(new HttpHandler() { > @Override > public void handleRequest(final HttpServerExchange exchange) throws > Exception { > > if (exchange.isInIoThread()) { > exchange.dispatch(this); > return; > } > if (exchange.getRequestMethod().equals(Methods.POST)) { > BufferedReader reader = null; > StringBuilder builder = new StringBuilder(); > try { > exchange.startBlocking(); > reader = new BufferedReader(new InputStreamReader(exchange. 
> getInputStream())); > String line; > while ((line = reader.readLine()) != null) { > builder.append(line); > } > } catch (IOException e) { > e.printStackTrace(); > } finally { > if (reader != null) { > try { > reader.close(); > } catch (IOException e) { > e.printStackTrace(); > } > } > } > String body = builder.toString(); > System.out.println("Req Body ==> " + body); > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("SUCCESS"); > } else { > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("FAILURE"); > } > } > }).build(); > server.start(); > } > } > > On Thu, Jun 22, 2017 at 11:57 PM, SenthilKumar K > wrote: > >> Seems to Reading Request body is wrong , So what is the efficient way of >> reading request body in undertow ? >> >> --Senthil >> >> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> wrote: >> >>> Hello Undertow Dev Team , >>> >>> I have been working on the use case where i should create simple >>> http server to serve 1.5 Million Requests per Second per Instance .. >>> >>> >>> Here is the benchmark result of Undertow : >>> >>> Running 1m test @ http://127.0.0.1:8009/ >>> 20 threads and 40 connections >>> Thread Stats Avg Stdev Max +/- Stdev >>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>> Req/Sec 1.12k 316.65 1.96k 54.50% >>> Latency Distribution >>> 50% 1.43ms >>> 75% 2.38ms >>> 90% 2.90ms >>> 99% 10.45ms >>> 1328133 requests in 1.00m, 167.19MB read >>> Requests/sec: *22127*.92 >>> Transfer/sec: 2.79MB >>> >>> This is less compared to other frameworks like Jetty and Netty .. But >>> originally Undertow is high performant http server .. >>> >>> Hardware details: >>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>> GHz) , Memory : 32 G , Available memory 31 G. >>> >>> I would need Undertow experts to review the server code below and advice >>> me on tuning to achieve my goal( ~1.5 Million requests/sec ). >>> >>> Server : >>> >>> Undertow server = Undertow.builder() >>> .addHttpListener(8009, "localhost") >>> .setHandler(new Handler()).build(); >>> server.start(); >>> >>> >>> Handler.Java >>> >>> final Pooled pooledByteBuffer = >>> exchange.getConnection().getBufferPool().allocate(); >>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>> byteBuffer.clear(); >>> exchange.getRequestChannel().read(byteBuffer); >>> int pos = byteBuffer.position(); >>> byteBuffer.rewind(); >>> byte[] bytes = new byte[pos]; >>> byteBuffer.get(bytes); >>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>> byteBuffer.clear(); >>> pooledByteBuffer.free(); >>> final PostToKafka post2Kafka = new PostToKafka(); >>> try { >>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>> Millions events per sec }* >>> } catch (Exception e) { >>> e.printStackTrace(); >>> } >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>> "text/plain"); >>> exchange.getResponseSender().send("SUCCESS"); >>> >>> >>> --Senthil >>> >> >> > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/52f206e3/attachment.html From senthilec566 at gmail.com Thu Jun 22 17:35:40 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Fri, 23 Jun 2017 03:05:40 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: I removed Kafka part already, Pls check my latest respond where you can find utilizing Receiver api ... I'm using WRK http benchmarking tool for sending requests ... --Senthil On Jun 23, 2017 2:51 AM, "Bill O'Neil" wrote: > Take out the > if (exchange.isInIoThread()) { > exchange.dispatch(this); > return; > } > > How are you testing it? Are you sure you are sending enough requests a > second? Also try commenting out the kafka part just to see the speed. If it > does increase then we knows in on the undertow side. > > It also doesn't look like you used the method receiveFullBytes I > suggested from the link. > > On Thu, Jun 22, 2017 at 3:13 PM, SenthilKumar K > wrote: > >> At last i modified the code as below and still i see ~50K requests/sec .. >> >> public class HelloWorldServer { >> >> public static void main(final String[] args) { >> Undertow server = Undertow.builder().addHttpListener(8009, >> "localhost").setHandler(new HttpHandler() { >> @Override >> public void handleRequest(final HttpServerExchange exchange) throws >> Exception { >> >> if (exchange.isInIoThread()) { >> exchange.dispatch(this); >> return; >> } >> if (exchange.getRequestMethod().equals(Methods.POST)) { >> BufferedReader reader = null; >> StringBuilder builder = new StringBuilder(); >> try { >> exchange.startBlocking(); >> reader = new BufferedReader(new InputStreamReader(exchange.get >> InputStream())); >> String line; >> while ((line = reader.readLine()) != null) { >> builder.append(line); >> } >> } catch (IOException e) { >> e.printStackTrace(); >> } finally { >> if (reader != null) { >> try { >> reader.close(); >> } catch (IOException e) { >> e.printStackTrace(); >> } >> } >> } >> String body = builder.toString(); >> System.out.println("Req Body ==> " + body); >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); >> exchange.getResponseSender().send("SUCCESS"); >> } else { >> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); >> exchange.getResponseSender().send("FAILURE"); >> } >> } >> }).build(); >> server.start(); >> } >> } >> >> On Thu, Jun 22, 2017 at 11:57 PM, SenthilKumar K >> wrote: >> >>> Seems to Reading Request body is wrong , So what is the efficient way of >>> reading request body in undertow ? >>> >>> --Senthil >>> >>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> > wrote: >>> >>>> Hello Undertow Dev Team , >>>> >>>> I have been working on the use case where i should create simple >>>> http server to serve 1.5 Million Requests per Second per Instance .. >>>> >>>> >>>> Here is the benchmark result of Undertow : >>>> >>>> Running 1m test @ http://127.0.0.1:8009/ >>>> 20 threads and 40 connections >>>> Thread Stats Avg Stdev Max +/- Stdev >>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>>> Req/Sec 1.12k 316.65 1.96k 54.50% >>>> Latency Distribution >>>> 50% 1.43ms >>>> 75% 2.38ms >>>> 90% 2.90ms >>>> 99% 10.45ms >>>> 1328133 requests in 1.00m, 167.19MB read >>>> Requests/sec: *22127*.92 >>>> Transfer/sec: 2.79MB >>>> >>>> This is less compared to other frameworks like Jetty and Netty .. But >>>> originally Undertow is high performant http server .. 
>>>> >>>> Hardware details: >>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>>> GHz) , Memory : 32 G , Available memory 31 G. >>>> >>>> I would need Undertow experts to review the server code below and >>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). >>>> >>>> Server : >>>> >>>> Undertow server = Undertow.builder() >>>> .addHttpListener(8009, "localhost") >>>> .setHandler(new Handler()).build(); >>>> server.start(); >>>> >>>> >>>> Handler.Java >>>> >>>> final Pooled pooledByteBuffer = >>>> exchange.getConnection().getBufferPool().allocate(); >>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>>> byteBuffer.clear(); >>>> exchange.getRequestChannel().read(byteBuffer); >>>> int pos = byteBuffer.position(); >>>> byteBuffer.rewind(); >>>> byte[] bytes = new byte[pos]; >>>> byteBuffer.get(bytes); >>>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>>> byteBuffer.clear(); >>>> pooledByteBuffer.free(); >>>> final PostToKafka post2Kafka = new PostToKafka(); >>>> try { >>>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>>> Millions events per sec }* >>>> } catch (Exception e) { >>>> e.printStackTrace(); >>>> } >>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>>> "text/plain"); >>>> exchange.getResponseSender().send("SUCCESS"); >>>> >>>> >>>> --Senthil >>>> >>> >>> >> >> _______________________________________________ >> undertow-dev mailing list >> undertow-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/5a023834/attachment-0001.html From kr at asseco.dk Thu Jun 22 17:58:10 2017 From: kr at asseco.dk (Kim Rasmussen) Date: Thu, 22 Jun 2017 23:58:10 +0200 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Instead of blocking on IO, read the request asynchronously with callbacks, similar to this: public void handleRequest(final HttpServerExchange exchange) throws Exception { if (exchange.getRequestMethod().equals(Methods.POST)) { exchange.getRequestReceiver().receiveFullBytes((exchange, data) -> { // Read succeeded exchange.dispatch(() -> { // Do something with the byte array here // When you are done, call: exchange.dispatch(exchange.getIoThread(), () -> { exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("SUCCESS"); }); } }, (exchange, exception) -> { // Handle failure exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchange.getResponseSender().send("FAILURE"); } } .... and never EVER (unless "debugging") write to system out :) /Kim 2017-06-22 21:13 GMT+02:00 SenthilKumar K : > At last i modified the code as below and still i see ~50K requests/sec .. 
> > public class HelloWorldServer { > > public static void main(final String[] args) { > Undertow server = Undertow.builder().addHttpListener(8009, > "localhost").setHandler(new HttpHandler() { > @Override > public void handleRequest(final HttpServerExchange exchange) throws > Exception { > > if (exchange.isInIoThread()) { > exchange.dispatch(this); > return; > } > if (exchange.getRequestMethod().equals(Methods.POST)) { > BufferedReader reader = null; > StringBuilder builder = new StringBuilder(); > try { > exchange.startBlocking(); > reader = new BufferedReader(new InputStreamReader(exchange. > getInputStream())); > String line; > while ((line = reader.readLine()) != null) { > builder.append(line); > } > } catch (IOException e) { > e.printStackTrace(); > } finally { > if (reader != null) { > try { > reader.close(); > } catch (IOException e) { > e.printStackTrace(); > } > } > } > String body = builder.toString(); > System.out.println("Req Body ==> " + body); > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("SUCCESS"); > } else { > exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchange.getResponseSender().send("FAILURE"); > } > } > }).build(); > server.start(); > } > } > > On Thu, Jun 22, 2017 at 11:57 PM, SenthilKumar K > wrote: > >> Seems to Reading Request body is wrong , So what is the efficient way of >> reading request body in undertow ? >> >> --Senthil >> >> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> wrote: >> >>> Hello Undertow Dev Team , >>> >>> I have been working on the use case where i should create simple >>> http server to serve 1.5 Million Requests per Second per Instance .. >>> >>> >>> Here is the benchmark result of Undertow : >>> >>> Running 1m test @ http://127.0.0.1:8009/ >>> 20 threads and 40 connections >>> Thread Stats Avg Stdev Max +/- Stdev >>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>> Req/Sec 1.12k 316.65 1.96k 54.50% >>> Latency Distribution >>> 50% 1.43ms >>> 75% 2.38ms >>> 90% 2.90ms >>> 99% 10.45ms >>> 1328133 requests in 1.00m, 167.19MB read >>> Requests/sec: *22127*.92 >>> Transfer/sec: 2.79MB >>> >>> This is less compared to other frameworks like Jetty and Netty .. But >>> originally Undertow is high performant http server .. >>> >>> Hardware details: >>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>> GHz) , Memory : 32 G , Available memory 31 G. >>> >>> I would need Undertow experts to review the server code below and advice >>> me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>>> >>> Server : >>> >>> Undertow server = Undertow.builder() >>> .addHttpListener(8009, "localhost") >>> .setHandler(new Handler()).build(); >>> server.start(); >>> >>> >>> Handler.Java >>> >>> final Pooled pooledByteBuffer = >>> exchange.getConnection().getBufferPool().allocate(); >>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>> byteBuffer.clear(); >>> exchange.getRequestChannel().read(byteBuffer); >>> int pos = byteBuffer.position(); >>> byteBuffer.rewind(); >>> byte[] bytes = new byte[pos]; >>> byteBuffer.get(bytes); >>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>> byteBuffer.clear(); >>> pooledByteBuffer.free(); >>> final PostToKafka post2Kafka = new PostToKafka(); >>> try { >>> *post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>> Millions events per sec }* >>> } catch (Exception e) { >>> e.printStackTrace(); >>> } >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>> "text/plain"); >>> exchange.getResponseSender().send("SUCCESS"); >>> >>> >>> --Senthil >>> >> >> > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev > -- Med venlig hilsen / Best regards *Kim Rasmussen* Partner, IT Architect *Asseco Denmark A/S* Kronprinsessegade 54 DK-1306 Copenhagen K Mobile: +45 26 16 40 23 Ph.: +45 33 36 46 60 Fax: +45 33 36 46 61 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170622/a7c66095/attachment.html From sdouglas at redhat.com Thu Jun 22 19:24:48 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Fri, 23 Jun 2017 09:24:48 +1000 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Are you actually testing with the 'System.out.println(" Received String ==> "+message);'. System.out is incredibly slow. Stuart On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K wrote: > Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ? > > Here is the JVM args to Server: > > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer > > > --Senthil > > > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard > wrote: >> >> Do you warm up your jvm prior to the testing? >> >> Cheers, >> Antoine >> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K >> wrote: >>> >>> Thanks Bill n Antoine .. >>> >>> >>> Here is the updated one : ( tried without Kafka API ) . 
>>> >>> public class HelloWorldServer { >>> >>> public static void main(final String[] args) { >>> Undertow server = Undertow.builder().addHttpListener(8009, >>> "localhost").setHandler(new HttpHandler() { >>> @Override >>> public void handleRequest(final HttpServerExchange exchange) throws >>> Exception { >>> if (exchange.getRequestMethod().equals(Methods.POST)) { >>> exchange.getRequestReceiver().receiveFullString(new >>> Receiver.FullStringCallback() { >>> @Override >>> public void handle(HttpServerExchange exchange, String >>> message) { >>> System.out.println(" Received String ==> "+message); >>> exchange.getResponseSender().send(message); >>> } >>> }); >>> } else { >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); >>> exchange.getResponseSender().send("FAILURE"); >>> } >>> } >>> }).build(); >>> server.start(); >>> } >>> } >>> >>> >>> Oops seems to no improvement : >>> >>> Running 1m test @ http://localhost:8009/ >>> 100 threads and 1000 connections >>> Thread Stats Avg Stdev Max +/- Stdev >>> Latency 25.79ms 22.18ms 289.48ms 67.66% >>> Req/Sec 437.76 61.71 2.30k 80.26% >>> Latency Distribution >>> 50% 22.60ms >>> 75% 37.83ms >>> 90% 55.32ms >>> 99% 90.47ms >>> 2625607 requests in 1.00m, 2.76GB read >>> Requests/sec: 43688.42 >>> Transfer/sec: 47.08MB >>> >>> >>> :-( :-( .. >>> >>> >>> --Senthil >>> >>> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard >>> wrote: >>>> >>>> You can use the Receiver API, specifically for that purpose. >>>> On the exchange, call: getRequestReceiver(); >>>> >>>> You will get a receiver object: >>>> >>>> https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java >>>> >>>> On the receiver you can call: receiveFullString, you have to pass it a >>>> callback that will be called when the whole body has been read. >>>> >>>> Please share your results when you test this further! >>>> >>>> Cheers, >>>> Antoine >>>> >>>> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K >>>> wrote: >>>>> >>>>> Seems to Reading Request body is wrong , So what is the efficient way >>>>> of reading request body in undertow ? >>>>> >>>>> --Senthil >>>>> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >>>>> wrote: >>>>>> >>>>>> Hello Undertow Dev Team , >>>>>> >>>>>> I have been working on the use case where i should create simple >>>>>> http server to serve 1.5 Million Requests per Second per Instance .. >>>>>> >>>>>> >>>>>> Here is the benchmark result of Undertow : >>>>>> >>>>>> Running 1m test @ http://127.0.0.1:8009/ >>>>>> 20 threads and 40 connections >>>>>> Thread Stats Avg Stdev Max +/- Stdev >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50% >>>>>> Latency Distribution >>>>>> 50% 1.43ms >>>>>> 75% 2.38ms >>>>>> 90% 2.90ms >>>>>> 99% 10.45ms >>>>>> 1328133 requests in 1.00m, 167.19MB read >>>>>> Requests/sec: 22127.92 >>>>>> Transfer/sec: 2.79MB >>>>>> >>>>>> This is less compared to other frameworks like Jetty and Netty .. But >>>>>> originally Undertow is high performant http server .. >>>>>> >>>>>> Hardware details: >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 >>>>>> GHz) , Memory : 32 G , Available memory 31 G. >>>>>> >>>>>> I would need Undertow experts to review the server code below and >>>>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). 
>>>>>> >>>>>> Server : >>>>>> >>>>>> Undertow server = Undertow.builder() >>>>>> .addHttpListener(8009, "localhost") >>>>>> .setHandler(new Handler()).build(); >>>>>> server.start(); >>>>>> >>>>>> >>>>>> Handler.Java >>>>>> >>>>>> final Pooled pooledByteBuffer = >>>>>> exchange.getConnection().getBufferPool().allocate(); >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>>>>> byteBuffer.clear(); >>>>>> exchange.getRequestChannel().read(byteBuffer); >>>>>> int pos = byteBuffer.position(); >>>>>> byteBuffer.rewind(); >>>>>> byte[] bytes = new byte[pos]; >>>>>> byteBuffer.get(bytes); >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8") ); >>>>>> byteBuffer.clear(); >>>>>> pooledByteBuffer.free(); >>>>>> final PostToKafka post2Kafka = new PostToKafka(); >>>>>> try { >>>>>> post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>>>>> Millions events per sec } >>>>>> } catch (Exception e) { >>>>>> e.printStackTrace(); >>>>>> } >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>>>>> "text/plain"); >>>>>> exchange.getResponseSender().send("SUCCESS"); >>>>>> >>>>>> >>>>>> --Senthil >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> undertow-dev mailing list >>>>> undertow-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>>> >>>> >>> >> > > > _______________________________________________ > undertow-dev mailing list > undertow-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/undertow-dev From antoine.girard at ymail.com Fri Jun 23 04:00:23 2017 From: antoine.girard at ymail.com (Antoine Girard) Date: Fri, 23 Jun 2017 10:00:23 +0200 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: Also, to come back on the JVM warmup, this will give you enough answers: https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup For your, it means that you have to run your tests for a few minutes before starting your actual measurements. I am also interested about how Netty / Jetty perform under the same conditions, please post! Cheers, Antoine On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas wrote: > Are you actually testing with the 'System.out.println(" Received > String ==> "+message);'. System.out is incredibly slow. > > Stuart > > On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K > wrote: > > Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ? > > > > Here is the JVM args to Server: > > > > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC > > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 > > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 > > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer > > > > > > --Senthil > > > > > > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard < > antoine.girard at ymail.com> > > wrote: > >> > >> Do you warm up your jvm prior to the testing? > >> > >> Cheers, > >> Antoine > >> > >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K < > senthilec566 at gmail.com> > >> wrote: > >>> > >>> Thanks Bill n Antoine .. > >>> > >>> > >>> Here is the updated one : ( tried without Kafka API ) . 
> >>> > >>> public class HelloWorldServer { > >>> > >>> public static void main(final String[] args) { > >>> Undertow server = Undertow.builder().addHttpListener(8009, > >>> "localhost").setHandler(new HttpHandler() { > >>> @Override > >>> public void handleRequest(final HttpServerExchange exchange) throws > >>> Exception { > >>> if (exchange.getRequestMethod().equals(Methods.POST)) { > >>> exchange.getRequestReceiver().receiveFullString(new > >>> Receiver.FullStringCallback() { > >>> @Override > >>> public void handle(HttpServerExchange exchange, > String > >>> message) { > >>> System.out.println(" Received String ==> > "+message); > >>> exchange.getResponseSender().send(message); > >>> } > >>> }); > >>> } else { > >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > >>> exchange.getResponseSender().send("FAILURE"); > >>> } > >>> } > >>> }).build(); > >>> server.start(); > >>> } > >>> } > >>> > >>> > >>> Oops seems to no improvement : > >>> > >>> Running 1m test @ http://localhost:8009/ > >>> 100 threads and 1000 connections > >>> Thread Stats Avg Stdev Max +/- Stdev > >>> Latency 25.79ms 22.18ms 289.48ms 67.66% > >>> Req/Sec 437.76 61.71 2.30k 80.26% > >>> Latency Distribution > >>> 50% 22.60ms > >>> 75% 37.83ms > >>> 90% 55.32ms > >>> 99% 90.47ms > >>> 2625607 requests in 1.00m, 2.76GB read > >>> Requests/sec: 43688.42 > >>> Transfer/sec: 47.08MB > >>> > >>> > >>> :-( :-( .. > >>> > >>> > >>> --Senthil > >>> > >>> > >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard > >>> wrote: > >>>> > >>>> You can use the Receiver API, specifically for that purpose. > >>>> On the exchange, call: getRequestReceiver(); > >>>> > >>>> You will get a receiver object: > >>>> > >>>> https://github.com/undertow-io/undertow/blob/master/core/ > src/main/java/io/undertow/io/Receiver.java > >>>> > >>>> On the receiver you can call: receiveFullString, you have to pass it a > >>>> callback that will be called when the whole body has been read. > >>>> > >>>> Please share your results when you test this further! > >>>> > >>>> Cheers, > >>>> Antoine > >>>> > >>>> > >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K < > senthilec566 at gmail.com> > >>>> wrote: > >>>>> > >>>>> Seems to Reading Request body is wrong , So what is the efficient way > >>>>> of reading request body in undertow ? > >>>>> > >>>>> --Senthil > >>>>> > >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K > >>>>> wrote: > >>>>>> > >>>>>> Hello Undertow Dev Team , > >>>>>> > >>>>>> I have been working on the use case where i should create > simple > >>>>>> http server to serve 1.5 Million Requests per Second per Instance .. > >>>>>> > >>>>>> > >>>>>> Here is the benchmark result of Undertow : > >>>>>> > >>>>>> Running 1m test @ http://127.0.0.1:8009/ > >>>>>> 20 threads and 40 connections > >>>>>> Thread Stats Avg Stdev Max +/- Stdev > >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28% > >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50% > >>>>>> Latency Distribution > >>>>>> 50% 1.43ms > >>>>>> 75% 2.38ms > >>>>>> 90% 2.90ms > >>>>>> 99% 10.45ms > >>>>>> 1328133 requests in 1.00m, 167.19MB read > >>>>>> Requests/sec: 22127.92 > >>>>>> Transfer/sec: 2.79MB > >>>>>> > >>>>>> This is less compared to other frameworks like Jetty and Netty .. > But > >>>>>> originally Undertow is high performant http server .. > >>>>>> > >>>>>> Hardware details: > >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity 4 > >>>>>> GHz) , Memory : 32 G , Available memory 31 G. 
> >>>>>> > >>>>>> I would need Undertow experts to review the server code below and > >>>>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec ). > >>>>>> > >>>>>> Server : > >>>>>> > >>>>>> Undertow server = Undertow.builder() > >>>>>> .addHttpListener(8009, "localhost") > >>>>>> .setHandler(new Handler()).build(); > >>>>>> server.start(); > >>>>>> > >>>>>> > >>>>>> Handler.Java > >>>>>> > >>>>>> final Pooled pooledByteBuffer = > >>>>>> exchange.getConnection(). > getBufferPool().allocate(); > >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); > >>>>>> byteBuffer.clear(); > >>>>>> exchange.getRequestChannel().read(byteBuffer); > >>>>>> int pos = byteBuffer.position(); > >>>>>> byteBuffer.rewind(); > >>>>>> byte[] bytes = new byte[pos]; > >>>>>> byteBuffer.get(bytes); > >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8") > ); > >>>>>> byteBuffer.clear(); > >>>>>> pooledByteBuffer.free(); > >>>>>> final PostToKafka post2Kafka = new PostToKafka(); > >>>>>> try { > >>>>>> post2Kafka.write2Kafka(requestBody); { This API can handle ~2 > >>>>>> Millions events per sec } > >>>>>> } catch (Exception e) { > >>>>>> e.printStackTrace(); > >>>>>> } > >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, > >>>>>> "text/plain"); > >>>>>> exchange.getResponseSender().send("SUCCESS"); > >>>>>> > >>>>>> > >>>>>> --Senthil > >>>>> > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> undertow-dev mailing list > >>>>> undertow-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev > >>>> > >>>> > >>> > >> > > > > > > _______________________________________________ > > undertow-dev mailing list > > undertow-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/undertow-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170623/885be25e/attachment.html From openindiana at out-side.nl Mon Jun 26 08:52:20 2017 From: openindiana at out-side.nl (the outsider) Date: Mon, 26 Jun 2017 14:52:20 +0200 Subject: [undertow-dev] questions about undertow.js In-Reply-To: <000e01d2ee79$4ac6ed10$e054c730$@out-side.nl> References: <000e01d2ee79$4ac6ed10$e054c730$@out-side.nl> Message-ID: <002101d2ee7b$0c6a5910$253f0b30$@out-side.nl> Dear team, Is there a place where i can ask some questions regarding undertow.js ? For some reason I cannot get the POST function functioning in combination with angular POST requests. It must be something stupid on my side, but I am a bit lost at the moment. (I have read http://wildfly.org/news/2015/08/10/Javascript-Support-In-Wildfly/ but the post examples don't work either) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170626/294d6db7/attachment-0001.html From senthilec566 at gmail.com Wed Jun 28 03:42:30 2017 From: senthilec566 at gmail.com (SenthilKumar K) Date: Wed, 28 Jun 2017 13:12:30 +0530 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: After modifying the code below i could see the improvement ( not much slightly ) in server - 65k req/sec. 
import io.undertow.server.HttpHandler; import io.undertow.server.HttpServerExchange; import io.undertow.util.Headers; import io.undertow.util.Methods; public class DLRHandler implements HttpHandler { final public static String _SUCCESS="SUCCESS"; final public static String _FAILURE="FAILURE"; final PostToKafka post2Kafka = new PostToKafka(); @Override public void handleRequest( final HttpServerExchange exchange) throws Exception { if (exchange.getRequestMethod().equals(Methods.POST)) { exchange.getRequestReceiver().receiveFullString(( exchangeReq, data) -> { exchangeReq.dispatch(() -> { post2Kafka.write2Kafka(data); // write it to Kafka exchangeReq.dispatch(exchangeReq.getIoThread(), () -> { exchangeReq.getResponseHeaders(). put(Headers.CONTENT_TYPE, "text/plain"); exchangeReq.getResponseSender().send(_SUCCESS); }); }); }, (exchangeReq, exception) -> { exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); exchangeReq.getResponseSender().send(_FAILURE); }); }else{ throw new Exception("Method GET not supported by Server "); } } } Pls review this and let me know if i'm doing anything wrong here ... --Senthil On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard wrote: > Also, to come back on the JVM warmup, this will give you enough answers: > https://stackoverflow.com/questions/36198278/why-does- > the-jvm-require-warmup > > For your, it means that you have to run your tests for a few minutes > before starting your actual measurements. > > I am also interested about how Netty / Jetty perform under the same > conditions, please post! > > Cheers, > Antoine > > On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas > wrote: > >> Are you actually testing with the 'System.out.println(" Received >> String ==> "+message);'. System.out is incredibly slow. >> >> Stuart >> >> On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K >> wrote: >> > Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ? >> > >> > Here is the JVM args to Server: >> > >> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC >> > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 >> > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 >> > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer >> > >> > >> > --Senthil >> > >> > >> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard < >> antoine.girard at ymail.com> >> > wrote: >> >> >> >> Do you warm up your jvm prior to the testing? >> >> >> >> Cheers, >> >> Antoine >> >> >> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K < >> senthilec566 at gmail.com> >> >> wrote: >> >>> >> >>> Thanks Bill n Antoine .. >> >>> >> >>> >> >>> Here is the updated one : ( tried without Kafka API ) . 
>> >>> >> >>> public class HelloWorldServer { >> >>> >> >>> public static void main(final String[] args) { >> >>> Undertow server = Undertow.builder().addHttpListener(8009, >> >>> "localhost").setHandler(new HttpHandler() { >> >>> @Override >> >>> public void handleRequest(final HttpServerExchange exchange) throws >> >>> Exception { >> >>> if (exchange.getRequestMethod().equals(Methods.POST)) { >> >>> exchange.getRequestReceiver().receiveFullString(new >> >>> Receiver.FullStringCallback() { >> >>> @Override >> >>> public void handle(HttpServerExchange exchange, >> String >> >>> message) { >> >>> System.out.println(" Received String ==> >> "+message); >> >>> exchange.getResponseSender().send(message); >> >>> } >> >>> }); >> >>> } else { >> >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> "text/plain"); >> >>> exchange.getResponseSender().send("FAILURE"); >> >>> } >> >>> } >> >>> }).build(); >> >>> server.start(); >> >>> } >> >>> } >> >>> >> >>> >> >>> Oops seems to no improvement : >> >>> >> >>> Running 1m test @ http://localhost:8009/ >> >>> 100 threads and 1000 connections >> >>> Thread Stats Avg Stdev Max +/- Stdev >> >>> Latency 25.79ms 22.18ms 289.48ms 67.66% >> >>> Req/Sec 437.76 61.71 2.30k 80.26% >> >>> Latency Distribution >> >>> 50% 22.60ms >> >>> 75% 37.83ms >> >>> 90% 55.32ms >> >>> 99% 90.47ms >> >>> 2625607 requests in 1.00m, 2.76GB read >> >>> Requests/sec: 43688.42 >> >>> Transfer/sec: 47.08MB >> >>> >> >>> >> >>> :-( :-( .. >> >>> >> >>> >> >>> --Senthil >> >>> >> >>> >> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard >> >>> wrote: >> >>>> >> >>>> You can use the Receiver API, specifically for that purpose. >> >>>> On the exchange, call: getRequestReceiver(); >> >>>> >> >>>> You will get a receiver object: >> >>>> >> >>>> https://github.com/undertow-io/undertow/blob/master/core/src >> /main/java/io/undertow/io/Receiver.java >> >>>> >> >>>> On the receiver you can call: receiveFullString, you have to pass it >> a >> >>>> callback that will be called when the whole body has been read. >> >>>> >> >>>> Please share your results when you test this further! >> >>>> >> >>>> Cheers, >> >>>> Antoine >> >>>> >> >>>> >> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K < >> senthilec566 at gmail.com> >> >>>> wrote: >> >>>>> >> >>>>> Seems to Reading Request body is wrong , So what is the efficient >> way >> >>>>> of reading request body in undertow ? >> >>>>> >> >>>>> --Senthil >> >>>>> >> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >> >>>>> wrote: >> >>>>>> >> >>>>>> Hello Undertow Dev Team , >> >>>>>> >> >>>>>> I have been working on the use case where i should create >> simple >> >>>>>> http server to serve 1.5 Million Requests per Second per Instance >> .. >> >>>>>> >> >>>>>> >> >>>>>> Here is the benchmark result of Undertow : >> >>>>>> >> >>>>>> Running 1m test @ http://127.0.0.1:8009/ >> >>>>>> 20 threads and 40 connections >> >>>>>> Thread Stats Avg Stdev Max +/- Stdev >> >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >> >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50% >> >>>>>> Latency Distribution >> >>>>>> 50% 1.43ms >> >>>>>> 75% 2.38ms >> >>>>>> 90% 2.90ms >> >>>>>> 99% 10.45ms >> >>>>>> 1328133 requests in 1.00m, 167.19MB read >> >>>>>> Requests/sec: 22127.92 >> >>>>>> Transfer/sec: 2.79MB >> >>>>>> >> >>>>>> This is less compared to other frameworks like Jetty and Netty .. >> But >> >>>>>> originally Undertow is high performant http server .. 
>> >>>>>> >> >>>>>> Hardware details: >> >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity >> 4 >> >>>>>> GHz) , Memory : 32 G , Available memory 31 G. >> >>>>>> >> >>>>>> I would need Undertow experts to review the server code below and >> >>>>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec >> ). >> >>>>>> >> >>>>>> Server : >> >>>>>> >> >>>>>> Undertow server = Undertow.builder() >> >>>>>> .addHttpListener(8009, "localhost") >> >>>>>> .setHandler(new Handler()).build(); >> >>>>>> server.start(); >> >>>>>> >> >>>>>> >> >>>>>> Handler.Java >> >>>>>> >> >>>>>> final Pooled pooledByteBuffer = >> >>>>>> exchange.getConnection().getBu >> fferPool().allocate(); >> >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >> >>>>>> byteBuffer.clear(); >> >>>>>> exchange.getRequestChannel().read(byteBuffer); >> >>>>>> int pos = byteBuffer.position(); >> >>>>>> byteBuffer.rewind(); >> >>>>>> byte[] bytes = new byte[pos]; >> >>>>>> byteBuffer.get(bytes); >> >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8") >> ); >> >>>>>> byteBuffer.clear(); >> >>>>>> pooledByteBuffer.free(); >> >>>>>> final PostToKafka post2Kafka = new PostToKafka(); >> >>>>>> try { >> >>>>>> post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >> >>>>>> Millions events per sec } >> >>>>>> } catch (Exception e) { >> >>>>>> e.printStackTrace(); >> >>>>>> } >> >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >> >>>>>> "text/plain"); >> >>>>>> exchange.getResponseSender().send("SUCCESS"); >> >>>>>> >> >>>>>> >> >>>>>> --Senthil >> >>>>> >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> undertow-dev mailing list >> >>>>> undertow-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >> >>>> >> >>>> >> >>> >> >> >> > >> > >> > _______________________________________________ >> > undertow-dev mailing list >> > undertow-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/undertow-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170628/1bfc0db8/attachment-0001.html From sdouglas at redhat.com Wed Jun 28 18:59:22 2017 From: sdouglas at redhat.com (Stuart Douglas) Date: Thu, 29 Jun 2017 08:59:22 +1000 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: The multiple dispatches() are unnecessary (well the second one to the IO thread is definitely unnecessary, the first one is only required if post2Kafka.write2Kafka(data); is a blocking operation and needs to be executed in a worker thread). Stuart On Wed, Jun 28, 2017 at 5:42 PM, SenthilKumar K wrote: > After modifying the code below i could see the improvement ( not much > slightly ) in server - 65k req/sec. 
> > import io.undertow.server.HttpHandler; > import io.undertow.server.HttpServerExchange; > import io.undertow.util.Headers; > import io.undertow.util.Methods; > > public class DLRHandler implements HttpHandler { > > final public static String _SUCCESS="SUCCESS"; > final public static String _FAILURE="FAILURE"; > final PostToKafka post2Kafka = new PostToKafka(); > > @Override > public void handleRequest( final HttpServerExchange exchange) throws > Exception { > if (exchange.getRequestMethod().equals(Methods.POST)) { > exchange.getRequestReceiver().receiveFullString(( > exchangeReq, data) -> { > exchangeReq.dispatch(() -> { > post2Kafka.write2Kafka(data); // write it to Kafka > exchangeReq.dispatch(exchangeReq.getIoThread(), () -> > { > > exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain"); > exchangeReq.getResponseSender().send(_SUCCESS); > }); > }); > }, > (exchangeReq, exception) -> { > exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, > "text/plain"); > exchangeReq.getResponseSender().send(_FAILURE); > }); > }else{ > throw new Exception("Method GET not supported by Server "); > } > } > } > > > Pls review this and let me know if i'm doing anything wrong here ... > --Senthil > > On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard > wrote: >> >> Also, to come back on the JVM warmup, this will give you enough answers: >> >> https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup >> >> For your, it means that you have to run your tests for a few minutes >> before starting your actual measurements. >> >> I am also interested about how Netty / Jetty perform under the same >> conditions, please post! >> >> Cheers, >> Antoine >> >> On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas >> wrote: >>> >>> Are you actually testing with the 'System.out.println(" Received >>> String ==> "+message);'. System.out is incredibly slow. >>> >>> Stuart >>> >>> On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K >>> wrote: >>> > Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ? >>> > >>> > Here is the JVM args to Server: >>> > >>> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC >>> > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 >>> > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 >>> > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer >>> > >>> > >>> > --Senthil >>> > >>> > >>> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard >>> > >>> > wrote: >>> >> >>> >> Do you warm up your jvm prior to the testing? >>> >> >>> >> Cheers, >>> >> Antoine >>> >> >>> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K >>> >> >>> >> wrote: >>> >>> >>> >>> Thanks Bill n Antoine .. >>> >>> >>> >>> >>> >>> Here is the updated one : ( tried without Kafka API ) . 
>>> >>> >>> >>> public class HelloWorldServer { >>> >>> >>> >>> public static void main(final String[] args) { >>> >>> Undertow server = Undertow.builder().addHttpListener(8009, >>> >>> "localhost").setHandler(new HttpHandler() { >>> >>> @Override >>> >>> public void handleRequest(final HttpServerExchange exchange) throws >>> >>> Exception { >>> >>> if (exchange.getRequestMethod().equals(Methods.POST)) { >>> >>> exchange.getRequestReceiver().receiveFullString(new >>> >>> Receiver.FullStringCallback() { >>> >>> @Override >>> >>> public void handle(HttpServerExchange exchange, >>> >>> String >>> >>> message) { >>> >>> System.out.println(" Received String ==> >>> >>> "+message); >>> >>> exchange.getResponseSender().send(message); >>> >>> } >>> >>> }); >>> >>> } else { >>> >>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>> >>> "text/plain"); >>> >>> exchange.getResponseSender().send("FAILURE"); >>> >>> } >>> >>> } >>> >>> }).build(); >>> >>> server.start(); >>> >>> } >>> >>> } >>> >>> >>> >>> >>> >>> Oops seems to no improvement : >>> >>> >>> >>> Running 1m test @ http://localhost:8009/ >>> >>> 100 threads and 1000 connections >>> >>> Thread Stats Avg Stdev Max +/- Stdev >>> >>> Latency 25.79ms 22.18ms 289.48ms 67.66% >>> >>> Req/Sec 437.76 61.71 2.30k 80.26% >>> >>> Latency Distribution >>> >>> 50% 22.60ms >>> >>> 75% 37.83ms >>> >>> 90% 55.32ms >>> >>> 99% 90.47ms >>> >>> 2625607 requests in 1.00m, 2.76GB read >>> >>> Requests/sec: 43688.42 >>> >>> Transfer/sec: 47.08MB >>> >>> >>> >>> >>> >>> :-( :-( .. >>> >>> >>> >>> >>> >>> --Senthil >>> >>> >>> >>> >>> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard >>> >>> wrote: >>> >>>> >>> >>>> You can use the Receiver API, specifically for that purpose. >>> >>>> On the exchange, call: getRequestReceiver(); >>> >>>> >>> >>>> You will get a receiver object: >>> >>>> >>> >>>> >>> >>>> https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java >>> >>>> >>> >>>> On the receiver you can call: receiveFullString, you have to pass it >>> >>>> a >>> >>>> callback that will be called when the whole body has been read. >>> >>>> >>> >>>> Please share your results when you test this further! >>> >>>> >>> >>>> Cheers, >>> >>>> Antoine >>> >>>> >>> >>>> >>> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K >>> >>>> >>> >>>> wrote: >>> >>>>> >>> >>>>> Seems to Reading Request body is wrong , So what is the efficient >>> >>>>> way >>> >>>>> of reading request body in undertow ? >>> >>>>> >>> >>>>> --Senthil >>> >>>>> >>> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K >>> >>>>> wrote: >>> >>>>>> >>> >>>>>> Hello Undertow Dev Team , >>> >>>>>> >>> >>>>>> I have been working on the use case where i should create >>> >>>>>> simple >>> >>>>>> http server to serve 1.5 Million Requests per Second per Instance >>> >>>>>> .. >>> >>>>>> >>> >>>>>> >>> >>>>>> Here is the benchmark result of Undertow : >>> >>>>>> >>> >>>>>> Running 1m test @ http://127.0.0.1:8009/ >>> >>>>>> 20 threads and 40 connections >>> >>>>>> Thread Stats Avg Stdev Max +/- Stdev >>> >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28% >>> >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50% >>> >>>>>> Latency Distribution >>> >>>>>> 50% 1.43ms >>> >>>>>> 75% 2.38ms >>> >>>>>> 90% 2.90ms >>> >>>>>> 99% 10.45ms >>> >>>>>> 1328133 requests in 1.00m, 167.19MB read >>> >>>>>> Requests/sec: 22127.92 >>> >>>>>> Transfer/sec: 2.79MB >>> >>>>>> >>> >>>>>> This is less compared to other frameworks like Jetty and Netty .. 
>>> >>>>>> But >>> >>>>>> originally Undertow is high performant http server .. >>> >>>>>> >>> >>>>>> Hardware details: >>> >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores ( Clock 100 MHz, Capacity >>> >>>>>> 4 >>> >>>>>> GHz) , Memory : 32 G , Available memory 31 G. >>> >>>>>> >>> >>>>>> I would need Undertow experts to review the server code below and >>> >>>>>> advice me on tuning to achieve my goal( ~1.5 Million requests/sec >>> >>>>>> ). >>> >>>>>> >>> >>>>>> Server : >>> >>>>>> >>> >>>>>> Undertow server = Undertow.builder() >>> >>>>>> .addHttpListener(8009, "localhost") >>> >>>>>> .setHandler(new Handler()).build(); >>> >>>>>> server.start(); >>> >>>>>> >>> >>>>>> >>> >>>>>> Handler.Java >>> >>>>>> >>> >>>>>> final Pooled pooledByteBuffer = >>> >>>>>> >>> >>>>>> exchange.getConnection().getBufferPool().allocate(); >>> >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource(); >>> >>>>>> byteBuffer.clear(); >>> >>>>>> exchange.getRequestChannel().read(byteBuffer); >>> >>>>>> int pos = byteBuffer.position(); >>> >>>>>> byteBuffer.rewind(); >>> >>>>>> byte[] bytes = new byte[pos]; >>> >>>>>> byteBuffer.get(bytes); >>> >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8") >>> >>>>>> ); >>> >>>>>> byteBuffer.clear(); >>> >>>>>> pooledByteBuffer.free(); >>> >>>>>> final PostToKafka post2Kafka = new PostToKafka(); >>> >>>>>> try { >>> >>>>>> post2Kafka.write2Kafka(requestBody); { This API can handle ~2 >>> >>>>>> Millions events per sec } >>> >>>>>> } catch (Exception e) { >>> >>>>>> e.printStackTrace(); >>> >>>>>> } >>> >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, >>> >>>>>> "text/plain"); >>> >>>>>> exchange.getResponseSender().send("SUCCESS"); >>> >>>>>> >>> >>>>>> >>> >>>>>> --Senthil >>> >>>>> >>> >>>>> >>> >>>>> >>> >>>>> _______________________________________________ >>> >>>>> undertow-dev mailing list >>> >>>>> undertow-dev at lists.jboss.org >>> >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev >>> >>>> >>> >>>> >>> >>> >>> >> >>> > >>> > >>> > _______________________________________________ >>> > undertow-dev mailing list >>> > undertow-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/undertow-dev >> >> > From bill at dartalley.com Wed Jun 28 22:04:05 2017 From: bill at dartalley.com (Bill O'Neil) Date: Wed, 28 Jun 2017 22:04:05 -0400 Subject: [undertow-dev] Undertow Http Server - Handling 2 Millions Requests Per Second Per Instance In-Reply-To: References: Message-ID: 1. Can you run the benchmark with the kafka line commented out at first and then again with it not commented out? 2. What rates were you getting with Jetty and Netty? 3. Are you running the tests from the same machine or a different one? If its the same machine and its using 20 threads they will be contending with undertows IO threads. 4. You can probably ignore the POST check if thats all your going to accept and its not a public api. 
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;
import io.undertow.util.Methods;

public class DLRHandler implements HttpHandler {

    final public static String _SUCCESS = "SUCCESS";
    final public static String _FAILURE = "FAILURE";
    final PostToKafka post2Kafka = new PostToKafka();

    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        if (exchange.getRequestMethod().equals(Methods.POST)) {
            exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
                //post2Kafka.write2Kafka(data); // write it to Kafka
                exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                exchangeReq.getResponseSender().send(_SUCCESS);
            }, (exchangeReq, exception) -> {
                exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                exchangeReq.getResponseSender().send(_FAILURE);
            });
        } else {
            throw new Exception("Method GET not supported by Server ");
        }
    }
}

On Wed, Jun 28, 2017 at 6:59 PM, Stuart Douglas wrote:
> The multiple dispatches() are unnecessary (well the second one to the IO thread is definitely unnecessary, the first one is only required if post2Kafka.write2Kafka(data); is a blocking operation and needs to be executed in a worker thread).
>
> Stuart
>
> On Wed, Jun 28, 2017 at 5:42 PM, SenthilKumar K wrote:
> > After modifying the code below i could see the improvement ( not much slightly ) in server - 65k req/sec.
> >
> > import io.undertow.server.HttpHandler;
> > import io.undertow.server.HttpServerExchange;
> > import io.undertow.util.Headers;
> > import io.undertow.util.Methods;
> >
> > public class DLRHandler implements HttpHandler {
> >
> >     final public static String _SUCCESS = "SUCCESS";
> >     final public static String _FAILURE = "FAILURE";
> >     final PostToKafka post2Kafka = new PostToKafka();
> >
> >     @Override
> >     public void handleRequest(final HttpServerExchange exchange) throws Exception {
> >         if (exchange.getRequestMethod().equals(Methods.POST)) {
> >             exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
> >                 exchangeReq.dispatch(() -> {
> >                     post2Kafka.write2Kafka(data); // write it to Kafka
> >                     exchangeReq.dispatch(exchangeReq.getIoThread(), () -> {
> >                         exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
> >                         exchangeReq.getResponseSender().send(_SUCCESS);
> >                     });
> >                 });
> >             }, (exchangeReq, exception) -> {
> >                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
> >                 exchangeReq.getResponseSender().send(_FAILURE);
> >             });
> >         } else {
> >             throw new Exception("Method GET not supported by Server ");
> >         }
> >     }
> > }
> >
> > Pls review this and let me know if i'm doing anything wrong here ...
> >
> > --Senthil
> >
> > On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard <antoine.girard at ymail.com> wrote:
> >> Also, to come back on the JVM warmup, this will give you enough answers:
> >> https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup
> >>
> >> For your, it means that you have to run your tests for a few minutes before starting your actual measurements.
> >>
> >> I am also interested about how Netty / Jetty perform under the same conditions, please post!
> >>
> >> Cheers,
> >> Antoine
> >>
> >> On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas wrote:
> >>> Are you actually testing with the 'System.out.println(" Received String ==> "+message);'. System.out is incredibly slow.
> >>>
> >>> Stuart
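To make Stuart's point about dispatch() concrete, here is a minimal sketch of the single-dispatch variant, assuming write2Kafka is a blocking call (PostToKafka is the poster's own class and is only assumed here): the blocking work is dispatched to the worker pool once, and the response is sent from that same worker thread, with no second dispatch back to the IO thread.

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;
import io.undertow.util.Methods;

public class SingleDispatchHandler implements HttpHandler {

    private final PostToKafka post2Kafka = new PostToKafka(); // the poster's own class, assumed to exist

    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        if (!exchange.getRequestMethod().equals(Methods.POST)) {
            exchange.setStatusCode(405);
            exchange.getResponseSender().send("FAILURE");
            return;
        }
        exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
            // Single dispatch to a worker thread, needed only because write2Kafka blocks.
            exchangeReq.dispatch(() -> {
                try {
                    post2Kafka.write2Kafka(data);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                // The response can be sent directly from the worker thread;
                // there is no need to dispatch back to the IO thread.
                exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                exchangeReq.getResponseSender().send("SUCCESS");
            });
        }, (exchangeReq, exception) -> {
            exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
            exchangeReq.getResponseSender().send("FAILURE");
        });
    }
}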
> >>> On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K <senthilec566 at gmail.com> wrote:
> >>> > Sorry , I'm not an expert in JVM .. How do we do Warm Up JVM ?
> >>> >
> >>> > Here is the JVM args to Server:
> >>> >
> >>> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer
> >>> >
> >>> > --Senthil
> >>> >
> >>> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard wrote:
> >>> >> Do you warm up your jvm prior to the testing?
> >>> >>
> >>> >> Cheers,
> >>> >> Antoine
> >>> >>
> >>> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K wrote:
> >>> >>> Thanks Bill n Antoine ..
>
> _______________________________________________
> undertow-dev mailing list
> undertow-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/undertow-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170628/7e917def/attachment-0001.html

From bdw429s at gmail.com Fri Jun 30 19:04:57 2017
From: bdw429s at gmail.com (Brad Wood)
Date: Fri, 30 Jun 2017 18:04:57 -0500
Subject: [undertow-dev] Can't get remote_user with basic auth
Message-ID: 

Hello, I'm having trouble getting the remote_user CGI variable when using basic authentication. The basic auth itself seems to work fine: the browser challenges me, I enter a user/pass, and the page loads. However, request.getRemoteUser() is returning null.

Here are the bits that set up the basic auth handler:
https://github.com/cfmlprojects/runwar/blob/master/src/runwar/security/SecurityManager.java#L24

I've Googled quite a bit and I can't find any guides that indicate that anything special needs to be set up for the remote user to be available. I also found the exchange attribute class for the remote user in Undertow, but I can't find any docs or guides at all that indicate how it is to be used, or whether I need to be doing anything with it in regards to basic auth.

Can someone provide a sanity check on what is missing for the remote user to be available?

Using Undertow 1.4.11.Final

Thanks!

~Brad

*Developer Advocate*
*Ortus Solutions, Corp*

E-mail: brad at coldbox.org
ColdBox Platform: http://www.coldbox.org
Blog: http://www.codersrevolution.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170630/980fcf1a/attachment.html
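As background for the exchange that follows: when basic auth is installed as native handlers (which is what the linked SecurityManager class appears to do), the authenticated user lives on the exchange's SecurityContext rather than on the Servlet request. The sketch below only shows where that principal can be read from a plain handler; the handler name is made up for illustration. Whether the Servlet layer then surfaces the same principal through request.getRemoteUser() depends on how the deployment is wired up, which is what the reply below addresses.

import io.undertow.security.api.SecurityContext;
import io.undertow.security.idm.Account;
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;

// Minimal sketch: reading the authenticated user from the exchange when the
// native security handlers (not the Servlet API) perform basic auth.
public class WhoAmIHandler implements HttpHandler {
    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        SecurityContext context = exchange.getSecurityContext();
        Account account = context != null ? context.getAuthenticatedAccount() : null;
        String user = account != null ? account.getPrincipal().getName() : null;
        exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
        exchange.getResponseSender().send("remote user: " + user);
    }
}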
From sdouglas at redhat.com Fri Jun 30 22:02:46 2017
From: sdouglas at redhat.com (Stuart Douglas)
Date: Sat, 1 Jul 2017 12:02:46 +1000
Subject: [undertow-dev] Can't get remote_user with basic auth
In-Reply-To: 
References: 
Message-ID: 

It sounds like you are trying to use the Servlet API?

If so, then you don't install the basic auth handler manually; you set the appropriate info into DeploymentInfo and Undertow will set it up for you.

Here is an example:
https://github.com/undertow-io/undertow/blob/master/servlet/src/test/java/io/undertow/servlet/test/security/basic/ServletBasicAuthTestCase.java#L89

Stuart

On Sat, Jul 1, 2017 at 9:04 AM, Brad Wood wrote:
> Hello, I'm having trouble getting the remote_user CGI variable when using basic authentication. The basic auth itself seems to work fine: the browser challenges me, I enter a user/pass, and the page loads. However, request.getRemoteUser() is returning null.
>
> Here are the bits that set up the basic auth handler:
> https://github.com/cfmlprojects/runwar/blob/master/src/runwar/security/SecurityManager.java#L24
>
> I've Googled quite a bit and I can't find any guides that indicate that anything special needs to be set up for the remote user to be available. I also found the exchange attribute class for the remote user in Undertow, but I can't find any docs or guides at all that indicate how it is to be used, or whether I need to be doing anything with it in regards to basic auth.
>
> Can someone provide a sanity check on what is missing for the remote user to be available?
>
> Using Undertow 1.4.11.Final
>
> Thanks!
>
> ~Brad
>
> Developer Advocate
> Ortus Solutions, Corp
>
> E-mail: brad at coldbox.org
> ColdBox Platform: http://www.coldbox.org
> Blog: http://www.codersrevolution.com
>
> _______________________________________________
> undertow-dev mailing list
> undertow-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/undertow-dev

From bdw429s at gmail.com Fri Jun 30 22:31:02 2017
From: bdw429s at gmail.com (Brad Wood)
Date: Fri, 30 Jun 2017 21:31:02 -0500
Subject: [undertow-dev] Can't get remote_user with basic auth
In-Reply-To: 
References: 
Message-ID: 

Ahh, thanks for that. Let me confer with the lead dev for this project and we'll see if we can get this squared away.

Thanks!

~Brad

*Developer Advocate*
*Ortus Solutions, Corp*

E-mail: brad at coldbox.org
ColdBox Platform: http://www.coldbox.org
Blog: http://www.codersrevolution.com

On Fri, Jun 30, 2017 at 9:02 PM, Stuart Douglas wrote:
> It sounds like you are trying to use the Servlet API?
>
> If so, then you don't install the basic auth handler manually; you set the appropriate info into DeploymentInfo and Undertow will set it up for you.
>
> Here is an example:
> https://github.com/undertow-io/undertow/blob/master/servlet/src/test/java/io/undertow/servlet/test/security/basic/ServletBasicAuthTestCase.java#L89
>
> Stuart
>
> On Sat, Jul 1, 2017 at 9:04 AM, Brad Wood wrote:
> > Hello, I'm having trouble getting the remote_user CGI variable when using basic authentication. The basic auth itself seems to work fine: the browser challenges me, I enter a user/pass, and the page loads. However, request.getRemoteUser() is returning null.
> >
> > Here are the bits that set up the basic auth handler:
> > https://github.com/cfmlprojects/runwar/blob/master/src/runwar/security/SecurityManager.java#L24
> >
> > I've Googled quite a bit and I can't find any guides that indicate that anything special needs to be set up for the remote user to be available. I also found the exchange attribute class for the remote user in Undertow, but I can't find any docs or guides at all that indicate how it is to be used, or whether I need to be doing anything with it in regards to basic auth.
> >
> > Can someone provide a sanity check on what is missing for the remote user to be available?
> >
> > Using Undertow 1.4.11.Final
> >
> > Thanks!
> >
> > ~Brad
> >
> > Developer Advocate
> > Ortus Solutions, Corp
> >
> > E-mail: brad at coldbox.org
> > ColdBox Platform: http://www.coldbox.org
> > Blog: http://www.codersrevolution.com
> >
> > _______________________________________________
> > undertow-dev mailing list
> > undertow-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/undertow-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/undertow-dev/attachments/20170630/7fc31cad/attachment.html
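To flesh out Stuart's pointer, here is a self-contained sketch of the DeploymentInfo approach, loosely following the linked test case. The realm name, user, role, servlet, and the hard-coded identity manager below are all made up for illustration; a real deployment (runwar included) would plug in its own IdentityManager. The point is that the LoginConfig and security constraint go on the DeploymentInfo, so Undertow's servlet layer installs the BASIC mechanism itself and request.getRemoteUser() is populated.

import io.undertow.Undertow;
import io.undertow.security.idm.Account;
import io.undertow.security.idm.Credential;
import io.undertow.security.idm.IdentityManager;
import io.undertow.security.idm.PasswordCredential;
import io.undertow.server.HttpHandler;
import io.undertow.servlet.Servlets;
import io.undertow.servlet.api.DeploymentInfo;
import io.undertow.servlet.api.DeploymentManager;
import io.undertow.servlet.api.LoginConfig;
import io.undertow.servlet.api.SecurityConstraint;
import io.undertow.servlet.api.WebResourceCollection;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.security.Principal;
import java.util.Arrays;
import java.util.Collections;
import java.util.Set;

public class BasicAuthServletExample {

    // Echoes the authenticated user so getRemoteUser() can be verified.
    public static class WhoAmIServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.getWriter().write("remote user: " + req.getRemoteUser());
        }
    }

    // Hard-coded single-user identity manager, purely for illustration.
    static IdentityManager identityManager(final String user, final String password) {
        final Principal principal = new Principal() {
            public String getName() { return user; }
        };
        final Account account = new Account() {
            public Principal getPrincipal() { return principal; }
            public Set<String> getRoles() { return Collections.singleton("user"); }
        };
        return new IdentityManager() {
            public Account verify(Account existing) { return existing; }
            public Account verify(Credential credential) { return null; }
            public Account verify(String id, Credential credential) {
                if (user.equals(id) && credential instanceof PasswordCredential
                        && Arrays.equals(password.toCharArray(), ((PasswordCredential) credential).getPassword())) {
                    return account;
                }
                return null;
            }
        };
    }

    public static void main(String[] args) throws Exception {
        DeploymentInfo deployment = Servlets.deployment()
                .setClassLoader(BasicAuthServletExample.class.getClassLoader())
                .setContextPath("/")
                .setDeploymentName("basic-auth-example.war")
                .setIdentityManager(identityManager("brad", "secret"))
                // BASIC auth and the protected URL space are declared here;
                // Undertow installs the auth mechanism when the deployment starts.
                .setLoginConfig(new LoginConfig("BASIC", "Example Realm"))
                .addSecurityConstraint(new SecurityConstraint()
                        .addWebResourceCollection(new WebResourceCollection().addUrlPattern("/*"))
                        .addRoleAllowed("user"))
                .addServlet(Servlets.servlet("whoami", WhoAmIServlet.class).addMapping("/*"));

        DeploymentManager manager = Servlets.defaultContainer().addDeployment(deployment);
        manager.deploy();
        HttpHandler handler = manager.start();

        Undertow.builder().addHttpListener(8080, "localhost").setHandler(handler).build().start();
    }
}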