1. Can you run the benchmark with the Kafka line commented out at first, and
then again with it uncommented?
2. What rates were you getting with Jetty and Netty?
3. Are you running the tests from the same machine or a different one? If
it's the same machine and it's using 20 threads, they will be contending with
Undertow's IO threads.
4. You can probably ignore the POST check if that's all you're going to accept
and it's not a public API.
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;
import io.undertow.util.Methods;

public class DLRHandler implements HttpHandler {

    public static final String _SUCCESS = "SUCCESS";
    public static final String _FAILURE = "FAILURE";
    final PostToKafka post2Kafka = new PostToKafka();

    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        if (exchange.getRequestMethod().equals(Methods.POST)) {
            exchange.getRequestReceiver().receiveFullString(
                (exchangeReq, data) -> {
                    // post2Kafka.write2Kafka(data); // write it to Kafka
                    exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchangeReq.getResponseSender().send(_SUCCESS);
                },
                (exchangeReq, exception) -> {
                    exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchangeReq.getResponseSender().send(_FAILURE);
                });
        } else {
            throw new Exception("Only POST is supported by this server");
        }
    }
}
On Wed, Jun 28, 2017 at 6:59 PM, Stuart Douglas <sdouglas(a)redhat.com> wrote:
The multiple dispatches() are unnecessary (well, the second one back to the
IO thread is definitely unnecessary; the first one is only required if
post2Kafka.write2Kafka(data) is a blocking operation and needs to be
executed in a worker thread).
Stuart
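The single-dispatch pattern described above can be sketched with plain java.util.concurrent, using an ExecutorService as a stand-in for Undertow's worker pool (class and method names here are illustrative, not Undertow API): the blocking write runs once on a worker thread, and the response value is produced on that same thread, so there is no second hop back to an IO thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: an ExecutorService stands in for Undertow's worker pool,
// and blockingWrite() stands in for a blocking Kafka write.
public class SingleDispatchSketch {

    static String handle(String data, ExecutorService worker) throws Exception {
        // One dispatch to a worker thread for the blocking step...
        Future<String> result = worker.submit(() -> {
            blockingWrite(data);   // the only part that needs a worker thread
            return "SUCCESS";      // respond from the same thread; no second hop
        });
        return result.get();
    }

    static void blockingWrite(String data) throws InterruptedException {
        Thread.sleep(10); // simulates a blocking I/O call
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newFixedThreadPool(2);
        System.out.println(handle("payload", worker)); // prints "SUCCESS"
        worker.shutdown();
    }
}
```

In real Undertow code the equivalent would be a single exchange.dispatch(...) whose runnable does the blocking write and then sends the response, with no dispatch back to the IO thread.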
On Wed, Jun 28, 2017 at 5:42 PM, SenthilKumar K <senthilec566(a)gmail.com>
wrote:
> After modifying the code as below, I could see a slight improvement in the
> server: 65k req/sec.
>
> import io.undertow.server.HttpHandler;
> import io.undertow.server.HttpServerExchange;
> import io.undertow.util.Headers;
> import io.undertow.util.Methods;
>
> public class DLRHandler implements HttpHandler {
>
> final public static String _SUCCESS="SUCCESS";
> final public static String _FAILURE="FAILURE";
> final PostToKafka post2Kafka = new PostToKafka();
>
> @Override
> public void handleRequest(final HttpServerExchange exchange) throws Exception {
>     if (exchange.getRequestMethod().equals(Methods.POST)) {
>         exchange.getRequestReceiver().receiveFullString(
>             (exchangeReq, data) -> {
>                 exchangeReq.dispatch(() -> {
>                     post2Kafka.write2Kafka(data); // write it to Kafka
>                     exchangeReq.dispatch(exchangeReq.getIoThread(), () -> {
>                         exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>                         exchangeReq.getResponseSender().send(_SUCCESS);
>                     });
>                 });
>             },
>             (exchangeReq, exception) -> {
>                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>                 exchangeReq.getResponseSender().send(_FAILURE);
>             });
>     } else {
>         throw new Exception("Method GET not supported by Server");
>     }
> }
> }
>
>
> Please review this and let me know if I'm doing anything wrong here.
> --Senthil
>
> On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard <antoine.girard(a)ymail.com>
> wrote:
>>
>> Also, to come back to the JVM warmup, this will give you enough answers:
>>
>> https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup
>>
>> For you, it means that you have to run your tests for a few minutes
>> before starting your actual measurements.
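The effect is easy to see with a stdlib-only sketch (illustrative, not a rigorous benchmark; for serious measurements use a harness such as JMH): timing the same method before and after many iterations typically shows the later run to be much faster, once the JIT has compiled the hot path.

```java
// Minimal demonstration of JIT warmup: time the same work before and after
// many iterations. Exact numbers vary by machine; the point is that the
// "warm" timing is usually far lower than the "cold" one.
public class WarmupDemo {

    // Simple deterministic workload for the JIT to compile.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i * 31L % 7;
        return sum;
    }

    static long timeOnceNanos(int n) {
        long t0 = System.nanoTime();
        work(n);
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        long cold = timeOnceNanos(1_000_000);
        for (int i = 0; i < 10_000; i++) work(1_000); // warm the JIT
        long warm = timeOnceNanos(1_000_000);
        System.out.println("cold=" + cold + "ns warm=" + warm + "ns");
    }
}
```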
>>
>> I am also interested in how Netty / Jetty perform under the same
>> conditions, please post!
>>
>> Cheers,
>> Antoine
>>
>> On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas <sdouglas(a)redhat.com>
>> wrote:
>>>
>>> Are you actually testing with the 'System.out.println(" Received
>>> String ==> "+message);'? System.out is incredibly slow.
>>>
>>> Stuart
>>>
>>> > On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K <senthilec566(a)gmail.com>
>>> > wrote:
>>> > Sorry, I'm not an expert in the JVM. How do we warm up the JVM?
>>> >
>>> > Here is the JVM args to Server:
>>> >
>>> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC
>>> > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35
>>> > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50
>>> > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer
>>> >
>>> >
>>> > --Senthil
>>> >
>>> >
>>> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard
>>> > <antoine.girard(a)ymail.com>
>>> > wrote:
>>> >>
>>> >> Do you warm up your JVM prior to testing?
>>> >>
>>> >> Cheers,
>>> >> Antoine
>>> >>
>>> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K
>>> >> <senthilec566(a)gmail.com>
>>> >> wrote:
>>> >>>
>>> >>> Thanks Bill n Antoine ..
>>> >>>
>>> >>>
>>> >>> Here is the updated one (tried without the Kafka API):
>>> >>>
>>> >>> public class HelloWorldServer {
>>> >>>
>>> >>>     public static void main(final String[] args) {
>>> >>>         Undertow server = Undertow.builder().addHttpListener(8009, "localhost")
>>> >>>             .setHandler(new HttpHandler() {
>>> >>>                 @Override
>>> >>>                 public void handleRequest(final HttpServerExchange exchange) throws Exception {
>>> >>>                     if (exchange.getRequestMethod().equals(Methods.POST)) {
>>> >>>                         exchange.getRequestReceiver().receiveFullString(new Receiver.FullStringCallback() {
>>> >>>                             @Override
>>> >>>                             public void handle(HttpServerExchange exchange, String message) {
>>> >>>                                 System.out.println(" Received String ==> " + message);
>>> >>>                                 exchange.getResponseSender().send(message);
>>> >>>                             }
>>> >>>                         });
>>> >>>                     } else {
>>> >>>                         exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>> >>>                         exchange.getResponseSender().send("FAILURE");
>>> >>>                     }
>>> >>>                 }
>>> >>>             }).build();
>>> >>>         server.start();
>>> >>>     }
>>> >>> }
>>> >>>
>>> >>>
>>> >>> Oops, seems there is no improvement:
>>> >>>
>>> >>> Running 1m test @ http://localhost:8009/
>>> >>> 100 threads and 1000 connections
>>> >>> Thread Stats Avg Stdev Max +/- Stdev
>>> >>> Latency 25.79ms 22.18ms 289.48ms 67.66%
>>> >>> Req/Sec 437.76 61.71 2.30k 80.26%
>>> >>> Latency Distribution
>>> >>> 50% 22.60ms
>>> >>> 75% 37.83ms
>>> >>> 90% 55.32ms
>>> >>> 99% 90.47ms
>>> >>> 2625607 requests in 1.00m, 2.76GB read
>>> >>> Requests/sec: 43688.42
>>> >>> Transfer/sec: 47.08MB
>>> >>>
>>> >>>
>>> >>> :-( :-( ..
>>> >>>
>>> >>>
>>> >>> --Senthil
>>> >>>
>>> >>>
>>> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard
>>> >>> <antoine.girard(a)ymail.com> wrote:
>>> >>>>
>>> >>>> You can use the Receiver API, specifically for that purpose.
>>> >>>> On the exchange, call: getRequestReceiver();
>>> >>>>
>>> >>>> You will get a receiver object:
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java
>>> >>>>
>>> >>>> On the receiver you can call receiveFullString; you have to pass it
>>> >>>> a callback that will be called when the whole body has been read.
>>> >>>>
>>> >>>> Please share your results when you test this further!
>>> >>>>
>>> >>>> Cheers,
>>> >>>> Antoine
>>> >>>>
>>> >>>>
>>> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K
>>> >>>> <senthilec566(a)gmail.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> It seems my way of reading the request body is wrong. So what is
>>> >>>>> the efficient way of reading the request body in Undertow?
>>> >>>>>
>>> >>>>> --Senthil
>>> >>>>>
>>> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K
>>> >>>>> <senthilec566(a)gmail.com> wrote:
>>> >>>>>>
>>> >>>>>> Hello Undertow Dev Team ,
>>> >>>>>>
>>> >>>>>> I have been working on a use case where I should create a simple
>>> >>>>>> HTTP server to serve 1.5 million requests per second per
>>> >>>>>> instance.
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> Here is the benchmark result of Undertow :
>>> >>>>>>
>>> >>>>>> Running 1m test @ http://127.0.0.1:8009/
>>> >>>>>> 20 threads and 40 connections
>>> >>>>>> Thread Stats   Avg      Stdev     Max   +/- Stdev
>>> >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28%
>>> >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50%
>>> >>>>>> Latency Distribution
>>> >>>>>> 50% 1.43ms
>>> >>>>>> 75% 2.38ms
>>> >>>>>> 90% 2.90ms
>>> >>>>>> 99% 10.45ms
>>> >>>>>> 1328133 requests in 1.00m, 167.19MB read
>>> >>>>>> Requests/sec: 22127.92
>>> >>>>>> Transfer/sec: 2.79MB
>>> >>>>>>
>>> >>>>>> This is low compared to other frameworks like Jetty and Netty,
>>> >>>>>> even though Undertow is known as a high-performance HTTP server.
>>> >>>>>>
>>> >>>>>> Hardware details:
>>> >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores (clock 100 MHz,
>>> >>>>>> capacity 4 GHz), memory: 32 G, available memory: 31 G.
>>> >>>>>>
>>> >>>>>> I would need Undertow experts to review the server code below
>>> >>>>>> and advise me on tuning to achieve my goal (~1.5 million
>>> >>>>>> requests/sec).
>>> >>>>>>
>>> >>>>>> Server:
>>> >>>>>>
>>> >>>>>> Undertow server = Undertow.builder()
>>> >>>>>>         .addHttpListener(8009, "localhost")
>>> >>>>>>         .setHandler(new Handler()).build();
>>> >>>>>> server.start();
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> Handler.java
>>> >>>>>>
>>> >>>>>> final Pooled<ByteBuffer> pooledByteBuffer =
>>> >>>>>>         exchange.getConnection().getBufferPool().allocate();
>>> >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource();
>>> >>>>>> byteBuffer.clear();
>>> >>>>>> exchange.getRequestChannel().read(byteBuffer);
>>> >>>>>> int pos = byteBuffer.position();
>>> >>>>>> byteBuffer.rewind();
>>> >>>>>> byte[] bytes = new byte[pos];
>>> >>>>>> byteBuffer.get(bytes);
>>> >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8"));
>>> >>>>>> byteBuffer.clear();
>>> >>>>>> pooledByteBuffer.free();
>>> >>>>>> final PostToKafka post2Kafka = new PostToKafka();
>>> >>>>>> try {
>>> >>>>>>     post2Kafka.write2Kafka(requestBody); // this API can handle ~2 million events per sec
>>> >>>>>> } catch (Exception e) {
>>> >>>>>>     e.printStackTrace();
>>> >>>>>> }
>>> >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>> >>>>>> exchange.getResponseSender().send("SUCCESS");
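As an aside, the buffer-to-String steps in the handler above can be exercised on their own with a stdlib-only sketch (a local byte array stands in for the request body, and the class name is illustrative; flip() is the idiomatic equivalent of the position()/rewind() pair used above):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of the handler's buffer handling: fill a buffer, flip it,
// copy out the written bytes, and decode them as UTF-8. In the real
// handler the buffer comes from Undertow's pool and is filled by the
// request channel; here we fill it by hand.
public class BufferDecode {

    static String decode(ByteBuffer buf) {
        buf.flip();                          // position -> 0, limit -> bytes written
        byte[] bytes = new byte[buf.remaining()];
        buf.get(bytes);                      // copy out exactly what was written
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put("hello kafka".getBytes(StandardCharsets.UTF_8));
        System.out.println(decode(buf));     // prints "hello kafka"
    }
}
```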
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> --Senthil
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> _______________________________________________
>>> >>>>> undertow-dev mailing list
>>> >>>>> undertow-dev(a)lists.jboss.org
>>> >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev
>>> >>>>
>>> >>>>
>>> >>>
>>> >>
>>> >
>>> >
>>
>>
>