Both were tested with default settings.
Both Undertow and Netty respond with SUCCESS or FAILURE.
I'd be happy to optimize the Undertow code and rerun the test case if required.
Another test result:
Undertow:
./wrk -c 10000 -d 10m -t 300 -s scripts/post_data.lua -R 500000
300 threads and 10000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.70m 2.07m 7.40m 57.73%
Req/Sec 449.71 6.45 474.00 74.93%
80640669 requests in 10.00m, 9.91GB read
Socket errors: connect 0, read 353, write 0, timeout 448
Requests/sec: *134457.31*
Transfer/sec: 16.93MB
Netty:
./wrk -c 10000 -d 10m -t 300 -s scripts/post_data.lua -R 500000
300 threads and 10000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.76m 1.54m 5.70m 57.83%
Req/Sec 763.90 73.21 1.12k 69.15%
137216075 requests in 10.00m, 12.14GB read
Socket errors: connect 0, read 0, write 0, timeout 42
Requests/sec: *228796.63*
Transfer/sec: 20.73MB
--Senthil
On Mon, Jul 10, 2017 at 3:12 AM, Stuart Douglas <sdouglas(a)redhat.com> wrote:
Also it looks like you are sending more data in the Undertow response.
MB/s is very similar, while req/sec is lower.
Stuart
On 10 Jul. 2017 9:39 am, "Stuart Douglas" <sdouglas(a)redhat.com> wrote:
> Are they both using the same number of threads? Also, what are you doing
> in the handler? Are you calling dispatch()? Dispatch is relatively slow in
> these micro benchmarks, as it dispatches to a thread pool.
>
> Stuart
>
> On 9 Jul. 2017 4:34 am, "SenthilKumar K" <senthilec566(a)gmail.com>
wrote:
>
> Yet to try that. My test case did not cover tuning the number of threads,
> but even if we increase the thread count I believe both frameworks'
> performance would improve. Different thoughts?
>
> Anyway, I'd like to add another test case varying the thread count!
>
> --Senthil
>
> On Jul 8, 2017 9:38 PM, "Kim Rasmussen" <kr(a)asseco.dk> wrote:
>
>> Have you tried playing around with the number of io and worker threads?
>>
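For reference, a minimal sketch of setting the IO and worker thread counts explicitly on the Undertow builder; the listener port, host, and trivial handler are placeholders, not from the thread:

```java
import io.undertow.Undertow;

public class TunedServer {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        Undertow server = Undertow.builder()
                .addHttpListener(8009, "localhost")   // placeholder listener
                // Undertow derives its default thread counts from the CPU
                // count; for a benchmark it is worth sweeping both values.
                .setIoThreads(cores)
                .setWorkerThreads(cores * 8)
                .setHandler(exchange -> exchange.getResponseSender().send("OK"))
                .build();
        server.start();
    }
}
```

Sweeping setIoThreads/setWorkerThreads alongside the wrk thread and connection counts would also make the comparison with Netty's event-loop sizing more direct.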
>> On Sat, Jul 8, 2017 at 5:28 PM, SenthilKumar K <senthilec566(a)gmail.com> wrote:
>>
>>> Any comments on *Undertow vs Netty*? Am I doing the benchmark testing
>>> wrong? Should I change the benchmark strategy?
>>>
>>> --Senthil
>>>
>>> On Fri, Jul 7, 2017 at 3:14 PM, SenthilKumar K <senthilec566(a)gmail.com>
>>> wrote:
>>>
>>>> Sorry for the delay in responding to this thread!
>>>>
>>>> Thanks to everyone who helped me optimize the Undertow server.
>>>>
>>>> Here is the comparison after benchmarking my use case against Netty:
>>>>
>>>> *Undertow Vs Netty :*
>>>>
>>>> Test Case 1 :
>>>> Simple Request Response ( No Kafka ):
>>>>
>>>> *Undertow:*
>>>> Running 10m test @ http://198.18.134.13:8009/
>>>> 500 threads and 5000 connections
>>>> Thread Stats Avg Stdev Max +/- Stdev
>>>> Latency *3.52m* 2.64m 8.96m 54.63%
>>>> Req/Sec 376.58 103.18 0.99k 80.53%
>>>> 111628942 requests in 10.00m, 13.72GB read
>>>> Socket errors: connect 0, read 28, write 0, timeout 2
>>>> Requests/sec: *186122.56*
>>>> Transfer/sec: 23.43MB
>>>>
>>>> *Netty:*
>>>> Running 10m test @ http://198.18.134.13:8009/
>>>> 500 threads and 5000 connections
>>>> Thread Stats Avg Stdev Max +/- Stdev
>>>> Latency *3.77m* 2.10m 7.51m 57.73%
>>>> Req/Sec 518.63 31.78 652.00 70.25%
>>>> 155406992 requests in 10.00m, 13.82GB read
>>>> Socket errors: connect 0, read 49, write 0, timeout 0
>>>> Requests/sec: *259107.30*
>>>> Transfer/sec: 24.17MB
>>>>
>>>>
>>>> *Test Case 2:*
>>>> Request --> Read --> Send it to Kafka:
>>>>
>>>> *Undertow:*
>>>> Running 10m test @ http://198.18.134.13:8009/
>>>> 500 threads and 5000 connections
>>>> Thread Stats Avg Stdev Max +/- Stdev
>>>> Latency *4.37m* 2.46m 8.72m 57.83%
>>>> Req/Sec 267.32 5.17 287.00 74.52%
>>>> 80044045 requests in 10.00m, 9.84GB read
>>>> Socket errors: connect 0, read 121, write 0, timeout 0
>>>> Requests/sec: *133459.79*
>>>> Transfer/sec: 16.80MB
>>>>
>>>> *Netty:*
>>>> Running 10m test @ http://198.18.134.13:8009/
>>>> 500 threads and 5000 connections
>>>> Thread Stats Avg Stdev Max +/- Stdev
>>>> Latency *3.78m* 2.10m 7.55m 57.79%
>>>> Req/Sec 516.92 28.84 642.00 69.60%
>>>> 154770536 requests in 10.00m, 13.69GB read
>>>> Socket errors: connect 0, read 11, write 0, timeout 101
>>>> Requests/sec: *258049.39*
>>>> Transfer/sec: 23.38MB
>>>>
>>>>
>>>>
>>>> CPU Usage:
>>>> *Undertow:*
>>>> [image: Inline image 1]
>>>>
>>>> *Netty:*
>>>> [image: Inline image 2]
>>>>
>>>>
>>>> --Senthil
>>>>
>>>> On Thu, Jun 29, 2017 at 7:34 AM, Bill O'Neil <bill(a)dartalley.com>
>>>> wrote:
>>>>
>>>>> 1. Can you run the benchmark with the kafka line commented out at
>>>>> first and then again with it not commented out?
>>>>> 2. What rates were you getting with Jetty and Netty?
>>>>> 3. Are you running the tests from the same machine or a different
>>>>> one? If it's the same machine and it's using 20 threads, they will be
>>>>> contending with Undertow's IO threads.
>>>>> 4. You can probably ignore the POST check if that's all you're going to
>>>>> accept and it's not a public API.
>>>>>
>>>>> import io.undertow.server.HttpHandler;
>>>>> import io.undertow.server.HttpServerExchange;
>>>>> import io.undertow.util.Headers;
>>>>> import io.undertow.util.Methods;
>>>>>
>>>>> public class DLRHandler implements HttpHandler {
>>>>>
>>>>>     public static final String _SUCCESS = "SUCCESS";
>>>>>     public static final String _FAILURE = "FAILURE";
>>>>>     final PostToKafka post2Kafka = new PostToKafka();
>>>>>
>>>>>     @Override
>>>>>     public void handleRequest(final HttpServerExchange exchange) throws Exception {
>>>>>         if (exchange.getRequestMethod().equals(Methods.POST)) {
>>>>>             exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
>>>>>                 // post2Kafka.write2Kafka(data); // write it to Kafka
>>>>>                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>                 exchangeReq.getResponseSender().send(_SUCCESS);
>>>>>             }, (exchangeReq, exception) -> {
>>>>>                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>                 exchangeReq.getResponseSender().send(_FAILURE);
>>>>>             });
>>>>>         } else {
>>>>>             throw new Exception("Method GET not supported by Server");
>>>>>         }
>>>>>     }
>>>>> }
>>>>>
>>>>> On Wed, Jun 28, 2017 at 6:59 PM, Stuart Douglas <sdouglas(a)redhat.com>
>>>>> wrote:
>>>>>
>>>>>> The multiple dispatch() calls are unnecessary (well, the second one to
>>>>>> the IO thread is definitely unnecessary; the first one is only required
>>>>>> if post2Kafka.write2Kafka(data) is a blocking operation and needs to be
>>>>>> executed in a worker thread).
>>>>>>
>>>>>> Stuart
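A sketch of the single-dispatch variant described above, assuming write2Kafka is blocking and PostToKafka is the class from earlier in the thread (the handler name and the error handling are hypothetical):

```java
import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;
import io.undertow.util.Methods;

public class SingleDispatchHandler implements HttpHandler {

    public static final String _SUCCESS = "SUCCESS";
    public static final String _FAILURE = "FAILURE";
    final PostToKafka post2Kafka = new PostToKafka(); // from earlier in the thread

    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        if (exchange.getRequestMethod().equals(Methods.POST)) {
            exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
                // One dispatch to the worker pool, needed only because the
                // Kafka write blocks. The response is then sent directly from
                // the worker thread; no second dispatch back to the IO thread.
                exchangeReq.dispatch(() -> {
                    try {
                        post2Kafka.write2Kafka(data);
                        exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                        exchangeReq.getResponseSender().send(_SUCCESS);
                    } catch (Exception e) {
                        exchangeReq.getResponseSender().send(_FAILURE);
                    }
                });
            });
        } else {
            throw new Exception("Method GET not supported by Server");
        }
    }
}
```

If write2Kafka turns out to be non-blocking, the dispatch can be dropped entirely and the response sent straight from the IO thread.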
>>>>>>
>>>>>> On Wed, Jun 28, 2017 at 5:42 PM, SenthilKumar K <senthilec566(a)gmail.com> wrote:
>>>>>> > After modifying the code below I could see a slight improvement (not
>>>>>> > much) in the server - 65k req/sec.
>>>>>> >
>>>>>> > import io.undertow.server.HttpHandler;
>>>>>> > import io.undertow.server.HttpServerExchange;
>>>>>> > import io.undertow.util.Headers;
>>>>>> > import io.undertow.util.Methods;
>>>>>> >
>>>>>> > public class DLRHandler implements HttpHandler {
>>>>>> >
>>>>>> >     public static final String _SUCCESS = "SUCCESS";
>>>>>> >     public static final String _FAILURE = "FAILURE";
>>>>>> >     final PostToKafka post2Kafka = new PostToKafka();
>>>>>> >
>>>>>> >     @Override
>>>>>> >     public void handleRequest(final HttpServerExchange exchange) throws Exception {
>>>>>> >         if (exchange.getRequestMethod().equals(Methods.POST)) {
>>>>>> >             exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
>>>>>> >                 exchangeReq.dispatch(() -> {
>>>>>> >                     post2Kafka.write2Kafka(data); // write it to Kafka
>>>>>> >                     exchangeReq.dispatch(exchangeReq.getIoThread(), () -> {
>>>>>> >                         exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>> >                         exchangeReq.getResponseSender().send(_SUCCESS);
>>>>>> >                     });
>>>>>> >                 });
>>>>>> >             }, (exchangeReq, exception) -> {
>>>>>> >                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>> >                 exchangeReq.getResponseSender().send(_FAILURE);
>>>>>> >             });
>>>>>> >         } else {
>>>>>> >             throw new Exception("Method GET not supported by Server");
>>>>>> >         }
>>>>>> >     }
>>>>>> > }
>>>>>> >
>>>>>> >
>>>>>> > Please review this and let me know if I'm doing anything wrong here ...
>>>>>> > --Senthil
>>>>>> >
>>>>>> > On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard <antoine.girard(a)ymail.com>
>>>>>> > wrote:
>>>>>> >>
>>>>>> >> Also, to come back to the JVM warmup, this will give you enough
>>>>>> >> answers:
>>>>>> >>
>>>>>> >> https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup
>>>>>> >>
>>>>>> >> For you, it means that you have to run your tests for a few minutes
>>>>>> >> before starting your actual measurements.
>>>>>> >>
>>>>>> >> I am also interested in how Netty / Jetty perform under the same
>>>>>> >> conditions, please post!
>>>>>> >>
>>>>>> >> Cheers,
>>>>>> >> Antoine
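The warmup can be driven with only the JDK's HttpURLConnection before the measured wrk run starts; a hypothetical sketch where the endpoint URL, body, and two-minute duration are placeholder assumptions:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WarmUp {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8009/"); // placeholder endpoint
        long end = System.currentTimeMillis() + 2 * 60 * 1000; // ~2 minutes
        byte[] body = "warmup".getBytes("UTF-8");
        // Keep the hot request path executing so the JIT has compiled it
        // before the real measurement begins.
        while (System.currentTimeMillis() < end) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            conn.getResponseCode(); // consume the response to complete the exchange
            conn.disconnect();
        }
    }
}
```

Simply running wrk itself for a few minutes and discarding the first results achieves the same effect.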
>>>>>> >>
>>>>>> >> On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas <sdouglas(a)redhat.com>
>>>>>> >> wrote:
>>>>>> >>>
>>>>>> >>> Are you actually testing with the 'System.out.println(" Received
>>>>>> >>> String ==> " + message);' call? System.out is incredibly slow.
>>>>>> >>>
>>>>>> >>> Stuart
>>>>>> >>>
>>>>>> >>> On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K <senthilec566(a)gmail.com>
>>>>>> >>> wrote:
>>>>>> >>> > Sorry, I'm not an expert in the JVM .. how do we warm up the JVM?
>>>>>> >>> >
>>>>>> >>> > Here are the JVM args to the server:
>>>>>> >>> >
>>>>>> >>> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC
>>>>>> >>> > -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35
>>>>>> >>> > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50
>>>>>> >>> > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>> > --Senthil
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard <antoine.girard(a)ymail.com>
>>>>>> >>> > wrote:
>>>>>> >>> >>
>>>>>> >>> >> Do you warm up your JVM prior to the testing?
>>>>>> >>> >>
>>>>>> >>> >> Cheers,
>>>>>> >>> >> Antoine
>>>>>> >>> >>
>>>>>> >>> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K <senthilec566(a)gmail.com>
>>>>>> >>> >> wrote:
>>>>>> >>> >>>
>>>>>> >>> >>> Thanks Bill and Antoine.
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> Here is the updated one (tried without the Kafka API):
>>>>>> >>> >>>
>>>>>> >>> >>> public class HelloWorldServer {
>>>>>> >>> >>>
>>>>>> >>> >>>     public static void main(final String[] args) {
>>>>>> >>> >>>         Undertow server = Undertow.builder().addHttpListener(8009, "localhost")
>>>>>> >>> >>>                 .setHandler(new HttpHandler() {
>>>>>> >>> >>>             @Override
>>>>>> >>> >>>             public void handleRequest(final HttpServerExchange exchange) throws Exception {
>>>>>> >>> >>>                 if (exchange.getRequestMethod().equals(Methods.POST)) {
>>>>>> >>> >>>                     exchange.getRequestReceiver().receiveFullString(new Receiver.FullStringCallback() {
>>>>>> >>> >>>                         @Override
>>>>>> >>> >>>                         public void handle(HttpServerExchange exchange, String message) {
>>>>>> >>> >>>                             System.out.println(" Received String ==> " + message);
>>>>>> >>> >>>                             exchange.getResponseSender().send(message);
>>>>>> >>> >>>                         }
>>>>>> >>> >>>                     });
>>>>>> >>> >>>                 } else {
>>>>>> >>> >>>                     exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>> >>> >>>                     exchange.getResponseSender().send("FAILURE");
>>>>>> >>> >>>                 }
>>>>>> >>> >>>             }
>>>>>> >>> >>>         }).build();
>>>>>> >>> >>>         server.start();
>>>>>> >>> >>>     }
>>>>>> >>> >>> }
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> Oops, seems there is no improvement:
>>>>>> >>> >>>
>>>>>> >>> >>> Running 1m test @ http://localhost:8009/
>>>>>> >>> >>> 100 threads and 1000 connections
>>>>>> >>> >>> Thread Stats Avg Stdev Max
+/- Stdev
>>>>>> >>> >>> Latency 25.79ms 22.18ms
289.48ms 67.66%
>>>>>> >>> >>> Req/Sec 437.76 61.71
2.30k 80.26%
>>>>>> >>> >>> Latency Distribution
>>>>>> >>> >>> 50% 22.60ms
>>>>>> >>> >>> 75% 37.83ms
>>>>>> >>> >>> 90% 55.32ms
>>>>>> >>> >>> 99% 90.47ms
>>>>>> >>> >>> 2625607 requests in 1.00m, 2.76GB
read
>>>>>> >>> >>> Requests/sec: 43688.42
>>>>>> >>> >>> Transfer/sec: 47.08MB
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> :-( :-( ..
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> --Senthil
>>>>>> >>> >>>
>>>>>> >>> >>>
>>>>>> >>> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard <antoine.girard(a)ymail.com>
>>>>>> >>> >>> wrote:
>>>>>> >>> >>>>
>>>>>> >>> >>>> You can use the Receiver API, specifically for that purpose.
>>>>>> >>> >>>> On the exchange, call getRequestReceiver();
>>>>>> >>> >>>>
>>>>>> >>> >>>> You will get a receiver object:
>>>>>> >>> >>>>
>>>>>> >>> >>>>
>>>>>> >>> >>>>
https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java
>>>>>> >>> >>>>
>>>>>> >>> >>>> On the receiver you can call receiveFullString; you have to
>>>>>> >>> >>>> pass it a callback that will be called when the whole body has
>>>>>> >>> >>>> been read.
>>>>>> >>> >>>>
>>>>>> >>> >>>> Please share your results when you test this further!
>>>>>> >>> >>>>
>>>>>> >>> >>>> Cheers,
>>>>>> >>> >>>> Antoine
>>>>>> >>> >>>>
>>>>>> >>> >>>>
>>>>>> >>> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K <senthilec566(a)gmail.com>
>>>>>> >>> >>>> wrote:
>>>>>> >>> >>>>>
>>>>>> >>> >>>>> Seems my reading of the request body is wrong. So what is the
>>>>>> >>> >>>>> efficient way of reading the request body in Undertow?
>>>>>> >>> >>>>>
>>>>>> >>> >>>>> --Senthil
>>>>>> >>> >>>>>
>>>>>> >>> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K <senthilec566(a)gmail.com>
>>>>>> >>> >>>>> wrote:
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Hello Undertow Dev Team ,
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> I have been working on a use case where I should create a
>>>>>> >>> >>>>>> simple HTTP server to serve 1.5 million requests per second
>>>>>> >>> >>>>>> per instance.
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Here is the benchmark result of Undertow:
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Running 1m test @ http://127.0.0.1:8009/
>>>>>> >>> >>>>>> 20 threads and 40 connections
>>>>>> >>> >>>>>> Thread Stats Avg Stdev Max +/- Stdev
>>>>>> >>> >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28%
>>>>>> >>> >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50%
>>>>>> >>> >>>>>> Latency Distribution
>>>>>> >>> >>>>>> 50% 1.43ms
>>>>>> >>> >>>>>> 75% 2.38ms
>>>>>> >>> >>>>>> 90% 2.90ms
>>>>>> >>> >>>>>> 99% 10.45ms
>>>>>> >>> >>>>>> 1328133 requests in 1.00m, 167.19MB read
>>>>>> >>> >>>>>> Requests/sec: 22127.92
>>>>>> >>> >>>>>> Transfer/sec: 2.79MB
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> This is low compared to other frameworks like Jetty and
>>>>>> >>> >>>>>> Netty, even though Undertow is known as a high-performance
>>>>>> >>> >>>>>> HTTP server.
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Hardware details:
>>>>>> >>> >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores (clock 100 MHz,
>>>>>> >>> >>>>>> capacity 4 GHz), Memory: 32 G, available memory: 31 G.
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> I would need Undertow experts to review the server code
>>>>>> >>> >>>>>> below and advise me on tuning to achieve my goal (~1.5
>>>>>> >>> >>>>>> million requests/sec).
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Server :
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Undertow server = Undertow.builder()
>>>>>> >>> >>>>>>         .addHttpListener(8009, "localhost")
>>>>>> >>> >>>>>>         .setHandler(new Handler()).build();
>>>>>> >>> >>>>>> server.start();
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> Handler.Java
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> final Pooled<ByteBuffer> pooledByteBuffer =
>>>>>> >>> >>>>>>         exchange.getConnection().getBufferPool().allocate();
>>>>>> >>> >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource();
>>>>>> >>> >>>>>> byteBuffer.clear();
>>>>>> >>> >>>>>> exchange.getRequestChannel().read(byteBuffer);
>>>>>> >>> >>>>>> int pos = byteBuffer.position();
>>>>>> >>> >>>>>> byteBuffer.rewind();
>>>>>> >>> >>>>>> byte[] bytes = new byte[pos];
>>>>>> >>> >>>>>> byteBuffer.get(bytes);
>>>>>> >>> >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8"));
>>>>>> >>> >>>>>> byteBuffer.clear();
>>>>>> >>> >>>>>> pooledByteBuffer.free();
>>>>>> >>> >>>>>> final PostToKafka post2Kafka = new PostToKafka();
>>>>>> >>> >>>>>> try {
>>>>>> >>> >>>>>>     post2Kafka.write2Kafka(requestBody); // this API can handle ~2 million events per sec
>>>>>> >>> >>>>>> } catch (Exception e) {
>>>>>> >>> >>>>>>     e.printStackTrace();
>>>>>> >>> >>>>>> }
>>>>>> >>> >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
>>>>>> >>> >>>>>> exchange.getResponseSender().send("SUCCESS");
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>>
>>>>>> >>> >>>>>> --Senthil
>>>>>> >>> >>>>>
>>>>>> >>> >>>>>
>>>>>> >>> >>>>>
>>>>>> >>> >>>>>
_______________________________________________
>>>>>> >>> >>>>> undertow-dev mailing list
>>>>>> >>> >>>>> undertow-dev(a)lists.jboss.org
>>>>>> >>> >>>>> https://lists.jboss.org/mailman/listinfo/undertow-dev
>>>>>> >>> >>>>
>>>>>> >>> >>>>
>>>>>> >>> >>>
>>>>>> >>> >>
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>> >
>>>>>> >>
>>>>>> >>
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>
>>
>> --
>> Med venlig hilsen / Best regards
>>
>> *Kim Rasmussen*
>> Partner, IT Architect
>>
>> *Asseco Denmark A/S*
>> Kronprinsessegade 54
>> DK-1306 Copenhagen K
>> Mobile: +45 26 16 40 23
>> Ph.: +45 33 36 46 60
>> Fax: +45 33 36 46 61
>>
>
>
>
>