<div dir="auto">Yet to try that .. My testcase did not cover tuning no of threads .. but even if we try to increase number of threads I believe both framework performance would improve !! Different thoughts ?? <div dir="auto"><br></div><div dir="auto">Anyway I like to add another test case by changing threads !! </div><div dir="auto"><br></div><div dir="auto">--Senthil</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Jul 8, 2017 9:38 PM, "Kim Rasmussen" <<a href="mailto:kr@asseco.dk">kr@asseco.dk</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div dir="auto">Have you tried playing around with the number of io and worker threads?</div><br><div class="gmail_quote"><div>lør. 8. jul. 2017 kl. 17.28 skrev SenthilKumar K <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Any comments on <b>Undertow Vs Netty</b> ? Am i doing wrong benchmark testing ?? 
Should i change benchmark strategy ?<div><br></div><div>--Senthil</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jul 7, 2017 at 3:14 PM, SenthilKumar K <span><<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Sorry for delay in responding to this thread!<div><br></div><div>Thanks to everyone who helped me to Optimize Undertow Server.</div><div><br></div><div>Here is the comparison after benchmarking my use case against Netty:</div><div><br></div><div><b><font color="#38761d" size="4">Undertow Vs Netty :</font></b><br></div><div><br></div><div>Test Case 1 : </div><div>Simple Request Response ( No Kafka ):</div><div><br></div><div><b><font color="#0000ff">Undertow:</font></b></div><div><div>Running 10m test @ <a href="http://198.18.134.13:8009/" target="_blank">http://198.18.134.13:8009/</a></div><div> 500 threads and 5000 connections</div><span><div> Thread Stats Avg Stdev Max +/- Stdev</div></span><div> Latency <b>3.52m </b> 2.64m 8.96m 54.63%</div><div> Req/Sec 376.58 103.18 0.99k 80.53%</div><div> 111628942 requests in 10.00m, 13.72GB read</div><div> Socket errors: connect 0, read 28, write 0, timeout 2</div><div>Requests/sec: <b>186122.56</b></div><div>Transfer/sec: 23.43MB</div></div><div><br></div><div><div><b><font color="#0000ff">Netty:</font></b></div><div>Running 10m test @ <a href="http://198.18.134.13:8009/" target="_blank">http://198.18.134.13:8009/</a></div><div>500 threads and 5000 connections</div><span><div>Thread Stats Avg Stdev Max +/- Stdev</div></span><div> Latency <b>3.77m</b> 2.10m 7.51m 57.73%</div><div> Req/Sec 518.63 31.78 652.00 70.25%</div><div> 155406992 requests in 10.00m, 13.82GB read</div><div> Socket errors: connect 0, read 49, write 0, timeout 0</div><div>Requests/sec: <b>259107</b>.30</div><div>Transfer/sec: 
24.17MB</div></div><div><br></div><div><br></div><div><b>Test Case 2:</b></div><div>Request --> Read --> Send it Kafka :</div><div><br></div><div><b><font color="#0000ff">Undertow:</font></b></div><div><div>Running 10m test @ <a href="http://198.18.134.13:8009/" target="_blank">http://198.18.134.13:8009/</a></div><div>500 threads and 5000 connections</div><span><div>Thread Stats Avg Stdev Max +/- Stdev</div></span><div> Latency <b>4.37m </b> 2.46m 8.72m 57.83%</div><div> Req/Sec 267.32 5.17 287.00 74.52%</div><div> 80044045 requests in 10.00m, 9.84GB read</div><div> Socket errors: connect 0, read 121, write 0, timeout 0</div><div>Requests/sec: <b>133459.79</b></div><div>Transfer/sec: 16.80MB</div></div><div><br></div><div><b><font color="#0000ff">Netty:</font></b></div><div><div>Running 10m test @ <a href="http://198.18.134.13:8009/" target="_blank">http://198.18.134.13:8009/</a></div><div>500 threads and 5000 connections</div><span><div>Thread Stats Avg Stdev Max +/- Stdev</div></span><div> Latency <b>3.78m </b> 2.10m 7.55m 57.79%</div><div> Req/Sec 516.92 28.84 642.00 69.60%</div><div> 154770536 requests in 10.00m, 13.69GB read</div><div> Socket errors: connect 0, read 11, write 0, timeout 101</div><div>Requests/sec: <b>258049.39</b></div><div>Transfer/sec: 23.38MB</div></div><div><br></div><div><br></div><div><br></div><div>CPU Usage:</div><div><b>Undertow:</b></div><div><img src="cid:ii_15d1c6595f909fe6" alt="Inline image 1" style="width:667px;max-width:100%"><br></div><div><br></div><div><b>Netty:</b></div><div><img src="cid:ii_15d1c660bc42b0af" alt="Inline image 2" style="width:667px;max-width:100%"><br></div><div><br></div><div><br></div><div>--Senthil<br></div></div><div class="m_-4931409617143771152m_-1599784569265107073HOEnZb"><div class="m_-4931409617143771152m_-1599784569265107073h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jun 29, 2017 at 7:34 AM, Bill O'Neil <span><<a href="mailto:bill@dartalley.com" 
target="_blank">bill@dartalley.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>1. Can you run the benchmark with the kafka line commented out at first and then again with it not commented out?</div><div>2. What rates were you getting with Jetty and Netty?</div><div>3. Are you running the tests from the same machine or a different one? If its the same machine and its using 20 threads they will be contending with undertows IO threads.</div><div>4. You can probably ignore the POST check if thats all your going to accept and its not a public api.</div><div><br></div><div><span><div>import io.undertow.server.<wbr>HttpHandler;</div><div>import io.undertow.server.<wbr>HttpServerExchange;</div><div>import io.undertow.util.Headers;</div><div>import io.undertow.util.Methods;</div><div> </div><div>public class DLRHandler implements HttpHandler {</div><div> </div><div> final public static String _SUCCESS="SUCCESS";</div><div> final public static String _FAILURE="FAILURE";</div><div> final PostToKafka post2Kafka = new PostToKafka();</div><div> </div><div> @Override</div><div> public void handleRequest( final HttpServerExchange exchange) throws Exception {</div><div> if (exchange.getRequestMethod().<wbr>equals(Methods.POST)) {</div><div> exchange.getRequestReceiver().<wbr>receiveFullString(( exchangeReq, data) -> {</div></span><div> //post2Kafka.write2Kafka(data)<wbr>; // write it to Kafka</div><div> exchangeReq.<wbr>getResponseHeaders().put(<wbr>Headers.CONTENT_TYPE, "text/plain");</div><div> exchangeReq.getResponseSender(<wbr>).send(_SUCCESS);</div><span><div> },</div><div> (exchangeReq, exception) -> {</div><div> exchangeReq.<wbr>getResponseHeaders().put(<wbr>Headers.CONTENT_TYPE, "text/plain");</div><div> exchangeReq.<wbr>getResponseSender().send(_<wbr>FAILURE);</div><div> });</div><div> }else{</div><div> throw new Exception("Method GET not supported by Server ");</div><div> 
}</div><div> }</div><div>}</div></span></div></div><div class="m_-4931409617143771152m_-1599784569265107073m_3690032083140187580HOEnZb"><div class="m_-4931409617143771152m_-1599784569265107073m_3690032083140187580h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 28, 2017 at 6:59 PM, Stuart Douglas <span><<a href="mailto:sdouglas@redhat.com" target="_blank">sdouglas@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The multiple dispatches() are unnecessary (well the second one to the<br>
IO thread is definitely unnecessary, the first one is only required if<br>
post2Kafka.write2Kafka(data); is a blocking operation and needs to be<br>
executed in a worker thread).<br>
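A minimal sketch of the simplification Stuart describes, reusing the names from the DLRHandler posted earlier in this thread (a fragment, not a standalone class; it assumes write2Kafka is blocking, so one dispatch to a worker thread is needed and none back to the IO thread):<br>

```java
// Inside handleRequest, for the POST branch:
exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {
    // Single dispatch: move the blocking Kafka write off the IO thread.
    exchangeReq.dispatch(() -> {
        post2Kafka.write2Kafka(data); // blocking call, now on a worker thread
        // The response can be completed from the worker thread directly;
        // no second dispatch back to the IO thread is needed.
        exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
        exchangeReq.getResponseSender().send(_SUCCESS);
    });
});
```

If write2Kafka were non-blocking (e.g. an async producer send), the remaining dispatch could be dropped as well.<br>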
<span class="m_-4931409617143771152m_-1599784569265107073m_3690032083140187580m_-726484296639338407HOEnZb"><font color="#888888"><br>
Stuart<br>
</font></span><div class="m_-4931409617143771152m_-1599784569265107073m_3690032083140187580m_-726484296639338407HOEnZb"><div class="m_-4931409617143771152m_-1599784569265107073m_3690032083140187580m_-726484296639338407h5"><br>
On Wed, Jun 28, 2017 at 5:42 PM, SenthilKumar K <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>> wrote:<br>
> After modifying the code as below, I could see a slight improvement in the<br>
> server: ~65k req/sec.<br>
><br>
> import io.undertow.server.HttpHandler;<br>
> import io.undertow.server.HttpServerExchange;<br>
> import io.undertow.util.Headers;<br>
> import io.undertow.util.Methods;<br>
><br>
> public class DLRHandler implements HttpHandler {<br>
><br>
>     public static final String _SUCCESS = "SUCCESS";<br>
>     public static final String _FAILURE = "FAILURE";<br>
>     final PostToKafka post2Kafka = new PostToKafka();<br>
><br>
>     @Override<br>
>     public void handleRequest(final HttpServerExchange exchange) throws Exception {<br>
>         if (exchange.getRequestMethod().equals(Methods.POST)) {<br>
>             exchange.getRequestReceiver().receiveFullString((exchangeReq, data) -> {<br>
>                 exchangeReq.dispatch(() -> {<br>
>                     post2Kafka.write2Kafka(data); // write it to Kafka<br>
>                     exchangeReq.dispatch(exchangeReq.getIoThread(), () -> {<br>
>                         exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");<br>
>                         exchangeReq.getResponseSender().send(_SUCCESS);<br>
>                     });<br>
>                 });<br>
>             },<br>
>             (exchangeReq, exception) -> {<br>
>                 exchangeReq.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");<br>
>                 exchangeReq.getResponseSender().send(_FAILURE);<br>
>             });<br>
>         } else {<br>
>             throw new Exception("Method GET not supported by Server");<br>
>         }<br>
>     }<br>
> }<br>
><br>
><br>
> Please review this and let me know if I'm doing anything wrong here.<br>
> --Senthil<br>
><br>
> On Fri, Jun 23, 2017 at 1:30 PM, Antoine Girard <<a href="mailto:antoine.girard@ymail.com" target="_blank">antoine.girard@ymail.com</a>><br>
> wrote:<br>
>><br>
>> Also, to come back to the JVM warmup, this will give you enough answers:<br>
>><br>
>> <a href="https://stackoverflow.com/questions/36198278/why-does-the-jvm-require-warmup" rel="noreferrer" target="_blank">https://stackoverflow.com/<wbr>questions/36198278/why-does-<wbr>the-jvm-require-warmup</a><br>
>><br>
>> For you, this means you have to run your tests for a few minutes<br>
>> before starting your actual measurements.<br>
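The warmup pattern Antoine describes can be sketched with plain JDK code; the work() method below is a hypothetical stand-in for the real request handler, not anything from this thread:<br>

```java
// Sketch of a warmup phase before measurement: run the hot path enough times
// for the JIT to compile it, then time only the post-warmup iterations.
public class WarmupDemo {

    // Stand-in workload (hypothetical): sum of i * 31 for i in [0, n)
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) {
        // Warmup: results are discarded; we only want the JIT to kick in.
        for (int i = 0; i < 10_000; i++) {
            work(1_000);
        }
        // Measurement phase: time only what comes after warmup.
        long t0 = System.nanoTime();
        long result = work(1_000_000);
        long micros = (System.nanoTime() - t0) / 1_000;
        System.out.println("result=" + result + " elapsed=" + micros + "us");
    }
}
```

The same idea applies to wrk: drive load for a few minutes first, then take the run you actually report.<br>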
>><br>
>> I am also interested in how Netty / Jetty perform under the same<br>
>> conditions, please post!<br>
>><br>
>> Cheers,<br>
>> Antoine<br>
>><br>
>> On Fri, Jun 23, 2017 at 1:24 AM, Stuart Douglas <<a href="mailto:sdouglas@redhat.com" target="_blank">sdouglas@redhat.com</a>><br>
>> wrote:<br>
>>><br>
>>> Are you actually testing with the 'System.out.println(" Received<br>
>>> String ==> " + message);' line still in place? System.out is incredibly slow.<br>
>>><br>
>>> Stuart<br>
>>><br>
>>> On Fri, Jun 23, 2017 at 7:01 AM, SenthilKumar K <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>><br>
>>> wrote:<br>
>>> > Sorry, I'm not a JVM expert. How do we warm up the JVM?<br>
>>> ><br>
>>> > Here are the JVM args passed to the server:<br>
>>> ><br>
>>> > nohup java -Xmx4g -Xms4g -XX:MetaspaceSize=96m -XX:+UseG1GC<br>
>>> > -XX:MaxGCPauseMillis=20 -XX:<wbr>InitiatingHeapOccupancyPercent<wbr>=35<br>
>>> > -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50<br>
>>> > -XX:MaxMetaspaceFreeRatio=80 -cp undertow-0.0.1.jar HelloWorldServer<br>
>>> ><br>
>>> ><br>
>>> > --Senthil<br>
>>> ><br>
>>> ><br>
>>> > On Fri, Jun 23, 2017 at 2:23 AM, Antoine Girard<br>
>>> > <<a href="mailto:antoine.girard@ymail.com" target="_blank">antoine.girard@ymail.com</a>><br>
>>> > wrote:<br>
>>> >><br>
>>> >> Do you warm up your jvm prior to the testing?<br>
>>> >><br>
>>> >> Cheers,<br>
>>> >> Antoine<br>
>>> >><br>
>>> >> On Thu, Jun 22, 2017 at 10:42 PM, SenthilKumar K<br>
>>> >> <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>><br>
>>> >> wrote:<br>
>>> >>><br>
>>> >>> Thanks Bill and Antoine.<br>
>>> >>><br>
>>> >>><br>
>>> >>> Here is the updated one (tried without the Kafka API):<br>
>>> >>><br>
>>> >>> public class HelloWorldServer {<br>
>>> >>><br>
>>> >>>     public static void main(final String[] args) {<br>
>>> >>>         Undertow server = Undertow.builder().addHttpListener(8009,<br>
>>> >>>                 "localhost").setHandler(new HttpHandler() {<br>
>>> >>>             @Override<br>
>>> >>>             public void handleRequest(final HttpServerExchange exchange) throws Exception {<br>
>>> >>>                 if (exchange.getRequestMethod().equals(Methods.POST)) {<br>
>>> >>>                     exchange.getRequestReceiver().receiveFullString(new Receiver.FullStringCallback() {<br>
>>> >>>                         @Override<br>
>>> >>>                         public void handle(HttpServerExchange exchange, String message) {<br>
>>> >>>                             System.out.println(" Received String ==> " + message);<br>
>>> >>>                             exchange.getResponseSender().send(message);<br>
>>> >>>                         }<br>
>>> >>>                     });<br>
>>> >>>                 } else {<br>
>>> >>>                     exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");<br>
>>> >>>                     exchange.getResponseSender().send("FAILURE");<br>
>>> >>>                 }<br>
>>> >>>             }<br>
>>> >>>         }).build();<br>
>>> >>>         server.start();<br>
>>> >>>     }<br>
>>> >>> }<br>
>>> >>><br>
>>> >>><br>
>>> >>> Oops, seems there is no improvement:<br>
>>> >>><br>
>>> >>> Running 1m test @ <a href="http://localhost:8009/" rel="noreferrer" target="_blank">http://localhost:8009/</a><br>
>>> >>> 100 threads and 1000 connections<br>
>>> >>> Thread Stats Avg Stdev Max +/- Stdev<br>
>>> >>> Latency 25.79ms 22.18ms 289.48ms 67.66%<br>
>>> >>> Req/Sec 437.76 61.71 2.30k 80.26%<br>
>>> >>> Latency Distribution<br>
>>> >>> 50% 22.60ms<br>
>>> >>> 75% 37.83ms<br>
>>> >>> 90% 55.32ms<br>
>>> >>> 99% 90.47ms<br>
>>> >>> 2625607 requests in 1.00m, 2.76GB read<br>
>>> >>> Requests/sec: 43688.42<br>
>>> >>> Transfer/sec: 47.08MB<br>
>>> >>><br>
>>> >>><br>
>>> >>> :-( :-( ..<br>
>>> >>><br>
>>> >>><br>
>>> >>> --Senthil<br>
>>> >>><br>
>>> >>><br>
>>> >>> On Fri, Jun 23, 2017 at 1:47 AM, Antoine Girard<br>
>>> >>> <<a href="mailto:antoine.girard@ymail.com" target="_blank">antoine.girard@ymail.com</a>> wrote:<br>
>>> >>>><br>
>>> >>>> You can use the Receiver API, specifically for that purpose.<br>
>>> >>>> On the exchange, call: getRequestReceiver();<br>
>>> >>>><br>
>>> >>>> You will get a receiver object:<br>
>>> >>>><br>
>>> >>>><br>
>>> >>>> <a href="https://github.com/undertow-io/undertow/blob/master/core/src/main/java/io/undertow/io/Receiver.java" rel="noreferrer" target="_blank">https://github.com/undertow-<wbr>io/undertow/blob/master/core/<wbr>src/main/java/io/undertow/io/<wbr>Receiver.java</a><br>
>>> >>>><br>
>>> >>>> On the receiver you can call: receiveFullString, you have to pass it<br>
>>> >>>> a<br>
>>> >>>> callback that will be called when the whole body has been read.<br>
>>> >>>><br>
>>> >>>> Please share your results when you test this further!<br>
>>> >>>><br>
>>> >>>> Cheers,<br>
>>> >>>> Antoine<br>
>>> >>>><br>
>>> >>>><br>
>>> >>>> On Thu, Jun 22, 2017 at 8:27 PM, SenthilKumar K<br>
>>> >>>> <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>><br>
>>> >>>> wrote:<br>
>>> >>>>><br>
>>> >>>>> Seems my way of reading the request body is wrong. So what is the<br>
>>> >>>>> efficient way of reading the request body in Undertow?<br>
>>> >>>>><br>
>>> >>>>> --Senthil<br>
>>> >>>>><br>
>>> >>>>> On Thu, Jun 22, 2017 at 11:30 PM, SenthilKumar K<br>
>>> >>>>> <<a href="mailto:senthilec566@gmail.com" target="_blank">senthilec566@gmail.com</a>> wrote:<br>
>>> >>>>>><br>
>>> >>>>>> Hello Undertow Dev Team ,<br>
>>> >>>>>><br>
>>> >>>>>> I have been working on a use case where I need to create a simple<br>
>>> >>>>>> HTTP server that serves 1.5 million requests per second per instance.<br>
>>> >>>>>><br>
>>> >>>>>><br>
>>> >>>>>> Here is the benchmark result of Undertow :<br>
>>> >>>>>><br>
>>> >>>>>> Running 1m test @ <a href="http://127.0.0.1:8009/" rel="noreferrer" target="_blank">http://127.0.0.1:8009/</a><br>
>>> >>>>>> 20 threads and 40 connections<br>
>>> >>>>>> Thread Stats Avg Stdev Max +/- Stdev<br>
>>> >>>>>> Latency 2.51ms 10.75ms 282.22ms 99.28%<br>
>>> >>>>>> Req/Sec 1.12k 316.65 1.96k 54.50%<br>
>>> >>>>>> Latency Distribution<br>
>>> >>>>>> 50% 1.43ms<br>
>>> >>>>>> 75% 2.38ms<br>
>>> >>>>>> 90% 2.90ms<br>
>>> >>>>>> 99% 10.45ms<br>
>>> >>>>>> 1328133 requests in 1.00m, 167.19MB read<br>
>>> >>>>>> Requests/sec: 22127.92<br>
>>> >>>>>> Transfer/sec: 2.79MB<br>
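As a sanity check on the wrk summary above, the reported rate is just total requests divided by elapsed time (wrk divides by the exact elapsed time, hence 22127.92 rather than the rounder figure this nominal 60-second calculation gives):<br>

```java
// Quick arithmetic on the wrk output: 1,328,133 requests over a nominal
// 60-second run is roughly 22.1k requests/sec.
public class WrkMath {

    static double reqPerSec(long totalRequests, double seconds) {
        return totalRequests / seconds;
    }

    public static void main(String[] args) {
        System.out.printf("%.2f req/s%n", reqPerSec(1_328_133L, 60.0));
    }
}
```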
>>> >>>>>><br>
>>> >>>>>> This is low compared to other frameworks like Jetty and Netty,<br>
>>> >>>>>> even though Undertow is known as a high-performance HTTP server.<br>
>>> >>>>>><br>
>>> >>>>>> Hardware details:<br>
>>> >>>>>> Xeon CPU E3-1270 v5 machine with 4 cores (clock 100 MHz, capacity 4 GHz),<br>
>>> >>>>>> memory: 32 GB, available memory: 31 GB.<br>
>>> >>>>>><br>
>>> >>>>>> I would need Undertow experts to review the server code below and<br>
>>> >>>>>> advise me on tuning to achieve my goal (~1.5 million requests/sec).<br>
>>> >>>>>><br>
>>> >>>>>> Server:<br>
>>> >>>>>><br>
>>> >>>>>> Undertow server = Undertow.builder()<br>
>>> >>>>>>         .addHttpListener(8009, "localhost")<br>
>>> >>>>>>         .setHandler(new Handler()).build();<br>
>>> >>>>>> server.start();<br>
>>> >>>>>><br>
>>> >>>>>> Handler.java<br>
>>> >>>>>><br>
>>> >>>>>> final Pooled<ByteBuffer> pooledByteBuffer =<br>
>>> >>>>>>         exchange.getConnection().getBufferPool().allocate();<br>
>>> >>>>>> final ByteBuffer byteBuffer = pooledByteBuffer.getResource();<br>
>>> >>>>>> byteBuffer.clear();<br>
>>> >>>>>> exchange.getRequestChannel().read(byteBuffer);<br>
>>> >>>>>> int pos = byteBuffer.position();<br>
>>> >>>>>> byteBuffer.rewind();<br>
>>> >>>>>> byte[] bytes = new byte[pos];<br>
>>> >>>>>> byteBuffer.get(bytes);<br>
>>> >>>>>> String requestBody = new String(bytes, Charset.forName("UTF-8"));<br>
>>> >>>>>> byteBuffer.clear();<br>
>>> >>>>>> pooledByteBuffer.free();<br>
>>> >>>>>> final PostToKafka post2Kafka = new PostToKafka();<br>
>>> >>>>>> try {<br>
>>> >>>>>>     post2Kafka.write2Kafka(requestBody); // this API can handle ~2 million events per sec<br>
>>> >>>>>> } catch (Exception e) {<br>
>>> >>>>>>     e.printStackTrace();<br>
>>> >>>>>> }<br>
>>> >>>>>> exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");<br>
>>> >>>>>> exchange.getResponseSender().send("SUCCESS");<br>
>>> >>>>>><br>
>>> >>>>>><br>
>>> >>>>>> --Senthil<br>
>>> >>>>><br>
>>> >>>>><br>
>>> >>>>><br>
>>> >>>>> ______________________________<wbr>_________________<br>
>>> >>>>> undertow-dev mailing list<br>
>>> >>>>> <a href="mailto:undertow-dev@lists.jboss.org" target="_blank">undertow-dev@lists.jboss.org</a><br>
>>> >>>>> <a href="https://lists.jboss.org/mailman/listinfo/undertow-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/<wbr>mailman/listinfo/undertow-dev</a><br>
>>> >>>><br>
>>> >>>><br>
>>> >>><br>
>>> >><br>
>>> ><br>
>>> ><br>
>><br>
>><br>
><br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
______________________________<wbr>_________________<br>
undertow-dev mailing list<br>
<a href="mailto:undertow-dev@lists.jboss.org" target="_blank">undertow-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/undertow-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/<wbr>mailman/listinfo/undertow-dev</a></blockquote></div></div><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature"><span style="font-family:arial,sans-serif;font-size:13px;border-collapse:collapse"><div><span style="font-family:verdana,sans-serif;font-size:13px">Med venlig hilsen / Best regards</span></div><div><p><b><span lang="EN-GB" style="font-size:10pt"><font color="#000066"><font face="verdana, sans-serif"><span style="color:rgb(34,34,34);background-color:rgb(255,255,255)">Kim Rasmussen</span></font></font></span></b><b><span lang="EN-GB" style="color:rgb(0,51,102)"><font face="verdana, sans-serif"><br></font></span></b><span lang="EN-GB" style="font-size:10pt;color:rgb(102,102,102)"><font face="verdana, sans-serif">Partner, IT Architect</font></span></p><p><b><span lang="EN-GB" style="font-size:10pt;color:rgb(102,102,102)"><font face="verdana, sans-serif">Asseco Denmark A/S</font></span></b><span lang="EN-GB" style="font-size:10pt;color:rgb(102,102,102)"><font face="verdana, sans-serif"><b><br></b>Kronprinsessegade 54<br>DK-1306 Copenhagen K<br>Mobile: +45 26 16 40 23<br>Ph.: +45 33 36 46 60<br>Fax: +45 33 36 46 61</font></span></p></div></span></div>
</blockquote></div></div>