Different Handler Chains for Listeners
by Bernd Eckenfels
Hello,
I see in Undertow$Builder that each listener which is added gets the
rootHandler set; the Builder API does not allow me to set different
handlers for different ports.
Is that an expected and intended limitation? Replicating all the setup
code from Undertow.start() might allow me to do that, but it does not
seem to be an exported API.
Specifically, I want to have a service interface and a management
interface in a microservice. I could do that with two server instances,
but I am not sure whether that is better. I haven't seen a handler which
can dispatch based on the listener; is there one (maybe based on
ServerConnection#getLocalAddress())?
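For illustration, this is roughly the kind of handler I have in mind
(just a sketch; the port number and the two delegate handlers are made
up):

import java.net.InetSocketAddress;

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;

// Sketch: dispatch to a different handler chain depending on the local
// (listener) port of the connection. Port 9090 and the two delegates
// are placeholders only.
public class ListenerDispatchHandler implements HttpHandler {

    private final HttpHandler serviceHandler;
    private final HttpHandler managementHandler;

    public ListenerDispatchHandler(HttpHandler serviceHandler, HttpHandler managementHandler) {
        this.serviceHandler = serviceHandler;
        this.managementHandler = managementHandler;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        InetSocketAddress local =
                exchange.getConnection().getLocalAddress(InetSocketAddress.class);
        if (local != null && local.getPort() == 9090) { // assumed management port
            managementHandler.handleRequest(exchange);
        } else {
            serviceHandler.handleRequest(exchange);
        }
    }
}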
Greetings
Bernd
9 years, 8 months
JSR WebSocket onMessage called out-of-order
by Stephan Mehlhase
Hi,
I ran into an issue with the Undertow JSR WebSocket implementation: the
message handler's @OnMessage method is not called with messages in the
same order as they were sent. I first encountered this problem with the
current WildFly 8.2.0.Final version and posted about it in the forums
[1]. But since I got no replies there and can reproduce the problem with
Undertow 1.2.0.Beta4 (from Maven) alone, I thought this might be the
right place to ask about it.
I am developing a @ServerEndpoint which needs to handle clients sending
unfragmented messages in rapid succession. These messages are not always
delivered in the order in which they were sent. Since WebSocket runs over
a single TCP connection (the handshake is HTTP), I assume that the
messages arrive at the server in the "correct" order. However, from time
to time two messages appear switched (e.g. message no. 4 arrives at
@OnMessage before message no. 3). If I fragment the messages (according
to RFC 6455), the order of all messages is always correct.
I have uploaded an example application which reproduces this to
http://pastebin.com/tcm6ZnsB
It uses some Guava classes for checking the order of messages.
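In essence, the endpoint boils down to something like this (a rough
sketch, not the actual pastebin code; the real app uses Guava, and the
sequence-number check shown here is only illustrative):

import java.util.concurrent.atomic.AtomicLong;

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Sketch of an order-checking endpoint: each text message carries a
// monotonically increasing sequence number, and the endpoint reports
// whenever a message arrives with a lower number than one already seen.
@ServerEndpoint("/order-check")
public class OrderCheckEndpoint {

    // one endpoint instance per connection, so this is per-connection state
    private final AtomicLong lastSeen = new AtomicLong(-1);

    @OnMessage
    public void onMessage(String message) {
        long seq = Long.parseLong(message.trim());
        long previous = lastSeen.getAndSet(seq);
        if (seq <= previous) {
            System.err.println("Out of order: got " + seq + " after " + previous);
        }
    }
}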
I can also reproduce this problem with another (non-public but
RFC-compliant) WebSocket library.
My first question is whether this is intended behavior or a bug. The
WebSocket RFC does not explicitly forbid this kind of behavior, but for
me it was very surprising given that
1. the underlying protocols deliver everything in order,
2. this works as expected for fragmented messages, and
3. I haven't seen this with other implementations.
If this is not considered a bug, is there an option which basically
forces in-order delivery of messages, ideally one that is also
accessible from WildFly? If not, does opening a feature request for this
make sense (i.e. does it have a realistic chance of being addressed)?
Best,
Stephan
[1]: https://developer.jboss.org/message/921226#921226
--
Stephan Mehlhase
Email: stephan.mehlhase(a)eml.org
EML European Media Laboratory GmbH
Schloss-Wolfsbrunnenweg 35
69118 Heidelberg
Amtsgericht Mannheim / HRB 335719
Managing Partner: Dr. h. c. Dr.-Ing. E. h. Klaus Tschira
Scientific and Managing Director: Prof. Dr.-Ing. Andreas Reuter
http://www.eml.org
9 years, 8 months
Async PUT (fileresource) handler.
by Bernd Eckenfels
Hello,
I have added a small special-purpose PUT handler to a project of mine.
However, I am using a blocking exchange to read the uploaded data and
write it to a file. This seems to work, but I wonder if it would be
worthwhile to do this asynchronously (with channels).
If I understand NIO file channels correctly, asynchronous file I/O is
emulated with its own IO threads, so I guess it is not really better to
use it this way (in terms of occupied threads).
But I wonder if somebody has done some experiments or has ready-made
code for it?
Just for the record, I am using
new File(FileResourceManager.getBase(), canonicalize(exchange.getRelativePath()))
to work around the missing write support in the FRM. That's fine if the
application is not too complex and needs no abstraction, but I can
imagine it would be better to have that support built in (for things
like a WebDAV handler).
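The blocking version currently looks roughly like this (a sketch; the
canonicalization step and the error handling are simplified):

import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;
import io.undertow.server.handlers.resource.FileResourceManager;
import io.undertow.util.Methods;
import io.undertow.util.StatusCodes;

// Sketch of the blocking PUT handler; a real handler must canonicalize
// the path to prevent directory traversal, which is omitted here.
public class SimplePutHandler implements HttpHandler {

    private final FileResourceManager resourceManager;

    public SimplePutHandler(FileResourceManager resourceManager) {
        this.resourceManager = resourceManager;
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        if (!Methods.PUT.equals(exchange.getRequestMethod())) {
            exchange.setResponseCode(StatusCodes.METHOD_NOT_ALLOWED);
            return;
        }
        if (exchange.isInIoThread()) {
            // blocking I/O must not run on the IO thread
            exchange.dispatch(this);
            return;
        }
        exchange.startBlocking();
        File target = new File(resourceManager.getBase(), exchange.getRelativePath());
        Files.copy(exchange.getInputStream(), target.toPath(), StandardCopyOption.REPLACE_EXISTING);
        exchange.setResponseCode(StatusCodes.CREATED);
    }
}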
Gruss
Bernd
9 years, 8 months
ResourceHandler gives 0-byte Content-Length for HEAD
by Bernd Eckenfels
Hello,
With all versions I tried (including 1.2.0.Beta9), undertow-core returns
Content-Length: 0 when a HEAD request is made.
I am using a simple handler:
FileResourceManager frm = new FileResourceManager(new File(".").getCanonicalFile(), 16 * 1024);
ResourceHandler res = new ResourceHandler();
res.setResourceManager(frm);
res.setAllowed(Predicates.truePredicate());
res.setDirectoryListingEnabled(true);
PathHandler path = Handlers.path(errorHandler); // errorHandler: fallback handler, defined elsewhere
path.addPrefixPath("/res", res);
GracefulShutdownHandler shutdownHandler = Handlers.gracefulShutdown(path);
Undertow server = Undertow.builder().addHttpListener(8080, null)
        .setServerOption(UndertowOptions.RECORD_REQUEST_START_TIME, Boolean.TRUE)
        .setServerOption(UndertowOptions.ALWAYS_SET_DATE, Boolean.TRUE) // default
        .setWorkerThreads(2)
        .setHandler(shutdownHandler)
        .build();
server.start();
If I request the GET method the content is found:
curl -v http://localhost:8080/webdav/pom.xml
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /res/pom.xml HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Last-Modified: Fri, 13 Mar 2015 19:24:58 GMT
< Content-Length: 1155
< Content-Type: text/xml
< Date: Fri, 13 Mar 2015 19:39:24 GMT
<
...
curl -v -XHEAD http://localhost:8080/webdav/pom.xml
* Connected to localhost (127.0.0.1) port 8080 (#0)
> HEAD /webdav/pom.xml HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Last-Modified: Fri, 13 Mar 2015 19:24:58 GMT
< Content-Length: 0
< Content-Type: text/xml
< Date: Fri, 13 Mar 2015 19:40:51 GMT
<
* Connection #0 to host localhost left intact
According to RFC 2616 sec. 14.13 (and sec. 9.4 on HEAD), the
Content-Length returned for HEAD should be the same as for a GET of the
same resource (1155 here), not 0.
9 years, 8 months
TransferTo/sendfile Undertow v1.0.x vs 1.1.x
by Dave O'Brien
Hi everyone,
I wanted to run this 'issue' past the experts, as I've looked into it
myself and hit a bit of a dead end.
Our application is using Undertow 1.0.17 for serving both static and
dynamic Web content. We did some benchmarking early on in the development
of our app and got some very impressive results.
On a modest server with undertow 1.0.17, using wrk defaults we get around
50k rqsts/s for a 2kb file and about 17k rqsts/s for a 100kb file. So far,
so (very) good.
I recently tried updating to version 1.1.2 and re-ran the original tests.
I got similar results to v1.0.17 for the 2kb file. For the 100kb file,
however, the throughput dropped to about 10k rqsts/s.
Not the end of the world, I know, but I decided to dig around and see what
was (or wasn't) going on....
Our app has a sendfile threshold (64k); above it we use the response
channel's transfer method, which performs (on Linux anyway) a native
zero-copy sendfile operation via the Java NIO FileChannel.transferTo
call (which calls into Linux sendfile under the covers if certain
conditions are met).
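For reference, the serving path boils down to something like the
following (a sketch, not our exact code; it uses the Sender convenience
API rather than the raw response channel, and the surrounding handler
and threshold check are omitted):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import io.undertow.io.IoCallback;
import io.undertow.io.Sender;
import io.undertow.server.HttpServerExchange;
import io.undertow.util.Headers;
import org.xnio.IoUtils;

// Sketch: send a file above the threshold with a (potentially) zero-copy
// transfer via the response sender.
final class SendfileHelper {

    static void serveLargeFile(HttpServerExchange exchange, Path file) throws IOException {
        final FileChannel channel = FileChannel.open(file, StandardOpenOption.READ);
        exchange.getResponseHeaders().put(Headers.CONTENT_LENGTH, channel.size());
        exchange.getResponseSender().transferFrom(channel, new IoCallback() {
            @Override
            public void onComplete(HttpServerExchange ex, Sender sender) {
                IoUtils.safeClose(channel);
                ex.endExchange();
            }
            @Override
            public void onException(HttpServerExchange ex, Sender sender, IOException exception) {
                IoUtils.safeClose(channel);
                ex.endExchange();
            }
        });
    }
}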
In v1.0.x, the native sendfile method (transferTo0) is ultimately invoked
to send the content. But in v1.1.x (and 1.2.x as well) the transferTo
method in the FileChannel instead opts to invoke
transferToArbitraryChannel, which resorts to a kernel-to-user-space copy
and a not-insignificant drop in throughput.
So my question is, is this change across the versions deliberate, a
sacrifice for improvements elsewhere - or is it a bug?
If it's a bug I'm happy to re-instate the native sendfile functionality
myself, but I wanted to check it was an issue first...
Thanks in advance for any assistance/advice on this one...
David.
9 years, 8 months
Lost requests when responding from Akka actor system
by Michael Barton
Hi all,
I've been using Undertow embedded in an Akka application and it is working great apart from a small issue when running load tests.
I've reduced the problem down to a small demo application which I've attached below. It is in Scala, but I can create a Java version if required.
I fire one million POST requests at Undertow and almost all succeed; however, some never receive a response. The number that fail varies from 1 to 10.
Increasing the concurrency on the client and server side, together and independently, increases the number of failures. I've not yet seen any failures with no concurrency on the server side.
The HTTP handler callback is not invoked for the requests that fail. I've verified this by adding a counter before I call exchange.dispatch() and pass the exchange on to the processing actor pool.
I don't think I'm doing anything that is not thread-safe, since Akka guarantees that an actor does not execute concurrently. However, the execution of an actor does move between threads in the underlying fork-join pool, and I'm wondering if that could cause issues with Undertow.
Any suggestions for how I could debug this?
Thanks,
Michael
Code to reproduce
==============
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool
import io.undertow.Undertow
import io.undertow.server.{HttpHandler, HttpServerExchange}

object UndertowTest {
  def start(handler: HttpHandler) = Undertow.builder()
    .addHttpListener(8888, "localhost")
    .setHandler(handler)
    .build()
    .start()
}

class Responder extends HttpHandler {
  override def handleRequest(exchange: HttpServerExchange): Unit = {
    exchange.getResponseHeaders.put(io.undertow.util.Headers.CONTENT_TYPE, "text/plain")
    exchange.getResponseSender.send("Hello world!")
  }
}

// Loggable is a small trait from our own code base
class RequestHandler extends Actor with Loggable {
  val handler = new Responder
  override def receive = {
    case exchange: HttpServerExchange =>
      handler.handleRequest(exchange)
  }
}

object UndertowAkkaTest extends App {
  val sys = ActorSystem()
  // Changing the number here changes the number of actors handling incoming requests
  val handlerPool = sys.actorOf(RoundRobinPool(4).props(Props(classOf[RequestHandler])))

  // Mark the exchange as dispatched so Undertow does not end it when this
  // handler returns, then hand it off to the actor pool for processing.
  UndertowTest.start(new HttpHandler {
    override def handleRequest(exchange: HttpServerExchange): Unit = {
      exchange.dispatch()
      handlerPool ! exchange
    }
  })
}
Apache Bench command to test
========================
# -c changes number of concurrent client side requests
ab -k -c 4 -n 100000 -p payload.json -T application/json http://localhost:8888/streams/demo/infrastructure/cpu
Data in payload.json
===============
{
"sampleTime": "2015-02-24T06:02:47",
"contributor": "3b8da322-ef8f-4f6b-93a3-a171dd794308",
"host": "some-host",
"process": "some-process",
"user": 35.7,
"kernel": 12.3
}
9 years, 9 months
SSL client authorization -- how ?
by John Robinson
What are the detailed configuration instructions for "standalone.xml",
web.xml, and jboss-web.xml to set up SSL with client certificate
authentication?
Could someone direct me to the appropriate place to find detailed
configuration information on how to have a WildFly 8.2 server request a
certificate from a client over SSL?
The certificate, I expect, would then be exposed to the application via
the "javax.servlet.request.X509Certificate" request attribute.
If this is an inappropriate forum for this question, please feel free to
direct me to the correct forum.
Thanks in advance for your help.
9 years, 9 months
problem with root url redirects
by Bill Burke
Looking at Wildfly 8.2.0,
1. I'm sending a POST to /app
2. Wildfly redirects to /app/ via a 302
3. Browser does a GET /app/
Form data is lost.
Why does Wildfly do a redirect? This has caused me a lot of problems
for our SAML adapter as users have to remember to put a trailing '/' in
their registration URLs.
If you insist on doing a redirect, why not send a 307 instead if it is a
non-GET request?
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
9 years, 9 months
Undertow High Performance
by Chryssanthi Vandera
Hello,
I am a graduate student working on an assignment and I am using undertow.
My assignment has a performance goal (requests per second) and I am trying to configure undertow to perform better to achieve this goal.
I read in the documentation about the number of IO threads as well as the number of worker threads, but apparently I haven't understood how they relate to each other, because when I tweak this configuration I get worse results…
Could you please help me understand which factors make Undertow perform faster? Is it just those two settings, or can I change some other configuration to make it serve more requests per second?
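For context, a minimal setup like the one I am experimenting with looks
roughly like this (a sketch; the handler and the thread counts are only
placeholders, not recommendations):

import io.undertow.Undertow;

// Sketch of the configuration being tweaked.
public class PerfTestServer {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                .setIoThreads(4)       // default: max(#CPU cores, 2)
                .setWorkerThreads(32)  // default: ioThreads * 8
                .setHandler(exchange -> exchange.getResponseSender().send("Hello"))
                .build();
        server.start();
    }
}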
Thank you,
Chrysanthi Vandera
9 years, 9 months
setting a bufferSize
by Edgar Espina
Hi,
I found some issues while setting the bufferSize:
1) bufferSize isn't exactly the max number of bytes of a response body. For
example, bufferSize=5 with a "hello" response failed with a buffer overflow
error. That's because Undertow requires some extra bytes for response
headers and other data. This is not necessarily wrong, but a
response/buffer size is usually the max number of bytes for the HTTP body
(Jetty, Netty, others). It is something minor, but it would be nice to
change this to represent the max size of the HTTP body.
2) bufferSize is ignored when Content-Length is set. Here are some logs
from Apache HttpClient calling Undertow and Jetty with a buffer size of 20
bytes.
Undertow: output size is 40 bytes, buffer size is 20 bytes
2015/03/01 21:30:32:053 ART [DEBUG] wire - http-outgoing-0 >> "GET /?data=ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMN&len=40 HTTP/1.1[\r][\n]"
2015/03/01 21:30:32:053 ART [DEBUG] wire - http-outgoing-0 >> "Host: localhost:54697[\r][\n]"
2015/03/01 21:30:32:053 ART [DEBUG] wire - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
2015/03/01 21:30:32:054 ART [DEBUG] wire - http-outgoing-0 >> "User-Agent: Apache-HttpClient/4.4-beta1 (Java 1.5 minimum; Java/1.8.0)[\r][\n]"
2015/03/01 21:30:32:054 ART [DEBUG] wire - http-outgoing-0 >> "Accept-Encoding: gzip,deflate[\r][\n]"
2015/03/01 21:30:32:054 ART [DEBUG] wire - http-outgoing-0 >> "[\r][\n]"
2015/03/01 21:30:32:120 ART [DEBUG] wire - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
2015/03/01 21:30:32:122 ART [DEBUG] wire - http-outgoing-0 << "Content-Type: text/html;charset=UTF-8[\r][\n]"
2015/03/01 21:30:32:122 ART [DEBUG] wire - http-outgoing-0 << "Content-Length: 40[\r][\n]"
2015/03/01 21:30:32:122 ART [DEBUG] wire - http-outgoing-0 << "Date: Mon, 02 Mar 2015 00:30:32 GMT[\r][\n]"
2015/03/01 21:30:32:122 ART [DEBUG] wire - http-outgoing-0 << "[\r][\n]"
2015/03/01 21:30:32:122 ART [DEBUG] wire - http-outgoing-0 << "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMN"
Jetty: output size is 40 bytes, buffer size is 20 bytes
2015/03/01 21:33:07:956 ART [DEBUG] wire - http-outgoing-1 >> "GET /?data=ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMN&len=40 HTTP/1.1[\r][\n]"
2015/03/01 21:33:07:957 ART [DEBUG] wire - http-outgoing-1 >> "Host: localhost:54732[\r][\n]"
2015/03/01 21:33:07:957 ART [DEBUG] wire - http-outgoing-1 >> "Connection: Keep-Alive[\r][\n]"
2015/03/01 21:33:07:957 ART [DEBUG] wire - http-outgoing-1 >> "User-Agent: Apache-HttpClient/4.4-beta1 (Java 1.5 minimum; Java/1.8.0)[\r][\n]"
2015/03/01 21:33:07:957 ART [DEBUG] wire - http-outgoing-1 >> "Accept-Encoding: gzip,deflate[\r][\n]"
2015/03/01 21:33:07:957 ART [DEBUG] wire - http-outgoing-1 >> "[\r][\n]"
2015/03/01 21:33:07:959 ART [DEBUG] wire - http-outgoing-1 << "HTTP/1.1 200 OK[\r][\n]"
2015/03/01 21:33:07:959 ART [DEBUG] wire - http-outgoing-1 << "Content-Type: text/html;charset=UTF-8[\r][\n]"
2015/03/01 21:33:07:959 ART [DEBUG] wire - http-outgoing-1 << "Content-Length: 40[\r][\n]"
2015/03/01 21:33:07:959 ART [DEBUG] wire - http-outgoing-1 << "[\r][\n]"
2015/03/01 21:33:07:959 ART [DEBUG] wire - http-outgoing-1 << "ABCDEFGHIJKLMNOPQRST"
2015/03/01 21:33:07:959 ART [DEBUG] headers - http-outgoing-1 << HTTP/1.1 200 OK
2015/03/01 21:33:07:959 ART [DEBUG] headers - http-outgoing-1 << Content-Type: text/html;charset=UTF-8
2015/03/01 21:33:07:959 ART [DEBUG] headers - http-outgoing-1 << Content-Length: 40
2015/03/01 21:33:07:960 ART [DEBUG] MainClientExec - Connection can be kept alive indefinitely
2015/03/01 21:33:07:960 ART [DEBUG] wire - http-outgoing-1 << "UVWXYZABCDEFGHIJKLMN"
As you can see, Undertow seems to ignore the bufferSize option and sends
the whole output at once, while Jetty sent two chunks of 20 bytes each.
bufferSize works when Transfer-Encoding: chunked is set, but is ignored
when Content-Length is set. I didn't try with larger files, but this makes
me worry because I think sending a large file with Content-Length set
might result in an OOM error.
I found this in the latest beta release, 1.2.0.Beta8, using
exchange.getOutputStream() (blocking mode).
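For reference, the server side of my test boils down to roughly the
following (a sketch only; the actual test app differs, and the port and
query handling are simplified):

import java.nio.charset.StandardCharsets;

import io.undertow.Undertow;
import io.undertow.server.handlers.BlockingHandler;
import io.undertow.util.Headers;

// Sketch: blocking handler that sets Content-Length explicitly and writes
// the body through exchange.getOutputStream(), with a 20-byte buffer size.
public class BufferSizeTest {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .setBufferSize(20) // buffer size under test
                .addHttpListener(8080, "localhost")
                .setHandler(new BlockingHandler(exchange -> {
                    // assumes a ?data=... query parameter is present
                    String data = exchange.getQueryParameters().get("data").getFirst();
                    byte[] body = data.getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().put(Headers.CONTENT_LENGTH, body.length);
                    exchange.getOutputStream().write(body); // blocking write
                }))
                .build();
        server.start();
    }
}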
Thanks
--
edgar
9 years, 9 months