On May 4, 2016, at 9:11 AM, David M. Lloyd wrote:
One thing I noticed is that you went a different way with async
invocations and cancellation support.
The way I originally proposed was that the request/response works as per
normal, but a 1xx header is used to send the client a cancellation token
which can be POSTed back to the server to cancel the invocation. I
understand that this approach requires 1xx support which some clients
might not have.
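For illustration, the 1xx-based exchange described above might look roughly like this on the wire. The status code 199, the header names, and the /ejb paths are all made up for this sketch; nothing here is a finalized protocol.

```
C: POST /ejb/SomeBean/someMethod HTTP/1.1
C: Host: appserver.example.com
C: Content-Type: application/x-ejb-invocation
C:
C: <marshalled invocation>

S: HTTP/1.1 199 Invocation-Accepted
S: X-Cancel-Token: <opaque token>

   (the client may now POST the token back, e.g. to /ejb/cancel,
    to abort the still-running invocation)

S: HTTP/1.1 200 OK
S: Content-Type: application/x-ejb-response
S:
S: <marshalled result>
```

The point is that the request/response cycle itself is unchanged; the interim 1xx response is the only extra machinery, which is why clients lacking 1xx support are a concern.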
In your proposal, the async EJB request always returns immediately and
uses an invocation ID which can later be retrieved. I rejected this
approach because it requires the server to maintain state outside of the
request - something that is sure to fail.
I don’t think this is too big of a deal; you just need a data structure with a built-in
automated expiration mechanism. However, it does have the downside that it’s naturally
racy, which I guess is your concern?
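A minimal sketch of such a "data structure with a built-in automated expiration mechanism": a registry mapping invocation IDs to pending results, purging stale entries lazily on access. The class and method names here are illustrative, not part of any actual WildFly/EJB API, and the lazy purge also shows the race being discussed: a client can GET just after its entry has expired.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side registry for the "invocation ID" approach:
// results are kept only until a TTL elapses, so the server's extra
// state is bounded but a slow client can lose its response.
class InvocationRegistry<T> {
    private static final class Entry<T> {
        final T value;
        final long expiresAt;
        Entry(T value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry<T>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    InvocationRegistry(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    void put(String invocationId, T result) {
        purgeExpired();
        entries.put(invocationId,
                new Entry<>(result, System.currentTimeMillis() + ttlMillis));
    }

    // Retrieves and removes the result; returns null if absent or expired.
    // This is where the inherent race lives: expiry and retrieval compete.
    T take(String invocationId) {
        purgeExpired();
        Entry<T> e = entries.remove(invocationId);
        return e == null ? null : e.value;
    }

    private void purgeExpired() {
        long now = System.currentTimeMillis();
        entries.values().removeIf(e -> e.expiresAt <= now);
    }
}
```

With a generous TTL, a put followed by a take returns the stored result exactly once; a second take for the same ID returns null, which is also what a too-late client would observe.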
Also the client doesn't
really have any notion as to when it can GET the response: it would have
to do it more or less immediately to avoid a late response (or when
Future.get() is called), meaning you need two round trips in the common
case, which is not so good.
Yeah, I think a key aspect of this is that the client has to be able to say it wants
blocking behavior, even if the server side is mapped asynchronously.
I think that the best compromise solution is to treat async invocations
identically to regular invocations, and instead let the *client* give a
cancellation ID to the server, which it can later POST to the server as
I described in my original document. If the server receives the
client's ID (maybe also matching the client's IP address) then the
request can be canceled if it is still running.
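The client-supplied cancellation ID only works if the ID is effectively unguessable and never repeated; a random UUID is one obvious way to get that. The class name, header name, and /cancel endpoint mentioned in the comments below are assumptions for illustration, not part of any agreed protocol.

```java
import java.util.UUID;

// Sketch of the client side of the "client mints its own cancellation ID"
// compromise: attach the ID to the invocation request (e.g. as an
// "X-Cancel-Id" header, name hypothetical), and POST the same ID back
// (e.g. to a /cancel endpoint) to abort a still-running invocation.
// The server may additionally match the client's IP address before honoring it.
class CancellationIdExample {
    static String newCancellationId() {
        // 122 bits of randomness: collisions are negligible, and the ID
        // is not guessable by other clients.
        return UUID.randomUUID().toString();
    }
}
```

Because the ID is generated on the client, the server keeps no state beyond the in-flight request itself, which addresses the original objection to server-held invocation IDs.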
I think that’s a reasonable way to do it, provided the ID is sufficiently unique that it
won’t be repeated. Actually, with HTTP/2 we could just hang up the stream to signal
cancellation, since the stream ID is an effective invocation ID. However, it’s perhaps
best to keep consistent semantics between h1 and h2.
I think a key question, which I don’t have the answer to, is whether we need to support
more concurrent long-running invocations than connections (h1) / streams (h2). If the
answer is yes, then long polling is bad. I am also slightly worried about HTTP
intermediaries imposing timeouts on long-running operations.
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat