[wildfly-dev] EJB over HTTP

David M. Lloyd david.lloyd at redhat.com
Wed May 4 14:12:17 EDT 2016

On 05/04/2016 12:51 PM, Jason Greene wrote:
>> On May 4, 2016, at 9:11 AM, David M. Lloyd <david.lloyd at redhat.com> wrote:
>> One thing I noticed is that you went a different way with async
>> invocations and cancellation support.
>> The way I originally proposed was that the request/response works as per
>> normal, but a 1xx header is used to send the client a cancellation token
>> which can be POSTed back to the server to cancel the invocation.  I
>> understand that this approach requires 1xx support which some clients
>> might not have.
>> In your proposal, the async EJB request always returns immediately and
>> uses an invocation ID which can later be retrieved.  I rejected this
>> approach because it requires the server to maintain state outside of the
>> request - something that is sure to fail.
> I don’t think this is too big of a deal, I mean you just have a data structure with a built-in automated expiration mechanism. However, it does have a negative in that it's naturally racy, which I guess is your concern?

I know it's "just" a data structure, but it's overhead and 
complexity that the server doesn't need and shouldn't have, and it's one 
more knob for users to screw around with to trade off between memory 
usage and error behavior.  I'd prefer an approach that keeps no 
timeout-managed state, can never fail due to misconfiguration of the 
server, and has deterministic cleanup properties.

Cancellation is always naturally racy, but both of my proposals 
address this: cancellation is idempotent, and unrecognized IDs are 
ignored.  The cancellation result is sent back on the original request: 
it either receives a 200 or 204 indicating success (with or without a 
response body), or it receives a 408 indicating that the request was 
cancelled cleanly.  The client can ping the server with cancellation 
POSTs as much as it wants without mucking up the state of the 
invocation, prior cancel requests, or future cancel requests.
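To make the idempotent-cancel semantics concrete, here is a minimal 
sketch (class and method names are hypothetical illustrations, not 
WildFly code) of a server tracking in-flight invocations by a 
client-supplied cancellation ID.  The map entry lives exactly as long 
as the request itself, so cleanup is deterministic and no 
timeout-managed state is needed:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Future;

// Hypothetical sketch: in-flight invocations keyed by a client-supplied
// cancellation ID; no expiry mechanism, no tunable knobs.
public class CancelRegistry {
    private final ConcurrentMap<String, Future<?>> inFlight =
        new ConcurrentHashMap<>();

    // Called when the invocation's HTTP request begins.
    public void register(String cancelId, Future<?> task) {
        inFlight.put(cancelId, task);
    }

    // Called when the invocation's HTTP request ends (success, failure,
    // or cancellation): the entry is removed deterministically.
    public void complete(String cancelId) {
        inFlight.remove(cancelId);
    }

    // Idempotent cancel: an unrecognized, repeated, or already-finished
    // ID simply falls through the null/false path and is ignored.
    public boolean cancel(String cancelId) {
        Future<?> task = inFlight.get(cancelId);
        return task != null && task.cancel(true);
    }
}
```

A cancel POST for an unknown ID hits the null check and is ignored, so 
the client can retry cancellation freely without corrupting the state 
of the invocation or of other cancel requests.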

>>   Also the client doesn't
>> really have any notion as to when it can GET the response: it would have
>> to do it more or less immediately to avoid a late response (or when
>> Future.get() is called), meaning you need two round trips in the common
>> case, which is not so good.
> Yeah I think a key aspect of this is the client has to be able to say it wants blocking behavior, even if the server side is mapped asynchronously.

I think this misses the complete picture; see below.

>> I think that the best compromise solution is to treat async invocations
>> identically to regular invocations, and instead let the *client* give a
>> cancellation ID to the server, which it can later POST to the server as
>> I described in my original document.  If the server receives the
>> client's ID (maybe also matching the client's IP address) then the
>> request can be canceled if it is still running.
> I think that's a reasonable way to do it, provided the ID is sufficiently unique to not be repeated. Actually with HTTP 2, we could just hang up the stream for cancellation, as the stream-id is an effective invocation id. However, it’s perhaps best to keep consistent semantics between h1 and h2.
> I think a key question, which I don’t have the answer for, is if we need to support more concurrent long-running invocations than connections(h1)/streams(h2). If the answer is yes then long polling is bad. I am also slightly worried about HTTP intermediaries imposing timeouts for long running operations.

To me this is a configuration concern: the same issues arise for 
REST-ish things.  It's not really long polling per se (which usually 
means holding potentially unbounded connections open to carry 
asynchronous requests or messages from server to client, something we 
do not do); it's just a possibly long request, no longer than any 
typical REST request on a slow resource.  We don't want or need a 
special one-off solution to this general, long-standing issue in the 
context of EJB; any solution should apply to all long requests.

The point of my proposals is that there is absolutely no need to 
conflate the HTTP request lifecycle with EJB asynchronous invocation 
semantics; in fact, doing so is actively harmful.  All EJB request 
types - sync and async - have a request and a reply, and conceptually 
both could be cancellable (even in the sync case the client may invoke 
asynchronously or be cancelled), except for one-way invocations, which 
would immediately return a 202 status anyway.  The only difference is 
whether the client thread waits directly for the response or waits via 
a Future, which the client can determine independently based on 
information it already has locally.  By the time the invocation reaches 
the transport provider, the asynchronicity of the request has lost its 
relevance; the transport provider is merely responsible for conveying 
the request and the response, and possibly the cancel request.
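To sketch how that could look from the client side (a hypothetical 
illustration only - the class, the header name, and the endpoint are 
assumptions, not any actual wire protocol): sync and async invocations 
perform the identical single HTTP exchange, with a client-generated 
cancellation ID sent as a request header, and the async case merely 
wraps the exchange in a Future locally:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

// Hypothetical client sketch: asynchronicity is purely a local concern;
// the transport sees one request/response either way.
public class EjbHttpClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final URI endpoint;

    public EjbHttpClient(URI endpoint) { this.endpoint = endpoint; }

    private HttpRequest buildInvocation(byte[] marshalledArgs, String cancelId) {
        return HttpRequest.newBuilder(endpoint)
            // Client-generated cancellation ID travels with the request;
            // "X-Cancel-Id" is an assumed name for illustration.
            .header("X-Cancel-Id", cancelId)
            .POST(HttpRequest.BodyPublishers.ofByteArray(marshalledArgs))
            .build();
    }

    // Synchronous invocation: the caller's thread waits on the response.
    public byte[] invokeSync(byte[] args) throws Exception {
        String cancelId = UUID.randomUUID().toString();
        return http.send(buildInvocation(args, cancelId),
                         HttpResponse.BodyHandlers.ofByteArray()).body();
    }

    // Asynchronous invocation: identical wire exchange, wrapped in a
    // Future for the caller; nothing changes on the server side.
    public CompletableFuture<byte[]> invokeAsync(byte[] args) {
        String cancelId = UUID.randomUUID().toString();
        return http.sendAsync(buildInvocation(args, cancelId),
                              HttpResponse.BodyHandlers.ofByteArray())
                   .thenApply(HttpResponse::body);
    }
}
```

Cancellation would then be an out-of-band POST of the same cancel ID, 
which the server can honor or ignore without either side holding state 
beyond the original exchange.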

So really this is just about how we handle cancellation, and 
cancellation is not the common case, so we should optimize for requests 
which aren't cancelled: one HTTP request and response per EJB 
invocation, with cancellation handled somehow out-of-band.

