On 2/7/13 11:09 AM, Dan Berindei wrote:
>
>
> A few changes I have in mind (need to think about it more):
>
> - I want to leave the existing RequestHandler interface in place, so
>   current implementations continue to work
> - There will be a new AsyncRequestHandler interface (possibly extending
>   RequestHandler, so an implementation can decide to implement both). The
>   RequestCorrelator needs to have either request_handler or
>   async_request_handler set. If the former is set, the logic is unchanged.
>   If the latter is set, I'll invoke the async dispatching code
>
> - AsyncRequestHandler will look similar to the following:
>   void handle(Message request, Handback hb, boolean requires_response)
>   throws Throwable;
> - Handback is an interface, and its impl contains header information
>   (e.g. request ID)
> - Handback has a sendReply(Object reply, boolean is_exception) method
>   which sends a response (or exception) back to the caller
>
>
> +1 for a new interface. TBH I hadn't read the RequestCorrelator code,
> so I had assumed it was already asynchronous, and only RpcDispatcher
> was synchronous.
Nope, unfortunately not.

>
> I'm not so sure about the Handback name, how about calling it Response
> instead?

It *is* actually called Response (can you read my mind?) :-)

> - When requires_response is false, the AsyncRequestHandler doesn't
> need to invoke sendReply()
>
>
> I think this should be the other way around: when requires_response is
> true, the AsyncRequestHandler *can* invoke sendReply(), but is not
> required to (the call will just time out on the caller node); when
> requires_response is false, invoking sendReply() should throw an
> exception.

The way I actually implemented it this morning is to omit the boolean
parameter altogether:
void handle(Message request, Response response) throws Exception;
Response is null for async requests.
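To make this concrete, here's roughly what the interface and a trivial
implementation could look like (just a sketch, nothing is final; MyHandler,
thread_pool and process() below are placeholders):

// Message is org.jgroups.Message
interface Response {
    void sendReply(Object reply, boolean is_exception);
}

interface AsyncRequestHandler {
    // response is null when the caller doesn't expect a reply
    void handle(Message request, Response response) throws Exception;
}

class MyHandler implements AsyncRequestHandler {
    final java.util.concurrent.ExecutorService thread_pool=
        java.util.concurrent.Executors.newCachedThreadPool();

    public void handle(final Message request, final Response response) throws Exception {
        // handle() returns immediately; the reply is sent later, from a different thread
        thread_pool.execute(new Runnable() {
            public void run() {
                Object retval=process(request); // placeholder for the application logic
                if(response != null)            // null means the caller doesn't want a reply
                    response.sendReply(retval, false);
            }
        });
    }

    Object process(Message msg) {return "ok";} // placeholder
}
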
>
>
> - Message batching
> - The above interfaces need to take message batching into account, e.g.
>   the ability to handle multiple requests concurrently (if they don't
>   need to be executed sequentially)
>
>
> You mean handle() is still going to be called once for each request,
> but a second handle() call won't necessarily wait for the first
> message's sendReply() call?

Yes. I was thinking of adding a second method to the interface, which
has a message batch as parameter. However, we'd also have to pass in an
array of Response objects and it looked a bit clumsy.
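Just to illustrate (purely hypothetical, with MessageBatch standing in for
whatever the batching work ends up producing):

// hypothetical batch variant; responses[i] would belong to the i-th message
void handle(MessageBatch batch, Response[] responses) throws Exception;
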
> Is this going to apply only to OOB messages, or to regular messages as
> well? I think I'd prefer it if it only applied to OOB messages,
> otherwise we'd have to implement our own ordering for regular/async
> commands.

No, I think it'll apply to all messages. A simple implementation could
dispatch OOB messages to the thread pool, as they don't need to be
ordered. Regular messages could be added to a queue where they are
processed sequentially by a *single* thread. Pedro does implement
ordering based on transactions (see his prev email), and I think there
are some other good use cases for regular messages. One thing that
could be done for regular messages is to implement something like
SCOPE (remember?) for async RPCs: updates to different web sessions
could be processed concurrently; only updates to the *same* session
would have to be ordered.
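Roughly along these lines (only a sketch of the idea, not the actual
RequestCorrelator code; Dispatcher, oob_pool and regular_executor are
made-up names, and the interfaces are the ones sketched above):

class Dispatcher {
    final AsyncRequestHandler async_request_handler; // the handler set on the correlator
    final java.util.concurrent.Executor oob_pool=
        java.util.concurrent.Executors.newCachedThreadPool();     // unordered
    final java.util.concurrent.Executor regular_executor=
        java.util.concurrent.Executors.newSingleThreadExecutor(); // single thread keeps the order

    Dispatcher(AsyncRequestHandler handler) {this.async_request_handler=handler;}

    void dispatch(final Message msg, final Response rsp) {
        Runnable task=new Runnable() {
            public void run() {
                try {
                    async_request_handler.handle(msg, rsp);
                }
                catch(Exception ex) {
                    if(rsp != null)
                        rsp.sendReply(ex, true); // ship the exception back to the caller
                }
            }
        };
        if(msg.isFlagSet(Message.Flag.OOB))
            oob_pool.execute(task);         // OOB messages: no ordering needed, use the pool
        else
            regular_executor.execute(task); // regular messages: processed sequentially
    }
}
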
This API is not set in stone; we can always change it. Once I'm done with
this and have batching II implemented, plus some other JIRAs, I'll ping
you guys and we should have a meeting to discuss:
- Async invocation API
- Message batching (also in conjunction with the above)
- Message bundling and OOB / DONT_BUNDLE; bundling of OOB messages
--
Bela Ban, JGroups lead (http://www.jgroups.org)
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev