"tom.elrod(a)jboss.com" wrote : Below is list of possible scenarios for making
remote calls through remoting.
|
| Legend:
| thread blocks = --|
| thread returns = -->
|
| 1. synchronous call
|
| caller thread -- remoting client --| -- NETWORK -- pooled processing thread --
handler
|
| Calling thread goes through the remoting client call stack until it makes the network
call, then blocks for the response. The pooled processing thread will call on the handler,
get the response from the handler, and write it back to the network, where it will be
picked up by the blocking caller thread.
|
| 2. asynchronous call - client side
|
| caller thread --> remoting client (worker thread) --| -- NETWORK -- pooled
processing thread -- handler
|
| Calling thread makes the invocation request and returns before the network invocation
is made. A remoting client pooled thread takes the invocation and makes the call over the
network. From here it is the same as case 1, but the response is just thrown away.
|
|
If you are going to throw it away, why write it on the server side in the first place?
Seems wasteful.
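The two quoted scenarios can be sketched roughly as follows, using `BlockingQueue`s to stand in for the network. Every class and method name here is invented for illustration; this is not the Remoting API, just a model of the thread hand-offs described above:

```java
import java.util.concurrent.*;

// Sketch of scenarios 1 and 2. Two queues model the network; the
// "server" is a pooled thread that reads a request, calls the handler,
// and writes the response back.
public class CallSketch {
    final BlockingQueue<String> toServer = new LinkedBlockingQueue<>();
    final BlockingQueue<String> toClient = new LinkedBlockingQueue<>();
    final ExecutorService serverPool = Executors.newCachedThreadPool();
    final ExecutorService clientWorker = Executors.newSingleThreadExecutor();

    // pooled processing thread: read request, call handler, write response
    void startServer() {
        serverPool.submit(() -> {
            String request = toServer.take();
            toClient.put("handled:" + request);   // handler's response
            return null;
        });
    }

    // scenario 1: caller writes to the network and blocks for the response
    String invokeSync(String request) throws InterruptedException {
        toServer.put(request);
        return toClient.take();                   // --| caller blocks here
    }

    // scenario 2: caller hands off to a client worker thread and returns;
    // the worker reads the response off the network and discards it
    void invokeOneway(String request) {
        clientWorker.submit(() -> {
            toServer.put(request);
            toClient.take();                      // response thrown away
            return null;
        });
    }

    public static void main(String[] args) throws Exception {
        CallSketch c = new CallSketch();
        c.startServer();
        System.out.println(c.invokeSync("ping")); // prints handled:ping
        c.startServer();
        c.invokeOneway("fire-and-forget");        // returns immediately
        c.clientWorker.shutdown();
        c.clientWorker.awaitTermination(5, TimeUnit.SECONDS);
        c.serverPool.shutdown();
    }
}
```

In scenario 2 the wasted work is visible in `invokeOneway`: the response is still produced and transmitted, then dropped on the floor by the client worker.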
anonymous wrote :
| 3. asynchronous call - server side
|
| caller thread -- remoting client --| -- NETWORK -- pooled processing thread -->
pooled async processing thread -- handler
|
| Calling thread goes through the remoting client call stack until it makes the network
call, then blocks for the response. The pooled processing thread will hand off the
invocation to a pooled async processing thread and return (thus unblocking the calling
thread on the client). The pooled async processing thread will call on the handler, get
the response, and throw it away.
|
|
Again - why write it in the first place?
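A rough sketch of scenario 3, again with queues standing in for the network and all names invented for illustration. The point is that the caller is unblocked by an acknowledgement sent at hand-off time, while the handler runs later on a second pool and its real response goes nowhere:

```java
import java.util.concurrent.*;

// Sketch of scenario 3: the pooled processing thread acknowledges the
// client as soon as it has handed the invocation to an async pool; the
// handler's actual response is discarded.
public class ServerAsyncSketch {
    final BlockingQueue<String> toServer = new LinkedBlockingQueue<>();
    final BlockingQueue<String> toClient = new LinkedBlockingQueue<>();
    final ExecutorService processingPool = Executors.newCachedThreadPool();
    final ExecutorService asyncPool = Executors.newCachedThreadPool();
    final CountDownLatch handlerDone = new CountDownLatch(1);

    void startServer() {
        processingPool.submit(() -> {
            String request = toServer.take();
            asyncPool.submit(() -> {              // hand off to async pool
                String response = "handled:" + request;
                handlerDone.countDown();          // response is thrown away
            });
            toClient.put("ack");                  // unblocks the caller now
            return null;
        });
    }

    // caller still blocks, but only until the hand-off, not the handler
    String invoke(String request) throws InterruptedException {
        toServer.put(request);
        return toClient.take();
    }

    public static void main(String[] args) throws Exception {
        ServerAsyncSketch s = new ServerAsyncSketch();
        s.startServer();
        System.out.println(s.invoke("ping"));     // prints ack
        s.handlerDone.await();                    // handler ran afterwards
        s.processingPool.shutdown();
        s.asyncPool.shutdown();
    }
}
```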
anonymous wrote :
|
| 4. non-blocking asynchronous call
|
| caller thread -- remoting client --> -- NETWORK -- pooled processing thread --
handler
|
| Calling thread goes through the remoting client call stack until it makes the network
call, where it will only write to the network and not wait (block) for the server
response (see http://jira.jboss.com/jira/browse/JBREM-548). The pooled processing thread
will call on the handler, get the response from the handler, and write it back to the
network.
|
|
Again, writing the response seems pointless. Only causes extra traffic and context
switches.
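Scenario 4 can be sketched the same way (queues as the network, invented names). The wasteful part being objected to is visible at the end: the server's response sits on the wire with nobody reading it.

```java
import java.util.concurrent.*;

// Sketch of scenario 4: the caller writes the request and returns
// without ever performing a blocking read. The server still writes a
// response that no one consumes.
public class NonBlockingCallSketch {
    final BlockingQueue<String> toServer = new LinkedBlockingQueue<>();
    final BlockingQueue<String> toClient = new LinkedBlockingQueue<>();
    final ExecutorService processingPool = Executors.newCachedThreadPool();

    void startServer() {
        processingPool.submit(() -> {
            String request = toServer.take();
            toClient.put("handled:" + request);   // written with no reader
            return null;
        });
    }

    // caller thread: write and return - no blocking read at all
    void invoke(String request) throws InterruptedException {
        toServer.put(request);
    }

    public static void main(String[] args) throws Exception {
        NonBlockingCallSketch n = new NonBlockingCallSketch();
        n.startServer();
        n.invoke("ping");                         // returns immediately
        // the orphaned response can be observed sitting in the queue
        System.out.println(n.toClient.take());    // prints handled:ping
        n.processingPool.shutdown();
    }
}
```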
anonymous wrote :
| Not sure yet what will have to be done for this implementation, as I don't know whether
there will be a problem with the pooled processing thread sending data back to the
network with no one on the other side.
|
So why send it?
anonymous wrote :
| Note: in the above scenarios, there is actually an accept thread on the server that
gets a socket from the server socket, passes it on to a pooled processing thread, and
goes back to listening for the next socket request. I have removed it from the thread
stack diagrams to make them easier to read.
|
| 1 - 3 are already available within remoting today. 4 is scheduled to be implemented.
For 2 - 4, only getting the request to the server is covered. Getting the response back
to the client will require callbacks. It is also important to remember that remoting has
one API that all the transport implementations support. In order to change that API for
new desired behavior, all the transports must be able to support it (how each supports it
is an implementation detail).
|
|
If the API is extended, surely existing transports are unaffected: they can throw
UnsupportedOperationException for any new methods, which won't be called by old user
code anyway.
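The suggested extension pattern might look something like this. The class names are hypothetical, not the Remoting API; the point is only that a throwing default lets existing transports compile unchanged while new transports opt in:

```java
// Sketch of extending a transport API with a new one-way entry point
// without breaking existing transports. All names are invented.
public class TransportExtensionSketch {
    static abstract class AbstractInvoker {
        abstract Object invoke(Object request);

        // new method: old transports inherit this default and simply throw
        void invokeOneway(Object request) {
            throw new UnsupportedOperationException(
                getClass().getName() + " does not support one-way calls");
        }
    }

    // an existing transport: compiles unchanged, declines the new method
    static class LegacyTransport extends AbstractInvoker {
        Object invoke(Object request) { return "sync:" + request; }
    }

    // a new transport that opts in to the one-way behavior
    static class OnewayTransport extends AbstractInvoker {
        Object invoke(Object request) { return "sync:" + request; }
        @Override void invokeOneway(Object request) { /* write and return */ }
    }

    public static void main(String[] args) {
        new OnewayTransport().invokeOneway("ok");   // supported
        try {
            new LegacyTransport().invokeOneway("boom");
        } catch (UnsupportedOperationException expected) {
            System.out.println("legacy transport declined");
        }
    }
}
```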
I guess I should clarify where we are coming from here.
In messaging we need to provide very high throughput, in a different league from your
normal EJB installation. We're talking up to hundreds of thousands of messages per second
(depending on the network), and these are 1K or 2K messages.
When we are benchmarked against our competitors it doesn't really matter how much our
core code is optimised if we aren't handling the network transport efficiently. This
is where we will be killed.
Any extra reads or writes, or threads blocking when they don't need to, will
contribute to that.
A request-response model is great for RPC style usage patterns, but IMHO doesn't
really suit what we need in messaging.
Also, in the future we will probably need to provide wire-format compatibility with other
protocols, so our requirements are very specific:
For our socket transport we need a single TCP connection that can be read from and written
to in a non blocking fashion from both ends.
On the server side we want to do something like the following:
1 Data is read (non-blocking) from the channel by the acceptor thread and the work is
handed off to a worker.
2 The work may be passed between one or more worker threads, each of which is specialised
for a particular type of work. Each worker thread basically runs in its own loop.
This is basically a SEDA-style model and gives us great throughput and scalability w.r.t.
the number of concurrent "requests", since there is no thread per request and there
are far fewer context switches. We already have the SEDA machinery in place in JBM; the
last piece of the puzzle that is missing is support from remoting for the non-blocking
functionality.
3 For some incoming data it may be necessary to write some outgoing data back to the
socket. Note this is done on a completely different thread from the acceptor thread, and
the acceptor thread may have accepted much more incoming data in the meantime.
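The three steps above can be sketched as queue-connected stages. This is only a toy model of the SEDA idea, not JBM or Remoting code; the stage names and queue wiring are invented:

```java
import java.util.concurrent.*;

// SEDA-style sketch: one acceptor thread feeds a queue, a specialised
// worker stage consumes it in its own loop, and outgoing data is
// produced on a different thread from the acceptor.
public class SedaSketch {
    final BlockingQueue<String> readStage = new LinkedBlockingQueue<>();
    final BlockingQueue<String> writeStage = new LinkedBlockingQueue<>();
    final ExecutorService stages = Executors.newFixedThreadPool(2);

    void start() {
        // step 2: a specialised worker running its own loop, passing
        // completed work on to the next stage's queue
        stages.submit(() -> {
            try {
                while (true) {
                    String work = readStage.take();
                    writeStage.put("processed:" + work);
                }
            } catch (InterruptedException e) {
                // shutdownNow interrupts the stage loop
            }
        });
        // step 3 would be a second stage draining writeStage to the
        // socket - on a completely different thread from the acceptor
    }

    // step 1: the acceptor just enqueues and returns to reading at once;
    // no thread per request, no blocking on the handler
    void accept(String data) {
        readStage.add(data);
    }

    public static void main(String[] args) throws Exception {
        SedaSketch seda = new SedaSketch();
        seda.start();
        seda.accept("msg1");
        seda.accept("msg2");
        System.out.println(seda.writeStage.take()); // prints processed:msg1
        System.out.println(seda.writeStage.take()); // prints processed:msg2
        seda.stages.shutdownNow();
    }
}
```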
It's also crucial to us that we only use a single TCP connection and concurrent
requests are multiplexed over it - i.e. we don't want a socket pool, as is currently
the case with the socket transport.
Actually, if the channel abstraction is bidirectional then multiplexing becomes
straightforward too. You simply need to wrap the data requests and responses in a packet
with a header identifying the "logical" connection and correlate them on
receipt.
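The framing idea can be sketched as follows. The frame format and names are invented purely for illustration; a real transport would frame bytes on the wire, but the routing logic is the same:

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch of multiplexing: each frame carries a logical-connection id
// in its header, so many logical connections share one physical
// channel and are correlated (demultiplexed) on receipt.
public class MultiplexSketch {
    static class Frame {
        final int logicalConnection;
        final String payload;
        Frame(int c, String p) { logicalConnection = c; payload = p; }
    }

    // one shared "TCP connection"
    final BlockingQueue<Frame> channel = new LinkedBlockingQueue<>();
    // demultiplexer: one inbound queue per logical connection
    final Map<Integer, BlockingQueue<String>> inbound = new ConcurrentHashMap<>();

    // wrap the payload in a frame whose header names the logical connection
    void send(int logicalConnection, String payload) {
        channel.add(new Frame(logicalConnection, payload));
    }

    // reader side: pull one frame off the shared channel, route by header
    void demultiplexOne() throws InterruptedException {
        Frame f = channel.take();
        inbound.computeIfAbsent(f.logicalConnection,
                k -> new LinkedBlockingQueue<>()).add(f.payload);
    }

    String receive(int logicalConnection) throws InterruptedException {
        return inbound.computeIfAbsent(logicalConnection,
                k -> new LinkedBlockingQueue<>()).take();
    }

    public static void main(String[] args) throws Exception {
        MultiplexSketch mux = new MultiplexSketch();
        mux.send(1, "hello");
        mux.send(2, "world");
        mux.demultiplexOne();
        mux.demultiplexOne();
        System.out.println(mux.receive(2));  // prints world
        System.out.println(mux.receive(1));  // prints hello
    }
}
```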
I don't think it would be hard to write such a transport (in fact some tools such as
Apache MINA make it almost trivial - although I am not sure we should be using that
library) and we in JBM would love to do so and contribute it back to remoting.
The problem I have right now is that remoting's conceptual model of invocations
seems so far removed from the model we require that I don't know how we could shoe-horn
it in to fit.
There is some analogy to the servlet API here. The servlet API was designed a long time
ago, when everyone was using the blocking IO API (there was no Java NBIO of course) to
write server applications that had the classic thread-per-request, blocking-on-accept,
thread pool model.
As we all know, since the servlet API basically assumes a request/response model, it is
very hard (impossible) to reconcile with a decoupled request/response approach. So
basically any servlet application is doomed to not really benefit from non-blocking IO.
Remoting also assumes an invocation based model, so IMHO suffers from the same problems.
My personal opinion is that you should build on the great work you have done with
remoting to date by extending the API to support the newer approach; otherwise it's
going to be hard for high-performance applications like ours to use it. And this will be
even more so going forward, as more people throw out their blocking IO.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3971508#...