Ron Sigal commented on JBREM-706:
---------------------------------
A compromise strategy has been implemented. Once a client side oneway invocation has been
sent, a TimerTask is created which will monitor the connection used for the invocation.
An ECHO internal invocation is sent, and, if it times out, the connection is destroyed.
If a response is received, that means the ServerThread is done with the oneway
invocation, and the connection is returned to the pool.
An earlier attempt to test for a response to the ECHO invocation multiple times failed
because, once the ObjectInputStream detects a timeout, it closes the stream and returns -1
on all subsequent calls to read(). Rather than implement a hack to get around the
problem, a simple strategy was adopted of just trying once and then giving up.
The following parameters may be configured:
MicroSocketClientInvoker.ONEWAY_CONNECTION_DELAY (actual value
"onewayConnectionDelay"): the delay before the TimerTask is executed. Default
value 5000 ms.
MicroSocketClientInvoker.ONEWAY_CONNECTION_TIMEOUT (actual value
"onewayConnectionTimeout"): the timeout with which the socket is configured
before sending the ECHO invocation. Default value 1000 ms.
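A minimal sketch of this strategy (the SocketWrapper and ConnectionPool shapes below are
assumptions for illustration, not the actual MicroSocketClientInvoker internals):

import java.net.Socket;
import java.util.Timer;
import java.util.TimerTask;

public class OnewayConnectionMonitor {

    // Stand-ins for the real connection and pool types.
    interface SocketWrapper {
        Socket getSocket();
        void writeEcho() throws Exception;        // sends the internal ECHO invocation
        void readEchoResponse() throws Exception; // blocks until response or timeout
        void destroy();
    }

    interface ConnectionPool {
        void returnConnection(SocketWrapper connection);
    }

    private static final Timer timer = new Timer(true);

    private final int onewayConnectionDelay = 5000;   // "onewayConnectionDelay" default
    private final int onewayConnectionTimeout = 1000; // "onewayConnectionTimeout" default

    public void monitor(final SocketWrapper connection, final ConnectionPool pool) {
        timer.schedule(new TimerTask() {
            public void run() {
                try {
                    // Give the socket a short read timeout, then probe with an ECHO.
                    connection.getSocket().setSoTimeout(onewayConnectionTimeout);
                    connection.writeEcho();
                    connection.readEchoResponse();
                    // A response means the ServerThread is done with the oneway
                    // invocation, so the connection is safe to return to the pool.
                    pool.returnConnection(connection);
                } catch (Exception e) {
                    // Once a timeout occurs the ObjectInputStream closes and returns -1
                    // on all subsequent reads, so we try once and destroy the connection.
                    connection.destroy();
                }
            }
        }, onewayConnectionDelay);
    }
}

The two parameters would presumably be supplied through the client's configuration map,
e.g.:

Map config = new HashMap();
config.put(MicroSocketClientInvoker.ONEWAY_CONNECTION_DELAY, "2000");
config.put(MicroSocketClientInvoker.ONEWAY_CONNECTION_TIMEOUT, "500");
Client client = new Client(new InvokerLocator(locatorURI), config);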
Unit test:
org.jboss.test.remoting.transport.socket.oneway.OnewayConnectionManagerTestCase.
Waiting for cruisecontrol results.
In socket transport, prevent client side oneway invocations from artificially reducing concurrency
--------------------------------------------------------------------------------------------------
Key: JBREM-706
URL: http://jira.jboss.com/jira/browse/JBREM-706
Project: JBoss Remoting
Issue Type: Task
Security Level: Public (Everyone can see)
Affects Versions: 2.4.0.Beta1 (Pinto)
Reporter: Ron Sigal
Assigned To: Ron Sigal
Fix For: 2.4.0.Beta1 (Pinto)
There is a subtle problem with the handling of client side oneway invocations in the
socket transports (and descendants). After the invocation has been marshalled to the
wire, the connection is returned to the connection pool, from which it can be reused, even
if the server side handler is still processing the previous invocation. Consequently,
invocations can back up and run serially instead of running concurrently.
The following dialogue exposes the problem and possible solutions:
===============================================================================================================
Ron Sigal wrote:
Hi Tom,
I just noticed an interesting phenomenon that I wanted to tell you about. Maybe
it'll be obvious to you, but it surprised me.
I'm writing unit tests for socket and http, since they behave differently on oneway
invocations. I've got a oneway ThreadPool with two threads and I make 4 oneway
invocations, one right after the other. The handler waits for 5 seconds before returning.
I expected that they would all run on the server simultaneously, but they don't.
What happens is something like this: (1) invocation 1 gets made on thread 1 and socket
connection 1 to ServerThread 1 is created; (2) invocation 2 gets made on thread 2 and
socket connection 2 to ServerThread 2 gets created; (3) invocation 3 gets made on thread 1
and socket connection 1 gets reused. But ServerThread 1 is still sleeping, so invocation
3 waits for 5 seconds.
Not a big deal, but I might not have guessed it if I hadn't seen it.
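For concreteness, a minimal client along the lines of what's described (the locator URI
is an assumption, and the server side handler is presumed to sleep 5 seconds in its
invoke() method):

import org.jboss.remoting.Client;
import org.jboss.remoting.InvokerLocator;

public class OnewayBackupDemo {
    public static void main(String[] args) throws Throwable {
        Client client = new Client(new InvokerLocator("socket://localhost:8888"));
        client.connect();
        for (int i = 1; i <= 4; i++) {
            // true => dispatch on a client side oneway threadpool thread
            // rather than on the calling thread
            client.invokeOneway("invocation " + i, null, true);
        }
        // Because a pooled connection can be reused while its ServerThread is still
        // sleeping in the handler, some invocations wait behind an earlier one
        // instead of running on a fresh ServerThread.
        Thread.sleep(25000); // let the oneway invocations finish before disconnecting
        client.disconnect();
    }
}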
===============================================================================================================
Ron Sigal wrote:
Hi Tom,
Well, it's stranger than I thought. It turns out that for client side oneway
invocations, socket, which returns immediately after marshaling the invocation, can be
slower than http, which waits for a response.
Consider a client side oneway threadpool with 2 threads and a queue of size 1, and
suppose there are 4 invocations in quick succession. The handler takes 5 seconds per
invocation.
http:
1. 1st invocation runs in the first threadpool thread.
2. 2nd invocation runs in the second threadpool thread.
3. 3rd invocation goes on the queue.
4. 4th invocation runs on the main thread.
Assuming there are at least 3 server threads, invocations 1, 2, and 4 run in the first 5
seconds, and then invocation 3 runs. Total time: 10 seconds.
socket (one possible scenario):
1. 1st invocation runs in the first threadpool thread and returns the socket connection
to the pool
2. 2nd invocation runs in a threadpool thread, finds the pooled connection, marshals the
invocation, and returns the connection to the pool. On the server side, the ServerThread
runs the invocation after finishing the 1st invocation.
3. 3rd invocation runs in a threadpool thread, finds the pooled connection, marshals the
invocation, and returns the connection to the pool. On the server side, the ServerThread
runs the invocation after finishing the 2nd invocation.
4. 4th invocation runs in a threadpool thread, finds the pooled connection, marshals the
invocation, and returns the connection to the pool. On the server side, the ServerThread
runs the invocation after finishing the 3rd invocation.
Total time: 20 seconds.
I've seen this happen. The scary thing is that the story scales. With more threads
on the client side and server side, http can run more and more invocations in parallel,
whereas socket could end up running arbitrarily many invocations serially.
I think that having MicroSocketClientInvoker return after marshaling a client side oneway
invocation isn't a good idea. Ovidiu's JBREM-685 ("A server needs redundant information
to detect a one way invocation", http://jira.jboss.com/jira/browse/JBREM-685) gives me
a half-baked idea. Maybe these two ways of tagging a oneway invocation could be used to
distinguish client side from server side oneway invocations. Or something like that.
By the way, I think that there's no problem with server side oneway invocations,
since a single ServerInvoker can keep shoving invocations into the threadpool.
-Ron
===============================================================================================================
Tom Elrod wrote:
Wow. Never thought about this happening, but it makes sense. So the real problem with the
socket client invoker is that it pools the connections when doing oneway invocations. So it
is queuing up invocations on the same connection, which can only be processed serially on
the server side. Think maybe your idea about using the ONEWAY_FLAG on the client can work.
Basically am thinking we'd have to check that before returning the connection to the pool,
and if it is set, then throw away the connection (i.e. don't put it back in the pool). This
way, the next oneway invocation would not be using the same connection, and on the server
side, another available server thread should do the processing (so could be in parallel).
However, this means we incur more overhead on the client side to create a new connection
for each oneway invocation, but it is maybe worth the cost (too bad we can't tell if the
server will take a long time or not).
Let me know what you think.
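A rough sketch of that check (the names below are placeholders, not the real
MicroSocketClientInvoker fields):

import java.util.LinkedList;
import java.util.List;

class OnewayAwarePooling {

    interface SocketWrapper { void close() throws Exception; }

    private final List pool = new LinkedList();

    void releaseConnection(SocketWrapper connection, boolean onewayFlagSet) {
        if (onewayFlagSet) {
            // The ServerThread may still be processing the oneway invocation,
            // so the connection is not safe to reuse: discard it.
            try { connection.close(); } catch (Exception ignored) {}
        } else {
            // A two-way invocation has already read its response, so the
            // connection is idle and can go back in the pool.
            pool.add(connection);
        }
    }
}

The trade-off Tom describes follows directly: the oneway branch buys back concurrency at
the cost of a fresh connection per oneway invocation.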
===============================================================================================================
Ron Sigal wrote:
It seems to me that MicroSocketClientInvoker could:
...
2. in the case of client side oneway invocations, either throw away the connection and
return immediately, or just not return until it gets a response from the server. Since
it's on a separate thread, maybe the latter is preferable, since then it can return
the connection to the pool after it gets a response.
===============================================================================================================
Tom Elrod wrote:
Yeah, this is a tricky one. On the one hand, if we don't throw away the connection and
wait for the server response AND the server takes a long time, it would be possible to
exhaust the pool, since all threads will be stuck waiting for the server to respond. With
the new change to the thread pool to use the RUN mode, that would mean calling threads
would then be making the remote call, which would pretty much negate the purpose of oneway
client side (since the client caller thread would be making the network call and blocking
until it gets a response).
However, if we do throw away the connection, then in general we have defeated the purpose
of connection pooling, since we will have to create a new connection each time a oneway
client side invocation is made. So if the server processes the requests quickly, we would
be taking on the extra overhead of creating a new client connection for every call.
Hmmm. Could go either way if you feel strongly one way or another. However, I think I am
leaning towards throwing away the connection.
===============================================================================================================
Ron Sigal wrote:
This might be complicating things, but for client side oneway invocations we could set a
short per-invocation timeout, say 1 second or 5 seconds. If the response comes back, put
the connection back in the pool. If it times out, throw the connection away. Don't know
how to pick the timeout, though.
===============================================================================================================
Tom Elrod wrote:
I hate getting too complicated, as it tends to increase the chance of introducing bugs.
However, if we wanted to do this, I would say we need to make the timeout very low, i.e.
1 second.