JBoss development,
A new message was posted in the thread "Parallel invocations of JaxWS services and
getPort":
http://community.jboss.org/message/526010#526010
Author : Andrew Dinn
Profile :
http://community.jboss.org/people/adinn
Message:
--------------------------------------------------------------
The JBossTS/XTS multi-threaded test is failing with the trunk XTS and trunk AS. I
investigated the problem and found that it relates to the use of client proxies in
parallel. I think there is a big issue here which needs clarifying and probably requires a
change in the Service/proxy implementation. Here's the situation and the symptoms:
The test runs within a web app in the same container as the XTS service. The test thread
creates 10 child threads each of which executes a TX start and TX commit for a Web
Services Atomic Transaction (WS-AT) then exits. The parent joins each child thread. The
problem is that some of the commit messages never get delivered so the client thread waits
forever for a committed response and the test hangs.
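For reference, the threading pattern of the test looks roughly like the following sketch. The class and method names are illustrative placeholders, not the actual XTS test code, and the transaction work is stubbed out:

```java
// Sketch of the test's structure: the parent starts 10 children, each of
// which begins and commits a WS-AT transaction, then joins them all.
// beginAndCommitTransaction is a stub for the real JaxWS-based TX work.
public class ParallelCommitTest {
    static final java.util.concurrent.atomic.AtomicInteger committed =
            new java.util.concurrent.atomic.AtomicInteger();

    static void beginAndCommitTransaction() {
        // placeholder for: TX start + TX commit via the JaxWS proxies
        committed.incrementAndGet();
    }

    public static int runClients(int count) {
        committed.set(0);
        Thread[] children = new Thread[count];
        for (int i = 0; i < count; i++) {
            children[i] = new Thread(ParallelCommitTest::beginAndCommitTransaction);
            children[i].start();
        }
        for (Thread child : children) {
            try {
                // the parent joins each child; the hang shows up here when a
                // committed response is never delivered to one of the children
                child.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return committed.get();
    }

    public static void main(String[] args) {
        System.out.println(runClients(10) + " clients committed");
    }
}
```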
The start and commit operations are implemented using JaxWS to invoke services provided by
the XTS implementation. I will not go into the details of the actual service configuration
as they are not particularly important. The important detail is that the threads all
employ a specific JaxWS service -- ATTermination -- to commit the TX, invoking a one way
operation. The service responds by routing a reply back to the client via another JaxWS
service -- ATTerminationInitiator -- again using a one way operation.
My code employs one instance of the ATTermination Service class. Each thread obtains this
Service instance and calls getPort() to obtain a client proxy. The thread configures a
handler via the BindingProvider API. It also obtains the message properties from the proxy
and installs addressing property data using my MAP abstraction API. It then casts the
proxy to the service interface and invokes the remote operation. The MAP data includes a
replyto endpoint for the ATTerminationInitiator configured with a reference parameter which
identifies the thread/client making the request. The ATTermination service retrieves this
data from the incoming request and uses it to address and tag the one way message to the
ATTerminationInitiator service. The latter can use the tag to dispatch the result to the
relevant thread.
The problem is that on some occasions the messages received at the ATTermination Service
end have the wrong tag. I traced the calls and found examples where, say, two threads
would supply tags "<blah blah>:2fa" and "<blah blah>:2fe"
but the service would receive two incoming requests with the same tag "<blah
blah>:2fe". It appears that the proxy returned to each thread is either the same
object or, at least, shares the same message context with the result that one thread
updates the MAP data on the request message context while another thread is in the middle
of invoking the remote operation.
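The failure mode can be illustrated in plain Java, independent of JaxWS. Assume (hypothetically) that the proxies handed to two threads share one request-context map; the interleaving below, written out sequentially for determinism, shows both invocations going out with the second thread's tag:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: a single request-context map standing in for the
// message context that the proxies returned by getPort() appear to share.
// The key name "refparam.tag" is invented for this sketch.
public class SharedContextRace {
    static final Map<String, Object> sharedRequestContext = new HashMap<>();

    // stands in for the one-way invocation: the tag actually sent is
    // whatever is in the context at the moment the message goes out
    static String sendCommit() {
        return (String) sharedRequestContext.get("refparam.tag");
    }

    public static String[] interleave() {
        // thread 1 installs its reference parameter ...
        sharedRequestContext.put("refparam.tag", "<blah blah>:2fa");
        // ... but before thread 1 invokes, thread 2 installs its own tag
        // into the SAME context, overwriting thread 1's entry
        sharedRequestContext.put("refparam.tag", "<blah blah>:2fe");
        // both commit messages now go out carrying thread 2's tag
        String sentByThread1 = sendCommit();
        String sentByThread2 = sendCommit();
        return new String[] { sentByThread1, sentByThread2 };
    }

    public static void main(String[] args) {
        String[] tags = interleave();
        System.out.println("thread 1 sent tag " + tags[0]);
        System.out.println("thread 2 sent tag " + tags[1]);
    }
}
```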
I checked the JaxWS spec and it does not clarify whether the port returned from the
getPort call is thread safe or not. My assumption has been that each getPort call is
supposed to return a new port object which can be configured and invoked by a client
thread independent of any parallel configuration and invocation of a port returned by a
different call to getPort. Whatever the status of the spec, it does not look like this is
happening in our native implementation.
One other way this behaviour might be specified would be to require that each client
thread employ its own instance of the Service. This would also require the implementation
to ensure that proxies obtained from different services could be used in a thread-safe
manner. That seems a bit perverse to me since the threads are actually using the same
Service -- they merely want to employ independent channels for communicating with it --
the port seems to me to be the correct level at which to achieve this.
Also, creating the Service instance requires checking the WSDL, initialising all the
endpoint and operation info etc., so this appears to be an expensive operation which I
don't want to do for every JaxWS request. Yet the WS-AT protocol requires the use of
6 different services in a given TX but rarely involves more than one, or in some cases
two, JaxWS invocations per service. I could maybe mitigate some of the creation costs using
a cache to store Service instances per-thread but that would still multiply the Service
instances unnecessarily. It would also require use of a WeakHashMap to ensure that the
cache was garbage collected. This has its own awful performance implications for garbage
collection so I don't want to have to go down that route. If this is the expected model
then it's a pretty unattractive proposition. Anyway, it seems to me to be
inappropriate since I don't need lots of copies of the service; I need lots of copies
of a port (i.e. proxy) associated with the same service.
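To make the cost concrete, here is a minimal sketch of the per-thread cache workaround being argued against (using ThreadLocal, which avoids the explicit WeakHashMap). ExpensiveService is a stub standing in for a real JaxWS Service whose construction parses WSDL and builds endpoint/operation metadata; the point is that the construction work multiplies with the thread count:

```java
// Sketch of the one-Service-per-thread model. Each thread's first use
// triggers a full (expensive) Service construction, so N client threads
// pay for N constructions of what is logically one service.
public class PerThreadServiceCache {
    static class ExpensiveService {
        static final java.util.concurrent.atomic.AtomicInteger built =
                new java.util.concurrent.atomic.AtomicInteger();
        ExpensiveService() {
            built.incrementAndGet();  // stands in for WSDL parsing + metadata setup
        }
    }

    // one Service per thread; entries die with their threads
    static final ThreadLocal<ExpensiveService> cache =
            ThreadLocal.withInitial(ExpensiveService::new);

    public static int servicesBuiltFor(int threadCount) {
        ExpensiveService.built.set(0);
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            threads[i] = new Thread(() -> cache.get());  // first use builds a Service
            threads[i].start();
        }
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return ExpensiveService.built.get();
    }

    public static void main(String[] args) {
        // 6 threads => 6 full Service constructions for the same logical service
        System.out.println(servicesBuiltFor(6) + " services built");
    }
}
```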
This thread-safe/unsafe behaviour is a critical issue which ought to be clearly documented
in our code at least (it ought to be in the spec in bold font but the spec is pretty
half-assed about many things so no surprise I can't find it). Note that I cannot
resolve this problem simply by introducing synchronization around the invocation of the
proxy method. If I have to resort to inserting my own synchronization then I would have to
synchronize from the point at which I obtain the port, maintain a lock while configuring
the bindings and addressing properties and retain it throughout the duration of the call.
In the case of an RPC message, I would also need to keep the proxy locked while I grabbed the
reply message context and retrieved and processed any data attached to it. Bye bye
throughput.
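A sketch of how far that manual lock would have to extend, again with the shared context reduced to a plain map and invented names. The lock is held from the point the addressing properties are configured until the message has gone out, which is correct but serialises every client thread:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the heavyweight workaround: one lock spanning the whole
// configure-then-invoke sequence. It preserves each thread's tag, but
// only by making the client threads take turns.
public class LockedProxyUse {
    static final Object proxyLock = new Object();
    static final Map<String, Object> sharedRequestContext = new HashMap<>();

    // stands in for the one-way invocation: sends whatever tag is
    // currently installed in the context
    static String sendCommit() {
        return (String) sharedRequestContext.get("refparam.tag");
    }

    static String commitWithTag(String tag) {
        synchronized (proxyLock) {
            // configure the addressing/reference-parameter data ...
            sharedRequestContext.put("refparam.tag", tag);
            // ... and invoke while still holding the lock; for an RPC-style
            // call the lock would also have to cover grabbing and processing
            // the reply message context
            return sendCommit();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        String[] sent = new String[2];
        Thread t1 = new Thread(() -> sent[0] = commitWithTag("<blah blah>:2fa"));
        Thread t2 = new Thread(() -> sent[1] = commitWithTag("<blah blah>:2fe"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("thread 1 sent " + sent[0] + ", thread 2 sent " + sent[1]);
    }
}
```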
So, any comments regarding what should be specified here and any ideas about what is
actually implemented and whether it can be made thread-safe?
--------------------------------------------------------------
To reply to this message visit the message page:
http://community.jboss.org/message/526010#526010