[ https://jira.jboss.org/jira/browse/JBAS-4586?page=com.atlassian.jira.plug... ]
Ron Sigal commented on JBAS-4586:
---------------------------------
Hey Troy, are you still out there?
I'm sorry this issue's been flying under the radar for so long.
I think I know what was happening. The standard EJB3 configuration uses the Remoting
"socket" transport, which has a default limit of 300 worker threads on the
server side. Once a worker thread is connected to a client, it will keep listening for
further invocations from that client until either (1) the socket times out, or (2) the
client disconnects. So it's likely that most of the delay your users experienced
was spent waiting for a worker thread to free up.
The number of worker threads can be increased by changing the EJB3 configuration:
   <mbean code="org.jboss.remoting.transport.Connector"
          name="jboss.remoting:type=Connector,name=DefaultEjb3Connector,handler=ejb3">
      <depends>jboss.aop:service=AspectDeployer</depends>
      <attribute name="InvokerLocator">socket://${jboss.bind.address}:3873/?maxPoolSize=600</attribute>
      <attribute name="Configuration">
         <handlers>
            <handler subsystem="AOP">org.jboss.aspects.remoting.AOPRemotingInvocationHandler</handler>
         </handlers>
      </attribute>
   </mbean>
Here I've added the "maxPoolSize" parameter to the
"InvokerLocator" attribute. An alternative would be to reduce the timeout
value, so worker threads return to the thread pool sooner:
   <mbean code="org.jboss.remoting.transport.Connector"
          name="jboss.remoting:type=Connector,name=DefaultEjb3Connector,handler=ejb3">
      <depends>jboss.aop:service=AspectDeployer</depends>
      <attribute name="InvokerLocator">socket://${jboss.bind.address}:3873/?timeout=1000</attribute>
      <attribute name="Configuration">
         <handlers>
            <handler subsystem="AOP">org.jboss.aspects.remoting.AOPRemotingInvocationHandler</handler>
         </handlers>
      </attribute>
   </mbean>
Here I've set the "timeout" parameter to 1000 milliseconds.
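The two parameters can also be combined in a single InvokerLocator. Here's a sketch reusing
the illustrative values from above (they're examples, not recommendations); note that multiple
locator parameters are separated by "&", which has to be escaped as "&amp;" inside the XML
attribute:
   <mbean code="org.jboss.remoting.transport.Connector"
          name="jboss.remoting:type=Connector,name=DefaultEjb3Connector,handler=ejb3">
      <depends>jboss.aop:service=AspectDeployer</depends>
      <!-- maxPoolSize and timeout combined in one locator URI -->
      <attribute name="InvokerLocator">socket://${jboss.bind.address}:3873/?maxPoolSize=600&amp;timeout=1000</attribute>
      <attribute name="Configuration">
         <handlers>
            <handler subsystem="AOP">org.jboss.aspects.remoting.AOPRemotingInvocationHandler</handler>
         </handlers>
      </attribute>
   </mbean>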
If you're still interested in pursuing this issue, let me know. Otherwise, I'll
assume it's a configuration problem and close the issue.
unified invoker causes delays under load
----------------------------------------
Key: JBAS-4586
URL: https://jira.jboss.org/jira/browse/JBAS-4586
Project: JBoss Application Server
Issue Type: Sub-task
Security Level: Public (Everyone can see)
Components: Remoting
Affects Versions: JBossAS-4.2.0.GA, JBossAS-4.2.1.GA
Environment: Linux kernel 2.6.21, Java 1.5.0_12, JBoss 4.2.0.GA
Reporter: Troy Bowman
Assignee: Ron Sigal
We tested JBoss 4.2.0.GA and found it worthy for production. When we actually put it in
production and more than about 300 people were using the application simultaneously,
people complained that the application was extremely slow. Some commands, which issued
several RMI invocations, would take several minutes. There was not a heavy load on the
servers, however. It seemed like every request was just waiting for something to time out,
or that something was causing a deadlock in a synchronized block.
I analyzed the problem by both watching the JBoss server.log and using tcpdump to watch
network traffic. I noticed big pauses where neither the server, the client, nor the network
was doing anything at all. I ran tcpdump with a zero snaplen (full packet capture) and read
through each packet trying to find out exactly what it was doing. I followed the process from
ports 1099 (naming) to 1098 (RMI) to 4446 (unified invoker). Right when the client would
invoke the stub it had gotten from RMI, it'd sit there for 12 to 14 seconds. I looked at this
from other commands and indeed, it was waiting for the invoker.
I started googling for hangs with the JBoss invoker, stumbled across some bugs, and
found these:
http://jira.jboss.org/jira/browse/JBREM-203?decorator=printable
http://jira.jboss.org/jira/browse/JBREM-167?page=all&decorator=printable
http://jira.jboss.com/jira/browse/JBREM-165;jsessionid=4A6E5196E7A1A78EAF...
They're all old bugs, but at least it is obvious that the unified invoker had
problems in the past with hanging while marshalling/unmarshalling with JBoss's custom
(un)marshallers. My suspicion is that they may have solved the problem with a hack
back then, but that it is probably rearing its ugly head again now that the server is
under a heavy load.
In searching for more information, I found that the readme for jboss-4.2.0.GA said that
"The default invoker for EJBs has been changed from the rmi-invoker to the
unified-invoker, provided by JBoss Remoting". This made me very suspicious of it,
since it was definitely the invoker that was sitting there for around 15 seconds on every
request when the server was under a heavier load. I looked at the change to
standardjboss.xml:
http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbossas/branches/Branch_4_2/se...
I changed the proxy bindings from the unified invoker back to the RMI invoker in
standardjboss.xml, and the problem went away. Invocations now go through port 4444 and
are instantaneous.
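For reference, the change amounts to pointing the invoker-proxy-binding entries in
standardjboss.xml back at the JRMP invoker MBean. A rough sketch, assuming the stock
JBoss 4.2 MBean object names, with each binding's other child elements omitted:
   <!-- Sketch: revert a proxy binding from the unified invoker (port 4446)
        to the RMI/JRMP invoker (port 4444); other child elements unchanged. -->
   <invoker-proxy-binding>
      <name>stateless-rmi-invoker</name>
      <invoker-mbean>jboss:service=invoker,type=jrmp</invoker-mbean>
      <!-- the 4.2.0 default points here instead:
           <invoker-mbean>jboss:service=invoker,type=unified</invoker-mbean> -->
      ...
   </invoker-proxy-binding>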