[JBoss JIRA] Created: (JBESB-270) JMSQueueListener doesn't manage Thread creation for the ActionProcessingPipeline i.e. can be flooded with messages
by Tom Fennelly (JIRA)
JMSQueueListener doesn't manage Thread creation for the ActionProcessingPipeline i.e. can be flooded with messages
------------------------------------------------------------------------------------------------------------------
Key: JBESB-270
URL: http://jira.jboss.com/jira/browse/JBESB-270
Project: JBoss ESB
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: ESB Core
Affects Versions: 4.0 RC1
Reporter: Tom Fennelly
Assigned To: Mark Little
Fix For: 5.0
This listener receives messages from a JMS queue and spawns a new thread for each ActionProcessingPipeline instance created to process a message. If you hammer this listener with messages (e.g. 20 concurrent threads continually pumping messages into the in queue), the process will die because it runs out of threads to process the messages.
The listener should manage its threads. A thread pool would probably take care of this, i.e. the pool can be made to block while waiting for a free thread to be returned to it.
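The suggested thread-pool fix could be sketched as follows. This is a minimal, self-contained illustration (class and method names are hypothetical, not the actual JBossESB listener code): a fixed pool with a bounded queue, where `CallerRunsPolicy` makes the submitting thread run the task itself when the pool is saturated, so message pickup is throttled instead of spawning unbounded threads.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPipelinePool {

    // Simulates draining a flooded queue through a bounded pool;
    // returns how many "messages" were actually processed.
    public static int run() throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5,                           // fixed pool of 5 worker threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(20),   // at most 20 messages queued
                // When pool + queue are full, the caller (the JMS receive
                // thread) runs the task itself, which throttles pickup.
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100; i++) {
            // Stand-in for handing a message to an ActionProcessingPipeline.
            pool.execute(processed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + run());
    }
}
```

With this shape the listener never creates more than a fixed number of pipeline threads, no matter how fast messages arrive.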
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://jira.jboss.com/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (JBESB-401) During Trailblazer DEMO, JMS doesn't reconnect after a restart of the App Server [JMS Provider]
by Bruno Georges (JIRA)
During Trailblazer DEMO, JMS doesn't reconnect after a restart of the App Server [JMS Provider]
-----------------------------------------------------------------------------------------------
Key: JBESB-401
URL: http://jira.jboss.com/jira/browse/JBESB-401
Project: JBoss ESB
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Rosetta
Affects Versions: 4.0
Environment: JBoss ESB 4.0 GA + JBoss AS 4.0.5GA - Win 2000 - Fedora Core 6
Reporter: Bruno Georges
Assigned To: Mark Little
Fix For: 4.0 Maintenance Pack 1
During the Trailblazer demo, JMS doesn't reconnect when the App Server restarts: ERROR [JmsCourier] JMS error. Attempting JMS reconnect.
ant runJMSBank
[java] 10:00:01,759 WARN [Connection] Connection failure, use javax.jms.Connection.setExceptionListener() to handle this error and reconnect
[java] org.jboss.mq.SpyJMSException: No pong received; - nested throwable: (java.io.IOException: ping timeout.)
[java] at org.jboss.mq.Connection$PingTask.run(Connection.java:1277)
[java] at EDU.oswego.cs.dl.util.concurrent.ClockDaemon$RunLoop.run(ClockDaemon.java:364)
[java] at java.lang.Thread.run(Thread.java:595)
[java] Caused by: java.io.IOException: ping timeout.
[java] ... 3 more
ant runESB
[java] 09:58:52,056 ERROR [JmsCourier] JMS error. Attempting JMS reconnect.
[java] javax.jms.IllegalStateException: The consumer is closed
[java] at org.jboss.mq.SpyMessageConsumer.checkClosed(SpyMessageConsumer.java:963)
[java] at org.jboss.mq.SpyMessageConsumer.receive(SpyMessageConsumer.java:360)
[java] at org.jboss.internal.soa.esb.couriers.JmsCourier.pickup(JmsCourier.java:349)
[java] at org.jboss.internal.soa.esb.couriers.TwoWayCourierImpl.pickup(TwoWayCourierImpl.java:184)
[java] at org.jboss.internal.soa.esb.couriers.TwoWayCourierImpl.pickup(TwoWayCourierImpl.java:166)
[java] at org.jboss.soa.esb.listeners.message.MessageAwareListener.waitForEventAndProcess(MessageAwareListener.java:246)
[java] at org.jboss.soa.esb.listeners.message.MessageAwareListener.doRun(MessageAwareListener.java:228)
[java] at org.jboss.soa.esb.listeners.lifecycle.AbstractThreadedManagedLifecycle.run(AbstractThreadedManagedLifecycle.java:114)
[java] at java.lang.Thread.run(Thread.java:595)
[java] 09:59:51,775 WARN [Connection] Connection failure, use javax.jms.Connection.setExceptionListener() to handle this error and reconnect
[java] org.jboss.mq.SpyJMSException: No pong received; - nested throwable: (java.io.IOException: ping timeout.)
[java] at org.jboss.mq.Connection$PingTask.run(Connection.java:1277)
[java] at EDU.oswego.cs.dl.util.concurrent.ClockDaemon$RunLoop.run(ClockDaemon.java:364)
[java] at java.lang.Thread.run(Thread.java:595)
[java] Caused by: java.io.IOException: ping timeout.
[java] ... 3 more
[java] 09:59:52,056 ERROR [JmsCourier] JMS error. Attempting JMS reconnect.
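The WARN lines above point at the expected recovery path: install a `javax.jms.ExceptionListener` and re-establish the connection on failure. The retry loop itself could look like the sketch below; it is a hypothetical helper, written against a plain `Supplier` instead of the JMS API so it stands alone, with the real code invoking it from `ExceptionListener#onException`.

```java
import java.util.function.Supplier;

public class ReconnectLoop {

    // Hypothetical retry helper: keeps attempting to (re)create a
    // connection with linear backoff until it succeeds or the attempt
    // budget is exhausted, then rethrows the last failure.
    public static <T> T connectWithRetry(Supplier<T> connect,
                                         int maxAttempts,
                                         long backoffMillis) throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return connect.get();  // e.g. factory.createConnection()
            } catch (RuntimeException e) {
                last = e;
                Thread.sleep(backoffMillis * attempt);  // back off before retrying
            }
        }
        throw last != null ? last : new RuntimeException("no attempts made");
    }
}
```

The key point is that the courier must also discard and recreate its consumers after a reconnect; the `IllegalStateException: The consumer is closed` in the trace shows the old consumer being reused against a dead connection.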
[JBoss JIRA] Created: (JBMESSAGING-797) Deadlock in aop stack deployment
by Tim Fox (JIRA)
Deadlock in aop stack deployment
--------------------------------
Key: JBMESSAGING-797
URL: http://jira.jboss.com/jira/browse/JBMESSAGING-797
Project: JBoss Messaging
Issue Type: Bug
Affects Versions: 1.2.0.Beta1
Reporter: Tim Fox
Assigned To: Tim Fox
Fix For: 1.0.1.GA
The following deadlock occurs in TRUNK:
Java stack information for the threads listed above:
SERVER 1 STDOUT: ===================================================
SERVER 1 STDOUT: "WorkerThread#0[127.0.0.1:54574]":
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.findAdvisor(AspectManager.java:518)
SERVER 1 STDOUT: - waiting to lock <0x00002b1b28689000> (a java.util.WeakHashMap)
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.getAnyAdvisorIfAdvised(AspectManager.java:537)
SERVER 1 STDOUT: at org.jboss.aop.ClassAdvisor.populateMethodTables(ClassAdvisor.java:1432)
SERVER 1 STDOUT: at org.jboss.aop.ClassAdvisor.createMethodTables(ClassAdvisor.java:1448)
SERVER 1 STDOUT: at org.jboss.aop.ClassAdvisor.access$100(ClassAdvisor.java:82)
SERVER 1 STDOUT: at org.jboss.aop.ClassAdvisor$1.run(ClassAdvisor.java:288)
SERVER 1 STDOUT: at java.security.AccessController.doPrivileged(Native Method)
SERVER 1 STDOUT: at org.jboss.aop.ClassAdvisor.attachClass(ClassAdvisor.java:271)
SERVER 1 STDOUT: - locked <0x00002b1b287e7da0> (a org.jboss.aop.ClassAdvisor)
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.initialiseClassAdvisor(AspectManager.java:587)
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.getAdvisor(AspectManager.java:575)
SERVER 1 STDOUT: at org.jboss.jms.client.delegate.ClientConnectionDelegate.<clinit>(ClientConnectionDelegate.java)
SERVER 1 STDOUT: at org.jboss.jms.server.endpoint.ServerConnectionFactoryEndpoint.createConnectionDelegateInternal(ServerConnectionFactoryEndpoint.java:219)
SERVER 1 STDOUT: at org.jboss.jms.server.endpoint.ServerConnectionFactoryEndpoint.createConnectionDelegate(ServerConnectionFactoryEndpoint.java:132)
SERVER 1 STDOUT: at org.jboss.jms.wireformat.ConnectionFactoryCreateConnectionDelegateRequest.serverInvoke(ConnectionFactoryCreateConnectionDelegateRequest.java:107)
SERVER 1 STDOUT: at org.jboss.jms.server.remoting.JMSServerInvocationHandler.invoke(JMSServerInvocationHandler.java:126)
SERVER 1 STDOUT: at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:715)
SERVER 1 STDOUT: at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:552)
SERVER 1 STDOUT: at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:378)
SERVER 1 STDOUT: at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:158)
SERVER 1 STDOUT: "Thread-4":
SERVER 1 STDOUT: at org.jboss.aop.Advisor.newBindingAdded(Advisor.java:516)
SERVER 1 STDOUT: - waiting to lock <0x00002b1b287e7da0> (a org.jboss.aop.ClassAdvisor)
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.updateAdvisorsForAddedBinding(AspectManager.java:1472)
SERVER 1 STDOUT: - locked <0x00002b1b28689000> (a java.util.WeakHashMap)
SERVER 1 STDOUT: at org.jboss.aop.AspectManager.addBinding(AspectManager.java:1441)
SERVER 1 STDOUT: - locked <0x00002b1b28689000> (a java.util.WeakHashMap)
SERVER 1 STDOUT: - locked <0x00002b1b2868a3d8> (a org.jboss.aop.AspectManager)
SERVER 1 STDOUT: at org.jboss.aop.AspectXmlLoader.deployBinding(AspectXmlLoader.java:286)
SERVER 1 STDOUT: at org.jboss.aop.AspectXmlLoader.deployTopElements(AspectXmlLoader.java:1038)
SERVER 1 STDOUT: at org.jboss.aop.AspectXmlLoader.deployXML(AspectXmlLoader.java:886)
SERVER 1 STDOUT: at org.jboss.jms.client.container.JmsClientAspectXMLLoader.deployXML(JmsClientAspectXMLLoader.java:88)
SERVER 1 STDOUT: at org.jboss.jms.client.ClientAOPStackLoader.load(ClientAOPStackLoader.java:69)
SERVER 1 STDOUT: - locked <0x00002b1b287c7098> (a org.jboss.jms.client.ClientAOPStackLoader)
SERVER 1 STDOUT: at org.jboss.jms.client.JBossConnectionFactory.createConnectionInternal(JBossConnectionFactory.java:199)
SERVER 1 STDOUT: at org.jboss.jms.client.JBossConnectionFactory.createXAConnection(JBossConnectionFactory.java:129)
SERVER 1 STDOUT: at org.jboss.jms.client.JBossConnectionFactory.createXAConnection(JBossConnectionFactory.java:124)
SERVER 1 STDOUT: at org.jboss.jms.recovery.BridgeXAResourceRecovery.hasMoreResources(BridgeXAResourceRecovery.java:229)
SERVER 1 STDOUT: at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.resourceInitiatedRecovery(XARecoveryModule.java:677)
SERVER 1 STDOUT: at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkSecondPass(XARecoveryModule.java:177)
SERVER 1 STDOUT: at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWork(PeriodicRecovery.java:237)
SERVER 1 STDOUT: at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:163)
SERVER 1 STDOUT:
SERVER 1 STDOUT: Found 1 deadlock.
A related deadlock (see forum reference) occurs in 1.0.1.GA.
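The trace shows the classic two-lock cycle: one thread holds the `ClassAdvisor` monitor and waits for the `WeakHashMap`, while the other holds the `WeakHashMap` and waits for the `ClassAdvisor`. The usual fix pattern is a fixed global lock order, sketched below with plain stand-in objects (this is an illustration of the technique, not the actual AOP code):

```java
public class LockOrdering {
    // Stand-ins for the two contended monitors in the trace.
    private final Object managerMap = new Object();  // the WeakHashMap
    private final Object advisor = new Object();     // the ClassAdvisor

    // Path A (was advisor -> map): rewritten to follow the global
    // order map -> advisor, so it can no longer cycle with path B.
    public int attachClass() {
        synchronized (managerMap) {
            synchronized (advisor) {
                return 1;  // do the attach work under both locks
            }
        }
    }

    // Path B (map -> advisor) already follows the global order.
    public int addBinding() {
        synchronized (managerMap) {
            synchronized (advisor) {
                return 2;  // do the binding update under both locks
            }
        }
    }
}
```

With both paths acquiring `managerMap` before `advisor`, the wait-for cycle in the thread dump cannot form.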