[JBoss JIRA] (JBMESSAGING-1759) JGroups cannot process incoming messages during the method execution
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1759?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1759:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> JGroups cannot process incoming messages during the method execution
> --------------------------------------------------------------------
>
> Key: JBMESSAGING-1759
> URL: https://issues.jboss.org/browse/JBMESSAGING-1759
> Project: JBoss Messaging
> Issue Type: Bug
> Components: JMS Clustering
> Affects Versions: 1.4.0.SP3.CP07
> Environment: JBossMessaging-1.4.0_SP3_CP7
> Reporter: Tyronne Wickramarathne
> Assignee: Yong Hao Gao
> Priority: Critical
> Fix For: 1.4.0.SP3.CP15, 1.4.8.SP11
>
>
> GroupMember#viewAccepted() takes 31 minutes on the IncomingPacketHandler thread. JGroups cannot process incoming messages while this method executes; the channel is completely blocked, which leads to the following log entry and subsequent WARNs.
> 2009-11-11 11:50:12.634 [ViewHandler] WARN [GMS] - failed to collect all ACKs (1) for view [192.168.1.221:49898|2] [192.168.1.221:49898] after 2000ms, missing ACKs from [192.168.1.221:49898] (received=[]), local_addr=192.168.1.221:49898
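> For illustration, a minimal sketch (not JBM's actual GroupMember code; the class and helper names are hypothetical) of the usual remedy: hand the view change off to an executor so viewAccepted() returns immediately and the JGroups up-thread can keep processing messages and view ACKs.
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import org.jgroups.View;
> public class NonBlockingGroupMember {
>     private final ExecutorService viewExecutor = Executors.newSingleThreadExecutor();
>     // Called on the JGroups up-thread; must return quickly.
>     public void viewAccepted(final View newView) {
>         viewExecutor.execute(new Runnable() {
>             public void run() {
>                 handleViewChange(newView); // the long-running (31 min) work
>             }
>         });
>     }
>     private void handleViewChange(View view) {
>         // failover/merge handling formerly done inline in viewAccepted()
>     }
> }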
[JBoss JIRA] (JBMESSAGING-1754) Implement the JBossMQ behavior on JBM, when stopDelivery() is invoked via JMX console
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1754?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1754:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> Implement the JBossMQ behavior on JBM, when stopDelivery() is invoked via JMX console
> -------------------------------------------------------------------------------------
>
> Key: JBMESSAGING-1754
> URL: https://issues.jboss.org/browse/JBMESSAGING-1754
> Project: JBoss Messaging
> Issue Type: Feature Request
> Components: Messaging Core
> Affects Versions: 1.4.0.SP3.CP08
> Environment: JBoss-EAP-4.3_CP6, JBM-1.4.0-SP3_CP8P1
> Reporter: Tyronne Wickramarathne
> Assignee: Yong Hao Gao
> Fix For: 1.4.0.SP3.CP15, 1.4.8.SP11
>
>
> When stopDelivery() is invoked via the JMX console for any given MDB, the in-process messages are rolled back to their corresponding destination without completing. This is however *not* a bug, but the expected behavior in JBM, since this is how JBM defines stopDelivery().
> On JBossMQ, however, the in-process messages are completed first, before stopDelivery() is processed. That is, the stopDelivery() call is kept on hold until all in-process messages have completed successfully. Customers migrating from JBossMQ see this as a compatibility issue when porting their applications to JBM. Hence, would it be possible to accommodate the JBossMQ behavior in JBM?
> I have tested this on both JBoss-EAP-4.3_CP6 and JBoss-EAP-4.2_CP7. Both share the same code base for EJB3 and JCA, but not the JMS provider; therefore, I'm raising this feature request under JBM.
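> For illustration, a hypothetical sketch (not JBM's implementation; all names are invented) of a gate that would give stopDelivery() the JBossMQ semantics described above, holding the JMX caller until in-process messages drain:
> public class DeliveryGate {
>     private int inFlight = 0;
>     private boolean stopped = false;
>     // Called before dispatching a message to the MDB; refuses once stopped.
>     public synchronized boolean beginDelivery() {
>         if (stopped) return false;
>         inFlight++;
>         return true;
>     }
>     // Called when the MDB finishes (or rolls back) a message.
>     public synchronized void endDelivery() {
>         if (--inFlight == 0) notifyAll();
>     }
>     // Blocks the JMX caller until all in-process deliveries have completed.
>     public synchronized void stopDelivery() throws InterruptedException {
>         stopped = true;
>         while (inFlight > 0) wait();
>     }
> }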
[JBoss JIRA] (JBMESSAGING-1780) When I change the default DB schema of the JBM tables, the DLQ - if it contains a message - runs into a deadlock on the next server restart, because it uses the default schema instead of the modified DB schema
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1780?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1780:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> When I change the default DB schema of the JBM tables, the DLQ - if it contains a message - runs into a deadlock on the next server restart, because it uses the default schema instead of the modified DB schema
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: JBMESSAGING-1780
> URL: https://issues.jboss.org/browse/JBMESSAGING-1780
> Project: JBoss Messaging
> Issue Type: Bug
> Affects Versions: 1.4.0.SP3.CP09, 1.4.5.GA, 1.4.6.GA
> Environment: JBoss 5.1.GA+Seam 2.1.1.GA+mysql 5.1
> Reporter: bb bb
> Assignee: Yong Hao Gao
> Fix For: 1.4.0.SP3.CP15, 1.4.8.SP11
>
>
> First, I modified the default schema of the JBM tables in mysql-persistence-service.xml:
> I changed the SQL properties from JBM_DUAL to newSchema.JBM_DUAL, and so on...
> Everything works fine with the modified schema, until I add a poison message to the dead letter queue.
> When I then restart the JBoss server, I get an exception and the server runs into a deadlock, because it does not find the jbm_msg tables (it should be looking for newSchema.jbm_msg).
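> To illustrate the failure mode (a hypothetical sketch, not JBM's persistence code; the property keys are only examples): any lookup that falls back to an unqualified default table name will target the default schema, even when other statements were overridden.
> import java.util.Properties;
> public class SchemaLookupSketch {
>     public static void main(String[] args) {
>         Properties sql = new Properties();
>         sql.setProperty("JBM_DUAL", "newSchema.JBM_DUAL"); // overridden in mysql-persistence-service.xml
>         // A code path that was not overridden falls back to the unqualified
>         // default, which does not exist under the non-default schema:
>         String table = sql.getProperty("JBM_MSG", "JBM_MSG");
>         System.out.println(table); // prints "JBM_MSG", not "newSchema.JBM_MSG"
>     }
> }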
[JBoss JIRA] (JBMESSAGING-1777) Error in addMessageIDInHeader while bridging from JBossMQ
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1777?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1777:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> Error in addMessageIDInHeader while bridging from JBossMQ
> ---------------------------------------------------------
>
> Key: JBMESSAGING-1777
> URL: https://issues.jboss.org/browse/JBMESSAGING-1777
> Project: JBoss Messaging
> Issue Type: Bug
> Affects Versions: 1.4.3.GA
> Environment: JBoss AS 5.1.0.GA
> Reporter: Pedro Gontijo
> Assignee: Yong Hao Gao
> Labels: JMSXDeliveryCount, addMessageIDInHeader, bridge, jbossmq
> Fix For: 1.4.0.SP3.CP15, 1.4.8.SP11
>
>
> When addMessageIDInHeader is set to true in a JBossMQ->JBM bridge scenario, the following error occurs:
> WARN [org.jboss.jms.server.bridge.Bridge] (Thread-27) jboss.messaging:name=MyBridge,service=Bridge Failed to send + acknowledge batch, closing JMS objects
> javax.jms.JMSException: Illegal property name: JMSXDeliveryCount
> at org.jboss.mq.SpyMessage.checkProperty(Unknown Source)
> at org.jboss.mq.SpyMessage.setObjectProperty(Unknown Source)
> at org.jboss.jms.server.bridge.Bridge.addMessageIDInHeader(Bridge.java:1481)
> at org.jboss.jms.server.bridge.Bridge.sendMessages(Bridge.java:1391)
> at org.jboss.jms.server.bridge.Bridge.sendBatchNonTransacted(Bridge.java:1261)
> at org.jboss.jms.server.bridge.Bridge.sendBatch(Bridge.java:1375)
> at org.jboss.jms.server.bridge.Bridge.access$1900(Bridge.java:68)
> at org.jboss.jms.server.bridge.Bridge$BatchTimeChecker.run(Bridge.java:1638)
> at java.lang.Thread.run(Thread.java:595)
> As you can see, the error is due to the JMSX properties: addMessageIDInHeader sets them on the MQ message, which is not allowed.
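> For illustration, a minimal sketch (not the actual Bridge fix; the helper is hypothetical) of the kind of guard that avoids the error: when copying properties onto the target message, skip the provider-reserved JMSX* names that SpyMessage.checkProperty() rejects.
> import java.util.Enumeration;
> import javax.jms.JMSException;
> import javax.jms.Message;
> public class PropertyCopySketch {
>     public static void copyProperties(Message from, Message to) throws JMSException {
>         for (Enumeration<?> names = from.getPropertyNames(); names.hasMoreElements();) {
>             String name = (String) names.nextElement();
>             if (name.startsWith("JMSX")) {
>                 continue; // e.g. JMSXDeliveryCount: setting it on a SpyMessage throws
>             }
>             to.setObjectProperty(name, from.getObjectProperty(name));
>         }
>     }
> }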
[JBoss JIRA] (JBMESSAGING-1763) Bisocket connections won't be closed when the ethernet cable between client and server is pulled out. The failure detection code won't close the failed connection; as a result, subsequent requests hang once the connection count exceeds the threshold
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1763?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1763:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> Bisocket connections won't be closed when the ethernet cable between client and server is pulled out. The failure detection code won't close the failed connection; as a result, subsequent requests hang once the connection count exceeds the threshold
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: JBMESSAGING-1763
> URL: https://issues.jboss.org/browse/JBMESSAGING-1763
> Project: JBoss Messaging
> Issue Type: Bug
> Components: JMS Remoting
> Affects Versions: 1.4.5.GA, 1.4.6.GA
> Environment: OS: Windows Server 2003. JBoss App Server 4.2.3.GA, JBoss Messaging 1.4.5 GA, JBoss Remoting 2.2.3 SP1
> Reporter: mingjun jiang
> Assignee: Yong Hao Gao
> Labels: Failure, are, be, by, cable, caused, closed, connection, ethernet, if, manually, out, pulling, they, won't
> Fix For: 1.4.8.SP11
>
> Attachments: jboss-test-log for JBM1.4.5 & Remoting 2.2.3 SP1.zip, QReceiver.java, QSender.java, remoting-bisocket-service.xml
>
>
> We are using JBoss App Server 4.2.3.GA, JBoss Messaging 1.4.5.GA and JBoss Remoting 2.2.3.SP1. In our application there are many message listeners running on the client side; these listeners receive messages from queues/topics deployed in JBoss Messaging.
> Configuration:
> We created our own JMS connection factory, which uses the default remoting connector. As you know, the default remoting connector is configured to use the bisocket transport. We did not change the default values of the remoting connector.
> While our application runs, we keep the JBoss web console open to monitor the value of currentClientPoolSize under the "Jboss.remoting" JMX MBean.
>
> How to reproduce this issue:
> 1. Run 5 message listeners on the client side to receive messages from JBoss Messaging; we then observe that the value of currentClientPoolSize is 10.
> 2. After processing several messages, we manually pull out the ethernet cable. The value of currentClientPoolSize is still 10.
> 3. We run another 5 message listeners on the client side; the value of currentClientPoolSize then becomes 20.
> 4. After repeating the same operations several times, the value of currentClientPoolSize keeps increasing. Once it reaches MaxPoolSize, subsequent incoming client requests hang, and we encounter the following exception on the server side:
> 2009-10-20 18:08:09,655 ERROR [org.jboss.remoting.transport.socket.ServerThread] Worker thread initialization failure
> java.net.SocketException: Connection reset
> at java.net.SocketInputStream.read(SocketInputStream.java:168)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:235)
> at java.io.FilterInputStream.read(FilterInputStream.java:66)
> at org.jboss.remoting.transport.socket.ServerThread.readVersion(ServerThread.java:859)
> at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:545)
> at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:406)
> at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:173)
> Conclusion: JBoss Messaging won't close failed connections when they are caused by manually pulling out the ethernet cable. As a result, the value of currentClientPoolSize increases continuously and eventually new client requests hang.
> Note: if we instead kill the message listener process on the client side, the value of currentClientPoolSize drops to 0 immediately; it seems the server can detect that kind of connection failure and release the corresponding resources.
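> For illustration, a generic sketch (plain java.net sockets, not JBoss Remoting's bisocket code) of the kind of dead-peer detection that would make such connections fail instead of lingering: a pulled cable produces no FIN/RST, so only keep-alive probes or read timeouts surface the failure.
> import java.net.Socket;
> import java.net.SocketException;
> public class DeadPeerDetectionSketch {
>     // Apply to each pooled connection so a vanished peer eventually
>     // surfaces as an exception and the pool entry can be released.
>     public static void configure(Socket socket) throws SocketException {
>         socket.setKeepAlive(true);  // OS-level probes detect the dead peer
>         socket.setSoTimeout(60000); // blocking reads fail after 60s of silence
>     }
> }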
[JBoss JIRA] (JBMESSAGING-1790) A single lagging JBM topic subscriber can cause all publishes to lag
by Yong Hao Gao (JIRA)
[ https://issues.jboss.org/browse/JBMESSAGING-1790?page=com.atlassian.jira.... ]
Yong Hao Gao updated JBMESSAGING-1790:
--------------------------------------
Fix Version/s: 1.4.8.SP11
(was: 1.4.8.SP10)
> A single lagging JBM topic subscriber can cause all publishes to lag
> --------------------------------------------------------------------
>
> Key: JBMESSAGING-1790
> URL: https://issues.jboss.org/browse/JBMESSAGING-1790
> Project: JBoss Messaging
> Issue Type: Bug
> Components: Messaging Core
> Affects Versions: 1.4.3.GA
> Environment: JBoss 5.1.0_GA, JDK 1.6.0_14 and JDK 1.6.0_18, multiple versions of Windows (2008, 2003, Vista)
> Reporter: Jason Burton
> Assignee: Yong Hao Gao
> Fix For: 1.4.8.SP11
>
> Attachments: stack_traces.zip
>
>
> We have an application that makes heavy use of JMS topics. The attached stack traces lead me to believe that one client has some sort of network problem and that JBM is trying to write messages to it. Evidently this client's socket isn't working anymore. That in itself is no problem (and maybe to be expected), but it causes every other client's publishes to hang.
> The attached stack traces are from JBoss and were taken 10 seconds apart during an occurrence of this. During this time, we observed one or more calls to JBossMessageProducer.publish() take 50 seconds to complete (they normally take 2-5 milliseconds). Best I can tell, the "WorkManager(2)-3" thread is the culprit. Over the 50 seconds it seemed to be stuck in a socketWrite() while holding the lock on object <0x143a2250>, and a few other publishing threads are waiting on that lock.
> I'm pretty sure this relates to a closed bug:
> https://jira.jboss.org/jira/browse/JBMESSAGING-1220
> That bug report says to run a client out of memory to reproduce the problem, but at the point of the attached server stack traces there aren't any clients that are out of memory. We have, however, been able to reproduce this by running a client out of memory. The report also suggested reducing the prefetchSize to a number of messages whose combined size would be under the TCP window size. Best I can tell from searching, the TCP window size defaults to 16K on Windows. It is definitely possible that the messages we send are larger than 16K, so even setting the prefetchSize to 1 could exceed the TCP window size.
> Let me know if you need any more information.
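> For illustration, a generic sketch (not JBM's actual design; the class is hypothetical) of the usual remedy: decouple publishing from network writes with a bounded per-subscriber queue and a dedicated writer thread, so a socketWrite() stuck on one subscriber's full TCP window stalls only that subscriber.
> import java.io.IOException;
> import java.io.OutputStream;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.LinkedBlockingQueue;
> public class SubscriberChannelSketch {
>     private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<byte[]>(1000);
>     // Publisher side: enqueue and return; never touches the socket, so it
>     // cannot block behind another subscriber's stuck socketWrite().
>     public boolean offer(byte[] message) {
>         return pending.offer(message); // when full, drop or mark the subscriber slow
>     }
>     // Dedicated writer thread per subscriber; only this thread blocks here.
>     public void writeLoop(OutputStream out) throws IOException, InterruptedException {
>         while (true) {
>             out.write(pending.take());
>         }
>     }
> }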