[
https://jira.jboss.org/jira/browse/JBMESSAGING-1456?page=com.atlassian.ji...
]
Victor Starenky commented on JBMESSAGING-1456:
----------------------------------------------
OK, we ran the test jars overnight. The results are not very good. While we don't see messages
"stuck" in delivery, we do see a number of nodes with large message backlogs
accumulated on some (not all) queues. That is, MessageCount is not zero (it is on the
order of hundreds) but DeliveringCount is zero, and those nodes have ConsumerCount=0. It looks
like the message sucker is not working, or something along those lines? It is configured, as far as I can tell. I
have actually seen this problem before, as mentioned in this forum thread:
http://www.jboss.org/index.html?module=bb&op=viewtopic&t=155317
Where should I send the logs and config files? I'm afraid attaching them here wouldn't work
for the logs because of the size restriction.
Our plan is to keep the test jars for now but to go back to the config files with the vastly
increased timeout values - that seemed to help as a workaround.
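For reference, the counters described above (MessageCount, DeliveringCount, ConsumerCount) can be
read per node over JMX as well as through the JMX-Console. Below is a minimal sketch, assuming the
default JBoss 4.x RMIAdaptor binding at "jmx/invoker/RMIAdaptor" and the standard JBM queue
ObjectName for testDistributedQueue - both are assumptions, not taken from the attached configs -
with jbossall-client.jar on the classpath:

    // Sketch only: reads the JBM queue counters from one node via the
    // standard JBoss 4.x RMI JMX adaptor. The JNDI binding, ObjectName and
    // host/port are assumptions based on the default configuration.
    import java.util.Properties;
    import javax.management.ObjectName;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

    public class QueueCounters {
        public static void main(String[] args) throws Exception {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, args.length > 0 ? args[0] : "jnp://localhost:1099");

            InitialContext ctx = new InitialContext(env);
            RMIAdaptor server = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");
            ObjectName queue = new ObjectName(
                "jboss.messaging.destination:service=Queue,name=testDistributedQueue");

            System.out.println("MessageCount    = " + server.getAttribute(queue, "MessageCount"));
            System.out.println("DeliveringCount = " + server.getAttribute(queue, "DeliveringCount"));
            System.out.println("ConsumerCount   = " + server.getAttribute(queue, "ConsumerCount"));
            ctx.close();
        }
    }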
Messages stuck in being-delivered state in cluster
--------------------------------------------------
Key: JBMESSAGING-1456
URL:
https://jira.jboss.org/jira/browse/JBMESSAGING-1456
Project: JBoss Messaging
Issue Type: Bug
Affects Versions: 1.4.0.SP3_CP03, 1.4.0.SP3.CP07
Reporter: Justin Bertram
Assignee: Howard Gao
Priority: Blocker
Fix For: 1.4.0.SP3.CP08, 1.4.4.GA
Attachments: DeliveringCount.png, kill3_thread_dump.txt, MessageStucked.png,
RemoveAllMessagesException.png, test-1456-jars.zip, thread_dump.txt
Messages become "stuck" in being-delivered state when clients use a clustered
XA connection factory in a cluster of at least 2 nodes.
JBoss setup:
-2 nodes of JBoss EAP 4.3 CP02
-commented out "ClusterPullConnectionFactory" in messaging-service.xml to
prevent message redistribution and eliminate the "message suckers" as the
potential culprit
-MySQL backend using the default mysql-persistence-service.xml (from
<JBOSS_HOME>/docs/examples/jms)
Client setup:
-both nodes have a client which is a separate process (i.e. not inside JBoss)
-clients are Spring-based
-one client produces and consumes, the other client just consumes
-both clients use the ClusteredXAConnectionFactory from the default
connection-factories-service.xml
-both clients publish to and consume from "queue/testDistributedQueue"
-clients are configured to send persistent messages and to use transacted sessions with
AUTO_ACKNOWLEDGE (a plain-JMS sketch of this setup follows below)
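A minimal plain-JMS sketch of the client setup above (Spring wiring omitted); the JNDI names and
the provider URL are assumptions based on the default JBM bindings and the queue named in this
report:

    // Sketch only: standalone consumer using the clustered XA factory and a
    // transacted session. Host/port and JNDI names are placeholders/assumptions.
    import java.util.Properties;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class TestQueueClient {
        public static void main(String[] args) throws Exception {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, "jnp://node1:1099");

            InitialContext ctx = new InitialContext(env);
            // Assumes the bound JBM factory also implements the plain ConnectionFactory interface.
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ClusteredXAConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/testDistributedQueue");

            Connection conn = cf.createConnection();
            // Transacted session; the acknowledge-mode argument is ignored when transacted=true.
            Session session = conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            conn.start();

            Message m;
            while ((m = consumer.receive(5000)) != null) {
                // process the message here, then commit so delivery is acknowledged
                session.commit();
            }
            conn.close();
            ctx.close();
        }
    }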
Symptoms of the issue:
-while running the clients I watch the JMX-Console attributes for
"queue/testDistributedQueue"
-as the consumers pull messages off the queue I can see the MessageCount and
DeliveringCount go to 0 every so often
-after a period of time (usually a few hours) the MessageCount and DeliveringCount
never go back to 0
-I "kill" the clients and wait for the DeliveringCount to go to 0, but it
never does
-after the clients are killed the ConsumerCount for the queue will drop, but never to 0
when messages are "stuck"
-a thread dump reveals at least one JBM server session that is apparently stuck (it
never goes away) - ostensibly this is the consumer that is showing in the JMX-Console for
"queue/testDistributedQueue"
-a "killall -3 java" doesn't produce anything from the clients so I know
their dead
-nothing is in any DLQ or expiry queue
-the database contains as many rows in the JBM_MSG and JBM_MSG_REF tables as the
DeliveringCount shown in the JMX-Console (a row-count sketch follows after this list)
-rebooting the node with the stuck messages frees the messages to be consumed (i.e.
un-sticks them)
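A quick way to cross-check the JMX-Console counts against the database, as noted in the symptom
above; a minimal JDBC sketch, where the MySQL URL and credentials are placeholders and the table
names come from the default JBM 1.4 schema:

    // Sketch only: counts rows in the JBM message tables to compare against
    // MessageCount/DeliveringCount. Connection details are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JbmTableCounts {
        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver");
            Connection c = DriverManager.getConnection(
                "jdbc:mysql://dbhost:3306/jbossdb", "user", "password");
            Statement s = c.createStatement();
            for (String table : new String[] { "JBM_MSG", "JBM_MSG_REF" }) {
                ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM " + table);
                rs.next();
                System.out.println(table + " rows: " + rs.getLong(1));
                rs.close();
            }
            s.close();
            c.close();
        }
    }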
Other notes:
-nothing else is happening on either node but running the client and running JBoss
-this only appears to happen when a clustered connection factory is used. I tested
using a normal connection factory and after 24 hours couldn't reproduce a stuck
message.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
https://jira.jboss.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira