"sams" wrote : Thanks for the tip about trying things outside of an MDB. The
simple consumer test worked and proved that the queues would pull messages around as
needed to get the jobs done as quick as possible. When doing the same type of test with
an MDB it would never do this. I spent hours tweaking little config files and I was about
to give up and go to bed but finally a long shot hit me, and it worked...
|
| It seems that even though I'm adding messages using the
ClusteredConnectionFactory, this only works with a standard consumer.
|
The connection factory you use for sending messages has no bearing on the connection
factory used for consuming messages.
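For reference, a standalone consumer test along those lines might look like this
minimal sketch. The JNDI names (/ClusteredConnectionFactory, /queue/testQueue) are
assumptions based on typical JBoss Messaging defaults and may differ in your
deployment:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class SimpleConsumerTest {
        public static void main(String[] args) throws Exception {
            // Assumes jndi.properties points at the server's naming service.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("/ClusteredConnectionFactory");
            Queue queue = (Queue) ctx.lookup("/queue/testQueue"); // assumed name

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                conn.start();

                // Blocking receive with a timeout; a real test would loop and time this.
                TextMessage msg = (TextMessage) consumer.receive(5000);
                System.out.println(msg == null ? "no message" : "got: " + msg.getText());
            } finally {
                conn.close();
            }
        }
    }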
anonymous wrote :
| When you start using an MDB, it seems to step into the middle of things and redirect
everything to the old ConnectionFactory instead of the clustered one.
|
There is no redirection occurring.
anonymous wrote :
| I even have the @Clustered annotation in my MDB and that doesn't make it work
correctly.
|
| The solution was to simply add the following attributes to the standard
ConnectionFactory in the connections-factory-service.xml file:
|
| | <attribute name="SupportsFailover">true</attribute>
| | <attribute name="SupportsLoadBalancing">true</attribute>
| | <attribute name="PrefetchSize">5</attribute>
| |
|
You shouldn't just change the SupportsFailover or SupportsLoadBalancing attributes -
MDBs should always consume from the local node.
As mentioned before, PrefetchSize is the parameter you want to change if you don't
want to buffer so many messages. Since you have now reduced it to 5, that is why you
are seeing the difference in behaviour.
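As a point of contrast, a minimal MDB sketch looks something like the following (the
queue name queue/testQueue is hypothetical). Note there is no prefetch setting in the
bean itself; PrefetchSize lives in the server-side connection factory deployment:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Minimal EJB3 MDB; it consumes from the local node's instance of the queue.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "queue/testQueue") // hypothetical
    })
    public class TestMDB implements MessageListener {
        public void onMessage(Message msg) {
            try {
                if (msg instanceof TextMessage) {
                    // Real work would go here; keep it fast to drain the queue quickly.
                    System.out.println("MDB received: " + ((TextMessage) msg).getText());
                }
            } catch (Exception e) {
                throw new RuntimeException(e); // trigger rollback/redelivery on failure
            }
        }
    }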
anonymous wrote :
| I had to lower the PrefetchSize to a small number instead of the large 150, or else
the consumers would fetch 150 messages at a time and not let them go. I'm considering
setting this to 1 to ensure even distribution of messages. I have no idea what sort of
penalty would be applied for having such a low prefetch, but I can't imagine it would
be too bad. If someone knows, please enlighten us.
|
Consumer flow control works a bit like TCP flow control. The server has a certain
number of tokens and keeps sending messages as long as it has tokens; when its tokens
are depleted it won't send any more. As messages are consumed, more tokens are sent to
the server (in chunks) so the server can send more. This prevents the consumer from
having to go to the server for every message, which would involve a network round trip
(RTT) and be slow. This is a standard technique employed by every messaging system I
know of (apart from JBoss MQ). Setting PrefetchSize to 1 effectively means the
consumer will go to the server each time to get a message.
So there is a trade-off. Depending on how fast your MDBs consume messages, you may not
notice a difference. You can only tell by experimentation.
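To make the token mechanism concrete, here is a toy model (not the actual JBoss
Messaging code); the chunk-size heuristic is invented purely for illustration:

    // Toy model of credit-based consumer flow control: the server may send
    // only while it holds tokens; the consumer returns tokens in chunks as
    // it finishes processing messages.
    public class FlowControlSketch {
        private int tokens;                  // server-side send credit
        private final int chunk;             // tokens returned per batch
        private int consumedSinceTopUp = 0;

        FlowControlSketch(int prefetchSize) {
            this.tokens = prefetchSize;
            // Illustrative heuristic: top up after a third of the window.
            this.chunk = Math.max(1, prefetchSize / 3);
        }

        boolean serverCanSend() {
            return tokens > 0;
        }

        void serverSent() {
            tokens--;                        // each delivery spends one token
        }

        void consumerProcessedOne() {
            if (++consumedSinceTopUp >= chunk) {
                tokens += consumedSinceTopUp; // credit returned in a chunk:
                consumedSinceTopUp = 0;       // one round trip per chunk,
            }                                 // not one per message
        }
    }

With a prefetch of 1 the chunk collapses to a single token, so every message costs a
round trip; with a prefetch of 150, credit flows back to the server in batches and the
round trips are amortised.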
Thanks
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4131028#...