"bander" wrote : Following on from this thread:
http://www.jboss.org/index.html?module=bb&op=viewtopic&t=102491
|
| I'm currently experiencing multiple failover issues with the 1.2.0.GA release.
I'm running two clustered nodes on my local machine (JB4.0.4, Win XP, JVM1.4.2) using
all the default settings, following the clustered node instructions in the user guide.
|
| After starting both messaging-node0 and messaging-node1 I start my test case
(attached).
|
| The first problem I have with the test case that I created is that the message
listener does not receive any of the dispatched messages (the test case creates a message
dispatcher and message listener - the dispatcher sends a message to a queue that the
listener is attached to). This happens regardless of the queue type (i.e.
clustered/non-clustered - in this case testDistributedQueue or testQueue).
| The only way I can get the listener to start receiving messages is to kill one of the
nodes e.g. kill node0.
|
Looking at your code, I see you are creating the first dispatcher connection to node0 and
the first listener connection to node1. The clustered connection factory will create
subsequent connections on different nodes according to (by default) a round-robin policy.
JBossMessaging clustering can be configured in different ways according to the type of
application you are running.
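The round-robin connection balancing can be sketched in plain Java. This is an illustrative simulation only; the class and method names below are hypothetical and are not JBM's actual implementation:

```java
import java.util.List;

// Minimal sketch of round-robin node selection, as a clustered
// connection factory might perform it. All names here are hypothetical.
public class RoundRobinNodeSelector {
    private final List<String> nodes;
    private int next = 0;

    public RoundRobinNodeSelector(List<String> nodes) {
        this.nodes = nodes;
    }

    // Each new connection is assigned the next node in the cycle.
    public synchronized String selectNode() {
        String node = nodes.get(next);
        next = (next + 1) % nodes.size();
        return node;
    }

    public static void main(String[] args) {
        RoundRobinNodeSelector selector =
            new RoundRobinNodeSelector(List.of("node0", "node1"));
        // First connection lands on node0, the second on node1, then it wraps:
        System.out.println(selector.selectNode()); // node0
        System.out.println(selector.selectNode()); // node1
        System.out.println(selector.selectNode()); // node0
    }
}
```

This is why, in the test case, the dispatcher and the listener end up on different nodes even though both connections come from the same factory.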
The most common type of clustered app is a bank of servers with homogeneous MDBs deployed
on each node (i.e. each node has the same set of MDBs) and producers evenly distributed
across nodes sending messages. In such a configuration it makes sense for the local queue
to always get the message - i.e. there is no point redistributing it to another node. This
is the default config.
So in your case you do not initially see your messages being consumed, since your
consumer is on a different node from your producer.
When one of the nodes is killed, both connections end up on the same node, hence you see
the messages being consumed.
There are several common application "types", and JBM can be configured
for all of them. Check out the section on clustering configuration in the 1.2 user guide
for more info. The documentation is also due to be fleshed out in more detail soon.
In short, if your producers are well distributed across the cluster then you should choose
the default cluster routing policy which always favours the local queue, otherwise you can
use a round robin cluster router policy.
If your consumers are well distributed across the cluster then you do not need message
redistribution (i.e. you should use the NullMessagePullPolicy) otherwise you can use the
DefaultMessagePullPolicy.
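For reference, these policies are plugged into the clustered post office configuration. The fragment below is illustrative only; the attribute names and fully qualified class names are from memory, so check the clustering configuration section of the 1.2 user guide for the exact spelling:

```xml
<!-- Illustrative fragment of the clustered post office config. -->
<!-- Attribute and class names may differ; see the 1.2 user guide. -->
<attribute name="ClusterRouterFactory">
   org.jboss.messaging.core.plugin.postoffice.cluster.DefaultRouterFactory
</attribute>
<attribute name="MessagePullPolicy">
   org.jboss.messaging.core.plugin.postoffice.cluster.DefaultMessagePullPolicy
</attribute>
```

Swapping DefaultMessagePullPolicy for NullMessagePullPolicy turns off message redistribution, which is the right choice when consumers are evenly spread across the cluster.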
Also bear in mind that a topology with just one producer on one node and a single
consumer on a different node, as in your test case, is probably not much of a real-world
scenario (why would you want to deploy your application this way?), although we should of
course cope with it (and we do).
I have successfully run your test case and I am seeing the expected behaviour so far. I
have killed alternating servers many times and failover occurs fine. We also have a test
that does this as part of the CruiseControl run, and it appears to be working.
Can you give me any more details as to what errors you are seeing?
anonymous wrote :
|
| The second issue is that it's pretty easy to stop messages being dispatched and
received altogether by randomly stopping and starting the individual nodes e.g. stop both
nodes and bring one back up - my test case was unable to get a connection after both nodes
had been shut down.
|
|
I don't understand this. In order for the client to successfully send/receive messages
you need at least one node in the cluster to be operational.
If you shut down all the nodes in the cluster then clearly nothing is going to work; the
client needs to talk to a server.
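If the client is started (or keeps running) while no node is up, all it can do is retry until a node comes back. A simple client-side retry loop might look like the sketch below; `ConnectRetry` is a hypothetical helper, not part of the JBM client API, and the simulated action in `main` stands in for a real JNDI lookup plus `createConnection()`:

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper: keep attempting an action (e.g. a JNDI
// lookup plus createConnection()) until it succeeds or attempts run out.
public class ConnectRetry {
    public static <T> T retry(Callable<T> action, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;            // no node reachable yet; wait and retry
                Thread.sleep(waitMillis);
            }
        }
        throw last;                  // all attempts failed: no server available
    }

    public static void main(String[] args) throws Exception {
        // Simulated "server": the first two attempts fail, the third succeeds.
        int[] calls = {0};
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("no node up");
            return "connected";
        }, 5, 10);
        System.out.println(result); // connected
    }
}
```

With all nodes down the loop exhausts its attempts and rethrows the last failure, which matches what you saw: the test case cannot get a connection until at least one node is brought back.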