"bstansberry(a)jboss.com" wrote :
|
| The stuff that should be reused is the interceptor model.
|
|
Is this the client-side interceptor model?
anonymous wrote :
|
| 2) The JGroups view can't be used directly, as a JGroups Address is not suitable
for use as a Remoting InvokerLocator. At minimum a different port. Quite possibly a
different IP address (e.g. client connections use a different NIC than clustering
traffic.) I think the thing to do is something like adding another Request type where a
server publishes its InvokerLocator(s); the rest of the cluster maintains a map of that
data. That's basically the DistributedReplicantManager concept.
|
|
I think we would need to replicate both the Remoting invoker locator and the JGroups
address.
The client needs to know the locator list, but a particular server also needs to know who
its failover server(s) are, so it can do things like cast messages to its failover
servers when we do in-memory message replication. A JGroups address would be ideal for
this.
We should be able to support multiple failover servers for a particular server.
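To make the idea concrete, here is a rough sketch of the kind of replicated entry each node could publish, holding both addresses plus its failover list. This is not the actual JBoss Remoting or JGroups API; the class names, fields, and the plain-string addresses are all hypothetical, standing in for an InvokerLocator and a JGroups Address.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical entry replicated around the cluster for each node.
// Both addresses are plain strings here; in practice they would be a
// Remoting InvokerLocator and a JGroups Address.
class NodeTopologyEntry {
    final String invokerLocator;      // what clients connect to, e.g. "socket://10.0.0.1:4457"
    final String jgroupsAddress;      // what peers use for intra-cluster casts
    final List<String> failoverNodes; // ids of this node's failover server(s)

    NodeTopologyEntry(String invokerLocator, String jgroupsAddress, List<String> failoverNodes) {
        this.invokerLocator = invokerLocator;
        this.jgroupsAddress = jgroupsAddress;
        this.failoverNodes = failoverNodes;
    }
}

// Cluster-wide map keyed by node id, along the lines of the
// DistributedReplicantManager concept mentioned above.
class ClusterTopology {
    private final Map<String, NodeTopologyEntry> entries = new ConcurrentHashMap<>();

    void publish(String nodeId, NodeTopologyEntry entry) {
        entries.put(nodeId, entry);
    }

    NodeTopologyEntry lookup(String nodeId) {
        return entries.get(nodeId);
    }
}
```

Keeping both addresses in one replicated entry means clients only ever see the locator side, while servers can pick out the JGroups side of their failover peers for replication casts.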
anonymous wrote :
|
| Perhaps an approach where the client fails over to anyone, but if the server isn't
the appropriate one it communicates that back to the client, along with the updated
topology and info as to who the failover server should be.
|
I like this idea.
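The redirect idea could look something like the following sketch: the client fails over to any node, and the node either accepts (it is the failover server for the dead node) or bounces the client to the right locator. All names here are hypothetical; this is an illustration of the protocol shape, not existing code.

```java
import java.util.Map;

// Hypothetical result of a client's failover attempt: either the server
// accepts the connection, or it redirects the client to the correct node.
class FailoverResponse {
    final boolean accepted;
    final String redirectLocator; // locator of the correct failover node, if redirected

    private FailoverResponse(boolean accepted, String redirectLocator) {
        this.accepted = accepted;
        this.redirectLocator = redirectLocator;
    }

    static FailoverResponse accept() {
        return new FailoverResponse(true, null);
    }

    static FailoverResponse redirect(String locator) {
        return new FailoverResponse(false, locator);
    }
}

// Server-side check: am I the designated failover node for the failed server?
class FailoverHandler {
    private final String myNodeId;
    private final Map<String, String> failoverNodeOf; // failed node id -> failover node id
    private final Map<String, String> locatorOf;      // node id -> invoker locator

    FailoverHandler(String myNodeId, Map<String, String> failoverNodeOf,
                    Map<String, String> locatorOf) {
        this.myNodeId = myNodeId;
        this.failoverNodeOf = failoverNodeOf;
        this.locatorOf = locatorOf;
    }

    FailoverResponse handle(String failedNodeId) {
        String owner = failoverNodeOf.get(failedNodeId);
        if (myNodeId.equals(owner)) {
            return FailoverResponse.accept();
        }
        // Not my queues: send the client the locator of the real failover node,
        // along with (in a fuller version) the updated topology.
        return FailoverResponse.redirect(locatorOf.get(owner));
    }
}
```

In a full version the redirect would also carry the refreshed topology map, so the client never has to guess twice.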
anonymous wrote :
| Stupid question: why does the client need to fail over to a particular server? The
failover server needs to do work to recover persistent messages from the store, but once
it's done that, aren't they available to a client connecting to any server?
Each server maintains its own set of queues, each containing a different set of
messages.
So when a node fails, the failover node needs to take over responsibility for the queues
on the failed node.
This means creating those queues in memory and loading any persistent messages from
storage into them (if in-memory replication is being used, the messages will already be
there, so there is no need to load from storage).
Clients then need to reconnect to the node that now hosts those queues so they can
continue as if nothing happened.
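The takeover step above can be sketched as follows. This is a simplified illustration under assumed names (there is no such class in the codebase): queues are plain deques of strings, and the store interface stands in for whatever persistence layer is actually used.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of failover activation: the failover node recreates the
// failed node's queues and populates them either from the in-memory replica
// (if message replication was on) or by reloading from persistent storage.
class FailoverActivation {

    // Stand-in for the persistence layer.
    interface Store {
        List<String> loadMessages(String queueName);
    }

    final Map<String, Deque<String>> localQueues = new HashMap<>();

    void takeOver(List<String> failedNodesQueues,
                  Map<String, Deque<String>> inMemoryReplica,
                  Store store) {
        for (String queue : failedNodesQueues) {
            Deque<String> q;
            if (inMemoryReplica != null && inMemoryReplica.containsKey(queue)) {
                // Replicated in memory: the messages are already here.
                q = inMemoryReplica.get(queue);
            } else {
                // Otherwise reload the persistent messages from storage.
                q = new ArrayDeque<>(store.loadMessages(queue));
            }
            localQueues.put(queue, q);
        }
    }
}
```

Once `takeOver` completes, the failover node can accept the reconnecting clients and deliver from the recreated queues as if nothing had happened.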
As an added complication, we also need to decide what happens when the failed node comes
back to life.
Does it take back responsibility for its original queues?
Actually, I don't think this is possible unless no clients are using those queues on the
failover node, since we would have to disconnect those connections and reconnect them on
the resurrected node, which would mean losing any persistent messages delivered in those
sessions, and that would be unacceptable.
View the original post :
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3977484#...