[Design of Messaging on JBoss (Messaging/JBoss)] - Re: Fail-over design questions
by hendra_netm
"timfox" wrote :
| When you fail over, non-persistent messages would be lost. Therefore we can't just fail over to redistribute load, since people would get upset if their non-persistent messages suddenly disappeared.
|
| There are certain situations where it is possible to redistribute a connection, for example if there are no unacked messages in the session, but this gets complex.
Sorry for the unclear question. I mean that when I shut down the server, the messages and the connection will not be failed over to another node. The fail-over policy is only triggered by a crash condition, not by a normal shutdown, isn't it?
If a server is shut down because, for example, I want to add some components to it, the messages and connections will not be failed over. Why doesn't shutdown get the same policy as a crash? Is there a problem that makes you treat the crashed and shut-down situations differently?
Thank you for your response.
Regards,
Hendra
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003475#4003475
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003475
19 years, 2 months
[Design of Messaging on JBoss (Messaging/JBoss)] - Re: Fail-over design questions
by timfox
"hendra_netm" wrote : Hello JBoss Messaging Developers,
| I have a question: what will happen to the crashed server when I bring it back again?
|
| For example, I have two servers, Server0 and Server1. Server0 crashes. All connections and messages on Server0 will be failed over to Server1.
| When I bring Server0 back, I think it will have no client connections, and this will lead to an imbalanced load between the servers. Is this correct?
|
| I also want to ask why the fail-over action is only triggered by a crash condition.
| I have a case where I need to shut down a JBoss Messaging server for operational reasons. I want the connections to fail over to another node when I shut down one server, and when I bring the server back, I want it to share the message delivery load with the other servers again. Is this scenario possible?
When you fail over, non-persistent messages would be lost. Therefore we can't just fail over to redistribute load, since people would get upset if their non-persistent messages suddenly disappeared.
There are certain situations where it is possible to redistribute a connection, for example if there are no unacked messages in the session, but this gets complex.
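The distinction above (persistent messages can be recovered on a backup node, non-persistent ones cannot) can be illustrated with a toy Java model. This is not JBoss Messaging code; the class and method names are invented for illustration, and it assumes a simple "journal survives, memory doesn't" persistence model:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of why failover loses non-persistent messages: only
// messages written to a durable journal can be reloaded on another node.
public class FailoverSketch {

    static class Message {
        final String body;
        final boolean persistent;
        Message(String body, boolean persistent) {
            this.body = body;
            this.persistent = persistent;
        }
    }

    static class Node {
        final List<Message> memory = new ArrayList<>();   // lost on crash
        final List<Message> journal = new ArrayList<>();  // survives crash

        void send(Message m) {
            memory.add(m);
            if (m.persistent) {
                journal.add(m);  // persistent messages also go to disk
            }
        }

        // Failover: the backup can only reload what reached the journal.
        Node failOverTo() {
            Node backup = new Node();
            backup.memory.addAll(journal);
            backup.journal.addAll(journal);
            return backup;
        }
    }

    public static void main(String[] args) {
        Node server0 = new Node();
        server0.send(new Message("order-1", true));    // persistent
        server0.send(new Message("heartbeat", false)); // non-persistent

        Node server1 = server0.failOverTo();           // server0 crashes
        System.out.println(server1.memory.size());     // only "order-1" survives
    }
}
```

In JMS terms, this corresponds to the producer's delivery mode: only messages sent with `DeliveryMode.PERSISTENT` have any chance of surviving a node failure.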
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003467#4003467
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003467
[Design of Messaging on JBoss (Messaging/JBoss)] - Fail-over design questions
by hendra_netm
Hello JBoss Messaging Developers,
I have a question: what will happen to the crashed server when I bring it back again?
For example, I have two servers, Server0 and Server1. Server0 crashes. All connections and messages on Server0 will be failed over to Server1.
When I bring Server0 back, I think it will have no client connections, and this will lead to an imbalanced load between the servers. Is this correct?
I also want to ask why the fail-over action is only triggered by a crash condition.
I have a case where I need to shut down a JBoss Messaging server for operational reasons. I want the connections to fail over to another node when I shut down one server, and when I bring the server back, I want it to share the message delivery load with the other servers again. Is this scenario possible?
Thank you in advance,
Hendra
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003456#4003456
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003456
[Design of JBossCache] - Re: markNodeCurrentlyInUse() - race condition
by bstansberry@jboss.com
From an earlier jbosscache-dev conversation:
"Manik Surtani" wrote :
| On 15 Nov 2006, at 14:51, Brian Stansberry wrote:
|
| > The purpose of this method is to allow an application to signal the
| > eviction policy not to evict a particular node whose data it's using.
| > Needed in situations where the app doesn't hold a lock on the node
| > *and*
| > where evicting the node will cause a problem.
| >
| > AFAICT, this use case is really just for EJB3 SFSBs, where the problem
| > is that evicting an "in-use" node forces the container to call
| > prePassivate() on the in-use EJB. This is done from a CacheListener.
| >
| > This is all a bit funky. Adding methods to the API to deal with a
| > very
| > specific use case. It also has a small race condition, which I won't
| > get into here.
| >
| > Here's a different (but still funky) approach:
| >
| > Add a new exception class o.j.c.eviction.NodeInUseException extends
| > RuntimeException.
| >
| > The application implements CacheListener. When it gets a
| > nodePassivated(pre=true) event, if it doesn't want the passivation, it
| > throws the NodeInUseException. This will abort the eviction and
| > propagate back to the eviction policy. Eviction policies are already
| > written such that they catch all exceptions, and then throw the node
| > into a retry queue (see BaseEvictionAlgorithm.evictCacheNode()). If we
| > wanted to get fancy, we could specifically catch NodeInUseException
| > and
| > decide from there whether to add it to the retry queue.
| >
| > I don't think we should try to change this this week; more of a short
| > term future thing.
| >
| > Thoughts?
|
| Your second (and still funky) approach does seem a lot cleaner than
| hacking stuff into the API for a very specific use case. So
| certainly my preferred option. But from a performance standpoint, is
| this really an "exceptional" circumstance?
The above is hacky. The concern I have about adding a field to the node is that it forces a read of each node as part of eviction. IIRC that isn't needed now, except in CacheImpl itself as part of the remove. I don't want to add overhead. (I'm in a class now and can't look at the code, so please forgive me if that's an easily addressable concern.)
I believe the race condition exists because eviction is a two-step process with two queues: the policy reads all the eviction events off the event queue and uses them to build its eviction queue, then it goes through the eviction queue and decides whether to evict the nodes. The race can happen if the markNodeCurrentlyInUse() call arrives after the event queue has been read but before the policy has dealt with the relevant node. This race is really a general problem that could throw off the accurate working of any eviction algorithm (e.g. a get() that occurs after the event queue is read doesn't prevent eviction by the LRUPolicy).
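The two-step process and the race window can be sketched in plain Java. This is a toy model with invented names, not the real BaseEvictionAlgorithm; it assumes the in-use check is applied while the eviction queue is being built, which is what makes a late mark ineffective:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Toy sketch of the race: the policy drains the event queue into its own
// eviction queue (checking in-use marks at that moment), then processes the
// eviction queue. A markNodeCurrentlyInUse() call landing between those two
// steps arrives too late to protect the node.
public class EvictionRaceSketch {
    final Queue<String> eventQueue = new ArrayDeque<>(); // nodes reported for eviction
    final Set<String> inUse = new HashSet<>();           // marked by the application

    void markNodeCurrentlyInUse(String fqn) {
        inUse.add(fqn);
    }

    // Step 1: drain the event queue, snapshotting the in-use marks now.
    Queue<String> buildEvictionQueue() {
        Queue<String> evictionQueue = new ArrayDeque<>();
        for (String fqn : eventQueue) {
            if (!inUse.contains(fqn)) {
                evictionQueue.add(fqn); // in-use check happens HERE, not later
            }
        }
        eventQueue.clear();
        return evictionQueue;
    }

    // Step 2: evict everything that made it into the eviction queue.
    Set<String> processEvictions(Queue<String> evictionQueue) {
        return new HashSet<>(evictionQueue);
    }

    public static void main(String[] args) {
        EvictionRaceSketch policy = new EvictionRaceSketch();
        policy.eventQueue.add("/sfsb/bean1");

        Queue<String> pending = policy.buildEvictionQueue();
        // Race window: the mark arrives after the eviction queue was built...
        policy.markNodeCurrentlyInUse("/sfsb/bean1");
        // ...so the node is evicted even though it is now marked in use.
        System.out.println(policy.processEvictions(pending));
    }
}
```

The same structure shows why the general problem mentioned above exists: anything that changes a node's eligibility after step 1 (a mark, a get() under LRU) is invisible to step 2.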
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4003454#4003454
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4003454