Hi, sorry for not responding to this sooner. Answers inline:
"jhalliday" wrote :
| Clearly the number of replicas is critical - it must be high enough to ensure at least
| one node will survive any outage, but low enough to perform well.
|
| Writes must be synchronous for obvious reasons, but ideally a node that is up should
| not halt just because another member of the cluster is down. That model would preserve
| information but reduce availability, which is undesirable.
|
I am guessing that you will have session affinity, i.e., in the non-failing case it will
always be a single instance that works on a given transaction log. Hence, I would
recommend using buddy replication (BR), in sync mode (as per your requirement). BR also
lets you tune how many backup copies are stored, and since the number of backups is
fixed, your system will scale well.
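As a rough sketch, that setup might look something like this in the cache service XML
(attribute and property names follow the JBoss Cache 1.4-style docs; the numBuddies
value is just an example to tune):

```xml
<!-- Replicate synchronously cluster-wide, as required -->
<attribute name="CacheMode">REPL_SYNC</attribute>

<!-- Buddy replication: keep a fixed number of backup copies -->
<attribute name="BuddyReplicationConfig">
  <config>
    <buddyReplicationEnabled>true</buddyReplicationEnabled>
    <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
    <buddyLocatorProperties>
      numBuddies = 1
      ignoreColocatedBuddies = true
    </buddyLocatorProperties>
  </config>
</attribute>
```

Because each node only ever replicates to its numBuddies backups rather than the whole
cluster, the replication cost stays constant as you add nodes.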
"jhalliday" wrote :
| Similarly the crash of one buddy should not halt the system if there is an additional
| node available such that the total live number remains more than M.
|
The crash of a buddy will not halt the system; the surviving node will just attempt to
find an alternate buddy. Even if you end up with only one node in the system, it will
still run, albeit logging some severe warnings that you don't have anywhere to back up
to! :-)
"jhalliday" wrote :
| Also, are there any numbers on the performance as a function of group size,
| particularly mixing nodes on the same or different network segments. I'm thinking that
| to get independent failure characteristics for the nodes will probably require a
| distributed cluster, such that the nodes are on different power supplies etc. Having all
| the nodes in the same rack probably provides a false sense of security...
|
BR allows you to provide hints when selecting buddies (see the buddy pool cfg attribute)
so that the system will prefer buddies in the same pool. You can then create pools that
span racks, e.g., with one member on each rack.
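As a sketch, again using the 1.4-style config (the pool name is made up; assign the same
name to one node on each rack so that a node's backup lands on a different rack):

```xml
<attribute name="BuddyReplicationConfig">
  <config>
    <buddyReplicationEnabled>true</buddyReplicationEnabled>
    <!-- Nodes sharing a pool name are preferred as buddies for one
         another. Give the same pool name to nodes on different racks
         so backups end up on separate power/network domains. -->
    <buddyPoolName>cross-rack-pool-1</buddyPoolName>
  </config>
</attribute>
```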
"jhalliday" wrote :
| On a similar note, whilst cache puts must be synchronous, my design can tolerate
| asynchronous removes. Is such a hybrid configuration possible?
|
Option.setForceAsynchronous() allows you to set this on a per-invocation basis.
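A minimal sketch of how that per-invocation override might look, assuming the 1.4-style
Option API (class and method names as documented; the fqn string is just an example):

```java
import org.jboss.cache.TreeCache;
import org.jboss.cache.config.Option;

void removeAsync(TreeCache cache, String fqn) throws Exception {
    // Puts keep the cache-wide synchronous mode; only this remove
    // is forced to replicate asynchronously.
    Option async = new Option();
    async.setForceAsynchronous(true);
    cache.getInvocationContext().setOptionOverrides(async);
    cache.remove(fqn);  // the override applies to this invocation only
}
```

Subsequent calls without an override revert to the configured (synchronous) cache mode.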
"jhalliday" wrote :
| Critically this is not the same as having all writes go through to disk. Is it
| possible to configure the cache loaders to write only on eviction?
|
Yes. Set passivation to true in your cache loader cfg.
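A sketch of the cache loader section with passivation switched on (loader class and
property names follow the 1.4-style FileCacheLoader example; the location path is
obviously an assumption):

```xml
<attribute name="CacheLoaderConfiguration">
  <config>
    <!-- With passivation on, state is written to the loader only when
         it is evicted from memory, not on every put. -->
    <passivation>true</passivation>
    <cacheloader>
      <class>org.jboss.cache.loader.FileCacheLoader</class>
      <properties>location=/tmp/txlog</properties>
      <fetchPersistentState>true</fetchPersistentState>
    </cacheloader>
  </config>
</attribute>
```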
"jhalliday" wrote :
| Also, it is vital to ensure there is no circular dependency between the cache and the
| transaction manager. I'm assuming this can be achieved simply by ensuring there is no
| transaction context on the thread at the time the cache API is called. Or does it use
| JTA transactions anywhere internally?
|
Yes. Just suspend any JTA transactions before making cache calls.
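A sketch of the suspend/resume dance using the standard javax.transaction API (the
Runnable standing in for the actual cache calls is a placeholder):

```java
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

void callCacheOutsideTx(TransactionManager tm, Runnable cacheWork) throws Exception {
    // Detach any JTA transaction from this thread so the cache
    // cannot enlist in it.
    Transaction suspended = tm.suspend();  // null if no tx was active
    try {
        cacheWork.run();  // cache.put(...), cache.remove(...), etc.
    } finally {
        if (suspended != null) {
            tm.resume(suspended);  // reattach the caller's transaction
        }
    }
}
```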
"jhalliday" wrote :
| One final question: Am I totally mad, or only mildly demented?
|
No, this sounds pretty interesting. :-)
Re: Bela's comment about this being write-mostly and hence not suited to a cache: I
disagree, because you have session affinity and the datasets cached by each instance
will not overlap. So you don't have concurrent writers to the same dataset across the
cluster, which is why I suggested buddy replication. This feels a lot like HTTP session
replication IMO, where only one instance really needs the data; the backup is just there
in case servers die and things get ugly.
Cheers
Manik