From do-not-reply at jboss.org Fri Jan 7 11:58:17 2011
Content-Type: multipart/mixed; boundary="===============3348877411176982072=="
MIME-Version: 1.0
From: do-not-reply at jboss.org
To: hornetq-commits at lists.jboss.org
Subject: [hornetq-commits] JBoss hornetq SVN: r10110 -
trunk/docs/eap-manual/en.
Date: Fri, 07 Jan 2011 11:58:17 -0500
Message-ID: <201101071658.p07GwHdh007733@svn01.web.mwc.hst.phx2.redhat.com>
--===============3348877411176982072==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Author: ataylor
Date: 2011-01-07 11:58:16 -0500 (Fri, 07 Jan 2011)
New Revision: 10110
Modified:
trunk/docs/eap-manual/en/clusters.xml
Log:
updated documentation
Modified: trunk/docs/eap-manual/en/clusters.xml
===================================================================
--- trunk/docs/eap-manual/en/clusters.xml	2011-01-07 16:49:22 UTC (rev 10109)
+++ trunk/docs/eap-manual/en/clusters.xml	2011-01-07 16:58:16 UTC (rev 10110)
@@ -54,329 +54,391 @@
Configuration
-
- First lets start with the configuration of the live server, we will use the EAP 'all' configuration as
- our starting point. Since this version only supports shared store for failover we need to configure this in the
- hornetq-configuration.xml
- file like so:
-
-
- <shared-store>true</shared-store>
-
-
- Obviously this means that the location of the journal files etc will have to be configured to be some
- where
- where
- this lives backup can access. You may change the lives configuration in
- hornetq-configuration.xml
- to
- something like:
-
-
- <large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
- <bindings-directory>/media/shared/data/bindings</bindings-directory>
- <journal-directory>/media/shared/data/journal</journal-directory>
- <paging-directory>/media/shared/data/paging</paging-directory>
-
-
- How these paths are configured will of course depend on your network settings or file system.
-
-
- Now we need to configure how remote JMS clients will behave if the server is shutdown in a normal
- fashion.
- By
- default
- Clients will not failover if the live server is shutdown. Depending on there connection factory settings
- they will either fail or try to reconnect to the live server.
-
- If you want clients to failover on a normal server shutdown the you must configure the
- failover-on-shutdown
- flag to true in the
- hornetq-configuration.xml
- file like so:
-
-
- <failover-on-shutdown>false</failover-on-shutdown>
-
- Don't worry if you have this set to false (which is the default) but still want failover to occur,
- simply
- kill
- the
- server process directly or call
- forceFailover
- via jmx or the admin console on the core server object.
-
-
- No lets look at how to create and configure a backup server on the same node, lets assume that this
- backups
- live
- server is configured identically to the live server on this node for simplicities sake.
-
-
- Firstly we need to define a new HornetQ Server that EAP will deploy. We do this by creating a new
- hornetq-jboss-beans.xml
- configuration. We will place this under a new directory
- hornetq-backup1
- which will need creating
- in the
- deploy
- directory but in reality it doesn't matter where this is put. This will look like:
-
-
- <?xml version="1.0" encoding="UTF-8"?>
+
+ Live Server Configuration
+
+ First let's start with the configuration of the live server; we will use the EAP 'all' configuration as
+ our starting point. Since this version only supports shared store for failover we need to configure
+ this in the
+ hornetq-configuration.xml
+ file like so:
+
+
+ <shared-store>true</shared-store>
+
+
+ Obviously this means that the location of the journal files etc. will have to be configured to be
+ somewhere
+ that this live server's backup can access. You may change the live server's configuration in
+ hornetq-configuration.xml
+ to
+ something like:
+
+
+ <large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
+ <bindings-directory>/media/shared/data/bindings</bindings-directory>
+ <journal-directory>/media/shared/data/journal</journal-directory>
+ <paging-directory>/media/shared/data/paging</paging-directory>
+
+
+ How these paths are configured will of course depend on your network settings or file system.
+
+
+ Now we need to configure how remote JMS clients will behave if the server is shut down in a normal
+ fashion.
+ By
+ default
+ clients will not fail over if the live server is shut down. Depending on their connection factory
+ settings
+ they will either fail or try to reconnect to the live server.
+
+ If you want clients to fail over on a normal server shutdown then you must configure the
+ failover-on-shutdown
+ flag to true in the
+ hornetq-configuration.xml
+ file like so:
+
+
+ <failover-on-shutdown>true</failover-on-shutdown>
+
+ Don't worry if you have this set to false (which is the default) but still want failover to occur;
+ simply
+ kill
+ the
+ server process directly or call
+ forceFailover
+ via JMX or the admin console on the core server object.
+
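+ As an aside, here is a minimal, illustrative Java sketch (not part of the shipped configuration) of
+ invoking forceFailover on the core server control through JMX from within the same JVM. The ObjectName
+ shown assumes the default org.hornetq JMX domain and default key properties, so it may need adjusting
+ for your installation; a remote client would obtain an MBeanServerConnection via a JMXConnector instead.
+
+ import java.lang.management.ManagementFactory;
+ import javax.management.MBeanServer;
+ import javax.management.ObjectName;
+
+ public class ForceFailoverExample
+ {
+    public static void main(String[] args) throws Exception
+    {
+       // In-process MBean server; use a JMXConnector for a remote server.
+       MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
+
+       // Assumed ObjectName for the core server control under the default jmx-domain.
+       ObjectName serverControl = ObjectName.getInstance("org.hornetq:module=Core,type=Server");
+
+       // forceFailover shuts the live server down so that its backup takes over.
+       mbeanServer.invoke(serverControl, "forceFailover", new Object[0], new String[0]);
+    }
+ }
+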
+ We also need to configure the connection factories used by the client to be HA. This is done by
+ adding
+ certain attributes to the connection factories in hornetq-jms.xml. Let's look at an
+ example:
+
+
+ <connection-factory name="NettyConnectionFactory">
+ <xa>true</xa>
+ <connectors>
+ <connector-ref connector-name="netty"/>
+ </connectors>
+ <entries>
+ <entry name="/ConnectionFactory"/>
+ <entry name="/XAConnectionFactory"/>
+ </entries>

- <deployment xmlns="urn:jboss:bean-deployer:2.0">
+ <ha>true</ha>
+ <!-- Pause 1 second between connect attempts -->
+ <retry-interval>1000</retry-interval>

- <!-- The core configuration -->
- <bean name="BackupConfiguration" class="org.hornetq.core.config.impl.FileConfiguration">
- <property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml</property>
- </bean>
+ <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
+ implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
+ pause is the same length -->
+ <retry-interval-multiplier>1.0</retry-interval-multiplier>

+ <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
+ <reconnect-attempts>-1</reconnect-attempts>
+ </connection-factory>

- <!-- The core server -->
- <bean name="BackupHornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
- <constructor>
- <parameter>
- <inject bean="BackupConfiguration"/>
- </parameter>
- <parameter>
- <inject bean="MBeanServer"/>
- </parameter>
- <parameter>
- <inject bean="HornetQSecurityManager"/>
- </parameter>
- </constructor>
- <start ignored="true"/>
- <stop ignored="true"/>
- </bean>
+
+ We have added the following attributes to the connection factory used by the client (a client-side
+ sketch of how these behave follows the list below):
+
+
+
+ ha
+ - This tells the client it supports HA and must always be true for failover
+ to occur
+
+
+
+
+ retry-interval
+ - this is how long the client will wait after each unsuccessful
+ reconnect to the server
+
+
+
+
+ retry-interval-multiplier
+ - is used to configure an exponential back-off for
+ reconnect attempts
+
+
+
+
+ reconnect-attempts
+ - how many reconnect attempts a client should make before failing;
+ -1 means unlimited.
+
+
+
+
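+ To illustrate what this means on the client side, here is a minimal Java sketch of a remote JMS client
+ using this connection factory. It assumes the factory is bound under /ConnectionFactory as in the example
+ above and that JNDI properties (provider URL, initial context factory) are supplied separately, for
+ example in a jndi.properties file pointing at the live server.
+
+ import javax.jms.Connection;
+ import javax.jms.ConnectionFactory;
+ import javax.jms.Session;
+ import javax.naming.InitialContext;
+
+ public class HAClientSketch
+ {
+    public static void main(String[] args) throws Exception
+    {
+       InitialContext ic = new InitialContext();
+
+       // Look up the HA connection factory defined in hornetq-jms.xml.
+       ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
+
+       Connection connection = cf.createConnection();
+       try
+       {
+          // Because ha is true and reconnect-attempts is -1, sessions created from this
+          // connection keep retrying (every retry-interval ms) and fail over to the backup
+          // if the live server dies or is shut down with failover-on-shutdown enabled.
+          Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+          connection.start();
+          // ... create producers/consumers on the session as usual ...
+       }
+       finally
+       {
+          connection.close();
+       }
+    }
+ }
+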
+
+ Backup Server Configuration
+
+ Now let's look at how to create and configure a backup server on the same EAP instance. This is running
+ on the same EAP instance as the live server from the previous chapter but is configured as the backup
+ for a live server running on a different EAP instance.
+
+
+ The first thing to mention is that the backup only needs a hornetq-jboss-beans.xml
+ and a hornetq-configuration.xml configuration file. This is because any JMS components
+ are created from the Journal when the backup server becomes live.
+
+
+ Firstly we need to define a new HornetQ Server that EAP will deploy. We do this by creating a new
+ hornetq-jboss-beans.xml
+ configuration. We will place this under a new directory
+ hornetq-backup1
+ which will need creating
+ in the
+ deploy
+ directory, but in reality it doesn't matter where this is put. This will look like:
+
+
+ <?xml version="1.0" encoding="UTF-8"?>

- <!-- The JMS server -->
- <bean name="BackupJMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
- <constructor>
- <parameter>
- <inject bean="BackupHornetQServer"/>
- </parameter>
- </constructor>
- </bean>
+ <deployment xmlns="urn:jboss:bean-deployer:2.0">

- </deployment>
-
-
- The first thing to notice is the BackupConfiguration bean. This is configured to pick up the
- configuration
- for
- the
- server which we will place in the same directory.
-
-
- After that we just configure a new HornetQ Server and JMS server.
-
-
+ <!-- The core configuration -->
+ <bean name="BackupConfiguration" class="org.hornetq.core.config.impl.FileConfiguration">
+ <property
+ name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml</property>
+ </bean>
+
+
+ <!-- The core server -->
+ <bean name="BackupHornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
+ <constructor>
+ <parameter>
+ <inject bean="BackupConfiguration"/>
+ </parameter>
+ <parameter>
+ <inject bean="MBeanServer"/>
+ </parameter>
+ <parameter>
+ <inject bean="HornetQSecurityManager"/>
+ </parameter>
+ </constructor>
+ <start ignored="true"/>
+ <stop ignored="true"/>
+ </bean>
+
+ <!-- The JMS server -->
+ <bean name="BackupJMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
+ <constructor>
+ <parameter>
+ <inject bean="BackupHornetQServer"/>
+ </parameter>
+ </constructor>
+ </bean>
+
+ </deployment>
+
- Notice that the names of the beans have been changed from that of the live servers configuration. This
- is
- so
- there is no clash. Obviously if you add more backup servers you will need to rename those as well,
- backup1,
- backup2 etc.
+ The first thing to notice is the BackupConfiguration bean. This is configured to pick up the
+ configuration
+ for
+ the
+ server which we will place in the same directory.
-
-
- Now lets add the server configuration in
- hornetq-configuration.xml
- and add it to the same directory
- deploy/hornetq-backup1
- and configure it like so:
-
-
- <configuration xmlns="urn:hornetq"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
+
+ After that we just configure a new HornetQ Server and JMS server.
+
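+ For readers who want to see roughly what EAP wires together here, the following is an illustrative,
+ non-normative Java sketch of the same assembly: a FileConfiguration pointing at the backup's
+ hornetq-configuration.xml, a HornetQServerImpl built from it, and a JMSServerManagerImpl wrapping the
+ server. The HornetQSecurityManager parameter type and its package are assumptions based on the
+ HornetQ 2.x API; in EAP the hornetq-jboss-beans.xml deployment above is the supported way to do this.
+
+ import java.lang.management.ManagementFactory;
+
+ import org.hornetq.core.config.impl.FileConfiguration;
+ import org.hornetq.core.server.HornetQServer;
+ import org.hornetq.core.server.impl.HornetQServerImpl;
+ import org.hornetq.jms.server.JMSServerManager;
+ import org.hornetq.jms.server.impl.JMSServerManagerImpl;
+ import org.hornetq.spi.core.security.HornetQSecurityManager;
+
+ public class BackupServerSketch
+ {
+    // Roughly what the BackupConfiguration, BackupHornetQServer and BackupJMSServerManager beans do.
+    public static JMSServerManager startBackup(String configUrl, HornetQSecurityManager securityManager) throws Exception
+    {
+       FileConfiguration configuration = new FileConfiguration();
+       configuration.setConfigurationUrl(configUrl); // e.g. .../deploy/hornetq-backup1/hornetq-configuration.xml
+       configuration.start();
+
+       HornetQServer server = new HornetQServerImpl(configuration,
+                                                    ManagementFactory.getPlatformMBeanServer(),
+                                                    securityManager);
+
+       JMSServerManager jmsServer = new JMSServerManagerImpl(server);
+       jmsServer.start(); // starts the underlying core server as well
+       return jmsServer;
+    }
+ }
+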
+
+
+ Notice that the names of the beans have been changed from that of the live server's configuration.
+ This
+ is
+ so
+ there is no clash. Obviously if you add more backup servers you will need to rename those as well,
+ backup1,
+ backup2 etc.
+
+
+
+ Now let's add the server configuration in
+ hornetq-configuration.xml
+ and add it to the same directory
+ deploy/hornetq-backup1
+ and configure it like so:
+
+
+ <configuration xmlns="urn:hornetq"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

- <jmx-domain>org.hornetq.backup1</jmx-domain>
+ <jmx-domain>org.hornetq.backup1</jmx-domain>

- <clustered>true</clustered>
+ <clustered>true</clustered>

- <backup>true</backup>
+ <backup>true</backup>

- <shared-store>true</shared-store>
+ <shared-store>true</shared-store>

- <allow-failback>true</allow-failback>
+ <allow-failback>true</allow-failback>

- <log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
+ <log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>

- <bindings-directory>${jboss.server.data.dir}/hornetq-backup/bindings</bindings-directory>
+ <bindings-directory>/media/shared/data/hornetq-backup/bindings</bindings-directory>

- <journal-directory>${jboss.server.data.dir}/hornetq-backup/journal</journal-directory>
+ <journal-directory>/media/shared/data/hornetq-backup/journal</journal-directory>

- <journal-min-files>10</journal-min-files>
+ <journal-min-files>10</journal-min-files>

- <large-messages-directory>${jboss.server.data.dir}/hornetq-backup/largemessages</large-messages-directory>
+ <large-messages-directory>/media/shared/data/hornetq-backup/largemessages</large-messages-directory>

- <paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
+ <paging-directory>/media/shared/data/hornetq-backup/paging</paging-directory>

- <connectors>
- <connector name="netty-connector">
- <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
- <param key="host" value="${jboss.bind.address:localhost}"/>
- <param key="port" value="${hornetq.remoting.netty.port:5446}"/>
- </connector>
+ <connectors>
+ <connector name="netty-connector">
+ <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
+ <param key="host" value="${jboss.bind.address:localhost}"/>
+ <param key="port" value="${hornetq.remoting.netty.port:5446}"/>
+ </connector>

- <!--The connetor to the live node that corresponds to this backup-->
- <connector name="my-live-connector">
- <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
- <param key="host" value="my-live-host"/>
- <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
- </connector>
+ <connector name="in-vm">
+ <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
+ <param key="server-id" value="${hornetq.server-id:0}"/>
+ </connector>

- <!--invm connector added by th elive server on this node, used by the bridges-->
- <connector name="in-vm">
- <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
- <param key="server-id" value="${hornetq.server-id:0}"/>
- </connector>
+ </connectors>

- </connectors>
+ <acceptors>
+ <acceptor name="netty">
+ <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+ <param key="host" value="${jboss.bind.address:localhost}"/>
+ <param key="port" value="${hornetq.remoting.netty.port:5446}"/>
+ </acceptor>
+ </acceptors>

- <acceptors>
- <acceptor name="netty">
- <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
- <param key="host" value="${jboss.bind.address:localhost}"/>
- <param key="port" value="${hornetq.remoting.netty.port:5446}"/>
- </acceptor>
- </acceptors>
+ <broadcast-groups>
+ <broadcast-group name="bg-group1">
+ <group-address>231.7.7.7</group-address>
+ <group-port>9876</group-port>
+ <broadcast-period>1000</broadcast-period>
+ <connector-ref>netty-connector</connector-ref>
+ </broadcast-group>
+ </broadcast-groups>

- <broadcast-groups>
- <broadcast-group name="bg-group1">
- <group-address>231.7.7.7</group-address>
- <group-port>9876</group-port>
- <broadcast-period>1000</broadcast-period>
- <connector-ref>netty-connector</connector-ref>
- </broadcast-group>
- </broadcast-groups>
+ <discovery-groups>
+ <discovery-group name="dg-group1">
+ <group-address>231.7.7.7</group-address>
+ <group-port>9876</group-port>
+ <refresh-timeout>60000</refresh-timeout>
+ </discovery-group>
+ </discovery-groups>

- <discovery-groups>
- <discovery-group name="dg-group1">
- <group-address>231.7.7.7</group-address>
- <group-port>9876</group-port>
- <refresh-timeout>60000</refresh-timeout>
- </discovery-group>
- </discovery-groups>
+ <cluster-connections>
+ <cluster-connection name="my-cluster">
+ <address>jms</address>
+ <connector-ref>netty-connector</connector-ref>
+ <discovery-group-ref discovery-group-name="dg-group1"/>
+ </cluster-connection>
+ </cluster-connections>

- <cluster-connections>
- <cluster-connection name="my-cluster">
- <address>jms</address>
- <connector-ref>netty-connector</connector-ref>
- <discovery-group-ref discovery-group-name="dg-group1"/>
- </cluster-connection>
- </cluster-connections>
+ <security-settings>
+ <security-setting match="#">
+ <permission type="createNonDurableQueue" roles="guest"/>
+ <permission type="deleteNonDurableQueue" roles="guest"/>
+ <permission type="consume" roles="guest"/>
+ <permission type="send" roles="guest"/>
+ </security-setting>
+ </security-settings>

- <!-- We need to create a core queue for the JMS queue explicitly because the bridge will be deployed
- before the JMS queue is deployed, so the first time, it otherwise won't find the queue -->
- <queues>
- <queue name="jms.queue.testQueue">
- <address>jms.queue.testQueue</address>
- </queue>
- </queues>
- <!-- We set-up a bridge that forwards from a the queue on this node to the same address on the live
- node.
- -->
- <bridges>
- <bridge name="testQueueBridge">
- <queue-name>jms.queue.testQueue</queue-name>
- <forwarding-address>jms.queue.testQueue</forwarding-address>
- <reconnect-attempts>-1</reconnect-attempts>
- <static-connectors>
- <connector-ref>in-vm</connector-ref>
- </static-connectors>
- </bridge>
- </bridges>
+ <address-settings>
+ <!--default for catch all-->
+ <address-setting match="#">
+ <dead-letter-address>jms.queue.DLQ</dead-letter-address>
+ <expiry-address>jms.queue.ExpiryQueue</expiry-address>
+ <redelivery-delay>0</redelivery-delay>
+ <max-size-bytes>10485760</max-size-bytes>
+ <message-counter-history-day-limit>10</message-counter-history-day-limit>
+ <address-full-policy>BLOCK</address-full-policy>
+ </address-setting>
+ </address-settings>

- <security-settings>
- <security-setting match="#">
- <permission type="createNonDurableQueue" roles="guest"/>
- <permission type="deleteNonDurableQueue" roles="guest"/>
- <permission type="consume" roles="guest"/>
- <permission type="send" roles="guest"/>
- </security-setting>
- </security-settings>
+ </configuration>

- <address-settings>
- <!--default for catch all-->
- <address-setting match="#">
- <dead-letter-address>jms.queue.DLQ</dead-letter-address>
- <expiry-address>jms.queue.ExpiryQueue</expiry-address>
- <redelivery-delay>0</redelivery-delay>
- <max-size-bytes>10485760</max-size-bytes>
- <message-counter-history-day-limit>10</message-counter-history-day-limit>
- <address-full-policy>BLOCK</address-full-policy>
- </address-setting>
- </address-settings>
-
- </configuration>
-
-
-
- The first thing you can see is we have added a jmx-domain attribute, this is used when
- adding objects, such as the HornetQ server and JMS server to jmx, we change this from the default org.hornetq
- to avoid naming clashes with the live server
-
-
- The first important part of the configuration is to make sure that this server starts as a backup server not
- a live server, via the backup attribute.
-
-
- After that we have the same cluster configuration as live, that is clustered is true and
- shared-store is true. However you can see we have added a new configuration element
- allow-failback. When this is set to true then this backup server will automatically stop
- and fall back into backup node if failover occurs and the live server has become available. If false then
- the user will have to stop the server manually.
-
-
- Next we can see the configuration for the journal location, as in the live configuration this must point to
- the same directory as this backup's live server.
-
-
- Now we see the connectors configuration, we have 3 defined which are needed for the following
-
-
-
-
- netty-connector. This is the connector used to connect to this backup server once live.
-
-
-
-
- my-live-connector. This is the connector to the live server that this backup is paied to.
- It is used by the cluster connection to announce its presence as a backup and to form the cluster when
- this backup becomes live. In reality it doesn't matter what connector the cluster connection uses, it
- could actually use the invm connector and broadcast its presence via the server on this node if we wanted.
-
-
-
-
- in-vm. This is the invm connector that is created by the live server on the same
- node. We will use this to create a bridge to the live server to forward messages to.
-
-
-
- After that you will see the acceptors defined, This is the acceptor where clients will reconnect.
-
- The Broadcast groups, Discovery group and cluster configurations are as per normal, details of these
- can be found in the HornetQ user manual.
-
-
- The next part is of interest, here we define a list of queues and bridges. These must match any queues
- and addresses used by MDB's in the live servers configuration. At this point these must be statically
- defined but this may change in future versions. Basically fow every queue or topic definition you need a
- queue configuration using the correct prefix jms.queue(topic) if using jm and a bridge
- definition that handles the forwarding of any message.
-
-
+
- There is no such thing as a topic in core HornetQ, this is basically just an address so we need to create
- a queue that matches the jms address, that is, jms.topic.testTopic.
+ The first thing you can see is we have added a
+ jmx-domain
+ attribute, which is used when
+ adding objects, such as the HornetQ server and JMS server, to JMX. We change this from the default
+ org.hornetq
+ to avoid naming clashes with the live server.
-
+
+ The first important part of the configuration is to make sure that this server starts as a backup
+ server, not
+ a live server, via the
+ backup
+ attribute.
+
+
+ After that we have the same cluster configuration as live, that is
+ clustered
+ is true and
+ shared-store
+ is true. However you can see we have added a new configuration element
+ allow-failback. When this is set to true then this backup server will automatically
+ stop
+ and fall back into backup mode if failover has occurred and the live server has become available. If false
+ then
+ the user will have to stop the server manually.
+
+
+ Next we can see the configuration for the journal location; as in the live configuration this must
+ point to
+ the same directory as this backup's live server.
+
+
+ Now we see the connectors configuration; we have two defined, which are needed for the following
+
+
+
+
+ netty-connector.
+ This is the connector used to connect to this backup server once live.
+
+
+
+ After that you will see the acceptors defined. This is the acceptor where clients will reconnect.
+
+
+ The broadcast group, discovery group and cluster configurations are as per normal; details of these
+ can be found in the HornetQ user manual.
+
+
+ When the backup becomes live it will not be servicing any JEE components on this EAP instance. Instead any
+ existing messages will be redistributed around the cluster and new messages forwarded to and from the backup
+ to service any remote clients it has (if it has any).
+
+
+
+ Configuring multiple backups
+
+ In this instance we have assumed that there are only 2 nodes, where each node has a backup for the other
+ node. However you may want to configure a server to have multiple backup nodes. For example you may want
+ 3 nodes where each node has 2 backups, one for each of the other 2 live servers. For this you would simply
+ copy the backup configuration and make sure you do the following:
+
+
+
+
+ Make sure that you give all the beans in the hornetq-jboss-beans.xml configuration
+ file a unique name, i.e. rename them for each backup (backup1, backup2 and so on).
+
+
+
+
+
+ Running the shipped example
+
+ EAP ships with an example configuration for this topology. Look under extras/hornetq/resources/examples/symmetric-cluster-with-backups-colocated
+ and follow the readme.
+
+
--===============3348877411176982072==--