Author: ataylor
Date: 2011-01-07 11:58:16 -0500 (Fri, 07 Jan 2011)
New Revision: 10110
Modified:
trunk/docs/eap-manual/en/clusters.xml
Log:
updated documentation
Modified: trunk/docs/eap-manual/en/clusters.xml
===================================================================
--- trunk/docs/eap-manual/en/clusters.xml 2011-01-07 16:49:22 UTC (rev 10109)
+++ trunk/docs/eap-manual/en/clusters.xml 2011-01-07 16:58:16 UTC (rev 10110)
@@ -54,329 +54,391 @@
</para>
<section>
<title>Configuration</title>
- <para>
- First lets start with the configuration of the live server, we will use
the EAP 'all' configuration as
- our starting point. Since this version only supports shared store for
failover we need to configure this in the
- <literal>hornetq-configuration.xml</literal>
- file like so:
- </para>
- <programlisting>
- <shared-store>true</shared-store>
- </programlisting>
- <para>
- Obviously this means that the location of the journal files etc will have
to be configured to be some
- where
- where
- this lives backup can access. You may change the lives configuration in
- <literal>hornetq-configuration.xml</literal>
- to
- something like:
- </para>
- <programlisting>
-
<large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
-
<bindings-directory>/media/shared/data/bindings</bindings-directory>
- <journal-directory>/media/shared/data/journal</journal-directory>
- <paging-directory>/media/shared/data/paging</paging-directory>
- </programlisting>
- <para>
- How these paths are configured will of course depend on your network
settings or file system.
- </para>
- <para>
- Now we need to configure how remote JMS clients will behave if the server
is shutdown in a normal
- fashion.
- By
- default
- Clients will not failover if the live server is shutdown. Depending on
there connection factory settings
- they will either fail or try to reconnect to the live server.
- </para>
- <para>If you want clients to failover on a normal server shutdown the
you must configure the
- <literal>failover-on-shutdown</literal>
- flag to true in the
- <literal>hornetq-configuration.xml</literal>
- file like so:
- </para>
- <programlisting>
- <failover-on-shutdown>false</failover-on-shutdown>
- </programlisting>
- <para>Don't worry if you have this set to false (which is the
default) but still want failover to occur,
- simply
- kill
- the
- server process directly or call
- <literal>forceFailover</literal>
- via jmx or the admin console on the core server object.
- </para>
- <para>
- No lets look at how to create and configure a backup server on the same
node, lets assume that this
- backups
- live
- server is configured identically to the live server on this node for
simplicities sake.
- </para>
- <para>
- Firstly we need to define a new HornetQ Server that EAP will deploy. We do
this by creating a new
- <literal>hornetq-jboss-beans.xml</literal>
- configuration. We will place this under a new directory
- <literal>hornetq-backup1</literal>
- which will need creating
- in the
- <literal>deploy</literal>
- directory but in reality it doesn't matter where this is put. This
will look like:
- </para>
- <programlisting>
- <?xml version="1.0" encoding="UTF-8"?>
+ <section>
+ <title>Live Server Configuration</title>
+ <para>
+ First, let's start with the configuration of the live server; we will use the EAP 'all' configuration as
+ our starting point. Since this version only supports a shared store for failover, we need to configure
+ this in the
+ <literal>hornetq-configuration.xml</literal>
+ file like so:
+ </para>
+ <programlisting>
+ <shared-store>true</shared-store>
+ </programlisting>
+ <para>
+ Obviously this means that the location of the journal files etc. will have to be configured
+ somewhere that this live server's backup can access. You may change the live server's configuration in
+ <literal>hornetq-configuration.xml</literal>
+ to something like:
+ </para>
+ <programlisting>
+
<large-messages-directory>/media/shared/data/large-messages</large-messages-directory>
+
<bindings-directory>/media/shared/data/bindings</bindings-directory>
+
<journal-directory>/media/shared/data/journal</journal-directory>
+
<paging-directory>/media/shared/data/paging</paging-directory>
+ </programlisting>
+ <para>
+ How these paths are configured will of course depend on your network
settings or file system.
+ </para>
+ <para>
+ Now we need to configure how remote JMS clients will behave if the server is shut down in a normal
+ fashion. By default, clients will not fail over if the live server is shut down in this way; depending on
+ their connection factory settings, they will either fail or try to reconnect to the live server.
+ </para>
+ <para>If you want clients to fail over on a normal server shutdown then you must configure the
+ <literal>failover-on-shutdown</literal>
+ flag to true in the
+ <literal>hornetq-configuration.xml</literal>
+ file like so:
+ </para>
+ <programlisting>
+ <failover-on-shutdown>true</failover-on-shutdown>
+ </programlisting>
+ <para>Don't worry if you have this set to false (which is the default) but still want failover to occur;
+ simply kill the server process directly, or call
+ <literal>forceFailover</literal>
+ on the core server object via JMX or the admin console.
+ </para>
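+ <para>
+ As a rough sketch of the JMX route (the service URL and object name here are
+ illustrative assumptions and will depend on how JMX is exposed on your EAP instance),
+ invoking <literal>forceFailover</literal> from a standalone Java client could look like:
+ </para>
+ <programlisting>
+ import javax.management.MBeanServerConnection;
+ import javax.management.ObjectName;
+ import javax.management.remote.JMXConnector;
+ import javax.management.remote.JMXConnectorFactory;
+ import javax.management.remote.JMXServiceURL;
+
+ public class ForceFailover
+ {
+    public static void main(final String[] args) throws Exception
+    {
+       // Illustrative JMX service URL; adjust host and port for your installation.
+       JMXServiceURL url =
+          new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1090/jmxrmi");
+       JMXConnector connector = JMXConnectorFactory.connect(url);
+       try
+       {
+          MBeanServerConnection mbsc = connector.getMBeanServerConnection();
+          // Assumed name of the HornetQ core server MBean; it is registered under
+          // the jmx-domain configured in hornetq-configuration.xml.
+          ObjectName server = new ObjectName("org.hornetq:module=Core,type=Server");
+          // forceFailover takes no arguments, so params and signature are null.
+          mbsc.invoke(server, "forceFailover", null, null);
+       }
+       finally
+       {
+          connector.close();
+       }
+    }
+ }
+ </programlisting>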
+ <para>We also need to configure the connection factories used by the client to be HA. This is done by
+ adding certain attributes to the connection factories in
+ <literal>hornetq-jms.xml</literal>. Let's look at an
+ example:
+ </para>
+ <programlisting>
+ <connection-factory name="NettyConnectionFactory">
+ <xa>true</xa>
+ <connectors>
+ <connector-ref connector-name="netty"/>
+ </connectors>
+ <entries>
+ <entry name="/ConnectionFactory"/>
+ <entry name="/XAConnectionFactory"/>
+ </entries>
- <deployment xmlns="urn:jboss:bean-deployer:2.0">
+ <ha>true</ha>
+ <!-- Pause 1 second between connect attempts -->
+ <retry-interval>1000</retry-interval>
- <!-- The core configuration -->
- <bean name="BackupConfiguration"
class="org.hornetq.core.config.impl.FileConfiguration">
- <property
name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml</property>
- </bean>
+ <!-- Multiply subsequent reconnect pauses by this multiplier.
This can be used to
+ implement an exponential back-off. For our purposes we just set to 1.0
so each reconnect
+ pause is the same length -->
+
<retry-interval-multiplier>1.0</retry-interval-multiplier>
+ <!-- Try reconnecting an unlimited number of times (-1 means
"unlimited") -->
+ <reconnect-attempts>-1</reconnect-attempts>
+ </connection-factory>
- <!-- The core server -->
- <bean name="BackupHornetQServer"
class="org.hornetq.core.server.impl.HornetQServerImpl">
- <constructor>
- <parameter>
- <inject bean="BackupConfiguration"/>
- </parameter>
- <parameter>
- <inject bean="MBeanServer"/>
- </parameter>
- <parameter>
- <inject bean="HornetQSecurityManager"/>
- </parameter>
- </constructor>
- <start ignored="true"/>
- <stop ignored="true"/>
- </bean>
+ </programlisting>
+ <para>We have added the following attributes to the connection
factory used by the client:</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>ha</literal>
+ - This tells the client it supports HA and must always be true for failover
+ to occur.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>retry-interval</literal>
+ - this is how long the client will wait after each unsuccessful
+ attempt before trying to reconnect to the server
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>retry-interval-multiplier</literal>
+ - is used to configure an exponential back off for
+ reconnect attempts
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>reconnect-attempts</literal>
+ - how many reconnect attempts a client should make before failing;
+ -1 means unlimited.
+ </para>
+ </listitem>
+ </itemizedlist>
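+ <para>
+ As a minimal client-side sketch (assuming the JNDI provider URL and initial context
+ factory are supplied via a <literal>jndi.properties</literal> file on the classpath,
+ and with an illustrative class name), a remote JMS client simply looks up this HA
+ connection factory and uses it as normal; reconnection and failover are transparent:
+ </para>
+ <programlisting>
+ import javax.jms.Connection;
+ import javax.jms.ConnectionFactory;
+ import javax.jms.ExceptionListener;
+ import javax.jms.JMSException;
+ import javax.naming.InitialContext;
+
+ public class HAClient
+ {
+    public static void main(final String[] args) throws Exception
+    {
+       InitialContext ic = new InitialContext();
+       // Matches the /ConnectionFactory entry configured above.
+       ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
+       Connection connection = cf.createConnection();
+       // With ha=true and reconnect-attempts=-1 the client retries indefinitely;
+       // this listener only fires if the connection is lost beyond recovery.
+       connection.setExceptionListener(new ExceptionListener()
+       {
+          public void onException(final JMSException e)
+          {
+             System.err.println("Connection to the cluster failed: " + e.getMessage());
+          }
+       });
+       connection.start();
+       // ... create sessions, producers and consumers as normal ...
+       connection.close();
+    }
+ }
+ </programlisting>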
+ </section>
+ <section>
+ <title>Backup Server Configuration</title>
+ <para>
+ Now let's look at how to create and configure a backup server on the same EAP instance. This backup
+ runs on the same EAP instance as the live server from the previous section but is configured as the
+ backup for a live server running on a different EAP instance.
+ </para>
+ <para>
+ The first thing to mention is that the backup only needs a
<literal>hornetq-jboss-beans.xml</literal>
+ and a <literal>hornetq-configuration.xml</literal> configuration file. This is because any JMS
+ components are created from the journal when the backup server becomes live.
+ </para>
+ <para>
+ Firstly we need to define a new HornetQ Server that EAP will deploy. We
do this by creating a new
+ <literal>hornetq-jboss-beans.xml</literal>
+ configuration file. We will place this under a new directory,
+ <literal>hornetq-backup1</literal>,
+ which will need to be created in the
+ <literal>deploy</literal>
+ directory, although in reality it doesn't matter where this is put. It will look like:
+ </para>
+ <programlisting>
+ <?xml version="1.0" encoding="UTF-8"?>
- <!-- The JMS server -->
- <bean name="BackupJMSServerManager"
class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
- <constructor>
- <parameter>
- <inject bean="BackupHornetQServer"/>
- </parameter>
- </constructor>
- </bean>
+ <deployment xmlns="urn:jboss:bean-deployer:2.0">
- </deployment>
- </programlisting>
- <para>
- The first thing to notice is the BackupConfiguration bean. This is
configured to pick up the
- configuration
- for
- the
- server which we will place in the same directory.
- </para>
- <para>
- After that we just configure a new HornetQ Server and JMS server.
- </para>
- <note>
+ <!-- The core configuration -->
+ <bean name="BackupConfiguration"
class="org.hornetq.core.config.impl.FileConfiguration">
+ <property
+
name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml</property>
+ </bean>
+
+
+ <!-- The core server -->
+ <bean name="BackupHornetQServer"
class="org.hornetq.core.server.impl.HornetQServerImpl">
+ <constructor>
+ <parameter>
+ <inject bean="BackupConfiguration"/>
+ </parameter>
+ <parameter>
+ <inject bean="MBeanServer"/>
+ </parameter>
+ <parameter>
+ <inject bean="HornetQSecurityManager"/>
+ </parameter>
+ </constructor>
+ <start ignored="true"/>
+ <stop ignored="true"/>
+ </bean>
+
+ <!-- The JMS server -->
+ <bean name="BackupJMSServerManager"
class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
+ <constructor>
+ <parameter>
+ <inject bean="BackupHornetQServer"/>
+ </parameter>
+ </constructor>
+ </bean>
+
+ </deployment>
+ </programlisting>
<para>
- Notice that the names of the beans have been changed from that of the
live servers configuration. This
- is
- so
- there is no clash. Obviously if you add more backup servers you will
need to rename those as well,
- backup1,
- backup2 etc.
+ The first thing to notice is the BackupConfiguration bean. This is configured to pick up the
+ configuration for the server, which we will place in the same directory.
</para>
- </note>
- <para>
- Now lets add the server configuration in
- <literal>hornetq-configuration.xml</literal>
- and add it to the same directory
- <literal>deploy/hornetq-backup1</literal>
- and configure it like so:
- </para>
- <programlisting>
- <configuration xmlns="urn:hornetq"
-
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
+ <para>
+ After that we just configure a new HornetQ Server and JMS server.
+ </para>
+ <note>
+ <para>
+ Notice that the names of the beans have been changed from those in the live server's configuration.
+ This is so there is no clash. Obviously if you add more backup servers you will need to rename those
+ as well: backup1, backup2 etc.
+ </para>
+ </note>
+ <para>
+ Now let's add the server configuration in
+ <literal>hornetq-configuration.xml</literal>
+ and add it to the same directory
+ <literal>deploy/hornetq-backup1</literal>
+ and configure it like so:
+ </para>
+ <programlisting>
+ <configuration xmlns="urn:hornetq"
+
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="urn:hornetq
/schema/hornetq-configuration.xsd">
- <jmx-domain>org.hornetq.backup1</jmx-domain>
+ <jmx-domain>org.hornetq.backup1</jmx-domain>
- <clustered>true</clustered>
+ <clustered>true</clustered>
- <backup>true</backup>
+ <backup>true</backup>
- <shared-store>true</shared-store>
+ <shared-store>true</shared-store>
- <allow-failback>true</allow-failback>
+ <allow-failback>true</allow-failback>
-
<log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
+
<log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
-
<bindings-directory>${jboss.server.data.dir}/hornetq-backup/bindings</bindings-directory>
+
<bindings-directory>/media/shared/data/hornetq-backup/bindings</bindings-directory>
-
<journal-directory>${jboss.server.data.dir}/hornetq-backup/journal</journal-directory>
+
<journal-directory>/media/shared/data/hornetq-backup/journal</journal-directory>
- <journal-min-files>10</journal-min-files>
+ <journal-min-files>10</journal-min-files>
-
<large-messages-directory>${jboss.server.data.dir}/hornetq-backup/largemessages</large-messages-directory>
+
<large-messages-directory>/media/shared/data/hornetq-backup/largemessages</large-messages-directory>
-
<paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
+
<paging-directory>/media/shared/data/hornetq-backup/paging</paging-directory>
- <connectors>
- <connector name="netty-connector">
-
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
- <param key="host"
value="${jboss.bind.address:localhost}"/>
- <param key="port"
value="${hornetq.remoting.netty.port:5446}"/>
- </connector>
+ <connectors>
+ <connector name="netty-connector">
+
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
+ <param key="host"
value="${jboss.bind.address:localhost}"/>
+ <param key="port"
value="${hornetq.remoting.netty.port:5446}"/>
+ </connector>
- <!--The connetor to the live node that corresponds to this
backup-->
- <connector name="my-live-connector">
-
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
- <param key="host" value="my-live-host"/>
- <param key="port"
value="${hornetq.remoting.netty.port:5445}"/>
- </connector>
+ <connector name="in-vm">
+
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
+ <param key="server-id"
value="${hornetq.server-id:0}"/>
+ </connector>
- <!--invm connector added by th elive server on this node, used by the
bridges-->
- <connector name="in-vm">
-
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
- <param key="server-id"
value="${hornetq.server-id:0}"/>
- </connector>
+ </connectors>
- </connectors>
+ <acceptors>
+ <acceptor name="netty">
+
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+ <param key="host"
value="${jboss.bind.address:localhost}"/>
+ <param key="port"
value="${hornetq.remoting.netty.port:5446}"/>
+ </acceptor>
+ </acceptors>
- <acceptors>
- <acceptor name="netty">
-
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
- <param key="host"
value="${jboss.bind.address:localhost}"/>
- <param key="port"
value="${hornetq.remoting.netty.port:5446}"/>
- </acceptor>
- </acceptors>
+ <broadcast-groups>
+ <broadcast-group name="bg-group1">
+ <group-address>231.7.7.7</group-address>
+ <group-port>9876</group-port>
+ <broadcast-period>1000</broadcast-period>
+ <connector-ref>netty-connector</connector-ref>
+ </broadcast-group>
+ </broadcast-groups>
- <broadcast-groups>
- <broadcast-group name="bg-group1">
- <group-address>231.7.7.7</group-address>
- <group-port>9876</group-port>
- <broadcast-period>1000</broadcast-period>
- <connector-ref>netty-connector</connector-ref>
- </broadcast-group>
- </broadcast-groups>
+ <discovery-groups>
+ <discovery-group name="dg-group1">
+ <group-address>231.7.7.7</group-address>
+ <group-port>9876</group-port>
+ <refresh-timeout>60000</refresh-timeout>
+ </discovery-group>
+ </discovery-groups>
- <discovery-groups>
- <discovery-group name="dg-group1">
- <group-address>231.7.7.7</group-address>
- <group-port>9876</group-port>
- <refresh-timeout>60000</refresh-timeout>
- </discovery-group>
- </discovery-groups>
+ <cluster-connections>
+ <cluster-connection name="my-cluster">
+ <address>jms</address>
+ <connector-ref>netty-connector</connector-ref>
+ <discovery-group-ref
discovery-group-name="dg-group1"/>
+ </cluster-connection>
+ </cluster-connections>
- <cluster-connections>
- <cluster-connection name="my-cluster">
- <address>jms</address>
- <connector-ref>netty-connector</connector-ref>
- <discovery-group-ref discovery-group-name="dg-group1"/>
- </cluster-connection>
- </cluster-connections>
+ <security-settings>
+ <security-setting match="#">
+ <permission type="createNonDurableQueue"
roles="guest"/>
+ <permission type="deleteNonDurableQueue"
roles="guest"/>
+ <permission type="consume"
roles="guest"/>
+ <permission type="send" roles="guest"/>
+ </security-setting>
+ </security-settings>
- <!-- We need to create a core queue for the JMS queue explicitly because
the bridge will be deployed
- before the JMS queue is deployed, so the first time, it otherwise won't find
the queue -->
- <queues>
- <queue name="jms.queue.testQueue">
- <address>jms.queue.testQueue</address>
- </queue>
- </queues>
- <!-- We set-up a bridge that forwards from a the queue on this node to
the same address on the live
- node.
- -->
- <bridges>
- <bridge name="testQueueBridge">
- <queue-name>jms.queue.testQueue</queue-name>
-
<forwarding-address>jms.queue.testQueue</forwarding-address>
- <reconnect-attempts>-1</reconnect-attempts>
- <static-connectors>
- <connector-ref>in-vm</connector-ref>
- </static-connectors>
- </bridge>
- </bridges>
+ <address-settings>
+ <!--default for catch all-->
+ <address-setting match="#">
+
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
+
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
+ <redelivery-delay>0</redelivery-delay>
+ <max-size-bytes>10485760</max-size-bytes>
+
<message-counter-history-day-limit>10</message-counter-history-day-limit>
+ <address-full-policy>BLOCK</address-full-policy>
+ </address-setting>
+ </address-settings>
- <security-settings>
- <security-setting match="#">
- <permission type="createNonDurableQueue"
roles="guest"/>
- <permission type="deleteNonDurableQueue"
roles="guest"/>
- <permission type="consume" roles="guest"/>
- <permission type="send" roles="guest"/>
- </security-setting>
- </security-settings>
+ </configuration>
- <address-settings>
- <!--default for catch all-->
- <address-setting match="#">
-
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
-
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
- <redelivery-delay>0</redelivery-delay>
- <max-size-bytes>10485760</max-size-bytes>
-
<message-counter-history-day-limit>10</message-counter-history-day-limit>
- <address-full-policy>BLOCK</address-full-policy>
- </address-setting>
- </address-settings>
-
- </configuration>
-
- </programlisting>
- <para>
- The first thing you can see is we have added a
<literal>jmx-domain</literal> attribute, this is used when
- adding objects, such as the HornetQ server and JMS server to jmx, we
change this from the default <literal>org.hornetq</literal>
- to avoid naming clashes with the live server
- </para>
- <para>
- The first important part of the configuration is to make sure that this
server starts as a backup server not
- a live server, via the <literal>backup</literal> attribute.
- </para>
- <para>
- After that we have the same cluster configuration as live, that is
<literal>clustered</literal> is true and
- <literal>shared-store</literal> is true. However you can see
we have added a new configuration element
- <literal>allow-failback</literal>. When this is set to true
then this backup server will automatically stop
- and fall back into backup node if failover occurs and the live server has
become available. If false then
- the user will have to stop the server manually.
- </para>
- <para>
- Next we can see the configuration for the journal location, as in the live
configuration this must point to
- the same directory as this backup's live server.
- </para>
- <para>
- Now we see the connectors configuration, we have 3 defined which are
needed for the following
- </para>
- <itemizedlist>
- <listitem>
- <para>
- <literal>netty-connector.</literal> This is the
connector used to connect to this backup server once live.
- </para>
- </listitem>
- <listitem>
- <para>
- <literal>my-live-connector.</literal> This is the
connector to the live server that this backup is paied to.
- It is used by the cluster connection to announce its presence as a
backup and to form the cluster when
- this backup becomes live. In reality it doesn't matter what
connector the cluster connection uses, it
- could actually use the invm connector and broadcast its presence via
the server on this node if we wanted.
- </para>
- </listitem>
- <listitem>
- <para>
- <literal>in-vm.</literal> This is the invm connector
that is created by the live server on the same
- node. We will use this to create a bridge to the live server to
forward messages to.
- </para>
- </listitem>
- </itemizedlist>
- <para>After that you will see the acceptors defined, This is the
acceptor where clients will reconnect.</para>
- <para>
- The Broadcast groups, Discovery group and cluster configurations are as
per normal, details of these
- can be found in the HornetQ user manual.
- </para>
- <para>
- The next part is of interest, here we define a list of queues and bridges.
These must match any queues
- and addresses used by MDB's in the live servers configuration. At this
point these must be statically
- defined but this may change in future versions. Basically fow every queue
or topic definition you need a
- queue configuration using the correct prefix
<literal>jms.queue(topic)</literal> if using jm and a bridge
- definition that handles the forwarding of any message.
- </para>
- <note>
+ </programlisting>
<para>
- There is no such thing as a topic in core HornetQ, this is basically
just an address so we need to create
- a queue that matches the jms address, that is,
<literal>jms.topic.testTopic</literal>.
+ The first thing you can see is that we have added a
+ <literal>jmx-domain</literal>
+ attribute. This is used when adding objects, such as the HornetQ server and JMS server, to JMX; we
+ change this from the default
+ <literal>org.hornetq</literal>
+ to avoid naming clashes with the live server.
</para>
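+ <para>
+ For example (a sketch; the object name pattern is an assumption based on the HornetQ
+ management MBeans), the live and backup servers then end up addressable under separate
+ JMX domains:
+ </para>
+ <programlisting>
+ import javax.management.ObjectName;
+
+ public class DomainNames
+ {
+    public static void main(final String[] args) throws Exception
+    {
+       // Live server, registered under the default org.hornetq domain.
+       ObjectName live = new ObjectName("org.hornetq:module=Core,type=Server");
+       // Backup server, registered under the jmx-domain configured above.
+       ObjectName backup = new ObjectName("org.hornetq.backup1:module=Core,type=Server");
+       System.out.println(live + " / " + backup);
+    }
+ }
+ </programlisting>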
- </note>
+ <para>
+ The first important part of the configuration is to make sure that this server starts as a backup
+ server, not a live server, via the
+ <literal>backup</literal>
+ attribute.
+ </para>
+ <para>
+ After that we have the same cluster configuration as live, that is
+ <literal>clustered</literal>
+ is true and
+ <literal>shared-store</literal>
+ is true. However, you can see we have added a new configuration element,
+ <literal>allow-failback</literal>. When this is set to true, this backup server will automatically
+ stop and fall back into backup mode if failover has occurred and the live server has become
+ available again. If false, the user will have to stop the server manually.
+ </para>
+ <para>
+ Next we can see the configuration for the journal location; as in the live configuration, this must
+ point to the same directory as this backup's live server.
+ </para>
+ <para>
+ Now we see the connectors configuration; we have two defined, which are needed for the following:
+ </para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>netty-connector.</literal>
+ This is the connector used to connect to this backup server once it is live.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>in-vm.</literal>
+ This is the invm connector created by the live server on this node.
+ </para>
+ </listitem>
+ </itemizedlist>
+ <para>After that you will see the acceptors defined. This is the acceptor where clients will
+ reconnect.
+ </para>
+ <para>
+ The broadcast group, discovery group and cluster connection configurations are as per normal;
+ details of these
+ can be found in the HornetQ user manual.
+ </para>
+ <para>
+ When the backup becomes live it will not be servicing any JEE components on this EAP instance.
+ Instead, any existing messages will be redistributed around the cluster, and new messages will be
+ forwarded to and from the backup to service any remote clients it has (if it has any).
+ </para>
+ </section>
+ <section>
+ <title>Configuring multiple backups</title>
+ <para>
+ In this instance we have assumed that there are only 2 nodes, where each node has a backup for the
+ other node. However, you may want to configure a server to have multiple backup nodes. For example,
+ you may want 3 nodes where each node has 2 backups, one for each of the other 2 live servers. For
+ this you would simply copy the backup configuration and make sure you do the following:
+ </para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ Make sure that you give all the beans in the <literal>hornetq-jboss-beans.xml</literal>
+ configuration file a unique name; for example, <literal>BackupConfiguration</literal> could
+ become <literal>Backup2Configuration</literal> and so on.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </section>
+ <section>
+ <title>Running the shipped example</title>
+ <para>
+ EAP ships with an example configuration for this topology. Look under
<literal>extras/hornetq/resources/examples/symmetric-cluster-with-backups-colocated</literal>
+ and follow the readme.
+ </para>
+ </section>
</section>
</section>
<section>