[hornetq-commits] JBoss hornetq SVN: r9114 - trunk/docs/user-manual/en.

do-not-reply at jboss.org do-not-reply at jboss.org
Wed Apr 14 09:03:49 EDT 2010


Author: timfox
Date: 2010-04-14 09:03:49 -0400 (Wed, 14 Apr 2010)
New Revision: 9114

Modified:
   trunk/docs/user-manual/en/clusters.xml
Log:
https://jira.jboss.org/jira/browse/HORNETQ-342

Modified: trunk/docs/user-manual/en/clusters.xml
===================================================================
--- trunk/docs/user-manual/en/clusters.xml	2010-04-14 12:28:10 UTC (rev 9113)
+++ trunk/docs/user-manual/en/clusters.xml	2010-04-14 13:03:49 UTC (rev 9114)
@@ -1,5 +1,4 @@
 <?xml version="1.0" encoding="UTF-8"?>
-
 <!-- ============================================================================= -->
 <!-- Copyright © 2009 Red Hat, Inc. and others.                                    -->
 <!--                                                                               -->
@@ -17,26 +16,24 @@
 <!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent        -->
 <!-- permitted by applicable law.                                                  -->
 <!-- ============================================================================= -->
-
 <chapter id="clusters">
-    <title>Clusters</title>    
+    <title>Clusters</title>
     <section>
         <title>Clusters Overview</title>
-        <para>HornetQ clusters allow groups of HornetQ servers to be grouped
-            together in order to share message processing load. Each active node in the cluster is
-            an active HornetQ server which manages its own messages and handles its own
-            connections. A server must be configured to be clustered, you will need to set the
-                <literal>clustered</literal> element in the <literal>hornetq-configuration.xml</literal>
-            configuration file to <literal>true</literal>, this is <literal>false</literal> by
-            default.</para>
+        <para>HornetQ clusters allow groups of HornetQ servers to share message processing load.
+            Each active node in the cluster is an active HornetQ server which manages its own
+            messages and handles its own connections. A server must be configured to be clustered:
+            set the <literal>clustered</literal> element in the
+            <literal>hornetq-configuration.xml</literal> configuration file to
+            <literal>true</literal>; it is <literal>false</literal> by default.</para>
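Concretely, enabling clustering is a one-element change in <literal>hornetq-configuration.xml</literal> (a minimal fragment, shown here outside its surrounding configuration element):

```xml
<!-- hornetq-configuration.xml: enable clustering on this server -->
<clustered>true</clustered>
```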
         <para>The cluster is formed by each node declaring <emphasis>cluster connections</emphasis>
-            to other nodes in the core configuration file <literal>hornetq-configuration.xml</literal>.
-            When a node forms a cluster connection to another node, internally it creates a <emphasis>core
-            bridge</emphasis> (as described in <xref
-                linkend="core-bridges" />) connection between it and the other node,
-            this is done transparently behind the scenes - you don't have to declare an explicit
-            bridge for each node. These cluster connections allow messages to flow between the nodes
-            of the cluster to balance load.</para>
+            to other nodes in the core configuration file <literal
+                >hornetq-configuration.xml</literal>. When a node forms a cluster connection to
+            another node, internally it creates a <emphasis>core bridge</emphasis> (as described in
+                <xref linkend="core-bridges"/>) connection between it and the other node. This is
+            done transparently behind the scenes - you don't have to declare an explicit bridge for
+            each node. These cluster connections allow messages to flow between the nodes of the
+            cluster to balance load.</para>
         <para>Nodes can be connected together to form a cluster in many different topologies; we
             will discuss a couple of the more common topologies later in this chapter.</para>
         <para>We'll also discuss client-side load balancing, where we can balance client connections
@@ -74,19 +71,21 @@
             <para>A broadcast group is the means by which a server broadcasts connectors over the
                 network. A connector defines a way in which a client (or other server) can make
                 connections to the server. For more information on what a connector is, please see
-                    <xref linkend="configuring-transports" />.</para>
+                    <xref linkend="configuring-transports"/>.</para>
             <para>The broadcast group takes a set of connector pairs; each pair contains
                 connection settings for a live and (optional) backup server and broadcasts them on
                 the network. It also defines the UDP address and port settings. </para>
             <para>Broadcast groups are defined in the server configuration file <literal
-                    >hornetq-configuration.xml</literal>. There can be many broadcast groups per HornetQ
-                server. All broadcast groups must be defined in a <literal
+                    >hornetq-configuration.xml</literal>. There can be many broadcast groups per
+                HornetQ server. All broadcast groups must be defined in a <literal
                     >broadcast-groups</literal> element.</para>
             <para>Let's take a look at an example broadcast group from <literal
                     >hornetq-configuration.xml</literal>:</para>
             <programlisting>&lt;broadcast-groups>
-   &lt;broadcast-group name="my-broadcast-group">
-      &lt;local-bind-port>54321&lt;/local-bind-port>
+   &lt;broadcast-group name="my-broadcast-group">
+      &lt;local-bind-address>172.16.9.3&lt;/local-bind-address>
+      &lt;local-bind-port>5432&lt;/local-bind-port>
       &lt;group-address>231.7.7.7&lt;/group-address>
       &lt;group-port>9876&lt;/group-port>
       &lt;broadcast-period>1000&lt;/broadcast-period>
@@ -103,18 +102,19 @@
                         have a unique name. </para>
                 </listitem>
                 <listitem>
-                    <para><literal>local-bind-address</literal>. This is the local bind
-                        address that the datagram socket is bound to. If you have multiple network
-                        interfaces on your server, you would specify which one you wish to use for
-                        broadcasts by setting this property. If this property is not specified then
-                        the socket will be bound to the wildcard address, an IP address chosen by 
-                        the kernel.</para>
+                    <para><literal>local-bind-address</literal>. This is the local bind address that
+                        the datagram socket is bound to. If you have multiple network interfaces on
+                        your server, you would specify which one you wish to use for broadcasts by
+                        setting this property. If this property is not specified then the socket
+                        will be bound to the wildcard address, an IP address chosen by the
+                        kernel.</para>
                 </listitem>
                 <listitem>
                     <para><literal>local-bind-port</literal>. If you want to specify a local port to
                         which the datagram socket is bound you can specify it here. Normally you
                         would just use the default value of <literal>-1</literal> which signifies
-                        that an anonymous port should be used.</para>
+                        that an anonymous port should be used. This parameter is always specified
+                        in conjunction with <literal>local-bind-address</literal>.</para>
                 </listitem>
                 <listitem>
                     <para><literal>group-address</literal>. This is the multicast address to which
@@ -135,11 +135,12 @@
                 <listitem>
                     <para><literal>connector-ref</literal>. This specifies the connector and
                         optional backup connector that will be broadcasted (see <xref
-                            linkend="configuring-transports" /> for more information on
-                        connectors). The connector to be broadcasted is specified by the <literal
+                            linkend="configuring-transports"/> for more information on connectors).
+                        The connector to be broadcasted is specified by the <literal
                             >connector-name</literal> attribute, and the backup connector to be
-                        broadcasted is specified by the <literal>backup-connector</literal> attribute.
-                        The <literal>backup-connector</literal> attribute is optional.</para>
+                        broadcasted is specified by the <literal>backup-connector</literal>
+                        attribute. The <literal>backup-connector</literal> attribute is
+                        optional.</para>
                 </listitem>
             </itemizedlist>
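The broadcast-group parameters above can be summarized in a small holder class. This is an illustrative sketch only: the class name `BroadcastGroupConfig` is hypothetical and not part of the HornetQ API; the defaults encode only what the text states (port `-1` means an anonymous port, an unset bind address means the kernel picks the wildcard address).

```java
// Hypothetical summary of the <broadcast-group> parameters described above.
// Illustrative only; this class is NOT part of the HornetQ API.
class BroadcastGroupConfig {
    String name;                    // required; must be unique per server
    String localBindAddress = null; // optional; null = wildcard address chosen by the kernel
    int localBindPort = -1;         // optional; -1 = an anonymous port
    String groupAddress;            // required; multicast address to broadcast to, e.g. 231.7.7.7
    int groupPort;                  // required; UDP port to broadcast to, e.g. 9876
    long broadcastPeriod;           // milliseconds between broadcasts, e.g. 1000
    String connectorName;           // connector to broadcast
    String backupConnectorName;     // optional backup connector
}
```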
         </section>
@@ -168,12 +169,13 @@
         <section>
             <title>Defining Discovery Groups on the Server</title>
             <para>For cluster connections, discovery groups are defined in the server side
-                configuration file <literal>hornetq-configuration.xml</literal>. All discovery groups
-                must be defined inside a <literal>discovery-groups</literal> element. There can be
-                many discovery groups defined by HornetQ server. Let's look at an
+                configuration file <literal>hornetq-configuration.xml</literal>. All discovery
+                groups must be defined inside a <literal>discovery-groups</literal> element. There
+                can be many discovery groups defined per HornetQ server. Let's look at an
                 example:</para>
             <programlisting>&lt;discovery-groups>
    &lt;discovery-group name="my-discovery-group">
+      &lt;local-bind-address>172.16.9.7&lt;/local-bind-address>
       &lt;group-address>231.7.7.7&lt;/group-address>
       &lt;group-port>9876&lt;/group-port>
       &lt;refresh-timeout>10000&lt;/refresh-timeout>
@@ -186,6 +188,11 @@
                         name per server.</para>
                 </listitem>
                 <listitem>
+                    <para><literal>local-bind-address</literal>. If you are running with multiple
+                    network interfaces on the same machine, you may want the discovery group to
+                    listen only on a specific interface. To do this you can specify the interface
+                    address with this parameter. This parameter is optional.</para>
+                </listitem>
+                <listitem>
                    <para><literal>group-address</literal>. This is the multicast IP address of the
                         group to listen on. It should match the <literal>group-address</literal> in
                         the broadcast group that you wish to listen from. This parameter is
@@ -211,9 +218,9 @@
         </section>
         <section id="clusters-discovery.groups.clientside">
             <title>Discovery Groups on the Client Side</title>
-            <para>Let's discuss how to configure a HornetQ client to use discovery to
-                discover a list of servers to which it can connect. The way to do this differs
-                depending on whether you're using JMS or the core API.</para>
+            <para>Let's discuss how to configure a HornetQ client to use discovery to discover a
+                list of servers to which it can connect. The way to do this differs depending on
+                whether you're using JMS or the core API.</para>
             <section>
                 <title>Configuring client discovery using JMS</title>
                 <para>If you're using JMS and you're also using the JMS Service on the server to
@@ -271,8 +278,7 @@
                     ClientSession session1 = factory.createClientSession(...); ClientSession
                     session2 = factory.createClientSession(...);
                 
-                </programlisting>
-                </para>
+                </programlisting></para>
                <para>The <literal>refresh-timeout</literal> can be set directly on the session
                    factory by using the setter method
                    <literal>setDiscoveryRefreshTimeout()</literal> if you want to change the
                    default value.</para>
@@ -288,12 +294,12 @@
     </section>
     <section>
         <title>Server-Side Message Load Balancing</title>
-        <para>If cluster connections are defined between nodes of a cluster, then HornetQ
-            will load balance messages arriving at a particular node from a client.</para>
+        <para>If cluster connections are defined between nodes of a cluster, then HornetQ will load
+            balance messages arriving at a particular node from a client.</para>
         <para>Let's take a simple example of a cluster of four nodes A, B, C, and D arranged in a
-                <emphasis>symmetric cluster</emphasis> (described in <xref linkend="symmetric-cluster" />).
-                 We have a queue called
-                <literal>OrderQueue</literal> deployed on each node of the cluster.</para>
+                <emphasis>symmetric cluster</emphasis> (described in <xref
+                linkend="symmetric-cluster"/>). We have a queue called <literal>OrderQueue</literal>
+            deployed on each node of the cluster.</para>
        <para>We have client Ca connected to node A, sending orders to the server. We also have
             order processor clients Pa, Pb, Pc, and Pd connected to each of the nodes A, B, C, D. If
             no cluster connection was defined on node A, then as order messages arrive on node A
@@ -307,20 +313,20 @@
         <para>For example, messages arriving on node A might be distributed in the following order
             between the nodes: B, D, C, A, B, D, C, A, B, D. The exact order depends on the order
             the nodes started up, but the algorithm used is round robin.</para>
-        <para>HornetQ cluster connections can be configured to always blindly load balance
-            messages in a round robin fashion irrespective of whether there are any matching
-            consumers on other nodes, but they can be a bit cleverer than that and also be
-            configured to only distribute to other nodes if they have matching consumers. We'll look
-            at both these cases in turn with some examples, but first we'll discuss configuring
-            cluster connections in general.</para>
+        <para>HornetQ cluster connections can be configured to always blindly load balance messages
+            in a round robin fashion irrespective of whether there are any matching consumers on
+            other nodes, but they can be a bit cleverer than that and also be configured to only
+            distribute to other nodes if they have matching consumers. We'll look at both these
+            cases in turn with some examples, but first we'll discuss configuring cluster
+            connections in general.</para>
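The round-robin order described above can be sketched as follows. `RoundRobinDistributor` is a hypothetical illustration of the distribution pattern, not HornetQ's actual cluster-connection code; the node order is fixed here, whereas in a real cluster it depends on node start-up order.

```java
import java.util.List;

// Illustrative round-robin chooser -- a sketch of the distribution order the
// text describes, NOT HornetQ's implementation.
class RoundRobinDistributor {
    private final List<String> nodes;
    private int pos = 0;

    RoundRobinDistributor(List<String> nodes) {
        this.nodes = nodes;
    }

    // Returns the node that receives the next message, then advances.
    String next() {
        String node = nodes.get(pos);
        pos = (pos + 1) % nodes.size();
        return node;
    }
}
```

With the starting order B, D, C, A this reproduces the sequence from the example: B, D, C, A, B, D, C, A, ...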
         <section id="clusters.cluster-connections">
             <title>Configuring Cluster Connections</title>
             <para>Cluster connections group servers into clusters so that messages can be load
                 balanced between the nodes of the cluster. Let's take a look at a typical cluster
                 connection. Cluster connections are always defined in <literal
-                    >hornetq-configuration.xml</literal> inside a <literal>cluster-connection</literal>
-                element. There can be zero or more cluster connections defined per HornetQ
-                server.</para>
+                    >hornetq-configuration.xml</literal> inside a <literal
+                    >cluster-connection</literal> element. There can be zero or more cluster
+                connections defined per HornetQ server.</para>
             <programlisting>
 &lt;cluster-connections&gt;
     &lt;cluster-connection name="my-cluster"&gt;
@@ -364,7 +370,7 @@
                         the same way as a bridge does.</para>
                     <para>This parameter determines the interval in milliseconds between retry
                         attempts. It has the same meaning as the <literal>retry-interval</literal>
-                        on a bridge (as described in <xref linkend="core-bridges" />).</para>
+                        on a bridge (as described in <xref linkend="core-bridges"/>).</para>
                     <para>This parameter is optional and its default value is <literal>500</literal>
                         milliseconds.</para>
                 </listitem>
@@ -394,11 +400,11 @@
                             <emphasis>not</emphasis> forward messages to other nodes if there are no
                             <emphasis>queues</emphasis> of the same name on the other nodes, even if
                         this parameter is set to <literal>true</literal>.</para>
-                    <para>If this is set to <literal>false</literal> then HornetQ will only
-                        forward messages to other nodes of the cluster if the address to which they
-                        are being forwarded has queues which have consumers, and if those consumers
-                        have message filters (selectors) at least one of those selectors must match
-                        the message.</para>
+                    <para>If this is set to <literal>false</literal> then HornetQ will only forward
+                        messages to other nodes of the cluster if the address to which they are
+                        being forwarded has queues which have consumers, and if those consumers have
+                        message filters (selectors) at least one of those selectors must match the
+                        message.</para>
                     <para>This parameter is optional, the default value is <literal
                         >false</literal>.</para>
                 </listitem>
@@ -407,14 +413,14 @@
                         nodes to which it might load balance a message, those nodes do not have to
                         be directly connected to it via a cluster connection. HornetQ can be
                         configured to also load balance messages to nodes which might be connected
-                        to it only indirectly with other HornetQ servers as intermediates in
-                        a chain.</para>
-                    <para>This allows HornetQ to be configured in more complex topologies
-                        and still provide message load balancing. We'll discuss this more later in
-                        this chapter.</para>
+                        to it only indirectly with other HornetQ servers as intermediates in a
+                        chain.</para>
+                    <para>This allows HornetQ to be configured in more complex topologies and still
+                        provide message load balancing. We'll discuss this more later in this
+                        chapter.</para>
                     <para>The default value for this parameter is <literal>1</literal>, which means
-                        messages are only load balanced to other HornetQ serves which are
-                        directly connected to this server. This parameter is optional.</para>
+                        messages are only load balanced to other HornetQ servers which are directly
+                        connected to this server. This parameter is optional.</para>
                 </listitem>
                 <listitem>
                     <para><literal>discovery-group-ref</literal>. This parameter determines which
@@ -425,28 +431,30 @@
         </section>
         <section id="clusters.clusteruser">
             <title>Cluster User Credentials</title>
-            
             <para>When creating connections between nodes of a cluster to form a cluster connection,
-                HornetQ uses a cluster user and cluster password which is defined in <literal>hornetq-configuration.xml</literal>:</para>
+                HornetQ uses a cluster user and cluster password which is defined in <literal
+                    >hornetq-configuration.xml</literal>:</para>
             <programlisting>
                 &lt;cluster-user&gt;HORNETQ.CLUSTER.ADMIN.USER&lt;/cluster-user&gt;
                 &lt;cluster-password&gt;CHANGE ME!!&lt;/cluster-password&gt;
             </programlisting>
-            <warning><para>It is imperative that these values are changed from their default, or remote clients will be able to make connections
-                to the server using the default values. If they are not
-                changed from the default, HornetQ will detect this and pester you with a warning on every
-                start-up.</para></warning>
+            <warning>
+                <para>It is imperative that these values are changed from their default, or remote
+                    clients will be able to make connections to the server using the default values.
+                    If they are not changed from the default, HornetQ will detect this and pester
+                    you with a warning on every start-up.</para>
+            </warning>
         </section>
     </section>
     <section id="clusters.client.loadbalancing">
         <title>Client-Side Load balancing</title>
-        <para>With HornetQ client-side load balancing, subsequent 
-            sessions created using a single session factory can be connected to different nodes of the
-            cluster. This allows sessions to spread smoothly across the nodes of a cluster and
-            not be "clumped" on any particular node.</para>
+        <para>With HornetQ client-side load balancing, subsequent sessions created using a single
+            session factory can be connected to different nodes of the cluster. This allows sessions
+            to spread smoothly across the nodes of a cluster and not be "clumped" on any particular
+            node.</para>
         <para>The load balancing policy to be used by the client factory is configurable. HornetQ
-            provides two out-of-the-box load balancing policies and you can also implement
-            your own and use that.</para>
+            provides two out-of-the-box load balancing policies and you can also implement your own
+            and use that.</para>
         <para>The out-of-the-box policies are</para>
         <itemizedlist>
             <listitem>
@@ -541,8 +549,8 @@
                         <literal>hornetq-configuration.xml</literal> which will be used as a live
                     connector. The <literal>backup-connector-name</literal> is optional, and if
                     specified it also references a connector defined in <literal
-                        >hornetq-configuration.xml</literal>. For more information on connectors please
-                    see <xref linkend="configuring-transports" />.</para>
+                        >hornetq-configuration.xml</literal>. For more information on connectors
+                    please see <xref linkend="configuring-transports"/>.</para>
                <para>The connection factory thus maintains a list of [connector, backup connector]
                    pairs; these pairs are then used by the client connection load balancing policy
                     on the client side when creating connections to the cluster.</para>
@@ -596,8 +604,8 @@
                 <para>In the above snippet we create a list of pairs of <literal
                         >TransportConfiguration</literal> objects. Each <literal
                         >TransportConfiguration</literal> object contains knowledge of how to make a
-                    connection to a specific server. For more information on this, please see 
-                    <xref linkend="configuring-transports" />.</para>
+                    connection to a specific server. For more information on this, please see <xref
+                        linkend="configuring-transports"/>.</para>
                 <para>A <literal>ClientSessionFactoryImpl</literal> instance is then created passing
                     the list of servers in the constructor. Any sessions subsequently created by
                     this factory will create sessions according to the client connection load
@@ -629,12 +637,13 @@
                     >connector-name</literal> attribute references a connector defined in <literal
                     >hornetq-configuration.xml</literal> which will be used as a live connector. The
                     <literal>backup-connector-name</literal> is optional, and if specified it also
-                references a connector defined in <literal>hornetq-configuration.xml</literal>. For more
-                information on connectors please see <xref linkend="configuring-transports" />.</para>
+                references a connector defined in <literal>hornetq-configuration.xml</literal>. For
+                more information on connectors please see <xref linkend="configuring-transports"
+                />.</para>
             <note>
-               <para>Due to a limitation in HornetQ 2.0.0, failover is not supported for clusters
-                  defined using a static set of nodes. To support failover over cluster nodes, they 
-                  must be configured to use a discovery group.</para>
+                <para>Due to a limitation in HornetQ 2.0.0, failover is not supported for clusters
+                    defined using a static set of nodes. To support failover over cluster nodes,
+                    they must be configured to use a discovery group.</para>
             </note>
         </section>
     </section>
@@ -648,17 +657,17 @@
             it doesn't solve: What happens if the consumers on a queue close after the messages have
             been sent to the node? If there are no consumers on the queue the message won't get
             consumed and we have a <emphasis>starvation</emphasis> situation.</para>
-        <para>This is where message redistribution comes in. With message redistribution HornetQ
-            can be configured to automatically <emphasis>redistribute</emphasis> messages
-            from queues which have no consumers back to other nodes in the cluster which do have
-            matching consumers.</para>
+        <para>This is where message redistribution comes in. With message redistribution HornetQ can
+            be configured to automatically <emphasis>redistribute</emphasis> messages from queues
+            which have no consumers back to other nodes in the cluster which do have matching
+            consumers.</para>
         <para>Message redistribution can be configured to kick in immediately after the last
             consumer on a queue is closed, or to wait a configurable delay after the last consumer
             on a queue is closed before redistributing. By default message redistribution is
             disabled.</para>
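The timing rule can be sketched as below. This is an illustrative helper, not HornetQ code; it assumes (consistent with redistribution being disabled by default) that a negative delay means "never redistribute".

```java
// Sketch of the redistribution-delay decision described above.
// Illustrative only; NOT HornetQ's implementation.
class RedistributionPolicy {
    // delayMillis < 0  : redistribution disabled (assumed convention)
    // delayMillis == 0 : redistribute immediately after the last consumer closes
    // delayMillis > 0  : redistribute once that many ms have passed
    static boolean shouldRedistribute(long delayMillis, long millisSinceLastConsumerClosed) {
        if (delayMillis < 0) {
            return false;
        }
        return millisSinceLastConsumerClosed >= delayMillis;
    }
}
```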
         <para>Message redistribution can be configured on a per address basis, by specifying the
             redistribution delay in the address settings, for more information on configuring
-            address settings, please see <xref linkend="queue-attributes" />.</para>
+            address settings, please see <xref linkend="queue-attributes"/>.</para>
         <para>Here's an address settings snippet from <literal>hornetq-configuration.xml</literal>
             showing how message redistribution is enabled for a set of queues:</para>
         <programlisting>&lt;address-settings>     
@@ -672,8 +681,8 @@
             to addresses that start with "jms.", so the above would enable instant (no delay)
             redistribution for all JMS queues and topic subscriptions.</para>
         <para>The attribute <literal>match</literal> can be an exact match or it can be a string
-            that conforms to the HornetQ wildcard syntax (described in <xref linkend="wildcard-syntax"
-            />).</para>
+            that conforms to the HornetQ wildcard syntax (described in <xref
+                linkend="wildcard-syntax"/>).</para>
         <para>The element <literal>redistribution-delay</literal> defines the delay in milliseconds
             after the last consumer is closed on a queue before redistributing messages from that
             queue to other nodes of the cluster which do have matching consumers. A delay of zero



More information about the hornetq-commits mailing list