[hornetq-commits] JBoss hornetq SVN: r9197 - branches/HnetQ_323_cn/docs/user-manual/zh.

do-not-reply at jboss.org
Tue May 4 11:23:51 EDT 2010


Author: gaohoward
Date: 2010-05-04 11:23:51 -0400 (Tue, 04 May 2010)
New Revision: 9197

Modified:
   branches/HnetQ_323_cn/docs/user-manual/zh/ha.xml
Log:
done


Modified: branches/HnetQ_323_cn/docs/user-manual/zh/ha.xml
===================================================================
--- branches/HnetQ_323_cn/docs/user-manual/zh/ha.xml	2010-05-04 11:27:41 UTC (rev 9196)
+++ branches/HnetQ_323_cn/docs/user-manual/zh/ha.xml	2010-05-04 15:23:51 UTC (rev 9197)
@@ -17,81 +17,56 @@
 <!-- permitted by applicable law.                                                  -->
 <!-- ============================================================================= -->
 <chapter id="ha">
-    <title>High Availability and Failover</title>
-    <para>We define high availability as the <emphasis>ability for the system to continue
-            functioning after failure of one or more of the servers</emphasis>.</para>
-    <para>A part of high availability is <emphasis>failover</emphasis> which we define as the
-            <emphasis>ability for client connections to migrate from one server to another in event
-            of server failure so client applications can continue to operate</emphasis>.</para>
+    <title>High Availability and Failover</title>
+    <para>High availability is the <emphasis>ability of the system to keep operating when one or
+            even several of its servers fail</emphasis>.</para>
+    <para>A part of high availability is <emphasis>failover</emphasis>, which means the
+            <emphasis>ability of a client to migrate its connection to another working server when
+            the server it is currently connected to fails, so that it can continue to
+            operate</emphasis>.</para>
     <section>
-        <title>Live - Backup Pairs</title>
-        <para>HornetQ allows pairs of servers to be linked together as <emphasis>live -
-                backup</emphasis> pairs. In this release there is a single backup server for each
-            live server. A backup server is owned by only one live server. Backup servers are not
-            operational until failover occurs.</para>
-        <para>Before failover, only the live server is serving the HornetQ clients while the backup
-            server remains passive. When clients fail over to the backup server, the backup server
-            becomes active and starts to service the HornetQ clients.</para>
+        <title>Live - Backup Pairs</title>
+        <para>HornetQ allows two servers to be linked together as a <emphasis>live -
+            backup</emphasis> pair. Currently a live server can have one backup server, and a
+            backup server belongs to only one live server. Under normal conditions the live server
+            does the work, and the backup server only becomes operational when failover
+            occurs.</para>
+        <para>Before failover, the live server serves the HornetQ clients while the backup server
+            stays in standby. When clients connect to the backup server after failover, the backup
+            server becomes active and starts serving them.</para>
         <section id="ha.mode">
-            <title>HA modes</title>
-            <para>HornetQ provides two different modes for high availability, either by
-                    <emphasis>replicating data</emphasis> from the live server journal to the backup
-                server or using a <emphasis>shared store</emphasis> for both servers.</para>
+            <title>HA modes</title>
+            <para>HornetQ provides two modes of high availability: one mode <emphasis>replicates
+                data</emphasis> from the live server's journal to the backup server's journal; the
+                other uses a <emphasis>shared store</emphasis> for both servers.</para>
             <note>
-                <para>Only persistent message data will survive failover. Any non persistent message
-                    data will not be available after failover.</para>
+                <para>Only persistent messages survive failover. Any non persistent messages will
+                    be lost.</para>
             </note>
             <section id="ha.mode.replicated">
-                <title>Data Replication</title>
-                <para>In this mode, data stored in the HornetQ journal are replicated from the live
-                    server's journal to the backup server's journal. Note that we do not replicate
-                    the entire server state, we only replicate the journal and other persistent
-                    operations.</para>
-                <para>Replication is performed in an asynchronous fashion between live and backup
-                    server. Data is replicated one way in a stream, and responses that the data has
-                    reached the backup is returned in another stream. Pipelining replications and
-                    responses to replications in separate streams allows replication throughput to
-                    be much higher than if we synchronously replicated data and waited for a
-                    response serially in an RPC manner before replicating the next piece of
-                    data.</para>
-                <para>When the user receives confirmation that a transaction has committed, prepared
-                    or rolled back or a durable message has been sent, we can guarantee it has
-                    reached the backup server and been persisted.</para>
-                <para>Data replication introduces some inevitable performance overhead compared to
-                    non replicated operation, but has the advantage in that it requires no expensive
-                    shared file system (e.g. a SAN) for failover, in other words it is a <emphasis
-                        role="italic">shared-nothing</emphasis> approach to high
-                    availability.</para>
-                <para>Failover with data replication is also faster than failover using shared
-                    storage, since the journal does not have to be reloaded on failover at the
-                    backup node.</para>
+                <title>Data Replication</title>
+                <para>In this mode, data stored in the live server's journal is replicated to the
+                    backup server's journal. Note that we do not replicate the entire server state;
+                    only the journal and other persistent operations are replicated.</para>
+                <para>Replication is performed asynchronously. Data is replicated in one stream,
+                    and the responses confirming the replication are returned in another stream.
+                    This asynchronous approach gives a much higher throughput than synchronous
+                    replication.</para>
+                <para>When the user receives confirmation that a transaction has been committed,
+                    prepared or rolled back, or that a durable message has been sent, HornetQ
+                    guarantees that it has reached the backup server and been persisted.</para>
+                <para>Data replication inevitably introduces some performance overhead, but on the
+                    other hand it does not require an expensive shared file system (such as a SAN).
+                    It is effectively a <emphasis role="italic">shared-nothing</emphasis> approach
+                    to HA.</para>
+                <para>Failover with data replication is also faster than failover with a shared
+                    store, because the backup server does not have to reload the journal on
+                    failover.</para>
                 <graphic fileref="images/ha-replicated-store.png" align="center"/>
                 <section id="configuring.live.backup">
-                    <title>Configuration</title>
-                    <para>First, on the live server, in <literal
-                        >hornetq-configuration.xml</literal>, configure the live server with
-                        knowledge of its backup server. This is done by specifying a <literal
-                            >backup-connector-ref</literal> element. This element references a
-                        connector, also specified on the live server which specifies how to connect
-                        to the backup server.</para>
-                    <para>Here's a snippet from live server's <literal
-                            >hornetq-configuration.xml</literal> configured to connect to its backup
-                        server:</para>
+                    <title>Configuration</title>
+                    <para>First, configure the backup server in the live server's
+                        <literal>hornetq-configuration.xml</literal> file. This is done with the
+                        <literal>backup-connector-ref</literal> element, which references a
+                        connector, also defined on the live server, that specifies how to connect
+                        to the backup server.</para>
+                    <para>Here is an example snippet from
+                        <literal>hornetq-configuration.xml</literal>:</para>
                     <programlisting>
   &lt;backup-connector-ref connector-name="backup-connector"/>
 
   &lt;connectors>
-     &lt;!-- This connector specifies how to connect to the backup server    -->
-     &lt;!-- backup server is located on host "192.168.0.11" and port "5445" -->
+     &lt;!-- This connector specifies how to connect to the backup server    -->
+     &lt;!-- The backup server is on host "192.168.0.11" and port "5445"     -->
      &lt;connector name="backup-connector">
        &lt;factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory&lt;/factory-class>
        &lt;param key="host" value="192.168.0.11"/>
        &lt;param key="port" value="5445"/>
      &lt;/connector>
   &lt;/connectors></programlisting>
-                    <para>Secondly, on the backup server, we flag the server as a backup and make
-                        sure it has an acceptor that the live server can connect to. We also make
-                        sure the shared-store paramater is set to false:</para>
+                    <para>Secondly, on the backup server we set the backup flag and configure an
+                        acceptor so that the live server can connect to it. We also set the
+                        shared-store parameter to false.</para>
                     <programlisting>
   &lt;backup>true&lt;/backup>
   
@@ -105,280 +80,191 @@
      &lt;/acceptor>
   &lt;/acceptors>               
               </programlisting>
-                    <para>For a backup server to function correctly it's also important that it has
-                        the same set of bridges, predefined queues, cluster connections, broadcast
-                        groups and discovery groups as defined on the live node. The easiest way to
-                        ensure this is to copy the entire server side configuration from live to
-                        backup and just make the changes as specified above. </para>
+                    <para>For the backup server to work correctly, make sure it has the same
+                        bridges, predefined queues, cluster connections, broadcast groups and
+                        discovery groups as the live server. The easiest way is to copy the entire
+                        configuration from the live server and then make the changes described
+                        above. </para>
                 </section>
                 <section>
-                    <title>Synchronizing a Backup Node to a Live Node</title>
-                    <para>In order for live - backup pairs to operate properly, they must be
-                        identical replicas. This means you cannot just use any backup server that's
-                        previously been used for other purposes as a backup server, since it will
-                        have different data in its persistent storage. If you try to do so, you will
-                        receive an exception in the logs and the server will fail to start.</para>
-                    <para>To create a backup server for a live server that's already been used for
-                        other purposes, it's necessary to copy the <literal>data</literal> directory
-                        from the live server to the backup server. This means the backup server will
-                        have an identical persistent store to the backup server.</para>
-                    <para>One a live server has failed over onto a backup server, the old live
-                        server becomes invalid and cannot just be restarted. To resynchonize the
-                        pair as a working live backup pair again, both servers need to be stopped,
-                        the data copied from the live node to the backup node and restarted
-                        again.</para>
-                    <para>The next release of HornetQ will provide functionality for automatically
-                        synchronizing a new backup node to a live node without having to temporarily
-                        bring down the live node.</para>
+                    <title>Synchronizing a Backup Node to a Live Node</title>
+                    <para>For a live - backup pair to operate properly, the backup server must be
+                        in sync with the live server. This means you cannot simply use any existing
+                        server that has been used for other purposes as the backup, since its
+                        persistent store will contain different data. If you try to do so, the
+                        server will fail to start and an exception will appear in the logs.</para>
+                    <para>To configure an existing server as a backup server, copy the
+                        <literal>data</literal> directory of the live server over the corresponding
+                        directory of the backup server. This guarantees that the backup server's
+                        persistent data is identical to that of the live server.</para>
+                    <para>After failover has occurred, the backup server takes over and the old
+                        live server becomes invalid; simply restarting the live server is not
+                        enough. To resynchronize the pair as a working live - backup pair, both
+                        servers must be stopped, the data copied from the live node to the backup
+                        node, and then both started again.</para>
+                    <para>A future release of HornetQ will support automatic synchronization
+                        between the backup and live servers without having to stop the live
+                        server.</para>
                 </section>
             </section>
             <section id="ha.mode.shared">
-                <title>Shared Store</title>
-                <para>When using a shared store, both live and backup servers share the
-                        <emphasis>same</emphasis> journal using a shared file system. </para>
-                <para>When failover occurs and the backup server takes over, it will load the
-                    persistent storage from the shared file system and clients can connect to
-                    it.</para>
-                <para>This style of high availability differs from data replication in that it
-                    requires a shared file system which is accessible by both the live and backup
-                    nodes. Typically this will be some kind of high performance Storage Area Network
-                    (SAN). We do not recommend you use Network Attached Storage (NAS), e.g. NFS
-                    mounts to store any shared journal (NFS is slow).</para>
-                <para>The advantage of shared-store high availability is that no replication occurs
-                    between the live and backup nodes, this means it does not suffer any performance
-                    penalties due to the overhead of replication during normal operation.</para>
-                <para>The disadvantage of shared store replication is that it requires a shared file
-                    system, and when the backup server activates it needs to load the journal from
-                    the shared store which can take some time depending on the amount of data in the
-                    store.</para>
-                <para>If you require the highest performance during normal operation, have access to
-                    a fast SAN, and can live with a slightly slower failover (depending on amount of
-                    data), we recommend shared store high availability</para>
+                <title>Shared Store</title>
+                <para>With a shared store, the live and backup servers share the
+                      <emphasis>same</emphasis> journal, typically on a shared file system.</para>
+                <para>When failover occurs, the backup server takes over. It first loads the
+                    persistent data of the live server from the shared file system and then
+                    accepts client connections.</para>
+                <para>Unlike data replication, this approach requires a shared file system that
+                    both the live and backup servers can access. Typically this is some kind of
+                    high performance Storage Area Network (SAN). We do not recommend using Network
+                    Attached Storage (NAS), e.g. NFS mounts, to store the shared journal (mainly
+                    because it is slow).</para>
+                <para>The advantage of a shared store is that no replication takes place between
+                    the live and backup servers, so performance during normal operation is not
+                    affected.</para>
+                <para>The disadvantage is that it requires a shared file system, and when the
+                    backup server activates it must first load the journal from the shared store,
+                    which can take some time.</para>
+                <para>If you need the highest performance during normal operation, have access to
+                    a fast SAN, and can tolerate a somewhat slower failover (depending on the
+                    amount of data), we recommend shared-store high availability.</para>
                 <graphic fileref="images/ha-shared-store.png" align="center"/>
                 <section id="ha/mode.shared.configuration">
-                    <title>Configuration</title>
-                    <para>To configure the live and backup server to share their store, configure
-                        both <literal>hornetq-configuration.xml</literal>:</para>
+                    <title>Configuration</title>
+                    <para>To use the shared-store mode, make the following setting in the
+                        <literal>hornetq-configuration.xml</literal> file of both servers:</para>
                     <programlisting>
                   &lt;shared-store>true&lt;/shared-store>
                 </programlisting>
-                    <para>In order for live - backup pairs to operate properly with a shared store,
-                        both servers must have configured the location of journal directory to point
-                        to the <emphasis>same shared location</emphasis> (as explained in <xref
-                            linkend="configuring.message.journal"/>)</para>
-                    <para>If clients will use automatic failover with JMS, the live server will need
-                        to configure a connector to the backup server and reference it from its
-                            <literal>hornetq-jms.xml</literal> configuration as explained in <xref
-                            linkend="ha.automatic.failover"/>.</para>
+                    <para>In addition, the journal directory of both the live and backup servers
+                        must point to the <emphasis>same shared location</emphasis>
+                        (see <xref linkend="configuring.message.journal"/>).</para>
+                    <para>If clients use automatic failover with JMS, the live server needs to
+                        configure a connector to the backup server and reference it from its
+                        <literal>hornetq-jms.xml</literal> configuration, as explained in
+                        <xref linkend="ha.automatic.failover"/>.</para>
                 </section>
                 <section>
-                    <title>Synchronizing a Backup Node to a Live Node</title>
-                    <para>As both live and backup servers share the same journal, they do not need
-                        to be synchronized. However until, both live and backup servers are up and
-                        running, high-availability can not be provided with a single server. After
-                        failover, at first opportunity, stop the backup server (which is active) and
-                        restart the live and backup servers.</para>
-                    <para>In the next release of HornetQ we will provide functionality to
-                        automatically synchronize a new backup server with a running live server
-                        without having to temporarily bring the live server down.</para>
+                    <title>Synchronizing a Backup Node to a Live Node</title>
+                    <para>Since the live and backup servers share the same store, they do not need
+                        to be synchronized. However, both the live and backup servers must be up
+                        and running for high availability to be provided. Once failover has
+                        occurred, stop the (now active) backup server at the first opportunity and
+                        then restart the live and backup servers.</para>
+                    <para>A future release of HornetQ will support automatic synchronization
+                        without having to stop the server first.</para>
                 </section>
             </section>
         </section>
     </section>
     <section id="failover">
-        <title>Failover Modes</title>
-        <para>HornetQ defines two types of client failover:</para>
+        <title>Failover Modes</title>
+        <para>HornetQ defines two types of client failover:</para>
         <itemizedlist>
             <listitem>
-                <para>Automatic client failover</para>
+                <para>Automatic client failover</para>
             </listitem>
             <listitem>
-                <para>Application-level client failover</para>
+                <para>Application-level client failover</para>
             </listitem>
         </itemizedlist>
-        <para>HornetQ also provides 100% transparent automatic reattachment of connections to the
-            same server (e.g. in case of transient network problems). This is similar to failover,
-            except it's reconnecting to the same server and is discussed in <xref
-                linkend="client-reconnection"/></para>
-        <para>During failover, if the client has consumers on any non persistent or temporary
-            queues, those queues will be automatically recreated during failover on the backup node,
-            since the backup node will not have any knowledge of non persistent queues.</para>
+        <para>HornetQ also provides 100% transparent automatic reattachment of connections to the
+            same server (useful for transient network failures). This is similar to failover,
+            except that the client reconnects to the same server; see
+            <xref linkend="client-reconnection"/>.</para>
+        <para>During failover, if the client has consumers on any non persistent or temporary
+            queues, those queues are automatically recreated on the backup server, since the
+            backup server has no prior knowledge of non persistent queues.</para>
         <section id="ha.automatic.failover">
-            <title>Automatic Client Failover</title>
-            <para>HornetQ clients can be configured with knowledge of live and backup servers, so
-                that in event of connection failure at the client - live server connection, the
-                client will detect this and reconnect to the backup server. The backup server will
-                then automatically recreate any sessions and consumers that existed on each
-                connection before failover, thus saving the user from having to hand-code manual
-                reconnection logic.</para>
-            <para>HornetQ clients detect connection failure when it has not received packets from
-                the server within the time given by <literal>client-failure-check-period</literal>
-                as explained in section <xref linkend="connection-ttl"/>. If the client does not
-                receive data in good time, it will assume the connection has failed and attempt
-                failover.</para>
-            <para>HornetQ clients can be configured with the list of live-backup server pairs in a
-                number of different ways. They can be configured explicitly or probably the most
-                common way of doing this is to use <emphasis>server discovery</emphasis> for the
-                client to automatically discover the list. For full details on how to configure
-                server discovery, please see <xref linkend="clusters.server-discovery"/>.
-                Alternatively, the clients can explicitly specifies pairs of live-backup server as
-                explained in <xref linkend="clusters.static.servers"/>.</para>
-            <para>To enable automatic client failover, the client must be configured to allow
-                non-zero reconnection attempts (as explained in <xref linkend="client-reconnection"
-                />).</para>
-            <para>Sometimes you want a client to failover onto a backup server even if the live
-                server is just cleanly shutdown rather than having crashed or the connection failed.
-                To configure this you can set the property <literal
-                    >FailoverOnServerShutdown</literal> to true either on the <literal
-                    >HornetQConnectionFactory</literal> if you're using JMS or in the <literal
-                    >hornetq-jms.xml</literal> file when you define the connection factory, or if
-                using core by setting the property directly on the <literal
-                    >ClientSessionFactoryImpl</literal> instance after creation. The default value
-                for this property is <literal>false</literal>, this means that by default
-                    <emphasis>HornetQ clients will not failover to a backup server if the live
-                    server is simply shutdown cleanly.</emphasis></para>
+            <title>Automatic Client Failover</title>
+            <para>HornetQ clients can be configured with knowledge of the live and backup servers,
+                so that when the connection to the live server fails, the failure is detected
+                automatically and the client fails over to the backup server. The backup server
+                then automatically recreates all sessions and consumers that existed before
+                failover, saving the user from having to hand-code manual reconnection
+                logic.</para>
+            <para>A HornetQ client considers the connection to have failed when it has not
+                received packets from the server within the time given by
+                <literal>client-failure-check-period</literal> (explained in
+                <xref linkend="connection-ttl"/>). When the client considers the connection
+                failed, it attempts failover.</para>
+            <para>HornetQ clients can be configured with the list of live - backup server pairs in
+                several ways. The pairs can be specified explicitly, or, more commonly,
+                <emphasis>server discovery</emphasis> can be used so that the client discovers the
+                list automatically. For full details on configuring server discovery, see
+                <xref linkend="clusters.server-discovery"/>.
+                For how to explicitly specify the live - backup pairs, see the explanation in
+                <xref linkend="clusters.static.servers"/>.</para>
+            <para>To enable automatic client failover, the client must be configured with a
+                non-zero number of reconnection attempts (see the explanation in
+                <xref linkend="client-reconnection"/>).</para>
+            <para>Sometimes you want clients to fail over to the backup server even when the live
+                server is shut down cleanly. If you are using JMS, set the <literal
+                    >FailoverOnServerShutdown</literal> property of the <literal
+                    >HornetQConnectionFactory</literal> to true, or configure it in the <literal
+                    >hornetq-jms.xml</literal> file. If you are using the core API, set the same
+                property on the <literal>ClientSessionFactoryImpl</literal> instance after
+                creating it. The default value of this property is false, which means that if the
+                live server is shut down cleanly, <emphasis>clients will not fail over to the
+                backup server</emphasis>.</para>
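+            <para>As an illustration only, the following minimal JMS client sketch shows how these
+                settings might be applied programmatically. The setter names used below
+                (<literal>setReconnectAttempts</literal>, <literal>setRetryInterval</literal> and
+                <literal>setFailoverOnServerShutdown</literal>) are assumptions based on the
+                property names above; check them against the javadoc of your HornetQ
+                version:</para>
+            <programlisting>
+   // imports assumed: javax.naming.InitialContext, javax.jms.Connection,
+   //                  org.hornetq.jms.client.HornetQConnectionFactory
+
+   // assumed: the factory is defined in hornetq-jms.xml and bound in JNDI as "/ConnectionFactory"
+   InitialContext ic = new InitialContext();
+   HornetQConnectionFactory cf = (HornetQConnectionFactory) ic.lookup("/ConnectionFactory");
+
+   // automatic failover requires a non-zero number of reconnection attempts;
+   // -1 means retry forever, waiting one second between attempts
+   cf.setReconnectAttempts(-1);
+   cf.setRetryInterval(1000);
+
+   // also fail over when the live server is shut down cleanly (the default is false)
+   cf.setFailoverOnServerShutdown(true);
+
+   Connection connection = cf.createConnection();</programlisting>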
             <para>
                 <note>
-                    <para>By default, cleanly shutting down the server <emphasis role="bold">will
-                            not</emphasis> trigger failover on the client.</para>
-                    <para>Using CTRL-C on a HornetQ server or JBoss AS instance causes the server to
-                            <emphasis role="bold">cleanly shut down</emphasis>, so will not trigger
-                        failover on the client. </para>
-                    <para>If you want the client to failover when its server is cleanly shutdown
-                        then you must set the property <literal>FailoverOnServerShutdown</literal>
-                        to true</para>
+                    <para>By default, a clean shutdown of the server <emphasis role="bold">will
+                            not</emphasis> trigger failover on the client.</para>
+                    <para>Using CTRL-C on a HornetQ server or JBoss AS instance causes a clean
+                        shutdown, so it will not trigger failover on the client.</para>
+                    <para>If you want the client to fail over in this case, you must set the
+                        <literal>FailoverOnServerShutdown</literal> property to true.</para>
                 </note>
             </para>
-            <para>For examples of automatic failover with transacted and non-transacted JMS
-                sessions, please see <xref linkend="examples.transaction-failover"/> and <xref
-                    linkend="examples.non-transaction-failover"/>.</para>
+            <para>For examples of automatic failover with transacted and non-transacted JMS
+                sessions, see <xref linkend="examples.transaction-failover"/> and <xref
+                    linkend="examples.non-transaction-failover"/>.</para>
             <section id="ha.automatic.failover.noteonreplication">
-                <title>A Note on Server Replication</title>
-                <para>HornetQ does not replicate full server state betwen live and backup servers.
-                    When the new session is automatically recreated on the backup it won't have any
-                    knowledge of messages already sent or acknowledged in that session. Any
-                    in-flight sends or acknowledgements at the time of failover might also be
-                    lost.</para>
-                <para>By replicating full server state, theoretically we could provide a 100%
-                    transparent seamless failover, which would avoid any lost messages or
-                    acknowledgements, however this comes at a great cost: replicating the full
-                    server state (including the queues, session, etc.). This would require
-                    replication of the entire server state machine; every operation on the live
-                    server would have to replicated on the replica server(s) in the exact same
-                    global order to ensure a consistent replica state. This is extremely hard to do
-                    in a performant and scalable way, especially when one considers that multiple
-                    threads are changing the live server state concurrently.</para>
-                <para>It is possible to provide full state machine replication using
-                    techniques such as <emphasis role="italic">virtual synchrony</emphasis>, but
-                    this does not scale well and effectively serializes all operations to a single
-                    thread, dramatically reducing concurrency.</para>
-                <para>Other techniques for multi-threaded active replication exist such as
-                    replicating lock states or replicating thread scheduling but this is very hard
-                    to achieve at a Java level.</para>
-                <para>Consequently it xas decided it was not worth massively reducing performance
-                    and concurrency for the sake of 100% transparent failover. Even without 100%
-                    transparent failover, it is simple to guarantee <emphasis role="italic">once and
-                        only once</emphasis> delivery, even in the case of failure, by using a
-                    combination of duplicate detection and retrying of transactions. However this is
-                    not 100% transparent to the client code.</para>
+                <title>A Note on Server Replication</title>
+                <para>HornetQ does not replicate the full server state from the live server to the
+                    backup server. So when a session is recreated on the backup server it has no
+                    knowledge of messages already sent or acknowledged in that session. Any sends
+                    or acknowledgements in flight at the time of failover may also be lost.</para>
+                <para>In theory, replicating the full state would give 100% transparent failover
+                    with no loss of messages or acknowledgements. But this comes at a great cost:
+                    everything (queues, sessions and so on) would have to be replicated. That is,
+                    the entire server state machine would be replicated, with every operation on
+                    the live server replicated to its backup in exactly the same global order.
+                    This is extremely hard to do with good performance and scalability, especially
+                    when multiple threads are changing the live server state concurrently.</para>
+                <para>Techniques such as <emphasis role="italic">virtual synchrony</emphasis> can
+                    be used to achieve full state replication, but they do not scale well and
+                    effectively serialize all operations onto a single thread, dramatically
+                    reducing concurrency.</para>
+                <para>Other techniques for multi-threaded active replication exist, such as
+                    replicating lock state or thread scheduling, but these are very hard to
+                    implement at the Java level.</para>
+                <para>The conclusion was therefore that massively sacrificing performance for 100%
+                    transparent failover is not worth it. Even without 100% transparent failover,
+                    once and only once delivery can easily be guaranteed in the event of failure
+                    by combining duplicate detection with retrying of transactions.</para>
             </section>
             <section id="ha.automatic.failover.blockingcalls">
-                <title>Handling Blocking Calls During Failover</title>
-                <para>If the client code is in a blocking call to the server, waiting for a response
-                    to continue its execution, when failover occurs, the new session will not have
-                    any knowledge of the call that was in progress. This call might otherwise hang
-                    for ever, waiting for a response that will never come.</para>
-                <para>To prevent this, HornetQ will unblock any blocking calls that were in progress
-                    at the time of failover by making them throw a <literal
-                        >javax.jms.JMSException</literal> (if using JMS), or a <literal
-                        >HornetQException</literal> with error code <literal
-                        >HornetQException.UNBLOCKED</literal>. It is up to the client code to catch
-                    this exception and retry any operations if desired.</para>
-                <para>If the method being unblocked is a call to commit(), or prepare(), then the
-                    transaction will be automatically rolled back and HornetQ will throw a <literal
-                        >javax.jms.TransactionRolledBackException</literal> (if using JMS), or a
-                        <literal>HornetQException</literal> with error code <literal
-                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> if using the core
-                    API.</para>
+                <title>Handling Blocking Calls During Failover</title>
+                <para>If the client is in the middle of a blocking call, waiting for a response
+                    from the server, when failover occurs, the newly created session knows nothing
+                    about that call. The client might therefore never receive a response and could
+                    stay blocked forever.</para>
+                <para>To prevent this, HornetQ unblocks any blocking calls in progress at the time
+                    of failover by making them throw a
+                    <literal>javax.jms.JMSException</literal> (if using JMS) or a <literal
+                        >HornetQException</literal> with error code <literal
+                        >HornetQException.UNBLOCKED</literal>. It is up to the client code to catch
+                    this exception and retry any operations if desired.</para>
+                <para>If the unblocked call is commit() or prepare(), the transaction is
+                    automatically rolled back and HornetQ throws a
+                    <literal>javax.jms.TransactionRolledBackException</literal> (if using JMS) or
+                    a <literal>HornetQException</literal> with error code <literal
+                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> (if using the core
+                    API).</para>
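+                <para>A minimal core API sketch of this pattern follows. The session, address and
+                    method names used here (e.g. <literal>createProducer</literal>,
+                    <literal>getCode()</literal>) are assumptions for illustration and should be
+                    checked against the javadoc of your HornetQ version:</para>
+                <programlisting>
+   // session is assumed to be an existing org.hornetq.api.core.client.ClientSession
+   ClientProducer producer = session.createProducer("queue.example");
+   ClientMessage message = session.createMessage(true);
+
+   try
+   {
+      producer.send(message);   // blocking call that may be unblocked by failover
+   }
+   catch (HornetQException e)
+   {
+      if (e.getCode() == HornetQException.UNBLOCKED)
+      {
+         // the call was unblocked because failover occurred;
+         // it is up to the application to decide whether to retry
+         producer.send(message);
+      }
+      else
+      {
+         throw e;
+      }
+   }</programlisting>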
             </section>
             <section id="ha.automatic.failover.transactions">
-                <title>Handling Failover With Transactions</title>
-                <para>If the session is transactional and messages have already been sent or
-                    acknowledged in the current transaction, then the server cannot be sure that
-                    messages sent or acknowledgements have not been lost during the failover.</para>
-                <para>Consequently the transaction will be marked as rollback-only, and any
-                    subsequent attempt to commit it will throw a <literal
-                        >javax.jms.TransactionRolledBackException</literal> (if using JMS), or a
-                        <literal>HornetQException</literal> with error code <literal
-                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> if using the core
-                    API.</para>
-                <para>It is up to the user to catch the exception, and perform any client side local
-                    rollback code as necessary. The user can then just retry the transactional
-                    operations again on the same session.</para>
-                <para>HornetQ ships with a fully functioning example demonstrating how to do this,
-                    please see <xref linkend="examples.transaction-failover"/></para>
-                <para>If failover occurs when a commit call is being executed, the server, as
-                    previously described, will unblock the call to prevent a hang, since no response
-                    will come back. In this case it is not easy for the client to determine whether
-                    the transaction commit was actually processed on the live server before failure
-                    occurred.</para>
-                <para>To remedy this, the client can simply enable duplicate detection (<xref
-                        linkend="duplicate-detection"/>) in the transaction, and retry the
-                    transaction operations again after the call is unblocked. If the transaction had
-                    indeed been committed on the live server successfully before failover, then when
-                    the transaction is retried, duplicate detection will ensure that any durable
-                    messages resent in the transaction will be ignored on the server to prevent them
-                    getting sent more than once.</para>
+                <title>Handling Failover With Transactions</title>
+                <para>If the session is transactional and messages have already been sent or
+                    acknowledged in the current transaction, then the server cannot guarantee that
+                    those messages or acknowledgements were not lost during failover.</para>
+                <para>Consequently the transaction is marked as rollback-only, and any subsequent
+                    attempt to commit it throws a <literal
+                        >javax.jms.TransactionRolledBackException</literal> (if using JMS) or a
+                    <literal>HornetQException</literal> with error code <literal
+                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> (if using the core
+                    API).</para>
+                <para>It is up to the client to catch this exception and perform any necessary
+                    client-side local rollback. The user can then simply retry the transactional
+                    operations on the same session.</para>
+                <para>HornetQ ships with a fully functioning example demonstrating how to do this;
+                    see <xref linkend="examples.transaction-failover"/></para>
+                <para>If failover occurs while a commit call is being executed, the server, as
+                    described above, unblocks the call. In this case it is hard for the client to
+                    determine whether the transaction commit was actually processed on the live
+                    server before the failure.</para>
+                <para>To remedy this, the client can enable duplicate detection
+                    (<xref linkend="duplicate-detection"/>) in the transaction and retry the
+                    transaction operations after the call is unblocked. If the transaction had in
+                    fact been committed successfully on the live server before failover, then when
+                    the transaction is retried, duplicate detection ensures that any messages
+                    resent in the retried transaction are discarded on the server, so they are not
+                    delivered more than once.</para>
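+                <para>The following JMS sketch combines the two ideas: it sets a duplicate
+                    detection id on the message and retries the transaction when the commit is
+                    rolled back. The property name <literal>_HQ_DUPL_ID</literal> and the message
+                    contents are assumptions for illustration; see <xref
+                        linkend="duplicate-detection"/> for the exact header to use:</para>
+                <programlisting>
+   // session is assumed to be a transacted javax.jms.Session,
+   // producer a MessageProducer created from it
+   TextMessage message = session.createTextMessage("important payload");
+
+   // a unique id so the server can discard the message if the retried
+   // transaction resends something that was already committed
+   message.setStringProperty("_HQ_DUPL_ID", "order-4711");
+
+   try
+   {
+      producer.send(message);
+      session.commit();
+   }
+   catch (TransactionRolledBackException e)
+   {
+      // the transaction was rolled back, e.g. because failover occurred;
+      // simply retry the same operations on the same session
+      producer.send(message);
+      session.commit();
+   }</programlisting>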
                 <note>
-                    <para>By catching the rollback exceptions and retrying, catching unblocked calls
-                        and enabling duplicate detection, once and only once delivery guarantees for
-                        messages can be provided in the case of failure, guaranteeing 100% no loss
-                        or duplication of messages.</para>
+                    <para>By catching the rollback exceptions and retrying, handling unblocked
+                        calls and enabling duplicate detection, HornetQ can guarantee once and
+                        only once message delivery in the event of failure, with no loss or
+                        duplication of messages.</para>
                 </note>
             </section>
             <section id="ha.automatic.failover.nontransactional">
-                <title>Handling Failover With Non Transactional Sessions</title>
-                <para>If the session is non transactional, messages or acknowledgements can be lost
-                    in the event of failover.</para>
-                <para>If you wish to provide <emphasis role="italic">once and only once</emphasis>
-                    delivery guarantees for non transacted sessions too, enabled duplicate
-                    detection, and catch unblock exceptions as described in <xref
-                        linkend="ha.automatic.failover.blockingcalls"/></para>
+                <title>Handling Failover With Non Transactional Sessions</title>
+                <para>If the session is non transactional, messages or acknowledgements can be
+                    lost in the event of failover.</para>
+                <para>If you wish to guarantee <emphasis role="italic">once and only
+                        once</emphasis> delivery for non transacted sessions too, enable duplicate
+                    detection and handle unblocked calls as described in <xref
+                        linkend="ha.automatic.failover.blockingcalls"/>.</para>
             </section>
         </section>
         <section>
-            <title>Getting Notified of Connection Failure</title>
-            <para>JMS provides a standard mechanism for getting notified asynchronously of
-                connection failure: <literal>java.jms.ExceptionListener</literal>. Please consult
-                the JMS javadoc or any good JMS tutorial for more information on how to use
-                this.</para>
-            <para>The HornetQ core API also provides a similar feature in the form of the class
-                    <literal>org.hornet.core.client.SessionFailureListener</literal></para>
-            <para>Any ExceptionListener or SessionFailureListener instance will always be called by
-                HornetQ on event of connection failure, <emphasis role="bold"
-                    >irrespective</emphasis> of whether the connection was successfully failed over,
-                reconnected or reattached.</para>
+            <title>Getting Notified of Connection Failure</title>
+            <para>JMS provides a standard mechanism for being notified asynchronously of
+                connection failure: <literal>javax.jms.ExceptionListener</literal>. Please consult
+                the JMS javadoc or a good JMS tutorial for more information on how to use
+                it.</para>
+            <para>The HornetQ core API provides a similar feature in the form of the class
+                   <literal>org.hornetq.core.client.SessionFailureListener</literal>.</para>
+            <para>Any ExceptionListener or SessionFailureListener instance is always called by
+                HornetQ in the event of connection failure, <emphasis role="bold"
+                    >irrespective</emphasis> of whether the connection was successfully failed
+                over, reconnected or reattached.</para>
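+            <para>For example, a JMS client might register a listener like the following minimal
+                sketch; the connection is assumed to have been created from a HornetQ connection
+                factory beforehand:</para>
+            <programlisting>
+   connection.setExceptionListener(new ExceptionListener()
+   {
+      public void onException(JMSException exception)
+      {
+         // called by HornetQ whenever a connection failure is detected,
+         // whether or not the connection was failed over or reconnected
+         System.err.println("Connection to the server failed: " + exception.getMessage());
+      }
+   });</programlisting>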
         </section>
         <section>
-            <title>Application-Level Failover</title>
-            <para>In some cases you may not want automatic client failover, and prefer to handle any
-                connection failure yourself, and code your own manually reconnection logic in your
-                own failure handler. We define this as <emphasis>application-level</emphasis>
-                failover, since the failover is handled at the user application level.</para>
-            <para>To implement application-level failover, if you're using JMS then you need to set
-                an <literal>ExceptionListener</literal> class on the JMS connection. The <literal
-                    >ExceptionListener</literal> will be called by HornetQ in the event that
-                connection failure is detected. In your <literal>ExceptionListener</literal>, you
-                would close your old JMS connections, potentially look up new connection factory
-                instances from JNDI and creating new connections. In this case you may well be using
-                    <ulink url="http://www.jboss.org/community/wiki/JBossHAJNDIImpl">HA-JNDI</ulink>
-                to ensure that the new connection factory is looked up from a different
-                server.</para>
-            <para>For a working example of application-level failover, please see <xref
-                    linkend="application-level-failover"/>.</para>
-            <para>If you are using the core API, then the procedure is very similar: you would set a
-                    <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
-                instances.</para>
+            <title>Application-Level Failover</title>
+            <para>In some cases you may not want automatic client failover and prefer to handle
+                connection failures yourself, coding your own reconnection logic in your own
+                failure handler. We call this <emphasis>application-level</emphasis> failover,
+                since the failover is handled in the application code.</para>
+            <para>To implement application-level failover you use a listener. If you are using
+                JMS, set an <literal>ExceptionListener</literal> on the JMS connection; HornetQ
+                calls it when a connection failure is detected. In your
+                <literal>ExceptionListener</literal> you would close your old JMS connections,
+                look up a new connection factory from JNDI and create new connections. Here you
+                may well use
+                <ulink url="http://www.jboss.org/community/wiki/JBossHAJNDIImpl">HA-JNDI</ulink>
+                to ensure that the new connection factory is looked up from a different
+                server.</para>
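+            <para>A hedged sketch of that approach is shown below. The JNDI name and the
+                re-creation logic are assumptions for illustration only; a real application would
+                typically also recreate its sessions, producers and consumers in the
+                listener:</para>
+            <programlisting>
+   // connection is assumed to be a field of the enclosing class
+   connection.setExceptionListener(new ExceptionListener()
+   {
+      public void onException(JMSException exception)
+      {
+         try
+         {
+            // close the failed connection
+            connection.close();
+
+            // look up a new factory; with HA-JNDI this may come from a different server
+            InitialContext ic = new InitialContext();
+            ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
+
+            // create a new connection and rebuild sessions, producers and consumers here
+            Connection newConnection = cf.createConnection();
+            newConnection.start();
+         }
+         catch (Exception e)
+         {
+            // application specific error handling / retry policy
+            e.printStackTrace();
+         }
+      }
+   });</programlisting>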
+            <para>For a complete working example of application-level failover, see <xref
+                    linkend="application-level-failover"/>.</para>
+            <para>If you are using the core API, the procedure is very similar: set a
+                <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
+                instances and perform the corresponding handling there.</para>
         </section>
     </section>
 </chapter>


