[hornetq-commits] JBoss hornetq SVN: r8571 - trunk/docs/user-manual/en.

do-not-reply at jboss.org do-not-reply at jboss.org
Fri Dec 4 15:54:29 EST 2009


Author: jmesnil
Date: 2009-12-04 15:54:28 -0500 (Fri, 04 Dec 2009)
New Revision: 8571

Modified:
   trunk/docs/user-manual/en/connection-ttl.xml
   trunk/docs/user-manual/en/ha.xml
Log:
documentation update

* typo + fixed title case


Modified: trunk/docs/user-manual/en/connection-ttl.xml
===================================================================
--- trunk/docs/user-manual/en/connection-ttl.xml	2009-12-04 20:34:45 UTC (rev 8570)
+++ trunk/docs/user-manual/en/connection-ttl.xml	2009-12-04 20:54:28 UTC (rev 8571)
@@ -163,7 +163,7 @@
         <para>It is possible instead to use a thread from a thread pool to handle the packets so
            that the remoting thread is not tied up for too long. However, please note that processing 
            operations asynchronously on another thread adds a little more
-            latency. To enable asynchronous connection executin, set the parameter <literal
+            latency. To enable asynchronous connection execution, set the parameter <literal
                 >async-connection-execution-enabled</literal> in <literal
                 >hornetq-configuration.xml</literal> to <literal>true</literal> (default value is
                 <literal>false</literal>).</para>
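
For reference, the enabled form of that setting in hornetq-configuration.xml is a single flag (a minimal sketch):

   <!-- hand packet processing off to a pooled thread rather than the remoting thread -->
   <async-connection-execution-enabled>true</async-connection-execution-enabled>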

Modified: trunk/docs/user-manual/en/ha.xml
===================================================================
--- trunk/docs/user-manual/en/ha.xml	2009-12-04 20:34:45 UTC (rev 8570)
+++ trunk/docs/user-manual/en/ha.xml	2009-12-04 20:54:28 UTC (rev 8571)
@@ -36,7 +36,7 @@
             <title>HA modes</title>
             <para>HornetQ provides two different modes for high availability, either by
                     <emphasis>replicating data</emphasis> from the live server journal to the backup
-                server or using a <emphasis>shared state</emphasis> for both servers.</para>
+                server or using a <emphasis>shared store</emphasis> for both servers.</para>
             <section id="ha.mode.replicated">
                 <title>Data Replication</title>
                 <para>In this mode, data stored in the HornetQ journal are replicated from the live
@@ -56,7 +56,7 @@
                 <para>Data replication introduces some inevitable performance overhead compared to
                     non replicated operation, but has the advantage in that it requires no expensive
                     shared file system (e.g. a SAN) for failover, in other words it is a <emphasis
-                        role="italic">shared nothing</emphasis> approach to high
+                        role="italic">shared-nothing</emphasis> approach to high
                     availability.</para>
                 <para>Failover with data replication is also faster than failover using shared
                     storage, since the journal does not have to be reloaded on failover at the
@@ -65,10 +65,10 @@
                 <section id="configuring.live.backup">
                     <title>Configuration</title>
                     <para>First, on the live server, in <literal
-                        >hornetq-configuration.xml</literal>, configures the live server with
+                        >hornetq-configuration.xml</literal>, configure the live server with
                         knowledge of its backup server. This is done by specifying a <literal
                             >backup-connector-ref</literal> element. This element references a
-                        connector, also specified on the live server which contains knowledge of how
+                        connector, also defined on the live server, which specifies how
                         to connect to the backup server.</para>
                    <para>Here's a snippet from the live server's <literal
                             >hornetq-configuration.xml</literal> configured to connect to its backup
@@ -86,7 +86,7 @@
      &lt;/connector>
   &lt;/connectors></programlisting>
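
Since the hunk only shows the tail of that listing, here is a fuller sketch of the live server's side (the connector name, host and port are placeholders, and the param syntax is abbreviated):

   <backup-connector-ref connector-name="backup-connector"/>

   <connectors>
      <connector name="backup-connector">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
         <param key="host" value="192.168.0.11" type="String"/>
         <param key="port" value="5445" type="Integer"/>
      </connector>
   </connectors>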
                     <para>Secondly, on the backup server, we flag the server as a backup and make
-                        sure it has an acceptor that the live server can connect to, we also make sure the shared-store paramater is
+                        sure it has an acceptor that the live server can connect to. We also make sure the shared-store parameter is
                     set to false:</para>
                     <programlisting>
   &lt;backup>true&lt;/backup>
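
Again the listing is truncated above; a sketch of the backup's hornetq-configuration.xml, with placeholder acceptor details, might look like:

   <backup>true</backup>
   <shared-store>false</shared-store>

   <acceptors>
      <acceptor name="netty-acceptor">
         <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
         <param key="host" value="192.168.0.11" type="String"/>
         <param key="port" value="5445" type="Integer"/>
      </acceptor>
   </acceptors>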
@@ -104,15 +104,15 @@
                     <para>For a backup server to function correctly it's also important that it has
                         the same set of bridges, predefined queues, cluster connections, broadcast
                         groups and discovery groups as defined on the live node. The easiest way to
-                        ensure this is just to copy the entire server side configuration from live
+                        ensure this is to copy the entire server-side configuration from live
                         to backup and just make the changes as specified above. </para>
                 </section>
                 <section>
-                    <title>Synchronization a backup node to a live node</title>
+                    <title>Synchronizing a Backup Node to a Live Node</title>
                    <para>In order for live-backup pairs to operate properly, they must be
                         identical replicas. This means you cannot just use any backup server that's
                         previously been used for other purposes as a backup server, since it will
-                        have different data in its persistent storage. If you try to do so you will
+                        have different data in its persistent storage. If you try to do so, you will
                         receive an exception in the logs and the server will fail to start.</para>
                     <para>To create a backup server for a live server that's already been used for
                         other purposes, it's necessary to copy the <literal>data</literal> directory
@@ -149,7 +149,7 @@
                     store.</para>
                 <para>If you require the highest performance during normal operation, have access to
                     a fast SAN, and can live with a slightly slower failover (depending on amount of
-                    data) we recommend shared store high availability</para>
+                    data), we recommend shared store high availability.
                 <graphic fileref="images/ha-shared-store.png" align="center"/>
                 <section id="ha/mode.shared.configuration">
                     <title>Configuration</title>
@@ -168,7 +168,7 @@
                             linkend="ha.automatic.failover"/>.</para>
                 </section>
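
To make the shared store configuration concrete, here is a rough sketch (directory paths are placeholders; both servers point at the same location on the shared file system, and only the backup flag differs between them):

   <shared-store>true</shared-store>
   <bindings-directory>/san/hornetq/bindings</bindings-directory>
   <journal-directory>/san/hornetq/journal</journal-directory>
   <large-messages-directory>/san/hornetq/large-messages</large-messages-directory>
   <paging-directory>/san/hornetq/paging</paging-directory>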
                 <section>
-                    <title>Synchronizing a backup node to a live node</title>
+                    <title>Synchronizing a Backup Node to a Live Node</title>
                     <para>As both live and backup servers share the same journal, they do not need
                        to be synchronized. However, until both live and backup servers are up and
                        running, high availability cannot be provided with a single server. After
@@ -237,7 +237,7 @@
                     <para>Using CTRL-C on a HornetQ server or JBoss AS instance causes the server to
                             <emphasis role="bold">cleanly shut down</emphasis>, so will not trigger
                         failover on the client. </para>
-                    <para>If you want the client to failover when it's server is cleanly shutdown
+                    <para>If you want the client to failover when its server is cleanly shut down
                         then you must set the property <literal>FailoverOnServerShutdown</literal>
                        to <literal>true</literal>.</para>
                 </note>
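
One way to set that property from client code is sketched below; the class and setter names are assumed to match the FailoverOnServerShutdown property described above, and the flag can equally be set where the connection factory is configured (e.g. in hornetq-jms.xml):

   // transportConfiguration: connector pointing at the live server (assumed to exist)
   HornetQConnectionFactory cf = new HornetQConnectionFactory(transportConfiguration);
   cf.setFailoverOnServerShutdown(true);   // fail over even on clean shutdown
   Connection connection = cf.createConnection();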
@@ -246,71 +246,71 @@
                 sessions, please see <xref linkend="examples.transaction-failover"/> and <xref
                     linkend="examples.non-transaction-failover"/>.</para>
             <section id="ha.automatic.failover.noteonreplication">
-                <title>A note on server replication</title>
-                <para>HornetQ does not replicate full server state betwen live and backup servers,
-                    so when the new session is automatically recreated on the backup it won't have
+                <title>A Note on Server Replication</title>
+                <para>HornetQ does not replicate full server state between live and backup servers.
+                    When the new session is automatically recreated on the backup it won't have
                     any knowledge of messages already sent or acknowledged in that session. Any
                    in-flight sends or acknowledgements at the time of failover might also be
                     lost.</para>
                 <para>By replicating full server state, theoretically we could provide a 100%
                     transparent seamless failover, which would avoid any lost messages or
-                    acknowledgements, however this comes at a great cost - replicating the full
-                    server state - that's all the queues, sessions etc, would require replication of
-                    the entire server state machine - every operation on the live server would have
+                    acknowledgements. However, this comes at a great cost: replicating the full
+                    server state (including the queues, sessions, etc.). This would require replication of
+                    the entire server state machine; every operation on the live server would have
                    to be replicated on the replica server(s) in the exact same global order to ensure
                     a consistent replica state. This is extremely hard to do in a performant and
                     scalable way, especially when one considers that multiple threads are changing
                     the live server state concurrently.</para>
-                <para>Some solutions which do provide full state machine replication do so by using
+                <para>Some solutions which provide full state machine replication use
                     techniques such as <emphasis role="italic">virtual synchrony</emphasis>, but
                     this does not scale well and effectively serializes all operations to a single
                     thread, dramatically reducing concurrency.</para>
                <para>Other techniques for multi-threaded active replication exist, such as
                    replicating lock states or replicating thread scheduling, but this is very hard
                     to achieve at a Java level.</para>
-                <para>Consequently it as decided it was not worth massively reducing performance and
+                <para>Consequently it was decided it was not worth massively reducing performance and
                     concurrency for the sake of 100% transparent failover. Even without 100%
-                    transparent failover it is simple to guarantee <emphasis role="italic">once and
-                        only once</emphasis> delivery guarantees, even in the case of failure, by
-                    using a combination of duplicate detection and retrying of transactions, however
+                    transparent failover, it is simple to guarantee <emphasis role="italic">once and
+                        only once</emphasis> delivery, even in the case of failure, by
+                    using a combination of duplicate detection and retrying of transactions. However,
                     this is not 100% transparent to the client code.</para>
             </section>
             <section id="ha.automatic.failover.blockingcalls">
-                <title>Handling blocking calls during failover</title>
-                <para>If the client code is in a blocking call to the server when failover occurs,
-                    expecting a response before it can continue, then on failover the new session
-                    won't have any knowledge of the call that was in progress, and the call might
+                <title>Handling Blocking Calls During Failover</title>
+                <para>If the client code is in a blocking call to the server, waiting for
+                    a response before it can continue, then when failover occurs the new session
+                    will not have any knowledge of the call that was in progress. This call might
                    otherwise hang forever, waiting for a response that will never come.</para>
-                <para>To remedy this, HornetQ will unblock any unblocking calls that were in
+                <para>To prevent this, HornetQ will unblock any blocking calls that were in
                     progress at the time of failover by making them throw a <literal
                         >javax.jms.JMSException</literal> (if using JMS), or a <literal
                         >HornetQException</literal> with error code <literal
-                        >HornetQException.UNBLOCKED</literal>. It is up to the user code to catch
+                        >HornetQException.UNBLOCKED</literal>. It is up to the client code to catch
                     this exception and retry any operations if desired.</para>
             </section>
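
A sketch of catching the unblock with the core API (the session and producer variables are assumed; whether to retry is application policy, since the operation may or may not have reached the server):

   try {
      producer.send(message);                        // blocking call
   } catch (HornetQException e) {
      if (e.getCode() == HornetQException.UNBLOCKED) {
         // unblocked by failover: the send may or may not have been
         // processed by the live server, so decide here whether to retry
         producer.send(message);
      } else {
         throw e;
      }
   }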
             <section id="ha.automatic.failover.transactions">
-                <title>Handling failover with transactions</title>
+                <title>Handling Failover With Transactions</title>
                 <para>If the session is transactional and messages have already been sent or
                     acknowledged in the current transaction, then the server cannot be sure that
-                    messages sent or acknowledgements haven't been lost during the failover.</para>
+                    sent messages or acknowledgements have not been lost during the failover.</para>
                 <para>Consequently the transaction will be marked as rollback-only, and any
-                    subsequent attempt to commit it, will throw a <literal
+                    subsequent attempt to commit it will throw a <literal
                         >javax.jms.TransactionRolledBackException</literal> (if using JMS), or a
                         <literal>HornetQException</literal> with error code <literal
                         >HornetQException.TRANSACTION_ROLLED_BACK</literal> if using the core
                     API.</para>
                <para>It is up to the user to catch the exception and perform any client-side local
-                    rollback code as necessary, the user can then just retry the transactional
+                    rollback code as necessary. The user can then just retry the transactional
                     operations again on the same session.</para>
-                <para>HornetQ ships with a fully functioning example demonstrating how to do this
+                <para>HornetQ ships with a fully functioning example demonstrating how to do this; please
                    see <xref linkend="examples.transaction-failover"/>.</para>
                 <para>If failover occurs when a commit call is being executed, the server, as
-                    previously described will unblock the call to prevent a hang, since the response
-                    will not come back from the backup node. In this case it is not easy for the
+                    previously described, will unblock the call to prevent a hang, since no response
+                    will come back. In this case it is not easy for the
                     client to determine whether the transaction commit was actually processed on the
                     live server before failure occurred.</para>
                 <para>To remedy this, the client can simply enable duplicate detection (<xref
-                        linkend="duplicate-detection"/>) in the transaction, and just retry the
+                        linkend="duplicate-detection"/>) in the transaction, and retry the
                     transaction operations again after the call is unblocked. If the transaction had
                     indeed been committed on the live server successfully before failover, then when
                     the transaction is retried, duplicate detection will ensure that any persistent
@@ -324,9 +324,9 @@
                 </note>
             </section>
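
The retry pattern described in this section might look roughly like this in JMS (a sketch with invented queue content; "_HQ_DUPL_ID" is the duplicate-detection property covered in the duplicate detection chapter, and the same loop also covers a commit that was unblocked by failover):

   void sendWithRetry(javax.jms.Session session, javax.jms.MessageProducer producer,
                      String orderId) throws javax.jms.JMSException {
      while (true) {
         try {
            javax.jms.TextMessage m = session.createTextMessage("order " + orderId);
            // same duplicate-detection id on every retry, so a resend of a
            // message that already committed is silently discarded
            m.setStringProperty("_HQ_DUPL_ID", orderId);
            producer.send(m);
            session.commit();                       // transacted session assumed
            return;
         } catch (javax.jms.TransactionRolledBackException e) {
            // failover marked the transaction rollback-only: loop and retry
         }
      }
   }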
             <section id="ha.automatic.failover.nontransactional">
-                <title>Handling failover with non transactional sessions</title>
-                <para>If the session is non transactional, you may get lost messages or
-                    acknowledgements in the event of failover.</para>
+                <title>Handling Failover With Non-Transactional Sessions</title>
+                <para>If the session is non-transactional, messages or
+                    acknowledgements can be lost in the event of failover.</para>
                 <para>If you wish to provide <emphasis role="italic">once and only once</emphasis>
                    delivery guarantees for non-transacted sessions too, then make sure you send
                    messages blocking, enable duplicate detection, and catch unblock exceptions as
@@ -336,7 +336,7 @@
             </section>
         </section>
         <section>
-            <title>Getting notified of connection failure</title>
+            <title>Getting Notified of Connection Failure</title>
             <para>JMS provides a standard mechanism for getting notified asynchronously of
                connection failure: <literal>javax.jms.ExceptionListener</literal>. Please consult
                 the JMS javadoc or any good JMS tutorial for more information on how to use
@@ -354,10 +354,10 @@
                connection failure yourself, and code your own manual reconnection logic in your
                 own failure handler. We define this as <emphasis>application-level</emphasis>
                 failover, since the failover is handled at the user application level.</para>
-            <para>To implement application-level failover, if you're using JMS then you need to code
+            <para>To implement application-level failover, if you're using JMS then you need to set
                 an <literal>ExceptionListener</literal> class on the JMS connection. The <literal
                     >ExceptionListener</literal> will be called by HornetQ in the event that
-                connection failure is detected. In your <literal>ExceptionListener</literal> you
+                connection failure is detected. In your <literal>ExceptionListener</literal>, you
                 would close your old JMS connections, potentially look up new connection factory
                instances from JNDI and create new connections. In this case you may well be using
                     <ulink url="http://www.jboss.org/community/wiki/JBossHAJNDIImpl">HA-JNDI</ulink>
@@ -365,8 +365,8 @@
                 server.</para>
             <para>For a working example of application-level failover, please see <xref
                     linkend="application-level-failover"/>.</para>
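
A sketch of the JMS side (the reconnection body is application-specific):

   connection.setExceptionListener(new ExceptionListener() {
      public void onException(JMSException e) {
         // connection failure detected: close the old connection, look up a
         // fresh ConnectionFactory (e.g. via HA-JNDI) and rebuild sessions
      }
   });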
-            <para>If you are using the core API, then the procedure is very similar: you would code
-                a <literal>FailureListener</literal> on your core <literal>ClientSession</literal>
+            <para>If you are using the core API, then the procedure is very similar: you would set
+                a <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
                 instances.</para>
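
And the equivalent sketch with the core API, assuming the FailureListener callback shape described in the core API chapter:

   session.addFailureListener(new FailureListener() {
      public void connectionFailed(HornetQException exception) {
         // run the application-level reconnection logic here
      }
   });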
         </section>
     </section>


