[hornetq-commits] JBoss hornetq SVN: r8770 - in trunk/docs: user-manual/en and 1 other directory.

do-not-reply at jboss.org
Thu Jan 7 11:24:23 EST 2010


Author: timfox
Date: 2010-01-07 11:24:23 -0500 (Thu, 07 Jan 2010)
New Revision: 8770

Modified:
   trunk/docs/quickstart-guide/en/introduction.xml
   trunk/docs/user-manual/en/configuring-transports.xml
   trunk/docs/user-manual/en/connection-ttl.xml
   trunk/docs/user-manual/en/examples.xml
   trunk/docs/user-manual/en/flow-control.xml
   trunk/docs/user-manual/en/persistence.xml
   trunk/docs/user-manual/en/transaction-config.xml
Log:
docs edits part 3

Modified: trunk/docs/quickstart-guide/en/introduction.xml
===================================================================
--- trunk/docs/quickstart-guide/en/introduction.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/quickstart-guide/en/introduction.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -21,7 +21,7 @@
     <para>This short guide explains how to download, install and quickly get started with
         HornetQ.</para>
     <para>After downloading and installing we highly recommend you run the examples to get
-        acquainted with HornetQ. We ship with over 65 examples demonstrating most of the
+        acquainted with HornetQ. We ship with over 70 examples demonstrating most of the
         features.</para>
     <para>This guide is not intended to be a replacement for the user manual. The user manual goes
         into much more depth, so please consult that for further information.</para>

Modified: trunk/docs/user-manual/en/configuring-transports.xml
===================================================================
--- trunk/docs/user-manual/en/configuring-transports.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/configuring-transports.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -176,7 +176,10 @@
             <title>Configuring Netty TCP</title>
             <para>Netty TCP is a simple unencrypted TCP sockets based transport. Netty TCP can be
                 configured to use old blocking Java IO or non blocking Java NIO. We recommend you
-                use the default Java NIO for better scalability. </para>
+                use Java NIO on the server side for better scalability with many concurrent
+                connections. However, using old blocking Java IO can sometimes give you better
+                latency than NIO when you're not so worried about supporting many thousands of
+                concurrent connections. </para>
             <para>If you're running connections across an untrusted network please bear in mind this
                 transport is unencrypted. You may want to look at the SSL or HTTPS
                 configurations.</para>
@@ -190,25 +193,26 @@
                 can be used to configure Netty for simple TCP:</para>
             <itemizedlist>
                 <listitem>
-                    <para><literal>usenio</literal>. If this is <literal
-                            >true</literal> then Java non blocking NIO will be used. If set to
-                            <literal>false</literal> than old blocking Java IO will be used.</para>
+                    <para><literal>usenio</literal>. If this is <literal>true</literal> then Java
+                        non blocking NIO will be used. If set to <literal>false</literal> then old
+                        blocking Java IO will be used.</para>
                     <para>We highly recommend that you use non blocking Java NIO. Java NIO does not
                         maintain a thread per connection so can scale to many more concurrent
                         connections than with old blocking IO. We recommend the usage of Java 6 for
                         NIO and the best scalability. The default value for this property is
-                            <literal>true</literal>.</para>
+                            <literal>true</literal> on the server side and <literal>false</literal>
+                        on the client side.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>host</literal>. This specified the host
-                        name or IP address to connect to (when configuring a connector) or to listen
-                        on (when configuring an acceptor). The default value for this property is
-                            <literal>localhost</literal>. When configuring acceptors, multiple hosts
-                        or IP addresses can be specified by separating them with commas. It is also
-                        possible to specify <code>0.0.0.0</code> to accept connection from all
-                        the host network interfaces. It's not
-                        valid to specify multiple addresses when specifying the host for a
-                        connector; a connector makes a connection to one specific address.</para>
+                    <para><literal>host</literal>. This specifies the host name or IP address to
+                        connect to (when configuring a connector) or to listen on (when configuring
+                        an acceptor). The default value for this property is <literal
+                            >localhost</literal>. When configuring acceptors, multiple hosts or IP
+                        addresses can be specified by separating them with commas. It is also
+                        possible to specify <code>0.0.0.0</code> to accept connections from all the
+                        host's network interfaces. It's not valid to specify multiple addresses when
+                        specifying the host for a connector; a connector makes a connection to one
+                        specific address.</para>
                     <note>
                         <para>Don't forget to specify a host name or ip address! If you want your
                             server able to accept connections from other nodes you must specify a
@@ -218,22 +222,20 @@
                     </note>
                 </listitem>
                 <listitem>
-                    <para><literal>port</literal>. This specified the port to
-                        connect to (when configuring a connector) or to listen on (when configuring
-                        an acceptor). The default value for this property is <literal
-                        >5445</literal>.</para>
+                    <para><literal>port</literal>. This specifies the port to connect to (when
+                        configuring a connector) or to listen on (when configuring an acceptor). The
+                        default value for this property is <literal>5445</literal>.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>tcpnodelay</literal>. If this is <literal
-                            >true</literal> then <ulink
-                            url="http://en.wikipedia.org/wiki/Nagle's_algorithm">Nagle's
+                    <para><literal>tcpnodelay</literal>. If this is <literal>true</literal> then
+                            <ulink url="http://en.wikipedia.org/wiki/Nagle's_algorithm">Nagle's
                             algorithm</ulink> will be disabled. The default value for this property
                         is <literal>true</literal>.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>tcpsendbuffersize</literal>. This
-                        parameter determines the size of the TCP send buffer in bytes. The default
-                        value for this property is <literal>32768</literal> bytes (32KiB).</para>
+                    <para><literal>tcpsendbuffersize</literal>. This parameter determines the size
+                        of the TCP send buffer in bytes. The default value for this property is
+                            <literal>32768</literal> bytes (32KiB).</para>
                     <para>TCP buffer sizes should be tuned according to the bandwidth and latency of
                         your network. Here's a good link that explains the theory behind <ulink
                             url="http://www-didc.lbl.gov/TCP-tuning/">this</ulink>.</para>
@@ -248,10 +250,9 @@
                         defaults.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>tcpreceivebuffersize</literal>. This
-                        parameter determines the size of the TCP receive buffer in bytes. The
-                        default value for this property is <literal>32768</literal> bytes
-                        (32KiB).</para>
+                    <para><literal>tcpreceivebuffersize</literal>. This parameter determines the
+                        size of the TCP receive buffer in bytes. The default value for this property
+                        is <literal>32768</literal> bytes (32KiB).</para>
                 </listitem>
             </itemizedlist>
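+            <para>For illustration only, a Netty TCP connector with an explicit host and port
+                might be defined in <literal>hornetq-configuration.xml</literal> along the
+                following lines (the connector name and the host and port values shown here are
+                just examples):</para>
+            <programlisting>   &lt;connector name="netty-connector">
+      &lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory&lt;/factory-class>
+      &lt;param key="host" value="192.168.0.10"/>
+      &lt;param key="port" value="5445"/>
+   &lt;/connector></programlisting>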
         </section>
@@ -264,25 +265,24 @@
                 additional properties:</para>
             <itemizedlist>
                 <listitem>
-                    <para><literal>sslenabled</literal>. Must be <literal
-                            >true</literal> to enable SSL.</para>
+                    <para><literal>sslenabled</literal>. Must be <literal>true</literal> to enable
+                        SSL.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>keystorepath</literal>. This is the path
-                        to the SSL key store on the client which holds the client
-                        certificates.</para>
+                    <para><literal>keystorepath</literal>. This is the path to the SSL key store on
+                        the client which holds the client certificates.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>keystorepassword</literal>. This is the
-                        password for the client certificate key store on the client.</para>
+                    <para><literal>keystorepassword</literal>. This is the password for the client
+                        certificate key store on the client.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>truststorepath</literal>. This is the path
-                        to the trusted client certificate store on the server.</para>
+                    <para><literal>truststorepath</literal>. This is the path to the trusted client
+                        certificate store on the server.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>truststorepassword</literal>. This is the
-                        password to the trusted client certificate store on the server.</para>
+                    <para><literal>truststorepassword</literal>. This is the password to the trusted
+                        client certificate store on the server.</para>
                 </listitem>
             </itemizedlist>
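+            <para>As a sketch only, using the property names as they are written above (the exact
+                parameter key spellings and the paths and passwords shown here are illustrative and
+                should be checked against the configuration files shipped with the distribution),
+                an SSL-enabled connector might look like this:</para>
+            <programlisting>   &lt;connector name="netty-ssl-connector">
+      &lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory&lt;/factory-class>
+      &lt;param key="host" value="192.168.0.10"/>
+      &lt;param key="port" value="5500"/>
+      &lt;param key="sslenabled" value="true"/>
+      &lt;param key="keystorepath" value="/path/to/client.keystore"/>
+      &lt;param key="keystorepassword" value="secret"/>
+   &lt;/connector></programlisting>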
         </section>
@@ -295,31 +295,29 @@
                 properties:</para>
             <itemizedlist>
                 <listitem>
-                    <para><literal>httpenabled</literal>. Must be <literal
-                            >true</literal> to enable HTTP.</para>
+                    <para><literal>httpenabled</literal>. Must be <literal>true</literal> to enable
+                        HTTP.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>httpclientidletime</literal>. How long a
-                        client can be idle before sending an empty http request to keep the
-                        connection alive</para>
+                    <para><literal>httpclientidletime</literal>. How long a client can be idle
+                        before sending an empty http request to keep the connection alive.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>httpclientidlescanperiod</literal>. How
-                        often, in milliseconds, to scan for idle clients</para>
+                    <para><literal>httpclientidlescanperiod</literal>. How often, in milliseconds,
+                        to scan for idle clients.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>httpresponsetime</literal>. How long the
-                        server can wait before sending an empty http response to keep the connection
-                        alive</para>
+                    <para><literal>httpresponsetime</literal>. How long the server can wait before
+                        sending an empty http response to keep the connection alive.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>httpserverscanperiod</literal>. How often,
-                        in milliseconds, to scan for clients needing responses</para>
+                    <para><literal>httpserverscanperiod</literal>. How often, in milliseconds, to
+                        scan for clients needing responses.</para>
                 </listitem>
                 <listitem>
-                    <para><literal>httprequiressessionid</literal>. If true
-                        the client will wait after the first call to receive a session id. Used the
-                        http connector is connecting to servlet acceptor (not recommended) </para>
+                    <para><literal>httprequiressessionid</literal>. If true the client will wait
+                        after the first call to receive a session id. Used when the http connector
+                        is connecting to the servlet acceptor (not recommended).</para>
                 </listitem>
             </itemizedlist>
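+            <para>Again as a sketch only, using the property names as written above (the values and
+                the host name shown are purely illustrative), a connector tunnelling over HTTP
+                might look like this:</para>
+            <programlisting>   &lt;connector name="netty-http-connector">
+      &lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory&lt;/factory-class>
+      &lt;param key="host" value="myhost.example.com"/>
+      &lt;param key="port" value="80"/>
+      &lt;param key="httpenabled" value="true"/>
+   &lt;/connector></programlisting>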
         </section>
@@ -411,9 +409,9 @@
                 </listitem>
             </itemizedlist>
             <para>The servlet pattern configured in the <literal>web.xml</literal> is the path of
-                the URL that is used. The connector param <literal
-                    >servletpath</literal> on the connector config must match
-                this using the application context of the web app if there is one.</para>
+                the URL that is used. The connector param <literal>servletpath</literal> on the
+                connector config must match this, taking into account the application context of
+                the web app if there is one.</para>
             <para>It's also possible to use the servlet transport over SSL. Simply add the following
                 configuration to the
                 connector:<programlisting>    &lt;connector name="netty-servlet">

Modified: trunk/docs/user-manual/en/connection-ttl.xml
===================================================================
--- trunk/docs/user-manual/en/connection-ttl.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/connection-ttl.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -150,20 +150,20 @@
             deploying JMS connection factory instances direct into JNDI on the server side, you can
             specify it in the <literal>hornetq-jms.xml </literal> configuration file, using the
             parameter <literal>client-failure-check-period</literal>.</para>
-        <para>The default value for client failure check period is <literal>30000</literal>ms, i.e. 30
-            seconds. A value of <literal>-1</literal> means the client will never fail the
+        <para>The default value for client failure check period is <literal>30000</literal>ms, i.e.
+            30 seconds. A value of <literal>-1</literal> means the client will never fail the
             connection on the client side if no data is received from the server. Typically this is
             much lower than connection TTL to allow clients to reconnect in case of transitory
             failure.</para>
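+        <para>For example (a sketch only; the factory, connector and entry names are
+            illustrative), a JMS connection factory deployed via <literal>hornetq-jms.xml</literal>
+            could override the check period like this:</para>
+        <programlisting>   &lt;connection-factory name="ConnectionFactory">
+      &lt;connectors>
+         &lt;connector-ref connector-name="netty"/>
+      &lt;/connectors>
+      &lt;entries>
+         &lt;entry name="ConnectionFactory"/>
+      &lt;/entries>
+      &lt;client-failure-check-period>30000&lt;/client-failure-check-period>
+   &lt;/connection-factory></programlisting>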
     </section>
     <section id="connection-ttl.async-connection-execution">
         <title>Configuring Asynchronous Connection Execution</title>
-        <para>By default, packets received on the server side are handed off by the remoting thread
-           for processing.</para>
+        <para>By default, packets received on the server side are processed on the remoting
+            thread.</para>
         <para>It is possible instead to use a thread from a thread pool to handle the packets so
-           that the remoting thread is not tied up for too long. However, please note that processing 
-           operations asynchronously on another thread adds a little more
-            latency. To enable asynchronous connection execution, set the parameter <literal
+            that the remoting thread is not tied up for too long. However, please note that
+            processing operations asynchronously on another thread adds a little more latency. To
+            enable asynchronous connection execution, set the parameter <literal
                 >async-connection-execution-enabled</literal> in <literal
                 >hornetq-configuration.xml</literal> to <literal>true</literal> (default value is
                 <literal>false</literal>).</para>
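+        <para>For example, to turn it on, add the following line to
+            <literal>hornetq-configuration.xml</literal>:</para>
+        <programlisting>   &lt;async-connection-execution-enabled>true&lt;/async-connection-execution-enabled></programlisting>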

Modified: trunk/docs/user-manual/en/examples.xml
===================================================================
--- trunk/docs/user-manual/en/examples.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/examples.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -18,7 +18,7 @@
 <!-- ============================================================================= -->
 <chapter id="examples">
     <title>Examples</title>
-    <para>The HornetQ distribution comes with over 65 run out-of-the-box examples demonstrating many
+    <para>The HornetQ distribution comes with over 70 out-of-the-box examples demonstrating many
         of the features.</para>
     <para>The examples are available in the distribution, in the <literal>examples</literal>
         directory. Examples are split into JMS and core examples. JMS examples show how a particular

Modified: trunk/docs/user-manual/en/flow-control.xml
===================================================================
--- trunk/docs/user-manual/en/flow-control.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/flow-control.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -19,7 +19,7 @@
 <chapter id="flow-control">
    <title>Flow Control</title>
    <para>Flow control is used to limit the flow of data between a client and server, or a server and
-      a server in order to prevent the client or server being overwhelmed with data.</para>
+      another server in order to prevent the client or server being overwhelmed with data.</para>
    <section>
       <title>Consumer Flow Control</title>
       <para>This controls the flow of data between the server and the client as the client consumes
@@ -79,16 +79,18 @@
                  <para>Slow consumers take significant time to process each message and it is
                      desirable to prevent buffering messages on the client side so that they can be
                      delivered to another consumer instead.</para>
-                  <para>Consider a situation where a queue has 2 consumers 1 of which is very slow.
+                  <para>Consider a situation where a queue has 2 consumers, 1 of which is very slow.
                      Messages are delivered in a round robin fashion to both consumers, the fast
                      consumer processes all of its messages very quickly until its buffer is empty.
-                     At this point there are still messages awaiting to be processed by the slow
-                     consumer which could be being consumed by the other consumer.</para>
+                     At this point there are still messages waiting to be processed in the buffer
+                     of the slow consumer, preventing them from being processed by the fast consumer.
+                     The fast consumer is therefore sitting idle when it could be processing the
+                     other messages. </para>
                   <para>To allow slow consumers, set the <literal>consumer-window-size</literal> to
                      0 (for no buffer at all). This will prevent the slow consumer from buffering
                      any messages on the client side. Messages will remain on the server side ready
                      to be consumed by other consumers.</para>
-                  <para>Setting this to -1 can give deterministic distribution between multiple
+                  <para>Setting this to 0 can give deterministic distribution between multiple
                      consumers on a queue.</para>
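+                  <para>For example, a JMS connection factory defined in
+                     <literal>hornetq-jms.xml</literal> could switch off client side buffering for
+                     its consumers like this (a sketch; only the relevant element is shown):</para>
+                  <programlisting>   &lt;connection-factory name="ConnectionFactory">
+      ...
+      &lt;consumer-window-size>0&lt;/consumer-window-size>
+   &lt;/connection-factory></programlisting>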
                </listitem>
             </varlistentry>
@@ -130,13 +132,13 @@
       </section>
       <section>
          <title>Rate limited flow control</title>
-         <para>It is also possible to control the rate at which a consumer can consumer messages.
-            This is a form of throttling and can be used to make sure that a consumer never consumes
-            messages at a rate faster than the rate specified. </para>
-         <para>The rate must be a positive integer to enable and is the maximum desired message
-            consumption rate specified in units of messages per second. Setting this to <literal
-               >-1</literal> disables rate limited flow control. The default value is <literal
-               >-1</literal>.</para>
+         <para>It is also possible to control the <emphasis>rate</emphasis> at which a consumer can
+            consume messages. This is a form of throttling and can be used to make sure that a
+            consumer never consumes messages at a rate faster than the rate specified. </para>
+         <para>The rate must be a positive integer to enable this functionality and is the maximum
+            desired message consumption rate specified in units of messages per second. Setting this
+            to <literal>-1</literal> disables rate limited flow control. The default value is
+               <literal>-1</literal>.</para>
          <para>Please see <xref linkend="examples.consumer-rate-limit"/> for a working example of
             limiting consumer rate.</para>
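+         <para>Using JMS, the rate can be capped on a connection factory in
+            <literal>hornetq-jms.xml</literal>. The following is a sketch only, assuming a
+            <literal>consumer-max-rate</literal> element; check the
+            <literal>hornetq-jms.xml</literal> shipped with the distribution for the exact element
+            name:</para>
+         <programlisting>   &lt;connection-factory name="ConnectionFactory">
+      ...
+      &lt;consumer-max-rate>10&lt;/consumer-max-rate>
+   &lt;/connection-factory></programlisting>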
          <section id="flow-control.rate.core.api">
@@ -192,12 +194,12 @@
             sends them more credits they can send more messages.</para>
          <para>The amount of credits a producer requests in one go is known as the <emphasis
                role="italic">window size</emphasis>.</para>
-         <para>The window size therefore determines the amount of bytes that can be inflight at any
+         <para>The window size therefore determines the number of bytes that can be in-flight at any
             one time before more need to be requested - this prevents the remoting connection from
             getting overloaded.</para>
          <section>
             <title>Using Core API</title>
-            <para>If the HornetQ core API is being window size can be set via the <literal
+            <para>If the HornetQ core API is being used, window size can be set via the <literal
                   >ClientSessionFactory.setProducerWindowSize(int producerWindowSize)</literal>
                method.</para>
          </section>
@@ -258,6 +260,8 @@
             <para>The above example would set the max size of the JMS queue "exampleQueue" to be
                100000 bytes and would block any producers sending to that address to prevent that
                max size being exceeded.</para>
+            <para>Note the policy must be set to <literal>BLOCK</literal> to enable blocking producer
+               flow control.</para>
             <para>Please note the default value for <literal>address-full-policy</literal> is to
                   <literal>PAGE</literal>. Please see the chapter on paging for more information on
                paging.</para>
@@ -268,10 +272,10 @@
          <para>HornetQ also allows the rate a producer can emit message to be limited, in units of
             messages per second. By specifying such a rate, HornetQ will ensure that producer never
             produces messages at a rate higher than that specified.</para>
-         <para>The rate must be a positive integer to enable and is the maximum desired message
-            consumption rate specified in units of messages per second. Setting this to <literal
-               >-1</literal> disables rate limited flow control. The default value is <literal
-               >-1</literal>.</para>
+         <para>The rate must be a positive integer to enable this functionality and is the maximum
+            desired message production rate specified in units of messages per second. Setting this
+            to <literal>-1</literal> disables rate limited flow control. The default value is
+               <literal>-1</literal>.</para>
          <para>Please see the <xref linkend="producer-rate-limiting-example"/> for a working example
             of limiting producer rate.</para>
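+         <para>Using JMS, this can likewise be specified on a connection factory in
+            <literal>hornetq-jms.xml</literal>. Again this is a sketch only, assuming a
+            <literal>producer-max-rate</literal> element; check the shipped configuration for the
+            exact element name:</para>
+         <programlisting>   &lt;connection-factory name="ConnectionFactory">
+      ...
+      &lt;producer-max-rate>10&lt;/producer-max-rate>
+   &lt;/connection-factory></programlisting>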
          <section id="flow-control.producer.rate.core.api">

Modified: trunk/docs/user-manual/en/persistence.xml
===================================================================
--- trunk/docs/user-manual/en/persistence.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/persistence.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -21,9 +21,8 @@
     <para>In this chapter we will describe how persistence works with HornetQ and how to configure
         it.</para>
     <para>HornetQ ships with a high performance journal. Since HornetQ handles its own persistence,
-        rather than relying on a database or other 3rd party persistence engine, we have been able
-        to tune the journal to gain optimal performance for the persistence of messages and
-        transactions.</para>
+        rather than relying on a database or other 3rd party persistence engine, it is highly
+        optimised for the specific messaging use cases.</para>
     <para>A HornetQ journal is an <emphasis>append only</emphasis> journal. It consists of a set of
         files on disk. Each file is pre-created to a fixed size and initially filled with padding.
         As operations are performed on the server, e.g. add message, update message, delete message,
@@ -53,20 +52,18 @@
         <listitem>
             <para>Java <ulink url="http://en.wikipedia.org/wiki/New_I/O">NIO</ulink>.</para>
             <para>The first implementation uses standard Java NIO to interface with the file system.
-                This provides very good performance and runs on any platform where there's a Java 5+
-                runtime.</para>
+                This provides extremely good performance and runs on any platform where there's a
+                Java 5+ runtime.</para>
         </listitem>
         <listitem id="aio-journal">
             <para>Linux Asynchronous IO</para>
             <para>The second implementation uses a thin native code wrapper to talk to the Linux
-                asynchronous IO library (AIO). In a highly concurrent environment, AIO can provide
-                better overall persistent throughput since it does not require explicit syncs to
-                flush operating system buffers to disk. Most disks can only support a limited number
-                of syncs per second, so a syncing approach does not scale well when the number of
-                concurrent transactions needed to be committed grows too large. With AIO, HornetQ
-                will be called back when the data has made it to disk, allowing us to avoid explicit
-                syncs altogether and simply send back confirmation of completion when AIO informs us
-                that the data has been persisted.</para>
+                asynchronous IO library (AIO). With AIO, HornetQ will be called back when the data
+                has made it to disk, allowing us to avoid explicit syncs altogether and simply send
+                back confirmation of completion when AIO informs us that the data has been
+                persisted.</para>
+            <para>Using AIO will typically provide even better performance than using Java
+                NIO.</para>
             <para>The AIO journal is only available when running Linux kernel 2.6 or later and after
                 having installed libaio (if it's not already installed). For instructions on how to
                 install libaio please see <xref linkend="installing-aio"/>.</para>
@@ -87,7 +84,7 @@
         <listitem>
             <para>Message journal.</para>
             <para>This journal instance stores all message related data, including the messages
-                themselves and also duplicate id caches.</para>
+                themselves and also duplicate-id caches.</para>
             <para>By default HornetQ will try and use an AIO journal. If AIO is not available, e.g.
                 the platform is not Linux with the correct kernel version or AIO has not been
                 installed then it will automatically fall back to using Java NIO which is available
@@ -96,8 +93,8 @@
     </itemizedlist>
     <para>For large messages, HornetQ persists them outside the message journal. This is discussed
         in <xref linkend="large-messages"/>.</para>
-    <para>HornetQ also pages messages to disk in low memory situations. This is discussed in <xref
-            linkend="paging"/>.</para>
+    <para>HornetQ can also be configured to page messages to disk in low memory situations. This is
+        discussed in <xref linkend="paging"/>.</para>
     <para>If no persistence is required at all, HornetQ can also be configured not to persist any
         data at all to storage as discussed in <xref linkend="persistence.enabled"/>.</para>
     <section id="configuring.bindings.journal">
@@ -132,8 +129,8 @@
                     physical volume in order to minimise disk head movement. If the journal is on a
                     volume which is shared with other processes which might be writing other files
                     (e.g. bindings journal, database, or transaction coordinator) then the disk head
-                    may well be moving rapidly between these files as it writes them, thus reducing
-                    performance.</para>
+                    may well be moving rapidly between these files as it writes them, thus
+                    drastically reducing performance.</para>
                 <para>When the message journal is stored on a SAN we recommend each journal instance
                     that is stored on the SAN is given its own LUN (logical unit).</para>
             </listitem>
@@ -184,10 +181,10 @@
             </listitem>
             <listitem id="configuring.message.journal.journal-max-io">
                 <para><literal>journal-max-io</literal></para>
-                <para>Write requests are queued up before being submitted to the system execution.
-                    This parameter controls the maximum number of write requests that can be in the
-                    IO queue at any one time. If the queue becomes full then writes will block until
-                    space is freed up. </para>
+                <para>Write requests are queued up before being submitted to the system for
+                    execution. This parameter controls the maximum number of write requests that can
+                    be in the IO queue at any one time. If the queue becomes full then writes will
+                    block until space is freed up. </para>
                 <para>When using NIO, this value should always be equal to <literal
                     >1</literal></para>
                 <para>When using AIO, the default should be <literal>500</literal>.</para>
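+                <para>For illustration (the values shown are examples only, and the
+                    <literal>journal-type</literal> element name should be checked against the
+                    configuration shipped with the distribution), the journal type and this limit
+                    are set in <literal>hornetq-configuration.xml</literal>:</para>
+                <programlisting>   &lt;journal-type>ASYNCIO&lt;/journal-type>
+   &lt;journal-max-io>500&lt;/journal-max-io></programlisting>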
@@ -208,9 +205,11 @@
                     so the system maintains different defaults for both NIO and AIO (default for NIO
                     is 3333333 nanoseconds - 300 times per second, default for AIO is 500000
                     nanoseconds - ie. 2000 times per second).</para>
-                <para>By increasing the timeout, you may be able to increase system throughput at
-                    the expense of latency, the default parameters are chosen to give a reasonable
-                    balance between throughput and latency.</para>
+                <note>
+                    <para>By increasing the timeout, you may be able to increase system throughput
+                        at the expense of latency; the default parameters are chosen to give a
+                        reasonable balance between throughput and latency.</para>
+                </note>
             </listitem>
             <listitem id="configuring.message.journal.journal-buffer-size">
                 <para><literal>journal-buffer-size</literal></para>
@@ -237,44 +236,45 @@
     <section id="disk-write-cache">
         <title>An important note on disabling disk write cache.</title>
         <warning>
-        <para>Most disks contain hardware write caches. A write cache can increase the apparent
-            performance of the disk because writes just go into the cache and are then lazily
-            written to the disk later. </para>
-        <para>This happens irrespective of whether you have executed a fsync() from the operating
-            system or correctly synced data from inside a Java program!</para>
-        <para>By default many systems ship with disk write cache enabled. This means that even after
-            syncing from the operating system there is no guarantee the data has actually made it to
-            disk, so if a failure occurs, critical data can be lost.</para>
-        <para>Some more expensive disks have non volatile or battery backed write caches which won't
-            necessarily lose data on event of failure, but you need to test them!</para>
-        <para>If your disk does not have an expensive non volatile or battery backed cache and it's
-            not part of some kind of redundant array, and you value your data integrity you need to
-            make sure disk write cache is disabled.</para>
-        <para>Be aware that disabling disk write cache can give you a nasty shock performance wise.
-            If you've been used to using disks with write cache enabled in their default setting,
-            unaware that your data integrity could be compromised, then disabling it will give you
-            an idea of how fast your disk can perform when acting really reliably.</para>
-        <para>On Linux you can inspect and/or change your disk's write cache settings using the
-            tools <literal>hdparm</literal> (for IDE disks) or <literal>sdparm</literal> or <literal
-                >sginfo</literal> (for SDSI/SATA disks)</para>
-        <para>On Windows you can check / change the setting by right clicking on the disk and
-        clicking properties.</para>
+            <para>Most disks contain hardware write caches. A write cache can increase the apparent
+                performance of the disk because writes just go into the cache and are then lazily
+                written to the disk later. </para>
+            <para>This happens irrespective of whether you have executed a fsync() from the
+                operating system or correctly synced data from inside a Java program!</para>
+            <para>By default many systems ship with disk write cache enabled. This means that even
+                after syncing from the operating system there is no guarantee the data has actually
+                made it to disk, so if a failure occurs, critical data can be lost.</para>
+            <para>Some more expensive disks have non volatile or battery backed write caches which
+                won't necessarily lose data in the event of failure, but you need to test them!</para>
+            <para>If your disk does not have an expensive non volatile or battery backed cache and
+                it's not part of some kind of redundant array (e.g. RAID), and you value your data
+                integrity, you need to make sure disk write cache is disabled.</para>
+            <para>Be aware that disabling disk write cache can give you a nasty shock performance
+                wise. If you've been used to using disks with write cache enabled in their default
+                setting, unaware that your data integrity could be compromised, then disabling it
+                will give you an idea of how fast your disk can perform when acting really
+                reliably.</para>
+            <para>On Linux you can inspect and/or change your disk's write cache settings using the
+                tools <literal>hdparm</literal> (for IDE disks) or <literal>sdparm</literal> or
+                    <literal>sginfo</literal> (for SCSI/SATA disks).</para>
+            <para>On Windows you can check / change the setting by right clicking on the disk and
+                clicking properties.</para>
         </warning>
     </section>
     <section id="installing-aio">
         <title>Installing AIO</title>
         <para>The Java NIO journal gives great performance, but if you are running HornetQ using
             Linux Kernel 2.6 or later, we highly recommend you use the <literal>AIO</literal>
-            journal for the best persistence performance especially under high concurrency.</para>
+            journal for the very best persistence performance.</para>
         <para>It's not possible to use the AIO journal under other operating systems or earlier
             versions of the Linux kernel.</para>
         <para>If you are running Linux kernel 2.6 or later and don't already have <literal
                 >libaio</literal> installed, you can easily install it using the following
             steps:</para>
         <para>Using yum, (e.g. on Fedora or Red Hat Enterprise Linux):
-            <programlisting>sudo yum install libaio</programlisting></para>
+            <programlisting>yum install libaio</programlisting></para>
         <para>Using aptitude, (e.g. on Ubuntu or Debian system):
-            <programlisting>sudo apt-get install libaio</programlisting></para>
+            <programlisting>apt-get install libaio</programlisting></para>
     </section>
     <section id="persistence.enabled">
         <title>Configuring HornetQ for Zero Persistence</title>

Modified: trunk/docs/user-manual/en/transaction-config.xml
===================================================================
--- trunk/docs/user-manual/en/transaction-config.xml	2010-01-07 14:48:27 UTC (rev 8769)
+++ trunk/docs/user-manual/en/transaction-config.xml	2010-01-07 16:24:23 UTC (rev 8770)
@@ -29,4 +29,6 @@
             >transaction-timeout</literal> property in <literal>hornetq-configuration.xml</literal> (value must be in milliseconds).
         The property <literal>transaction-timeout-scan-period</literal> configures how often, in
         milliseconds, to scan for old transactions.</para>
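+    <para>For example (the values here are illustrative), to set a timeout of 5 minutes and scan
+        for old transactions every second, add the following to
+        <literal>hornetq-configuration.xml</literal>:</para>
+    <programlisting>   &lt;transaction-timeout>300000&lt;/transaction-timeout>
+   &lt;transaction-timeout-scan-period>1000&lt;/transaction-timeout-scan-period></programlisting>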
+    <para>Please note that HornetQ will not unilaterally roll back any XA transactions in a
+        prepared state - these must be heuristically rolled back via the management API if you are
+        sure they will never be resolved by the transaction manager.</para>
 </chapter>


