[hornetq-commits] JBoss hornetq SVN: r9204 - branches/HnetQ_323_cn/docs/user-manual/zh.

do-not-reply at jboss.org
Thu May 6 11:45:36 EDT 2010


Author: gaohoward
Date: 2010-05-06 11:45:35 -0400 (Thu, 06 May 2010)
New Revision: 9204

Modified:
   branches/HnetQ_323_cn/docs/user-manual/zh/perf-tuning.xml
Log:
done


Modified: branches/HnetQ_323_cn/docs/user-manual/zh/perf-tuning.xml
===================================================================
--- branches/HnetQ_323_cn/docs/user-manual/zh/perf-tuning.xml	2010-05-06 12:55:31 UTC (rev 9203)
+++ branches/HnetQ_323_cn/docs/user-manual/zh/perf-tuning.xml	2010-05-06 15:45:35 UTC (rev 9204)
@@ -17,278 +17,209 @@
 <!-- permitted by applicable law.                                                  -->
 <!-- ============================================================================= -->
 <chapter id="perf-tuning">
-    <title>Performance Tuning</title>
-    <para>In this chapter we'll discuss how to tune HornetQ for optimum performance.</para>
+    <title>Performance Tuning</title>
+    <para>This chapter describes how to tune HornetQ for optimum performance.</para>
     <section>
-        <title>Tuning persistence</title>
+        <title>Tuning Persistence</title>
         <itemizedlist>
             <listitem>
-                <para>Put the message journal on its own physical volume. If the disk is shared with
-                    other processes e.g. transaction co-ordinator, database or other journals which
-                    are also reading and writing from it, then this may greatly reduce performance
-                    since the disk head may be skipping all over the place between the different
-                    files. One of the advantages of an append only journal is that disk head
-                    movement is minimised - this advantage is destroyed if the disk is shared. If
-                    you're using paging or large messages make sure they're ideally put on separate
-                    volumes too.</para>
+                <para>Put the message journal on its own physical volume. If the volume is shared
+                    with other data, for example a transaction coordinator, a database or other
+                    journals, the extra reads and writes make the disk head skip between many
+                    different files and performance drops dramatically. Our journal is append-only
+                    precisely to minimise disk head movement, and that benefit is lost if the disk
+                    is shared. If you use paging or large messages, ideally put them on their own
+                    separate volumes as well.</para>
             </listitem>
             <listitem>
-                <para>Minimum number of journal files. Set <literal>journal-min-files</literal> to a
-                    number of files that would fit your average sustainable rate. If you see new
-                    files being created on the journal data directory too often, i.e. lots of data
-                    is being persisted, you need to increase the minimal number of files, this way
-                    the journal would reuse more files instead of creating new data files.</para>
+                <para>Minimise the number of journal files. Set <literal>journal-min-files</literal>
+                    to a value that matches your average sustained load. If you see new journal
+                    files being created frequently, a lot of data is being persisted and you should
+                    increase this value so that HornetQ spends more time re-using files instead of
+                    creating new ones (a configuration sketch follows this list).</para>
             </listitem>
             <listitem>
-                <para>Journal file size. The journal file size should be aligned to the capacity of
-                    a cylinder on the disk. The default value 10MiB should be enough on most
-                    systems.</para>
+                <para>Journal file size. The journal file size should ideally be aligned to the
+                    capacity of a disk cylinder. The default of 10MiB is enough on most
+                    systems.</para>
             </listitem>
             <listitem>
-                <para>Use AIO journal. If using Linux, try to keep your journal type as AIO. AIO
-                    will scale better than Java NIO.</para>
+                <para>Use the AIO journal. On Linux, keep the journal type set to AIO; AIO scales
+                    better than Java NIO.</para>
             </listitem>
             <listitem>
-                <para>Tune <literal>journal-buffer-timeout</literal>. The timeout can be increased
-                    to increase throughput at the expense of latency.</para>
+                <para>Tune <literal>journal-buffer-timeout</literal>. Increasing it raises
+                    throughput at the expense of latency.</para>
             </listitem>
             <listitem>
-                <para>If you're running AIO you might be able to get some better performance by
-                    increasing <literal>journal-max-io</literal>. DO NOT change this parameter if
-                    you are running NIO.</para>
+                <para>If you are running AIO, increasing <literal>journal-max-io</literal> may
+                    improve performance. Do NOT change this parameter if you are running
+                    NIO.</para>
             </listitem>
         </itemizedlist>
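+        <para>As an illustration only, the element names below follow the parameters discussed
+            above and the values are placeholders rather than recommendations. The journal settings
+            live in <literal>hornetq-configuration.xml</literal> and might look like this:</para>
+        <programlisting><![CDATA[
+<journal-type>ASYNCIO</journal-type>
+<journal-min-files>10</journal-min-files>
+<journal-file-size>10485760</journal-file-size>
+<journal-buffer-timeout>500000</journal-buffer-timeout>
+<journal-max-io>500</journal-max-io>
+]]></programlisting>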
     </section>
     <section>
-        <title>Tuning JMS</title>
-        <para>There are a few areas where some tweaks can be done if you are using the JMS
-            API</para>
+        <title>Tuning JMS</title>
+        <para>If you are using the JMS API, there are a few areas where some tweaks can improve
+            performance.</para>
         <itemizedlist>
             <listitem>
-                <para>Disable message id. Use the <literal>setDisableMessageID()</literal> method on
-                    the <literal>MessageProducer</literal> class to disable message ids if you don't
-                    need them. This decreases the size of the message and also avoids the overhead
-                    of creating a unique ID.</para>
+                <para>Disable message IDs. If you do not need them, use the
+                    <literal>setDisableMessageID()</literal> method on
+                    <literal>MessageProducer</literal> to turn them off. This reduces the message
+                    size and avoids the overhead of creating a unique ID (see the sketch after this
+                    list).</para>
             </listitem>
             <listitem>
-                <para>Disable message timestamp. Use the <literal
-                        >setDisableMessageTimeStamp()</literal> method on the <literal
-                        >MessageProducer</literal> class to disable message timestamps if you don't
-                    need them.</para>
+                <para>Disable message timestamps. If you do not need them, turn them off with the
+                    <literal>setDisableMessageTimestamp()</literal> method on
+                    <literal>MessageProducer</literal>.</para>
             </listitem>
             <listitem>
-                <para>Avoid <literal>ObjectMessage</literal>. <literal>ObjectMessage</literal> is
-                    convenient but it comes at a cost. The body of a <literal
-                        >ObjectMessage</literal> uses Java serialization to serialize it to bytes.
-                    The Java serialized form of even small objects is very verbose so takes up a lot
-                    of space on the wire, also Java serialization is slow compared to custom
-                    marshalling techniques. Only use <literal>ObjectMessage</literal> if you really
-                    can't use one of the other message types, i.e. if you really don't know the type
-                    of the payload until run-time.</para>
+                <para>Avoid <literal>ObjectMessage</literal>. <literal>ObjectMessage</literal> is
+                    convenient but costly: its body is converted to bytes using Java serialization.
+                    Even small objects serialize to a verbose form, which inflates the amount of
+                    data on the wire, and Java serialization is slow compared to custom marshalling
+                    techniques. Only use <literal>ObjectMessage</literal> when you have no other
+                    choice, for example when you do not know the payload type until run
+                    time.</para>
             </listitem>
             <listitem>
-                <para>Avoid <literal>AUTO_ACKNOWLEDGE</literal>. <literal>AUTO_ACKNOWLEDGE</literal>
-                    mode requires an acknowledgement to be sent from the server for each message
-                    received on the client, this means more traffic on the network. If you can, use
-                        <literal>DUPS_OK_ACKNOWLEDGE</literal> or use <literal
-                        >CLIENT_ACKNOWLEDGE</literal> or a transacted session and batch up many
-                    acknowledgements with one acknowledge/commit. </para>
+                <para>Avoid <literal>AUTO_ACKNOWLEDGE</literal>. <literal>AUTO_ACKNOWLEDGE</literal>
+                    requires an acknowledgement to be sent to the server for every message
+                    received, which adds network traffic. If you can, use
+                    <literal>DUPS_OK_ACKNOWLEDGE</literal> or <literal>CLIENT_ACKNOWLEDGE</literal>,
+                    or use a transacted session and batch up many acknowledgements into one
+                    acknowledge/commit.</para>
             </listitem>
             <listitem>
-                <para>Avoid durable messages. By default JMS messages are durable. If you don't
-                    really need durable messages then set them to be non-durable. Durable messages
-                    incur a lot more overhead in persisting them to storage.</para>
+                <para>Avoid durable messages. JMS messages are durable by default. If you do not
+                    really need durable messages, make them non-durable; durable messages are
+                    written to storage and incur much more overhead.</para>
             </listitem>
         </itemizedlist>
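+        <para>A minimal client-side sketch combining the tweaks above follows. The JNDI names are
+            hypothetical and error handling is omitted; treat it as an outline rather than a
+            drop-in example.</para>
+        <programlisting>
+import javax.jms.Connection;
+import javax.jms.ConnectionFactory;
+import javax.jms.DeliveryMode;
+import javax.jms.MessageProducer;
+import javax.jms.Queue;
+import javax.jms.Session;
+import javax.jms.TextMessage;
+import javax.naming.InitialContext;
+
+public class TunedJmsSender
+{
+   public static void main(String[] args) throws Exception
+   {
+      InitialContext ic = new InitialContext();
+      // Hypothetical JNDI names - use whatever your hornetq-jms.xml actually binds.
+      ConnectionFactory cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
+      Queue queue = (Queue) ic.lookup("/queue/ExampleQueue");
+
+      Connection connection = cf.createConnection();
+      // DUPS_OK_ACKNOWLEDGE lets the client batch acknowledgements instead of
+      // sending one to the server per message, as AUTO_ACKNOWLEDGE does.
+      Session session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
+      MessageProducer producer = session.createProducer(queue);
+
+      // Skip generation of message IDs and timestamps if the application does not need them.
+      producer.setDisableMessageID(true);
+      producer.setDisableMessageTimestamp(true);
+      // Non-persistent delivery avoids the cost of writing every message to the journal.
+      producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
+
+      // Prefer TextMessage or BytesMessage over ObjectMessage to avoid Java serialization.
+      TextMessage message = session.createTextMessage("hello");
+      producer.send(message);
+
+      connection.close();
+   }
+}
+</programlisting>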
     </section>
     <section>
-        <title>Other Tunings</title>
-        <para>There are various other places in HornetQ where we can perform some tuning:</para>
+        <title>Other Tunings</title>
+        <para>There are various other places in HornetQ where some tuning can be done:</para>
         <itemizedlist>
             <listitem>
-                <para>Use Asynchronous Send Acknowledgements. If you need to send durable messages
-                    non transactionally and you need a guarantee that they have reached the server
-                    by the time the call to send() returns, don't set durable messages to be sent
-                    blocking, instead use asynchronous send acknowledgements to get your
-                    acknowledgements of send back in a separate stream, see <xref
-                        linkend="send-guarantees"/> for more information on this.</para>
+                <para>Use asynchronous send acknowledgements. If you send durable messages
+                    non-transactionally and need a guarantee that they have reached the server by
+                    the time send() returns, do not send them blocking; use asynchronous send
+                    acknowledgements to get the acknowledgements back on a separate stream instead
+                    (a core-API sketch follows this list). See <xref
+                        linkend="send-guarantees"/> for more information.</para>
             </listitem>
             <listitem>
-                <para>Use pre-acknowledge mode. With pre-acknowledge mode, messages are acknowledged
-                        <literal>before</literal> they are sent to the client. This reduces the
-                    amount of acknowledgement traffic on the wire. For more information on this, see
-                        <xref linkend="pre-acknowledge"/>.</para>
+                <para>Use pre-acknowledge mode. With pre-acknowledge mode, messages are acknowledged
+                    <literal>before</literal> they are sent to the client, which saves the
+                    acknowledgement traffic that would otherwise cross the wire. For more
+                    information, see
+                        <xref linkend="pre-acknowledge"/>.</para>
             </listitem>
             <listitem>
-                <para>Disable security. You may get a small performance boost by disabling security
-                    by setting the <literal>security-enabled</literal> parameter to <literal
-                        >false</literal> in <literal>hornetq-configuration.xml</literal>.</para>
+                <para>Disable security. Setting the <literal>security-enabled</literal> parameter
+                    to <literal>false</literal> in <literal>hornetq-configuration.xml</literal>
+                    turns security off and may give a small performance boost (a configuration
+                    fragment follows this list).</para>
             </listitem>
             <listitem>
-                <para>Disable persistence. If you don't need message persistence, turn it off
-                    altogether by setting <literal>persistence-enabled</literal> to false in
-                        <literal>hornetq-configuration.xml</literal>.</para>
+                <para>Disable persistence. If you do not need message persistence, turn it off
+                    altogether by setting <literal>persistence-enabled</literal> to false in
+                    <literal>hornetq-configuration.xml</literal>.</para>
             </listitem>
             <listitem>
-                <para>Sync transactions lazily. Setting <literal
-                        >journal-sync-transactional</literal> to <literal>false</literal> in
-                        <literal>hornetq-configuration.xml</literal> can give you better
-                    transactional persistent performance at the expense of some possibility of loss
-                    of transactions on failure. See <xref linkend="send-guarantees"/> for more
-                    information.</para>
+                <para>Sync transactions lazily. Setting <literal>journal-sync-transactional</literal>
+                    to <literal>false</literal> in <literal>hornetq-configuration.xml</literal>
+                    gives better transactional persistence performance, at the risk of losing
+                    transactions on failure. See <xref linkend="send-guarantees"/> for more
+                    information.</para>
             </listitem>
             <listitem>
-                <para>Sync non transactional lazily. Setting <literal
-                        >journal-sync-non-transactional</literal> to <literal>false</literal> in
-                        <literal>hornetq-configuration.xml</literal> can give you better
-                    non-transactional persistent performance at the expense of some possibility of
-                    loss of durable messages on failure. See <xref linkend="send-guarantees"/> for
-                    more information.</para>
+                <para>Sync non-transactional operations lazily. Setting
+                    <literal>journal-sync-non-transactional</literal> to <literal>false</literal>
+                    in <literal>hornetq-configuration.xml</literal> gives better non-transactional
+                    persistence performance, at the risk of losing durable messages on failure. See
+                    <xref linkend="send-guarantees"/> for more information.</para>
             </listitem>
             <listitem>
-                <para>Send messages non blocking. Setting <literal>block-on-durable-send</literal>
-                    and <literal>block-on-non-durable-send</literal> to <literal>false</literal> in
-                        <literal>hornetq-jms.xml</literal> (if you're using JMS and JNDI) or
-                    directly on the ClientSessionFactory. This means you don't have to wait a whole
-                    network round trip for every message sent. See <xref linkend="send-guarantees"/>
-                    for more information.</para>
+                <para>Send messages non-blocking. Set <literal>block-on-durable-send</literal> and
+                    <literal>block-on-non-durable-send</literal> to <literal>false</literal> in
+                    <literal>hornetq-jms.xml</literal> (if you are using JMS and JNDI), or directly
+                    on the ClientSessionFactory, so that a send does not have to wait for a whole
+                    network round trip (a <literal>hornetq-jms.xml</literal> sketch follows this
+                    list). See <xref linkend="send-guarantees"/> for more information.</para>
             </listitem>
             <listitem>
-                <para>Socket NIO vs Socket Old IO. By default HornetQ uses Socket NIO on the server
-                    and old (blocking) IO on the client side (see the chapter on configuring
-                    transports for more information <xref linkend="configuring-transports"/>). NIO
-                    is much more scalable but can give you some latency hit compared to old blocking
-                    IO. If you expect to be able to service many thousands of connections on the
-                    server, then continue to use NIO on the server. However, if don't expect many
-                    thousands of connections on the server you can configure the server acceptors to
-                    use old IO, and might get a small performance advantage.</para>
+                <para>Socket NIO vs old socket IO. By default HornetQ uses socket NIO on the server
+                    and old (blocking) IO on the client side (see the chapter on configuring
+                    transports, <xref linkend="configuring-transports"/>). NIO is much more
+                    scalable than old blocking IO but adds some latency. If your server must handle
+                    many thousands of concurrent connections, keep using NIO on the server;
+                    otherwise, configuring the server acceptors to use old IO may give a small
+                    performance advantage.</para>
             </listitem>
             <listitem>
-                <para>Use the core API not JMS. Using the JMS API you will have slightly lower
-                    performance than using the core API, since all JMS operations need to be
-                    translated into core operations before the server can handle them. If using the
-                    core API try to use methods that take <literal>SimpleString</literal> as much as
-                    possible. <literal>SimpleString</literal>, unlike java.lang.String does not
-                    require copying before it is written to the wire, so if you re-use <literal
-                        >SimpleString</literal> instances between calls then you can avoid some
-                    unnecessary copying.</para>
+                <para>Use the core API rather than JMS. The JMS API performs slightly worse than
+                    the core API, because every JMS operation has to be translated into a core
+                    operation before the server can handle it. When using the core API, prefer the
+                    methods that take <literal>SimpleString</literal>. Unlike java.lang.String, a
+                    <literal>SimpleString</literal> does not need to be copied before it is written
+                    to the wire, so re-using <literal>SimpleString</literal> instances between
+                    calls avoids unnecessary copying.</para>
             </listitem>
         </itemizedlist>
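+        <para>For reference, a fragment of <literal>hornetq-configuration.xml</literal> covering
+            the server-side switches mentioned above might look like the following. The values are
+            purely illustrative; weigh each one against the reliability guarantees you are giving
+            up.</para>
+        <programlisting><![CDATA[
+<security-enabled>false</security-enabled>
+<persistence-enabled>false</persistence-enabled>
+<journal-sync-transactional>false</journal-sync-transactional>
+<journal-sync-non-transactional>false</journal-sync-non-transactional>
+]]></programlisting>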
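+        <para>Similarly, a sketch of a connection factory entry in
+            <literal>hornetq-jms.xml</literal> with non-blocking sends enabled. The factory name is
+            hypothetical and the connector and JNDI entries are omitted:</para>
+        <programlisting><![CDATA[
+<connection-factory name="ConnectionFactory">
+   <!-- connectors and JNDI entries omitted -->
+   <block-on-durable-send>false</block-on-durable-send>
+   <block-on-non-durable-send>false</block-on-non-durable-send>
+</connection-factory>
+]]></programlisting>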
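+        <para>Finally, a hedged core-API sketch of asynchronous send acknowledgements combined
+            with <literal>SimpleString</literal> re-use. Treat the class and method names as
+            assumptions to be checked against the HornetQ 2.x client API documentation; the
+            confirmation window size is an assumed prerequisite described in the send guarantees
+            chapter.</para>
+        <programlisting>
+import org.hornetq.api.core.Message;
+import org.hornetq.api.core.SimpleString;
+import org.hornetq.api.core.TransportConfiguration;
+import org.hornetq.api.core.client.ClientMessage;
+import org.hornetq.api.core.client.ClientProducer;
+import org.hornetq.api.core.client.ClientSession;
+import org.hornetq.api.core.client.ClientSessionFactory;
+import org.hornetq.api.core.client.HornetQClient;
+import org.hornetq.api.core.client.SendAcknowledgementHandler;
+
+public class AsyncSendAckExample
+{
+   public static void main(String[] args) throws Exception
+   {
+      ClientSessionFactory factory = HornetQClient.createClientSessionFactory(
+            new TransportConfiguration(
+                  "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"));
+      // Send durable messages without blocking; acknowledgements come back asynchronously.
+      factory.setBlockOnDurableSend(false);
+      // A confirmation window is needed for send acknowledgements to flow (see send-guarantees).
+      factory.setConfirmationWindowSize(1024 * 1024);
+
+      ClientSession session = factory.createSession();
+      session.setSendAcknowledgementHandler(new SendAcknowledgementHandler()
+      {
+         public void sendAcknowledged(Message message)
+         {
+            // Invoked on a separate stream once the server has received the message.
+         }
+      });
+
+      // Re-using one SimpleString instance avoids copying the address on every call.
+      SimpleString address = new SimpleString("example.address");
+      ClientProducer producer = session.createProducer(address);
+      ClientMessage message = session.createMessage(true);
+      producer.send(message);
+
+      session.close();
+      factory.close();
+   }
+}
+</programlisting>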
     </section>
     <section>
-        <title>Tuning Transport Settings</title>
+        <title>Tuning Transport Settings</title>
         <itemizedlist>
             <listitem>
-                <para>Enable <ulink url="http://en.wikipedia.org/wiki/Nagle's_algorithm">Nagle's
-                        algorithm</ulink>. If you are sending many small messages, such that more
-                    than one can fit in a single IP packet thus providing better performance. This
-                    is done by setting <literal>tcp-no-delay</literal> to false with the Netty
-                    transports. See <xref linkend="configuring-transports"/> for more information on
-                    this. </para>
-                <para>Enabling Nagle's algorithm can make a very big difference in performance and
-                    is highly recommended if you're sending a lot of asynchronous traffice.</para>
+                <para>Enable <ulink url="http://en.wikipedia.org/wiki/Nagle's_algorithm">Nagle's
+                    algorithm</ulink>. If you are sending many small messages, more than one of
+                    them can then be carried in a single IP packet, which improves performance.
+                    This is done by setting <literal>tcp-no-delay</literal> to false on the Netty
+                    transports. See <xref linkend="configuring-transports"/> for details.</para>
+                <para>Enabling Nagle's algorithm can make a very big difference and is highly
+                    recommended if the application does a lot of asynchronous sends.</para>
             </listitem>
             <listitem>
-                <para>TCP buffer sizes. If you have a fast network and fast machines you may get a
-                    performance boost by increasing the TCP send and receive buffer sizes. See the
-                        <xref linkend="configuring-transports"/> for more information on this.
+                <para>TCP buffer sizes. If you have a fast network and fast machines, increasing
+                    the TCP send and receive buffer sizes may give a performance boost. See
+                    <xref linkend="configuring-transports"/> for details (an acceptor sketch
+                    follows this list).
                 </para>
             </listitem>
             <listitem>
-                <para>Increase limit on file handles on the server. If you expect a lot of
-                    concurrent connections on your servers, or if clients are rapidly opening and
-                    closing connections, you should make sure the user running the server has
-                    permission to create sufficient file handles.</para>
-                <para>This varies from operating system to operating system. On Linux systems you
-                    can increase the number of allowable open file handles in the file <literal
-                        >/etc/security/limits.conf</literal> e.g. add the lines
+                <para>Increase the limit on file handles on the server. If you expect a lot of
+                    concurrent connections, or clients open and close connections rapidly, make
+                    sure the user running the server is allowed to create enough file
+                    handles.</para>
+                <para>How to do this varies from operating system to operating system. On Linux
+                    you can increase the number of allowable open file handles in <literal
+                        >/etc/security/limits.conf</literal>, e.g. by adding the lines
                     <programlisting>
 serveruser     soft    nofile  20000
 serveruser     hard    nofile  20000                   
                 </programlisting>
-                    This would allow up to 20000 file handles to be open by the user <literal
-                        >serveruser</literal>. </para>
+                    This allows the user <literal>serveruser</literal> to open up to 20000 file
+                    handles.</para>
             </listitem>
         </itemizedlist>
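+        <para>As a sketch only, an acceptor entry in <literal>hornetq-configuration.xml</literal>
+            tuned along these lines might look like the following. The
+            <literal>tcp-send-buffer-size</literal> and <literal>tcp-receive-buffer-size</literal>
+            parameter names are assumed from the transports chapter, and the values are arbitrary
+            examples rather than recommendations.</para>
+        <programlisting><![CDATA[
+<acceptor name="netty">
+   <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+   <param key="tcp-no-delay" value="false"/>
+   <param key="tcp-send-buffer-size" value="1048576"/>
+   <param key="tcp-receive-buffer-size" value="1048576"/>
+</acceptor>
+]]></programlisting>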
     </section>
     <section>
-        <title>Tuning the VM</title>
-        <para>We highly recommend you use the latest Java 6 JVM, especially in the area of
-            networking, many improvements have been made since Java 5. We test internally using the
-            Sun JVM, so some of these tunings won't apply to JDKs from other providers (e.g. IBM or
-            JRockit)</para>
+        <title>Tuning the VM</title>
+        <para>We highly recommend using the latest Java 6 JVM; it improves on Java 5 in many areas,
+            networking in particular. Our internal testing is done on the Sun JVM, so some of these
+            tunings may not apply to JVMs from other providers (e.g. IBM or JRockit).</para>
         <itemizedlist>
             <listitem>
-                <para>Garbage collection. For smooth server operation we recommend using a parallel
-                    garbage collection algorithm, e.g. using the JVM argument <literal
-                        >-XX:+UseParallelGC</literal> on Sun JDKs.</para>
+                <para>Garbage collection. For smooth server operation we recommend a parallel
+                    garbage collection algorithm, e.g. the JVM argument
+                    <literal>-XX:+UseParallelGC</literal> on Sun JDKs (an example command line
+                    follows this list).</para>
             </listitem>
             <listitem id="perf-tuning.memory">
-                <para>Memory settings. Give as much memory as you can to the server. HornetQ can run
-                    in low memory by using paging (described in <xref linkend="paging"/>) but if it
-                    can run with all queues in RAM this will improve performance. The amount of
-                    memory you require will depend on the size and number of your queues and the
-                    size and number of your messages. Use the JVM arguments <literal>-Xms</literal>
-                    and <literal>-Xmx</literal> to set server available RAM. We recommend setting
-                    them to the same high value.</para>
-                <para>HornetQ will regularly sample JVM memory and reports if the available memory
-                    is below a configurable threshold. Use this information to properly set JVM
-                    memory and paging. The sample is disabled by default. To enabled it, configure
-                    the sample frequency by setting <literal>memory-measure-interval</literal> in
-                        <literal>hornetq-configuration.xml</literal> (in milliseconds). When the
-                    available memory goes below the configured threshold, a warning is logged. The
-                    threshold can be also configured by setting <literal
-                        >memory-warning-threshold</literal> in <literal
-                        >hornetq-configuration.xml</literal> (default is 25%).</para>
+                <para>Memory settings. Give the server as much memory as you can. HornetQ can run
+                    in very little memory by using paging (described in <xref linkend="paging"/>),
+                    but performance is best when all queues fit in RAM. How much memory you need
+                    depends on the size and number of your queues and of your messages. Use the JVM
+                    arguments <literal>-Xms</literal> and <literal>-Xmx</literal> to set the RAM
+                    available to the server; we recommend setting both to the same high
+                    value.</para>
+                <para>HornetQ can regularly sample JVM memory and report when the available memory
+                    falls below a configurable threshold; use this information to set JVM memory
+                    and paging sensibly. Sampling is disabled by default. To enable it, configure
+                    the sample frequency (in milliseconds) with
+                    <literal>memory-measure-interval</literal> in
+                    <literal>hornetq-configuration.xml</literal>. When the available memory drops
+                    below the threshold, a warning is logged. The threshold itself is configured
+                    with <literal>memory-warning-threshold</literal> in <literal
+                        >hornetq-configuration.xml</literal> (default 25%); a configuration sketch
+                    follows this list.</para>
             </listitem>
             <listitem>
-                <para>Aggressive options. Different JVMs provide different sets of JVM tuning
-                    parameters, for the Sun Hotspot JVM the full list of options is available <ulink
+                <para>Aggressive options. Different JVMs provide different sets of tuning
+                    parameters; for the Sun Hotspot JVM the full list of options is available <ulink
                         url="http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp"
-                        >here</ulink>. We recommend at least using <literal
-                        >-XX:+AggressiveOpts</literal> and<literal>
-                        -XX:+UseFastAccessorMethods</literal>. You may get some mileage with the
-                    other tuning parameters depending on your OS platform and application usage
-                    patterns.</para>
+                        >here</ulink>. We recommend at least using
+                    <literal>-XX:+AggressiveOpts</literal> and
+                    <literal>-XX:+UseFastAccessorMethods</literal>. Depending on your OS platform
+                    and application usage patterns, other tuning parameters may give you some
+                    further mileage.</para>
             </listitem>
         </itemizedlist>
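+        <para>Putting the flags above together, the arguments added to the server's
+            <literal>java</literal> command line might look like this; the heap sizes are purely
+            illustrative and must be sized to your own queues and messages:</para>
+        <programlisting>
+-Xms4096M -Xmx4096M -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods
+</programlisting>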
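+        <para>And a sketch of the memory sampling settings in
+            <literal>hornetq-configuration.xml</literal>; the interval is an arbitrary example and
+            the threshold value assumes the 25% default is written as a plain percentage:</para>
+        <programlisting><![CDATA[
+<memory-measure-interval>30000</memory-measure-interval>
+<memory-warning-threshold>25</memory-warning-threshold>
+]]></programlisting>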
     </section>
     <section>
-        <title>Avoiding Anti-Patterns</title>
+        <title>Avoiding Anti-Patterns</title>
         <itemizedlist>
             <listitem>
-                <para>Re-use connections / sessions / consumers / producers. Probably the most
-                    common messaging anti-pattern we see is users who create a new
-                    connection/session/producer for every message they send or every message they
-                    consume. This is a poor use of resources. These objects take time to create and
-                    may involve several network round trips. Always re-use them.</para>
+                <para>Re-use connections / sessions / consumers / producers. Probably the most
+                    common mistake is to create a new connection, session, producer or consumer for
+                    every message sent or received. This is a poor use of resources: these objects
+                    take time to create and may involve several network round trips. Always re-use
+                    them (a sketch follows this list).</para>
                 <note>
-                    <para>Some popular libraries such as the Spring JMS Template are known to use
-                        these anti-patterns. If you're using Spring JMS Template and you're getting
-                        poor performance you know why. Don't blame HornetQ!</para>
+                    <para>Some popular frameworks such as the Spring JMS Template are known to use
+                        these anti-patterns. If you are using it and getting poor performance, that
+                        is why. Don't blame HornetQ!</para>
                 </note>
             </listitem>
             <listitem>
-                <para>Avoid fat messages. Verbose formats such as XML take up a lot of space on the
-                    wire and performance will suffer as result. Avoid XML in message bodies if you
-                    can.</para>
+                <para>Avoid fat messages. Verbose formats such as XML take up a lot of space on the
+                    wire and hurt performance, so avoid XML in message bodies if you can.</para>
             </listitem>
             <listitem>
-                <para>Don't create temporary queues for each request. This common anti-pattern
-                    involves the temporary queue request-response pattern. With the temporary queue
-                    request-response pattern a message is sent to a target and a reply-to header is
-                    set with the address of a local temporary queue. When the recipient receives the
-                    message they process it then send back a response to the address specified in
-                    the reply-to. A common mistake made with this pattern is to create a new
-                    temporary queue on each message sent. This will drastically reduce performance.
-                    Instead the temporary queue should be re-used for many requests.</para>
+                <para>Don't create a new temporary queue for each request. Temporary queues are
+                    commonly used in the request-response pattern: a message is sent to a target
+                    with a reply-to header pointing at a local temporary queue, and when the
+                    recipient has processed the message it sends the response to the address given
+                    in reply-to. Creating a new temporary queue for every message sent will
+                    drastically reduce performance; instead, re-use the same temporary queue for
+                    many requests.</para>
             </listitem>
             <listitem>
-                <para>Don't use Message-Driven Beans for the sake of it. As soon as you start using
-                    MDBs you are greatly increasing the codepath for each message received compared
-                    to a straightforward message consumer, since a lot of extra application server
-                    code is executed. Ask yourself do you really need MDBs? Can you accomplish the
-                    same task using just a normal message consumer?</para>
+                <para>Don't use Message-Driven Beans just for the sake of it. With an MDB the code
+                    path for each received message is much longer than with a plain message
+                    consumer, because a lot of extra application server code is executed. Ask
+                    yourself whether you really need MDBs, or whether a normal message consumer
+                    could accomplish the same task.</para>
             </listitem>
         </itemizedlist>
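+        <para>To make the last two points concrete, here is a hedged JMS sketch of the
+            request-response pattern that creates one session, one producer, one consumer and one
+            temporary queue and then re-uses them for every request. The class name and timeout are
+            illustrative, the connection is assumed to have been created and started by the caller,
+            and correlation of replies is left out for brevity.</para>
+        <programlisting>
+import javax.jms.Connection;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.MessageProducer;
+import javax.jms.Queue;
+import javax.jms.Session;
+import javax.jms.TemporaryQueue;
+import javax.jms.TextMessage;
+
+public class Requestor
+{
+   private final Session session;
+   private final MessageProducer producer;
+   private final MessageConsumer replyConsumer;
+   private final TemporaryQueue replyQueue;
+
+   public Requestor(Connection connection, Queue requestQueue) throws Exception
+   {
+      // Create the session, producer, temporary reply queue and consumer once,
+      // then keep re-using them - not once per request.
+      session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
+      producer = session.createProducer(requestQueue);
+      replyQueue = session.createTemporaryQueue();
+      replyConsumer = session.createConsumer(replyQueue);
+   }
+
+   public Message request(String text) throws Exception
+   {
+      TextMessage requestMessage = session.createTextMessage(text);
+      // Point the recipient back at the shared temporary queue.
+      requestMessage.setJMSReplyTo(replyQueue);
+      producer.send(requestMessage);
+      // Wait up to five seconds for the reply on the same temporary queue.
+      return replyConsumer.receive(5000);
+   }
+}
+</programlisting>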
     </section>


