[jboss-cvs] JBossAS SVN: r99526 - projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US.

jboss-cvs-commits at lists.jboss.org jboss-cvs-commits at lists.jboss.org
Mon Jan 18 02:35:42 EST 2010


Author: laubai
Date: 2010-01-18 02:35:42 -0500 (Mon, 18 Jan 2010)
New Revision: 99526

Modified:
   projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Book_Info.xml
   projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_JBoss_Cache_JGroups.xml
   projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Revision_History.xml
Log:
Partial correction of Clustering_Guide_JBoss_Cache_JGroups.xml

Modified: projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Book_Info.xml
===================================================================
--- projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Book_Info.xml	2010-01-18 07:20:03 UTC (rev 99525)
+++ projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Book_Info.xml	2010-01-18 07:35:42 UTC (rev 99526)
@@ -7,7 +7,7 @@
 	<subtitle>for JBoss Enterprise Web Platform 5.0</subtitle>	
 	<edition>1</edition>
 	<issuenum>1</issuenum>
-	<pubsnumber>0.5</pubsnumber>
+	<pubsnumber>0.6</pubsnumber>
 	<productname>JBoss Enterprise Web Platform</productname>
 	<productnumber>5.0</productnumber>
 <!--	<pubdate>, 2009</pubdate> -->

Modified: projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_JBoss_Cache_JGroups.xml
===================================================================
--- projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_JBoss_Cache_JGroups.xml	2010-01-18 07:20:03 UTC (rev 99525)
+++ projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_JBoss_Cache_JGroups.xml	2010-01-18 07:35:42 UTC (rev 99526)
@@ -1169,146 +1169,205 @@
           is configured in the <literal>FC</literal> sub-element under the JGroups
           <literal>Config</literal> element. Here is an example configuration.</para>
 
-<programlisting>
-&lt;FC max_credits="1000000"
+          <programlisting><![CDATA[<FC max_credits="1000000"
 down_thread="false" up_thread="false" 
-    min_threshold="0.10"/&gt;
-</programlisting>
+    min_threshold="0.10"/>]]></programlisting>
           
 
-<para>The configurable attributes in the <literal>FC</literal> element are as follows.</para>
-          <itemizedlist>
-            <listitem>
-              <para><emphasis role="bold">max_credits</emphasis> specifies the maximum number of credits
-                                (in bytes). This value should be smaller than the JVM heap size.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">min_credits</emphasis> specifies the threshold credit on the
-                                sender, below which the receiver should send in more credits.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">min_threshold</emphasis> specifies percentage value of the
-                                threshold. It overrides the <literal>min_credits</literal> attribute.</para>
-            </listitem>
-          </itemizedlist>
+          <para>The configurable attributes in the <literal>FC</literal> element are as follows.</para>
+          <variablelist>
+            <varlistentry>
+              <term><varname>max_credits</varname></term>
+              <listitem><para>Specifies the maximum number of credits in bytes. This value
+              should be smaller than the JVM heap size.</para></listitem>
+            </varlistentry>
+            <varlistentry>
+              <term><varname>min_credits</varname></term>
+              <listitem><para>Specifies the threshold credit on the sender, below which the
+              receiver should send more credits.</para></listitem>
+            </varlistentry>
+            <varlistentry>
+              <term><varname>min_threshold</varname></term>
+              <listitem><para>Specifies the percentage value of the threshold. This attribute
+              overrides <varname>min_credits</varname>.</para></listitem>
+            </varlistentry>
+          </variablelist>
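+          <para>As an illustrative sketch only (the values shown are examples, 
+          not recommendations), a configuration that sets all three attributes 
+          might look like this:</para>
+          <programlisting><![CDATA[<FC max_credits="2000000"
+    min_credits="200000"
+    min_threshold="0.10"
+    down_thread="false" up_thread="false"/>]]></programlisting>
+          <para>Here <varname>min_threshold</varname> overrides 
+          <varname>min_credits</varname>, so the effective threshold is ten 
+          percent of <varname>max_credits</varname>: 200,000 bytes.</para>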
 
-<note><title>Note</title>
-	<para>
-		Applications that use synchronous group RPC calls primarily do not require FC protocol in their JGroups protocol stack because synchronous communication, where the hread that makes the call blocks waiting for responses from all the members of the group, already slows overall rate of calls. Even though TCP provides flow control by itself, FC is still required in TCP based JGroups stacks because of group communication, where we essentially have to send group messages at the highest speed the slowest receiver can keep up with. TCP flow control only takes into account individual node  communications and has not a notion of who's the slowest in the group, which is why FC is required.
-	</para>
-</note>
+          <note>
+            <para>
+              Applications that primarily use synchronous group RPC calls do not require a flow 
+              control protocol in their JGroups protocol stack because synchronous communication, 
+              where the thread that makes the call blocks waiting for responses from all group
+              members, already slows the overall rate of calls. Even though TCP provides flow 
+              control by itself, <literal>FC</literal> is still required in TCP-based JGroups stacks 
+              because of group communication, where we essentially have to send group messages at the 
+              highest speed the slowest receiver can keep up with. TCP flow control only takes 
+              individual node communications into account, not which node is slowest, which is why
+              <literal>FC</literal> is required.
+	          </para>
+          </note>
 	  
 <section>
-	<title>Why is FC needed on top of TCP ? TCP has its own flow control !</title>
+	<title>Why is FC needed on top of TCP? TCP has its own flow control!</title>
 	<para>
-		
-		The reason is group communication, where we essentially have to send group messages at the highest speed the slowest receiver can keep up with. Let's say we have a cluster {A,B,C,D}. D is slow (maybe overloaded), the rest is fast. When A sends a group message, it establishes TCP connections A-A (conceptually), A-B, A-C and A-D (if they don't yet exist). So let's say A sends 100 million messages to the cluster. Because TCP's flow control only applies to A-B, A-C and A-D, but not to A-{B,C,D}, where {B,C,D} is the group, it is possible that A, B and C receive the 100M, but D only received 1M messages. (BTW: this is also the reason why we need NAKACK, although TCP does its own retransmission).
+	  The <literal>FC</literal> element is required for group communication where group messages must be 
+    sent at the highest speed that the slowest receiver can handle.
+  </para>
+  <para>
+    Say we have a cluster, <literal>{A,B,C,D}</literal>. Node <literal>D</literal> is slow, and the other
+    nodes are fast. When <literal>A</literal> sends a group message, it establishes the following TCP
+    connections: <literal>A-A</literal>, <literal>A-B</literal>, <literal>A-C</literal>, and 
+    <literal>A-D</literal>.
+  </para>
+  <para>
+    <literal>A</literal> sends 100 million messages to the cluster. TCP's flow control applies to the
+    connections between <literal>A-B</literal>, <literal>A-C</literal> and <literal>A-D</literal>
+    individually, but not to <literal>A-{B,C,D}</literal>, where <literal>{B,C,D}</literal> is the
+    group. It is therefore possible that nodes <literal>A</literal>, <literal>B</literal> and 
+    <literal>C</literal> receive the 100 million messages, but that node <literal>D</literal> will 
+    only receive one million messages. This is also the reason we need <literal>NAKACK</literal>,
+    even though TCP does its own retransmission.
 	</para>
 	<para>
-		Now JGroups has to buffer all messages in memory for the case when the original sender S dies and a node asks for retransmission of a message of S. Because all members buffer all messages they received, they need to purge stable messages (= messages seen by everyone) every now and then. This is done by the STABLE protocol, which can be configured to run the stability protocol round time based (e.g. every 50s) or size based (whenever 400K data has been received).
+    JGroups has to buffer all messages in memory in case the original sender dies and a node asks for
+    retransmission of a message. Because all members buffer all messages they receive, they must
+    occasionally purge <emphasis>stable</emphasis> messages (messages seen by all nodes). This is done
+    with the <literal>STABLE</literal> protocol, which can be configured to run the stability protocol
+    based on either time (for example, every fifty seconds) or size (every 400 kilobytes of data
+    received).
 	</para>
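+  <para>
+    As an illustrative sketch (the values are examples only), a 
+    <literal>STABLE</literal> configuration that runs the stability protocol 
+    every fifty seconds, or after 400 kilobytes of received data, might look 
+    like this:
+  </para>
+<programlisting><![CDATA[<pbcast.STABLE desired_avg_gossip="50000"
+    max_bytes="400000"
+    down_thread="false" up_thread="false"/>]]></programlisting>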
-	<para>		
-		In the above case, the slow node D will prevent the group from purging messages above 1M, so every member will buffer 99M messages ! This in most cases leads to OOM exceptions. Note that - although the sliding window protocol in TCP will cause writes to block if the window is full - we assume in the above case that this is still much faster for A-B and A-C than for A-D.
-	</para>
 	<para>
-		So, in summary, we need to send messages at a rate the slowest receiver (D) can handle.
+    In the example case, the slow node <literal>D</literal> will prevent the group from purging
+    messages other than the one million seen by <literal>D</literal>. In most cases this leads to
+    out-of-memory exceptions, so messages must be sent at a rate that the slowest receiver can handle.
 	</para>
 </section>
 
 <section>
 		<title>So do I always need FC?</title>
 	<para>
-		This depends on how the application uses the JGroups channel. Referring to the example above, if there was something about the application that would naturally cause A to slow down its rate of sending because D wasn't keeping up, then FC would not be needed.
-	</para>
-	<para>
-		A good example of such an application is one that makes synchronous group RPC calls (typically using a JGroups RpcDispatcher.) By synchronous, we mean the thread that makes the call blocks waiting for responses from all the members of the group. In that kind of application, the threads on A that are making calls would block waiting for responses from D, thus naturally slowing the overall rate of calls.
-	</para>
-	<para>
-		A JBoss Cache cluster configured for REPL_SYNC is a good example of an application that makes synchronous group RPC calls. If a channel is only used for a cache configured for REPL_SYNC, we recommend you remove FC from its protocol stack.
-	</para>
-	<para>
-		And, of course, if your cluster only consists of two nodes, including FC in a TCP-based protocol stack is unnecessary. There is no group beyond the single peer-to-peer relationship, and TCP's internal flow control will handle that just fine.
-	</para>
-	<para>
-		Another case where FC may not be needed is for a channel used by a JBoss Cache configured for buddy replication and a single buddy. Such a channel will in many respects act like a two node cluster, where messages are only exchanged with one other node, the buddy. (There may be other messages related to data gravitation that go to all members, but in a properly engineered buddy replication use case these should be infrequent. But if you remove FC be sure to load test your application.)
-	</para>	
+    This depends on the application's use of the JGroups channel. If node <literal>A</literal> from
+    the previous example was able to slow its send rate because <literal>D</literal> was not keeping up,
+    <literal>FC</literal> would not be required.
+  </para>
+  <para>
+    Applications that make synchronous group RPC calls are unlikely to require <literal>FC</literal>.
+    In synchronous applications, the thread that makes the call blocks waiting for responses from all
+    group members. This means that the threads on node <literal>A</literal> that make the calls would
+    block waiting for responses from node <literal>D</literal>, naturally slowing the overall rate
+    of calls.
+  </para>
+  <para>
+    A JBoss Cache cluster configured for <varname>REPL_SYNC</varname> is one example of an application
+    that makes synchronous group RPC calls. If a channel is used only for a cache configured for
+    <varname>REPL_SYNC</varname>, we recommend removing <literal>FC</literal> from its protocol stack.
+  </para>
+  <para>
+    If your cluster consists of two nodes, including <literal>FC</literal> in a TCP-based protocol
+    stack is unnecessary, since TCP's internal flow control can handle one peer-to-peer
+    relationship.
+  </para>
+  <para>
+    <literal>FC</literal> may also be omitted where a channel is used by a JBoss Cache configured for
+    buddy replication with a single buddy. Such a channel acts much like a two-node cluster, where
+    messages are only exchanged with one other node. Other messages related to data gravitation will be
+    sent to all members, but these should be infrequent.
+  </para>
+  <important>
+    <title>If you remove <literal>FC</literal></title>
+    <para>Be sure to load test your application if you remove the <literal>FC</literal> element.</para>
+  </important>
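+  <para>
+    As a hypothetical, abbreviated sketch (discovery, failure detection and 
+    other protocols are omitted for brevity), a TCP-based stack for a channel 
+    used only by a <varname>REPL_SYNC</varname> cache might simply leave out 
+    the <literal>FC</literal> element:
+  </para>
+<programlisting><![CDATA[<Config>
+    <TCP start_port="7800" down_thread="false" up_thread="false"/>
+    <!-- discovery and failure detection protocols omitted for brevity -->
+    <pbcast.NAKACK down_thread="false" up_thread="false"/>
+    <pbcast.STABLE down_thread="false" up_thread="false"/>
+    <!-- no FC element: synchronous RPC calls naturally throttle the send rate -->
+    <FRAG2 frag_size="60000" down_thread="false" up_thread="false"/>
+</Config>]]></programlisting>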
 </section>
+</section>
 	
 
-	  
-        </section>
-	
-	
 <section><title>Fragmentation</title>
 	<para>
-		This protocol fragments messages larger than certain size. Unfragments at the receiver's side. It works for both unicast and multicast messages. It is configured in the FRAG2 sub-element under the JGroups Config element. Here is an example configuration.
+		This protocol fragments messages larger than a certain size. Messages are rejoined at the
+    receiving end. This works for both unicast and multicast messages. It is configured in the
+    <literal>FRAG2</literal> sub-element under the JGroups <literal>Config</literal> element, like so:
 	</para>
-<programlisting><![CDATA[	
-		<FRAG2 frag_size="60000" down_thread="false" up_thread="false"/>]]>
+<programlisting><![CDATA[<FRAG2 frag_size="60000" down_thread="false" up_thread="false"/>]]>
 </programlisting>
 
 <para>
-The configurable attributes in the FRAG2 element are as follows.
+The configurable attributes in the <literal>FRAG2</literal> element are as follows.
 </para>
 
-<itemizedlist>
-	<listitem><para><emphasis role="bold">frag_size</emphasis> specifies the max frag size in bytes. Messages larger than that are fragmented.</para></listitem>
-</itemizedlist>
+<variablelist>
+  <varlistentry>
+    <term><varname>frag_size</varname></term>
+    <listitem><para>Specifies the maximum size of a fragment, in bytes. Messages larger than
+    this value are fragmented.</para></listitem>
+  </varlistentry>
+</variablelist>
 
-<note><title>Note</title>
+<important>
 	<para>
-		TCP protocol already provides fragmentation but a fragmentation JGroups protocol is still needed if FC is used. The reason for this is that if you send a message larger than FC.max_bytes, FC protocol would block. So, frag_size within FRAG2 needs to be set to always be less than FC.max_bytes.
+		The TCP protocol provides fragmentation, but a JGroups fragmentation protocol is still
+    required if <literal>FC</literal> is used, because if you send a message larger than
+    <literal>FC.max_credits</literal>, the <literal>FC</literal> protocol blocks. The
+    <varname>frag_size</varname> within <literal>FRAG2</literal> must always be less than
+    <literal>FC.max_credits</literal>.
 	</para>
-</note>
+</important>
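+<para>
+  For example (the values here are illustrative only), with the 
+  <literal>FC</literal> configuration shown earlier, where 
+  <varname>max_credits</varname> is one million bytes, a compatible 
+  <literal>FRAG2</literal> setting keeps <varname>frag_size</varname> well 
+  below that limit:
+</para>
+<programlisting><![CDATA[<FC max_credits="1000000" down_thread="false" up_thread="false"/>
+<FRAG2 frag_size="60000" down_thread="false" up_thread="false"/>]]></programlisting>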
 
 
 </section>
 	
         <section id="jbosscache-jgroups-other-st">
           <title>State Transfer</title>
-          <para>The state transfer service transfers the state from an existing node (i.e., the cluster
-                        coordinator) to a newly joining node. It is configured in the
-                        <literal>pbcast.STATE_TRANSFER</literal> sub-element under the JGroups <literal>Config</literal>
-                        element. It does not have any configurable attribute. Here is an example configuration.</para>
+          <para>The state transfer service transfers the state from an existing node (that is, the 
+          cluster coordinator) to a newly joining node. It is configured with the 
+          <literal>pbcast.STATE_TRANSFER</literal> sub-element under the JGroups 
+          <literal>Config</literal> element, as seen in the following code example. It has no 
+          configurable attributes.</para>
 <programlisting>
 &lt;pbcast.STATE_TRANSFER down_thread="false" up_thread="false"/&gt;
 </programlisting>
         </section>
 
 	<section id="jbosscache-jgroups-other-gc">
-          <title>Distributed Garbage Collection</title>
-          <para>
-		  In a JGroups cluster, all nodes have to store all messages received for potential retransmission in case of a failure. However, if we store all messages forever, we will run out of memory. So, the distributed garbage collection service in JGroups periodically purges messages that have seen by all nodes from the memory in each node. The distributed garbage  collection service is configured in the <literal>pbcast.STABLE</literal> sub-element under the JGroups  <literal>Config</literal> element. Here is an example configuration.
+    <title>Distributed Garbage Collection</title>
+    <para>
+      In a JGroups cluster, all nodes must store all messages received for potential 
+      retransmission in case of a failure. However, if we store all messages forever, we will 
+      run out of memory. The distributed garbage collection service in JGroups periodically 
+      purges messages that have been seen by all nodes from the memory in each node. The distributed 
+      garbage collection service is configured in the <literal>pbcast.STABLE</literal> sub-element 
+      under the JGroups <literal>Config</literal> element, like so:
 	  </para>
 
-<programlisting>
-&lt;pbcast.STABLE stability_delay="1000"
+    <programlisting><![CDATA[<pbcast.STABLE stability_delay="1000"
     desired_avg_gossip="5000" 
     down_thread="false" up_thread="false"
-       max_bytes="400000"/&gt;
-</programlisting>
+       max_bytes="400000"/>]]></programlisting>
           
-<para>The configurable attributes in the <literal>pbcast.STABLE</literal> element are as follows.</para>
-          <itemizedlist>
-            <listitem>
-              <para><emphasis role="bold">desired_avg_gossip</emphasis> specifies intervals (in
-                                milliseconds) of garbage collection runs. Value <literal>0</literal> disables this
-                                service.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">max_bytes</emphasis> specifies the maximum number of bytes
-                                received before the cluster triggers a garbage collection run. Value
-                                <literal>0</literal> disables this service.</para>
-            </listitem>
-            <listitem>
-		    <para><emphasis role="bold">stability_delay</emphasis> specifies delay before we send STABILITY msg (give others a change to send first). If used together with max_bytes, this attribute should be set to a small number.</para>
-            </listitem>
-          </itemizedlist>
-          <note>
-            <para>Set the <literal>max_bytes</literal> attribute when you have a high traffic
-                        cluster.</para>
-          </note>
+    <para>The configurable attributes in the <literal>pbcast.STABLE</literal> element are listed here:</para>
+    
+    <variablelist>
+      <varlistentry>
+        <term><varname>desired_avg_gossip</varname></term>
+        <listitem><para>Specifies the interval (in milliseconds) between garbage collection runs.
+        Setting this parameter to <literal>0</literal> disables this service.</para></listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>max_bytes</varname></term>
+        <listitem><para>Specifies the maximum number of bytes to receive before triggering a 
+        garbage collection run. Setting this parameter to <literal>0</literal> disables this 
+        service.</para>
+          <note><para>Set <varname>max_bytes</varname> when you have a high-traffic cluster.</para></note>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>stability_delay</varname></term>
+        <listitem><para>Specifies the delay period before a <literal>STABILITY</literal> message
+        is sent. If used together with <varname>max_bytes</varname>, this attribute should be set
+        to a small number.</para></listitem>
+      </varlistentry>
+    </variablelist>
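+    <para>As an illustrative sketch for a high-traffic cluster (the values are 
+    examples only), garbage collection can be triggered primarily by the 
+    volume of data received, with a small <varname>stability_delay</varname> 
+    since it is combined with <varname>max_bytes</varname>:</para>
+    <programlisting><![CDATA[<pbcast.STABLE stability_delay="100"
+    desired_avg_gossip="5000"
+    down_thread="false" up_thread="false"
+    max_bytes="400000"/>]]></programlisting>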
         </section>
+<!--hajime-->
         <section id="jbosscache-jgroups-other-merge">
           <title>Merging</title>
           <para>

Modified: projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Revision_History.xml
===================================================================
--- projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Revision_History.xml	2010-01-18 07:20:03 UTC (rev 99525)
+++ projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Revision_History.xml	2010-01-18 07:35:42 UTC (rev 99526)
@@ -8,8 +8,8 @@
          <simpara>
                 <revhistory>
                         <revision>
-                                <revnumber>1.6</revnumber>
-                                <date>Wed Jan 13 2010</date>
+                                <revnumber>1.7</revnumber>
+                                <date>Mon Jan 18 2010</date>
                                 <author>
                                         <firstname>Laura</firstname>
                                         <surname>Bailey</surname>



