[jboss-cvs] JBossAS SVN: r99059 - projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US.

jboss-cvs-commits at lists.jboss.org jboss-cvs-commits at lists.jboss.org
Wed Jan 6 02:06:42 EST 2010


Author: laubai
Date: 2010-01-06 02:06:42 -0500 (Wed, 06 Jan 2010)
New Revision: 99059

Modified:
   projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_Deployment.xml
Log:
Corrected tags.

Modified: projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_Deployment.xml
===================================================================
--- projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_Deployment.xml	2010-01-06 06:13:48 UTC (rev 99058)
+++ projects/docs/enterprise/EWP_5.0/Administration_And_Configuration_Guide/en-US/Clustering_Guide_Deployment.xml	2010-01-06 07:06:42 UTC (rev 99059)
@@ -42,8 +42,8 @@
         The JBoss Enterprise Web Platform provides support for a number of
         strategies for helping you deploy clustered singleton services. In
         this section we will explore the different strategies. All of the
-        strategies are built on top of the HAPartition service described
-        in the introduction. They rely on the <literal>HAPartition</literal>
+        strategies are built on top of the <classname>HAPartition</classname> service described
+        in the introduction. They rely on the <classname>HAPartition</classname>
         to provide notifications when different nodes in the cluster start
         and stop; based on those notifications each node in the cluster
         can independently (but consistently) determine if it is now the
@@ -54,43 +54,43 @@
         <title>HASingletonDeployer service</title>
         <para>
           The simplest and most commonly used strategy for deploying an HA
-          singleton is to take an ordinary deployment (war, ear, jar,
-          whatever you would normally put in deploy) and deploy it in the
-          <literal>$JBOSS_HOME/server/all/deploy-hasingleton</literal>
-          directory instead of in <literal>deploy</literal>. The
-          <literal>deploy-hasingleton</literal> directory does not lie under
-          <literal>deploy</literal> nor <literal>farm</literal> directories,
+          singleton is to take an ordinary deployment (a WAR, EAR, JAR, or
+          anything else you would normally put in <filename>deploy</filename>) and deploy it in the
+          <filename>$JBOSS_HOME/server/$PROFILE/deploy-hasingleton</filename>
+          directory instead of in <filename>deploy</filename>. The
+          <filename>deploy-hasingleton</filename> directory does not lie under
+          the <filename>deploy</filename> or <filename>farm</filename> directories,
           so its contents are not automatically deployed 
           when an Enterprise Web Platform instance starts. Instead, deploying the contents of this
           directory is the responsibility of a special service, the
-          <literal>HASingletonDeployer</literal> bean
+          <classname>HASingletonDeployer</classname> bean
           (which itself is deployed via the
-          deploy/deploy-hasingleton-jboss-beans.xml file). The
-          HASingletonDeployer service is itself an HA Singleton, one whose
+          <filename>deploy/deploy-hasingleton-jboss-beans.xml</filename> file). The
+          <classname>HASingletonDeployer</classname> service is itself an HA singleton, one whose
           provided service, when it becomes master, is to deploy the
-          contents of deploy-hasingleton; and whose service, when it stops
+          contents of <filename>deploy-hasingleton</filename>; and whose service, when it stops
           being the master (typically at server shutdown), is to undeploy
-          the contents of <literal>deploy-hasingleton</literal>.
+          the contents of <filename>deploy-hasingleton</filename>.
         </para>
         <para>
-          So, by placing your deployments in <literal>deploy-hasingleton</literal>
+          So, by placing your deployments in <filename>deploy-hasingleton</filename>
           you know that they will be deployed only on the master node in
           the cluster. If the master node cleanly shuts down, they will be
           cleanly undeployed as part of shutdown. If the master node fails
-          or is shut down, they will be deployed on whatever node takes
+          or is shut down, they will be deployed on whichever node takes
           over as master.
         </para>
         <para>
-          Using deploy-hasingleton is very simple, but it does
+          Using <filename>deploy-hasingleton</filename> is very simple, but it does
           have two drawbacks:
         </para>
         <itemizedlist>
           <listitem>
             <para>
               There is no hot-deployment feature for services in
-              <literal>deploy-hasingleton</literal>
-              . Redeploying a service that has been deployed to
-              <literal>deploy-hasingleton</literal>
+              <filename>deploy-hasingleton</filename>. 
+              Redeploying a service that has been deployed to
+              <filename>deploy-hasingleton</filename>
               requires a server restart.
             </para>
           </listitem>
@@ -111,11 +111,11 @@
       <section>
         <title>POJO deployments using HASingletonController</title>
         <para>
-          If your service is a POJO (i.e., not a J2EE deployment like an ear
-          or war or jar), you can deploy it along with a service called an
-          HASingletonController in order to turn it into an HA singleton.
-          It is the job of the HASingletonController to work with the
-          HAPartition service to monitor the cluster and determine if it
+          If your service is a POJO (that is, not a J2EE deployment such as an EAR,
+          WAR, or JAR), you can deploy it along with a service called an
+          <classname>HASingletonController</classname> in order to turn it into an HA singleton.
+          It is the job of the <classname>HASingletonController</classname> to work with the
+          <classname>HAPartition</classname> service to monitor the cluster and determine if it
           is now the master node for its service. If it determines it has
           become the master node, it invokes a method on your service
           telling it to begin providing service. If it determines it is no
@@ -125,7 +125,7 @@
         </para>
         <para>
           First, we have a POJO that we want to make
-          an HA singleton. The only thing special about it is it needs to
+          an HA singleton. The only special thing about it is that it needs to
           expose a public method that can be called when
           it should begin providing service, and another that can be
           called when it should stop providing service:
@@ -160,13 +160,13 @@
 ]]></programlisting>
 
         <para>
-          We used <literal>startSingleton</literal> and <literal>stopSingleton</literal>
+          We used <methodname>startSingleton</methodname> and <methodname>stopSingleton</methodname>
           in the above example, but you could name the methods anything.
         </para>
         <para>
-          Next, we deploy our service, along with an HASingletonController
-          to control it, most likely packaged in a .sar file, with the
-          following <literal>META-INF/jboss-beans.xml</literal>:
+          Next, we deploy our service, along with an <classname>HASingletonController</classname>
+          to control it, most likely packaged in a SAR file, with the
+          following <filename>META-INF/jboss-beans.xml</filename>:
         </para>
         <programlisting><![CDATA[
 <deployment xmlns="urn:jboss:bean-deployer:2.0">
@@ -190,27 +190,26 @@
 </deployment>
 ]]></programlisting>
 
-        <para>Voila! A clustered singleton service.</para>
+        <para>This creates a clustered singleton service.</para>
         <para>
-          The primary advantage of this approach over deploy-ha-singleton.
+          The primary advantage of this approach over <filename>deploy-hasingleton</filename>
           is that the above example can be placed in
-          <literal>deploy</literal> or <literal>farm</literal>
+          <filename>deploy</filename> or <filename>farm</filename>
           and thus can be hot deployed and farm deployed. Also, if our
           example service had complex, time-consuming startup
-          requirements, those could potentially be implemented in create()
-          or start() methods. JBoss will invoke create() and start() as
-          soon as the service is deployed; it doesn't wait until the node
-          becomes the master node. So, the service could be primed and
-          ready to go, just waiting for the controller to implement
-          startSingleton() at which point it can immediately provide
-          service.
+          requirements, those could potentially be implemented in <methodname>create()</methodname>
+          or <methodname>start()</methodname> methods. JBoss will invoke <methodname>create()</methodname> and 
+          <methodname>start()</methodname> as soon as the service is deployed;
+          it doesn't wait until the node becomes the master node. 
+          So, the service could be primed and ready to go, 
+          just waiting for the controller to invoke <methodname>startSingleton()</methodname>,
+          at which point it can immediately provide service.
         </para>
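+        <para>
+          For illustration, the following sketch (the class and member names are
+          hypothetical, not taken from the platform source) shows a service that
+          primes itself in <methodname>start()</methodname> and merely flips a
+          flag in <methodname>startSingleton()</methodname>:
+        </para>
+        <programlisting><![CDATA[
+public class PrimedSingletonExample
+{
+   private volatile boolean master = false;
+
+   // Invoked by JBoss as soon as the bean is deployed on any node,
+   // so expensive initialization happens before mastership.
+   public void start() throws Exception
+   {
+      // perform time-consuming setup here
+   }
+
+   // Invoked by the HASingletonController only when this node
+   // becomes the master.
+   public void startSingleton()
+   {
+      master = true; // begin providing service immediately
+   }
+
+   // Invoked when this node stops being the master.
+   public void stopSingleton()
+   {
+      master = false;
+   }
+}
+]]></programlisting>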
         <para>
-          Although not demonstrated in the example above, the <literal>HASingletonController</literal>
-          can support an optional argument for either or both of the
-          target start and stop methods.
-          These are specified using the <literal>targetStartMethodArgument</literal> and
-          <literal>TargetStopMethodArgument</literal> properties, respectively.
+          Although not demonstrated in the example above, the <classname>HASingletonController</classname>
+          can support an optional argument for either or both of the target start and stop methods.
+          These are specified using the <varname>targetStartMethodArgument</varname> and
+          <varname>targetStopMethodArgument</varname> properties, respectively.
           Currently, only string values are supported.
         </para>
       </section>
@@ -218,9 +217,9 @@
       <section>
         <title>HASingleton deployments using a Barrier</title>
         <para>
-          Services deployed normally inside deploy or farm
-          that want to be started/stopped whenever the content of
-          deploy-hasingleton gets deployed/undeployed, (i.e., whenever the
+          Services deployed normally inside <filename>deploy</filename> or <filename>farm</filename>
+          that should be started or stopped whenever the content of
+          <filename>deploy-hasingleton</filename> is deployed or undeployed (that is, whenever the
           current node becomes the master), need only specify a dependency
           on the Barrier service: 
         </para>
@@ -229,37 +228,38 @@
 ]]></programlisting>
 
         <para>
-          The way it works is that a BarrierController is deployed along with the
-          HASingletonDeployer and listens for JMX
-          notifications from it. A BarrierController is a relatively
+          This works as follows: a <classname>BarrierController</classname> is deployed along with the
+          <classname>HASingletonDeployer</classname> and listens for JMX
+          notifications from it. A <classname>BarrierController</classname> is a relatively
           simple MBean that can subscribe to receive any JMX notification
           in the system. It uses the received notifications to control the
-          lifecycle of a dynamically created MBean called the Barrier.  The
-          Barrier is instantiated, registered and brought to the CREATE
-          state when the BarrierController is deployed. After that, the
-          BarrierController starts and stops the Barrier when matching JMX
+          lifecycle of a dynamically created MBean called the <classname>Barrier</classname>. The
+          <classname>Barrier</classname> is instantiated, registered and brought to the <literal>CREATE</literal>
+          state when the <classname>BarrierController</classname> is deployed. After that, the
+          <classname>BarrierController</classname> starts and stops the <classname>Barrier</classname> when matching JMX
           notifications are received. Thus, other services need only
-          depend on the Barrier bean using the usual &lt;depends&gt; tag, and
-          they will be started and stopped in tandem with the Barrier.
-          When the BarrierController is undeployed the Barrier is also destroyed.
+          depend on the <classname>Barrier</classname> bean using the usual <literal><![CDATA[<depends>]]></literal> tag, and
+          they will be started and stopped in tandem with the <classname>Barrier</classname>.
+          When the <classname>BarrierController</classname> is undeployed the <classname>Barrier</classname> is also destroyed.
         </para>
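+        <para>
+          The subscription mechanism itself is plain JMX. As a rough sketch of
+          the pattern (the class name is hypothetical; this is not the actual
+          <classname>BarrierController</classname> source), a listener can
+          subscribe to another MBean's notifications like this:
+        </para>
+        <programlisting><![CDATA[
+import javax.management.MBeanServer;
+import javax.management.Notification;
+import javax.management.NotificationListener;
+import javax.management.ObjectName;
+
+public class BarrierListenerSketch implements NotificationListener
+{
+   // Subscribe to all notifications emitted by the given MBean.
+   public void subscribe(MBeanServer server, ObjectName source) throws Exception
+   {
+      server.addNotificationListener(source, this, null, null);
+   }
+
+   // Use the received notifications to start or stop the Barrier.
+   public void handleNotification(Notification notification, Object handback)
+   {
+      // inspect notification.getType() and react accordingly
+   }
+}
+]]></programlisting>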
         <para>
-          This provides an alternative to the deploy-hasingleton approach in that we can use
-          farming to distribute the service, while content in deploy-hasingleton must be copied
+          This provides an alternative to the <filename>deploy-hasingleton</filename> approach in that we can use
+          farming to distribute the service, while content in <filename>deploy-hasingleton</filename> must be copied
           manually to all nodes.
         </para>
         <para>
-          On the other hand, the barrier-dependent service will be instantiated/created (i.e., any create() method invoked) on all nodes, but only started on the master node. This is different with the deploy-hasingleton approach that will only deploy (instantiate/create/start) the contents of the deploy-hasingleton directory on one of the nodes. 
+          On the other hand, the barrier-dependent service will be instantiated and created (that is, any <methodname>create()</methodname> method invoked) on all nodes, but only started on the master node. This differs from the <filename>deploy-hasingleton</filename> approach, which deploys (instantiates, creates, and starts) the contents of the <filename>deploy-hasingleton</filename> directory on only one of the nodes. 
         </para>
         <para>
-          So services depending on the barrier will need to make sure they do minimal or no work inside their create() step, rather they should use start() to do the work. 
+          So services that depend on the <classname>Barrier</classname> must ensure that they do minimal or no work inside their <methodname>create()</methodname> step; rather, they should use <methodname>start()</methodname> to do the work. 
         </para>
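+        <para>
+          A barrier-dependent bean might therefore be structured as in the
+          following sketch (the names are hypothetical):
+        </para>
+        <programlisting><![CDATA[
+public class BarrierDependentService
+{
+   // create() runs on every node in the cluster, so keep it trivial.
+   public void create()
+   {
+   }
+
+   // start() runs only when the Barrier starts, that is, on the
+   // master node, so the real work belongs here.
+   public void start()
+   {
+      // acquire resources and begin providing service
+   }
+
+   public void stop()
+   {
+      // release resources
+   }
+}
+]]></programlisting>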
         <note>
-          <title>Note</title>
           <para>
-            The Barrier controls the start/stop of dependent services, but not their destruction,
-            which happens only when the <literal>BarrierController</literal> is itself destroyed/undeployed.
-            Thus using the <literal>Barrier</literal> to control services that need to be "destroyed" as part of their normal “undeploy” operation (like, for example, an <literal>EJBContainer</literal>) will not have the desired effect. 
+            The <classname>Barrier</classname> controls the starting and stopping of dependent services,
+            but not their destruction, which happens only when the <classname>BarrierController</classname> 
+            is itself destroyed and undeployed. Thus using the <classname>Barrier</classname> to control services 
+            that need to be <emphasis>destroyed</emphasis> as part of their normal <emphasis>undeploy</emphasis>
+            operation (such as an <classname>EJBContainer</classname>) will not have the desired effect. 
           </para>
         </note>
       </section>
@@ -268,40 +268,58 @@
     <section>
       <title>Determining the master node</title>
       <para>
-        The various clustered singleton management strategies all depend on the fact that each node in the cluster can independently react to changes in cluster membership and correctly decide whether it is now the “master node”. How is this done?
+        The various clustered singleton management strategies all depend on the fact that each 
+        node in the cluster can independently react to changes in cluster membership and correctly 
+        identify whether it is now the <emphasis>master node</emphasis>. How is this done?
       </para>
       <para>
-        For each member of the cluster, the HAPartition service maintains an attribute called the CurrentView, which is basically an ordered list of the current members of the cluster.
-        As nodes join and leave the cluster, JGroups ensures that each surviving member of the cluster gets an updated view.
-        You can see the current view by going into the JMX console, and looking at the CurrentView attribute in the <literal>jboss:service=DefaultPartition</literal> mbean.
+        For each member of the cluster, the <classname>HAPartition</classname> service maintains 
+        an attribute called <varname>CurrentView</varname>, which is essentially an ordered list of 
+        the current members of the cluster. As nodes join and leave the cluster, JGroups ensures 
+        that each surviving member of the cluster receives an updated view. You can see the current 
+        view in the JMX console by examining the <varname>CurrentView</varname> attribute of the
+        <literal>jboss:service=DefaultPartition</literal> MBean.
         Every member of the cluster will have the same view, with the members in the same order.  
       </para>
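+      <para>
+        The view can also be read programmatically over a standard JMX
+        connection. The following is a minimal sketch using a hypothetical
+        service URL (adjust the host, port, and credentials for your
+        installation):
+      </para>
+      <programlisting><![CDATA[
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class CurrentViewClient
+{
+   public static void main(String[] args) throws Exception
+   {
+      // Hypothetical URL; substitute your server's JMX service URL.
+      JMXServiceURL url = new JMXServiceURL(
+            "service:jmx:rmi:///jndi/rmi://localhost:1090/jmxrmi");
+      JMXConnector connector = JMXConnectorFactory.connect(url);
+      try
+      {
+         MBeanServerConnection server = connector.getMBeanServerConnection();
+         ObjectName partition = new ObjectName("jboss:service=DefaultPartition");
+         // Every member of the cluster reports the same ordered view.
+         System.out.println(server.getAttribute(partition, "CurrentView"));
+      }
+      finally
+      {
+         connector.close();
+      }
+   }
+}
+]]></programlisting>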
       <para>
-        Let's say, for example, that we have a 4 node cluster, nodes A through D, and the current view can be expressed as {A, B, C, D}.
-        Generally speaking, the order of nodes in the view will reflect the order in which they joined the cluster (although this is not always the case, and should not be assumed to be the case).
+        Say we have a four-node cluster with nodes <literal>A</literal>, <literal>B</literal>, <literal>C</literal>,
+        and <literal>D</literal>. The current view can be expressed as <literal>{A, B, C, D}</literal>.
+        Generally speaking, the order of nodes in the view reflects the order in which they joined
+        the cluster, although this is not guaranteed and should not be relied upon.
       </para>
       <para>
-        To further our example, let's say there is a singleton service (i.e. an <literal>HASingletonController</literal>) named Foo that's deployed around the cluster, except, for whatever reason, on B.
-        The <literal>HAPartition</literal> service maintains across the cluster a registry of what services are deployed where, in view order.
-        So, on every node in the cluster, the <literal>HAPartition</literal> service knows that the view with respect to the Foo service is {A, C, D} (no B).
+        Now, imagine that a singleton service (that is, an <classname>HASingletonController</classname>) 
+        named <literal>Foo</literal> is deployed on all nodes in the cluster except <literal>B</literal>.
+        The <classname>HAPartition</classname> service maintains a registry of services deployed across
+        the cluster, in view order. So, on every node in the cluster, the <classname>HAPartition</classname>
+        service knows that the view with respect to the <literal>Foo</literal> service is 
+        <literal>{A, C, D}</literal>.
       </para>
       <para>
-        Whenever there is a change in the cluster topology of the Foo service, the <literal>HAPartition</literal> service invokes a callback on Foo notifying it of the new topology.
-        So, for example, when Foo started on D, the Foo service running on A, C and D all got callbacks telling them the new view for Foo was {A, C, D}.
-        That callback gives each node enough information to independently decide if it is now the master.
-        The Foo service on each node uses the <literal>HAPartition</literal>'s <literal>HASingletonElectionPolicy</literal> to determine if they are the master, as explained in the <xref linkend="ha-singleton-election-policy"/>.
+        Whenever the cluster topology of the <literal>Foo</literal> service changes, the 
+        <classname>HAPartition</classname> service invokes a callback on <literal>Foo</literal>, notifying
+        it of the new topology. So when <literal>Foo</literal> started on node <literal>D</literal>, 
+        the <literal>Foo</literal> service running on <literal>A</literal>, <literal>C</literal> and
+        <literal>D</literal> all received callbacks informing them that the new view for <literal>Foo</literal>
+        was <literal>{A, C, D}</literal>. This callback gives each node enough information to decide
+        independently whether it is now the master node. The <literal>Foo</literal> service on each
+        node uses the <classname>HAPartition</classname>'s <classname>HASingletonElectionPolicy</classname>
+        to determine whether it is the master, as explained in <xref linkend="ha-singleton-election-policy"/>.
       </para>
       <para>
-        If A were to fail or shutdown, Foo on C and D would get a callback with a new view for Foo of {C, D}.
-        C would then become the master.
-        If A restarted, A, C and D would get a callback with a new view for Foo of {C, D, A}.
-        C would remain the master – there's nothing magic about A that would cause it to become the master again just because it was before.
+        If <literal>A</literal> fails or shuts down, <literal>Foo</literal> on <literal>C</literal> and
+        <literal>D</literal> would receive a callback with a new view for <literal>Foo</literal> of
+        <literal>{C, D}</literal>. <literal>C</literal> would then become the master. If <literal>A</literal>
+        restarted, <literal>A</literal>, <literal>C</literal> and <literal>D</literal> would receive a
+        callback with a new view for <literal>Foo</literal> of <literal>{C, D, A}</literal>.
+        <literal>C</literal> would remain the master &#8212; there is no reason that <literal>A</literal>
+        in particular should be reassigned the master role simply because it previously held that role.
       </para>
 
       <section id="ha-singleton-election-policy">
         <title>HA singleton election policy</title>
         <para>
-          The <literal>HASingletonElectionPolicy</literal> object is responsible for electing a master node from a list of available nodes, on behalf of an HA singleton, following a change in cluster topology.
+          The <classname>HASingletonElectionPolicy</classname> object is responsible for electing
+          a master node from a list of available nodes, on behalf of an HA singleton, 
+          following a change in cluster topology.
         </para>
         <programlisting><![CDATA[
 public interface HASingletonElectionPolicy
@@ -314,14 +332,17 @@
         </para>
         <variablelist>
           <varlistentry>
-            <term><literal>HASingletonElectionPolicySimple</literal></term>
+            <term><classname>HASingletonElectionPolicySimple</classname></term>
             <listitem>
               <para>
-                This policy selects a master node based relative age.
-                The desired age is configured via the <literal>position</literal> property, which corresponds to the index in the list of available nodes.
-                <literal>position = 0</literal>, the default, refers to the oldest node; <literal>position = 1</literal>, refers to the 2nd oldest; etc.
-                <literal>position</literal> can also be negative to indicate youngness; imagine the list of available nodes as a circular linked list.
-                <literal>position = -1</literal>, refers to the youngest node; <literal>position = -2</literal>, refers to the 2nd youngest node; etc.
+                This policy selects a master node based on relative age. The desired age 
+                is configured via the <varname>position</varname> property, which corresponds
+                to an index in the list of available nodes. <code>position = 0</code>, 
+                the default, refers to the oldest node; <code>position = 1</code> refers to 
+                the second oldest, and so on. <varname>position</varname> can also be negative
+                to indicate youth; it helps to imagine the list of available
+                nodes as a circular linked list. <code>position = -1</code> refers to the 
+                youngest node, <code>position = -2</code> to the second youngest, and so on.
               </para>
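+              <para>
+                As an illustration only (this helper is not part of the policy
+                API), the circular indexing can be expressed with modular
+                arithmetic:
+              </para>
+              <programlisting><![CDATA[
+public class PositionExample
+{
+   // Map a possibly negative position onto an index into the ordered
+   // list of available nodes, wrapping as in a circular linked list.
+   static int electedIndex(int position, int clusterSize)
+   {
+      return ((position % clusterSize) + clusterSize) % clusterSize;
+   }
+
+   public static void main(String[] args)
+   {
+      System.out.println(electedIndex(0, 4));  // 0: oldest of four nodes
+      System.out.println(electedIndex(-1, 4)); // 3: youngest of four nodes
+   }
+}
+]]></programlisting>
+              <para>
+                The policy itself is configured as follows:
+              </para>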
               <programlisting><![CDATA[
 <bean class="org.jboss.ha.singleton.HASingletonElectionPolicySimple">
@@ -331,12 +352,14 @@
             </listitem>
           </varlistentry>
           <varlistentry>
-            <term><literal>PreferredMasterElectionPolicy</literal></term>
+            <term><classname>PreferredMasterElectionPolicy</classname></term>
             <listitem>
               <para>
-                This policy extends <literal>HASingletonElectionPolicySimple</literal>, allowing the configuration of a preferred node.
-                The <literal>preferredMaster</literal> property, specified as <emphasis>host:port</emphasis> or <emphasis>address:port</emphasis>, identifies a specific node that should become master, if available.
-                If the preferred node is not available, the election policy will behave as described above.
+                This policy extends <classname>HASingletonElectionPolicySimple</classname>, 
+                allowing the configuration of a preferred node. The <varname>preferredMaster</varname> 
+                property, specified as <literal>host:port</literal> or <literal>address:port</literal>, 
+                identifies a specific node that should become master, if available. If the preferred 
+                node is not available, the election policy will behave as described above.
               </para>
               <programlisting><![CDATA[
 <bean class="org.jboss.ha.singleton.PreferredMasterElectionPolicy">
@@ -355,21 +378,32 @@
     
     <para>
       The easiest way to deploy an application into the cluster is to use the farming service.
-      Using the farming service, you can deploy an application (e.g. EAR, WAR, or SAR; either an archive file or in exploded form) to the
-      <literal>all/farm/</literal> directory of any cluster member and the application will be automatically duplicate across all nodes in the same cluster.
-      If a node joins the cluster later, it will pull in all farm deployed applications in the cluster and deploy them locally at start-up time.
-      If you delete the application from a running clustered server node's <literal>farm/</literal> directory,
-      the application will be undeployed locally and then removed from all other clustered server nodes' <literal>farm/</literal> directories (triggering undeployment).
+      Using the farming service, you can deploy an application (EAR, WAR, or SAR; either an 
+      archive file or in exploded form) to the <filename>$PROFILE/farm/</filename> directory 
+      of any cluster member and the application will be automatically duplicated across all 
+      nodes in the same cluster. If a node joins the cluster later, it will pull in all farm 
+      deployed applications in the cluster and deploy them locally at start-up time.
+      If you delete the application from a running clustered server node's 
+      <filename>farm/</filename> directory, the application will be undeployed locally and 
+      then removed from all other clustered server nodes' <filename>farm/</filename> directories 
+      (triggering undeployment).
     </para>
     
     <para>
-      Farming is enabled by default in the <literal>all</literal> configuration in JBoss Enterprise Web Platform and thus requires no manual setup.
-      The required <filename>farm-deployment-jboss-beans.xml</filename> and <filename>timestamps-jboss-beans.xml</filename> configuration files are located in the <literal>deploy/cluster</literal> directory.
-      If you want to enable farming in a custom configuration, simply copy these files to the corresponding JBoss deploy directory <literal>$JBOSS_HOME/server/your_own_config/deploy/cluster</literal>.
+      Farming is enabled by default in the <literal>all</literal> configuration in 
+      JBoss Enterprise Web Platform and thus requires no manual setup. The required 
+      <filename>farm-deployment-jboss-beans.xml</filename> and 
+      <filename>timestamps-jboss-beans.xml</filename> configuration files are located 
+      in the <filename>deploy/cluster</filename> directory. If you want to enable 
+      farming in a custom configuration, simply copy these files to the corresponding 
+      JBoss <filename>deploy</filename> directory: 
+      <filename>$JBOSS_HOME/server/$CUSTOM_CONFIG/deploy/cluster</filename>.
       Make sure that your custom configuration has clustering enabled.
     </para>
     <para>
-      While there is little need to customize the farming service, it can be customized via the <literal>FarmProfileRepositoryClusteringHandler</literal> bean, whose properties and default values are listed below:
+      While there is little need to customize the farming service, it can be customized 
+      via the <classname>FarmProfileRepositoryClusteringHandler</classname> bean, whose 
+      properties and default values are listed below:
     </para>
     <programlisting><![CDATA[
 <bean name="FarmProfileRepositoryClusteringHandler"
@@ -386,41 +420,69 @@
   <property name="synchronizationPolicy"><inject bean="FarmProfileSynchronizationPolicy"/></property>
 </bean>
 ]]></programlisting>
-    <itemizedlist>
-      <listitem>
-        <para>
-          <emphasis role="bold">partition</emphasis> is a required attribute to inject the HAPartition service that the farm service uses for intra-cluster communication.
-        </para>
-      </listitem>
-      <listitem>
-        <para>
-          <emphasis role="bold">profile[Domain|Server|Name]</emphasis> are all used to identify the profile for which this handler is intended.
-        </para>
-      </listitem>
-      <listitem>
-        <para>
-          <emphasis role="bold">immutable</emphasis> indicates whether or not this handler allows a node to push content changes to the cluster.
-          A value of <literal>true</literal> is equivalent to setting <literal>synchronizationPolicy</literal> to <literal>org.jboss.system.server.profileservice.repository.clustered.sync.</literal> <literal>ImmutableSynchronizationPolicy</literal>.
-        </para>
-      </listitem>
-      <listitem>
-        <para>
-          <emphasis role="bold">lockTimeout</emphasis> defines the number of milliseconds to wait for cluster-wide lock acquisition.
-        </para>
-      </listitem>
-      <listitem>
-        <para>
-          <emphasis role="bold">methodCallTimeout</emphasis> defines the number of milliseconds to wait for invocations on remote cluster nodes.
-        </para>
-      </listitem>
-      <listitem>
-        <para>
-          <emphasis role="bold">synchronizationPolicy</emphasis> decides how to handle content additions, reincarnations, updates, or removals from nodes attempting to join the cluster or from cluster merges. 
-          The policy is consulted on the "authoritative" node, i.e. the master node for the service on the cluster.
-          <emphasis>Reincarnation</emphasis> refers to the phenomenon where a newly started node may contain an application in its <literal>farm/</literal> directory that was previously removed by the farming service but might still exist on the starting node if it was not running when the removal took place.
-          The default synchronization policy is defined as follows:
-        </para>
-        <programlisting><![CDATA[
+    <variablelist>
+      <varlistentry>
+        <term><varname>partition</varname></term>
+        <listitem>
+          <para>
+            Required to inject the <classname>HAPartition</classname> service that the
+            farm service uses for intra-cluster communication.
+          </para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>profileDomain</varname>, <varname>profileServer</varname>, <varname>profileName</varname></term>
+        <listitem>
+          <para>
+            Used to identify the profile for which this handler is intended.
+          </para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>immutable</varname></term>
+        <listitem>
+          <para>
+            Indicates whether this handler allows a node to push content changes to
+            the cluster. A value of <literal>true</literal> is equivalent to setting
+            <varname>synchronizationPolicy</varname> to 
+            <literal>org.jboss.system.server.profileservice.repository.clustered.sync.ImmutableSynchronizationPolicy</literal>.
+          </para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>lockTimeout</varname></term>
+        <listitem>
+          <para>
+            Defines the number of milliseconds to wait for cluster-wide lock acquisition.
+          </para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>methodCallTimeout</varname></term>
+        <listitem>
+          <para>
+            Defines the number of milliseconds to wait for invocations on remote
+            cluster nodes.
+          </para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><varname>synchronizationPolicy</varname></term>
+        <listitem>
+          <para>
+            Determines how to handle content additions, reincarnations, updates, and removals
+            from nodes attempting to join the cluster or from cluster merges. The policy
+            is consulted on the <emphasis>authoritative</emphasis> node (the master node
+            for the service on the cluster). <emphasis>Reincarnation</emphasis> describes
+            a situation where a newly started node contains an application in its
+            <filename>farm</filename> directory that was previously removed by the
+            farming service but still exists on that node because it was not running
+            when the removal took place.
+          </para>
+          <para>
+            The default synchronization policy is defined as follows:
+          </para>
+          <programlisting><![CDATA[
 <bean name="FarmProfileSynchronizationPolicy"
       class="org.jboss.profileservice.cluster.repository.
       DefaultSynchronizationPolicy">
@@ -436,27 +498,41 @@
   <property name="removalTrackingTime">2592000000</property><!-- 30 days -->
   <property name="timestampService"><inject bean="TimestampDiscrepancyService"/></property>
 </bean>
-]]></programlisting>
-        <itemizedlist>
-          <listitem>
-            <para><emphasis role="bold">allow[Join|Merge][Additions|Reincarnations|Updates|Removals]</emphasis> define fixed responses to requests to allow additions, reincarnations, updates, or removals from joined or merged nodes.</para>
-          </listitem>
-          <listitem>
-            <para><emphasis role="bold">developerMode</emphasis> enables a lenient synchronization policy that allows all changes.
-            Enabling developer mode is equivalent to setting each of the above properties to <literal>true</literal> and is intended for development environments.</para>
-          </listitem>
-          <listitem>
-            <para><emphasis role="bold">removalTrackingTime</emphasis> defines the number of milliseconds for which this policy should remembered removed items, for use in detecting reincarnations.</para>
-          </listitem>
-          <listitem>
-            <para>
-              <emphasis role="bold">timestampService</emphasis> estimates and tracks discrepancies in system clocks for current and past members of the cluster.
-              Default implementation is defined in <filename>timestamps-jboss-beans.xml</filename>.
-            </para>
-          </listitem>
-        </itemizedlist>
-      </listitem>
-    </itemizedlist>
+  ]]></programlisting>
+          <itemizedlist>
+            <listitem>
+              <para>
+                <varname>allow[Join|Merge][Additions|Reincarnations|Updates|Removals]</varname>
+                define fixed responses to requests to allow additions, reincarnations, updates, 
+                or removals from joined or merged nodes.
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+                <varname>developerMode</varname> enables a lenient synchronization policy that 
+                allows all changes. Enabling developer mode is equivalent to setting each of 
+                the above properties to <literal>true</literal> and is intended for 
+                development environments.
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+                <varname>removalTrackingTime</varname> defines the number of milliseconds for 
+                which this policy should remember removed items, for use in detecting 
+                reincarnations.
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+                <varname>timestampService</varname> estimates and tracks discrepancies in 
+                system clocks for current and past members of the cluster. The default 
+                implementation is defined in <filename>timestamps-jboss-beans.xml</filename>.
+              </para>
+            </listitem>
+          </itemizedlist>
+        </listitem>
+      </varlistentry>
+    </variablelist>
   </section>
       
       <!-- 



