[jboss-cvs] jboss-docs/jbossas/clustering/en ...

Norman Richards norman.richards at jboss.com
Mon Sep 18 12:48:18 EDT 2006


  User: nrichards
  Date: 06/09/18 12:48:18

  Added:       jbossas/clustering/en  master.xml
  Log:
  problems checking in?
  
  Revision  Changes    Path
  1.1      date: 2006/09/18 16:48:18;  author: nrichards;  state: Exp;jboss-docs/jbossas/clustering/en/master.xml
  
  Index: master.xml
  ===================================================================
  <?xml version='1.0' encoding="iso-8859-1"?>
  <book>
      <bookinfo>
          <title>The JBoss 4 Application Server Clustering Guide</title>
          <subtitle>JBoss AS 4.0.4</subtitle>
          <releaseinfo>Release 5</releaseinfo>
          <mediaobject>
              <imageobject>
                  <imagedata fileref="images/title.jpg"/>
              </imageobject>
          </mediaobject>
          <copyright>
              <year>2004</year>
              <year>2005</year>
            <year>2006</year>
              <holder>JBoss, Inc.</holder>
          </copyright>
      </bookinfo>
      <toc/>
      <chapter id="cluster.chapt">
          <title>Clustering</title>
          <subtitle>High Availability Enterprise Services via JBoss Clusters</subtitle>
          <para/>
          <section id="clustering-intro">
              <title>Introduction</title>
            <para>Clustering allows us to run an application on several parallel servers (a.k.a. cluster nodes). The
                  load is distributed across different servers, and even if any of the servers fails, the application is
                  still accessible via other cluster nodes. Clustering is crucial for scalable enterprise applications, as
                  you can improve performance by simply adding more nodes to the cluster.</para>
              <para>The JBoss Application Server (AS) comes with clustering support out of the box. The simplest way to
                  start a JBoss server cluster is to start several JBoss instances on the same local network, using the
                      <literal>run -c all</literal> command for each instance. Those server instances, all started in the
                      <literal>all</literal> configuration, detect each other and automatically form a cluster.</para>
              <para>In the first section of this chapter, I discuss basic concepts behind JBoss's clustering services. It
                  is important that you understand those concepts before reading the rest of the chapter. Clustering
                  configurations for specific types of applications are covered after this section.</para>
              <section id="clustering-intro-def">
                  <title>Cluster Definition</title>
                  <para>A cluster is a set of nodes. In a JBoss cluster, a node is a JBoss server instance. Thus, to build
                    a cluster, several JBoss instances have to be grouped together (known as a "partition"). On the
                    same network, we may have several different clusters. In order to differentiate them, each cluster
                    must have an individual name.</para>
                  <para><xref linkend="clustering-Partition.fig"/> shows an example network of JBoss server instances
                      divided into three clusters, with each cluster only having one node. Nodes can be added to or
                      removed from clusters at any time.</para>
                  <figure id="clustering-Partition.fig">
                      <title>Clusters and server nodes</title>
                      <mediaobject>
                          <imageobject>
                              <imagedata align="center" fileref="images/clustering-Partition.png"/>
                          </imageobject>
                      </mediaobject>
                  </figure>
                  <note>
                      <para>While it is technically possible to put a JBoss server instance into multiple clusters at the
                          same time, this practice is generally not recommended, as it increases the management
                          complexity.</para>
                  </note>
                  <para>Each JBoss server instance (node) specifies which cluster (i.e., partition) it joins in the
                          <literal>ClusterPartition</literal> MBean in the <literal>deploy/cluster-service.xml</literal>
                      file. All nodes that have the same <literal>ClusterPartition</literal> MBean configuration join the
                      same cluster. Hence, if you want to divide JBoss nodes in a network into two clusters, you can just
                      come up with two different <literal>ClusterPartition</literal> MBean configurations, and each node
                      would have one of the two configurations depending on which cluster it needs to join. If the
                      designated cluster does not exist when the node is started, the cluster would be created. Likewise,
                      a cluster is removed when all its nodes are removed.</para>
                  <para>The following example shows the MBean definition packaged with the standard JBoss AS distribution.
                      So, if you simply start JBoss servers with their default clustering settings on a local network, you
                      would get a default cluster named <literal>DefaultPartition</literal> that includes all server
                      instances as its nodes.</para>
                  <programlisting>
  &lt;mbean code="org.jboss.ha.framework.server.ClusterPartition"
      name="jboss:service=DefaultPartition">
           
    &lt;!-- Name of the partition being built -->
    &lt;attribute name="PartitionName">
        ${jboss.partition.name:DefaultPartition}
    &lt;/attribute>

    &lt;!-- The address used to determine the node name -->
    &lt;attribute name="NodeAddress">${jboss.bind.address}&lt;/attribute>

    &lt;!-- Determine if deadlock detection is enabled -->
    &lt;attribute name="DeadlockDetection">False&lt;/attribute>

    &lt;!-- Max time (in ms) to wait for state transfer to complete.
        Increase for large states -->
    &lt;attribute name="StateTransferTimeout">30000&lt;/attribute>

    &lt;!-- The JGroups protocol configuration -->
      &lt;attribute name="PartitionConfig">
          ... ...
      &lt;/attribute>
  &lt;/mbean>
              </programlisting>
                  <para>Here, we omitted the detailed JGroups protocol configuration for this cluster. JGroups handles the
                      underlying peer-to-peer communication between nodes, and its configuration is discussed in <xref
                          linkend="jbosscache-jgroups"/>. The following list shows the available configuration attributes
                      in the <literal>ClusterPartition</literal> MBean.</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">PartitionName</emphasis> is an optional attribute to specify the
                              name of the cluster. Its default value is <literal>DefaultPartition</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">NodeAddress</emphasis> is an optional attribute to specify the
                              binding IP address of this node.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">DeadlockDetection</emphasis> is an optional boolean attribute that
                              tells JGroups to run message deadlock detection algorithms with every request. Its default
                              value is <literal>false</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">StateTransferTimeout</emphasis> is an optional attribute to specify
                              the timeout for state replication across the cluster (in milliseconds). Its default value is
                                  <literal>30000</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">PartitionConfig</emphasis> is an element to specify JGroup
                              configuration options for this cluster (see <xref linkend="jbosscache-jgroups"/>).</para>
                      </listitem>
                  </itemizedlist>
  
                <para>In order for nodes to form a cluster, they must have the exact same
                    <literal>PartitionName</literal> and <literal>PartitionConfig</literal> elements. Changes in
                    either element on some but not all nodes would cause the cluster to split. It is generally easier
                    to change the <literal>PartitionConfig</literal> (i.e., the address/port) to run multiple clusters
                    than to change the <literal>PartitionName</literal>, due to the multitude of places the latter
                    needs to be changed in other configuration files. However, changing the
                    <literal>PartitionName</literal> is made easier in 4.0.2+ by the
                    <literal>${jboss.partition.name}</literal> property, which allows the name to be changed via a
                    single <literal>jboss.partition.name</literal> system property.</para>
  
                  <para>You can view the current cluster information by pointing your browser to the JMX console of any
                      JBoss instance in the cluster (i.e., <literal>http://hostname:8080/jmx-console/</literal>) and then
                      clicking on the <literal>jboss:service=DefaultPartition</literal> MBean (change the MBean name to
                      reflect your cluster name if this node does not join <literal>DefaultPartition</literal>). A list of
                      IP addresses for the current cluster members is shown in the <literal>CurrentView</literal> field.</para>
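                <para>For illustration, the same <literal>CurrentView</literal> attribute can also be read
                    programmatically through JMX. The following is only a sketch: it assumes the RMI adaptor that
                    JBoss normally binds under <literal>jmx/invoker/RMIAdaptor</literal> and the default partition
                    MBean name, so adjust both for your own installation.</para>
                <programlisting>
import javax.management.ObjectName;
import javax.naming.InitialContext;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

// Hypothetical client that prints the current cluster membership. The
// jndi.properties file (or system properties) must point at one of the nodes.
public class CurrentViewClient {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        RMIAdaptor server = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");
        Object view = server.getAttribute(
                new ObjectName("jboss:service=DefaultPartition"), "CurrentView");
        System.out.println("Cluster members: " + view);
    }
}
            </programlisting>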
                  <note>
                    <para>A cluster (partition) contains a set of nodes that work toward a same goal. Some clustering
                        features require sub-partitioning the cluster to achieve better scalability. For example,
                        let's imagine that we have a 10-node cluster and we want to replicate in memory the state of
                        stateful session beans on all 10 different nodes to provide for fault-tolerant behaviour. It
                        would mean that each node has to store a backup of the 9 other nodes. This would not scale at
                        all (each node would need to carry the whole cluster's state load). It is probably much better
                        to have some kind of sub-partitions inside a cluster and have bean state exchanged only
                        between nodes that are part of the same sub-partition. The future JBoss clustering
                        implementation will support sub-partitions and it will allow the cluster administrator to
                        determine the optimal size of a sub-partition. The sub-partition topology computation will be
                        done dynamically by the cluster.</para>
                  </note>
              </section>
              <section id="clustering-intro-arch">
                  <title>Service Architectures</title>
                <para>The clustering topology defined by the <literal>ClusterPartition</literal> MBean on each node is
                    of great importance to system administrators. But as an application developer, you are probably
                    more concerned about the cluster architecture from a client application's point of view. JBoss AS
                      supports two types of clustering architectures: client-side interceptors (a.k.a proxies or stubs)
                      and load balancers.</para>
                  <section id="clustering-intro-arch-proxy">
                      <title>Client-side interceptor</title>
                      <para>Most remote services provided by the JBoss application server, including JNDI, EJB, RMI and
                          JBoss Remoting, require the client to obtain (e.g., to look up and download) a stub (or proxy)
                          object. The stub object is generated by the server and it implements the business interface of
                          the service. The client then makes local method calls against the stub object. The call is
                          automatically routed across the network and invoked against service objects managed in the
                        server. In a clustering environment, the server-generated stub object is also an interceptor
                        that understands how to route calls to nodes in the cluster. The stub object figures out how
                        to find the appropriate server node, marshal call parameters, unmarshal call results, and
                        return the results to the caller.</para>
                      <para>The stub interceptors have updated knowledge about the cluster. For instance, they know the IP
                          addresses of all available server nodes, the algorithm to distribute load across nodes (see next
                        section), and how to fail over the request if the target node is not available. With every
                        service request, the server node updates the stub interceptor with the latest changes in the
                        cluster. For instance, if a node drops out of the cluster, each client stub interceptor is
                        updated with the new configuration the next time it connects to any active node. All the
                          manipulations on the service stub are transparent to the client application. The client-side
                          interceptor clustering architecture is illustrated in <xref
                              linkend="clustering-InterceptorArch.fig"/>.</para>
                      <figure id="clustering-InterceptorArch.fig">
                          <title>The client-side interceptor (proxy) architecture for clustering</title>
                          <mediaobject>
                              <imageobject>
                                  <imagedata align="center" fileref="images/clustering-InterceptorArch.png"/>
                              </imageobject>
                          </mediaobject>
                      </figure>
  
                      <note>
                          <para><xref linkend="clustering-session-slsb21-retry"/> describes how to enable the client proxy
                              to handle the entire cluster restart.</para>
                      </note>
  
                  </section>
                  <section id="clustering-intro-arch-balancer">
                      <title>Load balancer</title>
                      <para>Other JBoss services, in particular the HTTP web services, do not require the client to
                          download anything. The client (e.g., a web browser) sends in requests and receives responses
                          directly over the wire according to certain communication protocols (e.g., the HTTP protocol).
                          In this case, a load balancer is required to process all requests and dispatch them to server
                          nodes in the cluster. The load balancer is typically part of the cluster. It understands the
                          cluster configuration as well as failover policies. The client only needs to know about the load
                          balancer. The load balancer clustering architecture is illustrated in <xref
                              linkend="clustering-BalancerArch.fig"/>.</para>
                      <figure id="clustering-BalancerArch.fig">
                          <title>The load balancer architecture for clustering</title>
                          <mediaobject>
                              <imageobject>
                                  <imagedata align="center" fileref="images/clustering-BalancerArch.png"/>
                              </imageobject>
                          </mediaobject>
                      </figure>
                      <para>A potential problem with the load balancer solution is that the load balancer itself is a
                          single point of failure. It needs to be monitored closely to ensure high availability of the
                          entire cluster services.</para>
                  </section>
              </section>
              <section id="clustering-intro-balancepolicy">
                  <title>Load-Balancing Policies</title>
                  <para>Both the JBoss client-side interceptor (stub) and load balancer use load balancing policies to
                      determine which server node to send a new request to. In this section, let's go over the load
                      balancing policies available in JBoss AS.</para>
                  <section id="clustering-intro-balancepolicy-30">
                      <title>JBoss AS 3.0.x</title>
                      <para>In JBoss 3.0.x, the following two load balancing options are available.</para>
                      <itemizedlist>
                          <listitem>
                              <para>Round-Robin (<literal>org.jboss.ha.framework.interfaces.RoundRobin</literal>): each
                                  call is dispatched to a new node. The first target node is randomly selected from the
                                  list.</para>
                          </listitem>
                          <listitem>
                              <para>First Available (<literal>org.jboss.ha.framework.interfaces.FirstAvailable</literal>):
                                  one of the available target nodes is elected as the main target and is used for every
                                  call: this elected member is randomly chosen from the list of members in the cluster.
                                  When the list of target nodes changes (because a node starts or dies), the policy will
                                  re-elect a target node unless the currently elected node is still available. Each
                                  client-side interceptor or load balancer elects its own target node independently of the
                                  other proxies.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
                  <section id="clustering-intro-balancepolicy-32">
                      <title>JBoss AS 3.2+</title>
                      <para>In JBoss 3.2+, three load balancing options are available. The Round-Robin and First Available
                          options have the same meaning as the ones in JBoss AS 3.0.x.</para>
                    <para>The new load balancing option in JBoss 3.2 is "First Available Identical All Proxies"
                              (<literal>org.jboss.ha.framework.interfaces.FirstAvailableIdenticalAllProxies</literal>). It
                          has the same behaviour as the "First Available" policy but the elected target node is shared by
                          all client-side interceptors of the same "family".</para>
                      <para>In JBoss 3.2 (and later), the notion of "Proxy Family" is defined. A Proxy Family is a set of
                          stub interceptors that all make invocations against the same replicated target. For EJBs for
                          example, all stubs targeting the same EJB in a given cluster belong to the same proxy family.
                          All interceptors of a given family share the same list of target nodes. Each interceptor also
                          has the ability to share arbitrary information with other interceptors of the same family. A use
                        case for the proxy family is given in <xref linkend="clustering-session-slsb21"/>.</para>
                  </section>
              </section>
              <section id="clustering-intro-farm">
                  <title>Farming Deployment</title>
                <para>The easiest way to deploy an application into the cluster is to use the farming service: you
                    hot-deploy the application archive file (e.g., the EAR, WAR or SAR file) in the
                    <code>all/farm/</code> directory of any of the cluster members, and the application is automatically
                    duplicated across all nodes in the same cluster. If a node joins the cluster later, it will pull in
                    all farm-deployed applications in the cluster and deploy them locally at start-up time. If you
                    delete the application from one of the running cluster server nodes' <literal>farm/</literal>
                    folders, the application will be undeployed locally and then removed from all other cluster server
                    nodes' farm folders (triggering undeployment). You should manually delete the application from the
                    farm folder of any server node not currently connected to the cluster.</para>
                  <note>
                      <para>Currently, due to an implementation bug, the farm deployment service only works for
                          hot-deployed archives. If you put an application in the <literal>farm/</literal> directory first
                          and then start the server, the application would not be detected and pushed across the cluster.
                          We are working to resolve this issue.</para>
                  </note>
                  <note>
                    <para>You can only put archive files, not exploded directories, in the <literal>farm</literal>
                        directory. This way, the application on a remote node is only deployed when the entire archive
                        file has been copied over. Otherwise, the application might be deployed (and fail) while the
                        directory is only partially copied. A small copy-then-rename sketch follows this note.</para>
                  </note>
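                <para>The following is a minimal sketch, outside of the farming service itself, of a safe way to
                    push an archive into the <literal>farm/</literal> directory from a tool or script: copy the
                    file under a temporary name first, then rename it, so that the deployment scanner never sees a
                    half-written archive. All paths and names are illustrative.</para>
                <programlisting>
import java.io.*;

// Hypothetical helper: copy myapp.ear into the farm/ directory in one visible step.
public class FarmCopy {
    public static void main(String[] args) throws IOException {
        File src = new File("myapp.ear");
        File farm = new File("/opt/jboss/server/all/farm");
        File tmp = new File(farm, "myapp.ear.part");
        File dst = new File(farm, "myapp.ear");

        InputStream in = new FileInputStream(src);
        OutputStream out = new FileOutputStream(tmp);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();

        // The rename makes the complete archive appear in a single step.
        if (!tmp.renameTo(dst)) {
            throw new IOException("could not rename " + tmp + " to " + dst);
        }
    }
}
            </programlisting>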
                  <para>Farming is enabled by default in the <literal>all</literal> configuration in JBoss AS
                      distributions, so you will not have to set it up yourself. The configuration file is located in the
                          <literal>deploy/deploy.last</literal> directory. If you want to enable farming in your custom
                    configuration, simply create the XML file shown below (name it <literal>farm-service.xml</literal>)
                    and copy it to the JBoss deploy directory
                    <literal>$JBOSS_HOME/server/your_own_config/deploy</literal>. Make sure that your custom
                    configuration has clustering enabled.</para>
                  <programlisting>
  &lt;?xml version="1.0" encoding="UTF-8"?>    
  &lt;server>        
          
      &lt;mbean code="org.jboss.ha.framework.server.FarmMemberService"     
              name="jboss:service=FarmMember,partition=DefaultPartition">     
          ...      
          &lt;attribute name="PartitionName">DefaultPartition&lt;/attribute>      
          &lt;attribute name="ScanPeriod">5000&lt;/attribute>      
          &lt;attribute name="URLs">farm/&lt;/attribute>     
      &lt;/mbean>       
  &lt;/server>
              </programlisting>
                  <para>After deploying <literal>farm-service.xml</literal> you are ready to rumble. The required
                          <literal>FarmMemberService</literal> MBean attributes for configuring a farm are listed below.</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">PartitionName</emphasis> specifies the name of the cluster for this
                              deployed farm. Its default value is <literal>DefaultPartition</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">URLs</emphasis> points to the directory where deployer watches for
                              files to be deployed. This MBean will create this directory is if does not already exist.
                              Also, "." pertains to the configuration directory (i.e.,
                              <literal>$JBOSS_HOME/server/all/</literal>).</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ScanPeriod</emphasis> specifies the interval at which the folder
                            must be scanned for changes. Its default value is <literal>5000</literal>.</para>
                      </listitem>
                  </itemizedlist>
                  <para>The Farming service is an extension of the <literal>URLDeploymentScanner</literal>, which scans
                    for hot deployments in the <literal>deploy/</literal> directory. So, you can use all the attributes
                      defined in the <literal>URLDeploymentScanner</literal> MBean in the
                      <literal>FarmMemberService</literal> MBean. In fact, the <literal>URLs</literal> and
                          <literal>ScanPeriod</literal> attributes listed above are inherited from the
                          <literal>URLDeploymentScanner</literal> MBean.</para>
              </section>
              <section id="clustering-intro-state">
                  <title>Distributed state replication services</title>
                  <para>In a clustered server environment, distributed state management is a key service the cluster must
                      provide. For instance, in a stateful session bean application, the session state must be
                      synchronized among all bean instances across all nodes, so that the client application reaches the
                      same session state no matter which node serves the request. In an entity bean application, the bean
                      object sometimes needs to be cached across the cluster to reduce the database load. Currently, the
                      state replication and distributed cache services in JBoss AS are provided via two ways: the
                          <literal>HASessionState</literal> MBean and the JBoss Cache framework.</para>
                  <itemizedlist>
                      <listitem>
                          <para>The <literal>HASessionState</literal> MBean provides session replication and distributed
                              cache services for EJB 2.x stateful session beans and HTTP load balancers in JBoss 3.x and
                              4.x. The MBean is defined in the <literal>all/deploy/cluster-service.xml</literal> file. We
                              will show its configuration options in the EJB 2.x stateful session bean section
                          later.</para>
                      </listitem>
                      <listitem>
                        <para>JBoss Cache is a fully featured distributed cache framework that can be used in any
                            application server environment or standalone. It gradually replaces the
                                  <literal>HASessionState</literal> service. JBoss AS integrates JBoss Cache to provide
                              cache services for HTTP sessions, EJB 3.0 session and entity beans, as well as Hibernate
                              persistence objects. Each of these cache services is defined in a separate MBean. We will
                              cover those MBeans when we discuss specific services in the next several sections.</para>
                      </listitem>
                  </itemizedlist>
              </section>
          </section>
          <section id="clustering-jndi">
              <title>Clustered JNDI Services</title>
              <para>JNDI is one of the most important services provided by the application server. The JBoss clustered
                  JNDI service is based on the client-side interceptor architecture. The client must obtain a JNDI stub
                  object (via the <literal>InitialContext</literal> object) and invoke JNDI lookup services on the remote
                  server through the stub. Furthermore, JNDI is the basis for many other interceptor-based clustering
                services: those services register themselves with the JNDI so that the client can look up their stubs and
                  make use of their services.</para>
              <section id="clustering-jndi-how">
                  <title>How it works</title>
                  <para>The JBoss HA-JNDI (High Availability JNDI) service maintains a cluster-wide context tree. The
                      cluster wide tree is always available as long as there is one node left in the cluster. Each JNDI
                    node in the cluster also maintains its own local JNDI context. The server-side application can bind
                    its objects to either tree. In this section, you will learn the distinctions between the two trees
                    and the best practices in application development. The design rationale of this architecture is as
                    follows.</para>
                  <itemizedlist>
                      <listitem>
                          <para>We didn't want any migration issues with applications already assuming that their JNDI
                              implementation was local. We wanted clustering to work out-of-the-box with just a few tweaks
                              of configuration files.</para>
                      </listitem>
                      <listitem>
                          <para>We needed a clean distinction between locally bound objects and cluster-wide
                          objects.</para>
                      </listitem>
                      <listitem>
                          <para>In a homogeneous cluster, this configuration actually cuts down on the amount of network
                              traffic.</para>
                      </listitem>
                      <listitem>
                          <para>Designing it in this way makes the HA-JNDI service an optional service since all
                            underlying cluster code uses a straight new <literal>InitialContext()</literal> to look up or
                              create bindings.</para>
                      </listitem>
                  </itemizedlist>
                  <para>On the server side, <literal>new InitialContext()</literal>, will be bound to a local-only,
                      non-cluster-wide JNDI Context (this is actually basic JNDI). So, all EJB homes and such will not be
                      bound to the cluster-wide JNDI Context, but rather, each home will be bound into the local JNDI.
                      When a remote client does a lookup through HA-JNDI, HA-JNDI will delegate to the local JNDI Context
                      when it cannot find the object within the global cluster-wide Context. The detailed lookup rule is
                      as follows.</para>
                  <itemizedlist>
                      <listitem>
                        <para>If the binding is available in the cluster-wide JNDI tree, it is returned.</para>
                      </listitem>
                      <listitem>
                          <para>If the binding is not in the cluster-wide tree, it delegates the lookup query to the local
                              JNDI service and returns the received answer if available.</para>
                      </listitem>
                      <listitem>
                        <para>If not available, the HA-JNDI service asks all other nodes in the cluster if their local
                            JNDI service owns such a binding and returns an answer from the set it receives.</para>
                      </listitem>
                      <listitem>
                          <para>If no local JNDI service owns such a binding, a <literal>NameNotFoundException</literal>
                              is finally raised.</para>
                      </listitem>
                  </itemizedlist>
                <para>So, an EJB home lookup through HA-JNDI will always be delegated to the local JNDI instance. If
                      different beans (even of the same type, but participating in different clusters) use the same JNDI
                      name, it means that each JNDI server will have a different "target" bound (JNDI on node 1 will have
                      a binding for bean A and JNDI on node 2 will have a binding, under the same name, for bean B).
                      Consequently, if a client performs a HA-JNDI query for this name, the query will be invoked on any
                      JNDI server of the cluster and will return the locally bound stub. Nevertheless, it may not be the
                      correct stub that the client is expecting to receive!</para>
                  <note>
                    <para>You cannot currently use a non-JNP JNDI implementation (e.g., LDAP) for your local JNDI
                        implementation if you want to use HA-JNDI. However, you can use JNDI federation using the
                            <literal>ExternalContext</literal> MBean to bind non-JBoss JNDI trees into the JBoss JNDI
                        namespace. Furthermore, nothing prevents you from using one centralized JNDI server for
                        your whole cluster and scrapping HA-JNDI and JNP altogether.</para>
                  </note>
                  <note>
                    <para>If a binding is only made available on a few nodes in the cluster (for example because a bean
                        is only deployed on a small subset of nodes in the cluster), the probability of looking up an
                        HA-JNDI server that does not own this binding is higher and the lookup will need to be
                        forwarded to all nodes in the cluster. Consequently, the query time will be longer than if the
                        binding were available locally. Moral of the story: as much as possible, cache the results of
                        your JNDI queries in your client (a small caching sketch follows this note).</para>
                  </note>
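                <para>A simple way to follow that advice is a small service-locator style cache on the client side,
                    sketched below under the assumption that the looked-up proxies (e.g., EJB homes) can safely be
                    reused across calls. The class name is illustrative.</para>
                <programlisting>
import java.util.HashMap;
import java.util.Map;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical client-side cache for HA-JNDI lookup results.
public class ServiceLocator {
    private static final Map cache = new HashMap();

    public static synchronized Object lookup(String name) throws NamingException {
        Object proxy = cache.get(name);
        if (proxy == null) {
            // HA-JNDI settings are taken from jndi.properties
            Context ctx = new InitialContext();
            proxy = ctx.lookup(name);
            cache.put(name, proxy);
        }
        return proxy;
    }
}
            </programlisting>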
                  <para>If you want to access HA-JNDI from the server side, you must explicitly get an
                          <literal>InitialContext</literal> by passing in JNDI properties. The following code shows how to
                      access the HA-JNDI.</para>
  
                  <programlisting>
  Properties p = new Properties();  
  p.put(Context.INITIAL_CONTEXT_FACTORY,   
        "org.jnp.interfaces.NamingContextFactory");  
  p.put(Context.URL_PKG_PREFIXES, "jboss.naming:org.jnp.interfaces");  
  p.put(Context.PROVIDER_URL, "localhost:1100"); // HA-JNDI port.  
  return new InitialContext(p); 
              </programlisting>
  
                  <para>The <literal>Context.PROVIDER_URL</literal> property points to the HA-JNDI service configured in
                      the <literal>HANamingService</literal> MBean (see <xref linkend="clustering-jndi-jboss"/>).</para>
  
              </section>
              <section id="clustering-jndi-client">
                  <title>Client configuration</title>
                  <para>The JNDI client needs to be aware of the HA-JNDI cluster. You can pass a list of JNDI servers
                      (i.e., the nodes in the HA-JNDI cluster) to the <literal>java.naming.provider.url</literal> JNDI
                      setting in the <literal>jndi.properties</literal> file. Each server node is identified by its IP
                      address and the JNDI port number. The server nodes are separated by commas (see <xref
                          linkend="clustering-jndi-jboss"/> on how to configure the servers and ports).</para>
                  <programlisting>
java.naming.provider.url=server1:1100,server2:1100,server3:1100,server4:1100
              </programlisting>
                  <para>When initialising, the JNP client code will try to get in touch with each server node from the
                      list, one after the other, stopping as soon as one server has been reached. It will then download
                      the HA-JNDI stub from this node.</para>
  
                  <note>
                      <para>There is no load balancing behavior in the JNP client lookup process. It just goes through the
                        provider list and uses the first available server. The HA-JNDI provider list only needs to
                          contain a subset of HA-JNDI nodes in the cluster.</para>
                  </note>
  
                  <para>The downloaded smart stub contains the logic to fail-over to another node if necessary and the
                      updated list of currently running nodes. Furthermore, each time a JNDI invocation is made to the
                      server, the list of targets in the stub interceptor is updated (only if the list has changed since
                      the last call).</para>
  
                <para>If the property string <literal>java.naming.provider.url</literal> is empty or if none of the
                    servers it mentions can be reached, the JNP client will try to discover a bootstrap HA-JNDI server through a
                      multicast call on the network (auto-discovery). See <xref linkend="clustering-jndi-jboss"/> on how
                      to configure auto-discovery on the JNDI server nodes. Through auto-discovery, the client might be
                      able to get a valid HA-JNDI server node without any configuration. Of course, for the auto-discovery
                      to work, the client must reside in the same LAN as the server cluster (e.g., the web servlets using
                      the EJB servers). The LAN or WAN must also be configured to propagate such multicast datagrams.</para>
                  <note>
                      <para>The auto-discovery feature uses multicast group address 230.0.0.4:1102.</para>
                  </note>
                <para>In addition to the <literal>java.naming.provider.url</literal> property, you can specify a set of
                    other properties. The following list shows all client-side properties you can specify when creating
                    a new <literal>InitialContext</literal>; a combined example follows the list.</para>
                  <itemizedlist>
                      <listitem>
                        <para><literal>java.naming.provider.url</literal>: Provides a list of IP addresses and port
                              numbers for HA-JNDI provider nodes in the cluster. The client tries those providers one by
                              one and uses the first one that responds.</para>
                      </listitem>
                      <listitem>
                          <para><literal>jnp.disableDiscovery</literal>: When set to <literal>true</literal>, this
                              property disables the automatic discovery feature. Default is
                          <literal>false</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><literal>jnp.partitionName</literal>: In an environment where multiple HA-JNDI services,
                              which are bound to distinct clusters (i.e., partitions), are started, this property allows
                              you to configure which cluster you broadcast to when the automatic discovery feature is
                              used. If you do not use the automatic discovery feature (e.g., you could explicitly provide
                              a list of valid JNDI nodes in <literal>java.naming.provider.url</literal>), this property is
                            not used. By default, this property is not set and the automatic discovery selects the first
                              HA-JNDI server that responds, independently of the cluster partition name.</para>
                      </listitem>
                      <listitem>
                          <para><literal>jnp.discoveryTimeout</literal>: Determines how much time the context will wait
                              for a response to its automatic discovery packet. Default is 5000 ms.</para>
                      </listitem>
                      <listitem>
                          <para><literal>jnp.discoveryGroup</literal>: Determines which multicast group address is used
                              for the automatic discovery. Default is <literal>230.0.0.4</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><literal>jnp.discoveryPort</literal>: Determines which multicast group port is used for
                              the automatic discovery. Default is <literal>1102</literal>.</para>
                      </listitem>
                  </itemizedlist>
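                <para>As a sketch of how these properties fit together, the fragment below creates an
                    <literal>InitialContext</literal> with the defaults listed above spelled out explicitly. The host
                    names are placeholders.</para>
                <programlisting>
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties p = new Properties();
p.put(Context.INITIAL_CONTEXT_FACTORY,
      "org.jnp.interfaces.NamingContextFactory");
p.put(Context.URL_PKG_PREFIXES, "jboss.naming:org.jnp.interfaces");
p.put(Context.PROVIDER_URL, "server1:1100,server2:1100"); // HA-JNDI nodes
p.put("jnp.disableDiscovery", "false");         // keep auto-discovery enabled
p.put("jnp.partitionName", "DefaultPartition"); // only used by auto-discovery
p.put("jnp.discoveryTimeout", "5000");          // ms to wait for a reply
p.put("jnp.discoveryGroup", "230.0.0.4");       // multicast group address
p.put("jnp.discoveryPort", "1102");             // multicast group port
Context ctx = new InitialContext(p);
            </programlisting>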
              </section>
              <section id="clustering-jndi-jboss">
                  <title>JBoss configuration</title>
                  <para>The <literal>cluster-service.xml</literal> file in the <literal>all/deploy</literal> directory
                      includes the following MBean to enable HA-JNDI services.</para>
                  <programlisting>
  &lt;mbean code="org.jboss.ha.jndi.HANamingService"            
         name="jboss:service=HAJNDI">       
      &lt;depends>jboss:service=DefaultPartition&lt;/depends>    
  &lt;/mbean>
              </programlisting>
                  <para>You can see that this MBean depends on the <literal>DefaultPartition</literal> MBean defined above
                      it (discussed in an earlier section in this chapter). In other configurations, you can put that
                      element in the <literal>jboss-services.xml</literal> file or any other JBoss configuration files in
                      the <literal>/deploy</literal> directory to enable HA-JNDI services. The available attributes for
                      this MBean are listed below.</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">PartitionName</emphasis> is an optional attribute to specify the
                              name of the cluster for the different nodes of the HA-JNDI service to communicate. The
                              default value is <literal>DefaultPartition</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">BindAddress</emphasis> is an optional attribute to specify the
                              address to which the HA-JNDI server will bind waiting for JNP clients. Only useful for
                              multi-homed computers.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">Port</emphasis> is an optional attribute to specify the port to
                              which the HA-JNDI server will bind waiting for JNP clients. The default value is
                                  <literal>1100</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">Backlog</emphasis> is an optional attribute to specify the backlog
                              value used for the TCP server socket waiting for JNP clients. The default value is
                                  <literal>50</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">RmiPort</emphasis> determines which port the server should use to
                              communicate with the downloaded stub. This attribute is optional. If it is missing, the
                            server automatically assigns an RMI port.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">AutoDiscoveryAddress</emphasis> is an optional attribute to specify
                              the multicast address to listen to for JNDI automatic discovery. The default value is
                                  <literal>230.0.0.4</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">AutoDiscoveryGroup</emphasis> is an optional attribute to specify
                              the multicast group to listen to for JNDI automatic discovery.. The default value is
                                  <literal>1102</literal>.</para>
                      </listitem>
  
                      <listitem>
                          <para><emphasis role="bold">LookupPool</emphasis> specifies the thread pool service used to
                              control the bootstrap and auto discovery lookups.</para>
                      </listitem>
  
                      <listitem>
                          <para><emphasis role="bold">DiscoveryDisabled</emphasis> is a boolean flag that disables
                              configuration of the auto discovery multicast listener.</para>
                      </listitem>
  
                      <listitem>
                          <para><emphasis role="bold">AutoDiscoveryBindAddress</emphasis> sets the auto-discovery
                              bootstrap multicast bind address. If this attribute is not specified and a
                                  <literal>BindAddress</literal> is specified, the <literal>BindAddress</literal> will be
                            used.</para>
                      </listitem>
  
                      <listitem>
                          <para><emphasis role="bold">AutoDiscoveryTTL</emphasis> specifies the TTL (time-to-live) for
                              autodiscovery IP multicast packets.</para>
                      </listitem>
  
                  </itemizedlist>
  
                  <para>The full default configuration of the <literal>HANamingService</literal> MBean is as follows.</para>
  
                  <programlisting>
  &lt;mbean code="org.jboss.ha.jndi.HANamingService" 
        name="jboss:service=HAJNDI"> 
      &lt;depends>
          jboss:service=${jboss.partition.name:DefaultPartition}
      &lt;/depends> 
    &lt;!-- Name of the partition to which the service is linked --> 
    &lt;attribute name="PartitionName">
        ${jboss.partition.name:DefaultPartition}
    &lt;/attribute> 
    &lt;!-- Bind address of bootstrap and HA-JNDI RMI endpoints --> 
    &lt;attribute name="BindAddress">${jboss.bind.address}&lt;/attribute> 
    &lt;!-- Port on which the HA-JNDI stub is made available --> 
    &lt;attribute name="Port">1100&lt;/attribute> 
    &lt;!-- RmiPort to be used by the HA-JNDI service once bound. 
        0 is for auto. --> 
    &lt;attribute name="RmiPort">1101&lt;/attribute> 
    &lt;!-- Accept backlog of the bootstrap socket --> 
    &lt;attribute name="Backlog">50&lt;/attribute> 
    &lt;!-- The thread pool service used to control the bootstrap and 
      auto discovery lookups --> 
    &lt;depends optional-attribute-name="LookupPool" 
        proxy-type="attribute">jboss.system:service=ThreadPool&lt;/depends>

    &lt;!-- A flag to disable the auto discovery via multicast --> 
    &lt;attribute name="DiscoveryDisabled">false&lt;/attribute> 
    &lt;!-- Set the auto-discovery bootstrap multicast bind address. --> 
    &lt;attribute name="AutoDiscoveryBindAddress">
        ${jboss.bind.address}
    &lt;/attribute> 

    &lt;!-- Multicast Address and group port used for auto-discovery --> 
    &lt;attribute name="AutoDiscoveryAddress">
        ${jboss.partition.udpGroup:230.0.0.4}
    &lt;/attribute> 
    &lt;attribute name="AutoDiscoveryGroup">1102&lt;/attribute> 
    &lt;!-- The TTL (time-to-live) for autodiscovery IP multicast packets --> 
    &lt;attribute name="AutoDiscoveryTTL">16&lt;/attribute>

    &lt;!-- Client socket factory to be used for client-server 
           RMI invocations during JNDI queries 
    &lt;attribute name="ClientSocketFactory">custom&lt;/attribute> 
    --> 
    &lt;!-- Server socket factory to be used for client-server 
           RMI invocations during JNDI queries 
    &lt;attribute name="ServerSocketFactory">custom&lt;/attribute> 
      --> 
  &lt;/mbean>            
              </programlisting>
  
                  <para>It is possible to start several HA-JNDI services that use different clusters. This can be used,
                    for example, if a node is part of many clusters. In this case, make sure that you set a different
                    port or IP address for each service. For instance, if you wanted to hook up HA-JNDI to the example
                    cluster you set up and change the binding port, the MBean descriptor would look as follows.</para>
                  <programlisting>
  &lt;mbean code="org.jboss.ha.jndi.HANamingService"    
         name="jboss:service=HAJNDI">    
      &lt;depends>jboss:service=MySpecialPartition&lt;/depends>    
      &lt;attribute name="PartitionName">MySpecialPartition&lt;/attribute>    
      &lt;attribute name="Port">56789&lt;/attribute>  
  &lt;/mbean> 
              </programlisting>
              </section>
          </section>
          <section id="clustering-session">
              <title>Clustered Session EJBs</title>
              <para>Session EJBs provide remote invocation services. They are clustered based on the client-side
                  interceptor architecture. The client application for a clustered session bean is exactly the same as the
                  client for the non-clustered version of the session bean, except for a minor change to the
                <literal>java.naming.provider.url</literal> system property to enable HA-JNDI lookup (see previous
                  section). No code change or re-compilation is needed on the client side. Now, let's check out how to
                  configure clustered session beans in EJB 2.x and EJB 3.0 server applications respectively.</para>
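            <para>To make this concrete, a client of the clustered stateless bean configured in the next section could
                look like the sketch below. The home and remote interface names are placeholders, the JNDI name
                matches the <literal>jndi-name</literal> in the descriptor, and only the provider URL differs from a
                non-clustered client.</para>
            <programlisting>
// Hypothetical client fragment; StatelessSessionHome/StatelessSession stand in
// for the bean's real home and remote interfaces.
System.setProperty("java.naming.provider.url",
                   "server1:1100,server2:1100");   // HA-JNDI nodes
javax.naming.Context ctx = new javax.naming.InitialContext();
Object ref = ctx.lookup("nextgen.StatelessSession");
StatelessSessionHome home = (StatelessSessionHome)
        javax.rmi.PortableRemoteObject.narrow(ref, StatelessSessionHome.class);
StatelessSession bean = home.create();
// Every call on "bean" is routed by the stub according to the configured
// load-balance policy.
            </programlisting>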
              <section id="clustering-session-slsb21">
                  <title>Stateless Session Bean in EJB 2.x</title>
                  <para>Clustering stateless session beans is most probably the easiest case: as no state is involved,
                      calls can be load-balanced on any participating node (i.e. any node that has this specific bean
                      deployed) of the cluster. To make a bean clustered, you need to modify its
                      <literal>jboss.xml</literal> descriptor to contain a <literal>&lt;clustered></literal> tag.</para>
                  <programlisting>
  &lt;jboss>    
      &lt;enterprise-beans>      
          &lt;session>        
              &lt;ejb-name>nextgen.StatelessSession&lt;/ejb-name>        
              &lt;jndi-name>nextgen.StatelessSession&lt;/jndi-name>        
              &lt;clustered>True&lt;/clustered>        
              &lt;cluster-config>          
                  &lt;partition-name>DefaultPartition&lt;/partition-name>          
                  &lt;home-load-balance-policy>                 
                      org.jboss.ha.framework.interfaces.RoundRobin          
                  &lt;/home-load-balance-policy>          
                  &lt;bean-load-balance-policy>  
                      org.jboss.ha.framework.interfaces.RoundRobin
                  &lt;/bean-load-balance-policy>
              &lt;/cluster-config>
          &lt;/session>
      &lt;/enterprise-beans>
  &lt;/jboss>
              </programlisting>
  
                  <note>
                      <para>The <literal>&lt;clustered>True&lt;/clustered></literal> element is really just an
                          alias for the <literal>&lt;configuration-name>Clustered Stateless
                              SessionBean&lt;/configuration-name></literal> element.</para>
                  </note>
  
                  <para>In the bean configuration, only the <literal>&lt;clustered></literal> element is mandatory. It
                      indicates that the bean works in a cluster. The <literal>&lt;cluster-config></literal> element
                      is optional and the default values of its attributes are indicated in the sample configuration
                      above. Below is a description of the attributes in the <literal>&lt;cluster-config></literal>
                      element.</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">partition-name</emphasis> specifies the name of the cluster the bean
                              participates in. The default value is <literal>DefaultPartition</literal>. The default
                              partition name can also be set system-wide using the <literal>jboss.partition.name</literal>
                              system property.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">home-load-balance-policy</emphasis> indicates the class to be used
                              by the home stub to balance calls made on the nodes of the cluster. By default, the proxy
                              will load-balance calls in a <literal>RoundRobin</literal> fashion. You can also implement
                              your own load-balance policy class or use the class <literal>FirstAvailable</literal> that
                              persists to use the first node available that it meets until it fails.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">bean-load-balance-policy</emphasis> Indicates the class to be used
                              by the bean stub to balance calls made on the nodes of the cluster. Comments made for the
                                  <literal>home-load-balance-policy</literal> attribute also apply.</para>
                      </listitem>
                  </itemizedlist>
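                <para>For readers who want to write their own policy, the sketch below shows the general shape. It
                    assumes the 3.2-era <literal>org.jboss.ha.framework.interfaces.LoadBalancePolicy</literal>
                    contract (an <literal>init</literal> method plus two <literal>chooseTarget</literal> overloads)
                    and that <literal>FamilyClusterInfo</literal> exposes the current target list via
                    <literal>getTargets()</literal>; verify the interface in your JBoss version before relying on
                    this.</para>
                <programlisting>
import java.util.ArrayList;
import java.util.Random;
import org.jboss.ha.framework.interfaces.FamilyClusterInfo;
import org.jboss.ha.framework.interfaces.HARMIClient;
import org.jboss.ha.framework.interfaces.LoadBalancePolicy;
import org.jboss.invocation.Invocation;

// Hypothetical policy that picks a random target for every call.
public class MyRandomPolicy implements LoadBalancePolicy {
    private static final Random random = new Random();

    public void init(HARMIClient father) {
        // no per-proxy state needed for this policy
    }

    public Object chooseTarget(FamilyClusterInfo clusterFamily) {
        return chooseTarget(clusterFamily, null);
    }

    public Object chooseTarget(FamilyClusterInfo clusterFamily, Invocation routingDecision) {
        ArrayList targets = clusterFamily.getTargets();
        if (targets.size() == 0) {
            return null;   // no live node: let the caller fail
        }
        return targets.get(random.nextInt(targets.size()));
    }
}
            </programlisting>
                <para>The fully qualified class name of such a policy is what goes into the
                    <literal>home-load-balance-policy</literal> or <literal>bean-load-balance-policy</literal>
                    elements shown above.</para>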
                  <para>In JBoss 3.0.x, each client-side stub has its own list of available target nodes. Consequently,
                      some side-effects can occur. For example, if you cache your home stub and re-create a remote stub
                      for a stateless session bean (with the Round-Robin policy) each time you need to make an invocation,
                      a new remote stub, containing the list of available targets, will be downloaded for each invocation.
                    Consequently, as the first target node is always the first in the list, calls will not seem to be
                    load-balanced because there is no usage history between different stubs. In JBoss 3.2+, the proxy
                    families (i.e., the "First Available Identical All Proxies" load balancing policy, see <xref
                          linkend="clustering-intro-balancepolicy-32"/>) remove this side effect as the home and remote
                      stubs of a given EJB are in two different families.</para>
  
                  <section id="clustering-session-slsb21-retry">
                      <title>Handle Cluster Restart</title>
                      <para>We have covered the HA smart client architecture in <xref
                              linkend="clustering-intro-arch-proxy"/>. The default HA smart proxy client can only failover
                          as long as one node in the cluster exists. If there is a complete cluster shutdown, the proxy
                        becomes orphaned and loses knowledge of the available nodes in the cluster. There is no way
                          for the proxy to recover from this. The proxy needs to be looked up out of JNDI/HAJNDI when the
                          nodes are restarted.</para>
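                    <para>One way to handle this by hand in the client is sketched below; the bean interfaces, the
                        method, and the JNDI name are placeholders that match the earlier stateless-bean example. The
                        <literal>RetryInterceptor</literal> described next automates this.</para>
                    <programlisting>
// Hypothetical manual recovery: "bean" was obtained from an earlier HA-JNDI
// lookup (see the client fragment at the start of this chapter's Clustered
// Session EJBs section); the enclosing method declares "throws Exception".
try {
    bean.doWork();
} catch (java.rmi.RemoteException e) {
    // The old proxy has no live targets left (e.g., full cluster restart):
    // look the home up again through HA-JNDI and retry once.
    javax.naming.Context ctx = new javax.naming.InitialContext();
    StatelessSessionHome freshHome = (StatelessSessionHome)
            javax.rmi.PortableRemoteObject.narrow(
                ctx.lookup("nextgen.StatelessSession"), StatelessSessionHome.class);
    bean = freshHome.create();
    bean.doWork();
}
                </programlisting>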
                      <para>The 3.2.7+/4.0.2+ releases contain a <literal>RetryInterceptor</literal> that can be added to
                          the proxy client side interceptor stack to allow for a transparent recovery from such a restart
                        failure. To enable it for an EJB, set up an <literal>invoker-proxy-binding</literal> that
                          includes the <literal>RetryInterceptor</literal>. Below is an example
                          <literal>jboss.xml</literal> configuration.</para>
  
                      <programlisting>
  &lt;jboss>
      &lt;session>
          &lt;ejb-name>nextgen_RetryInterceptorStatelessSession&lt;/ejb-name>
          &lt;invoker-bindings>
              &lt;invoker>
                  &lt;invoker-proxy-binding-name>
                      clustered-retry-stateless-rmi-invoker
                  &lt;/invoker-proxy-binding-name>
                  &lt;jndi-name>
                      nextgen_RetryInterceptorStatelessSession
                  &lt;/jndi-name>
              &lt;/invoker>
          &lt;/invoker-bindings>
          &lt;clustered>true&lt;/clustered>
      &lt;/session>
  
      &lt;invoker-proxy-binding>
          &lt;name>clustered-retry-stateless-rmi-invoker&lt;/name>
          &lt;invoker-mbean>jboss:service=invoker,type=jrmpha&lt;/invoker-mbean>
          &lt;proxy-factory>org.jboss.proxy.ejb.ProxyFactoryHA&lt;/proxy-factory>
          &lt;proxy-factory-config>
              &lt;client-interceptors>
                  &lt;home>
                      &lt;interceptor>
                          org.jboss.proxy.ejb.HomeInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.SecurityInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.TransactionInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.ejb.RetryInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.invocation.InvokerInterceptor
                      &lt;/interceptor>
                  &lt;/home>
                  &lt;bean>
                      &lt;interceptor>
                          org.jboss.proxy.ejb.StatelessSessionInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.SecurityInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.TransactionInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.proxy.ejb.RetryInterceptor
                      &lt;/interceptor>
                      &lt;interceptor>
                          org.jboss.invocation.InvokerInterceptor
                      &lt;/interceptor>
                  &lt;/bean>
              &lt;/client-interceptors>
          &lt;/proxy-factory-config>
      &lt;/invoker-proxy-binding>
                  </programlisting>
                  </section>
              </section>
  
              <section id="clustering-session-sfsb21">
                  <title>Stateful Session Bean in EJB 2.x</title>
                  <para>Clustering stateful session beans is more complex than clustering their stateless counterparts
                    since JBoss needs to manage the state information. The state of all stateful session beans is
                      replicated and synchronized across the cluster each time the state of a bean changes. The JBoss AS
                      uses the <literal>HASessionState</literal> MBean to manage distributed session states for clustered
                      EJB 2.x stateful session beans. In this section, we cover both the session bean configuration and
                      the <literal>HASessionState</literal> MBean configuration.</para>
                  <section>
                      <title>The EJB application configuration</title>
                      <para>In the EJB application, you need to modify the <literal>jboss.xml</literal> descriptor file
                          for each stateful session bean and add the <literal>&lt;clustered></literal> tag.</para>
                      <programlisting>
  &lt;jboss>    
      &lt;enterprise-beans>
          &lt;session>        
              &lt;ejb-name>nextgen.StatefulSession&lt;/ejb-name>        
              &lt;jndi-name>nextgen.StatefulSession&lt;/jndi-name>        
              &lt;clustered>True&lt;/clustered>        
              &lt;cluster-config>          
                  &lt;partition-name>DefaultPartition&lt;/partition-name>
                  &lt;home-load-balance-policy>               
                      org.jboss.ha.framework.interfaces.RoundRobin          
                  &lt;/home-load-balance-policy>          
                  &lt;bean-load-balance-policy>               
                      org.jboss.ha.framework.interfaces.FirstAvailable          
                  &lt;/bean-load-balance-policy>          
                  &lt;session-state-manager-jndi-name>              
                      /HASessionState/Default          
                  &lt;/session-state-manager-jndi-name>        
              &lt;/cluster-config>      
          &lt;/session>    
      &lt;/enterprise-beans>
  &lt;/jboss> 
                  </programlisting>
                      <para>In the bean configuration, only the <literal>&lt;clustered></literal> tag is mandatory to
                          indicate that the bean works in a cluster. The <literal>&lt;cluster-config></literal>
                          element is optional and its default attribute values are indicated in the sample configuration
                          above.</para>
                      <para>The <literal>&lt;session-state-manager-jndi-name></literal> tag is used to give the JNDI
                          name of the <literal>HASessionState</literal> service to be used by this bean.</para>
                    <para>The description of the remaining tags is identical to the one for stateless session beans.
                          Actions on the clustered stateful session bean's home interface are by default load-balanced,
                          round-robin. Once the bean's remote stub is available to the client, calls will not be
                          load-balanced round-robin any more and will stay "sticky" to the first node in the list.</para>
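                    <para>A minimal client sketch of this behaviour follows, assuming <literal>ctx</literal> is an
                        <literal>InitialContext</literal> bound to HA-JNDI; the
                        <literal>Counter</literal>/<literal>CounterHome</literal> interfaces are hypothetical and the
                        JNDI name matches the sample configuration above.</para>
                    <programlisting>
// Each create() on the home stub is load-balanced (round-robin) ...
Object ref = ctx.lookup("nextgen.StatefulSession");
CounterHome home = (CounterHome)
        PortableRemoteObject.narrow(ref, CounterHome.class);
Counter counter = home.create();

// ... but calls on the resulting remote stub stick to the node that
// created the bean (FirstAvailable policy) until that node fails.
counter.increment();
counter.increment();
                    </programlisting>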
                  </section>
                  <section>
                      <title>Optimize state replication</title>
                      <para>As the replication process is a costly operation, you can optimise this behaviour by
                          optionally implementing in your bean class a method with the following signature:</para>
                      <programlisting>  
  public boolean isModified ();
                  </programlisting>
                    <para>Before replicating your bean, the container checks whether your bean implements this method.
                        If it does, the container calls the <literal>isModified()</literal> method and only
                        replicates the bean when the method returns <literal>true</literal>. If the bean has not been
                        modified (or not enough to require replication, depending on your own preferences), you can
                        return <literal>false</literal> and the replication will not occur. This feature is available
                          on JBoss AS 3.0.1+ only.</para>
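                    <para>A minimal sketch of one common way to drive <literal>isModified()</literal> from a dirty
                        flag is shown below; the bean and method names are hypothetical, and the flag is reset on the
                        assumption that the container replicates whenever the method returns
                        <literal>true</literal>.</para>
                    <programlisting>
public class CounterBean implements javax.ejb.SessionBean {

    private int count;
    private transient boolean dirty;

    public void increment() {
        count++;
        dirty = true;   // state changed, replication needed
    }

    public int getCount() {
        return count;   // read-only, no replication needed
    }

    // Called by the container before replication.
    public boolean isModified() {
        boolean modified = dirty;
        dirty = false;
        return modified;
    }

    // Standard SessionBean callbacks.
    public void ejbCreate() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(javax.ejb.SessionContext ctx) {}
}
                    </programlisting>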
                  </section>
                  <section>
                      <title>The HASessionState service configuration</title>
                      <para>The <literal>HASessionState</literal> service MBean is defined in the
                              <code>all/deploy/cluster-service.xml</code> file.</para>
                      <programlisting>
  &lt;mbean code="org.jboss.ha.hasessionstate.server.HASessionStateService"
        name="jboss:service=HASessionState">
      &lt;depends>
          jboss:service=${jboss.partition.name:DefaultPartition}
      &lt;/depends>
      &lt;!-- Name of the partition to which the service is linked -->
      &lt;attribute name="PartitionName">
          ${jboss.partition.name:DefaultPartition}
      &lt;/attribute>
      &lt;!-- JNDI name under which the service is bound -->
      &lt;attribute name="JndiName">/HASessionState/Default&lt;/attribute>
      &lt;!-- Max delay before cleaning unreclaimed state.
             Defaults to 30*60*1000 => 30 minutes -->
      &lt;attribute name="BeanCleaningDelay">0&lt;/attribute>
  &lt;/mbean>
                  </programlisting>
                      <para>The configuration attributes in the <literal>HASessionState</literal> MBean are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">JndiName</emphasis> is an optional attribute to specify the JNDI
                                  name under which this <literal>HASessionState</literal> service is bound. The default
                                  value is <literal>/HAPartition/Default</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">PartitionName</emphasis> is an optional attribute to specify the
                                  name of the cluster in which the current <literal>HASessionState</literal> protocol will
                                  work. The default value is <literal>DefaultPartition</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">BeanCleaningDelay</emphasis> is an optional attribute to specify
                                    the number of milliseconds after which the <literal>HASessionState</literal> service can
                                    clean a state that has not been modified. If a node owning a bean crashes, its brother
                                    node takes ownership of the bean. However, the container cache of the brother
                                    node does not know about the bean (it has never seen it before) and will never remove it
                                    according to the bean's cleaning settings. That is why the
                                    <literal>HASessionState</literal> service needs to perform this cleanup itself. The
                                  default value is <literal>30*60*1000</literal> milliseconds (i.e., 30 minutes).</para>
                          </listitem>
                      </itemizedlist>
                  </section>
              </section>
              <!-- TBD: Would be good to give a more complex example with
                  attributes on the annotations -->
              <section id="clustering-session-slsb30">
                  <title>Stateless Session Bean in EJB 3.0</title>
                  <para>To cluster a stateless session bean in EJB 3.0, all you need to do is to annotate the bean class
                    with the <literal>@Clustered</literal> annotation. You can pass in the load balance policy and
                    cluster partition as parameters to the annotation. The default load balance policy is
                        <literal>org.jboss.ha.framework.interfaces.RandomRobin</literal> and the default cluster is
                        <literal>DefaultPartition</literal>. Below is the definition of the <literal>@Clustered</literal>
                      annotation.</para>
                  <programlisting>
  public @interface Clustered {
     Class loadBalancePolicy() default LoadBalancePolicy.class;
     String partition() default "DefaultPartition";
  }
              </programlisting>
                  <para>Here is an example of a clustered EJB 3.0 stateless session bean implementation.</para>
                  <programlisting>
  @Stateless
  @Clustered
  public class MyBean implements MySessionInt {
     
     public void test() {
        // Do something cool
     }
  }
              </programlisting>
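                <para>Based on the annotation definition above, you can also set the partition and the load balance
                    policy explicitly; the partition name below is just an example.</para>
                <programlisting>
@Stateless
@Clustered(partition = "MyPartition",
           loadBalancePolicy = org.jboss.ha.framework.interfaces.FirstAvailable.class)
public class MyBean implements MySessionInt {

   public void test() {
      // Do something cool
   }
}
                </programlisting>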
              </section>
              <section id="clustering-session-sfsb30">
                  <title>Stateful Session Bean in EJB 3.0</title>
                  <para>To cluster stateful session beans in EJB 3.0, you need to tag the bean implementation class with
                    the <literal>@Clustered</literal> annotation, just as we did with the EJB 3.0 stateless session bean
                      earlier.</para>
                  <programlisting>
  @Stateful
  @Clustered
  public class MyBean implements MySessionInt {
     
     private int state = 0;
  
     public void increment() {
        System.out.println("counter: " + (state++));
     }
  }
              </programlisting>
                  <para>JBoss Cache provides the session state replication service for EJB 3.0 stateful session beans. The
                      related MBean service is defined in the <literal>ejb3-clustered-sfsbcache-service.xml</literal> file
                      in the <literal>deploy</literal> directory. The contents of the file are as follows.</para>
                  <programlisting>
  &lt;server>
     &lt;mbean code="org.jboss.ejb3.cache.tree.PassivationTreeCache"
         name="jboss.cache:service=EJB3SFSBClusteredCache">
        
          &lt;attribute name="IsolationLevel">READ_UNCOMMITTED&lt;/attribute>
          &lt;attribute name="CacheMode">REPL_SYNC&lt;/attribute>
          &lt;attribute name="ClusterName">SFSB-Cache&lt;/attribute>
          &lt;attribute name="ClusterConfig">
              ... ...
          &lt;/attribute>
  
          &lt;!--  Number of milliseconds to wait until all responses for a
                synchronous call have been received.
          -->
          &lt;attribute name="SyncReplTimeout">10000&lt;/attribute>
  
          &lt;!--  Max number of milliseconds to wait for a lock acquisition -->
          &lt;attribute name="LockAcquisitionTimeout">15000&lt;/attribute>
  
          &lt;!--  Name of the eviction policy class. -->
          &lt;attribute name="EvictionPolicyClass">
              org.jboss.ejb3.cache.tree.StatefulEvictionPolicy
          &lt;/attribute>
  
          &lt;!--  Specific eviction policy configurations. This is LRU -->
          &lt;attribute name="EvictionPolicyConfig">
              &lt;config>
                  &lt;attribute name="wakeUpIntervalSeconds">1&lt;/attribute>
                  &lt;name>statefulClustered&lt;/name>
                  &lt;region name="/_default_">
                      &lt;attribute name="maxNodes">1000000&lt;/attribute>
                      &lt;attribute name="timeToIdleSeconds">300&lt;/attribute>
                  &lt;/region>
              &lt;/config>
          &lt;/attribute>
  
          &lt;attribute name="CacheLoaderFetchPersistentState">false&lt;/attribute>
          &lt;attribute name="CacheLoaderFetchTransientState">true&lt;/attribute>
          &lt;attribute name="FetchStateOnStartup">true&lt;/attribute>
          &lt;attribute name="CacheLoaderClass">
              org.jboss.ejb3.cache.tree.StatefulCacheLoader
          &lt;/attribute>
          &lt;attribute name="CacheLoaderConfig">
              location=statefulClustered
          &lt;/attribute>
     &lt;/mbean>
  &lt;/server>
              </programlisting>
                  <para>The configuration attributes in the <literal>PassivationTreeCache</literal> MBean are essentially
                      the same as the attributes in the standard JBoss Cache <literal>TreeCache</literal> MBean discussed
                      in <xref linkend="jbosscache.chapt"/>. Again, we omitted the JGroups configurations in the
                          <literal>ClusterConfig</literal> attribute (see more in <xref linkend="jbosscache-jgroups"
                  />).</para>
              </section>
          </section>
          <section id="clustering-entity">
              <title>Clustered Entity EJBs</title>
            <para>In a JBoss AS cluster, the entity bean instances need to be replicated across all nodes. If an entity
                  bean provides remote services, the service methods need to be load balanced as well.</para>
              <para>To use a clustered entity bean, the application does not need to do anything special, except for
                  looking up bean references from the clustered HA-JNDI.</para>
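            <para>For example, a remote client would point its <literal>InitialContext</literal> at the HA-JNDI
                service (port 1100 by default) rather than at a single node's JNDI port. A minimal sketch follows;
                the host names are examples, and the JNDI name matches the sample entity bean configured
                below.</para>
            <programlisting>
Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.jnp.interfaces.NamingContextFactory");
// HA-JNDI listens on port 1100 by default. Listing several nodes lets
// the client bootstrap even if the first one is down.
env.put(Context.PROVIDER_URL,
        "node1.mydomain.com:1100,node2.mydomain.com:1100");

Context ctx = new InitialContext(env);
Object homeRef = ctx.lookup("nextgen.EnterpriseEntity");
            </programlisting>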
              <section id="clustering-entity-21">
                  <title>Entity Bean in EJB 2.x</title>
  
                <para>First of all, it is worth noting that clustering EJB 2.x entity beans is generally a bad idea. It
                    exposes, as clustered remote objects, elements that are usually too fine grained to be used as remote
                    objects, and it introduces non-trivial data synchronization problems. Do NOT use EJB 2.x entity
                    bean clustering unless you fit into the special case of read-only beans, or of one read-write node
                    with read-only nodes kept in sync via the cache invalidation services.</para>
  
                  <!--
                  TODO: Discuss what is cache invalidation service
              -->
  
                  <para>To cluster EJB 2.x entity beans, you need to add the <literal>&lt;clustered></literal> element
                      to the application's <literal>jboss.xml</literal> descriptor file. Below is a typical
                          <literal>jboss.xml</literal> file.</para>
                  <programlisting>
  &lt;jboss>    
      &lt;enterprise-beans>      
          &lt;entity>        
              &lt;ejb-name>nextgen.EnterpriseEntity&lt;/ejb-name>        
              &lt;jndi-name>nextgen.EnterpriseEntity&lt;/jndi-name>          
              &lt;clustered>True&lt;/clustered>         
              &lt;cluster-config>            
                  &lt;partition-name>DefaultPartition&lt;/partition-name>            
                  &lt;home-load-balance-policy>                 
                      org.jboss.ha.framework.interfaces.RoundRobin            
                  &lt;/home-load-balance-policy>            
                  &lt;bean-load-balance-policy>                
                      org.jboss.ha.framework.interfaces.FirstAvailable            
                  &lt;/bean-load-balance-policy>          
              &lt;/cluster-config>      
          &lt;/entity>    
      &lt;/enterprise-beans>  
  &lt;/jboss>
              </programlisting>
                  <para>The EJB 2.x entity beans are clustered for load balanced remote invocations. All the bean
                      instances are synchronized to have the same contents on all nodes.</para>
                  <para>However, clustered EJB 2.x Entity Beans do not have a distributed locking mechanism or a
                      distributed cache. They can only be synchronized by using row-level locking at the database level
                      (see <literal>&lt;row-lock></literal> in the CMP specification) or by setting the Transaction
                      Isolation Level of your JDBC driver to be <literal>TRANSACTION_SERIALIZABLE</literal>. Because there
                    is no supported distributed locking mechanism or distributed cache, Entity Beans use Commit Option
                      "B" by default (See <literal>standardjboss.xml</literal> and the container configurations Clustered
                      CMP 2.x EntityBean, Clustered CMP EntityBean, or Clustered BMP EntityBean). It is not recommended
                      that you use Commit Option "A" unless your Entity Bean is read-only. (There are some design patterns
                      that allow you to use Commit Option "A" with read-mostly beans. You can also take a look at the
                      Seppuku pattern <ulink url="http://dima.dhs.org/misc/readOnlyUpdates.html"/>. JBoss may incorporate
                      this pattern into later versions.)</para>
                  <note>
                      <para>If you are using Bean Managed Persistence (BMP), you are going to have to implement
                          synchronization on your own. The MVCSoft CMP 2.0 persistence engine (see <ulink
                              url="http://www.jboss.org/jbossgroup/partners.jsp"/>) provides different kinds of optimistic
                          locking strategies that can work in a JBoss cluster.</para>
                  </note>
              </section>
              <section id="clustering-entity-30">
                  <title>Entity Bean in EJB 3.0</title>
  
                  <!--
                  TODO: Discuss the drawback of EJB 3.0 clustering
              -->
  
                  <para>In EJB 3.0, the entity beans primarily serve as a persistence data model. They do not provide
                      remote services. Hence, the entity bean clustering service in EJB 3.0 primarily deals with
                      distributed caching and replication, instead of load balancing.</para>
                  <section id="clustering-entity-30-cache">
                      <title>Configure the distributed cache</title>
                      <para>To avoid round trips to the database, you can use a cache for your entities. JBoss EJB 3.0 is
                          implemented by Hibernate, which has support for a second-level cache. The Hibernate setup used
                          for the JBoss EJB 3.0 implementation uses JBoss Cache as its underlying cache implementation.
                          The cache provides the following functionalities.</para>
                      <itemizedlist>
                          <listitem>
                            <para>If you persist a cache-enabled entity bean instance to the database via the entity
                                manager, the entity will be inserted into the cache.</para>
                          </listitem>
                          <listitem>
                            <para>If you update an entity bean instance and save the changes to the database via the
                                entity manager, the entity will be updated in the cache.</para>
                          </listitem>
                          <listitem>
                            <para>If you remove an entity bean instance from the database via the entity manager, the
                                entity will be removed from the cache.</para>
                          </listitem>
                          <listitem>
                            <para>If you load an entity from the database via the entity manager and that entity is
                                not already in the cache, it will be inserted into the cache (see the sketch after
                                this list).</para>
                          </listitem>
                      </itemizedlist>
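                    <para>A minimal sketch of these interactions is shown below, assuming an injected
                        <literal>EntityManager</literal> and the <literal>Customer</literal> entity cached later in
                        this section; the accessor methods are hypothetical.</para>
                    <programlisting>
// Insert: the new Customer ends up in the database and in the cache.
Customer c = new Customer();
c.setName("Acme");
entityManager.persist(c);

// Update: the change is flushed to the database and the cached copy
// is refreshed as well.
c.setName("Acme, Inc.");

// Load: if the entity is already cached, the database is not hit.
Customer loaded = entityManager.find(Customer.class, c.getId());

// Remove: the entity is deleted from the database and from the cache.
entityManager.remove(c);
                    </programlisting>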
                      <para>JBoss Cache service for EJB 3.0 entity beans is configured in a <literal>TreeCache</literal>
                          MBean (see <xref linkend="jbosscache-cache"/>) in the
                              <literal>deploy/ejb3-entity-cache-service.xml</literal> file. The name of the cache MBean
                        service is <literal>jboss.cache:service=EJB3EntityTreeCache</literal>. Below are the contents of
                          the <literal>ejb3-entity-cache-service.xml</literal> file in the standard JBoss distribution.
                          Again, we omitted the JGroups configuration element <literal>ClusterConfig</literal>.</para>
                      <programlisting>
  &lt;server>
      &lt;mbean code="org.jboss.cache.TreeCache" 
              name="jboss.cache:service=EJB3EntityTreeCache">
          
          &lt;depends>jboss:service=Naming&lt;/depends>
          &lt;depends>jboss:service=TransactionManager&lt;/depends>
  
          &lt;!-- Configure the TransactionManager -->
          &lt;attribute name="TransactionManagerLookupClass">
              org.jboss.cache.JBossTransactionManagerLookup
          &lt;/attribute>
  
          &lt;attribute name="IsolationLevel">REPEATABLE_READ&lt;/attribute>
          &lt;attribute name="CacheMode">REPL_SYNC&lt;/attribute>
  
          &lt;!--Name of cluster. Needs to be the same for all clusters, 
              in order to find each other -->
          &lt;attribute name="ClusterName">EJB3-entity-cache&lt;/attribute>
  
          &lt;attribute name="ClusterConfig">
              ... ...
          &lt;/attribute>
  
          &lt;attribute name="InitialStateRetrievalTimeout">5000&lt;/attribute>
          &lt;attribute name="SyncReplTimeout">10000&lt;/attribute>
          &lt;attribute name="LockAcquisitionTimeout">15000&lt;/attribute>
  
          &lt;attribute name="EvictionPolicyClass">
              org.jboss.cache.eviction.LRUPolicy
          &lt;/attribute>
  
          &lt;!--  Specific eviction policy configurations. This is LRU -->
          &lt;attribute name="EvictionPolicyConfig">
              &lt;config>
                  &lt;attribute name="wakeUpIntervalSeconds">5&lt;/attribute>
                  &lt;!--  Cache wide default -->
                  &lt;region name="/_default_">
                      &lt;attribute name="maxNodes">5000&lt;/attribute>
                      &lt;attribute name="timeToLiveSeconds">1000&lt;/attribute>
                  &lt;/region>
              &lt;/config>
          &lt;/attribute>
      &lt;/mbean>
  &lt;/server>
                  </programlisting>
                      <para>As we discussed in <xref linkend="jbosscache-cache"/>, JBoss Cache allows you to specify
                        timeouts for cached entities. Entities not accessed within a certain amount of time are dropped
                          from the cache in order to save memory. If running within a cluster, and the cache is updated,
                          changes to the entries in one node will be replicated to the corresponding entries in the other
                          nodes in the cluster.</para>
                      <para>Now, we have JBoss Cache configured to support distributed caching of EJB 3.0 entity beans. We
                          still have to configure individual entity beans to use the cache service.</para>
                  </section>
                  <section id="clustering-entity-30-bean">
                      <title>Configure the entity beans for cache</title>
                      <para>You define your entity bean classes the normal way. Future versions of JBoss EJB 3.0 will
                          support annotating entities and their relationship collections as cached, but for now you have
                        to configure the underlying Hibernate engine directly. Take a look at the
                            <literal>persistence.xml</literal> file, which configures the caching options for Hibernate
                          via its optional <literal>property</literal> elements. The following element in
                              <literal>persistence.xml</literal> defines that caching should be enabled:</para>
                      <programlisting>
  &lt;!-- Clustered cache with TreeCache -->
  &lt;property name="cache.provider_class">
      org.jboss.ejb3.entity.TreeCacheProviderHook
  &lt;/property>
                  </programlisting>
                    <para>The following property element defines the object name of the cache MBean to be
                        used.</para>
                      <programlisting>
  &lt;property name="treecache.mbean.object_name">
      jboss.cache:service=EJB3EntityTreeCache
  &lt;/property>
                  </programlisting>
                    <para>Next we need to configure which entities should be cached. The default is to not cache anything,
                        even with the settings shown above. We use the <literal>@Cache</literal> annotation to tag entity
                        beans that need to be cached.</para>
  
                      <programlisting>
  @Entity 
  @Cache(usage=CacheConcurrencyStrategy.TRANSACTIONAL) 
  public class Customer implements Serializable { 
    // ... ... 
  }
                  </programlisting>
  
                    <para>A very simplified rule of thumb is that you will typically want to do caching for objects that
                        rarely change and are frequently read. You can fine-tune the cache for each entity bean
                        in the <literal>ejb3-entity-cache-service.xml</literal> configuration file. For instance, you
                        can specify the size of the cache. If there are too many objects in the cache, the cache
                        evicts the oldest objects (or the least used objects, depending on configuration) to make room
                        for new objects. The cache for the <literal>mycompany.Customer</literal> entity bean is the
                            <literal>/mycompany/Customer</literal> cache region.</para>
  
                      <programlisting>
  &lt;server>  
    &lt;mbean code="org.jboss.cache.TreeCache" 
           name="jboss.cache:service=EJB3EntityTreeCache">  
      &lt;depends>jboss:service=Naming 
      &lt;depends>jboss:service=TransactionManager 
      ... ... 
      &lt;attribute name="EvictionPolicyConfig">  
        &lt;config>  
          &lt;attribute name="wakeUpIntervalSeconds">5&lt;/attribute>  
          &lt;region name="/_default_">  
            &lt;attribute name="maxNodes">5000&lt;/attribute>  
            &lt;attribute name="timeToLiveSeconds">1000&lt;/attribute>  
          &lt;/region>  
          &lt;region name="/mycompany/Customer">  
            &lt;attribute name="maxNodes">10&lt;/attribute>  
            &lt;attribute name="timeToLiveSeconds">5000&lt;/attribute>  
          &lt;/region>  
          ... ... 
        &lt;/config>  
      &lt;/attribute>  
    &lt;/mbean> 
  &lt;/server>
                  </programlisting>
  
                    <para>If you do not specify a cache region for an entity bean class, all instances of this class
                        will be cached in the <literal>/_default_</literal> region as defined above. The EJB3
                            <literal>Query</literal> API provides means for you to save and load query results (i.e.,
                        collections of entity beans) from specified cache regions.</para>
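                    <para>A minimal sketch is shown below, assuming the Hibernate-specific query hints are available
                        in this EJB 3.0 implementation; the query and the region name are only examples.</para>
                    <programlisting>
Query query = entityManager.createQuery(
        "select c from Customer c where c.name = :name");
query.setParameter("name", "Acme");
// Cache the query results in the given region (assumed Hibernate hints).
query.setHint("org.hibernate.cacheable", Boolean.TRUE);
query.setHint("org.hibernate.cacheRegion", "/mycompany/Customer");
List customers = query.getResultList();
                    </programlisting>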
  
                  </section>
              </section>
          </section>
          <section id="clustering-http">
              <title>HTTP Services</title>
              <para>HTTP session replication is used to replicate the state associated with your web clients on other
                nodes of a cluster. Thus, in the event one of your nodes crashes, another node in the cluster will be
                  able to recover. Two distinct functions must be performed:</para>
              <itemizedlist>
                  <listitem>
                      <para>Session state replication</para>
                  </listitem>
                  <listitem>
                      <para>Load-balance of incoming invocations</para>
                  </listitem>
              </itemizedlist>
              <para>State replication is directly handled by JBoss. When you run JBoss in the <literal>all</literal>
                  configuration, session state replication is enabled by default. Just deploy your web application and its
                  session state is already replicated across all JBoss instances in the cluster.</para>
            <para>However, load balancing is a different story: it is not handled by JBoss itself and requires
                additional software. As a very common scenario, we will demonstrate how to set up Apache and mod_jk. This
                task could also be performed by specialized hardware switches or routers (Cisco LoadDirector, for
                example) or by any other dedicated software.</para>
              <note>
                <para>A load-balancer tracks HTTP requests and, depending on the session to which a request is
                    linked, dispatches it to the appropriate node. This is called a load-balancer with
                    sticky sessions: once a session is created on a node, every future request will also be processed by
                    the same node. Using a load-balancer that supports sticky sessions without replicating the sessions
                    allows you to scale very well without the cost of session state replication: each query will always
                    be handled by the same node. But if a node dies, the state of all client sessions hosted by
                    this node is lost (the shopping carts, for example) and the clients will most probably need to
                    log in on another node and restart with a new session. In many situations, it is acceptable not to
                    replicate HTTP sessions because all critical state is stored in the database. In other situations,
                    losing a client session is not acceptable and, in this case, session state replication is the price
                    one has to pay.</para>
              </note>
            <para>Apache is a well-known web server which can be extended by plugging in modules. One of these modules,
                mod_jk (and the newer mod_jk2), has been specifically designed to forward requests from Apache to
                a Servlet container. Furthermore, it is also able to load-balance HTTP calls to a set of Servlet
                containers while maintaining sticky sessions, which is what interests us here.</para>
              <section id="clustering-http-download">
                  <title>Download the software</title>
                <para>First of all, make sure that you have Apache installed. You can download Apache directly from
                    the Apache web site at <literal>http://httpd.apache.org/</literal>. Its installation is pretty
                    straightforward and requires no specific configuration. As several versions of Apache exist, we
                    advise you to use version 2.0.x. For the next sections, we will assume that you have installed
                    Apache in the <literal>APACHE_HOME</literal> directory.</para>
                <para>Next, download the mod_jk binaries. Several versions of mod_jk exist as well. We strongly advise
                    you to use mod_jk 1.2.x, as both mod_jk 1.1.x and mod_jk2 are deprecated, unsupported and no further
                    developments are going on in the community. The mod_jk 1.2.x binary can be downloaded from
                          <literal>http://www.apache.org/dist/jakarta/tomcat-connectors/jk/binaries/</literal>. Rename the
                      downloaded file to <literal>mod_jk.so</literal> and copy it under
                      <literal>APACHE_HOME/modules/</literal>.</para>
              </section>
              <section id="clustering-http-modjk">
                  <title>Configure Apache to load mod_jk</title>
                  <para>Modify APACHE_HOME/conf/httpd.conf and add a single line at the end of the file:</para>
                  <programlisting>
  # Include mod_jk's specific configuration file  
  Include conf/mod-jk.conf  
              </programlisting>
                  <para>Next, create a new file named <literal>APACHE_HOME/conf/mod-jk.conf</literal>:</para>
                  <programlisting>
  # Load mod_jk module
  # Specify the filename of the mod_jk lib
  LoadModule jk_module modules/mod_jk.so
   
  # Where to find workers.properties
  JkWorkersFile conf/workers.properties
  
  # Where to put jk logs
  JkLogFile logs/mod_jk.log
   
  # Set the jk log level [debug/error/info]
  JkLogLevel info 
   
  # Select the log format
  JkLogStampFormat  "[%a %b %d %H:%M:%S %Y]"
   
# JkOptions indicates to send SSL KEY SIZE
  JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
   
  # JkRequestLogFormat
  JkRequestLogFormat "%w %V %T"
                 
  # Mount your applications
  JkMount /application/* loadbalancer
   
  # You can use external file for mount points.
  # It will be checked for updates each 60 seconds.
  # The format of the file is: /url=worker
  # /examples/*=loadbalancer
  JkMountFile conf/uriworkermap.properties               
  
  # Add shared memory.
  # This directive is present with 1.2.10 and
  # later versions of mod_jk, and is needed for
  # for load balancing to work properly
  JkShmFile logs/jk.shm 
                
  # Add jkstatus for managing runtime data
  &lt;Location /jkstatus/>
      JkMount status
      Order deny,allow
      Deny from all
      Allow from 127.0.0.1
  &lt;/Location>    
              </programlisting>
                  <para>Please note that two settings are very important:</para>
                  <itemizedlist>
                      <listitem>
                          <para>The <literal>LoadModule</literal> directive must reference the mod_jk library you have
                              downloaded in the previous section. You must indicate the exact same name with the "modules"
                              file path prefix.</para>
                      </listitem>
                      <listitem>
                          <para>The <literal>JkMount</literal> directive tells Apache which URLs it should forward to the
                              mod_jk module (and, in turn, to the Servlet containers). In the above file, all requests
                              with URL path <literal>/application/*</literal> are sent to the mod_jk load-balancer. This
                            way, you can configure Apache to serve static content (or PHP content) directly and only
                              use the loadbalancer for Java applications. If you only use mod_jk as a loadbalancer, you
                              can also forward all URLs (i.e., <literal>/*</literal>) to mod_jk.</para>
                      </listitem>
                  </itemizedlist>
  
                  <para>In addition to the <literal>JkMount</literal> directive, you can also use the
                      <literal>JkMountFile</literal> directive to specify a mount points configuration file, which
                      contains multiple Tomcat forwarding URL mappings. You just need to create a
                          <literal>uriworkermap.properties</literal> file in the <literal>APACHE_HOME/conf</literal>
                      directory. The format of the file is <literal>/url=worker_name</literal>. To get things started,
                      paste the following example into the file you created:</para>
  
                  <programlisting>
  # Simple worker configuration file
  
  # Mount the Servlet context to the ajp13 worker
  /jmx-console=loadbalancer
  /jmx-console/*=loadbalancer
  /web-console=loadbalancer
  /web-console/*=loadbalancer
              </programlisting>
  
                <para>This will configure mod_jk to forward requests for <literal>/jmx-console</literal> and
                        <literal>/web-console</literal> to Tomcat.</para>
  
                <para>You will most probably not change the other settings in <literal>mod-jk.conf</literal>. They are
                      used to tell mod_jk where to put its logging file, which logging level to use and so on.</para>
  
              </section>
              <section id="clustering-http-nodes">
                  <title>Configure worker nodes in mod_jk</title>
                <para>Next, you need to configure the mod_jk workers file, <literal>conf/workers.properties</literal>. This
                    file specifies where the different Servlet containers are located and how calls should be
                    load-balanced across them. The configuration file contains one section for each target servlet
                    container and one global section. For a two-node setup, the file could look like this:</para>
                  <!-- The local worker comment is from here: http://jira.jboss.com/jira/browse/JBDOCS-102 -->
                  <programlisting>
  # Define list of workers that will be used
  # for mapping requests
  worker.list=loadbalancer,status
  
  # Define Node1
  # modify the host as your host IP or DNS name.
  worker.node1.port=8009
  worker.node1.host=node1.mydomain.com 
  worker.node1.type=ajp13
  worker.node1.lbfactor=1
  worker.node1.cachesize=10
  
  # Define Node2
  # modify the host as your host IP or DNS name.
  worker.node2.port=8009
  worker.node2.host= node2.mydomain.com
  worker.node2.type=ajp13
  worker.node2.lbfactor=1
  worker.node2.cachesize=10
  
  # Load-balancing behaviour
  worker.loadbalancer.type=lb
  worker.loadbalancer.balance_workers=node1,node2
  worker.loadbalancer.sticky_session=1
  #worker.list=loadbalancer
  
  # Status worker for managing load balancer
  worker.status.type=status
              </programlisting>
                  <para>Basically, the above file configures mod_jk to perform weighted round-robin load balancing with
                      sticky sessions between two servlet containers (JBoss Tomcat) node1 and node2 listening on port
                      8009.</para>
  
                <para>In the <literal>workers.properties</literal> file, each node is defined using the
                        <literal>worker.XXX</literal> naming convention where <literal>XXX</literal> represents an
                    arbitrary name you choose for one of the target Servlet containers. For each worker, you must give
                      the host name (or IP address) and port number of the AJP13 connector running in the Servlet
                      container.</para>
                  <para>The <literal>lbfactor</literal> attribute is the load-balancing factor for this specific worker.
                      It is used to define the priority (or weight) a node should have over other nodes. The higher this
                      number is, the more HTTP requests it will receive. This setting can be used to differentiate servers
                      with different processing power.</para>
                <para>The <literal>cachesize</literal> attribute defines the size of the thread pool associated with the
                    Servlet container (i.e. the number of concurrent requests it will forward to the Servlet container).
                    Make sure this number does not exceed the number of threads configured on the AJP13 connector of
                      the Servlet container. Please review
                          <literal>http://jakarta.apache.org/tomcat/connectors-doc/config/workers.html</literal> for
                      comments on <literal>cachesize</literal> for Apache 1.3.x.</para>
                  <para>The last part of the <literal>conf/workers.properties</literal> file defines the loadbalancer
                      worker. The only thing you must change is the
                    <literal>worker.loadbalancer.balance_workers</literal> line: it must list all workers previously
                      defined in the same file: load-balancing will happen over these workers.</para>
                  <para>The <literal>sticky_session</literal> property specifies the cluster behavior for HTTP sessions.
                      If you specify <literal>worker.loadbalancer.sticky_session=0</literal>, each request will be load
                      balanced between node1 and node2. But when a user opens a session on one server, it is a good idea
                      to always forward this user's requests to the same server. This is called a "sticky session", as the
                      client is always using the same server he reached on his first request. Otherwise the user's session
                      data would need to be synchronized between both servers (session replication, see <xref
                          linkend="clustering-http-state"/>). To enable session stickiness, you need to set
                          <literal>worker.loadbalancer.sticky_session</literal> to 1.</para>
  
                  <note>
                      <para>A non-loadbalanced setup with a single node required the <literal>worker.list=node1</literal>
                          entry before mod_jk would function correctly.</para>
                  </note>
  
              </section>
  
              <section id="clustering-http-jboss">
                  <title>Configure JBoss</title>
  
                  <para>Finally, we must configure the JBoss Tomcat instances on all clustered nodes so that they can
                      expect requests forwarded from the mod_jk loadbalancer.</para>
  
                  <para>On each clustered JBoss node, we have to name the node according to the name specified in
                          <literal>workers.properties</literal>. For instance, on JBoss instance node1, edit the
                          <literal>JBOSS_HOME/server/all/deploy/jbossweb-tomcat50.sar/server.xml</literal> file (replace
                          <literal>/all</literal> with your own server name if necessary). Locate the
                          <literal>&lt;Engine></literal> element and add an attribute <literal>jvmRoute</literal>:</para>
  
                  <programlisting>
  &lt;Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">
  ... ...
  &lt;/Engine>
              </programlisting>
  
                  <para>Then, for each JBoss Tomcat instance in the cluster, we need to tell it to add the
                          <literal>jvmRoute</literal> value to its session cookies so that mod_jk can route incoming
                      requests. Edit the
                          <literal>JBOSS_HOME/server/all/deploy/jbossweb-tomcat50.sar/META-INF/jboss-service.xml</literal>
                      file (replace <literal>/all</literal> with your own server name). Locate the
                          <literal>&lt;attribute></literal> element with a name of <literal>UseJK</literal>, and set
                      its value to <literal>true</literal>:</para>
  
                  <programlisting>
  &lt;attribute name="UseJK">true&lt;/attribute>
              </programlisting>
  
                <para>At this point, you have a fully working Apache+mod_jk load-balancer setup that will balance calls
                      to the Servlet containers of your cluster while taking care of session stickiness (clients will
                      always use the same Servlet container).</para>
  
                  <note>
                      <para>For more updated information on using mod_jk 1.2 with JBoss Tomcat, please refer to the JBoss
                          wiki page at
                          <literal>http://wiki.jboss.org/wiki/Wiki.jsp?page=UsingMod_jk1.2WithJBoss</literal>.</para>
                  </note>
              </section>
  
              <section id="clustering-http-state">
                  <title>Configure HTTP session state replication</title>
  
                  <para>In <xref linkend="clustering-http-nodes"/>, we covered how to use sticky sessions to make sure
                      that a client in a session always hits the same server node in order to maintain the session state.
                      However, that is not an ideal solution. The load might be unevenly distributed over the nodes over
                      time and if a node goes down, all its session data is lost. A better and more reliable solution is
                      to replicate session data across all nodes in the cluster. This way, the client can hit any server
                      node and obtain the same session states.</para>
                  <para>The <literal>jboss.cache:service=TomcatClusteringCache</literal> MBean makes use of JBoss Cache to
                      provide HTTP session replication service to the HTTP load balancer in a JBoss Tomcat cluster. This
                      MBean is defined in the <literal>deploy/tc5-cluster.sar/META-INF/jboss-service.xml</literal> file.</para>
                  <note>
                    <para>Before AS 4.0.4 CR2, the HTTP session cache configuration file was the
                              <literal>deploy/tc5-cluster-service.xml</literal> file. Please see AS 4.0.3 documentation
                          for more details.</para>
                  </note>
  
                  <para>Below is a typical <literal>deploy/tc5-cluster.sar/META-INF/jboss-service.xml</literal> file. The
                    configuration attributes in the <literal>TomcatClusteringCache</literal> MBean are very similar to
                      those in <xref linkend="jbosscache-cache"/>.</para>
                  <programlisting>
  &lt;mbean code="org.jboss.cache.aop.TreeCacheAop"
      name="jboss.cache:service=TomcatClusteringCache">
  
      &lt;depends>jboss:service=Naming&lt;/depends>
      &lt;depends>jboss:service=TransactionManager&lt;/depends>
      &lt;depends>jboss.aop:service=AspectDeployer&lt;/depends>
  
      &lt;attribute name="TransactionManagerLookupClass">
          org.jboss.cache.BatchModeTransactionManagerLookup
      &lt;/attribute>
      
      &lt;attribute name="IsolationLevel">REPEATABLE_READ&lt;/attribute>
      
      &lt;attribute name="CacheMode">REPL_ASYNC&lt;/attribute>
      
      &lt;attribute name="ClusterName">
        Tomcat-${jboss.partition.name:Cluster}
      &lt;/attribute>
      
      &lt;attribute name="UseMarshalling">false&lt;/attribute>
      
      &lt;attribute name="InactiveOnStartup">false&lt;/attribute>
      
      &lt;attribute name="ClusterConfig">
          ... ...
      &lt;/attribute>
      
      &lt;attribute name="LockAcquisitionTimeout">15000&lt;/attribute>
  &lt;/mbean>
              </programlisting>
                  <para>The detailed configuration for the <literal>TreeCache</literal> MBean is covered in <xref
                          linkend="jbosscache-cache"/>. Below, we will just discuss several attributes that are most
                      relevant to the HTTP cluster session replication.</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">TransactionManagerLookupClass</emphasis> sets the transaction
                              manager factory. The default value is
                                  <literal>org.jboss.cache.BatchModeTransactionManagerLookup</literal>. It tells the cache
                              NOT to participate in JTA-specific transactions. Instead, the cache manages its own
                            transactions to support fine-grained replication.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">IsolationLevel</emphasis> sets the isolation level for updates to
                              the transactional distributed cache. The valid values are <literal>SERIALIZABLE</literal>,
                                  <literal>REPEATABLE_READ</literal>, <literal>READ_COMMITTED</literal>,
                                  <literal>READ_UNCOMMITTED</literal>, and <literal>NONE</literal>. These isolation levels
                              mean the same thing as isolation levels on the database. The default isolation of
                                  <literal>REPEATABLE_READ</literal> makes sense for most web applications.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">CacheMode</emphasis> controls how the cache is replicated. The valid
                              values are <literal>REPL_SYNC</literal> and <literal>REPL_ASYNC</literal>, which determine
                              whether changes are made synchronously or asynchronously. Using synchronous replication
                            makes sure changes are propagated to the cluster before the web request completes. However,
                            synchronous replication is much slower. For asynchronous access, you will want to enable and
                              tune the replication queue.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ClusterName</emphasis> specifies the name of the cluster that the
                            cache works within. The default cluster name is the word "Tomcat-" followed by the
                            current JBoss partition name. All the nodes should use the same cluster name. Although
                            session replication can share the same channel (multicast address and port) with other
                            clustered services in JBoss, replication should have its own cluster name.</para>
                      </listitem>
                      <listitem>
                          <para>The <emphasis role="bold">UseMarshalling</emphasis> and <emphasis role="bold"
                                  >InactiveOnStartup</emphasis> attributes must have the same value. They must be
                                  <literal>true</literal> if <literal>FIELD</literal> level session replication is needed
                                (see later). Otherwise, they default to <literal>false</literal>.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ClusterConfig</emphasis> configures the underlying JGroups stack.
                            The most important configuration elements are the multicast address and port,
                                  <literal>mcast_addr</literal> and <literal>mcast_port</literal> respectively, to use for
                              clustered communication. These values should make sense for your network. Please refer to
                                  <xref linkend="jbosscache-jgroups"/> for more information.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">LockAcquisitionTimeout</emphasis> sets the maximum number of
                              milliseconds to wait for a lock acquisition. The default value is 15000.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">UseReplQueue</emphasis> determines whether to enable the replication
                              queue when using asynchronous replication. This allows multiple cache updates to be bundled
                              together to improve performance. The replication queue properties are controlled by the
                                  <literal>ReplQueueInterval</literal> and <literal>ReplQueueMaxElements</literal>
                              properties.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ReplQueueInterval</emphasis> specifies the time in milliseconds
                              JBoss Cache will wait before sending items in the replication queue.</para>
                      </listitem>
                      <listitem>
                        <para><emphasis role="bold">ReplQueueMaxElements</emphasis> specifies the maximum number of
                              elements allowed in the replication queue before JBoss Cache will send an update.</para>
                      </listitem>
                  </itemizedlist>
              </section>
              <section id="clustering-http-app">
                  <title>Enabling session replication in your application</title>
                <para>To enable clustering of your web application you must mark it as distributable in the
                    <literal>web.xml</literal> descriptor. Here's an example:</para>
                  <programlisting>&lt;?xml version="1.0"?&gt; 
  &lt;web-app  xmlns=&quot;http://java.sun.com/xml/ns/j2ee&quot;
            xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; 
            xsi:schemaLocation=&quot;http://java.sun.com/xml/ns/j2ee 
                                http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd&quot; 
            version=&quot;2.4&quot;&gt;
      <emphasis role="bold">&lt;distributable/&gt;</emphasis>
      &lt;!-- ... --&gt;
  &lt;/web-app&gt;</programlisting>
                <para> You can further configure session replication using the <literal>replication-config</literal>
                      element in the <literal>jboss-web.xml</literal> file. Here is an example: </para>
                  <programlisting>&lt;jboss-web&gt;
      &lt;replication-config&gt;
          &lt;replication-trigger&gt;SET_AND_NON_PRIMITIVE_GET&lt;/replication-trigger&gt;
          &lt;replication-granularity&gt;SESSION&lt;/replication-granularity&gt;
          &lt;replication-field-batch-mode&gt;true&lt;/replication-field-batch-mode&gt;
      &lt;/replication-config&gt;
  &lt;/jboss-web&gt;</programlisting>
                    <para>The <literal>replication-trigger</literal> element determines what triggers a session replication
                        (i.e., when a session is considered dirty). It has 4 options (a short code sketch follows the
                        list):</para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">SET</emphasis>: With this policy, the session is considered dirty
                              only when an attribute is set in the session. If your application always writes changed
                              value back into the session, this option will be most optimized in term of performance. If
                              an object is retrieved from the session and modified without being written back into the
                              session, the change to that object will not be replicated.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">SET_AND_GET</emphasis>: With this policy, any attribute that is get
                              or set will be marked as dirty. If an object is retrieved from the session and modified
                              without being written back into the session, the change to that object will be replicated.
                              This option can have significant performance implications.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">SET_AND_NON_PRIMITIVE_GET</emphasis>: This policy is similar to the
                              SET_AND_GET policy except that only non-primitive get operations are considered dirty. For
                              example, the http session request may retrieve a non-primitive object instance from the
                              attribute and then modify the instance. If we don't specify that non-primitive get is
                              considered dirty, then the modification will not be replication properly. This is the
                              default value.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ACCESS</emphasis>: This option causes the session to be marked as
                              dirty whenever it is accessed. Since a the session is accessed during each HTTP request, it
                              will be replicated with each request. The access time stamp in the session instance will be
                              updated as well. Since the time stamp may not be updated in other clustering nodes because
                              of no replication, the session in other nodes may expire before the active node if the HTTP
                              request does not retrieve or modify any session attributes. When this option is set, the
                              session timestamps will be synchronized throughout the cluster nodes. Note that use of this
                              option can have a significant performance impact, so use it with caution.</para>
                      </listitem>
                  </itemizedlist>
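                    <para>The difference between these policies shows up with mutable attribute values. The following
                        minimal sketch assumes the default <literal>SET_AND_NON_PRIMITIVE_GET</literal> trigger; the
                        <literal>Cart</literal> class and its <literal>addItem()</literal> method are hypothetical and used
                        only for illustration:</para>
                    <programlisting>
// Under SET_AND_NON_PRIMITIVE_GET (the default), this non-primitive get marks
// the session dirty, so the modification below is replicated.
Cart cart = (Cart) session.getAttribute("cart");
cart.addItem("book");

// Under the SET policy, the get above would NOT mark the session dirty;
// you would have to write the attribute back explicitly:
session.setAttribute("cart", cart);
                    </programlisting>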
                  <para>The <literal>replication-granularity</literal> element controls the size of the replication units.
                      The supported values are: </para>
                  <itemizedlist>
                      <listitem>
                          <para><emphasis role="bold">SESSION</emphasis>: Replication is per session instance. As long as
                              it is considered modified when the snapshot manager is called, the whole session object will
                              be serialized.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">ATTRIBUTE</emphasis>: Replication is only for the dirty attributes
                              in the session plus some session data, like, lastAccessTime. For session that carries large
                              amount of data, this option can increase replication performance.</para>
                      </listitem>
                      <listitem>
                          <para><emphasis role="bold">FIELD</emphasis>: Replication is only for data fields inside session
                              attribute objects (see more later).</para>
                      </listitem>
                  </itemizedlist>
                    <para>The <literal>replication-field-batch-mode</literal> element indicates whether you want to have
                        batch updates between each HTTP request or not. The default is <literal>true</literal>.</para>
                    <para>If your sessions are generally small, SESSION is the better policy. If your sessions are larger
                        and some parts are infrequently accessed, ATTRIBUTE replication will be more effective. If your
                        application has very big data objects in session attributes and only fields in those objects are
                        frequently modified, the FIELD policy would be the best. The next section discusses exactly how
                        FIELD level replication works.</para>
              </section>
  
              <section id="clustering-http-field">
                    <title>Using FIELD level replication</title>
                    <para>FIELD-level replication only replicates modified data fields inside objects stored in the session.
                        It can drastically reduce the data traffic between clustered nodes, and hence improve the
                        performance of the whole cluster. To use FIELD-level replication, you have to first prepare your
                        Java classes to indicate which fields are to be replicated. This is done via JDK 1.4 style
                        annotations embedded in JavaDocs:</para>
  
                    <para>To annotate your POJO, we provide two annotations:
                            <literal>@@org.jboss.web.tomcat.tc5.session.AopMarker</literal> and
                            <literal>@@org.jboss.web.tomcat.tc5.session.InstanceOfAopMarker</literal>. When you annotate
                        your class with <literal>AopMarker</literal>, you indicate that instances of this class will be used
                        in FIELD-level replication. For example,</para>
  
                  <programlisting>
  /*
   * My usual comments here first.
   * @@org.jboss.web.tomcat.tc5.session.AopMarker
   */
  public class Address 
  {
  ...
  }
  </programlisting>
  
                    <para>If you annotate it with <literal>InstanceOfAopMarker</literal> instead, then all of its subclasses
                        will be automatically annotated as well. For example,</para>
  
                  <programlisting>
  /*
   *
   * @@org.jboss.web.tomcat.tc5.session.InstanceOfAopMarker
   */
  public class Person 
  {
  ...
  }
  </programlisting>
  
                  <para>then when you have a sub-class like</para>
  
                  <programlisting>
  public class Student extends Person
  {
  ...
  }
  </programlisting>
  
                    <para>there will be no need to annotate <literal>Student</literal>. It will be annotated automatically
                        because it is a subclass of <literal>Person</literal>.</para>
                    <para>However, since we currently only support JDK 1.4 style annotations (provided by JBoss AOP), you
                        will need to perform a pre-processing step. You need to use the JBoss AOP pre-compiler
                        <literal>annotationc</literal> and post-compiler <literal>aopc</literal> to process the above source
                        code before and after it is compiled by the Java compiler. Here is an example of how to invoke
                        those commands from the command line.</para>
  
                  <programlisting>
  $ annotationc [classpath] [source files or directories]
  $ javac -cp [classpath] [source files or directories]
  $ aopc [classpath] [class files or directories]            
              </programlisting>
  
                    <para>Please see the JBoss AOP documentation for the usage of the pre- and post-compiler. The JBoss AOP
                        project also provides easy-to-use Ant tasks to help integrate those steps into your application
                        build process. In the next AS release, JDK 5.0 annotation support will be provided for greater
                        transparency. But for now, it is important that you perform the pre- and post-compilation steps for
                        your source code.</para>
  
                  <note>
                        <para>Alternatively, you can see a complete example of how to build, deploy, and validate a
                            FIELD-level replicated web application on this page:
                              <literal>http://wiki.jboss.org/wiki/Wiki.jsp?page=Http_session_field_level_example</literal>.
                          The example bundles the pre- and post-compile tools so you do not need to download JBoss AOP
                          separately.</para>
                  </note>
  
                  <para>When you deploy the web application into JBoss AS, make sure that the following configurations are
                      correct:</para>
  
                  <itemizedlist>
                      <listitem>
                          <para>In the server's <literal>deploy/tc5-cluster.sar/META-INF/jboss-service.xml</literal> file,
                              the <literal>inactiveOnStartup</literal> and <literal>useMarshalling</literal> attributes
                              must both be <literal>true</literal>.</para>
                      </listitem>
  
                      <listitem>
                            <para>In the application's <literal>jboss-web.xml</literal> file, the
                                    <literal>replication-granularity</literal> element must be set to
                            <literal>FIELD</literal> (see the snippet after this list).</para>
                      </listitem>
                  </itemizedlist>
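                    <para>For the second item, the <literal>jboss-web.xml</literal> fragment looks like the earlier example,
                        with the granularity switched to <literal>FIELD</literal>:</para>
                    <programlisting>&lt;jboss-web&gt;
    &lt;replication-config&gt;
        &lt;replication-trigger&gt;SET_AND_NON_PRIMITIVE_GET&lt;/replication-trigger&gt;
        &lt;replication-granularity&gt;FIELD&lt;/replication-granularity&gt;
        &lt;replication-field-batch-mode&gt;true&lt;/replication-field-batch-mode&gt;
    &lt;/replication-config&gt;
&lt;/jboss-web&gt;</programlisting>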
  
                    <para>Finally, let's see an example of how to use FIELD-level replication on those data classes. Notice
                        that there is no need to call <literal>session.setAttribute()</literal> after you make changes to
                        the data object; all changes to the fields are automatically replicated across the cluster.</para>
  
                  <programlisting>
  // Do this only once. So this can be in init(), e.g.
  if(firstTime)
  {
    Person joe = new Person("Joe", 40);
    Person mary = new Person("Mary", 30);
    Address addr = new Address();
    addr.setZip(94086);
  
    joe.setAddress(addr);
    mary.setAddress(addr); // joe and mary share the same address!
  
    session.setAttribute("joe", joe); // that's it.
    session.setAttribute("mary", mary); // that's it.
  }
  
  Person mary = (Person)session.getAttribute("mary");
  mary.getAddress().setZip(95123); // this will update and replicate the zip code.            
              </programlisting>
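                    <para>For reference, a minimal sketch of what the <literal>Person</literal> and
                        <literal>Address</literal> classes used above might look like follows; the fields, constructors, and
                        accessors shown here are illustrative assumptions, not part of any shipped example.</para>
                    <programlisting>
// (each class would live in its own source file)

/*
 * @@org.jboss.web.tomcat.tc5.session.InstanceOfAopMarker
 */
public class Person
{
  private String name;
  private int age;
  private Address address;

  public Person(String name, int age) { this.name = name; this.age = age; }
  public void setAddress(Address address) { this.address = address; }
  public Address getAddress() { return address; }
}

/*
 * @@org.jboss.web.tomcat.tc5.session.AopMarker
 */
public class Address
{
  private int zip;

  public void setZip(int zip) { this.zip = zip; }
  public int getZip() { return zip; }
}
                    </programlisting>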
  
                    <para>Besides plain objects, you can also use regular Java collections of those objects as session
                        attributes. JBoss Cache automatically figures out how to handle those collections and replicate
                        field changes in their member objects.</para>
  
              </section>
  
              <section id="clustering-http-monitor">
                  <title>Monitoring session replication</title>
                  <para> If you have deployed and accessed your application, go to the
                          <literal>jboss.cache:service=TomcatClusteringCache</literal> MBean and invoke the
                          <literal>printDetails</literal> operation. You should see output resembling the following.</para>
                  <programlisting>/JSESSION
  
  /quote
  
  /FB04767C454BAB3B2E462A27CB571330
  VERSION: 6
FB04767C454BAB3B2E462A27CB571330: org.jboss.invocation.MarshalledValue@1f13a81c
  
  /AxCI8Ovt5VQTfNyYy9Bomw**
  VERSION: 4
AxCI8Ovt5VQTfNyYy9Bomw**: org.jboss.invocation.MarshalledValue@e076e4c8</programlisting>
                  <para>This output shows two separate web sessions, in one application named <emphasis>quote</emphasis>,
                      that are being shared via JBossCache. This example uses a <literal>replication-granularity</literal>
                      of <literal>session</literal>. Had <literal>attribute</literal> level replication been used, there
                      would be additional entries showing each replicated session attribute. In either case, the
                        replicated values are stored in an opaque <literal>MarshalledValue</literal> container. There aren't
                      currently any tools that allow you to inspect the contents of the replicated session values. If you
                      don't see any output, either the application was not correctly marked as
                        <literal>distributable</literal> or you haven't accessed a part of the application that places values in
                      the HTTP session. The <literal>org.jboss.cache</literal> and <literal>org.jboss.web</literal>
                      logging categories provide additional insight into session replication useful for debugging
                      purposes. </para>
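                    <para>If you prefer the command line to the JMX console, you can also invoke the operation with the
                        <literal>twiddle</literal> utility found in the server's <literal>bin</literal> directory. The
                        following is only a sketch, assuming a server running on the local machine with the default JNDI
                        port:</para>
                    <programlisting>
$ ./twiddle.sh invoke "jboss.cache:service=TomcatClusteringCache" printDetails
                    </programlisting>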
              </section>
              <section id="clustering-http-sso">
                  <title>Using Single Sign On</title>
                  <para> JBoss supports clustered single sign-on, allowing a user to authenticate to one application on a
                      JBoss server and to be recognized on all applications, on that same machine or on another node in
                      the cluster, that are deployed on the same virtual host. Authentication replication is handled by
                      the HTTP session replication service. Although session replication does not need to be explicitly
                      enabled for the applications in question, the <literal>tc5-cluster-service.xml</literal> file does
                      need to be deployed. </para>
                  <para> To enable single sign-on, you must add the <literal>ClusteredSingleSignOn</literal> valve to the
                      appropriate <literal>Host</literal> elements of the tomcat <literal>server.xml</literal> file. The
                      valve configuration is shown here: </para>
                  <programlisting>&lt;Valve className=&quot;org.jboss.web.tomcat.tc5.sso.ClusteredSingleSignOn&quot; /&gt;</programlisting>
              </section>
          </section>
          <section id="clustering-jms">
              <title>Clustered JMS Services</title>
              <para>JBoss AS 3.2.4 and above support high availability JMS (HA-JMS) services in the <literal>all</literal>
                  server configuration. In the current production release of JBoss AS, the HA-JMS service is implemented
                  as a clustered singleton fail-over service. <note>
                        <para>If you are willing to configure HA-JMS yourself, you can get it to work with earlier versions
                            of JBoss AS. We have a customer who uses HA-JMS successfully in JBoss AS 3.0.7. Please contact
                            JBoss support if you have further questions.</para>
                  </note>
                  <!-- TBD: Since the JBoss HA-JMS architecture has evolved significantly since JBoss AS 4.5.0, we will discuss two different HA-JMS architectures in separate sections below.--></para>
              <section id="clustering-jms-singleton">
                  <title>High Availability Singleton Fail-over</title>
                  <para>The JBoss HA-JMS service (i.e., message queues and topics) only runs on a single node (i.e., the
                      master node) in the cluster at any given time. If that node fails, the cluster simply elects another
                      node to run the JMS service (fail-over). This setup provides redundancy against server failures but
                      does not reduce the work load on the JMS server node.</para>
  
                  <note>
                      <para>While you cannot load balance HA-JMS queues (there is only one master node that runs the
                          queues), you can load balance the MDBs that process messages from those queues (see <xref
                              linkend="clustering-jms-loadbalanced"/>).</para>
                  </note>
  
                  <!-- 
                  Adrian mentioned that this example needs some work
                  
              <note>
                  <para>A JBoss user contributed a custom HA-JMS provider to load balance Message Driven Bean (MDB)
                      applications across nodes. You can download the code from the JBoss wiki at <ulink
                          url="http://wiki.jboss.org/wiki/Wiki.jsp?page=LoadBalancedFaultTolerantMDBs"/> and following the
                      instructions in the <literal>readme.txt</literal> file in the zip file.</para>
              </note>
              -->
  
                  <section id="clustering-jms-singleton-server">
                      <title>Server Side Configuration</title>
                      <para>To use the singleton fail-over HA-JMS service, you must configure JMS services identically on
                          all nodes in the cluster. That includes all JMS related service MBeans and all deployed JMS
                          applications.</para>
                      <para>The JMS server is configured to persist its data in the <literal>DefaultDS</literal>. By
                          default, that is the embedded HSQLDB. In most cluster environments, however, all nodes need to
                            persist data against a shared database. So, the first thing to do before you start clustered JMS
                            is to set up a shared database for JMS. You need to do the following:</para>
                      <itemizedlist>
                          <listitem>
                                <para>Configure <literal>DefaultDS</literal> to point to the database server of your choice.
                                    That is, replace the <literal>deploy/hsqldb-ds.xml</literal> file with the
                                        <literal>xxx-ds.xml</literal> file from the <literal>docs/examples/jca</literal>
                                    directory, where <literal>xxx</literal> is the name of the target shared database (e.g.,
                                        <literal>mysql-ds.xml</literal>). See the sketch after this list.</para>
                          </listitem>
                          <listitem>
                              <para>Replace the <literal>hsqldb-jdbc2-service.xml</literal> file under the
                                      <literal>server/all/deploy-hasingleton/jms</literal> directory with one tuned to the
                                  specific database. For example if you use MySQL the file is
                                      <literal>mysql-jdbc2-service.xml</literal>. Configuration files for a number of
                                  RDBMS are bundled with the JBoss AS distribution. They can be found under
                                      <literal>docs/examples/jms</literal>.</para>
                          </listitem>
                      </itemizedlist>
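                        <para>As an illustrative sketch only, a MySQL-based <literal>DefaultDS</literal> definition in the
                            renamed <literal>mysql-ds.xml</literal> file would look roughly like the following; the host
                            name, database name, and credentials are placeholders you must replace:</para>
                        <programlisting>
&lt;datasources>
    &lt;local-tx-datasource>
        &lt;!-- The JMS services persist through the DefaultDS JNDI name -->
        &lt;jndi-name>DefaultDS&lt;/jndi-name>
        &lt;connection-url>jdbc:mysql://dbhost:3306/jbossdb&lt;/connection-url>
        &lt;driver-class>com.mysql.jdbc.Driver&lt;/driver-class>
        &lt;user-name>jboss&lt;/user-name>
        &lt;password>secret&lt;/password>
    &lt;/local-tx-datasource>
&lt;/datasources>
                        </programlisting>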
                      <note>
                          <para>There is no need to replace the <literal>hsqldb-jdbc-state-service.xml</literal> file
                              under the <literal>server/all/deploy-hasingleton/jms</literal> directory. Despite the
                                  <literal>hsql</literal> in its name, it works with all SQL92 compliant databases,
                              including HSQL, MySQL, SQL Server, and more. It automatically uses the
                              <literal>DefaultDS</literal> for storage, as we configured above.</para>
                      </note>
                  </section>
  
                  <section id="clustering-jms-singleton-client">
                      <title>HA-JMS Client</title>
                      <para>The HA-JMS client is different from regular JMS clients in two important aspects.</para>
                      <itemizedlist>
                          <listitem>
                                <para>The HA-JMS client must obtain JMS connection factories from HA-JNDI (the default
                                    port is 1100).</para>
                          </listitem>
                          <listitem>
                                <para>The client connection must listen for server exceptions. When the cluster fails over
                                    to a different master node, all client operations on the current connection fail with
                                    exceptions. The client must know to re-connect (see the sketch after this list).</para>
                          </listitem>
                      </itemizedlist>
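                        <para>A minimal client sketch follows. It assumes HA-JNDI is reachable on
                            <literal>localhost:1100</literal> and that the clustered connection factory is bound under the
                            usual <literal>ConnectionFactory</literal> name; the reconnect logic is only indicated by a
                            comment.</para>
                        <programlisting>
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.naming.InitialContext;

public class HAJmsClientSketch
{
    public static void main(String[] args) throws Exception
    {
        // Look up the connection factory through HA-JNDI (default port 1100),
        // not the regular JNDI port.
        Properties env = new Properties();
        env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        env.put("java.naming.provider.url", "localhost:1100");
        InitialContext ctx = new InitialContext(env);

        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = factory.createConnection();

        // Listen for server exceptions: on fail-over the current connection
        // breaks, and the client must re-connect through HA-JNDI again.
        connection.setExceptionListener(new ExceptionListener()
        {
            public void onException(JMSException e)
            {
                // re-lookup the factory and re-create the connection here
            }
        });
        connection.start();
    }
}
                        </programlisting>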
                      <note>
                            <para>While the HA-JMS connection factory knows the current master node that runs the JMS
                                services, there is no smart client-side interceptor. The client stub only knows the fixed
                                master node and cannot adjust to server topology changes.</para>
                      </note>
  
                  </section>
  
                  <section id="clustering-jms-loadbalanced">
                      <title>Load Balanced HA-JMS MDBs</title>
  
                        <para>While the HA-JMS queues and topics only run on a single node at a time, MDBs on multiple nodes
                            can receive and process messages from the HA-JMS master node. The contested queues and topics
                            result in load balancing behavior for MDBs. To enable load balancing for MDBs, you can
                            specify a receiver for the queue. The receiver records which node is waiting for a message and
                            in which order the messages should be processed. JBoss provides three receiver implementations.</para>
  
                      <itemizedlist>
                          <listitem>
                              <para>The <literal>org.jboss.mq.server.ReceiversImpl</literal> is the default implementation
                                  using a <literal>HashSet</literal>.</para>
                          </listitem>
                          <listitem>
                                <para>The <literal>org.jboss.mq.server.ReceiversImplArrayList</literal> is the implementation
                                    using an <literal>ArrayList</literal>.</para>
                          </listitem>
                          <listitem>
                              <para>The <literal>org.jboss.mq.server.ReceiversImplLinkedList</literal> is the
                                  implementation using a <literal>LinkedList</literal>.</para>
                          </listitem>
                      </itemizedlist>
  
                        <para>You can specify the receiver implementation class name as an attribute in the MBean that
                            defines the permanent JMS <literal>Queue</literal> or <literal>DestinationManager</literal> on
                            each node. For best load balancing performance, we suggest you
                            use the <literal>ReceiversImplArrayList</literal> or
                            <literal>ReceiversImplLinkedList</literal> implementations due to an undesirable implementation
                            detail of <literal>HashSet</literal> in the JVM.</para>
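                        <para>A sketch of what such a queue definition might look like follows; the
                            <literal>ReceiversImpl</literal> attribute name and the queue name are assumptions made for
                            illustration, so verify them against the destination MBean descriptors shipped with your
                            release:</para>
                        <programlisting>
&lt;mbean code="org.jboss.mq.server.jmx.Queue"
    name="jboss.mq.destination:service=Queue,name=MyQueue">
    &lt;depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager&lt;/depends>
    &lt;!-- Assumed attribute name; selects the receiver implementation class -->
    &lt;attribute name="ReceiversImpl">org.jboss.mq.server.ReceiversImplArrayList&lt;/attribute>
&lt;/mbean>
                        </programlisting>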
  
                  </section>
              </section>
  
          </section>
      </chapter>
      <chapter id="jbosscache.chapt">
          <title>JBossCache and JGroups Services</title>
        <para>JGroups and JBossCache provide the underlying communication, node replication, and caching services for
            JBoss AS clusters. Those services are configured as MBeans. There is a set of JBossCache and JGroups MBeans
            for each type of clustered application (e.g., Stateful Session EJBs, distributed entity EJBs,
            etc.). <!-- May not be true from version XXX -->
          </para>
          <para>The JBoss AS ships with a reasonable set of default JGroups and JBossCache MBean configurations. Most
              applications just work out of the box with the default MBean configurations. You only need to tweak them
              when you are deploying an application that has special network or performance requirements.</para>
          <section id="jbosscache-jgroups">
              <title>JGroups Configuration</title>
              <para>The JGroups framework provides services to enable peer-to-peer communications between nodes in a
                cluster. It is built on top of a stack of network communication protocols that provide transport,
                  discovery, reliability and failure detection, and cluster membership management services. <xref
                      linkend="jbosscache-JGroupsStack.fig"/> shows the protocol stack in JGroups.</para>
              <figure id="jbosscache-JGroupsStack.fig">
                  <title>Protocol stack in JGroups</title>
                  <mediaobject>
                      <imageobject>
                          <imagedata align="center" fileref="images/jbosscache-JGroupsStack.png"/>
                      </imageobject>
                  </mediaobject>
              </figure>
              <para>JGroups configurations often appear as a nested attribute in cluster related MBean services, such as
                  the <literal>PartitionConfig</literal> attribute in the <literal>ClusterPartition</literal> MBean or the
                      <literal>ClusterConfig</literal> attribute in the <literal>TreeCache</literal> MBean. You can
                  configure the behavior and properties of each protocol in JGroups via those MBean attributes. Below is
                  an example JGroups configuration in the <literal>ClusterPartition</literal> MBean.</para>
              <programlisting>
  &lt;mbean code="org.jboss.ha.framework.server.ClusterPartition"
      name="jboss:service=DefaultPartition">
  
      ... ...
      
      &lt;attribute name="PartitionConfig">
          &lt;Config>
              &lt;UDP mcast_addr="228.1.2.3" mcast_port="45566"
                 ip_ttl="8" ip_mcast="true"
                 mcast_send_buf_size="800000" mcast_recv_buf_size="150000"
                 ucast_send_buf_size="800000" ucast_recv_buf_size="150000"
                 loopback="false"/>
              &lt;PING timeout="2000" num_initial_members="3"
                 up_thread="true" down_thread="true"/>
              &lt;MERGE2 min_interval="10000" max_interval="20000"/>
              &lt;FD shun="true" up_thread="true" down_thread="true"
                 timeout="2500" max_tries="5"/>
              &lt;VERIFY_SUSPECT timeout="3000" num_msgs="3"
                 up_thread="true" down_thread="true"/>
              &lt;pbcast.NAKACK gc_lag="50"
                 retransmit_timeout="300,600,1200,2400,4800"
                 max_xmit_size="8192"
                 up_thread="true" down_thread="true"/>
              &lt;UNICAST timeout="300,600,1200,2400,4800" 
                 window_size="100" min_threshold="10"
                 down_thread="true"/>
              &lt;pbcast.STABLE desired_avg_gossip="20000"
                 up_thread="true" down_thread="true"/>
              &lt;FRAG frag_size="8192"
                 down_thread="true" up_thread="true"/>
              &lt;pbcast.GMS join_timeout="5000" join_retry_timeout="2000"
                 shun="true" print_local_addr="true"/>
              &lt;pbcast.STATE_TRANSFER up_thread="true" down_thread="true"/>
          &lt;/Config>
      &lt;/attribute>
  &lt;/mbean>
          </programlisting>
              <para>All the JGroups configuration data is contained in the <literal>&lt;Config&gt;</literal>
                  element under the JGroups config MBean attribute. In the next several sections, we will dig into the
                  options in the <literal>&lt;Config&gt;</literal> element and explain exactly what they mean.</para>
              <section id="jbosscache-jgroups-transport">
                  <title>Transport Protocols</title>
                    <para>The transport protocols send messages from one cluster node to another (unicast) or from one
                        cluster node to all other nodes in the cluster (mcast). JGroups supports UDP, TCP, and TUNNEL as transport
                      protocols.</para>
                  <note>
                      <para>The <literal>UDP</literal>, <literal>TCP</literal>, and <literal>TUNNEL</literal> elements are
                          mutually exclusive. You can only have one transport protocol in each JGroups
                            <literal>Config</literal> element.</para>
                  </note>
                  <section id="jbosscache-jgroups-transport-udp">
                      <title>UDP configuration</title>
                      <para>UDP is the preferred protocol for JGroups. UDP uses multicast or multiple unicasts to send and
                          receive messages. If you choose UDP as the transport protocol for your cluster service, you need
                          to configure it in the <literal>UDP</literal> sub-element in the JGroups
                          <literal>Config</literal> element. Here is an example.</para>
                      <programlisting>
  &lt;UDP mcast_send_buf_size="32000"
      mcast_port="45566"
      ucast_recv_buf_size="64000"
      mcast_addr="228.8.8.8"
      bind_to_all_interfaces="true"
      loopback="true"
      mcast_recv_buf_size="64000"
      max_bundle_size="30000"
      max_bundle_timeout="30"
      use_incoming_packet_handler="false"
      use_outgoing_packet_handler="false"
      ucast_send_buf_size="32000"
      ip_ttl="32"
      enable_bundling="false"/>
                  </programlisting>
                      <para>The available attributes in the above JGroups configuration are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">ip_mcast</emphasis> specifies whether or not to use IP
                                  multicasting. The default is <literal>true</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">mcast_addr</emphasis> specifies the multicast address (class D)
                                  for joining a group (i.e., the cluster). The default is
                              <literal>228.8.8.8</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">mcast_port</emphasis> specifies the multicast port number. The
                                  default is <literal>45566</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">bind_addr</emphasis> specifies the interface on which to receive
                                  and send multicasts (uses the <literal>bind.address</literal> system property, if
                                  present). If you have a multihomed machine, set the <literal>bind_addr</literal>
                                  attribute to the appropriate NIC IP address. Ignored if the
                                  <literal>ignore.bind.address</literal> property is true.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">bind_to_all_interfaces</emphasis> specifies whether this node
                                  should listen on all interfaces for multicasts. The default is <literal>false</literal>.
                                  It overrides the <literal>bind_addr</literal> property for receiving multicasts.
                                  However, <literal>bind_addr</literal> (if set) is still used to send multicasts.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">ip_ttl</emphasis> specifies the TTL for multicast
                              packets.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">use_incoming_packet_handler</emphasis> specifies whether to use
                                  a separate thread to process incoming messages.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">use_outgoing_packet_handler</emphasis> specifies whether to use
                                  a separate thread to process outgoing messages.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">enable_bundling</emphasis> specifies whether to enable bundling.
                                  If it is <literal>true</literal>, the node would queue outgoing messages until
                                      <literal>max_bundle_size</literal> bytes have accumulated, or
                                      <literal>max_bundle_time</literal> milliseconds have elapsed, whichever occurs
                                  first. Then bundle queued messages into a large message and send it. The messages are
                                  unbundled at the receiver. The default is <literal>false</literal>.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">loopback</emphasis> specifies whether to loop outgoing message
                                  back up the stack. In <literal>unicast</literal> mode, the messages are sent to self. In
                                      <literal>mcast</literal> mode, a copy of the mcast message is sent.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">discard_incompatibe_packets</emphasis> specifies whether to
                                  discard packets from different JGroups versions. Each message in the cluster is tagged
                                  with a JGroups version. When a message from a different version of JGroups is received,
                                  it will be discarded if set to true, otherwise a warning will be logged.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">mcast_send_buf_size, mcast_recv_buf_size, ucast_send_buf_size,
                                      ucast_recv_buf_size</emphasis> define receive and send buffer sizes. It is good to
                                  have a large receiver buffer size, so packets are less likely to get dropped due to
                                  buffer overflow.</para>
                          </listitem>
                      </itemizedlist>
                      <note>
                          <para>On Windows 2000 machines, because of the media sense feature being broken with multicast
                              (even after disabling media sense), you need to set the UDP protocol's
                              <literal>loopback</literal> attribute to <literal>true</literal>.</para>
                      </note>
                  </section>
                  <section id="jbosscache-jgroups-transport-tcp">
                      <title>TCP configuration</title>
                      <para>Alternatively, a JGroups-based cluster can also work over TCP connections. Compared with UDP,
                          TCP generates more network traffic when the cluster size increases but TCP is more reliable. TCP
                          is fundamentally a unicast protocol. To send multicast messages, JGroups uses multiple TCP
                          unicasts. To use TCP as a transport protocol, you should define a <literal>TCP</literal> element
                          in the JGroups <literal>Config</literal> element. Here is an example of the
                          <literal>TCP</literal> element.</para>
                      <programlisting>
  &lt;TCP start_port="7800"
      bind_addr="192.168.5.1"
      loopback="true"/>
                  </programlisting>
                      <para>Below are the attributes available in the <literal>TCP</literal> element.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">bind_addr</emphasis> specifies the binding address. It can also
                                  be set with the <literal>-Dbind.address</literal> command line option at server
                              startup.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">start_port, end_port</emphasis> define the range of TCP ports
                                  the server should bind to. The server socket is bound to the first available port from
                                      <literal>start_port</literal>. If no available port is found (e.g., because of a
                                  firewall) before the <literal>end_port</literal>, the server throws an exception.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">loopback</emphasis> specifies whether to loop outgoing message
                                  back up the stack. In <literal>unicast</literal> mode, the messages are sent to self. In
                                      <literal>mcast</literal> mode, a copy of the mcast message is sent.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">mcast_send_buf_size, mcast_recv_buf_size, ucast_send_buf_size,
                                      ucast_recv_buf_size</emphasis> define receive and send buffer sizes. It is good to
                                  have a large receiver buffer size, so packets are less likely to get dropped due to
                                  buffer overflow.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">conn_expire_time</emphasis> specifies the time (in milliseconds)
                                  after which a connection can be closed by the reaper if no traffic has been
                              received.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">reaper_interval</emphasis> specifies interval (in milliseconds)
                                  to run the reaper. If both values are 0, no reaping will be done. If either value is
                                  &gt; 0, reaping will be enabled.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
                  <section id="jbosscache-jgroups-transport-tunnel">
                      <title>TUNNEL configuration</title>
                      <para>The TUNNEL protocol uses an external router to send messages. The external router is known as
                          a <literal>GossipRouter</literal>. Each node has to register with the router. All messages are
                            sent to the router and forwarded on to their destinations. The TUNNEL approach can be used to
                            set up communication with nodes behind firewalls. A node can establish a TCP connection to the
                          GossipRouter through the firewall (you can use port 80). The same connection is used by the
                          router to send messages to nodes behind the firewall. The TUNNEL configuration is defined in the
                              <literal>TUNNEL</literal> element in the JGroups <literal>Config</literal> element. Here is
                          an example.</para>
                      <programlisting>
  &lt;TUNNEL router_port="12001"
      router_host="192.168.5.1"/>
                  </programlisting>
                      <para>The available attributes in the <literal>TUNNEL</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">router_host</emphasis> specifies the host on which the
                                  GossipRouter is running.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">router_port</emphasis> specifies the port on which the
                                  GossipRouter is listening.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">loopback</emphasis> specifies whether to loop messages back up
                                  the stack. The default is <literal>true</literal>.</para>
                          </listitem>
                      </itemizedlist>
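                        <para>The GossipRouter itself runs as a separate Java process. A minimal sketch of starting one on
                            the port used in the example above follows; it assumes the JGroups jar is on the classpath and
                            that the <literal>GossipRouter</literal> main class and its <literal>-port</literal> option
                            match the JGroups version you ship:</para>
                        <programlisting>
$ java -cp jgroups.jar org.jgroups.stacks.GossipRouter -port 12001
                    </programlisting>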
                  </section>
              </section>
              <section id="jbosscache-jgroups-discovery">
                  <title>Discovery Protocols</title>
                    <para>The cluster needs to maintain a list of current member nodes at all times so that the load balancer
                      and client interceptor know how to route their requests. The discovery protocols are used to
                      discover active nodes in the cluster. All initial nodes are discovered when the cluster starts up.
                      When a new node joins the cluster later, it is only discovered after the group membership protocol
                      (GMS, see <xref linkend="jbosscache-jgroups-other-gms"/>) admits it into the group.</para>
                    <para>Since the discovery protocols sit on top of the transport protocol, you can choose to use
                        different discovery protocols based on your transport protocol. The discovery protocols are also
                      configured as sub-elements in the JGroups MBean <literal>Config</literal> element.</para>
                  <section id="jbosscache-jgroups-discovery-ping">
                      <title>PING</title>
                        <para>The PING discovery protocol normally sits on top of the UDP transport protocol. It multicasts
                            a discovery request, and each node responds with a unicast UDP datagram back to the sender.
                            Here is an example PING configuration under the JGroups <literal>Config</literal> element.</para>
                      <programlisting>
  &lt;PING timeout="2000"
      num_initial_members="2"/>
                  </programlisting>
                      <para>The available attributes in the <literal>PING</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the maximum number of milliseconds
                                  to wait for any responses.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">num_initial_members</emphasis> specifies the maximum number of
                                  responses to wait for.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">gossip_host</emphasis> specifies the host on which the
                                  GossipRouter is running.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">gossip_port</emphasis> specifies the port on which the
                                    GossipRouter is listening.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">gossip_refresh</emphasis> specifies the interval (in
                                  milliseconds) for the lease from the GossipRouter.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">initial_hosts</emphasis> is a comma-seperated list of addresses
                                  (e.g., <literal>host1[12345],host2[23456]</literal>), which are pinged for
                              discovery.</para>
                          </listitem>
                      </itemizedlist>
                      <para>If both <literal>gossip_host</literal> and <literal>gossip_port</literal> are defined, the
                          cluster uses the GossipRouter for the initial discovery. If the <literal>initial_hosts</literal>
                          is specified, the cluster pings that static list of addresses for discovery. Otherwise, the
                          cluster uses IP multicasting for discovery.</para>
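                        <para>As a sketch (the host and port values here are placeholders), a PING configuration that
                            discovers members through a GossipRouter instead of IP multicast would set the gossip
                            attributes described above:</para>
                        <programlisting>
&lt;PING gossip_host="192.168.5.1"
    gossip_port="12001"
    gossip_refresh="15000"
    timeout="2000"
    num_initial_members="3"/>
                    </programlisting>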
                      <note>
                          <para>The discovery phase returns when the <literal>timeout</literal> ms have elapsed or the
                                  <literal>num_initial_members</literal> responses have been received.</para>
                      </note>
                  </section>
                  <section id="jbosscache-jgroups-discovery-tcpgossip">
                      <title>TCPGOSSIP</title>
                      <para>The TCPGOSSIP protocol only works with a GossipRouter. It works essentially the same way as
                          the PING protocol configuration with valid <literal>gossip_host</literal> and
                              <literal>gossip_port</literal> attributes. It works on top of both UDP and TCP transport
                          protocols. Here is an example.</para>
                      <programlisting>
  &lt;PING timeout="2000"
      initial_hosts="192.168.5.1[12000],192.168.0.2[12000]"
      num_initial_members="3"/>
                  </programlisting>
                      <para>The available attributes in the <literal>TCPGOSSIP</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the maximum number of milliseconds
                                  to wait for any responses.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">num_initial_members</emphasis> specifies the maximum number of
                                  responses to wait for.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">initial_hosts</emphasis> is a comma-seperated list of addresses
                                  (e.g., <literal>host1[12345],host2[23456]</literal>) for GossipRouters to register
                              with.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
                  <section id="jbosscache-jgroups-discovery-tcpping">
                      <title>TCPPING</title>
                        <para>The TCPPING protocol takes a set of known members and pings them for discovery. This is
                          essentially a static configuration. It works on top of TCP. Here is an example of the
                              <literal>TCPPING</literal> configuration element in the JGroups <literal>Config</literal>
                          element.</para>
                      <programlisting>
  &lt;TCPPING timeout="2000"
      initial_hosts="192.168.5.1[7800],192.168.0.2[7800]"
      port_range="2"
      num_initial_members="3"/>
                  </programlisting>
                      <para>The available attributes in the <literal>TCPPING</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the maximum number of milliseconds
                                  to wait for any responses.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">num_initial_members</emphasis> specifies the maximum number of
                                  responses to wait for.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">initial_hosts</emphasis> is a comma-seperated list of addresses
                                  (e.g., <literal>host1[12345],host2[23456]</literal>) for pinging.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">port_range</emphasis> specifies the range of ports to ping on
                                  each host in the <literal>initial_hosts</literal> list. That is because multiple nodes
                                  can run on the same host. In the above example, the cluster would ping ports 7800, 7801,
                                  and 7802 on both hosts.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
                  <section id="jbosscache-jgroups-discovery-mping">
                      <title>MPING</title>
                      <para>The MPING protocol is a multicast ping over TCP. It works almost the same way as PING works on
                          UDP. It does not require external processes (GossipRouter) or static configuration (initial host
                          list). Here is an example of the <literal>MPING</literal> configuration element in the JGroups
                              <literal>Config</literal> element.</para>
                      <programlisting>
  &lt;MPING timeout="2000"
      bind_to_all_interfaces="true"
      mcast_addr="228.8.8.8"
      mcast_port="7500"
      ip_ttl="8"
      num_initial_members="3"/>
                  </programlisting>
                      <para>The available attributes in the <literal>MPING</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the maximum number of milliseconds
                                  to wait for any responses.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">num_initial_members</emphasis> specifies the maximum number of
                                  responses to wait for.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">bind_addr</emphasis> specifies the interface on which to send
                                  and receive multicast packets.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">bind_to_all_interfaces</emphasis> overrides the
                                      <literal>bind_addr</literal> and uses all interfaces in multihome nodes.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">mcast_addr, mcast_port, ip_ttl</emphasis> attributes are the
                                  same as related attributes in the UDP protocol configuration.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
              </section>
              <section id="jbosscache-jgroups-fd">
                  <title>Failure Detection Protocols</title>
                  <para>The failure detection protocols are used to detect failed nodes. Once a failed node is detected,
                      the cluster updates its view so that the load balancer and client interceptors know to avoid the
                      dead node. The failure detection protocols are configured as sub-elements in the JGroups MBean
                          <literal>Config</literal> element.</para>
                  <section id="jbosscache-jgroups-fd-fd">
                      <title>FD</title>
                        <para>The FD protocol requires each node to periodically send are-you-alive messages to its
                            neighbor. If the neighbor fails to respond, the calling node sends a SUSPECT message to the
                            cluster. The current group coordinator double-checks that the suspected node is indeed dead and
                            updates the cluster's view. Here is an example FD configuration.</para>
                      <programlisting>
  &lt;FD timeout="2000"
      max_tries="3"
      shun="true"/>
                  </programlisting>
                      <para>The available attributes in the <literal>FD</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the maximum number of milliseconds
                                  to wait for the responses to the are-you-alive messages.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">max_tries</emphasis> specifies the number of missed
                                  are-you-alive messages from a node before the node is suspected.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">shun</emphasis> specifies whether a failed node will be shunned.
                                  Once shunned, the node will be expelled from the cluster even if it comes back later.
                                  The shunned node would have to re-join the cluster through the discovery process.</para>
                          </listitem>
                      </itemizedlist>
                      <note>
                            <para>Regular traffic from a node counts as evidence that it is alive. So, the are-you-alive
                                messages are only sent when there is no regular traffic to the node for some time.</para>
                      </note>
                  </section>
                  <section id="jbosscache-jgroups-fd-fdsock">
                      <title>FD_SOCK</title>
                      <para>The are-you-alive messages in the FD protocol could increase the network load when there are
                          many nodes. It could also produce false suspicions. For instance, if the network is too busy and
                          the timeout is too short, nodes could be falsely suspected. Also, if one node is suspended in a
                          debugger or profiler, it could also be suspected and shunned. The FD_SOCK protocol addresses the
                          above issues by suspecting node failures only when a regular TCP connection to the node fails.
                            However, the problem with such passive detection is that a hung node will not be detected until
                            it is accessed and the TCP connection times out, which can take several minutes. FD_SOCK works
                            best in high load networks where all nodes are frequently accessed. The simplest FD_SOCK
                            configuration does not take any attributes. You can just declare an empty
                            <literal>FD_SOCK</literal> element in JGroups's <literal>Config</literal> element.</para>
                      <programlisting>
  &lt;FD_SOCK/>
                  </programlisting>
                      <para>There is only one optional attribute in the <literal>FD_SOCK</literal> element.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">srv_sock_bind_addr</emphasis> specifies the interface to which
                                  the server socket should bind to. If it is omitted, the <literal>-D
                                  bind.address</literal> property from the server startup command line is used.</para>
                          </listitem>
                      </itemizedlist>
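                        <para>For instance, on a multi-homed machine the FD_SOCK server socket could be pinned
                            to a particular interface. The address used here is purely illustrative.</para>
                        <programlisting>
  &lt;FD_SOCK srv_sock_bind_addr="192.168.0.2"/>
                </programlisting>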
                  </section>
                  <section id="jbosscache-jgroups-fd-fdsimple">
                      <title>FD_SIMPLE</title>
                        <para>The FD_SIMPLE protocol is a more tolerant (fewer false suspicions) protocol based on
                            are-you-alive messages. Each node periodically sends are-you-alive messages to a randomly
                            chosen node and waits for a response. If a response is not received within a certain
                            timeout, a counter associated with that node is incremented. If the counter exceeds a
                          certain value, that node will be suspected. When a response to an are-you-alive message is
                          received, the counter resets to zero. Here is an example configuration for the
                              <literal>FD_SIMPLE</literal> protocol.</para>
                      <programlisting>
  &lt;FD_SIMPLE timeout="2000"
      max_missed_hbs="10"/>
                  </programlisting>
                      <para>The available attributes in the <literal>FD_SIMPLE</literal> element are listed below.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the timeout (in milliseconds) for
                                  the are-you-alive message. If a response is not received within timeout, the counter for
                                  the target node is increased.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">max_missed_hbs</emphasis> specifies maximum number of
                                  are-you-alive messages (i.e., the counter value) a node can miss before it is suspected
                                  failure.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
  
                  <!-- This algorithm is not recommended: Bela
              <section id="jbosscache-jgroups-fd-fdprob">
                  <title>FD_PROB</title>
                <para>The FD_PROB protocol uses a probabilistic failure detection algorithm. Each node in the cluster
                      maintains a list of all other nodes. For each node, 2 data points are maintained: a heartbeat
                      counter and the time of the last increment of the counter. Each member (P) periodically sends its
                      own heartbeat counter list to a randomly chosen member (Q). Q updates its own heartbeat counter list
                      and the associated time (if counter was incremented). Each member periodically increments its own
                      counter. If, when sending its heartbeat counter list, a member P detects that another member Q's
                      heartbeat counter was not incremented for timeout seconds, Q will be suspected. Here is an example
                      configuration for the <literal>FD_PROB</literal> protocol.</para>
                  <programlisting>
  &lt;FD_PROB timeout="2000"/>
                  </programlisting>
                <para>The available attributes in the <literal>FD_PROB</literal> element are listed below.</para>
                  <itemizedlist>
                      <listitem><para><emphasis role="bold">timeout</emphasis> specifies the timeout (in milliseconds) for each
                          node to increase its heartbeat counter. If a node does not increase its counter before timeout,
                          the node is suspected of failure.</para></listitem>
                  </itemizedlist>
              </section>
              -->
              </section>
              <section id="jbosscache-jgroups-reliable">
                  <title>Reliable Delivery Protocols</title>
                    <para>The reliable delivery protocols in the JGroups stack ensure that data packets are actually
                      delivered in the right order (FIFO) to the destination node. The basis for reliable message delivery
                      is positive and negative delivery acknowledgments (ACK and NAK). In the ACK mode, the sender resends
                      the message until the acknowledgment is received from the receiver. In the NAK mode, the receiver
                      requests retransmission when it discovers a gap.</para>
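                    <para>Both protocols typically appear side by side in the same stack: UNICAST covers
                        point-to-point messages and pbcast.NAKACK covers multicast messages. A minimal sketch
                        of the two entries in the <literal>Config</literal> element (the timeout values are
                        illustrative) is shown below; each protocol is discussed in detail in the following
                        sections.</para>
                    <programlisting>
  &lt;UNICAST timeout="100,200,400,800"/>
  &lt;pbcast.NAKACK retransmit_timeout="600,1200,2400,4800"/>
                </programlisting>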
                  <section id="jbosscache-jgroups-reliable-unicast">
                      <title>UNICAST</title>
                      <para>The UNICAST protocol is used for unicast messages. It uses ACK. It is configured as a
                          sub-element under the JGroups <literal>Config</literal> element. Here is an example
                          configuration for the <literal>UNICAST</literal> protocol.</para>
                      <programlisting>
  &lt;UNICAST timeout="100,200,400,800"/>
                  </programlisting>
                      <para>There is only one configurable attribute in the <literal>UNICAST</literal> element.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">timeout</emphasis> specifies the retransmission timeout (in
                                    milliseconds). For instance, if the timeout is "100,200,400,800", the sender resends the
                                    message if it has not received an ACK within 100 ms; it then waits 200 ms before the next
                                    retransmission, then 400 ms, and so on.</para>
                          </listitem>
                      </itemizedlist>
                  </section>
                  <section id="jbosscache-jgroups-reliable-nakack">
                      <title>NAKACK</title>
                      <para>The NAKACK protocol is used for multicast messages. It uses NAK. Under this protocol, each
                          message is tagged with a sequence number. The receiver keeps track of the sequence numbers and
                            delivers the messages in order. When a gap in the sequence numbers is detected, the receiver asks
                          the sender to retransmit the missing message. The NAKACK protocol is configured as the
                              <literal>pbcast.NAKACK</literal> sub-element under the JGroups <literal>Config</literal>
                          element. Here is an example configuration.</para>
                      <programlisting>
  &lt;pbcast.NAKACK
      max_xmit_size="8192"
      use_mcast_xmit="true" 
      retransmit_timeout="600,1200,2400,4800"/>
                  </programlisting>
                      <para>The configurable attributes in the <literal>pbcast.NAKACK</literal> element are as follows.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">retransmit_timeout</emphasis> specifies the retransmission
                                  timeout (in milliseconds). It is the same as the <literal>timeout</literal> attribute in
                                  the UNICAST protocol.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">use_mcast_xmit</emphasis> determines whether the sender should
                                  send the retransmission to the entire cluster rather than just the node requesting it.
                                    This is useful when the same packet is dropped for several nodes -- the sender does not
                                    need to retransmit it separately to each node.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">max_xmit_size</emphasis> specifies maximum size for a bundled
                                  retransmission, if multiple packets are reported missing.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">discard_delivered_msgs</emphasis> specifies whether to discard
                                  delivery messages on the receiver nodes. By default, we save all delivered messages.
                                  However, if we only ask the sender to resend their messages, we can enable this option
                                  and discard delivered messages.</para>
                          </listitem>
                      </itemizedlist>
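                        <para>As a sketch, a configuration that lets receivers drop messages once they are
                            delivered (relying on the original sender for retransmission) could enable
                            <literal>discard_delivered_msgs</literal> as follows; the values are
                            illustrative.</para>
                        <programlisting>
  &lt;pbcast.NAKACK
      max_xmit_size="8192"
      discard_delivered_msgs="true"
      retransmit_timeout="600,1200,2400,4800"/>
                </programlisting>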
                  </section>
              </section>
              <section id="jbosscache-jgroups-other">
                  <title>Other Configuration Options</title>
                  <para>In addition to the protocol stacks, you can also configure JGroups network services in the
                          <literal>Config</literal> element.</para>
                  <section id="jbosscache-jgroups-other-gms">
                      <title>Group Membership</title>
                      <para>The group membership service in the JGroups stack maintains a list of active nodes. It handles
                          the requests to join and leave the cluster. It also handles the SUSPECT messages sent by failure
                          detection protocols. All nodes in the cluster, as well as the load balancer and client side
                          interceptors, are notified if the group membership changes. The group membership service is
                          configured in the <literal>pbcast.GMS</literal> sub-element under the JGroups
                          <literal>Config</literal> element. Here is an example configuration.</para>
                      <programlisting>
  &lt;pbcast.GMS print_local_addr="true"
      join_timeout="3000"
      down_thread="false" 
      join_retry_timeout="2000"
      shun="true"/>
                  </programlisting>
                      <para>The configurable attributes in the <literal>pbcast.GMS</literal> element are as follows.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">join_timeout</emphasis> specifies the maximum number of
                                  milliseconds to wait for a new node JOIN request to succeed. Retry afterwards.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">join_retry_timeout</emphasis> specifies the maximum number of
                                  milliseconds to wait after a failed JOIN to re-submit it.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">print_local_addr</emphasis> specifies whether to dump the node's
                                  own address to the output when started.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">shun</emphasis> specifies whether a node should shun itself if
                                  it receives a cluster view that it is not a member node.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">disable_initial_coord</emphasis> specifies whether to prevent
                                  this node as the cluster coordinator.</para>
                          </listitem>
                      </itemizedlist>
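                        <para>As a further sketch, a node that should never act as the initial cluster
                            coordinator could add <literal>disable_initial_coord</literal> to the configuration
                            above; the values are illustrative.</para>
                        <programlisting>
  &lt;pbcast.GMS print_local_addr="true"
      join_timeout="3000"
      down_thread="false"
      join_retry_timeout="2000"
      shun="true"
      disable_initial_coord="true"/>
                </programlisting>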
                  </section>
                  <section id="jbosscache-jgroups-other-fc">
                      <title>Flow Control</title>
                        <para>The flow control service tries to adapt the data sending rate to the data receiving rate among
                            nodes. If a sender node is too fast, it might overwhelm the receiver node and cause dropped
                            packets that have to be retransmitted. In JGroups, flow control is implemented via a
                            credit-based system. The sender and receiver nodes start with the same number of credits
                            (bytes). The sender subtracts the number of bytes in each message it sends from its credits,
                            while the receiver accumulates credits for the bytes in the messages it receives. When the
                            sender's credits drop below a threshold, the receiver sends more credits to the sender. If the
                            sender's credits are used up, the sender blocks until it receives credits from the receiver. The flow control service
                          is configured in the <literal>FC</literal> sub-element under the JGroups
                          <literal>Config</literal> element. Here is an example configuration.</para>
                      <programlisting>
  &lt;FC max_credits="1000000"
      down_thread="false" 
      min_threshold="0.10"/>
                  </programlisting>
                      <para>The configurable attributes in the <literal>FC</literal> element are as follows.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">max_credits</emphasis> specifies the maximum number of credits
                                  (in bytes). This value should be smaller than the JVM heap size.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">min_credits</emphasis> specifies the threshold credit on the
                                  sender, below which the receiver should send in more credits.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">min_threshold</emphasis> specifies percentage value of the
                                  threshold. It overrides the <literal>min_credits</literal> attribute.</para>
                          </listitem>
                      </itemizedlist>
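                        <para>Alternatively, the sender threshold can be given as an absolute credit value via
                            <literal>min_credits</literal> instead of the percentage-based
                            <literal>min_threshold</literal>, as in the following sketch with illustrative
                            values.</para>
                        <programlisting>
  &lt;FC max_credits="1000000"
      down_thread="false"
      min_credits="100000"/>
                </programlisting>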
                  </section>
                  <section id="jbosscache-jgroups-other-st">
                      <title>State Transfer</title>
                      <para>The state transfer service transfers the state from an existing node (i.e., the cluster
                          coordinator) to a newly joining node. It is configured in the
                          <literal>pbcast.STATE_TRANSFER</literal> sub-element under the JGroups <literal>Config</literal>
                          element. It does not have any configurable attribute. Here is an example configuration.</para>
                      <programlisting>
  &lt;pbcast.STATE_TRANSFER 
      down_thread="false"
      up_thread="false"/>
                  </programlisting>
                  </section>
                  <section id="jbosscache-jgroups-other-gc">
                      <title>Distributed Garbage Collection</title>
                      <para>In a JGroups cluster, all nodes have to store all messages received for potential
                          retransmission in case of a failure. However, if we store all messages forever, we will run out
                          of memory. So, the distributed garbage collection service in JGroups periodically purges
                            messages that have been seen by all nodes from the memory in each node. The distributed garbage
                          collection service is configured in the <literal>pbcast.STABLE</literal> sub-element under the
                          JGroups <literal>Config</literal> element. Here is an example configuration.</para>
                      <programlisting>
  &lt;pbcast.STABLE stability_delay="1000"
      desired_avg_gossip="5000" 
      down_thread="false"
      max_bytes="250000"/>
                  </programlisting>
                      <para>The configurable attributes in the <literal>pbcast.STABLE</literal> element are as follows.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">desired_avg_gossip</emphasis> specifies intervals (in
                                  milliseconds) of garbage collection runs. Value <literal>0</literal> disables this
                                  service.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">max_bytes</emphasis> specifies the maximum number of bytes
                                  received before the cluster triggers a garbage collection run. Value
                                  <literal>0</literal> disables this service.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">max_gossip_runs</emphasis> specifies the maximum garbage
                                  collections runs before any changes. After this number is reached, there is no garbage
                                  collection until the message is received.</para>
                          </listitem>
                      </itemizedlist>
                      <note>
                          <para>Set the <literal>max_bytes</literal> attribute when you have a high traffic
                          cluster.</para>
                      </note>
                  </section>
                  <section id="jbosscache-jgroups-other-merge">
                      <title>Merging</title>
                        <para>When a network error occurs, the cluster might split into several partitions. JGroups has a
                            MERGE service that allows the coordinators of the partitions to communicate with each other
                            and re-form a single cluster. The merging service is
                          configured in the <literal>MERGE2</literal> sub-element under the JGroups
                          <literal>Config</literal> element. Here is an example configuration.</para>
                      <programlisting>
  &lt;MERGE2 max_interval="10000"
      min_interval="2000"/>
                  </programlisting>
                        <para>The configurable attributes in the <literal>MERGE2</literal> element are as follows.</para>
                      <itemizedlist>
                          <listitem>
                              <para><emphasis role="bold">max_interval</emphasis> specifies the maximum number of
                                  milliseconds to send out a MERGE message.</para>
                          </listitem>
                          <listitem>
                              <para><emphasis role="bold">min_interval</emphasis> specifies the minimum number of
                                  milliseconds to send out a MERGE message.</para>
                          </listitem>
                      </itemizedlist>
                      <para>JGroups chooses a random value between <literal>min_interval</literal> and
                              <literal>max_interval</literal> to send out the MERGE message.</para>
                      <note>
                            <para>The cluster states are not merged during a merge. This has to be done by the
                            application.</para>
                      </note>
                  </section>
              </section>
          </section>
          <section id="jbosscache-cache">
              <title>JBossCache Configuration</title>
              <para>JBoss Cache provides distributed cache and state replication services for the JBoss cluster. A JBoss
                  cluster can have multiple JBoss Cache MBeans (known as the <literal>TreeCache</literal> MBean), one for
                  HTTP session replication, one for stateful session beans, one for cached entity beans, etc. A generic
                      <literal>TreeCache</literal> MBean configuration is listed below. Application specific
                      <literal>TreeCache</literal> MBean configurations are covered in later chapters when those
                  applications are discussed.</para>
              <programlisting>
  &lt;mbean code="org.jboss.cache.TreeCache" 
          name="jboss.cache:service=TreeCache">
      
      &lt;depends>jboss:service=Naming&lt;/depends> 
      &lt;depends>jboss:service=TransactionManager&lt;/depends> 
  
    &lt;!-- Configure the TransactionManager --> 
      &lt;attribute name="TransactionManagerLookupClass">
          org.jboss.cache.DummyTransactionManagerLookup
      &lt;/attribute> 
  
    &lt;!-- 
              Node locking level : SERIALIZABLE
                                   REPEATABLE_READ (default)
                                   READ_COMMITTED
                                   READ_UNCOMMITTED
                                   NONE        
      --> 
      &lt;attribute name="IsolationLevel">REPEATABLE_READ&lt;/attribute> 
  
    &lt;!--     Valid modes are LOCAL
                               REPL_ASYNC
                               REPL_SYNC
      --> 
      &lt;attribute name="CacheMode">LOCAL&lt;/attribute>
   
    &lt;!-- Name of cluster. Needs to be the same for all nodes in the cluster, in order
               to find each other --> 
      &lt;attribute name="ClusterName">TreeCache-Cluster&lt;/attribute> 
  
    &lt;!--    The max amount of time (in milliseconds) we wait until the
              initial state (ie. the contents of the cache) are 
              retrieved from existing members in a clustered environment
      --> 
      &lt;attribute name="InitialStateRetrievalTimeout">5000&lt;/attribute> 
  
    &lt;!--    Number of milliseconds to wait until all responses for a
              synchronous call have been received.
      --> 
      &lt;attribute name="SyncReplTimeout">10000&lt;/attribute> 
  
    &lt;!--  Max number of milliseconds to wait for a lock acquisition --> 
      &lt;attribute name="LockAcquisitionTimeout">15000&lt;/attribute> 
  
    &lt;!--  Name of the eviction policy class. --> 
      &lt;attribute name="EvictionPolicyClass">
          org.jboss.cache.eviction.LRUPolicy
      &lt;/attribute> 
  
    &lt;!--  Specific eviction policy configurations. This is LRU --> 
      &lt;attribute name="EvictionPolicyConfig">
          &lt;config>
              &lt;attribute name="wakeUpIntervalSeconds">5&lt;/attribute> 
              &lt;!--  Cache wide default --> 
              &lt;region name="/_default_">
                  &lt;attribute name="maxNodes">5000&lt;/attribute> 
                  &lt;attribute name="timeToLiveSeconds">1000&lt;/attribute> 
              &lt;/region>
  
              &lt;region name="/org/jboss/data">
                  &lt;attribute name="maxNodes">5000&lt;/attribute> 
                  &lt;attribute name="timeToLiveSeconds">1000&lt;/attribute> 
              &lt;/region>
  
              &lt;region name="/org/jboss/test/data">
                  &lt;attribute name="maxNodes">5&lt;/attribute> 
                  &lt;attribute name="timeToLiveSeconds">4&lt;/attribute> 
              &lt;/region>
          &lt;/config>
      &lt;/attribute>
  
      &lt;attribute name="CacheLoaderClass">
          org.jboss.cache.loader.bdbje.BdbjeCacheLoader
      &lt;/attribute>
      
      &lt;attribute name="CacheLoaderConfig">
         location=c:\\tmp
      &lt;/attribute>
      &lt;attribute name="CacheLoaderShared">true&lt;/attribute>
      &lt;attribute name="CacheLoaderPreload">
          /a/b/c,/all/my/objects
      &lt;/attribute>
      &lt;attribute name="CacheLoaderFetchTransientState">false&lt;/attribute>
      &lt;attribute name="CacheLoaderFetchPersistentState">true&lt;/attribute>
      
      &lt;attribute name="ClusterConfig">
          ... JGroups config for the cluster ...
      &lt;/attribute>
  &lt;/mbean>
          </programlisting>
              <para>The JGroups configuration element (i.e., the <literal>ClusterConfig</literal> attribute) is omitted
                  from the above listing. You have learned how to configure JGroups earlier in this chapter (<xref
                      linkend="jbosscache-jgroups"/>). The <literal>TreeCache</literal> MBean takes the following
                  attributes.</para>
              <itemizedlist>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderClass</emphasis> specifies the fully qualified class name of
                          the <literal>CacheLoader</literal> implementation.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderConfig</emphasis> contains a set of properties from which the
                          specific CacheLoader implementation can configure itself.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderFetchPersistentState</emphasis> specifies whether to fetch
                          the persistent state from another node. The persistent state is fetched only if
                              <literal>CacheLoaderShared</literal> is <literal>false</literal>. This attribute is only
                          used if <literal>FetchStateOnStartup</literal> is <literal>true</literal>.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderFetchTransientState</emphasis> specifies whether to fetch the
                          in-memory state from another node. This attribute is only used if
                          <literal>FetchStateOnStartup</literal> is <literal>true</literal>.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderPreload</emphasis> contains a list of comma-separate nodes
                          that need to be preloaded (e.g., <literal>/aop</literal>,
                      <literal>/productcatalogue</literal>).</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheLoaderShared</emphasis> specifies whether we want to shared a
                          datastore, or whether each node wants to have its own local datastore.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">CacheMode</emphasis> specifies how to synchronize cache between nodes.
                          The possible values are <literal>LOCAL</literal>, <literal>REPL_SYNC</literal>, or
                              <literal>REPL_ASYNC</literal>. <!-- May need a sublist here to explain the modes --></para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">ClusterName</emphasis> specifies the name of the cluster. This value
                          needs to be the same for all nodes in a cluster in order for them to find each other.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">ClusterConfig</emphasis> contains the configuration of the underlying
                          JGroups stack (see <xref linkend="jbosscache-jgroups"/>).</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">EvictionPolicyClass</emphasis> specifies the name of a class
                          implementing <literal>EvictionPolicy</literal>. You can use a JBoss Cache provided
                              <literal>EvictionPolicy</literal> class or provide your own policy implementation. If this
                          attribute is empty, no eviction policy is enabled.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">EvictionPolicyConfig</emphasis> contains the configuration parameter for
                          the specified eviction policy. Note that the content is provider specific.
                          <!-- Add an example?? --></para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">FetchStateOnStartup</emphasis> specifies whether or not to acquire the
                          initial state from existing members. It allows for warm/hot caches
                          (<literal>true/false</literal>). This can be further defined by
                              <literal>CacheLoaderFetchTransientState</literal> and
                              <literal>CacheLoaderFetchPersistentState</literal>.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">InitialStateRetrievalTimeout</emphasis> specifies the time in
                          milliseconds to wait for initial state retrieval.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">IsolationLevel</emphasis> specifies the node locking level. Possible
                          values are <literal>SERIALIZABLE</literal>, <literal>REPEATABLE_READ</literal> (default),
                              <literal>READ_COMMITTED</literal>, <literal>READ_UNCOMMITTED</literal>, and
                          <literal>NONE</literal>. <!-- more docs needed --></para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">LockAcquisitionTimeout</emphasis> specifies the time in milliseconds to
                          wait for a lock to be acquired. If a lock cannot be acquired an exception will be thrown.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">ReplQueueInterval</emphasis> specifies the time in milliseconds for
                          elements from the replication queue to be replicated.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">SyncReplTimeout</emphasis> specifies the time in milliseconds to wait
                          until replication ACKs have been received from all nodes in the cluster. This attribute applies
                          to synchronous replication mode only (i.e., <literal>CacheMode</literal> attribute is
                              <literal>REPL_SYNC</literal>).</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">UseReplQueue</emphasis> specifies whether or not to use a replication
                          queue (<literal>true/false</literal>). This attribute applies to synchronous replication mode
                          only (i.e., <literal>CacheMode</literal> attribute is <literal>REPL_ASYNC</literal>).</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">ReplQueueMaxElements</emphasis> specifies the maximum number of elements
                          in the replication queue until replication kicks in.</para>
                  </listitem>
                  <listitem>
                      <para><emphasis role="bold">TransactionManagerLookupClass</emphasis> specifies the fully qualified
                          name of a class implementing <literal>TransactionManagerLookup</literal>. The default is
                              <literal>JBossTransactionManagerLookup</literal> for the transaction manager inside the
                          JBoss AS. There is also an option of <literal>DummyTransactionManagerLookup</literal> for simple
                          standalone examples.</para>
                  </listitem>
              </itemizedlist>
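            <para>To illustrate how the replication-related attributes fit together, the following fragment
                sketches a <literal>TreeCache</literal> MBean configured for asynchronous replication with a
                replication queue. The attribute names come from the list above; the values are illustrative
                only and would normally be tuned per application.</para>
            <programlisting>
  &lt;attribute name="CacheMode">REPL_ASYNC&lt;/attribute>
  &lt;attribute name="UseReplQueue">true&lt;/attribute>
  &lt;attribute name="ReplQueueInterval">100&lt;/attribute>
  &lt;attribute name="ReplQueueMaxElements">10&lt;/attribute>
            </programlisting>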
          </section>
      </chapter>
  
  
  
  
  
  </book>
  
  
  


