[jboss-cvs] JBossCache/docs/JBossCache-UserGuide/en/modules ...

Manik Surtani manik at jboss.org
Mon Apr 30 13:36:49 EDT 2007


  User: msurtani
  Date: 07/04/30 13:36:49

  Modified:    docs/JBossCache-UserGuide/en/modules     
                        eviction_policies.xml basic_api.xml
                        cache_loaders.xml architecture.xml replication.xml
  Log:
  JBCACHE-1040
  
  Revision  Changes    Path
  1.10      +379 -252  JBossCache/docs/JBossCache-UserGuide/en/modules/eviction_policies.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: eviction_policies.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/eviction_policies.xml,v
  retrieving revision 1.9
  retrieving revision 1.10
  diff -u -b -r1.9 -r1.10
  --- eviction_policies.xml	28 Feb 2007 12:30:58 -0000	1.9
  +++ eviction_policies.xml	30 Apr 2007 17:36:48 -0000	1.10
  @@ -5,7 +5,8 @@
         Eviction policies control JBoss Cache's memory management by managing how many nodes are allowed to be stored in
         memory and their life spans.  Memory constraints on servers mean cache cannot grow indefinitely, so policies
         need to be in place to restrict the size of the cache.  Eviction policies are most often used alongside
  -      <link linkend="cache_loaders">cache loaders</link>.
  +      <link linkend="cache_loaders">cache loaders</link>
  +      .
      </para>
   
      <section>
  @@ -15,7 +16,7 @@
            <para>
               The basic eviction policy configuration element looks like:
               <programlisting>
  -<![CDATA[
  +               <![CDATA[
   
      ...
   
  @@ -51,9 +52,20 @@
               </programlisting>
   
               <itemizedlist>
  -               <listitem><literal>wakeUpIntervalSeconds</literal> - this required parameter defines how often the eviction thread runs</listitem>
  -               <listitem><literal>eventQueueSize</literal> - this optional parameter defines the size of the queue which holds eviction events.  If your eviction thread does not run often enough, you may need to increase this.</listitem>
  -               <listitem><literal>policyClass</literal> - this is required, unless you set individual policyClass attributes on each and every region.  This defines the eviction policy to use if one is not defined for a region.</listitem>
  +               <listitem>
  +                  <literal>wakeUpIntervalSeconds</literal>
  +                  - this required parameter defines how often the eviction thread runs
  +               </listitem>
  +               <listitem>
  +                  <literal>eventQueueSize</literal>
  +                  - this optional parameter defines the size of the queue which holds eviction events. If your eviction
  +                  thread does not run often enough, you may need to increase this.
  +               </listitem>
  +               <listitem>
  +                  <literal>policyClass</literal>
  +                  - this is required, unless you set individual policyClass attributes on each and every region. This
  +                  defines the eviction policy to use if one is not defined for a region.
  +               </listitem>
               </itemizedlist>
   
            </para>
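
   [Editor's note: to make the three attributes concrete, a minimal eviction block combining them might look like the following. This is an illustrative sketch in the style of the snippet referenced above; the region name and numeric values are placeholders, not recommended defaults.]

```xml
<attribute name="EvictionPolicyConfig">
   <config>
      <!-- required: how often (in seconds) the eviction thread wakes up -->
      <attribute name="wakeUpIntervalSeconds">5</attribute>
      <!-- optional: capacity of the queue holding eviction events -->
      <attribute name="eventQueueSize">200000</attribute>
      <!-- cache-wide default policy, used by regions that define none -->
      <attribute name="policyClass">org.jboss.cache.eviction.LRUPolicy</attribute>
      <region name="_default_">
         <attribute name="maxNodes">5000</attribute>
         <attribute name="timeToLiveSeconds">1000</attribute>
      </region>
   </config>
</attribute>
```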
  @@ -61,30 +73,73 @@
         <section>
            <title>Eviction Regions</title>
            <para>
  -            The concept of regions and the <literal>Region</literal> class were <link linkend="architecture.regions">visited earlier</link>
  +            The concept of regions and the
  +            <literal>Region</literal>
  +            class were
  +            <link linkend="architecture.regions">visited earlier</link>
               when talking about marshalling.  Regions also have another use, in that they are used to define the eviction
               policy used within the region.  In addition to using a region-specific configuration, you can also configure
               a default, cache-wide eviction policy for nodes that do not fall into predefined regions or if you do not
               wish to define specific regions.  It is important to note that when defining regions using the configuration
  -            XML file, all elements of the <literal>Fqn</literal> that defines the region are <literal>java.lang.String</literal>
  +            XML file, all elements of the
  +            <literal>Fqn</literal>
  +            that defines the region are
  +            <literal>java.lang.String</literal>
               objects.
            </para>
            <para>
  -            Looking at the eviction configuration snippet above, we see that a default region, <literal>_default_</literal>, holds attributes
  +            Looking at the eviction configuration snippet above, we see that a default region,
  +            <literal>_default_</literal>
  +            , holds attributes
               which apply to nodes that do not fall into any of the other regions defined.
            </para>
            <para>
  -            For each region, you can define parameters which affect how the policy which applies to the region chooses to evict nodes.
  -            In the example above, the <literal>LRUPolicy</literal> allows a <literal>maxNodes</literal> parameter which defines
  +            For each region, you can define parameters which affect how the policy which applies to the region chooses
  +            to evict nodes.
  +            In the example above, the
  +            <literal>LRUPolicy</literal>
  +            allows a
  +            <literal>maxNodes</literal>
  +            parameter which defines
               how many nodes can exist in the region before it chooses to start evicting nodes.  See the javadocs for each
               policy for a list of allowed parameters.
            </para>
  +
  +         <section>
  +            <title>Overlapping Eviction Regions</title>
  +
  +            <para>It's possible to define regions that overlap. In other words, one region can be defined for
  +               <emphasis>/a/b/c</emphasis>
  +               , and another
  +               defined for
  +               <emphasis>/a/b/c/d</emphasis>
  +               (which is just the
  +               <emphasis>d</emphasis>
  +               subtree of the
  +               <emphasis>/a/b/c</emphasis>
  +               sub-tree).
  +               The algorithm, in order to handle scenarios like this consistently, will always choose the first region
  +               it encounters.
  +               In this way, if the algorithm needed to decide how to handle
  +               <emphasis>/a/b/c/d/e</emphasis>
  +               , it would start from there and work
  +               its way up the tree until it hits the first defined region - in this case
  +               <emphasis>/a/b/c/d</emphasis>
  +               .
  +            </para>
  +         </section>
  +
         </section>
         <section>
            <title>Programmatic Configuration</title>
            <para>
  -            Configuring eviction using the <literal>Configuration</literal> object entails the use of the <literal>org.jboss.cache.config.EvictionConfig</literal>
  -            bean, which is passed into <literal>Configuration.setEvictionConfig()</literal>.
  +            Configuring eviction using the
  +            <literal>Configuration</literal>
  +            object entails the use of the
  +            <literal>org.jboss.cache.config.EvictionConfig</literal>
  +            bean, which is passed into
  +            <literal>Configuration.setEvictionConfig()</literal>
  +            .
            </para>
         </section>
      </section>
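
   [Editor's note: the first-region-encountered rule for overlapping regions described above can be sketched as follows. This is a hypothetical illustration in plain Java, not JBoss Cache source; Fqns are modelled as plain strings and the method name is an assumption.]

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the resolution rule described above (not JBoss Cache
// source): starting from the node's Fqn, walk up the tree and return the first
// configured region encountered.
class RegionLookup {
    static String findRegion(Set<String> regions, String fqn) {
        for (String path = fqn; !path.isEmpty();
             path = path.substring(0, Math.max(path.lastIndexOf('/'), 0))) {
            if (regions.contains(path)) {
                return path;
            }
        }
        return null; // no match: fall back to the cache-wide _default_ region
    }

    public static void main(String[] args) {
        Set<String> regions = new HashSet<>(Arrays.asList("/a/b/c", "/a/b/c/d"));
        // /a/b/c/d/e is governed by the closest enclosing region, /a/b/c/d
        System.out.println(findRegion(regions, "/a/b/c/d/e")); // prints /a/b/c/d
    }
}
```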
  @@ -96,19 +151,28 @@
   
         <para>
            <literal>org.jboss.cache.eviction.LRUPolicy</literal>
  -         controls both the node lifetime and age. This policy guarantees a constant order (<literal>O (1)</literal>) for
  +            controls both the node lifetime and age. This policy guarantees a constant order (
  +            <literal>O (1)</literal>
  +            ) for
            adds, removals and lookups (visits). It has the following configuration
            parameters:
         </para>
   
         <itemizedlist>
  -         <listitem><literal>maxNodes</literal> - This is the maximum number of nodes allowed in this region. 0 denotes no limit.</listitem>
            <listitem>
  -                        <literal>timeToLiveSeconds</literal> - The amount of time a node is not written to or read (in seconds) before the node is swept away. 0 denotes no limit.
  +               <literal>maxNodes</literal>
  +               - This is the maximum number of nodes allowed in this region. 0 denotes no limit.
  +            </listitem>
  +            <listitem>
  +               <literal>timeToLiveSeconds</literal>
  +               - The amount of time a node is not written to or read (in seconds) before the node is swept away. 0
  +               denotes no limit.
                     </listitem>
   
                     <listitem>
  -                        <literal>maxAgeSeconds</literal> - Lifespan of a node (in seconds) regardless of idle time before the node is swept away. 0 denotes no limit.
  +               <literal>maxAgeSeconds</literal>
  +               - Lifespan of a node (in seconds) regardless of idle time before the node is swept away. 0 denotes no
  +               limit.
                     </listitem>
         </itemizedlist>
      </section>
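
   [Editor's note: the access-ordering behaviour that LRUPolicy describes can be illustrated with a plain access-ordered LinkedHashMap. This is a sketch of the LRU idea only, not JBoss Cache code; maxNodes here is just an analogue of the parameter above, and idle/age sweeping is omitted.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the LRU idea behind LRUPolicy, sketched with an
// access-ordered LinkedHashMap that caps the entry count at maxNodes.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxNodes;

    LruSketch(int maxNodes) {
        super(16, 0.75f, true); // true = iterate in access order, like LRU
        this.maxNodes = maxNodes;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxNodes; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruSketch<String, Integer> cache = new LruSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes least recently used
        cache.put("c", 3); // exceeds maxNodes, so "b" is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```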
  @@ -119,12 +183,17 @@
         <para>
            <literal>org.jboss.cache.eviction.FIFOPolicy</literal>
            controls the eviction in a proper first in first out order. This policy
  -         guarantees a constant order (<literal>O (1)</literal>) for adds, removals and lookups (visits). It has the
  +            guarantees a constant order (
  +            <literal>O (1)</literal>
  +            ) for adds, removals and lookups (visits). It has the
            following configuration parameters:
         </para>
   
         <itemizedlist>
  -         <listitem><literal>maxNodes</literal> - This is the maximum number of nodes allowed in this region. 0 denotes no limit.</listitem>
  +            <listitem>
  +               <literal>maxNodes</literal>
  +               - This is the maximum number of nodes allowed in this region. 0 denotes no limit.
  +            </listitem>
         </itemizedlist>
         </section>
   
  @@ -137,12 +206,17 @@
            controls
          the eviction based on a most-recently-used algorithm. The most recently
            used nodes will be the first to evict with this policy. This policy
  -         guarantees a constant order (<literal>O (1)</literal>) for adds, removals and lookups (visits). It has the
  +            guarantees a constant order (
  +            <literal>O (1)</literal>
  +            ) for adds, removals and lookups (visits). It has the
            following configuration parameters:
         </para>
   
         <itemizedlist>
  -         <listitem><literal>maxNodes</literal> - This is the maximum number of nodes allowed in this region. 0 denotes no limit.</listitem>
  +            <listitem>
  +               <literal>maxNodes</literal>
  +               - This is the maximum number of nodes allowed in this region. 0 denotes no limit.
  +            </listitem>
         </itemizedlist>
      </section>
   
  @@ -159,22 +233,34 @@
            which nodes are least frequently used. LFU is also a sorted eviction
            algorithm. The underlying EvictionQueue implementation and algorithm is
            sorted in ascending order of the node visits counter. This class
  -         guarantees a constant order (<literal>O (1)</literal>) for adds, removal and searches. However, when any
  +            guarantees a constant order (
  +            <literal>O (1)</literal>
   +            ) for adds, removals and searches. However, when any
             number of nodes are added to or visited in the queue during a given processing
  -         pass, a single quasilinear (<literal>O (n * log n)</literal>) operation is used to resort the queue in
  +            pass, a single quasilinear (
  +            <literal>O (n * log n)</literal>
  +            ) operation is used to resort the queue in
            proper LFU order. Similarly if any nodes are removed or evicted, a
  -         single linear (<literal>O (n)</literal>) pruning operation is necessary to clean up the
  +            single linear (
  +            <literal>O (n)</literal>
  +            ) pruning operation is necessary to clean up the
            EvictionQueue. LFU has the following configuration parameters:
         </para>
   
                  <itemizedlist>
  -                  <listitem><literal>maxNodes</literal> - This is the maximum number of nodes allowed in this region. 0 denotes no limit.</listitem>
  -                  <listitem><literal>minNodes</literal> - This is the minimum number of nodes allowed in this region. This value determines what
  +            <listitem>
  +               <literal>maxNodes</literal>
  +               - This is the maximum number of nodes allowed in this region. 0 denotes no limit.
  +            </listitem>
  +            <listitem>
  +               <literal>minNodes</literal>
  +               - This is the minimum number of nodes allowed in this region. This value determines what
                           the eviction queue should prune down to per pass. e.g. If
                           minNodes is 10 and the cache grows to 100 nodes, the cache is
                           pruned down to the 10 most frequently used nodes when the
                           eviction timer makes a pass through the eviction
  -                        algorithm.</listitem>
  +               algorithm.
  +            </listitem>
   
                  </itemizedlist>
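
   [Editor's note: the ascending-visit-count ordering and minNodes pruning described above can be sketched as follows. This is an illustrative stand-in, not JBoss Cache source; the visit counters are modelled as a plain map.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not JBoss Cache source) of the LFU pass described
// above: entries sort ascending by visit counter, and once the region
// exceeds maxNodes a pass prunes it down to minNodes.
class LfuSketch {
    static List<String> prune(Map<String, Integer> visits, int minNodes, int maxNodes) {
        List<String> evicted = new ArrayList<>();
        if (visits.size() <= maxNodes) {
            return evicted; // under the limit; nothing to do this pass
        }
        // O(n log n) resort in ascending visit order: least frequently used first
        List<Map.Entry<String, Integer>> order = new ArrayList<>(visits.entrySet());
        order.sort(Map.Entry.comparingByValue());
        int toEvict = visits.size() - minNodes;
        for (int i = 0; i < toEvict; i++) {
            evicted.add(order.get(i).getKey());
        }
        return evicted;
    }

    public static void main(String[] args) {
        Map<String, Integer> visits = new java.util.LinkedHashMap<>();
        visits.put("a", 5);
        visits.put("b", 1);
        visits.put("c", 3);
        // over maxNodes=2: prune down to minNodes=1, least-visited evicted first
        System.out.println(prune(visits, 1, 2)); // prints [b, c]
    }
}
```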
   
  @@ -200,7 +286,9 @@
         </para>
   
         <para>
  -         This policy guarantees a constant order (<literal>O (1)</literal>) for adds and removals.
  +            This policy guarantees a constant order (
  +            <literal>O (1)</literal>
  +            ) for adds and removals.
            Internally, a sorted set (TreeSet) containing the expiration
            time and Fqn of the nodes is stored, which essentially
            functions as a heap.
  @@ -215,9 +303,13 @@
                  <literal>expirationKeyName</literal>
                  - This is the Node key name used
                  in the eviction algorithm. The configuration default is
  -               <literal>expiration</literal>.
  +               <literal>expiration</literal>
  +               .
  +            </listitem>
  +            <listitem>
  +               <literal>maxNodes</literal>
  +               - This is the maximum number of nodes allowed in this region. 0 denotes no limit.
            </listitem>
  -         <listitem><literal>maxNodes</literal> - This is the maximum number of nodes allowed in this region. 0 denotes no limit.</listitem>
   
         </itemizedlist>
   
  @@ -225,7 +317,7 @@
            The following listing shows how the expiration date is indicated and how the
            policy is applied:
            <programlisting>
  -<![CDATA[
  +               <![CDATA[
      Cache cache = DefaultCacheFactory.createCache();
      Fqn fqn1 = Fqn.fromString("/node/1");
      Long future = new Long(System.currentTimeMillis() + 2000);
  @@ -255,11 +347,16 @@
         <title>Eviction Policy Plugin Design</title>
   
         <para>The design of the JBoss Cache eviction policy framework is based
  -         on an <literal>EvictionInterceptor</literal> to handle cache events and relay them back to the eviction
  -         policies. During the cache start up, an <literal>EvictionInterceptor</literal> will be added to the cache
  +            on an
  +            <literal>EvictionInterceptor</literal>
  +            to handle cache events and relay them back to the eviction
  +            policies. During the cache start up, an
  +            <literal>EvictionInterceptor</literal>
  +            will be added to the cache
            interceptor stack if the eviction policy is specified.
            Then whenever a node is added, removed, evicted, or visited, the
  -         <literal>EvictionInterceptor</literal> will maintain state statistics and
  +            <literal>EvictionInterceptor</literal>
  +            will maintain state statistics and
            information will be relayed to each individual eviction region.
         </para>
   
  @@ -279,10 +376,18 @@
         <para>In order to implement an eviction policy, the following interfaces
            must be implemented:
            <itemizedlist>
  -         <listitem><literal>org.jboss.cache.eviction.EvictionPolicy</literal></listitem>
  -         <listitem><literal>org.jboss.cache.eviction.EvictionAlgorithm</literal></listitem>
  -         <listitem><literal>org.jboss.cache.eviction.EvictionQueue</literal></listitem>
  -         <listitem><literal>org.jboss.cache.eviction.EvictionConfiguration</literal></listitem>
  +               <listitem>
  +                  <literal>org.jboss.cache.eviction.EvictionPolicy</literal>
  +               </listitem>
  +               <listitem>
  +                  <literal>org.jboss.cache.eviction.EvictionAlgorithm</literal>
  +               </listitem>
  +               <listitem>
  +                  <literal>org.jboss.cache.eviction.EvictionQueue</literal>
  +               </listitem>
  +               <listitem>
  +                  <literal>org.jboss.cache.eviction.EvictionConfiguration</literal>
  +               </listitem>
            </itemizedlist>
            When compounded
            together, each of these interface implementations define all the
  @@ -307,7 +412,8 @@
   
         <itemizedlist>
            <listitem>
  -            <para>The EvictionConfiguration class <literal>parseXMLConfig(Element)</literal>
  +               <para>The EvictionConfiguration class
  +                  <literal>parseXMLConfig(Element)</literal>
                  method expects only the DOM element pertaining to the region the
                  policy is being configured for.
               </para>
  @@ -315,17 +421,30 @@
            <listitem>
               <para>The EvictionConfiguration implementation should maintain
                  getter and setter methods for configured properties pertaining to
  -               the policy used on a given cache region. (e.g. for <literal>LRUConfiguration</literal>
  -               there is a <literal>int getMaxNodes()</literal> and a <literal>setMaxNodes(int)</literal>)
  +                  the policy used on a given cache region. (e.g. for
  +                  <literal>LRUConfiguration</literal>
  +                  there is a
  +                  <literal>int getMaxNodes()</literal>
  +                  and a
  +                  <literal>setMaxNodes(int)</literal>
  +                  )
               </para>
            </listitem>
         </itemizedlist>
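
   [Editor's note: the getter/setter convention above might look like this in a hypothetical configuration bean. The class and property names are modelled on the LRUConfiguration example in the text and are assumptions, not actual source.]

```java
// Hypothetical EvictionConfiguration-style bean illustrating the
// getter/setter convention described above; not actual JBoss Cache source.
class SamplePolicyConfiguration {
    private int maxNodes;
    private int timeToLiveSeconds;

    public int getMaxNodes() { return maxNodes; }
    public void setMaxNodes(int maxNodes) { this.maxNodes = maxNodes; }

    public int getTimeToLiveSeconds() { return timeToLiveSeconds; }
    public void setTimeToLiveSeconds(int seconds) { this.timeToLiveSeconds = seconds; }

    public static void main(String[] args) {
        SamplePolicyConfiguration cfg = new SamplePolicyConfiguration();
        cfg.setMaxNodes(5000); // as a region's maxNodes attribute would be parsed
        System.out.println(cfg.getMaxNodes()); // prints 5000
    }
}
```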
   
         <para>Alternatively, the implementation of a new eviction policy
  -         provider can be simplified by extending <literal>BaseEvictionPolicy</literal> and
  -         <literal>BaseEvictionAlgorithm</literal>. Or for properly sorted EvictionAlgorithms (sorted
  -         in eviction order - see <literal>LFUAlgorithm</literal>) extending
  -         <literal>BaseSortedEvictionAlgorithm</literal> and implementing <literal>SortedEvictionQueue</literal> takes
  +            provider can be simplified by extending
  +            <literal>BaseEvictionPolicy</literal>
  +            and
  +            <literal>BaseEvictionAlgorithm</literal>
  +            . Or for properly sorted EvictionAlgorithms (sorted
  +            in eviction order - see
  +            <literal>LFUAlgorithm</literal>
  +            ) extending
  +            <literal>BaseSortedEvictionAlgorithm</literal>
  +            and implementing
  +            <literal>SortedEvictionQueue</literal>
  +            takes
            care of most of the common functionality available in a set of eviction
          policy provider classes.
         </para>
  @@ -337,7 +456,9 @@
   
         <itemizedlist>
            <listitem>
  -            <para>The <literal>BaseEvictionAlgorithm</literal> class maintains a processing
  +               <para>The
  +                  <literal>BaseEvictionAlgorithm</literal>
  +                  class maintains a processing
                  structure. It will process the ADD, REMOVE, and VISIT events queued
                by the region first. It also maintains a collection
                  items that were not properly evicted during the last go around
  @@ -347,7 +468,9 @@
               </para>
            </listitem>
            <listitem>
  -            <para>The <literal>BaseSortedEvictionAlgorithm</literal> class will maintain a boolean
  +               <para>The
  +                  <literal>BaseSortedEvictionAlgorithm</literal>
  +                  class will maintain a boolean
                  through the algorithm processing that will determine if any new
                  nodes were added or visited. This allows the Algorithm to determine
                  whether to resort the eviction queue items (in first to evict order)
  @@ -356,8 +479,12 @@
               </para>
            </listitem>
            <listitem>
  -            <para>The <literal>SortedEvictionQueue</literal> interface defines the contract used by
  -               the <literal>BaseSortedEvictionAlgorithm</literal> abstract class that is used to
  +               <para>The
  +                  <literal>SortedEvictionQueue</literal>
  +                  interface defines the contract used by
  +                  the
  +                  <literal>BaseSortedEvictionAlgorithm</literal>
  +                  abstract class that is used to
                  resort the underlying queue. Again, the queue sorting should be
                  sorted in first to evict order. The first entry in the list should
                  evict before the last entry in the queue. The last entry in the
  
  
  
  1.7       +15 -9     JBossCache/docs/JBossCache-UserGuide/en/modules/basic_api.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: basic_api.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/basic_api.xml,v
  retrieving revision 1.6
  retrieving revision 1.7
  diff -u -b -r1.6 -r1.7
  --- basic_api.xml	30 Jan 2007 02:06:03 -0000	1.6
  +++ basic_api.xml	30 Apr 2007 17:36:48 -0000	1.7
  @@ -159,7 +159,9 @@
   
            </programlisting>
   
  -         Refer to the javadocs on the <literal>CacheLoader</literal> interface for details on the parameters passed in
  +         Refer to the javadocs on the
  +         <literal>CacheLoader</literal>
  +         interface for details on the parameters passed in
            to each of the callback methods.
         </para>
      </section>
  @@ -179,7 +181,8 @@
               </listitem>
               <listitem>
                  <literal>org.jboss.cache.loader.JDBCCacheLoader</literal>
  -               - uses a JDBC connection to store data. Connections could be created and maintained in an internal pool (uses the c3p0 pooling library)
  +               - uses a JDBC connection to store data. Connections could be created and maintained in an internal pool
  +               (uses the c3p0 pooling library)
                or from a configured DataSource. The database this CacheLoader connects to can be located locally
                or remotely.
               </listitem>
  @@ -195,6 +198,9 @@
               <listitem>
                  <literal>org.jboss.cache.loader.tcp.TcpCacheLoader</literal>
                  - uses a TCP socket to "persist" data to a remote cluster, using a "far cache" pattern.
  +               <footnote>
  +                  <para>http://wiki.jboss.org/wiki/Wiki.jsp?page=JBossClusteringPatternFarCache</para>
  +               </footnote>
               </listitem>
               <listitem>
                  <literal>org.jboss.cache.loader.ClusteredCacheLoader</literal>
  
  
  
  1.13      +31 -21    JBossCache/docs/JBossCache-UserGuide/en/modules/cache_loaders.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: cache_loaders.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/cache_loaders.xml,v
  retrieving revision 1.12
  retrieving revision 1.13
  diff -u -b -r1.12 -r1.13
  --- cache_loaders.xml	20 Apr 2007 17:40:29 -0000	1.12
  +++ cache_loaders.xml	30 Apr 2007 17:36:48 -0000	1.13
  @@ -323,18 +323,18 @@
   
         <para>
            <literal>singletonStore</literal>
  -         enables modifications to
  -         be stored by only one node in the cluster, the coordinator. This property
  -         can be set to true in all nodes, but only the coordinator of the cluster
  -         will store the modifications in the underlying cache loader as specified
  -         in the
  +         enables modifications to be stored by only one node in the cluster,
  +         the coordinator. Essentially, whenever any data comes in to some node
  +         it is always replicated so as to keep the caches' in-memory states in
  +         sync; the coordinator, though, has the sole responsibility of pushing
  +         that state to disk. This property can be set to true in all nodes, but
  +         again only the coordinator of the cluster will store the modifications
  +         in the underlying cache loader as specified in the
            <literal>class</literal>
  -         element inside
  -         <literal>cacheloader
  -         </literal>
  -         element . You cannot define a cache loader as
  -         <literal>shared
  -         </literal>
  +         element inside the
  +         <literal>cacheloader</literal>
  +         element. You cannot define a cache loader as
  +         <literal>shared</literal>
            and
            <literal>singletonStore</literal>
            at the same time.
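
   [Editor's note: for orientation only, a cache loader block combining these elements might be sketched as below. Element names follow the prose above (class, cacheloader, shared, singletonStore); the exact schema can vary between releases, so treat this as an assumption to check against your release's DTD.]

```xml
<attribute name="CacheLoaderConfiguration">
   <config>
      <!-- a loader cannot be both shared and a singleton store -->
      <shared>false</shared>
      <cacheloader>
         <class>org.jboss.cache.loader.FileCacheLoader</class>
         <!-- only the cluster coordinator writes to the store -->
         <singletonStore>true</singletonStore>
      </cacheloader>
   </config>
</attribute>
```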
  @@ -920,8 +920,11 @@
         <section id="cl.transforming">
            <title>Transforming Cache Loaders</title>
   
  -         <para>The way cached data is written to <literal>FileCacheLoader</literal> and
  -            <literal>JDBCCacheLoader</literal> based cache stores has changed in JBoss Cache 2.0 in such way that
  +         <para>The way cached data is written to
  +            <literal>FileCacheLoader</literal>
  +            and
  +            <literal>JDBCCacheLoader</literal>
   +            based cache stores has changed in JBoss Cache 2.0 in such a way that
             these cache loaders now write and read data using the same marshalling framework used to replicate data
             across the network. Such a change is trivial for replication purposes as it just requires the rest of the
               nodes to understand this format. However, changing the format of the data in cache stores brings up a new
  @@ -930,8 +933,10 @@
            </para>
   
            <para>With this in mind, JBoss Cache 2.0 comes with two cache loader implementations called
  -            <literal>org.jboss.cache.loader.TransformingFileCacheLoader</literal> and
  -            <literal>org.jboss.cache.loader.TransformingJDBCCacheLoader</literal> located within the optional
  +            <literal>org.jboss.cache.loader.TransformingFileCacheLoader</literal>
  +            and
  +            <literal>org.jboss.cache.loader.TransformingJDBCCacheLoader</literal>
  +            located within the optional
               jbosscache-cacheloader-migration.jar file. These are one-off cache loaders that read data from the
               cache store in JBoss Cache 1.x.x format and write data to cache stores in JBoss Cache 2.0 format.
            </para>
  @@ -941,10 +946,14 @@
               recursively reads the entire cache and writes the data read back into the cache. Once the data is
               transformed, users can revert back to their original cache configuration file(s). In order to help the users
               with this task, a cache loader migration example has been constructed which can be located under the
  -            <literal>examples/cacheloader-migration</literal> directory within the JBoss Cache distribution. This
  -            example, called <literal>examples.TransformStore</literal>, is independent of the actual data stored in
  +            <literal>examples/cacheloader-migration</literal>
  +            directory within the JBoss Cache distribution. This
  +            example, called
  +            <literal>examples.TransformStore</literal>
  +            , is independent of the actual data stored in
             the cache, as it simply writes back whatever was read, recursively. It is highly recommended that anyone
  -            interested in porting their data run this example first, which contains a <literal>readme.txt</literal>
  +            interested in porting their data run this example first, which contains a
  +            <literal>readme.txt</literal>
               file with detailed information about the example itself, and also use it as a base for their own application.
            </para>
   
  @@ -1125,7 +1134,8 @@
            <orderedlist>
               <listitem>
                  <para>Tell the coordinator (oldest node in a cluster) to send it
  -                  the state
  +                  the state. This is always a full state transfer, overwriting
  +                  any state that may already be present.
                  </para>
               </listitem>
   
  
  
  
  1.9       +2 -2      JBossCache/docs/JBossCache-UserGuide/en/modules/architecture.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: architecture.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/architecture.xml,v
  retrieving revision 1.8
  retrieving revision 1.9
  diff -u -b -r1.8 -r1.9
  --- architecture.xml	23 Apr 2007 14:15:07 -0000	1.8
  +++ architecture.xml	30 Apr 2007 17:36:49 -0000	1.9
  @@ -32,7 +32,7 @@
   
               <mediaobject>
                  <imageobject>
  -                  <imagedata fileref="images/TreeCacheArchitecture.gif"/>
  +                  <imagedata fileref="images/TreeCacheArchitecture.png"/>
                  </imageobject>
               </mediaobject>
            </figure>
  
  
  
  1.10      +90 -84    JBossCache/docs/JBossCache-UserGuide/en/modules/replication.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: replication.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/replication.xml,v
  retrieving revision 1.9
  retrieving revision 1.10
  diff -u -b -r1.9 -r1.10
  --- replication.xml	26 Apr 2007 14:31:43 -0000	1.9
  +++ replication.xml	30 Apr 2007 17:36:49 -0000	1.10
  @@ -259,19 +259,27 @@
   
                  <para>Data Gravitation is a concept where if a request is made on a
                     cache in the cluster and the cache does not contain this
  -                  information, it then asks other instances in the cluster for the
  -                  data. If even this fails, it would (optionally) ask other instances
  -                  to check in the backup data they store for other caches. This means
  -                  that even if a cache containing your session dies, other instances
  -                  will still be able to access this data by asking the cluster to
  -                  search through their backups for this data.
  -               </para>
  -
  -               <para>Once located, this data is then transferred to the instance
  -                  which requested it and is added to this instance's data tree. It is
  -                  then (optionally) removed from all other instances (and backups) so
  -                  that if session affinity is used, the affinity should now be to this
  -                  new cache instance which has just
  +                  information, it asks other instances in the cluster for the
  +                  data. In other words, data is lazily transferred, migrating
  +                  <emphasis>only</emphasis>
  +                  when other nodes ask for it. This strategy
  +                  prevents a network storm effect, in which large amounts of data are
  +                  pushed around healthy nodes when only one (or a few) of them dies.
  +               </para>
  +
  +               <para>If the data is not found in the primary tree of any node,
  +                  the requesting cache can (optionally) ask other instances to check
  +                  the backup data they store for other caches.
  +                  This means that even if a cache containing your session dies, other
  +                  instances will still be able to access this data by asking the cluster
  +                  to search through their backups for this data.
  +               </para>
  +
  +               <para>Once located, this data is transferred to the instance
  +                  which requested it and is added to this instance's data tree.
  +                  The data is then (optionally) removed from all other instances
  +                  (and backups) so that if session affinity is used, the affinity
  +                  should now be to this new cache instance which has just
                     <emphasis>taken
                        ownership
                     </emphasis>
  @@ -288,7 +296,8 @@
                           <literal>dataGravitationRemoveOnFind</literal>
   
                           - forces all remote caches that own the data or hold backups for the data to remove that data,
  -                        thereby making the requesting cache the new data owner. If set to
  +                        thereby making the requesting cache the new data owner. This removal, of course, only happens
  +                        after the new owner finishes replicating data to its buddy. If set to
   
                           <literal>false</literal>
   
  @@ -353,57 +362,50 @@
   
                  <para>
                     <programlisting>
  -
  -                     &lt;!-- Buddy Replication config --&gt;
  -                     &lt;attribute name="BuddyReplicationConfig"&gt;
  -                     &lt;config&gt;
  -
  -                     &lt;!-- Enables buddy replication. This is the ONLY mandatory configuration element here. --&gt;
  -                     &lt;buddyReplicationEnabled&gt;true&lt;/buddyReplicationEnabled&gt;
  -
  -                     &lt;!-- These are the default values anyway --&gt;
  -                     &lt;buddyLocatorClass&gt;org.jboss.cache.buddyreplication.NextMemberBuddyLocator&lt;/buddyLocatorClass&gt;
  -
  -                     &lt;!-- numBuddies is the number of backup nodes each node maintains. ignoreColocatedBuddies means
  -                     that
  -                     each node will *try* to select a buddy on a different physical host. If not able to do so though,
  -                     it will fall back to colocated nodes. --&gt;
  -                     &lt;buddyLocatorProperties&gt;
  +                     <![CDATA[
  +<!-- Buddy Replication config -->
  +<attribute name="BuddyReplicationConfig">
  +   <config>
  +
  +      <!-- Enables buddy replication. This is the ONLY mandatory configuration element here. -->
  +      <buddyReplicationEnabled>true</buddyReplicationEnabled>
  +
  +      <!-- These are the default values anyway -->
  +      <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
  +
  +      <!--  numBuddies is the number of backup nodes each node maintains. ignoreColocatedBuddies means
  +            that each node will *try* to select a buddy on a different physical host. If not able to do so though,
  +            it will fall back to colocated nodes. -->
  +      <buddyLocatorProperties>
                        numBuddies = 1
                        ignoreColocatedBuddies = true
  -                     &lt;/buddyLocatorProperties&gt;
  +      </buddyLocatorProperties>
   
  -                     &lt;!-- A way to specify a preferred replication group. If specified, we try and pick a buddy why
  -                     shares
  +      <!-- A way to specify a preferred replication group. If specified, we try and pick a buddy which shares
                      the same pool name (falling back to other buddies if not available). This allows the sysadmin to
  -                     hint at
  -                     backup buddies are picked, so for example, nodes may be hinted topick buddies on a different
  -                     physical rack
  -                     or power supply for added fault tolerance. --&gt;
  -                     &lt;buddyPoolName&gt;myBuddyPoolReplicationGroup&lt;/buddyPoolName&gt;
  -
  -                     &lt;!-- Communication timeout for inter-buddy group organisation messages (such as assigning to and
  -                     removing
  -                     from groups, defaults to 1000. --&gt;
  -                     &lt;buddyCommunicationTimeout&gt;2000&lt;/buddyCommunicationTimeout&gt;
  -
  -                     &lt;!-- Whether data is removed from old owners when gravitated to a new owner. Defaults to true.
  -                     --&gt;
  -                     &lt;dataGravitationRemoveOnFind&gt;true&lt;/dataGravitationRemoveOnFind&gt;
  -
  -                     &lt;!-- Whether backup nodes can respond to data gravitation requests, or only the data owner is
  -                     supposed to respond.
  -                     defaults to true. --&gt;
  -                     &lt;dataGravitationSearchBackupTrees&gt;true&lt;/dataGravitationSearchBackupTrees&gt;
  -
  -                     &lt;!-- Whether all cache misses result in a data gravitation request. Defaults to false, requiring
  -                     callers to
  -                     enable data gravitation on a per-invocation basis using the Options API. --&gt;
  -                     &lt;autoDataGravitation&gt;false&lt;/autoDataGravitation&gt;
  +           hint at how backup buddies are picked, so, for example, nodes may be hinted to pick buddies on a different
  +           physical rack or power supply for added fault tolerance. -->
  +      <buddyPoolName>myBuddyPoolReplicationGroup</buddyPoolName>
  +
  +      <!-- Communication timeout for inter-buddy group organisation messages (such as assigning to and
  +           removing from groups), defaults to 1000. -->
  +      <buddyCommunicationTimeout>2000</buddyCommunicationTimeout>
  +
  +      <!-- Whether data is removed from old owners when gravitated to a new owner. Defaults to true. -->
  +      <dataGravitationRemoveOnFind>true</dataGravitationRemoveOnFind>
   
  -                     &lt;/config&gt;
  -                     &lt;/attribute&gt;
  +      <!-- Whether backup nodes can respond to data gravitation requests, or only the data owner is
  +           supposed to respond.  Defaults to true. -->
  +      <dataGravitationSearchBackupTrees>true</dataGravitationSearchBackupTrees>
   
  +      <!-- Whether all cache misses result in a data gravitation request. Defaults to false, requiring
  +           callers to enable data gravitation on a per-invocation basis using the Options API. -->
  +      <autoDataGravitation>false</autoDataGravitation>
  +
  +   </config>
  +</attribute>
  +
  +]]>
                     </programlisting>
                  </para>
               </section>
  @@ -516,9 +518,14 @@
                     started (as part of the processing of the
                     <literal>start()</literal>
                     method). This is a full state transfer. The state is retrieved from
  -                  the cache instance that has been operational the longest. If there
  -                  is any problem receiving or integrating the state, the cache will
  -                  not start.
  +                  the cache instance that has been operational the longest.
  +                  <footnote>
  +                     <para>The longest operating cache instance is always, in JGroups
  +                        terms, the coordinator.
  +                     </para>
  +                  </footnote>
  +                  If there is any problem receiving or integrating the state, the cache
  +                  will not start.
                  </para>
   
                  <para>Initial state transfer will occur unless:</para>
  @@ -531,7 +538,7 @@
                           is
                           <literal>true</literal>
                           . This property is used in
  -                        conjunction with region-based marshaling.
  +                        conjunction with region-based marshalling.
                        </para>
                     </listitem>
   
  @@ -544,21 +551,20 @@
               </listitem>
   
               <listitem>
  -               <para>Partial state transfer following region activation. Only
  -                  relevant when region-based marshaling is used. Here a special
  -                  classloader is needed to unmarshal the state for a portion of the
  -                  tree. State transfer cannot succeed until the application registers
  -                  this classloader with the cache. Once the application registers its
  -                  classloader, it calls
  -                  <literal>cache.getRegion(fqn,
  -                     true).activate()
  -                  </literal>
  -                  . As part of the region activation
  -                  process, a partial state transfer of the relevant subtree's state is
  -                  performed. The state is requested from the oldest cache instance in
  -                  the cluster; if that instance responds with no state, state is
  -                  requested from each instance one by one until one provides state or
  -                  all instances have been queried.
  +               <para>Partial state transfer following region activation. When
  +                  region-based marshalling is used, the application needs to register
  +                  a specific class loader with the cache. This class loader is used
  +                  to unmarshal the state for a specific region (subtree) of the cache.
  +               </para>
  +
  +               <para>After registration, the application calls
  +                  <literal>cache.getRegion(fqn, true).activate()</literal>
  +                  ,
  +                  which initiates a partial state transfer of the relevant subtree's
  +                  state. The request is first made to the oldest cache instance in the
  +                  cluster. However, if that instance responds with no state, state is
  +                  then requested from each instance in turn until either one provides
  +                  state or all instances have been queried.
                  </para>
   
                  <para>Typically when region-based marshalling is used, the cache's
  
  
  


