[jboss-cvs] JBossCache/docs/TreeCache/en/modules ...

Brian Stansberry brian.stansberry at jboss.com
Thu Jul 20 22:51:29 EDT 2006


  User: bstansberry
  Date: 06/07/20 22:51:29

  Modified:    docs/TreeCache/en/modules         Tag:
                        Branch_JBossCache_1_4_0_MUX replication.xml
                        introduction.xml treecache_marshaller.xml
                        transactions.xml architecture.xml compatibility.xml
                        basic_api.xml eviction_policies.xml
  Log:
  Sync to JBossCache_1_4_0_GA
  
  Revision  Changes    Path
  No                   revision
  
  
  No                   revision
  
  
  1.5.4.1   +36 -49    JBossCache/docs/TreeCache/en/modules/replication.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: replication.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/replication.xml,v
  retrieving revision 1.5
  retrieving revision 1.5.4.1
  diff -u -b -r1.5 -r1.5.4.1
  --- replication.xml	11 May 2006 14:36:14 -0000	1.5
  +++ replication.xml	21 Jul 2006 02:51:29 -0000	1.5.4.1
  @@ -1,7 +1,7 @@
   <chapter id="replication">
      <title>Clustered Caches</title>
   
  -   <para>The TreeCache can be configured to be either local (standalone) or clustered. If
  +   <para>The <literal>TreeCache</literal> can be configured to be either local (standalone) or clustered. If
      in a cluster, the cache can be configured to replicate changes, or to
      invalidate changes.  A detailed discussion on this follows.</para>
   
  @@ -18,7 +18,7 @@
        <title>Clustered Cache - Using Replication</title>
   
        <para>Replicated caches replicate all changes to the
  -     other TreeCache instances in the cluster. Replication can either happen
  +     other <literal>TreeCache</literal> instances in the cluster. Replication can either happen
        after each modification (no transactions), or at the end of a
        transaction (commit time).</para>
   
  @@ -27,7 +27,7 @@
        the caller (e.g. on a put()) until the modifications have been
        replicated successfully to all nodes in a cluster. Asynchronous
        replication performs replication in the background (the put() returns
  -     immediately). TreeCache also offers a replication queue, where
  +     immediately). <literal>TreeCache</literal> also offers a replication queue, where
        modifications are replicated periodically (i.e. interval-based), or when
        the queue size exceeds a number of elements, or a combination
        thereof.</para>
  @@ -39,7 +39,7 @@
        successfully, the caller knows for sure that all modifications have been
        applied at all nodes, whereas this may or may not be the case with
        asynchronous replication. With asynchronous replication, errors are
  -     simply written to a log.  Even when using transactions, a transaction may succeed but replication may not succeed on all TreeCache instances.</para>
  +     simply written to a log.  Even when using transactions, a transaction may succeed but replication may not succeed on all <literal>TreeCache</literal> instances.</para>
   
          <section>
              <title>Buddy Replication</title>
  @@ -51,7 +51,7 @@
              </para>
              <para>
                  One of the most common use cases of Buddy Replication is when a replicated cache is used by a servlet
  -               containerto store HTTP session data.  One of the pre-requisites to buddy replication working well and being
  +               container to store HTTP session data.  One of the pre-requisites to buddy replication working well and being
                  a real benefit is the use of <emphasis>session affinity</emphasis>, also known as <emphasis>sticky sessions</emphasis>
                  in HTTP session replication speak.  What this means is that if certain data is frequently accessed, it is
                  desirable that this is always accessed on one instance rather than in a round-robin fashion as this helps
  @@ -85,11 +85,11 @@
                       Also known as replication groups, a buddy pool is an optional construct where each instance in a cluster
                       may be configured with a buddy pool name.  Think of this as an 'exclusive club membership' where when
                       selecting buddies, <literal>BuddyLocator</literal>s would try and select buddies sharing the same
  -                    buddy pool name.  This allows system administrators a degree of fleibility and control over how buddies
  +                    buddy pool name.  This allows system administrators a degree of flexibility and control over how buddies
                       are selected.  For example, a sysadmin may put two instances on two separate physical servers that
  -                    may be on two separate physical racks in the same buddy pool.  So ratehr than picking an
  -                    instance on a different host on the same rack, <literal>BuddyLocator</literal>s would ratehr pick
  -                    the instance in the same buddy pool, on a separate rack which may adda degree of redundancy.
  +                    may be on two separate physical racks in the same buddy pool.  So rather than picking an
  +                    instance on a different host on the same rack, <literal>BuddyLocator</literal>s would rather pick
  +                    the instance in the same buddy pool, on a separate rack which may add a degree of redundancy.
                   </para>
               </section>
               <section>
  @@ -114,20 +114,13 @@
                       just <emphasis>taken ownership</emphasis> of this data.
                   </para>
                   <para>
  -                    Data Gravitation is implemented as a special type of <literal>CacheLoader</literal>,
  -                    <literal>org.jboss.cache.buddyreplication.DataGravitator</literal>.  To use Data Gravitation (and it
  -                    is recommended that you do when using Buddy Replication!) you should enable the <literal>DataGravitator</literal>
  -                    as a <literal>CacheLoader</literal>.  If you already have a cache loader defined, use <emphasis>cache loader chaining</emphasis>
  -                    and make sure the <literal>DataGravitator</literal> is last in the cache loader chain.
  -                </para>
  -                <para>
  -                    As a cache loader, the <literal>DataGravitator</literal> takes a few (all optional) configuration properties.
  +                    Data Gravitation is implemented as an interceptor.  The following (all optional) configuration properties pertain to data gravitation.
                       <itemizedlist>
  -                        <listitem><literal>removeOnFind</literal> - forces all remote caches that own the data or hold backups for the data to remove that data,
  +                        <listitem><literal>dataGravitationRemoveOnFind</literal> - forces all remote caches that own the data or hold backups for the data to remove that data,
                               thereby making the requesting cache the new data owner. If set to <literal>false</literal> an evict is broadcast instead of a remove, so any state
                               persisted in cache loaders will remain. This is useful if you have a shared cache loader configured.  Defaults to <literal>true</literal>.</listitem>
  -                        <listitem><literal>searchBackupSubtrees</literal> - Asks remote instances to search through their backups as well as main data trees.  Defaults to <literal>true</literal>.</listitem>
  -                        <listitem><literal>timeout</literal> - A timeout that defines how long it should wait for responses from instances in the cluster before assuming that they do not have the data requested.  Defaults to 10,000 ms</listitem>
  +                        <listitem><literal>dataGravitationSearchBackupTrees</literal> - Asks remote instances to search through their backups as well as main data trees.  Defaults to <literal>true</literal>.  The resulting effect is that if this is <literal>true</literal> then backup nodes can respond to data gravitation requests in addition to data owners.</listitem>
   +                        <listitem><literal>autoDataGravitation</literal> - Whether data gravitation occurs for every cache miss.  By default this is set to <literal>false</literal> to prevent unnecessary network calls.  Most use cases will know when they may need to gravitate data and will pass in an <literal>Option</literal> to enable data gravitation on a per-invocation basis.  If <literal>autoDataGravitation</literal> is <literal>true</literal> this <literal>Option</literal> is unnecessary.</listitem>
                       </itemizedlist>
                   </para>
               </section>
  @@ -150,36 +143,16 @@
                   <title>Configuration</title>
                   <para>
                       <programlisting><![CDATA[
  -                    <!-- DataGravitator is a type of clustered cache loader used in conjuction with buddy replication to
  -                    provide failover. -->
  -                    <attribute name="CacheLoaderConfiguration">
  -                        <config>
  -                            <passivation>false</passivation>
  -                            <preload>/</preload>
  -                            <shared>false</shared>
  -
  -                            <cacheloader>
  -                                <class>org.jboss.cache.buddyreplication.DataGravitator</class>
  -                                <properties>
  -                                    timeout=1000
  -                                    removeOnFind=true
  -                                    searchBackupSubtrees=true
  -                                </properties>
  -                                <async>false</async>
  -                                <fetchPersistentState>false</fetchPersistentState>
  -                                <!-- determines whether this cache loader ignores writes - defaults to false. -->
  -                                <ignoreModifications>false</ignoreModifications>
  -                            </cacheloader>
  -                        </config>
  -                    </attribute>
  -
                       <!-- Buddy Replication config -->
                       <attribute name="BuddyReplicationConfig">
                           <config>
  -                            <!-- enables buddy replication.  This is the ONLY mandatory configuration element here. -->
  +                            
  +							<!-- Enables buddy replication.  This is the ONLY mandatory configuration element here. -->
                               <buddyReplicationEnabled>true</buddyReplicationEnabled>
  -                            <!-- these are the default values anyway -->
  +                            
  +							<!-- These are the default values anyway -->
                               <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
  +                            
                               <!-- numBuddies is the number of backup nodes each node maintains.  ignoreColocatedBuddies means that
                                   each node will *try* to select a buddy on a different physical host.  If not able to do so though,
                                   it will fall back to colocated nodes. -->
  @@ -187,14 +160,28 @@
                                   numBuddies = 1
                                   ignoreColocatedBuddies = true
                               </buddyLocatorProperties>
  +                            
                                <!-- A way to specify a preferred replication group.  If specified, we try and pick a buddy who shares
                                the same pool name (falling back to other buddies if not available).  This allows the sysadmin to hint at how
                                backup buddies are picked, so for example, nodes may be hinted to pick buddies on a different physical rack
                               or power supply for added fault tolerance.  -->
                               <buddyPoolName>myBuddyPoolReplicationGroup</buddyPoolName>
  -                            <!-- communication timeout for inter-buddy group organisation messages (such as assigning to and removing
  +                            
  +							<!-- Communication timeout for inter-buddy group organisation messages (such as assigning to and removing
                             from groups), defaults to 1000. -->
                               <buddyCommunicationTimeout>2000</buddyCommunicationTimeout>
  +							
  +							<!-- Whether data is removed from old owners when gravitated to a new owner.  Defaults to true.  -->
  +							<dataGravitationRemoveOnFind>true</dataGravitationRemoveOnFind>	
  +							
   +							<!-- Whether backup nodes can respond to data gravitation requests, or whether only the data owner should respond.
   +								Defaults to true. -->
  +							<dataGravitationSearchBackupTrees>true</dataGravitationSearchBackupTrees>	
  +							
  +							<!-- Whether all cache misses result in a data gravitation request.  Defaults to false, requiring callers to 
  +								enable data gravitation on a per-invocation basis using the Options API.  -->
  +						    <autoDataGravitation>false</autoDataGravitation>
  +
                           </config>
                       </attribute>
                       ]]>
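
  [Since autoDataGravitation defaults to false above, callers must request gravitation per invocation.
  A minimal sketch of doing so via the Options API follows; the Option class, its package, the
  setForceDataGravitation setter and getInvocationContext() are assumed from 1.4.x conventions and
  are not confirmed by the text above - check the Javadoc for your release.]

    import org.jboss.cache.TreeCache;
    import org.jboss.cache.config.Option;

    public class GravitationExample
    {
        public static void main(String[] args) throws Exception
        {
            TreeCache cache = new TreeCache();
            // ... configure buddy replication as in the XML above, then:
            cache.startService();

            // Ask this one get() to gravitate the data from its current owner
            // if it is not held locally (assumed API; see your release's Javadoc).
            Option opt = new Option();
            opt.setForceDataGravitation(true);
            cache.getInvocationContext().setOptionOverrides(opt);
            Object data = cache.get("/sessions/abc123", "data");
        }
    }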
  
  
  
  1.4.6.1   +17 -17    JBossCache/docs/TreeCache/en/modules/introduction.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: introduction.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/introduction.xml,v
  retrieving revision 1.4
  retrieving revision 1.4.6.1
  diff -u -b -r1.4 -r1.4.6.1
  --- introduction.xml	22 Mar 2006 04:02:00 -0000	1.4
  +++ introduction.xml	21 Jul 2006 02:51:29 -0000	1.4.6.1
  @@ -3,42 +3,42 @@
     <title>Introduction</title>
   
     <section>
  -    <title>What is a TreeCache?</title>
  +    <title>What is a <literal>TreeCache</literal>?</title>
   
       <para>
  -		A TreeCache is a structured, replicated, transactional cache from JBossCache.  TreeCache is the backbone for many fundamental JBoss Application Server clustering services, including - in certain versions - clustering JNDI, HTTP and EJB sessions, and clustering JMS.
  +		A <literal>TreeCache</literal> is a tree-structured, replicated, transactional cache from JBoss Cache.  <literal>TreeCache</literal> is the backbone for many fundamental JBoss Application Server clustering services, including - in certain versions - clustering JNDI, HTTP and EJB sessions, and clustering JMS.
       </para>
       <para>
  -		In addition to this, TreeCache can be used as a standalone transactional and replicated cache or even an object oriented data store, may be embedded in other J2EE compliant application servers such as BEA WebLogic or IBM WebSphere, servlet containers such as Tomcat, or even in Java applications that do not run from within an application server.
  +		In addition to this, <literal>TreeCache</literal> can be used as a standalone transactional and replicated cache or even an object oriented data store, may be embedded in other J2EE compliant application servers such as BEA WebLogic or IBM WebSphere, servlet containers such as Tomcat, or even in Java applications that do not run from within an application server.
   	</para>
     </section>
   
     <section>
  -    <title>TreeCache Basics</title>
  +    <title><literal>TreeCache</literal> Basics</title>
   
  -    <para>The structure of a TreeCache is a tree with nodes. Each node has a
  +    <para>The structure of a <literal>TreeCache</literal> is a tree with nodes. Each node has a
       name and zero or more children. A node can only have 1 parent; there is
       currently no support for graphs. A node can be reached by navigating from
       the root recursively through children, until the requested node is found. It can
       also be accessed by giving a fully qualified name (FQN), which consists of
       the concatenation of all node names from the root to the node in question.</para>
   
  -    <para>A TreeCache can have multiple roots, allowing for a number of
  +    <para>A <literal>TreeCache</literal> can have multiple roots, allowing for a number of
       different trees to be present in a single cache instance. Note that a one level tree is
  -    essentially a HashMap. Each node in the tree has a HashMap of keys and
  +    essentially a <literal>HashMap</literal>. Each node in the tree has a map of keys and
       values. For a replicated cache, all keys and values have to be
  -    serializable. Serializability is not a requirement for PojoCache, where
  -    reflection and AOP is used to replicate any type.</para>
  +    <literal>Serializable</literal>. Serializability is not a requirement for <literal>PojoCache</literal>, where
  +    reflection and aspect-oriented programming is used to replicate any type.</para>
   
  -    <para>A TreeCache can be either local or replicated. Local trees exist
  -    only inside the VM in which they are created, whereas replicated trees
  +    <para>A <literal>TreeCache</literal> can be either local or replicated. Local trees exist
  +    only inside the Java VM in which they are created, whereas replicated trees
       propagate any changes to all other replicated trees in the same cluster. A
       cluster may span different hosts on a network or just different JVMs
       on a single host.</para>
   
  -    <para>The first version of JBossCache was a HashMap. However, the decision
  +    <para>The first version of <literal>TreeCache</literal> was essentially a single <literal>HashMap</literal> that replicated. However, the decision
       was taken to go with a tree structured cache because (a) it is more
  -    flexible and efficient and (b) a tree can always be reduced to a HashMap,
  +    flexible and efficient and (b) a tree can always be reduced to a map,
       thereby offering both possibilities. The efficiency argument was driven by
       concerns over replication overhead, and was that a value itself can be a
       rather sophisticated object, with aggregation pointing to other objects,
  @@ -46,11 +46,11 @@
       therefore trigger the entire object (possibly the transitive closure over
       the object graph) to be serialized and propagated to the other nodes in
       the cluster. With a tree, only the modified nodes in the tree need to be
  -    serialized and propagated. This is not necessarily a concern for TreeCache, but is a
  -    vital requirement for PojoCache (as we will see in the separate PojoCache
  +    serialized and propagated. This is not necessarily a concern for <literal>TreeCache</literal>, but is a
  +    vital requirement for <literal>PojoCache</literal> (as we will see in the separate <literal>PojoCache</literal>
       documentation).</para>
   
  -    <para>When a change is made to the TreeCache, and that change is done in
  +    <para>When a change is made to the <literal>TreeCache</literal>, and that change is done in
       the context of a transaction, then we defer the replication of changes until the transaction
       commits successfully. All modifications are kept in a list associated with
       the transaction for the caller. When the transaction commits, we replicate the
  @@ -65,7 +65,7 @@
    rollback. In this sense, running without a transaction can be thought of as analogous to running with auto-commit switched on in JDBC terminology, where each operation is committed automatically.</para>
   
       <para>There is an API for plugging in different transaction managers: all
  -    it requires is to get the transaction associated with the caller's thread.  Several TransactionManagerLookup classes are provided for popular transaction managers, including a DummyTransactionManager for testing.</para>
  +    it requires is to get the transaction associated with the caller's thread.  Several <literal>TransactionManagerLookup</literal> implementations are provided for popular transaction managers, including a <literal>DummyTransactionManager</literal> for testing.</para>
   
       <para>Finally, we use pessimistic locking of the cache by default, with optimistic locking as a configurable option. With pessimistic locking, we can
       configure the local locking policy corresponding to database-style
  
  
  
  1.3.4.1   +15 -0     JBossCache/docs/TreeCache/en/modules/treecache_marshaller.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: treecache_marshaller.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/treecache_marshaller.xml,v
  retrieving revision 1.3
  retrieving revision 1.3.4.1
  diff -u -b -r1.3 -r1.3.4.1
  --- treecache_marshaller.xml	11 May 2006 00:01:00 -0000	1.3
  +++ treecache_marshaller.xml	21 Jul 2006 02:51:29 -0000	1.3.4.1
  @@ -429,4 +429,19 @@
               is disabled by passing the <literal>-Dserialization.jboss=false</literal> system property to your JVM.
             </para>
         </section>
  +
  +		<section>
  +			<title>Backward compatibility</title>	
  +			<para>
  +				Marshalling in JBoss Cache is now versioned.  All communications between caches contain a version <literal>short</literal> which allows JBoss Cache instances of different versions to communicate with each other.  Up until JBoss Cache 1.4.0, all versions were able to communicate with each other anyway since they all used simple serialization of <literal>org.jgroups.MethodCall</literal> objects, provided they all used the same version of JGroups.  This requirement (more a requirement of the JGroups messaging layer than JBoss Cache) still exists, even though with JBoss Cache 1.4.0, we've moved to a much more efficient and sophisticated marshalling mechanism.
  +			</para>	
  +			<para>
   +				JBoss Cache 1.4.0 and future releases of JBoss Cache will always be able to unmarshal data from previous versions of JBoss Cache.  For JBoss Cache 1.4.0 and future releases to marshal data in a format that is compatible with older versions, however, you would have to start JBoss Cache with the following configuration attribute:
  +				<programlisting><![CDATA[  
  +				<!-- takes values such as 1.2.3, 1.2.4 and 1.3.0 -->
  +				<attribute name="ReplicationVersion">1.2.4</attribute>
  +				]]>
  +				</programlisting>
  +			</para>	
  +        </section>			
     </chapter>
  \ No newline at end of file
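
  [The same attribute can presumably be set programmatically before the cache starts; a sketch,
  assuming a setter that mirrors the "ReplicationVersion" XML attribute above - the method name
  setReplicationVersion is an assumption, not confirmed by the text.]

    import org.jboss.cache.TreeCache;

    public class CompatibleStartup
    {
        public static void main(String[] args) throws Exception
        {
            TreeCache cache = new TreeCache();
            // Assumed setter mirroring the "ReplicationVersion" XML attribute:
            // marshal replication traffic in a 1.2.4-compatible wire format.
            cache.setReplicationVersion("1.2.4");
            cache.startService();
        }
    }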
  
  
  
  1.2.6.1   +13 -13    JBossCache/docs/TreeCache/en/modules/transactions.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: transactions.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/transactions.xml,v
  retrieving revision 1.2
  retrieving revision 1.2.6.1
  diff -u -b -r1.2 -r1.2.6.1
  --- transactions.xml	1 Feb 2006 13:31:14 -0000	1.2
  +++ transactions.xml	21 Jul 2006 02:51:29 -0000	1.2.6.1
  @@ -15,9 +15,9 @@
   	    <para>Lock owners are either transactions (call is made within the scope of an existing transaction)
   	    or threads (no transaction associated with the call).
   	    Regardless, a transaction or a thread is internally transformed into
  -	    an instance of GlobalTransaction, which is used as a globally unique ID
  +	    an instance of <literal>GlobalTransaction</literal>, which is used as a globally unique ID
   	    for modifications across a cluster. E.g. when we run a two-phase commit
  -	    protocol (see below) across the cluster, the GlobalTransaction uniquely
  +	    protocol (see below) across the cluster, the <literal>GlobalTransaction</literal> uniquely
   	    identifies the unit of work across a cluster.</para>
   	
   		<para>Locks can be read or write locks. Write locks serialize read and
  @@ -117,7 +117,7 @@
   	<section><title>Optimistic locking</title>
   	<para>The motivation for optimistic locking is to improve concurrency.  When a lot of threads have a lot of contention for access to the data tree, it can be inefficient to lock portions of the tree - for reading or writing - for the entire duration of a transaction as we do in pessimistic locking.  Optimistic locking allows for greater concurrency of threads and transactions by using a technique called data versioning, explained here.  Note that isolation levels (if configured) are ignored if optimistic locking is enabled.</para>
   	<section><title>Architecture</title>
   -	<para>Optimistic locking treats all method calls as transactional<footnote><para>Because of this requirement, you must always have a transaction manager configured when using optimistic locking.</para></footnote>.  Even if you do not invoke a call within the scope of an ongoing transaction, JBoss Cache creates an implicit transaction and commits this transaction when the invocation completes.  Each transaction maintains a transaction workspace, which contains a copy of the data used within the transaction.</para><para>For example, if a transaction calls get("/a/b/c"), nodes a, b and c are copied from the main data tree and into the workspace.  The data is versioned and all calls in the transaction work on the copy of the data rather than the actual data.  When the transaction commits, it's workspace is merged back into the underlying tree by matching versions.  If there is a version mismatch - such as when the actual data tree has a higher version than the workspace, perhaps if another transaction were to access the same data, change it and commit before the first transaction can finish - the transaction throws a RollbackException when committing and the commit fails.</para>
   +	<para>Optimistic locking treats all method calls as transactional<footnote><para>Because of this requirement, you must always have a transaction manager configured when using optimistic locking.</para></footnote>.  Even if you do not invoke a call within the scope of an ongoing transaction, JBoss Cache creates an implicit transaction and commits this transaction when the invocation completes.  Each transaction maintains a transaction workspace, which contains a copy of the data used within the transaction.</para><para>For example, if a transaction calls get("/a/b/c"), nodes a, b and c are copied from the main data tree and into the workspace.  The data is versioned and all calls in the transaction work on the copy of the data rather than the actual data.  When the transaction commits, its workspace is merged back into the underlying tree by matching versions.  If there is a version mismatch - such as when the actual data tree has a higher version than the workspace, perhaps if another transaction were to access the same data, change it and commit before the first transaction can finish - the transaction throws a <literal>RollbackException</literal> when committing and the commit fails.</para>
   	<para>Optimistic locking uses the same locks we speak of above, but the locks are only held for a very short duration - at the start of a transaction to build a workspace, and when the transaction commits and has to merge data back into the tree.</para>
   	<para>
   		So while optimistic locking may occasionally fail if version validations fail or may run slightly slower than pessimistic locking due to the inevitable overhead and extra processing of maintaining workspaces, versioned data and validating on commit, it does buy you a near-SERIALIZABLE degree of data integrity while maintaining a very high level of concurrency.
  @@ -158,11 +158,11 @@
   </orderedlist>
   	<para>
   		In order to do this, the cache has
  -    to be configured with an instance of a TransactionManagerLookup which
  -    returns a javax.transaction.TransactionManager.</para>
  +    to be configured with an instance of a <literal>TransactionManagerLookup</literal> which
  +    returns a <literal>javax.transaction.TransactionManager</literal>.</para>
   
  -    <para>JBoss Cache ships with JBossTransactionManagerLookup and GenericTransactionManagerLookup.  The JBossTransactionManagerLookup is able to bind to a running JBoss Application Server and retrieve a TransactionManager while the GenericTransactionManagerLookup is able to bind to most popular JEE application servers and provide the same functionality. A dummy implementation - DummyTransactionManagerLookup - is also provided, which may be used for standalone JBoss Cache applications and unit tests running outside a JEE Application Server. Being a dummy, however, this is just for demo and testing purposes and is not recommended for production use.</para>
  -<para>The implementation of the JBossTransactionManagerLookup is as follows:</para>
  +    <para>JBoss Cache ships with <literal>JBossTransactionManagerLookup</literal> and <literal>GenericTransactionManagerLookup</literal>.  The <literal>JBossTransactionManagerLookup</literal> is able to bind to a running JBoss Application Server and retrieve a <literal>TransactionManager</literal> while the <literal>GenericTransactionManagerLookup</literal> is able to bind to most popular Java EE application servers and provide the same functionality. A dummy implementation - <literal>DummyTransactionManagerLookup</literal> - is also provided, which may be used for standalone JBoss Cache applications and unit tests running outside a Java EE Application Server. Being a dummy, however, this is just for demo and testing purposes and is not recommended for production use.</para>
  +<para>The implementation of the <literal>JBossTransactionManagerLookup</literal> is as follows:</para>
   
       <programlisting>public class JBossTransactionManagerLookup implements TransactionManagerLookup {
   
  @@ -174,14 +174,14 @@
       }
   }</programlisting>
   
  -    <para>The implementation looks up the JBoss TransactionManager from the
  +    <para>The implementation looks up the JBoss Transaction Manager from
       JNDI and returns it.</para>
   
  -    <para>When a call comes in, the TreeCache gets the current transaction and
  +    <para>When a call comes in, the <literal>TreeCache</literal> gets the current transaction and
       records the modification under the transaction as key. (If there is no
       transaction, the modification is applied immediately and possibly
       replicated). So over the lifetime of the transaction all modifications
  -    will be recorded and associated with the transaction. Also, the TreeCache
  +    will be recorded and associated with the transaction. Also, the <literal>TreeCache</literal>
       registers with the transaction to be notified of transaction committed or
       aborted when it first encounters the transaction.</para>
   
  @@ -217,8 +217,8 @@
       <section>
         <title>Example</title>
   
  -      <para>Let's look at an example of how to use the standalone (e.g.
  -      outside an appserver) TreeCache with dummy transactions:</para>
  +      <para>Let's look at an example of how to use JBoss Cache in a standalone (i.e.
  +      outside an application server) fashion with dummy transactions:</para>
   
         <programlisting>Properties prop = new Properties();
   prop.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.cache.transaction.DummyContextFactory");
  @@ -254,7 +254,7 @@
         associate it with the current thread internally). Any methods invoked on
         the cache will now be collected and only applied when the transaction is
         committed. In the above case, we create a node "/classes/cs-101" and add
  -      2 elements to its HashMap. Assuming that the cache is configured to use
  +      2 elements to its map. Assuming that the cache is configured to use
         synchronous replication, on transaction commit the modifications are
         replicated. If there is an exception in the methods (e.g. lock
         acquisition failed), or in the two-phase commit protocol applying the
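
  [When configuring programmatically rather than via XML, the lookup class is supplied by name.
  A minimal sketch; the fully-qualified class name org.jboss.cache.JBossTransactionManagerLookup
  is an assumption, since the listing above omits the package.]

    import org.jboss.cache.TreeCache;

    public class TxConfigExample
    {
        public static void main(String[] args) throws Exception
        {
            TreeCache tree = new TreeCache();
            // Tells the cache how to find the javax.transaction.TransactionManager.
            // Class name assumed; inside JBoss AS this lookup reads
            // java:/TransactionManager from JNDI, as shown above.
            tree.setTransactionManagerLookupClass(
                "org.jboss.cache.JBossTransactionManagerLookup");
            tree.startService();
        }
    }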
  
  
  
  1.2.6.1   +1 -1      JBossCache/docs/TreeCache/en/modules/architecture.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: architecture.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/architecture.xml,v
  retrieving revision 1.2
  retrieving revision 1.2.6.1
  diff -u -b -r1.2 -r1.2.6.1
  --- architecture.xml	1 Feb 2006 13:31:14 -0000	1.2
  +++ architecture.xml	21 Jul 2006 02:51:29 -0000	1.2.6.1
  @@ -12,7 +12,7 @@
   	    </figure>
   
       <para>The architecture is shown above. The example shows 2 Java VMs, each
  -    has created an instance of TreeCache. These VMs can be located on the same
  +    has created an instance of <literal>TreeCache</literal>. These VMs can be located on the same
       machine, or on 2 different machines. The setup of the underlying group
       communication subsystem is done using <ulink url="http://www.jgroups.org">JGroups</ulink>.</para>
   
  
  
  
  1.1.6.1   +10 -10    JBossCache/docs/TreeCache/en/modules/compatibility.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: compatibility.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/compatibility.xml,v
  retrieving revision 1.1
  retrieving revision 1.1.6.1
  diff -u -b -r1.1 -r1.1.6.1
  --- compatibility.xml	21 Feb 2006 22:45:39 -0000	1.1
  +++ compatibility.xml	21 Jul 2006 02:51:29 -0000	1.1.6.1
  @@ -2,14 +2,14 @@
       <title>Version Compatibility and Interoperability</title>
       
       <para>While this is not absolutely guaranteed, generally speaking within a
  -    major version releases of JBossCache are meant to be compatible and 
  +    major version, releases of JBoss Cache are meant to be compatible and 
       interoperable.  Compatible in the sense that it should be possible to 
       upgrade an application from one version to another by simply replacing the 
       jars.  Interoperable in the sense that if two different versions of 
  -    JBossCache are used in the same cluster, they should be able to exchange 
  +    JBoss Cache are used in the same cluster, they should be able to exchange 
       replication and state transfer messages. Note however that interoperability
       requires use of the same JGroups version in all nodes in the cluster.  
  -    In most cases, the version of JGroups used by a version of JBossCache can 
  +    In most cases, the version of JGroups used by a version of JBoss Cache can 
       be upgraded.</para>
       
       <para>In the 1.2.4 and 1.2.4.SP1 releases, API compatibility and 
  @@ -22,22 +22,22 @@
       order to be sure you have no issues.</para>
       
       <para>Beginning in 1.2.4.SP2, a new configuration attribute 
  -    "ReplicationVersion" has been added.  This attribute needs to be set in 
  +    <literal>ReplicationVersion</literal> has been added.  This attribute needs to be set in 
       order to allow interoperability with previous releases.  The value should 
       be set to the release name of the version with which interoperability is 
       desired, e.g. "1.2.3".  If this attribute is set, the wire format of 
       replication and state transfer messages will conform to that understood 
  -    by the indicated release.  This mechanism allows us to improve JBossCache by 
  +    by the indicated release.  This mechanism allows us to improve JBoss Cache by 
       using more efficient wire formats while still providing a means to preserve
       interoperability.</para>
       
  -    <para>In a rare usage scenario, multiple different TreeCaches may
  +    <para>In a rare usage scenario, multiple different JBoss Cache instances may
       be operating on each node in a cluster, but not all need to interoperate
       with a version 1.2.3 cache, and thus some caches will not be configured
  -    with "ReplicationVersion" set to 1.2.3.  This can cause problems with
  +    with <literal>ReplicationVersion</literal> set to 1.2.3.  This can cause problems with
       serialization of Fqn objects.  If you are using this kind of configuration,
  -    are having problems and are unwilling to set "ReplicationVersion" to 
  -    "1.2.3" on all caches, a workaround is to set system property 
  -    "jboss.cache.fqn.123compatible" to "true".</para>
  +    are having problems and are unwilling to set <literal>ReplicationVersion</literal> to 
  +    <literal>1.2.3</literal> on all caches, a workaround is to set system property 
  +    <literal>jboss.cache.fqn.123compatible</literal> to <literal>true</literal>.</para>
       
   </chapter>
  \ No newline at end of file
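
  [For the workaround above, the property can be set on the command line or early in application
  code; setting it before any cache starts is a timing assumption, as the text does not specify
  when the property is read.]

    public class FqnCompatibility
    {
        public static void main(String[] args) throws Exception
        {
            // Equivalent to starting the JVM with -Djboss.cache.fqn.123compatible=true.
            // Set before any TreeCache instance is created, so Fqn serialization
            // can pick it up (timing assumed, see note above).
            System.setProperty("jboss.cache.fqn.123compatible", "true");
            // ... create and start TreeCache instances afterwards ...
        }
    }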
  
  
  
  1.2.6.1   +17 -17    JBossCache/docs/TreeCache/en/modules/basic_api.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: basic_api.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/basic_api.xml,v
  retrieving revision 1.2
  retrieving revision 1.2.6.1
  diff -u -b -r1.2 -r1.2.6.1
  --- basic_api.xml	1 Feb 2006 13:31:14 -0000	1.2
  +++ basic_api.xml	21 Jul 2006 02:51:29 -0000	1.2.6.1
  @@ -17,19 +17,19 @@
   tree.destroyService(); // not necessary, but is same as MBean lifecycle
   </programlisting>
   
  -    <para>The sample code first creates a TreeCache instance and then
  +    <para>The sample code first creates a <literal>TreeCache</literal> instance and then
       configures it. There is another constructor which accepts a number of
  -    configuration options. However, the TreeCache can be configured entirely
  +    configuration options. However, the <literal>TreeCache</literal> can be configured entirely
       from an XML file (shown later) and we don't recommend manual configuration
       as shown in the sample.</para>
   
       <para>The cluster name, properties of the underlying JGroups stack, and
       cache mode (synchronous replication) are configured first (a list of
  -    configuration options is shown later). Then we start the TreeCache. If
  -    replication is enabled, this will make the TreeCache join the cluster, and acquire initial state from an existing node.</para>
  +    configuration options is shown later). Then we start the <literal>TreeCache</literal>. If
  +    replication is enabled, this will make the <literal>TreeCache</literal> join the cluster, and acquire initial state from an existing node.</para>
   
       <para>Then we add 2 items into the cache: the first element creates a node
  -    "a" with a child node "b" that has a child node "c". (TreeCache by default
  +    "a" with a child node "b" that has a child node "c". (<literal>TreeCache</literal> by default
       creates intermediary nodes that don't exist). The key "name" is then
       inserted into the "/a/b/c" node, with a value of "Ben".</para>
   
  @@ -49,8 +49,8 @@
         </mediaobject>
       </figure>
   
  -    <para>The TreeCache has 4 nodes "a", "b", "c" and "d". Nodes "/a/b/c" has
  -    values "name" associated with "Ben" in its hashmap, and node "/a/b/c/d"
   +    <para>The <literal>TreeCache</literal> has 4 nodes "a", "b", "c" and "d". Node "/a/b/c" has
  +    values "name" associated with "Ben" in its map, and node "/a/b/c/d"
       has values "uid" and 322649.</para>
   
       <para>Each node can be retrieved by its absolute name (e.g. "/a/b/c") or
  @@ -64,9 +64,9 @@
       recursively from the cache. In this case, nodes "/a/b/c/d", "/a/b/c" and
       "/a/b" will be removed, leaving only "/a".</para>
   
  -    <para>Finally, the TreeCache is stopped. This will cause it to leave the
  +    <para>Finally, the <literal>TreeCache</literal> is stopped. This will cause it to leave the
       cluster, and every node in the cluster will be notified. Note that
  -    TreeCache can be stopped and started again. When it is stopped, all
  +    <literal>TreeCache</literal> can be stopped and started again. When it is stopped, all
       contents will be deleted. And when it is restarted, if it joins a cache
       group, the state will be replicated initially. So potentially you can
       recreate the contents.</para>
  @@ -86,8 +86,8 @@
   
       <para>In this example, we want to access a node that has information for
       employee with id=322649 in department with id=300. The string version
  -    needs 2 hashmap lookups on Strings, whereas the Fqn version needs to
  -    hashmap lookups on Integer. In a large hashtable, the hashCode() method
  +    needs two map lookups on Strings, whereas the Fqn version needs two
  +    map lookups on Integers. In a large hashtable, the hashCode() method
       for String may have collisions, leading to actual string comparisons.
       Also, clients of the cache may already have identifiers for their objects
       in Object form, and don't want to transform between Object and Strings,
  @@ -100,18 +100,18 @@
           <para>Plus their equivalent helper methods taking a String as node
           name.</para>
      </footnote> : <literal>put(Fqn node, Object key, Object value)</literal>
  -    and <literal>put(Fqn node, Hashmap values)</literal>. The former takes the
  +    and <literal>put(Fqn node, Map values)</literal>. The former takes the
    node name, creates it if it doesn't yet exist, and puts the key and value
  -    into the node's hashmap, returning the previous value. The latter takes a
  -    hashmap of keys and values and adds them to the node's hashmap,
  +    into the node's map, returning the previous value. The latter takes a
  +    map of keys and values and adds them to the node's map,
       overwriting existing keys and values. Content that is not in the new
  -    hashmap remains in the node's hashmap.</para>
  +    map remains in the node's map.</para>
   
       <para>There are 3 remove() methods: <literal>remove(Fqn node, Object
       key)</literal>, <literal>remove(Fqn node)</literal> and
       <literal>removeData(Fqn node)</literal>. The first removes the given key
       from the node. The second removes the entire node and all subnodes, and
  -    the third removes all elements from the given node's hashmap.</para>
  +    the third removes all elements from the given node's map.</para>
   
       <para>The get methods are: <literal>get(Fqn node)</literal> and
       <literal>get(Fqn node, Object key)</literal>. The former returns a
  @@ -121,7 +121,7 @@
         </footnote> object, allowing for direct navigation, the latter returns
       the value for the given key for a node.</para>
   
  -    <para>Also, the TreeCache has a number of getters and setters. Since the
  +    <para>Also, the <literal>TreeCache</literal> has a number of getters and setters. Since the
       API may change at any time, we recommend the Javadoc for up-to-date
       information.</para>
   </chapter>
  \ No newline at end of file
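
  [A short sketch of the Fqn-based overloads discussed above, using the department/employee
  example; Fqn.fromString is assumed here - your release may instead take an Object[] of name
  components, so check the Fqn Javadoc.]

    import java.util.HashMap;
    import java.util.Map;
    import org.jboss.cache.Fqn;
    import org.jboss.cache.TreeCache;

    public class FqnExample
    {
        public static void main(String[] args) throws Exception
        {
            TreeCache tree = new TreeCache();
            tree.startService();

            // Node for employee 322649 in department 300 (factory method assumed).
            Fqn fqn = Fqn.fromString("/300/322649");

            tree.put(fqn, "name", "Ben");          // put(Fqn, Object key, Object value)

            Map attrs = new HashMap();
            attrs.put("uid", new Integer(322649));
            tree.put(fqn, attrs);                  // merges entries into the node's map

            Object name = tree.get(fqn, "name");   // "Ben"
            tree.removeData(fqn);                  // clears the node's map, keeps the node
        }
    }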
  
  
  
  1.3.6.1   +4 -4      JBossCache/docs/TreeCache/en/modules/eviction_policies.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: eviction_policies.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/TreeCache/en/modules/eviction_policies.xml,v
  retrieving revision 1.3
  retrieving revision 1.3.6.1
  diff -u -b -r1.3 -r1.3.6.1
  --- eviction_policies.xml	15 Feb 2006 14:07:29 -0000	1.3
  +++ eviction_policies.xml	21 Jul 2006 02:51:29 -0000	1.3.6.1
  @@ -9,19 +9,19 @@
   
         <para>The design of the JBoss Cache eviction policy framework is based
         on the loosely coupled observable pattern (albeit still synchronous)
  -      where the eviction region manager will register a TreeCacheListener to
  +      where the eviction region manager will register a <literal>TreeCacheListener</literal> to
         handle cache events and relay them back to the eviction policies.
         Whenever a cached node is added, removed, evicted, or visited, the
  -      eviction registered TreeCacheListener will maintain state statistics and
  +      eviction registered <literal>TreeCacheListener</literal> will maintain state statistics and
         information will be relayed to each individual Eviction Region.
  -      Each Region can define a different EvictionPolicy implementation that
  +      Each Region can define a different <literal>EvictionPolicy</literal> implementation that
         will know how to correlate cache add, remove, and visit events back to a
         defined eviction behavior. It's the policy provider's responsibility to
         decide when to call back the cache "evict" operation.</para>
   
         <para>There is a single eviction thread (timer) that will run at a
         configured interval. This thread will make calls into each of the policy
  -      providers and inform it of any TreeCacheListener aggregated adds,
  +      providers and inform it of any <literal>TreeCacheListener</literal> aggregated adds,
         removes and visits (gets) to the cache during the configured interval.
         The eviction thread is responsible for kicking off the eviction policy
         processing (a single pass) for each configured eviction cache
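
  [The "configured interval" above comes from the eviction section of the cache's service XML.
  A sketch of loading such a file follows; the file name and the attribute names shown in the
  comments are assumptions based on standard TreeCache eviction examples, not on the text above.]

    import org.jboss.cache.PropertyConfigurator;
    import org.jboss.cache.TreeCache;

    public class EvictionExample
    {
        public static void main(String[] args) throws Exception
        {
            TreeCache tree = new TreeCache();
            // The service XML (names assumed) typically contains, among others:
            //   <attribute name="EvictionPolicyClass">org.jboss.cache.eviction.LRUPolicy</attribute>
            //   <attribute name="EvictionPolicyConfig">
            //     <config>
            //       <attribute name="wakeUpIntervalSeconds">5</attribute>  <!-- eviction timer -->
            //       <region name="/_default_">
            //         <attribute name="maxNodes">5000</attribute>
            //       </region>
            //     </config>
            //   </attribute>
            new PropertyConfigurator().configure(tree, "eviction-service.xml");
            tree.startService();
        }
    }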
  
  
  


