[jboss-cvs] JBossCache/docs/JBossCache-UserGuide/en/modules ...

Brian Stansberry brian.stansberry at jboss.com
Thu May 31 01:21:56 EDT 2007


  User: bstansberry
  Date: 07/05/31 01:21:56

  Modified:    docs/JBossCache-UserGuide/en/modules  cache_loaders.xml
  Log:
  Rework the discussion of "Replicated Caches With Only One Cache Having A Store"
  
  Revision  Changes    Path
  1.14      +25 -7     JBossCache/docs/JBossCache-UserGuide/en/modules/cache_loaders.xml
  
  (In the diff below, changes in quantity of whitespace are not shown.)
  
  Index: cache_loaders.xml
  ===================================================================
  RCS file: /cvsroot/jboss/JBossCache/docs/JBossCache-UserGuide/en/modules/cache_loaders.xml,v
  retrieving revision 1.13
  retrieving revision 1.14
  diff -u -b -r1.13 -r1.14
  --- cache_loaders.xml	30 Apr 2007 17:36:48 -0000	1.13
  +++ cache_loaders.xml	31 May 2007 05:21:56 -0000	1.14
  @@ -1083,18 +1083,36 @@
               </mediaobject>
            </figure>
   
  -         <para>This is a similar case as the previous one, but here only one
  +         <para>This is a similar case to the previous one, but here only one
               node in the cluster interacts with a backend store via its
  -            cache loader. All other nodes perform in-memory replication. A use case
  -            for this is HTTP session replication, where all nodes replicate
  -            sessions in-memory, and - in addition - one node saves the sessions to
  -            a persistent backend store. Note that here it may make sense for the
  -            cache loader to store changes asynchronously, that is
  -            <emphasis>not</emphasis>
  +            cache loader. All other nodes perform in-memory replication. The idea
  +            here is that all application state is kept in memory in each node,
  +            with multiple caches making the data highly available.
  +            (This assumes that a client that needs the data is somehow able to
  +            fail over from one cache to another.) The single persistent backend
  +            store then provides a backup copy of the data in case all caches in 
  +            the cluster fail or need to be restarted.
  +         </para>
  +         <para>
  +            Note that here it may make sense for the cache loader to store 
  +            changes asynchronously, that is <emphasis>not</emphasis>
               on the caller's thread, in order not to slow
               down the cluster by accessing (for example) a database. This is a
               non-issue when using asynchronous replication.
            </para>
  +         <para>
  +            A weakness with this architecture is that the cache with access
  +            to the cache loader becomes a single point of failure. Furthermore,
  +            if the cluster is restarted, the cache with the cache loader must
  +            be started first (easy to forget).  A solution to the first problem
  +            is to configure a cache loader on each node, but set the
  +            <literal>singletonStore</literal> configuration to 
  +            <literal>true</literal>. With this kind of setup, one and only one
  +            node will always be writing to a persistent store. However, this 
  +            complicates the restart problem, as before restarting you need
  +            to determine which cache was writing before the shutdown/failure
  +            and then start that cache first.
  +         </para>
         </section>
   
         <section>
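
  For concreteness, the setup described above (a cache loader configured on every
  node, writing asynchronously, with the singleton-store option turned on) might
  look roughly like the following block in each node's configuration. This is only
  a sketch: the CacheLoaderConfiguration attribute wrapper and the boolean
  singletonStore element follow the 1.4-era cache loader format, the FileCacheLoader
  class and its location property are just one possible backend, and the path is
  made up; check the cache loader configuration reference for the release being
  documented.

     <attribute name="CacheLoaderConfiguration">
        <config>
           <passivation>false</passivation>
           <shared>false</shared>
           <cacheloader>
              <!-- Every node carries the same loader definition. -->
              <class>org.jboss.cache.loader.FileCacheLoader</class>
              <properties>
                 location=/var/backup/sessions
              </properties>
              <!-- Write to the store off the caller's thread, per the
                   note on asynchronous stores above. -->
              <async>true</async>
              <!-- Only one node at a time actually writes to the store. -->
              <singletonStore>true</singletonStore>
           </cacheloader>
        </config>
     </attribute>

  With a block like this on every node, one node at a time takes on the writer
  role, so losing that node does not leave the cluster without a path to the
  persistent store; the restart-ordering caveat discussed in the last paragraph
  of the diff still applies.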
  
  
  


