On 31 May 2007, at 14:41, Brian Stansberry wrote:
Very good point; I think some kind of control to prevent calls
going through when not STARTED is very important. Same issue
applies to replicated calls.
This opens a bunch of design questions, e.g. how do you handle open
transactions during a call to stop()? Give them a couple of seconds
to clear and then abort if needed? Or immediately abort? Vladimir
did some stuff in this area for FLUSH.
What has been done here, Vlad?
IMO all open transactions will fail and roll back if we have an
interceptor like I suggested, since when the TM calls
beforeCompletion() this propagates a prepare() up the stack, and
this will cause a failure.
The tricky thing, IMO, is if a tx starts, the cache is stopped (in-
memory state cleared, etc), the cache is started, and THEN the tx
completes. Now this becomes a problem.
Perhaps what we need is this: when the cache is stopped, all
registered Synchronisations are marked as invalid. Then even if the
cache restarts before the tx completes, the Synchronisations will
fail based on the state of the flag.
Does this sound like a valid approach?
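A minimal sketch of the flag idea. The interface below is modeled on
the JTA Synchronization contract (beforeCompletion()/afterCompletion(int));
the wrapper class and method names are hypothetical, not existing
JBoss Cache API:

```java
// Sketch: a Synchronization wrapper the cache marks invalid on stop().
// Modeled on javax.transaction.Synchronization; the wrapper itself is
// hypothetical.
interface Synchronization {
    void beforeCompletion();
    void afterCompletion(int status);
}

class InvalidatableSynchronization implements Synchronization {
    private final Synchronization delegate;
    private volatile boolean valid = true;

    InvalidatableSynchronization(Synchronization delegate) {
        this.delegate = delegate;
    }

    /** Called by the cache on stop(); any tx completing later will fail. */
    void invalidate() {
        valid = false;
    }

    public void beforeCompletion() {
        if (!valid)
            throw new IllegalStateException(
                "Cache was stopped; transaction must roll back");
        delegate.beforeCompletion();
    }

    public void afterCompletion(int status) {
        // Skip the delegate if the cache was stopped mid-transaction.
        if (valid) delegate.afterCompletion(status);
    }
}
```

Because the flag is per-registration, even a stop()/start() cycle before
the tx completes still fails the stale Synchronisation, which is exactly
the tricky case above.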
Manik Surtani wrote:
> Good point.
> Since we intend to bypass the interceptor chain for this call, I
> presume we do a root.removeChildrenDirect() and
> root.clearDataDirect(). What guarantees have we got that once
> stop() is called, no existing threads are working on the dataset? I
> suppose this is more of a lifecycle question - when the
> CacheStatus is set to STOPPING, do we have any checks that prevent
> invocations on the cache? Perhaps a check in the CallInterceptor
> before making a call to the cache, barfing with a CacheException
> if needed?
> - Manik
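The check Manik describes could look something like this. CacheStatus,
CallInterceptor and CacheException are real JBoss Cache names, but the
guard logic and class below are a simplified sketch, not the actual
implementation:

```java
// Sketch: a status guard that barfs with a CacheException before any
// invocation reaches the cache. Hypothetical simplification of the idea
// of a check in the CallInterceptor.
enum CacheStatus { STARTED, STOPPING, STOPPED }

class CacheException extends RuntimeException {
    CacheException(String msg) { super(msg); }
}

class StatusCheckInterceptor {
    private volatile CacheStatus status = CacheStatus.STARTED;

    void setStatus(CacheStatus s) { status = s; }

    /** Reject invocations unless the cache is fully STARTED. */
    void assertStarted() {
        if (status != CacheStatus.STARTED)
            throw new CacheException("Cache not in STARTED state: " + status);
    }
}
```

The same guard would naturally cover replicated calls arriving from the
cluster, as mentioned at the top of the thread.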
> On 30 May 2007, at 17:57, Brian Stansberry wrote:
>> Hmm, I was thinking destroy too, but here's a use case for stop.
>> Cache is a 2nd level cache. No state transfer on startup, they
>> just want a cold cache.
>> Some problem is happening so they stop the cache and then start
>> again. Now they're out of sync with the cluster, i.e. missed any
>> updates to their cached data that occurred while they were stopped.
>> Basically, the state transfer semantics imply that the in-memory
>> state is "consistent" with the cluster when start returns. Either
>> it's "consistent" because it's been transferred, or it's
>> "consistent" because it's empty and waiting to be populated from
>> a trusted source (shared cache loader or external source like
>> db). Leaving the in-memory state around after stop() breaks that.
>> Manik Surtani wrote:
>>> Makes sense to me, probably in destroy() rather than stop(), wdyt?
>>> On 30 May 2007, at 17:14, Brian Stansberry wrote:
>>>> Shouldn't destroy() clear all data from the in-memory cache? Or
>>>> maybe stop()? IMHO if you destroy a cache and then call create/
>>>> start again you shouldn't see the old data.
>>>> Not advocating doing anything fancy here, i.e. anything that
>>>> goes through the interceptor chain. Simple clearing of the
>>>> data and children maps on the root node.
>>>> --Brian Stansberry
>>>> Lead, AS Clustering
>>>> JBoss, a division of Red Hat
>>> --Manik Surtani
>>> Lead, JBoss Cache
>>> JBoss, a division of Red Hat
>>> Email: manik(a)jboss.org
>>> Telephone: +44 7786 702 706
>>> MSN: manik(a)surtani.org
>>> Yahoo/AIM/Skype: maniksurtani