[infinispan-dev] Caches need be stopped in a specific order to respect cross-cache dependencies

Sanne Grinovero sanne at infinispan.org
Tue Aug 26 14:23:43 EDT 2014


On 26 August 2014 18:38, William Burns <mudokonman at gmail.com> wrote:
> On Mon, Aug 25, 2014 at 3:46 AM, Dan Berindei <dan.berindei at gmail.com> wrote:
>>
>>
>>
>> On Mon, Aug 25, 2014 at 10:26 AM, Galder Zamarreño <galder at redhat.com>
>> wrote:
>>>
>>>
>>> On 15 Aug 2014, at 15:55, Dan Berindei <dan.berindei at gmail.com> wrote:
>>>
>>> > It looks to me like you actually want a partial order between caches on
>>> > shutdown, so why not declare an explicit dependency (e.g.
>>> > manager.stopOrder(before, after)? We could even throw an exception if the
>>> > user tries to stop a cache manually in the wrong order (e.g.
>>> > TestingUtil.killCacheManagers).
>>> >
>>> > Alternatively, we could add an event CacheManagerStopEvent(pre=true) at
>>> > the cache manager level that is invoked before any cache is stopped, and you
>>> > could close all the indexes in that listener. The event could even be at the
>>> > cache level, if it would make things easier.
>
> I think something like this would be the simplest for now; exactly how
> it is done we can still decide.
>
>>>
>>> Not sure you need the listener event since we already have lifecycle event
>>> callbacks for external modules.
>>>
>>> IOW, couldn’t you do this cache stop ordering with an implementation of
>>> org.infinispan.lifecycle.ModuleLifecycle? On cacheStarting, you could maybe
>>> track each started cache and give it a priority, and then on
>>> cacheManagerStopping use that priority to close caches. Note: I’ve not
>>> tested this and I don’t know if the callbacks happen at the right time to
>>> allow this. Just thinking out loud.
>
> +1, this is a nice use of what is already in place.  The only issue I
> see here is that there is no ordering of the lifecycle callbacks when
> there is more than one callback, which could cause issues if users
> wanted to reference certain caches.
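
For illustration, Galder's idea could be sketched roughly like this (the class and method names below are invented for the sketch, not the actual org.infinispan.lifecycle.ModuleLifecycle signatures): track each cache with a priority as it starts, then compute a stop order from those priorities before the manager stops anything.

```java
// Hypothetical sketch: NOT the real ModuleLifecycle API.
// Each started cache is recorded with a priority; higher priority means
// it must be stopped later (e.g. index-storage caches outlive indexed caches).
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PriorityStopOrder {
    static final class Entry {
        final String cacheName;
        final int priority; // higher priority = stopped later
        Entry(String cacheName, int priority) {
            this.cacheName = cacheName;
            this.priority = priority;
        }
    }

    private final List<Entry> started = new ArrayList<>();

    // Would be invoked from a cacheStarting-style callback.
    public void cacheStarting(String cacheName, int priority) {
        started.add(new Entry(cacheName, priority));
    }

    // Would be invoked from a cacheManagerStopping-style callback, *before*
    // the manager stops any cache itself: lowest priority first, so that
    // high-priority caches (index storage) are the last to go.
    public List<String> stopOrder() {
        List<Entry> copy = new ArrayList<>(started);
        copy.sort(Comparator.comparingInt((Entry e) -> e.priority));
        List<String> order = new ArrayList<>();
        for (Entry e : copy) order.add(e.cacheName);
        return order;
    }
}
```

With such a helper, a module could give index-storage caches a higher priority than the indexed caches so they are stopped last.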
>
>>>
>>
>> Unfortunately ModuleLifecycle.cacheManagerStopping is only called _after_
>> all the caches have been stopped.
>
> This seems like a bug, not very nice for ordering of callback methods.
>
>>
>>
>>>
>>> Cheers,
>>>
>>> >
>>> > Cheers
>>> > Dan
>>> >
>>> >
>>> >
>>> > On Fri, Aug 15, 2014 at 3:29 PM, Sanne Grinovero <sanne at infinispan.org>
>>> > wrote:
>>> > The goal being to resolve ISPN-4561, I was thinking of exposing a very
>>> > simple reference counter in the AdvancedCache API.
>>> >
>>> > As you know the Query module - which triggers on indexed caches - can
>>> > use the Infinispan Lucene Directory to store its indexes in a
>>> > (different) Cache.
>>> > When the CacheManager is stopped, if the index storage caches are
>>> > stopped first and the indexed cache is stopped afterwards, the latter
>>> > might need to flush/close some pending state on the index, and this
>>> > results in an illegal operation as the storage is already shut down.
>>> >
>>> > We could either implement a complex dependency graph, or add a method
>>> > like:
>>> >
>>> >
>>> >   boolean incRef();
>>> >
>>> > on AdvancedCache.
>>> >
>>> > When the Cache#close() method is invoked, this will do an internal
>>> > decrement, and only when the count hits zero will it really close the cache.
>
> Unfortunately this won't work except in a simple dependency case (you
> depend on a cache, but no cache can depend on you).
>
> Say you have 3 caches (C1, C2, C3).
>
> The case is C2 depends on C1 and C3 depends on C2.  In this case both
> C1 and C2 would have a ref count value of 1 and C3 would have 0.  This
> would allow for C1 and C2 to both be eligible to be closed during the
> same iteration.

Yeah, people could use it the wrong way :-D

But you can increment in different patterns than what you described to
model a full graph: the important point is to allow users to define an
order in *some* way.
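
To make that concrete, here is a minimal sketch (an invented class, not the real AdvancedCache API) where a cache increments the counter of *every* cache it transitively depends on, so the counter equals the number of shutdown passes the cache must survive, and the chain C1 <- C2 <- C3 closes in the order C3, C2, C1:

```java
// Hypothetical sketch of the counter idea; class and method names are
// invented for illustration only.
import java.util.ArrayList;
import java.util.List;

public class RefCountedShutdown {
    static final class Cache {
        final String name;
        int refCount;
        boolean closed;
        Cache(String name) { this.name = name; }

        // A dependent cache increments the counter of each cache it
        // (transitively) relies on.
        void incRef() { refCount++; }

        // Decrements on each attempt; only really closes once the
        // counter has reached zero. Returns true when actually closed.
        boolean close() {
            if (closed) return true;
            if (refCount > 0) { refCount--; return false; }
            closed = true;
            return true;
        }
    }

    // The manager loops over all caches, retrying until every close()
    // reports success; returns the order in which caches really closed.
    static List<String> shutdown(List<Cache> caches) {
        List<String> closeOrder = new ArrayList<>();
        boolean allClosed;
        do {
            allClosed = true;
            for (Cache c : caches) {
                boolean wasClosed = c.closed;
                if (!c.close()) allClosed = false;
                else if (!wasClosed) closeOrder.add(c.name);
            }
        } while (!allClosed);
        return closeOrder;
    }
}
```

For the three-cache chain above, C1 starts with a count of 2 (held by C2 and, transitively, by C3), C2 with 1, and C3 with 0, so the loop needs three passes.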


> I think if we started doing dependencies we would really need to have
> some sort of graph to have anything more than the simple case.
>
> Do we know of other use cases where we may want a dependency graph
> explicitly?  It seems what you want is solvable with what is in place,
> it just has a bug :(

True, for my case a two-phase approach would be good enough *generally
speaking*, as we don't expect people to index stuff in a Cache which is
also used to store the index of a different Cache, but that is still a
"legal" configuration.
Applying Murphy's law, that means someone will try it out, and I'd
rather be safe about that.

It just so happens that the counter proposal is both trivial and
able to handle quite a long ordering chain.

I don't understand how it's solvable "with what's in place", could you
elaborate?

-- Sanne

>
>>> >
>>> > A CacheManager shutdown will loop through all caches and invoke
>>> > close() on all of them; the close() method should return something so
>>> > that the CacheManager shutdown loop understands whether each cache
>>> > really closed. If some did not, it will loop again through all
>>> > caches, repeating until all cache instances are really closed.
>>> > The return type of "close()" doesn't necessarily need to be exposed on
>>> > the public API; it could be an internal-only variant.
>>> >
>>> > Could we do this?
>>> >
>>> > --Sanne


