[infinispan-dev] Supporting notifications for entries expired while in the cache store - ISPN-694

Galder Zamarreño galder at redhat.com
Mon May 27 11:56:37 EDT 2013


On May 23, 2013, at 8:28 PM, Paul Ferraro <paul.ferraro at redhat.com> wrote:

> On Wed, 2013-05-22 at 18:03 +0200, Galder Zamarreño wrote:
>> On May 21, 2013, at 7:42 PM, Paul Ferraro <paul.ferraro at redhat.com> wrote:
>> 
>>> On Tue, 2013-05-21 at 17:07 +0200, Galder Zamarreño wrote:
>>>> On May 6, 2013, at 2:20 PM, Mircea Markus <mmarkus at redhat.com> wrote:
>>>> 
>>>>> 
>>>>> On 3 May 2013, at 20:15, Paul Ferraro wrote:
>>>>> 
>>>>>> Is it essential?  No - but it would simplify things on my end.
>>>>>> If Infinispan can't implement expiration notifications, then I am forced
>>>>>> to use immortal cache entries and perform expiration myself.  To do
>>>>>> this, I have to store meta information about the cache entry along with
>>>>>> my actual cache values, which normally I would get for free via mortal
>>>>>> cache entries.
>>>>> 
>>>>> In the scope of 5.2, what Galder suggested was to fully support notifications for the entries in memory. In order to fully support your use case you'd need to add some code to trigger notifications in the cache store as well - I think that shouldn't be too difficult. What cache store implementation are you using anyway?
>>>> 
>>>> ^ Personally, I'd do in-memory entry expiration notifications for 5.2 and leave cache store based entry expiration for 6.0, when we'll revisit the cache store API and can address cache store based expiration notifications properly.
>>>> 
>>>> Agree everyone?
>>> 
>>> That's fine.
>>> Just to clarify, the end result is that an expiration notification would
>>> only ever be emitted on 1 node per cache entry, correct?  That is to
>>> say, for a given expired cache entry, the corresponding isOriginLocal()
>>> would only ever return true on one node, yes?  I just want to make sure
>>> that each node won't emit a notification for the same cache entry that
>>> was discovered to have expired.
>> 
>> ^ Hmmmm, if you want it to work that way it might need some thinking, and it could be expensive to achieve...
>> 
>> Expiration happens when an entry is retrieved from the cache and is found to be expired, so it's local. In other words, when each node accesses the entry and it has expired, that node will send a notification to any listener registered locally, indicating that the origin is local. The same thing happens when the eviction thread calls purgeExpired.
> 
> That's precisely what I meant, actually.

Ah ok. As you can imagine, given the current logic, there will never be an expiration notification generated with isOriginLocal=false (not that I can think of one right now, at least -> good thing to note in the docs for this, btw)
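
Just to make the 5.2 part concrete, a listener would look more or less like the sketch below. Bear in mind the annotation and event names (@CacheEntryExpired, CacheEntryExpiredEvent) are only my guess at what the API could end up looking like, nothing is final:

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryExpired;
import org.infinispan.notifications.cachelistener.event.CacheEntryExpiredEvent;

@Listener
public class ExpirationLogger {

   // Invoked when an entry is found to have expired, either on access or
   // when purgeExpired kicks in. Given the current logic, isOriginLocal()
   // should always return true here.
   @CacheEntryExpired
   public void entryExpired(CacheEntryExpiredEvent<Object, Object> event) {
      System.out.println("Expired key " + event.getKey()
            + ", originLocal=" + event.isOriginLocal());
   }
}

// Registered with: cache.addListener(new ExpirationLogger());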

> 
>> The advantage here is that expiration is local, and hence fast. No need to communicate with other nodes. Your suggestion might require nodes to interact with each other on expiration, to find out where the expiration started, so as to differentiate between originLocal=true/false. I'm not sure we want to do this…
> 
> In our use case, the entry retrieval is done within the context of
> pessimistic locking, so a node would already have exclusive access to
> the cache key.

^ So, you've acquired a lock on the key or something? Remember that entry retrieval is lock-free on its own.
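
By "acquired a lock" I mean something along these lines, i.e. an explicit lock() call within a transaction on a cache configured with pessimistic locking (just a sketch, assuming a transactional cache):

import javax.transaction.TransactionManager;

import org.infinispan.Cache;

public class PessimisticRead {

   // Begin a transaction, explicitly lock the key, then read it. The lock()
   // call is what gives exclusive access to the key until the transaction
   // commits or rolls back.
   public static Object readWithLock(Cache<String, Object> cache, String key)
         throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      try {
         cache.getAdvancedCache().lock(key);
         Object value = cache.get(key);
         tm.commit();
         return value;
      } catch (Exception e) {
         tm.rollback();
         throw e;
      }
   }
}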

>  If I understand you correctly, this would inherently
> prevent multiple local expiration notifications from every node.  

^ Actually, no, because unless there's been a write operation on the key, or lock() has been called for that key, in the batch or transaction, no locks will be acquired on the key. When the key is read from the data container, it's expired there and then, without acquiring any locks. There might be some locks acquired locally at the underlying CHM segment when deleting the entry from the inner container (this varies depending on the CHM used underneath), but these won't affect other nodes. They would only affect concurrent threads on the same node that read the same key and concurrently try to expire the entry...
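
To illustrate the point, the read path does roughly the following. This is grossly simplified, not the actual code, and the ExpiryNotifier interface is made up for the sketch:

import org.infinispan.container.DataContainer;
import org.infinispan.container.entries.InternalCacheEntry;

public class ReadPathSketch {

   // Look up a key and expire it on the spot if needed. The only locking
   // involved is whatever the data container does internally on its local
   // CHM segment; nothing cluster-wide.
   static Object getExpiringOnAccess(DataContainer container, Object key,
         ExpiryNotifier notifier) {
      InternalCacheEntry ice = container.get(key);
      if (ice != null && ice.isExpired()) {
         container.remove(key);
         // Local-only notification; with the current logic the origin is
         // always local.
         notifier.notifyExpired(key, ice.getValue());
         return null;
      }
      return ice == null ? null : ice.getValue();
   }

   // Stand-in for the real notifier, just for this sketch.
   interface ExpiryNotifier {
      void notifyExpired(Object key, Object value);
   }
}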

> In
> this case, if a concurrent entry retrieval occurred on some other node -
> by the time that node got access to the lock, it would already have been
> removed from the cache store, thus no local expiration notification
> would be emitted from that node, correct?
> 
> My concern is about any kind of auto-purging of expired cache store
> entries within the cache store itself.  I imagine this would operate
> outside the context of any such locking, thus the potential for local
> expiration notifications on multiple nodes for the same cache entry.

Hmmm, not sure I understand this fully, but let me have a go: the situation for entries expired in cache stores depends slightly on the cache store itself and does not rely on the in-memory level locking (unless it cooperates within a transaction). Some stores, like the FCS, acquire some locks (locally) even for reading from the store, so if an entry that's expired is requested, the expiration will happen within those locks.

There is certainly a risk that an entry expires right in the cache store and is never loaded into memory, and hence no notification is sent for it. But one thing is for sure: if the entry is in memory, the notification will be sent, because the in-memory container is checked before going to the cache store, and no concurrent expiration in the cache store of another node will stop in-memory expirations from happening (unless it's a bug).
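
The ordering is roughly this (again a simplification under my assumptions, not the real CacheLoaderInterceptor code):

import org.infinispan.container.DataContainer;
import org.infinispan.container.entries.InternalCacheEntry;
import org.infinispan.loaders.CacheLoaderException;
import org.infinispan.loaders.CacheStore;

public class LookupOrderSketch {

   // The in-memory container is always consulted before the store.
   static InternalCacheEntry lookup(DataContainer container, CacheStore store,
         Object key) throws CacheLoaderException {
      InternalCacheEntry inMemory = container.get(key);
      if (inMemory != null) {
         // In-memory hit: the expiry check (and any expiration notification)
         // happens here, no matter what the store or other nodes do
         // concurrently.
         return inMemory;
      }
      // Miss: fall through to the store. If the store has already purged the
      // expired entry on its own, nothing comes back and no expiration
      // notification is ever emitted for it.
      return store.load(key);
   }
}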

Cheers, 

> 
>> Cheers,
>> 
>>> 
>>>>>> 
>>>>>> So, it would be nice to have.  If I have to wait for 6.0 for this,
>>>>>> that's ok.
>>>>>> 
>>>>>> On Thu, 2013-05-02 at 17:03 +0200, Galder Zamarreño wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> Re: https://issues.jboss.org/browse/ISPN-694
>>>>>>> 
>>>>>>> We've got a little problem here. Paul requires that, for entries that
>>>>>>> might have expired while in the cache store, we send expiration
>>>>>>> notifications for them when they are loaded.
>>>>>>> 
>>>>>>> The problem is that expiration checking is currently done in the
>>>>>>> actual cache store implementations, which makes supporting this (even
>>>>>>> outside the purgeExpired business) specific to each cache store. Not
>>>>>>> ideal.
>>>>>>> 
>>>>>>> The alternative would be for CacheLoaderInterceptor to load the
>>>>>>> entries, do the checks and then remove them accordingly. The big
>>>>>>> problem here is that you're imposing a single way of handling
>>>>>>> expiration on all cache store implementations, and some might be able
>>>>>>> to do these checks and removals more efficiently if left to do it
>>>>>>> themselves. For example, having to load all entries and then decide
>>>>>>> which ones to expire might require a lot of work, instead of
>>>>>>> potentially communicating directly with the cache store (imagine a
>>>>>>> remote cache store…) and asking it to return only those entries that
>>>>>>> have not yet expired.
>>>>>>> 
>>>>>>> However, even if a cache store can do that, it would lead to loading
>>>>>>> only those entries not expired, but then how do you send the
>>>>>>> notifications if those expired entries have been filtered out? You
>>>>>>> probably need multiple load methods here...
>>>>>>> 
>>>>>>> @Paul, do you really need this for your use case?
>>>>>>> 
>>>>>>> The simplest thing to do might be to go for option 1 and, for the
>>>>>>> moment, let each cache store send notifications for expired entries.
>>>>>>> Then, in 6.0, revise not only the API for purgeExpired but also the
>>>>>>> API for load/loadAll(), so that, if any expiry listeners are in place,
>>>>>>> a different method can be called on the cache store signalling it to
>>>>>>> return all entries, both expired and non-expired, and the
>>>>>>> CacheLoaderInterceptor can then send the notifications from a central
>>>>>>> location.
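
^ For illustration, the kind of load/loadAll() extension I have in mind for 6.0 would be something along these lines. The interface and the loadAllIncludingExpired() method are invented for illustration, nothing is decided:

import java.util.Set;

import org.infinispan.container.entries.InternalCacheEntry;
import org.infinispan.loaders.CacheLoaderException;

// Hypothetical sketch only; loadAllIncludingExpired() does not exist today.
public interface ExpiryAwareCacheLoader {

   // What stores do today: expired entries are filtered out before returning.
   Set<InternalCacheEntry> loadAll() throws CacheLoaderException;

   // Possible 6.0 addition: only called when expiry listeners are registered,
   // so the CacheLoaderInterceptor can see the expired entries too and emit
   // the expiration notifications itself, from a central place.
   Set<InternalCacheEntry> loadAllIncludingExpired() throws CacheLoaderException;
}
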
>>>>>>> 
>>>>>>> Thoughts?
>>>>>>> 
>>>>>>> Cheers,
>>>>> 
>>>>> Cheers,
>>>>> -- 
>>>>> Mircea Markus
>>>>> Infinispan lead (www.infinispan.org)


--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org



