<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Thu, Jul 23, 2015 at 12:54 PM Radim Vansa <<a href="mailto:rvansa@redhat.com">rvansa@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">When you're into the stores & expiration: any plans for handling [1]?<br>
<br>
Radim<br>
<br>
[1] <a href="https://issues.jboss.org/browse/ISPN-3202" rel="noreferrer" target="_blank">https://issues.jboss.org/browse/ISPN-3202</a></blockquote><div><br></div><div>I am not planning on it. This is yet another thing I wasn't aware of that makes me dislike maxIdle.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
On 07/23/2015 02:37 PM, William Burns wrote:<br>
> I actually found another hiccup with cache stores. It seems that<br>
> currently we only provide a callback when an entry is expired from a<br>
> cache store by the reaper thread [1]. However we don't provide such a<br>
> callback on a read which finds an expired entry and wants to remove<br>
> it [2].<br>
><br>
> Interestingly, our cache stores in general don't even expire entries<br>
> on load, with the few exceptions below:<br>
><br>
> 1. SingleCacheStore returns true for an expired entry on contains<br>
> 2. SingleCacheStore removes expired entries on load<br>
> 3. RemoteStore does not need to worry about expiration since it is<br>
> handled by the remote server.<br>
><br>
> All of the other stores I have looked at properly return false for<br>
> expired entries and only purge elements from within the reaper thread.<br>
><br>
> I propose we change SingleCacheStore to behave as the other cache<br>
> stores do. This doesn't require any API changes. We would then rely<br>
> on the store expiring elements only during the reaper thread run or<br>
> when the element expires in memory. We should also guarantee that the<br>
> reaper thread is enabled whenever a cache store is used (throw an<br>
> exception at init if a store is present but the reaper is not<br>
> enabled). Should I worry about the case where only a RemoteStore is<br>
> used (this seems a bit fragile)?<br>
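><br>
> To make that reaper guarantee concrete, this is roughly the<br>
> combination we would be validating at init time (a sketch using the<br>
> existing programmatic configuration API; the init-time check itself<br>
> is the part that does not exist yet):<br>
<pre>
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

ConfigurationBuilder builder = new ConfigurationBuilder();
// A store is attached, so we would require the reaper to be enabled.
builder.persistence().addSingleFileStore().location("/tmp/store");
// Reaper wake-up interval in milliseconds; a negative value disables
// the reaper, which is the combination we would reject at init time.
builder.expiration().wakeUpInterval(60_000);
Configuration config = builder.build();
</pre>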
><br>
> To be honest we would need to revamp the CacheLoader/Writer API at a<br>
> later point anyway to allow for values to be optionally provided on<br>
> expiration, so I would say to do that in addition to allowing<br>
> loaders/stores to expire on access (see the sketch below).<br>
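><br>
> A rough sketch of what that API evolution might look like (purely<br>
> hypothetical; today's AdvancedCacheWriter.PurgeListener only hands<br>
> back the key):<br>
<pre>
import java.util.concurrent.Executor;

// Hypothetical evolution of the purge callback: hand back the full
// expired entry (key and value) so expiration events can carry both.
interface ExpirationAwareWriter {

   void purge(Executor executor, ExpiredEntryListener listener);

   interface ExpiredEntryListener {
      // Called once for each expired entry the purge removes.
      void entryExpired(Object key, Object value);
   }
}
</pre>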
><br>
> [1]<br>
> <a href="https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/spi/AdvancedCacheWriter.java#L29" rel="noreferrer" target="_blank">https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/spi/AdvancedCacheWriter.java#L29</a><br>
><br>
> [2]<br>
> <a href="https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/spi/CacheLoader.java#L34" rel="noreferrer" target="_blank">https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/persistence/spi/CacheLoader.java#L34</a><br>
><br>
> ---------- Forwarded message ---------<br>
> From: William Burns <<a href="mailto:mudokonman@gmail.com" target="_blank">mudokonman@gmail.com</a>><br>
> Date: Wed, Jul 22, 2015 at 11:06 AM<br>
> Subject: Re: [infinispan-dev] Strict Expiration<br>
> To: infinispan -Dev List <<a href="mailto:infinispan-dev@lists.jboss.org" target="_blank">infinispan-dev@lists.jboss.org</a>><br>
><br>
><br>
> On Wed, Jul 22, 2015 at 10:53 AM Dan Berindei <<a href="mailto:dan.berindei@gmail.com" target="_blank">dan.berindei@gmail.com</a>> wrote:<br>
><br>
> Is it possible/feasible to skip the notification from the backups to<br>
> the primary (and back) when there is no clustered expiration listener?<br>
><br>
><br>
> Unfortunately there is no way to distinguish whether a listener is for<br>
> create, modify, remove or expiration events. So this would only work<br>
> if there are no clustered listeners at all.<br>
><br>
> That said, this should be feasible and shouldn't be hard to add.<br>
><br>
> The only thing I would have to figure out is what happens in the case<br>
> of a rehash, where the node that removed the value is now the primary<br>
> owner, some nodes still have the old value, and someone registers an<br>
> expiration listener. I am thinking I should only raise the event if<br>
> the primary owner still has the value.<br>
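><br>
> For the "no clustered listeners" check itself I am picturing something<br>
> along these lines (a sketch: getListeners() and the @Listener<br>
> annotation are existing API, the helper method is hypothetical):<br>
<pre>
import org.infinispan.Cache;
import org.infinispan.notifications.Listener;

class ExpirationListenerCheck {
   // Hypothetical helper: only send the backup-to-primary expiration
   // notification when a clustered listener could actually observe it.
   static boolean hasClusteredListener(Cache cache) {
      for (Object listener : cache.getListeners()) {
         Listener l = listener.getClass().getAnnotation(Listener.class);
         // We cannot tell which event types a listener wants, so any
         // clustered listener forces the notification path.
         if (l != null && l.clustered()) {
            return true;
         }
      }
      return false;
   }
}
</pre>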
><br>
><br>
> Dan<br>
><br>
><br>
> On Tue, Jul 21, 2015 at 5:25 PM, William Burns <<a href="mailto:mudokonman@gmail.com" target="_blank">mudokonman@gmail.com</a>> wrote:<br>
> > So I wanted to sum up what the plan looks like for cluster<br>
> > expiration in ISPN 8.<br>
> ><br>
> > First off, to remove any ambiguity: using maxIdle with a clustered<br>
> > cache will provide undefined and unsupported behavior. It can and<br>
> > will expire entries on a single node without notifying other cluster<br>
> > members (essentially it will operate unchanged, as it does today).<br>
> ><br>
> > This leaves me to talk solely about lifespan cluster expiration.<br>
> ><br>
> > Lifespan expiration events are fired by the primary owner of an<br>
> > expired key:<br>
> ><br>
> > - when accessing an expired entry;<br>
> ><br>
> > - by the reaper thread.<br>
> ><br>
> > If the expiration is detected by a node other than the primary<br>
> > owner, an expiration command is sent to the primary owner and null<br>
> > is returned immediately, without waiting for a response.<br>
> ><br>
> > Expiration event listeners follow the usual rules for sync/async: in<br>
> > the case of a sync listener, the handler is invoked while holding the<br>
> > lock, whereas an async listener will not hold locks.<br>
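> ><br>
> > As a concrete illustration, a clustered expiration listener under<br>
> > these rules might look like this (a sketch against the listener API<br>
> > this work adds for 8.0):<br>
<pre>
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryExpired;
import org.infinispan.notifications.cachelistener.event.CacheEntryExpiredEvent;

// sync = false: the handler runs asynchronously, without holding the
// expired key's lock.
@Listener(clustered = true, sync = false)
public class ExpirationLogger {

   @CacheEntryExpired
   public void entryExpired(CacheEntryExpiredEvent event) {
      // The value may be null when the expiration came from a store.
      System.out.println("Expired: " + event.getKey());
   }
}
</pre>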
> ><br>
> > It is desirable for expiration events to contain both the key and<br>
> > value. However, currently cache stores do not provide the value when<br>
> > they expire entries. Thus we can only guarantee the value is present<br>
> > when an in-memory expiration event occurs. We could plan on adding<br>
> > this later.<br>
> ><br>
> > Also, as you may have guessed, this doesn't touch strict expiration,<br>
> > which I think we have concluded should only matter for maxIdle; as<br>
> > such it is not explored in this iteration.<br>
> ><br>
> > Let me know if you guys think this approach is okay.<br>
> ><br>
> > Cheers,<br>
> ><br>
> > - Will<br>
> ><br>
> > On Tue, Jul 14, 2015 at 1:51 PM Radim Vansa <<a href="mailto:rvansa@redhat.com" target="_blank">rvansa@redhat.com</a>> wrote:<br>
> >><br>
> >> Yes, I know about [1]. I've worked around that by storing a<br>
> >> timestamp in the entry as well; when a new record is added, the<br>
> >> 'expired' invalidations are purged. But I can't purge an entry I<br>
> >> don't access - Infinispan needs to handle that internally.<br>
> >><br>
> >> Radim<br>
> >><br>
> >> [1] <a href="https://hibernate.atlassian.net/browse/HHH-6219" rel="noreferrer" target="_blank">https://hibernate.atlassian.net/browse/HHH-6219</a><br>
> >><br>
> >> On 07/14/2015 05:45 PM, Dennis Reed wrote:<br>
> >> > On 07/14/2015 11:08 AM, Radim Vansa wrote:<br>
> >> >> On 07/14/2015 04:19 PM, William Burns wrote:<br>
> >> >>><br>
> >> >>> On Tue, Jul 14, 2015 at 9:37 AM William Burns <<a href="mailto:mudokonman@gmail.com" target="_blank">mudokonman@gmail.com</a>> wrote:<br>
> >> >>><br>
> >> >>> On Tue, Jul 14, 2015 at 4:41 AM Dan Berindei <<a href="mailto:dan.berindei@gmail.com" target="_blank">dan.berindei@gmail.com</a>> wrote:<br>
> >> >>><br>
> >> >>> Processing expiration only on the reaper thread sounds nice, but<br>
> >> >>> I have one reservation: processing 1 million entries to see that<br>
> >> >>> 1 of them is expired is a lot of work, and in the general case we<br>
> >> >>> will not be able to ensure an expiration precision of less than 1<br>
> >> >>> minute (maybe more, with a huge SingleFileStore attached).<br>
> >> >>><br>
> >> >>><br>
> >> >>> This isn't much different than before. The only difference is<br>
> >> >>> that if a user touched a value after it expired it wouldn't show<br>
> >> >>> up (which is unlikely with maxIdle especially).<br>
> >> >>><br>
> >> >>><br>
> >> >>> What happens to users who need better precision? In particular,<br>
> >> >>> I know some JCache tests were failing because HotRod was only<br>
> >> >>> supporting 1-second resolution instead of the 1-millisecond<br>
> >> >>> resolution they were expecting.<br>
> >> >>><br>
> >> >>><br>
> >> >>> JCache is an interesting piece. The thing about JCache is that<br>
> >> >>> the spec is only defined for local caches. However I wouldn't<br>
> >> >>> want to muddy the waters by having it behave differently for<br>
> >> >>> local/remote. In the JCache scenario we could add an interceptor<br>
> >> >>> to prevent it returning such values (we do something similar<br>
> >> >>> already for events). JCache behavior vs ISPN behavior seems a bit<br>
> >> >>> easier to differentiate. But as you are getting at, either way is<br>
> >> >>> not very appealing.<br>
> >> >>><br>
> >> >>><br>
> >> >>><br>
> >> >>> I'm even less convinced about the need to guarantee that a<br>
> >> >>> clustered expiration listener will only be triggered once, and<br>
> >> >>> that the entry must be null everywhere after that listener was<br>
> >> >>> invoked. What's the use case?<br>
> >> >>><br>
> >> >>><br>
> >> >>> Maybe Tristan would know more and can answer. To be honest this<br>
> >> >>> work seems fruitless unless we know what our end users want here.<br>
> >> >>> Spending time on something only for it to be thrown out is never<br>
> >> >>> fun :(<br>
> >> >>><br>
> >> >>> And the more I thought about this the more I question the<br>
> >> >>> validity of maxIdle itself. It seems like a very poor way to<br>
> >> >>> prevent memory exhaustion, which eviction does in a much better<br>
> >> >>> way and with much more flexible algorithms. Does anyone know what<br>
> >> >>> maxIdle would be used for that wouldn't be covered by eviction?<br>
> >> >>> The only thing I can think of is cleaning up the cache store as<br>
> >> >>> well.<br>
> >> >>><br>
> >> >>><br>
> >> >>> Actually I guess for session/authentication-related information<br>
> >> >>> this would be important. However maxIdle isn't really as usable<br>
> >> >>> in that case, since most likely you would have a sticky session<br>
> >> >>> going back to the same node, which means you would never refresh<br>
> >> >>> the last-used date on the copies (current implementation).<br>
> >> >>> Without cluster expiration you could lose that session<br>
> >> >>> information on a failover very easily.<br>
> >> >> I would say that maxIdle can be used for memory management as a<br>
> >> >> kind of WeakHashMap - e.g. in 2LC, maxIdle is used to store some<br>
> >> >> record for a short while (a regular transaction lifespan, ~ seconds<br>
> >> >> to minutes), and the record is regularly removed explicitly.<br>
> >> >> However, maxIdle makes sure that we don't leak records in this<br>
> >> >> cache if something goes wrong and the explicit remove does not<br>
> >> >> occur.<br>
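> >> >><br>
> >> >> (For illustration, that usage boils down to a per-entry put with<br>
> >> >> maxIdle only - a sketch, with an illustrative 2-minute idle time:)<br>
<pre>
import java.util.concurrent.TimeUnit;

import org.infinispan.Cache;

class PendingPuts {
   // Sketch: keep the record only while it is being actively used.
   // lifespan -1 means "no fixed lifespan"; maxIdle reclaims the entry
   // if the explicit remove is ever skipped.
   static void track(Cache cache, String key, Object record) {
      cache.put(key, record, -1, TimeUnit.SECONDS, 2, TimeUnit.MINUTES);
   }
}
</pre>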
> >> > Note that just relying on maxIdle doesn't guarantee you won't leak<br>
> >> > records in this use case (specifically with the way the current<br>
> >> > hibernate-infinispan 2LC implementation uses it).<br>
> >> ><br>
> >> > Hibernate-infinispan adds entries to its own Map stored in<br>
> >> > Infinispan, and expects maxIdle to remove the map if it skips a<br>
> >> > remove. But in a recent case we found that, due to frequent accesses<br>
> >> > to that same map, the entries never idle out and it ended up in an<br>
> >> > OOME.<br>
> >> ><br>
> >> > -Dennis<br>
> >> ><br>
> >> >> I can guess how long the transaction takes, but not how many<br>
> >> >> parallel transactions there are. With eviction algorithms (where I<br>
> >> >> am not sure about the exact guarantees) I can set the cache to not<br>
> >> >> hold more than N entries, but I can't know for sure that my record<br>
> >> >> does not suddenly get evicted after a shorter period, possibly<br>
> >> >> causing some inconsistency.<br>
> >> >><br>
> >> >> So this is similar to WeakHashMap in removing the key "when it<br>
> >> >> can't be used anymore", because I know that the transaction will<br>
> >> >> finish before the deadline. I don't care about the exact size, I<br>
> >> >> don't want to tune that, I just don't want to leak.<br>
> >> >><br>
> >> >> From my POV the non-strict maxIdle and strict expiration<br>
> would be a<br>
> >> >> nice compromise.<br>
> >> >><br>
> >> >> Radim<br>
> >> >><br>
> >> >>> Note that this would make the reaper thread less efficient: with<br>
> >> >>> numOwners=2 (best case), half of the entries that the reaper<br>
> >> >>> touches cannot be expired, because the node isn't the primary<br>
> >> >>> node. And to make matters worse, the same reaper thread would<br>
> >> >>> have to perform a (synchronous?) RPC for each entry to ensure it<br>
> >> >>> expires everywhere.<br>
> >> >>><br>
> >> >>><br>
> >> >>> I have debated about this; it could be something like a sync<br>
> >> >>> removeAll which has a special marker to tell that it is due to<br>
> >> >>> expiration (which would raise listeners there), while also sending<br>
> >> >>> a cluster expiration event to the other non-owners.<br>
> >> >>><br>
> >> >>><br>
> >> >>> For maxIdle I'd like to know more information about how exactly<br>
> >> >>> the owners would coordinate to expire an entry. I'm pretty sure<br>
> >> >>> we cannot avoid ignoring some reads (expiring an entry<br>
> >> >>> immediately after it was read), and ensuring that we don't<br>
> >> >>> accidentally extend an entry's life (like the current code does,<br>
> >> >>> when we transfer an entry to a new owner) also sounds<br>
> >> >>> problematic.<br>
> >> >>><br>
> >> >>><br>
> >> >>> For lifespan it is simple: the primary owner just expires the<br>
> >> >>> entry when it expires there. There is no coordination needed in<br>
> >> >>> this case; it just sends the expired remove to the other owners<br>
> >> >>> etc.<br>
> >> >>><br>
> >> >>> Max idle is more complicated, as we all know. The primary owner<br>
> >> >>> would send a request for the last-used time for a given key or<br>
> >> >>> set of keys. Then the owner would take those times and check for<br>
> >> >>> a newer access it isn't aware of. If there isn't one, it would<br>
> >> >>> send a remove command for the key(s). If there is a newer access,<br>
> >> >>> the owner would instead send the last-used time to all of the<br>
> >> >>> owners. The expiration obviously would have a window in which a<br>
> >> >>> read occurring after a node sent its response could be ignored.<br>
> >> >>> This could be resolved by using some sort of 2PC and blocking<br>
> >> >>> reads during that period, but I would say it isn't worth it. In<br>
> >> >>> pseudocode the primary's side would be roughly the sketch below.<br>
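> >> >>><br>
> >> >>> (A sketch only - none of these commands or helpers exist yet;<br>
> >> >>> this is just the shape of the protocol:)<br>
<pre>
import java.util.List;

// Hypothetical reaper-side maxIdle coordination, run by the primary
// owner for each candidate key. Only the shape of the protocol; none
// of these methods are existing Infinispan API.
abstract class MaxIdleCoordinator {
   interface Address {}

   abstract long lastUsedTime(Object key);                   // local timestamp
   abstract long maxIdleMillis(Object key);
   abstract List backupOwners(Object key);                   // backup nodes
   abstract long requestLastUsed(Address node, Object key);  // RPC
   abstract void sendExpirationRemove(Object key);           // marked removeAll
   abstract void broadcastLastUsed(Object key, long time);

   void checkMaxIdle(Object key) {
      long newest = lastUsedTime(key);
      // Ask every backup owner for its last-used timestamp.
      for (Object node : backupOwners(key)) {
         newest = Math.max(newest, requestLastUsed((Address) node, key));
      }
      if (System.currentTimeMillis() - newest >= maxIdleMillis(key)) {
         // No owner saw a recent access: expire everywhere. A read that
         // lands after a node answered can still be lost - the window
         // discussed above.
         sendExpirationRemove(key);
      } else {
         // Someone read it recently: propagate the newest timestamp so
         // backups do not expire the entry prematurely.
         broadcastLastUsed(key, newest);
      }
   }
}
</pre>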
> >> >>><br>
> >> >>> The issue of a transfer to a new node refreshing the<br>
> >> >>> last-update/lifespan timestamps seems like just a bug we need to<br>
> >> >>> fix irrespective of this issue, IMO.<br>
> >> >>><br>
> >> >>><br>
> >> >>> I'm not saying expiring entries on each node independently is<br>
> >> >>> perfect, far from it. But I wouldn't want us to provide new<br>
> >> >>> guarantees that could hurt performance without a really good use<br>
> >> >>> case.<br>
> >> >>><br>
> >> >>><br>
> >> >>> I would guess that user-perceived performance should be a little<br>
> >> >>> faster with this. But that also depends on which alternative we<br>
> >> >>> decide on :)<br>
> >> >>><br>
> >> >>> Also, the expiration thread pool is set to min priority atm, so<br>
> >> >>> it may delay removal of said objects, but hopefully (if the JVM<br>
> >> >>> supports it) it wouldn't hog a CPU while processing unless one is<br>
> >> >>> available.<br>
> >> >>><br>
> >> >>><br>
> >> >>> Cheers<br>
> >> >>> Dan<br>
> >> >>><br>
> >> >>><br>
> >> >>> On Mon, Jul 13, 2015 at 9:25 PM, Tristan Tarrant <<a href="mailto:ttarrant@redhat.com" target="_blank">ttarrant@redhat.com</a>> wrote:<br>
> >> >>> > After re-reading the whole original thread, I agree with the<br>
> >> >>> > proposal, with two caveats:<br>
> >> >>> ><br>
> >> >>> > - ensure that we don't break JCache compatibility<br>
> >> >>> > - ensure that we document this properly<br>
> >> >>> ><br>
> >> >>> > Tristan<br>
> >> >>> ><br>
> >> >>> > On 13/07/2015 18:41, Sanne Grinovero wrote:<br>
> >> >>> >> +1<br>
> >> >>> >> You had me convinced at the first line, although "A lot of<br>
> >> >>> >> code can now be removed and made simpler" makes it look<br>
> >> >>> >> extremely nice.<br>
> >> >>> >><br>
> >> >>> >> On 13 Jul 2015 18:14, "William Burns" <<a href="mailto:mudokonman@gmail.com" target="_blank">mudokonman@gmail.com</a>> wrote:<br>
> >> >>> >><br>
> >> >>> >> This is a necro of [1].<br>
> >> >>> >><br>
> >> >>> >> With Infinispan 8.0 we are adding in clustered expiration.<br>
> >> >>> >> That includes an expiration event raised that is clustered as<br>
> >> >>> >> well. Unfortunately expiration events currently occur multiple<br>
> >> >>> >> times (if numOwners > 1) at different times across nodes in a<br>
> >> >>> >> cluster. This makes coordinating a single cluster expiration<br>
> >> >>> >> event quite difficult.<br>
> >> >>> >><br>
> >> >>> >> To work around this I am proposing that the expiration of an<br>
> >> >>> >> entry is done solely by the owner of the given key that is now<br>
> >> >>> >> expired. This would fix the issue of having multiple events,<br>
> >> >>> >> and the event can be raised while holding the lock for the<br>
> >> >>> >> given key, so concurrent modifications would not be an issue.<br>
> >> >>> >><br>
> >> >>> >> The problem arises when you have other nodes that have<br>
> >> >>> >> expiration set but expire at different times. Max idle is the<br>
> >> >>> >> biggest offender here, as a read on an owner only refreshes<br>
> >> >>> >> that owner's timestamp, meaning other owners would not be<br>
> >> >>> >> updated and would expire prematurely. To have expiration work<br>
> >> >>> >> properly in this case you would need coordination between the<br>
> >> >>> >> owners to see if anyone has a higher value. This requires<br>
> >> >>> >> blocking, and would have to be done while accessing a key that<br>
> >> >>> >> is expired to be sure whether expiration happened or not.<br>
> >> >>> >><br>
> >> >>> >> The linked dev listing proposed to instead only expire an<br>
> >> >>> >> entry via the reaper thread and not on access. In this case a<br>
> >> >>> >> read will return a non-null value until the entry is fully<br>
> >> >>> >> expired, possibly increasing hit ratios.<br>
> >> >>> >><br>
> >> >>> >> There are quite a few real benefits to this:<br>
> >> >>> >><br>
> >> >>> >> 1. Cluster cache reads would be much simpler and wouldn't<br>
> >> >>> >> have to block to verify whether the object exists or not,<br>
> >> >>> >> since this would only be done by the reaper thread (note this<br>
> >> >>> >> would only have happened if the entry was expired locally). An<br>
> >> >>> >> access would just return the value immediately.<br>
> >> >>> >> 2. Each node only expires entries it owns in the reaper<br>
> >> >>> >> thread, reducing how many entries it must check or remove.<br>
> >> >>> >> This also provides a single point where events would be raised<br>
> >> >>> >> as we need.<br>
> >> >>> >> 3. A lot of code can now be removed and made simpler, as it no<br>
> >> >>> >> longer has to check for expiration. The expiration check would<br>
> >> >>> >> only be done in one place, the expiration reaper thread.<br>
> >> >>> >><br>
> >> >>> >> The main issue with this proposal, as the other listing<br>
> >> >>> >> mentions, is if user code expects the value to be gone after<br>
> >> >>> >> expiration for correctness. I would say this use case is not<br>
> >> >>> >> as compelling for maxIdle, especially since we never supported<br>
> >> >>> >> it properly. And in the case of lifespan the user could very<br>
> >> >>> >> easily store the expiration time in the object and check it<br>
> >> >>> >> after a get, as pointed out in the other thread.<br>
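> >> >>> >><br>
> >> >>> >> (For example, something along these lines - just a sketch of<br>
> >> >>> >> the workaround:)<br>
<pre>
// Sketch of the lifespan workaround: carry the deadline inside the
// value and filter it on read, so an entry the reaper has not removed
// yet still looks expired to the application.
class Timestamped {
   final Object value;
   final long expiresAtMillis;

   Timestamped(Object value, long lifespanMillis) {
      this.value = value;
      this.expiresAtMillis = System.currentTimeMillis() + lifespanMillis;
   }

   // Returns null once the stored deadline has passed, even if the
   // entry is still physically present in the cache.
   Object valueIfLive() {
      return expiresAtMillis > System.currentTimeMillis() ? value : null;
   }
}
</pre>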
> >> >>> >><br>
> >> >>> >> [1] <a href="http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-strictly-not-returning-expired-values-td3428763.html" rel="noreferrer" target="_blank">http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-strictly-not-returning-expired-values-td3428763.html</a><br>
> >> >>> >><br>
> >> >>> ><br>
> >> >>> > --<br>
> >> >>> > Tristan Tarrant<br>
> >> >>> > Infinispan Lead<br>
> >> >>> > JBoss, a division of Red Hat<br>
> >> >>><br>
> >> >>><br>
> >> >>><br>
> >> >><br>
> >><br>
> >><br>
> >> --<br>
> >> Radim Vansa <<a href="mailto:rvansa@redhat.com" target="_blank">rvansa@redhat.com</a>><br>
> >> JBoss Performance Team<br>
> >><br>
> ><br>
> ><br>
><br>
><br>
><br>
<br>
<br>
--<br>
Radim Vansa <<a href="mailto:rvansa@redhat.com" target="_blank">rvansa@redhat.com</a>><br>
JBoss Performance Team<br>
<br>
_______________________________________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org" target="_blank">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
</blockquote></div></div>