[infinispan-dev] Improving CacheStore.loadAll()

Manik Surtani manik at jboss.org
Mon Dec 14 07:57:10 EST 2009


On 10 Dec 2009, at 18:13, philippe van dyck wrote:

> Manik,
> 
> Thank you for taking a second look at CacheStore.loadAll().
> 
> The possibility of loading the whole cache in memory simply by calling one method is something I cannot live with ;-)
> What I mean is that I will not put this code in production knowing that *it will* kill my JVM with a "Java Heap Space" exception.
> Actually, the content of this method is commented out... (in my local version of the S3 cache store).
> 
> But let's take a closer look at most of the loadAll() calls:
> 
> 1)LeaveTask.performRehash()
> for (InternalCacheEntry ice : cs.loadAll()) if (!statemap.containsKey(ice.getKey())) statemap.addState(ice);
> 
> 2)CacheLoaderManagerImpl.preload()
> state = loader.loadAll();
> for (InternalCacheEntry e : state)
>                cache.getAdvancedCache().withFlags(SKIP_CACHE_STATUS_CHECK).put(e.getKey(), e.getValue(), e.getLifespan(), MILLISECONDS, e.getMaxIdle(), MILLISECONDS);
> 
> 3)RehashControlCommand.pullState()
>          for (InternalCacheEntry ice : cacheStore.loadAll()) {
>             Object k = ice.getKey();
>             if (!state.containsKey(k) && shouldAddToMap(k, oldCH, numCopies, self))
>                state.put(k, ice.toInternalCacheValue());
>          }
>       }
> 
> Each time loadAll() is used, it is within a 'for' loop.
> I think that having loadAll() return an Iterable<InternalCacheEntry> will solve the memory issues.

loadAll() returns a Set<InternalCacheEntry>, which extends Iterable<InternalCacheEntry> - and since the only method used on the set is iterator(), for all practical purposes it does return an Iterable<InternalCacheEntry>.  Perhaps you want to restrict the return type so that all you can do is iterate, and thus be able to spool entries off the CacheStore lazily, maybe with a prefetch?  This makes sense, and will solve memory issues in the CacheStore, but not elsewhere (see below).
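To illustrate, such a restricted, lazily-spooling contract could look roughly like this (purely a sketch - LazyCacheStore and PagedEntryIterator are made-up names, not part of the CacheStore SPI):

```java
// Hypothetical sketch, not the actual CacheStore API. Instead of
// materialising the whole Set in memory, the store hands back an
// Iterator that fetches each value on demand.
import java.util.Iterator;
import java.util.NoSuchElementException;

interface LazyCacheStore<E> {
   // Implementations are free to fetch from the backing store on demand.
   Iterator<E> loadAllLazily();
}

// Minimal adapter over a paged backend (e.g. an S3 key listing plus
// per-key GETs): keys are cheap to enumerate, values are loaded lazily.
class PagedEntryIterator<E> implements Iterator<E> {
   private final Iterator<String> keys;
   private final java.util.function.Function<String, E> loader;

   PagedEntryIterator(Iterator<String> keys,
                      java.util.function.Function<String, E> loader) {
      this.keys = keys;
      this.loader = loader;
   }

   public boolean hasNext() { return keys.hasNext(); }

   public E next() {
      if (!keys.hasNext()) throw new NoSuchElementException();
      return loader.apply(keys.next()); // fetch the value only when asked
   }
}
```

The point being that values are only fetched when the consumer actually asks for them, so memory usage stays proportional to what the caller retains rather than to the size of the store.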

> Regarding 1), since the total size of the cache store will IMHO almost always be bigger than the entries in memory it may be a better idea to iterate on statemap and use containsKey() on the cache store.

That doesn't help.  What the loop does is identify which entries in the cache store are *not* in the state map, and add these to the state map.

> For 2), there should indeed be a limit to the cache eviction "max size" (+1 for the JIRA issue - can I vote ?).

Yes, please do vote on the JIRA - even contribute a fix if you feel like it; it shouldn't be too hard.  :-)

> For 3), a solution like 1) could be applied (calling getKey() on each entry after loading *all* of the set... sounds a little bit expensive).

Yes, (3) and (1) are very similar, but for the same reason your approach doesn't help here either.  Also, entry.getKey() is very cheap - it is not the expensive bit.  Loading all of the entries into memory is.

A solution for both 1 and 3 is to come up with a stream-like API for loading entries.  It could be something low-level like loadAllStream(), but that would mean exposing internal storage formats outside of the CacheStore impl, which is bad.  The other approach, as you suggested, is to return an Iterable (or better still, an Iterator) which lazily loads and pre-fetches.  This could be interesting, since entries could be spooled out of the CacheStore, but for it to be of any real benefit the rehashing needs to stream state to the recipient as well; otherwise the statemap becomes the memory hog (the loops in 1 and 3 iterate over the set and add relevant entries to the statemap for sending via RPC).  ISPN-284 modifies rehashing to stream state to recipients rather than use RPC, and once we have this, spooling entries off the CacheStore will be of real benefit.  See https://jira.jboss.org/jira/browse/ISPN-284 for more details.  Again, you are welcome to lend a hand here too.  :-)
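As a rough sketch of the pre-fetching variant (again hypothetical code, not Infinispan's API): a bounded buffer filled by a background thread caps how many entries sit in memory at any one time.

```java
// Hypothetical sketch of an "Iterator with prefetch": a background
// thread spools entries off the store into a bounded queue, so at most
// `capacity` entries are buffered in memory at once.
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class PrefetchingIterator<E> implements Iterator<E> {
   private static final Object EOS = new Object(); // end-of-stream marker
   private final BlockingQueue<Object> buffer;
   private Object next;

   PrefetchingIterator(Iterator<E> source, int capacity) {
      buffer = new ArrayBlockingQueue<>(capacity);
      Thread producer = new Thread(() -> {
         try {
            // put() blocks when the buffer is full, bounding memory use
            while (source.hasNext()) buffer.put(source.next());
            buffer.put(EOS);
         } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
         }
      });
      producer.setDaemon(true);
      producer.start();
      advance();
   }

   private void advance() {
      try {
         next = buffer.take();
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         next = EOS;
      }
   }

   public boolean hasNext() { return next != EOS; }

   @SuppressWarnings("unchecked")
   public E next() {
      if (next == EOS) throw new NoSuchElementException();
      E result = (E) next;
      advance();
      return result;
   }
}
```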

> And in the cache store itself (in S3 and File cache stores)... something funny (someone explain the goal of this ?) : 
> 	protected void purgeInternal() throws CacheLoaderException {
> 		loadAll();
> 	}

Yes, this was poor code and probably a cheap shortcut; see the updated code in trunk (or CR3) for a better version.

> Could a minimal change in the CacheStore interface (using Iterable instead of returning the whole Set) and a couple of for loop optimizations solve it all ?
> 
> WDYT ? 
> 
> BTW, when you say that rehashing only happens when the cache is not shared, could you please confirm that it is related to the shared="true" setting available in the xml configuration ?

Yes.  

> Finally, can I be sure that there will never be a rehash of the keys in distributed mode ?

No - of course there could be a rehash.  If you don't rehash, the consistent hash algorithm breaks down when the shape of the cluster changes (nodes added/removed).
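To see why, consider a toy consistent-hash ring (illustrative only - Infinispan's actual ConsistentHash implementation is more involved): when a node joins, some keys map to the new node, and their entries are exactly the data that has to be rehashed over to it.

```java
// Toy consistent-hash ring, for illustration only. A key is owned by
// the first node clockwise from the key's hash position on the ring.
import java.util.SortedMap;
import java.util.TreeMap;

class ToyConsistentHash {
   private final SortedMap<Integer, String> ring = new TreeMap<>();

   void addNode(String node)    { ring.put(node.hashCode(), node); }
   void removeNode(String node) { ring.remove(node.hashCode()); }

   String ownerOf(Object key) {
      // First node at or after the key's hash; wrap around if none.
      SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
      return tail.isEmpty() ? ring.get(ring.firstKey())
                            : tail.get(tail.firstKey());
   }
}
```

Any key whose ownerOf() result changes after addNode() or removeNode() is called represents state that must move between nodes - which is why a rehash cannot be avoided in distributed mode.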

Cheers
Manik


> 
> Cheers,
> 
> Philippe
> 
> Le 10 déc. 2009 à 18:10, Manik Surtani a écrit :
> 
>> Adrian,
>> 
>> Thanks for the transcript between yourself and Philippe below.  Here are my thoughts:
>> 
>> *  loadAll() is generally overused and this can get expensive.  I've changed purgeExpired() in certain impls to not use loadAll().
>> * preloading the cache also calls loadAll().  I have a suggestion for this here - https://jira.jboss.org/jira/browse/ISPN-310 - but this won't be in place till 4.1.0.
>> * rehashing isn't as bad as you think - the rehashing of entries in stores only takes place when the cache store is *not* shared.  Any use of an expensive, remote store (such as S3, JDBC) would typically be shared between Infinispan nodes and as such these will not be considered when rehashing.
>> 
>> That said, stuff can be improved a bit, specifically with the addition of something like loadKeys(Set<Object> excludes).  This will allow the rehash code to load just the necessary keys, excluding keys already considered from the data container directly, and then inspect each key to test if the key needs to be rehashed elsewhere.  If so, the value could be loaded using load().
>> 
>> I have captured this in 
>> 
>> 	https://jira.jboss.org/jira/browse/ISPN-311
>> 
>> The problem, I think, with maintaining metadata separately is that it adds an additional synchronization point when updating that metadata, whether this is expiration data per key, or even just a list of keys in the store for a quick loadKeys() impl.  But am open to ideas, after all this is just CacheStore specific implementation details.
>> 
>> Cheers
>> Manik
>> 
>> 
>> On 3 Dec 2009, at 16:20, Adrian Cole wrote:
>> 
>>> <adriancole> aloha all
>>> <pvdyck> hi all
>>> <adriancole> we are talking about the rehash concern wrt bucket-based
>>> cachestores
>>> <pvdyck> here is transcript
>>> <pvdyck> it seems that it first loops on the set from the store to
>>> compare the keys with the keys in memory
>>> <pvdyck> [17:01] pvdyck: the set of keys present in memory will
>>> always be smaller ... so maybe looping on this one and comparing with
>>> the keys present in the store is a good optimization
>>> <pvdyck> [17:01] pvdyck: I will give you the exact file and line in a moment
>>> <pvdyck> [17:02] pvdyck: ok LeaveTask:74
>>> <pvdyck> [17:03] pvdyck: actually
>>> org.infinispan.distribution.LeaveTask line 74 from CR2
>>> <adriancole> for context, the current design implies a big load, pvdyck, right?
>>> <pvdyck> (is it sooo early ... is there an #infinispan irc channel
>>> display next to the coffee machine ? ;-)
>>> <adriancole> :)
>>> <adriancole> pvdyck, I can see that changing the loop will reduce the
>>> possibility of overloading a node
>>> <pvdyck> the design implies calling loadAllLockSafe() ... loading all
>>> the entries (K+V) from the cache -> very bad idea actually
>>> <adriancole> seems that keys should be in a separate place
>>> <adriancole> wdyt?
>>> <adriancole> a lot of large systems have a separate area for metadata
>>> and payload
>>> <adriancole> one popular one is git ;)
>>> <pvdyck> the simple idea of having this loadAll thing is a problem
>>> <pvdyck> if it ever gets called ... I am quite sure the system will hang
>>> <pvdyck> and indeed you are right, there is no reason to bring the
>>> values with it ... keys are more than enough!
>>> <adriancole> so, here's the thing
>>> <adriancole> the whole bucket-based thing is supposed to help avoid
>>> killing entries who share the same hashCode
>>> <adriancole> and there's also another issue with encoding keys
>>> <adriancole> since they might be typed and not strings
>>> <pvdyck> is it still the case with the permanent hash ?
>>> <pvdyck> oops sorry ... consistent hash
>>> <adriancole> well, I'm talking about the hash of the thing you are putting in
>>> <adriancole> not the consistent hash alg
>>> <pvdyck> ok, understood...
>>> <adriancole> ya, so I think that if we address this, we're ok
>>> <adriancole> in the blobstore (jclouds) thing, we could address
>>> <adriancole> by having a partition on hashCode
>>> <adriancole> and encoding the object's key as the name of the thing in s3
>>> <adriancole> or wherever
>>> <adriancole> so like  "apple" -> "bear"
>>> <adriancole> "bear".hashCode/encode("apple")
>>> <pvdyck> actually, I don't think the problem should end up in the
>>> hands of the store itself
>>> <adriancole> that would be convenient :)
>>> <adriancole> in that case, I think that ispn may need a metadata
>>> store and value store
>>> <adriancole> since currently the typed bucket object contains both
>>> keys and values
>>> <adriancole> which makes it impossible to get to one without the other
>>> <adriancole> I'm pretty sure
>>> <pvdyck> looks like a lot of changes... but it is a path to explore!
>>> <pvdyck> we obviously need to wait for them to appear ;-)
>>> <adriancole> firstly, I think we'll need a patch for the s3
>>> optimization so you can work :)
>>> <adriancole> and also, this design needs to be reworked for sure
>> 
>> --
>> Manik Surtani
>> manik at jboss.org
>> Lead, Infinispan
>> Lead, JBoss Cache
>> http://www.infinispan.org
>> http://www.jbosscache.org
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

--
Manik Surtani
manik at jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org



