[infinispan-dev] FYI : probable memory leak
Philippe Van Dyck
pvdyck at gmail.com
Mon May 17 10:16:46 EDT 2010
Very interesting! I will take a look at it later.
Right now, I check the size of the cache after each write and launch a
thread to evict the entries...
// calculate cache size
cacheSize = 0;
final int maxCacheSizeInMB = 20;
final int percentToEvict = 2;
final int durationToConsiderOldInSeconds = 20 * 60; // twenty minutes

for (InternalCacheEntry ice : cache.getAdvancedCache().getDataContainer()) {
    final int size = ((byte[]) ice.getValue()).length;
    logger.info("Cache entry size " + size);
    cacheSize += size;
}
logger.info("Cache size " + cacheSize);

final int ONEMB = 1024 * 1024;
final int maxCacheSizeInBytes = maxCacheSizeInMB * ONEMB;
if (cacheSize > maxCacheSizeInBytes) {
    logger.info("Cache too big (in MB) " + cacheSize / ONEMB);
    new Thread("Qi4j - Eviction") {
        @Override
        public void run() {
            logger.info("Started eviction thread");
            // evict a percentage of the entries, oldest first
            Iterator<InternalCacheEntry> it =
                    cache.getAdvancedCache().getDataContainer().iterator();
            // inflate the target size by the percentage to remove
            cacheSize += (cacheSize * percentToEvict) / 100;
            final long oldThreshold = durationToConsiderOldInSeconds * 1000L;
            while (cacheSize > maxCacheSizeInBytes && it.hasNext()) {
                InternalCacheEntry ice = it.next();
                final long diffTime = System.currentTimeMillis() - ice.getLastUsed();
                if (diffTime > oldThreshold) {
                    cache.evict(ice.getKey() + "");
                    cacheSize -= ((byte[]) ice.getValue()).length;
                }
            }
            logger.info("New cache size (in MB) " + cacheSize / ONEMB);
        }
    }.start();
}
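[Editor's note: spawning a fresh Thread on every oversized write can itself pile up under load. Below is a minimal, hedged sketch of the same size-triggered, age-based eviction idea using a single-threaded executor instead. It is plain Java with a stand-in `Map` and a hypothetical `CacheEntry` class, not Infinispan API; `lastUsed` stands in for `InternalCacheEntry#getLastUsed()`.]

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: a size-triggered cache that evicts "old" entries on one
// background worker, so repeated triggers queue up instead of each
// spawning a new thread.
class SizeTriggeredEviction {
    static final class CacheEntry {
        final byte[] value;
        volatile long lastUsed;
        CacheEntry(byte[] value, long lastUsed) {
            this.value = value;
            this.lastUsed = lastUsed;
        }
    }

    private final Map<String, CacheEntry> store = new ConcurrentHashMap<>();
    // single daemon worker shared by all eviction triggers
    private final ExecutorService evictor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "eviction");
        t.setDaemon(true);
        return t;
    });
    private final long maxBytes;
    private final long oldThresholdMillis;

    SizeTriggeredEviction(long maxBytes, long oldThresholdMillis) {
        this.maxBytes = maxBytes;
        this.oldThresholdMillis = oldThresholdMillis;
    }

    void put(String key, byte[] value) {
        store.put(key, new CacheEntry(value, System.currentTimeMillis()));
        if (totalBytes() > maxBytes) {
            evictor.submit(this::evictOldEntries); // queue, don't spawn
        }
    }

    long totalBytes() {
        long size = 0;
        for (CacheEntry e : store.values()) {
            size += e.value.length;
        }
        return size;
    }

    // drop entries unused for longer than the threshold until under the cap
    void evictOldEntries() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, CacheEntry>> it = store.entrySet().iterator();
        while (totalBytes() > maxBytes && it.hasNext()) {
            Map.Entry<String, CacheEntry> e = it.next();
            if (now - e.getValue().lastUsed > oldThresholdMillis) {
                it.remove(); // the Infinispan version calls cache.evict(key)
            }
        }
    }
}
```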
On Mon, May 17, 2010 at 4:11 PM, Vladimir Blagojevic <vblagoje at redhat.com>wrote:
> How did you do this? Did you write your own EvictionManager? Note that the
> contract between EvictionManager and DataContainer changed slightly in
> Alpha2: DataContainer fulfils its eviction contract with EvictionManager by
> letting EvictionManager know about the evicted entries
> (DataContainer#getEvictionCandidates). You can also select an eviction thread
> policy (DEFAULT or PIGGYBACK) [1]. Have a look at [2] for more details.
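[Editor's note: per the 4.1 configuration reference linked at [1], the built-in alternative Vladimir describes would be configured roughly like this; the `maxEntries` and `wakeUpInterval` values here are illustrative, not from the thread.]

```xml
<!-- sketch: let Infinispan's own LRU eviction bound the container, and run
     eviction piggybacked on user threads instead of a dedicated thread -->
<eviction strategy="LRU" threadPolicy="PIGGYBACK"
          maxEntries="10000" wakeUpInterval="5000"/>
```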
>
>
> [1]
> http://docs.jboss.org/infinispan/4.1/apidocs/config.html#ce_default_eviction
> [2]
> http://infinispan.blogspot.com/2010/03/infinispan-eviction-batching-updates.html
>
>
>
> On 2010-05-17, at 10:03 AM, Philippe Van Dyck wrote:
>
> Well... to be honest, I am not ;-)
> I run my own eviction trigger (based on the total cache size) and I evict
> any entry older than 20 minutes until the size of the cache is reduced
> enough (usually 2%). Since I need the 'lastUsed' value, I need the LRU
> strategy.
>
> phil
>
> On Mon, May 17, 2010 at 3:55 PM, Vladimir Blagojevic <vblagoje at redhat.com>wrote:
>
>> We have to handle the -1 case. I'll look into this. I am glad that you are
>> exercising our new container with eviction. Keep pounding at it :)
>>
>> On 2010-05-17, at 9:49 AM, Philippe Van Dyck wrote:
>>
>> Ok, working now. Thanks again, Vladimir. The memory problem was surely
>> coming from there (I will investigate it later) - I am back on BETA1.
>>
>> BTW, "maxEntries=-1" is not working anymore (update the XML config doc?)
>>
>> Caused by: java.lang.IllegalArgumentException
>> at
>> org.infinispan.util.concurrent.BoundedConcurrentHashMap.<init>(BoundedConcurrentHashMap.java:1139)
>> at
>> org.infinispan.container.DefaultDataContainer.<init>(DefaultDataContainer.java:92)
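[Editor's note: the exception is consistent with a constructor guard of roughly the following shape. This is a hypothetical sketch of the failure mode, not the actual BoundedConcurrentHashMap source: a bounded map that rejects non-positive capacities would make a formerly "unbounded" -1 throw IllegalArgumentException.]

```java
// Hypothetical sketch of a capacity guard in a bounded map constructor;
// illustrates why maxEntries="-1" could start failing, nothing more.
class BoundedMapGuard {
    private final int maxCapacity;

    BoundedMapGuard(int maxCapacity) {
        if (maxCapacity <= 0) {
            // a value of -1 trips this check instead of meaning "unbounded"
            throw new IllegalArgumentException(
                    "maxCapacity must be positive: " + maxCapacity);
        }
        this.maxCapacity = maxCapacity;
    }

    int maxCapacity() {
        return maxCapacity;
    }
}
```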
>>
>> cheers,
>>
>> phil
>>
>> On Mon, May 17, 2010 at 3:30 PM, Philippe Van Dyck <pvdyck at gmail.com>wrote:
>>
>>> Thanks Vladimir...
>>>
>>> May I suppose that this limitation was not verified in alpha1?
>>> I will test this right away !
>>>
>>> cheers,
>>>
>>> phil
>>>
>>>
>>> On Mon, May 17, 2010 at 3:26 PM, Vladimir Blagojevic <
>>> vblagoje at redhat.com> wrote:
>>>
>>>> I think the problem is related to the fact that you have maxEntries = 1
>>>> specified in configuration for your container.
>>>>
>>>> On 2010-05-17, at 9:24 AM, Philippe Van Dyck wrote:
>>>>
>>>> Confirmed - when I go back to alpha1 the problem disappears.
>>>>
>>>> Could anyone explain why, with alpha3 (where the problem is already
>>>> present), there is only one entry in getDataContainer?
>>>>
>>>> for (InternalCacheEntry ice : cache.getAdvancedCache().getDataContainer()) {
>>>>     final int size = ((byte[]) ice.getValue()).length;
>>>>     logger.info("Cache entry size " + size);
>>>>     cacheSize += size;
>>>> }
>>>>
>>>> logger.info("Cache size " + cacheSize);
>>>>
>>>>
>>>> cheers
>>>>
>>>> phil
>>>>
>>>> On Mon, May 17, 2010 at 2:57 PM, Manik Surtani <manik at jboss.org> wrote:
>>>>
>>>>> Wow, no idea. Any thread dumps, stack traces? Logging?
>>>>>
>>>>> On 17 May 2010, at 13:48, Philippe Van Dyck wrote:
>>>>>
>>>>> Update - thrashed & crashed as planned.
>>>>> Did some debugging: something strange... my cache seems to contain
>>>>> only one entry (???)
>>>>> Any clue?
>>>>>
>>>>> phil
>>>>>
>>>>> On Mon, May 17, 2010 at 2:21 PM, Philippe Van Dyck <pvdyck at gmail.com>wrote:
>>>>>
>>>>>> I don't have any resources available to set up profiling in preprod
>>>>>> right now.
>>>>>> Looking at the changes from alpha1 to beta1, I only see jclouds and
>>>>>> some guava libs updated.
>>>>>> Load on the server went berserk in the last 10 minutes; it will
>>>>>> probably thrash & crash in the next hour.
>>>>>> I will probably go back to ALPHA1.
>>>>>>
>>>>>> phil
>>>>>>
>>>>>> On Mon, May 17, 2010 at 2:13 PM, Manik Surtani <manik at jboss.org>wrote:
>>>>>>
>>>>>>> Have you tried profiling stuff? Nothing really should have changed
>>>>>>> in Beta1 to affect such a config, except perhaps the version of JClouds and
>>>>>>> some JClouds-related code.
>>>>>>>
>>>>>>> On 17 May 2010, at 13:07, Philippe Van Dyck wrote:
>>>>>>>
>>>>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>>>>>
>>>>>>> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>>>>> xmlns="urn:infinispan:config:4.0">
>>>>>>> <global>
>>>>>>> <transport
>>>>>>>
>>>>>>> transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
>>>>>>> <properties>
>>>>>>> <property name="configurationFile"
>>>>>>> value="jgroupsprod.xml"/>
>>>>>>> </properties>
>>>>>>> </transport>
>>>>>>> <globalJmxStatistics enabled="true"
>>>>>>> allowDuplicateDomains="true"/>
>>>>>>> </global>
>>>>>>>
>>>>>>>
>>>>>>> <namedCache name="qi4j">
>>>>>>> <jmxStatistics enabled="true"/>
>>>>>>> <transaction
>>>>>>>
>>>>>>> transactionManagerLookupClass="org.qi4j.entitystore.s3jclouds.AtomikosTransactionManagerLookup"/>
>>>>>>> <clustering mode="distribution">
>>>>>>> <l1 enabled="true" lifespan="100000"/>
>>>>>>> <hash numOwners="2" rehashRpcTimeout="120000"/>
>>>>>>> </clustering>
>>>>>>>
>>>>>>> <loaders passivation="false" shared="true" preload="false">
>>>>>>>
>>>>>>> <loader
>>>>>>> class="org.infinispan...CloudCacheStore"
>>>>>>> fetchPersistentState="false"
>>>>>>> ignoreModifications="false"
>>>>>>> purgeOnStartup="false" purgeSynchronously="true">
>>>>>>>
>>>>>>> <properties>
>>>>>>> <property name="identity" value="***"/>
>>>>>>> <property name="password" value="***"/>
>>>>>>> <property name="bucketPrefix" value="store2"/>
>>>>>>> <property name="cloudService" value="s3"/>
>>>>>>> </properties>
>>>>>>> </loader>
>>>>>>> </loaders>
>>>>>>>
>>>>>>> <eviction strategy="LRU" wakeUpInterval="-1" maxEntries="1"/>
>>>>>>>
>>>>>>> <locking lockAcquisitionTimeout="60000"
>>>>>>> useLockStriping="true"/>
>>>>>>>
>>>>>>>
>>>>>>> <unsafe unreliableReturnValues="true"/>
>>>>>>>
>>>>>>> </namedCache>
>>>>>>>
>>>>>>> </infinispan>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, May 17, 2010 at 1:55 PM, Manik Surtani <manik at jboss.org>wrote:
>>>>>>>
>>>>>>>> What configuration do you use?
>>>>>>>>
>>>>>>>> On 17 May 2010, at 12:46, Philippe Van Dyck wrote:
>>>>>>>>
>>>>>>>> > FYI, I upgraded from ALPHA1 to BETA1 on a preproduction system
>>>>>>>> this morning.
>>>>>>>> >
>>>>>>>> > Take a look at the attached graphic; the server is restarted
>>>>>>>> every day around 1 am (blue and green lines crossing).
>>>>>>>> >
>>>>>>>> > Users began using the system around 9 am... compare today's
>>>>>>>> pattern with the previous day's!
>>>>>>>> >
>>>>>>>> > Anything I should know or have missed?
>>>>>>>> >
>>>>>>>> > cheers,
>>>>>>>> >
>>>>>>>> > phil
>>>>>>>> > <memleak.tiff>_______________________________________________
>>>>>>>> > infinispan-dev mailing list
>>>>>>>> > infinispan-dev at lists.jboss.org
>>>>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>>>>
>>>>>>>> --
>>>>>>>> Manik Surtani
>>>>>>>> manik at jboss.org
>>>>>>>> Lead, Infinispan
>>>>>>>> Lead, JBoss Cache
>>>>>>>> http://www.infinispan.org
>>>>>>>> http://www.jbosscache.org
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> Vladimir Blagojevic
>>>> JBoss Clustering Team
>>>> JBoss by Red Hat
>>>
>>>
>
>