[infinispan-dev] How to run the testsuite?
Galder Zamarreño
galder at redhat.com
Tue Apr 23 14:10:40 EDT 2013
Nice :)
On Apr 18, 2013, at 6:43 AM, Mircea Markus <mmarkus at redhat.com> wrote:
> Now the suite forks a new process for each module. I've completed the run with 128m MaxPerm.
>
> On 20 Mar 2013, at 23:55, Sanne Grinovero wrote:
>> Thanks Dan,
>> with the following options it completed the build:
>>
>> MAVEN_OPTS=-server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
>> -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:NewRatio=4 -Xss500k
>> -Xmx16G -Xms1G -XX:MaxPermSize=700M -XX:HeapDumpPath=/tmp/java_heap
>> -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
>> -XX:ReservedCodeCacheSize=200M
>> -Dlog4j.configuration=file:/opt/infinispan-log4j.xml
>>
>> Sanne
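For reference, the options above can be exported before launching the build. A sketch, assuming a Unix-like shell; note that /opt/infinispan-log4j.xml is the poster's local file, not something shipped with the project:

```shell
# MAVEN_OPTS that completed the build, copied from the settings above.
# /opt/infinispan-log4j.xml is the poster's local log4j config (assumption:
# adjust the path for your machine or drop the property).
export MAVEN_OPTS="-server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode \
-XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:NewRatio=4 -Xss500k \
-Xmx16G -Xms1G -XX:MaxPermSize=700M -XX:HeapDumpPath=/tmp/java_heap \
-Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 \
-XX:ReservedCodeCacheSize=200M \
-Dlog4j.configuration=file:/opt/infinispan-log4j.xml"
```

With that in place, `mvn -fn clean install` runs the whole project as described further down the thread.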
>>
>> On 20 March 2013 17:36, Manik Surtani <msurtani at redhat.com> wrote:
>>>
>>> On 20 Mar 2013, at 15:29, Adrian Nistor <anistor at redhat.com> wrote:
>>>
>>> I've also tried changing the fork mode of surefire from 'none' to 'once', and
>>> the entire suite now runs fine on JVM 1.6 with a 500mb MaxPermSize.
>>> Previously it did not complete; 500mb was not enough.
>>> Does anyone know why surefire was not allowed to fork?
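In Surefire 2.x (current at the time) the fork mode can usually be overridden from the command line rather than editing the POM. A sketch; whether the Infinispan POM honours the override is an assumption:

```shell
# Ask Surefire 2.x to fork one fresh JVM for the whole suite instead of
# running tests in the Maven process; -DforkMode is the legacy Surefire
# 2.x user property.
mvn clean install -DforkMode=once
```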
>>>
>>> I haven't tried to analyze the heap closely yet, but the first thing I
>>> noticed is that 15% of it is occupied by 190,000 ComponentMetadataRepo
>>> instances, which is probably not the root cause of this issue, but is odd
>>> anyway :).
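A class histogram of the live heap is usually enough to reproduce that kind of count. A sketch using the standard JDK tools; `<pid>` is the surefire JVM's process id, which you'd look up with jps first:

```shell
# Find the surefire/Maven JVM, then print a histogram of live objects and
# count the suspicious instances. jps and jmap ship with the JDK.
jps -l
jmap -histo:live <pid> | grep ComponentMetadataRepo
```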
>>>
>>>
>>> Yes, very odd. Do you also see 190000 instances of a
>>> GlobalComponentRegistry?
>>>
>>>
>>> On 03/20/2013 05:12 PM, Dan Berindei wrote:
>>>
>>> The problem is that we still leak threads in almost every module, and that
>>> means we keep a copy of the core classes (and all their dependencies) for
>>> every module. Of course, some modules' dependencies are already oversized,
>>> so keeping only one copy is already too much...
>>>
>>> I admit I don't run the whole test suite too often either, but I recently
>>> changed the Cloudbees settings to get rid of the OOM there. It uses about
>>> 550MB of permgen by the end of the test suite, without
>>> -XX:+UseCompressedOops. These are the settings I used:
>>>
>>> -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+UseParNewGC
>>> -XX:+CMSClassUnloadingEnabled -XX:NewRatio=4 -Xss500k -Xms100m -Xmx900m
>>> -XX:MaxPermSize=700M
>>>
>>>
>>> Cheers
>>> Dan
>>>
>>>
>>>
>>> On Wed, Mar 20, 2013 at 2:59 PM, Tristan Tarrant <ttarrant at redhat.com>
>>> wrote:
>>>>
>>>> Sanne, have you tried turning on CompressedOops? Still, those requirements
>>>> are indeed ridiculous.
>>>>
>>>> Tristan
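For the record, compressed oops is a single flag on 64-bit HotSpot (on later JVM builds it is already the default, so this may be a no-op). A sketch, appending to whatever MAVEN_OPTS is already in use:

```shell
# Shrinks object references on 64-bit HotSpot to 32 bits for heaps under
# ~32GB, noticeably cutting heap usage.
export MAVEN_OPTS="$MAVEN_OPTS -XX:+UseCompressedOops"
```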
>>>>
>>>> On 03/20/2013 01:27 PM, Sanne Grinovero wrote:
>>>>> I'm testing master, at da5c3f0
>>>>>
>>>>> Just killed a run which was using
>>>>>
>>>>> java version "1.7.0_17"
>>>>> Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
>>>>> Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
>>>>>
>>>>> this time again an OOM (even though I'm giving it 2GB!); the last sign of
>>>>> life came from the "Rolling Upgrade Tooling" module.
>>>>>
>>>>> I'm not going to merge/review any pull request until this works.
>>>>>
>>>>> Sanne
>>>>>
>>>>> On 20 March 2013 12:09, Mircea Markus <mmarkus at redhat.com> wrote:
>>>>>> I've just run it on master and didn't get an OOM, though I'm on OS X. Are
>>>>>> you running it on master or a particular branch? Which module crashes?
>>>>>> e.g. Pedro's ISPN-2808 adds quite some threads to the party - that's
>>>>>> the reason it hasn't been integrated yet.
>>>>>>
>>>>>> On 20 Mar 2013, at 11:40, Sanne Grinovero wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>> after reviewing some pull requests, I've been unable to run the testsuite
>>>>>>> for the past couple of days. Since Anna's fixes affect many modules, I'm
>>>>>>> trying to run the testsuite of the whole project, as we should always
>>>>>>> do, but I admit I haven't done it in a while because of the core module
>>>>>>> failures.
>>>>>>>
>>>>>>> So I run:
>>>>>>> $ mvn -fn clean install
>>>>>>>
>>>>>>> using -fn to have it continue after the core failures.
>>>>>>>
>>>>>>> First attempt gave me an OOM while running with a 1G heap. I'm pretty
>>>>>>> sure this was good enough some months back.
>>>>>>>
>>>>>>> Second attempt slowed down like crazy, and I found a warning about
>>>>>>> having filled the code cache, so I doubled it to 200M.
>>>>>>>
>>>>>>> Third attempt: OutOfMemoryError: PermGen space! But I'm running with
>>>>>>> -XX:MaxPermSize=380M, which should be plenty?
>>>>>>>
>>>>>>> This is:
>>>>>>> java version "1.6.0_43"
>>>>>>> Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
>>>>>>> Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
>>>>>>>
>>>>>>> MAVEN_OPTS=-Xmx2G -XX:MaxPermSize=380M -XX:+TieredCompilation
>>>>>>> -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
>>>>>>> -XX:ReservedCodeCacheSize=200M
>>>>>>> -Dlog4j.configuration=file:/opt/infinispan-log4j.xml
>>>>>>>
>>>>>>> My custom log configuration just disables trace & debug.
>>>>>>>
>>>>>>> Going to try now with a larger PermGen and different JVMs, but it looks
>>>>>>> quite bad. Any other suggestions?
>>>>>>> (I do have the security limits setup properly)
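Those limits are worth double-checking on Linux, since thread-creation failures also surface as an OutOfMemoryError ("unable to create new native thread"). A quick sketch:

```shell
# Per-user limits that commonly bite large testsuites on Linux:
ulimit -u   # max user processes; each Java thread counts against this
ulimit -n   # max open file descriptors
```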
>>>>>>>
>>>>>>> Sanne
>>>>>>> _______________________________________________
>>>>>>> infinispan-dev mailing list
>>>>>>> infinispan-dev at lists.jboss.org
>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>> Cheers,
>>>>>> --
>>>>>> Mircea Markus
>>>>>> Infinispan lead (www.infinispan.org)
>>>>>>
>>>
>>> --
>>> Manik Surtani
>>> manik at jboss.org
>>> twitter.com/maniksurtani
>>>
>>> Platform Architect, JBoss Data Grid
>>> http://red.ht/data-grid
>>>
>>>
>
> Cheers,
> --
> Mircea Markus
> Infinispan lead (www.infinispan.org)
>
>
>
>
>
--
Galder Zamarreño
galder at redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org