Yes, the second point is what you solved in one minute :-)
I'm battling the next issue now.
Thanks again,
Sanne
2009/11/12 Manik Surtani <manik(a)jboss.org>:
On 12 Nov 2009, at 10:32, Sanne Grinovero wrote:
> Very nice!
> Thanks, this takes a very time-consuming task off my laptop.
> I've seen that org.infinispan.api.mvcc.repeatable_read.WriteSkewTest
> and org.infinispan.jmx.RpcManagerMBeanTest were solved,
> but what is the status of
> org.infinispan.distribution.rehash.ConcurrentOverlappingLeaveTest ?
That's an intermittent failure with DIST, which I am looking into.
> This is passing often but fails sometimes, even going back to revision
> 1000 (and I tested many versions between HEAD and 1000).
> As it's not deterministic, I can't find what/when it broke, or if it
> was ever working.
> I'm having weird problems with a new test involving nodes dynamically
> joining and leaving while indexing and searching, and I would like to
> know if this could be the cause.
> What I experience is that some nodes are not finding (GET) entries
> which were put by other nodes/threads before.
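>
> A minimal sketch of the failing pattern, assuming Infinispan's basic
> embedded API ('MissingGetRepro', 'managerA' and 'managerB' are
> illustrative names, not the actual test code; the managers stand for
> two already-started, clustered cache managers):
>
>     import org.infinispan.Cache;
>     import org.infinispan.manager.CacheManager;
>
>     // Hypothetical, simplified reproduction of the symptom:
>     public class MissingGetRepro {
>         static void reproduce(CacheManager managerA, CacheManager managerB) {
>             Cache<String, String> cacheA = managerA.getCache();
>             Cache<String, String> cacheB = managerB.getCache();
>             cacheA.put("key", "value");
>             // a node joining or leaving triggers a rehash around here
>             String value = cacheB.get("key"); // intermittently null instead of "value"
>             assert "value".equals(value);
>         }
>     }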
Is this the issue you're talking about on IRC?
>
> thanks,
> Sanne
>
> 2009/11/12 Manik Surtani <manik(a)jboss.org>:
>> Ok, initial response:
>>
>> It is just the "view" page [1] that isn't updated for some weird
>> reason (and it is being investigated).
>>
>> The detailed job pages, however, ARE updated with test results. See
>> [2] for the main Infinispan test run; maybe bookmark this for now
>> until [1] is properly fixed and updated.
>>
>> Cheers
>> Manik
>>
>> [1] http://hudson.jboss.org/hudson/view/Infinispan/
>> [2] http://hudson.jboss.org/hudson/view/Infinispan/job/Infinispan-trunk-JDK6-...
>>
>> On 11 Nov 2009, at 21:32, Sanne Grinovero wrote:
>>
>>> Hello,
>>> I'm going to play the role of Hudson today, as the last build
>>> published on http://hudson.jboss.org/hudson/view/Infinispan/ is from
>>> 5 months ago.
>>> Isn't there a working CI for Infinispan? I've had some trouble today
>>> with unexpected issues; I could find some JIRA issues related to my
>>> problems, but according to JIRA most of them were solved recently.
>>> The tests I have just run are of a different opinion:
>>>
>>> Failed tests:
>>>
>>> testTransactional(org.infinispan.distribution.rehash.ConcurrentOverlappingLeaveTest)
>>> testWriteSkewWithOnlyPut(org.infinispan.api.mvcc.repeatable_read.WriteSkewTest)
>>> testEnableJmxStats(org.infinispan.jmx.RpcManagerMBeanTest)
>>> testTransactional(org.infinispan.distribution.rehash.ConcurrentJoinTest)
>>> testNonTransactional(org.infinispan.distribution.rehash.ConcurrentJoinTest)
>>> testonInfinispanDIST(org.infinispan.stress.PutIfAbsentStressTest)
>>>
>>> Some comments:
>>>
>>> org.infinispan.distribution.rehash.ConcurrentOverlappingLeaveTest
>>> Is NOT failing consistently (it sometimes runs fine); it's
>>> inconsistent even when running it alone with mvn test
>>> -Dtest=org.infinispan.distribution.rehash.ConcurrentOverlappingLeaveTest
>>>
>>> org.infinispan.api.mvcc.repeatable_read.WriteSkewTest
>>> Is failing all the time.
>>>
>>> org.infinispan.jmx.RpcManagerMBeanTest is "interesting", as the
>>> assert fails with the message "Expected 1, was 1".
>>> Debugging, I see it's a type mismatch:
>>> mBeanServer.getAttribute(rpcManager1, "ReplicationCount").equals("1")
>>> fails because getAttribute returns a Long, not a String.
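>>>
>>> A minimal standalone sketch of the mismatch (hypothetical demo code,
>>> not the actual test):
>>>
>>>     // A Long is never equal to a String, even when both print as "1":
>>>     public class LongVsStringEquals {
>>>         public static void main(String[] args) {
>>>             Object count = Long.valueOf(1L); // what getAttribute() actually returns
>>>             System.out.println(count.equals("1"));                 // false
>>>             System.out.println(String.valueOf(count).equals("1")); // true
>>>             System.out.println(count.equals(1L));                  // true: compare as Long
>>>         }
>>>     }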
>>>
>>> org.infinispan.distribution.rehash.ConcurrentJoinTest
>>> This one was the reason for me to rerun all tests, as it's making
>>> several of my Lucene index tests fail.
>>> The message "Some caches have not finished rehashing after 8 minutes"
>>> gets me a bit worried :-)
>>>
>>> org.infinispan.stress.PutIfAbsentStressTest
>>> This one is also breaking Lucene; we know for sure that it was
>>> running fine, as I tested it several times after Markus fixed the
>>> related issue.
>>>
>>> I'm attaching the full reports with stacktraces.
>>> Isn't there a real Hudson running to prevent this? The weird things
>>> I experienced today were killing my brain, as I was looking in the
>>> wrong direction, expecting that stuff fixed last week was still fine.
>>> Sorry for saying that all was fine with the Lucene Directory this
>>> morning; I'm going to step through recent revisions to identify the
>>> breaking change, as it's the only way I can help.
>>>
>>> Cheers,
>>> Sanne
>>>
>>> <infinispan-failed-tests-report.tar.bz2>
>
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev