There are 3 tests which seemed to me to be failing due to async ops.
https://issues.jboss.org/browse/JBPAPP-9377
They only fail on EC2, which is slow, so I thought that could be the cause.
Thanks for the info, I'll look for a different cause.
Ondra
On Mon, 2012-07-09 at 23:44 -0400, Jason Greene wrote:
All management ops are synchronous and execute serially. Maybe you
are thinking of test ordering issues?
Sent from my iPhone
On Jul 9, 2012, at 9:30 PM, Ondřej Žižka <ozizka(a)redhat.com> wrote:
> I'd also suggest adding information to the docs of DMR operations on
whether they are sync or async.
> Often I see tests broken by race conditions caused by async operations, such as
an unfinished removal of something in one test while it is being added in the next test.
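The race described above can be sketched in plain Java (a hypothetical illustration, not code from the actual test suite): test B re-adds a resource while test A's asynchronous removal is still pending, so the pending removal can delete it out from under test B; awaiting the removal's completion makes the sequence deterministic.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncRemovalRace {
    // Stands in for the server's set of deployed resources.
    static final Set<String> deployed = ConcurrentHashMap.newKeySet();
    static final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Simulated async removal: returns immediately, removal happens later.
    static Future<?> removeAsync(String name) {
        return pool.submit(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            deployed.remove(name);
        });
    }

    public static void main(String[] args) throws Exception {
        deployed.add("app.war");                 // state left by test A
        Future<?> removal = removeAsync("app.war");
        // A flaky test B would call deployed.add("app.war") right here,
        // and the still-pending removal could then delete its resource.
        removal.get();                           // await completion instead
        deployed.add("app.war");                 // now safe: old copy is gone
        System.out.println(deployed.contains("app.war")); // prints "true"
        pool.shutdown();
    }
}
```

If the operation really were synchronous (as Jason says it is), the explicit `removal.get()` would be unnecessary; the flakiness only arises when completion is assumed rather than awaited.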
>
> my2c
> Ondra
>
>
>
On Mon, 09. 07. 2012 at 13:16 -0500, Jason T. Greene wrote:
>>
>> We always have a set of tests which fail one out of 10 runs, but we
>> leave them around hoping someone will fix them one day. The problem is
>> that no one does, and it makes catching regressions hard. Right now,
>> people who submit pull requests have to scan through test results and
>> ask around to figure out whether they broke something or not.
>>
>> So I propose a new policy. Any test which fails intermittently will be
>> ignored and a JIRA opened against the author for up to a month. If the
>> test is not passing within one month, it will be removed from the codebase.
>>
>> The biggest problem with this policy is that we might completely lose
>> coverage. A number of the clustering tests, for example, fail
>> intermittently, and if we removed them we would have no other coverage.
>> So for special cases like clustering, I am thinking of relocating them
>> to a different test run called "broken-clustering", or something like
>> that. This run would only be monitored by those working on clustering,
>> and would not be included in the main "all tests" run.
>>
>> Any other ideas?
>>
>