<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 TRANSITIONAL//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; CHARSET=UTF-8">
<META NAME="GENERATOR" CONTENT="GtkHTML/3.32.2">
</HEAD>
<BODY>
There are 3 tests which seemed to me to be failing due to async ops.<BR>
<A HREF="https://issues.jboss.org/browse/JBPAPP-9377">https://issues.jboss.org/browse/JBPAPP-9377</A><BR>
They only fail on EC2, which is slow, so I thought that could be the cause.<BR>
Thanks for the info, I'll look for a different cause.<BR>
<BR>
Ondra<BR>
<BR>
<BR>
<BR>
On Mon, 2012-07-09 at 23:44 -0400, Jason Greene wrote:
<BLOCKQUOTE TYPE=CITE>
<PRE>
All management ops are synchronous, and execute serially. Maybe you are thinking of test ordering issues?
Sent from my iPhone
On Jul 9, 2012, at 9:30 PM, Ondřej Žižka <<A HREF="mailto:ozizka@redhat.com">ozizka@redhat.com</A>> wrote:
> I'd also suggest adding information to the docs of the DMR operations about whether they are sync or async.
> I often see tests broken by race conditions caused by async operations, e.g. an unfinished removal of something in one test while it is being added in the next test.
>
> my2c
> Ondra
>
>
>
> Jason T. Greene wrote on Mon 09. 07. 2012 at 13:16 -0500:
>>
>> We always have the problem of having a set of tests which fail one out
>> of 10 runs, but we leave the tests around hoping one day someone will
>> fix them. The problem is no one does, and it makes regression catching
>> hard. Right now people who submit pull requests have to scan through
>> test results and ask around to figure out if they broke something or not.
>>
>> So I propose a new policy. Any test which intermittently fails will be
>> ignored, with a JIRA opened and assigned to the author, for up to a
>> month. If that test is not passing in one month's time, it will be
>> removed from the codebase.
>>
>> The biggest problem with this policy is that we might completely lose
>> coverage. A number of the clustering tests, for example, fail
>> intermittently, and if we removed them we would have no other coverage.
>> So for special cases like clustering, I am thinking of relocating them
>> to a different test run called "broken-clustering", or something like
>> that. This run would only be monitored by those working on clustering,
>> and would not be included in the main "all tests" run.
>>
>> Any other ideas?
>>
>
</PRE>
</BLOCKQUOTE>
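<BR>
To make the "synchronous" point above concrete for test authors, here is a minimal sketch of removing a resource through the native management client. The host/port and the "test.war" address are illustrative assumptions only, not taken from the failing tests:<BR>
<PRE>
import java.net.InetAddress;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class SyncRemoveExample {
    public static void main(String[] args) throws Exception {
        // Assumes the default AS7 native management port on localhost.
        ModelControllerClient client =
                ModelControllerClient.Factory.create(InetAddress.getByName("localhost"), 9999);
        try {
            // Build a :remove operation for an (illustrative) deployment resource.
            ModelNode op = new ModelNode();
            op.get("operation").set("remove");
            op.get("address").add("deployment", "test.war");

            // execute() blocks until the operation has completed on the server,
            // so when it returns the resource is already gone and the next test
            // cannot race against an unfinished removal.
            ModelNode result = client.execute(op);
            if (!"success".equals(result.get("outcome").asString())) {
                throw new IllegalStateException(result.get("failure-description").asString());
            }
            // An async variant (client.executeAsync(...)) exists, but the caller
            // only behaves asynchronously if it ignores the returned future.
        } finally {
            client.close();
        }
    }
}
</PRE>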
<BR>
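For the flaky-test policy quoted above, one possible mechanism (an illustrative sketch only; the actual testsuite may well use separate Maven modules or profiles instead) is JUnit 4 categories: the main "all tests" run would exclude the marker category via the Surefire excludedGroups parameter, while a dedicated "broken-clustering" execution would include it.<BR>
<PRE>
import org.junit.Ignore;
import org.junit.Test;
import org.junit.experimental.categories.Category;

/** Marker interface used to tag intermittently failing clustering tests. */
interface BrokenClusteringTests {}

public class ExampleClusteringTestCase {

    // Intermittent failure: excluded from the main "all tests" run, but still
    // executed by a separate run that selects the BrokenClusteringTests category.
    @Test
    @Category(BrokenClusteringTests.class)
    public void testFailoverAfterNodeShutdown() {
        // ... test body ...
    }

    // The "ignore plus JIRA" part of the policy could look like this; the
    // tracking issue key is deliberately omitted here.
    @Test
    @Ignore("Intermittently fails; see the tracking JIRA")
    public void testSessionReplicationUnderLoad() {
        // ... test body ...
    }
}
</PRE>
<BR>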
</BODY>
</HTML>