Some remarks inline

2010/9/7 Marko Strukelj <marko.strukelj@gmail.com>

In order to run the Selenium tests with JBoss, I have to make the following modification to the current pom.xml:

Index: testsuite/selenium-snifftests/pom.xml
===================================================================
--- testsuite/selenium-snifftests/pom.xml       (revision 4045)
+++ testsuite/selenium-snifftests/pom.xml       (working copy)
@@ -13,12 +13,12 @@

        <properties>
                <org.selenium.server.version>1.0.1</org.selenium.server.version>
-               <selenium.port>4444</selenium.port>
+               <selenium.port>8444</selenium.port>
                <selenium.browser>firefox</selenium.browser>
                <selenium.timeout>10000</selenium.timeout>
                <selenium.speed>300</selenium.speed>
                <selenium.host>localhost</selenium.host>
-               <org.selenium.maven-plugin.version>1.0</org.selenium.maven-plugin.version>
+               <org.selenium.maven-plugin.version>1.0.1</org.selenium.maven-plugin.version>
Could you explain why exactly you need this maintenance version?
        </properties>

        <dependencies>
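
Side note: instead of patching the pom, both values can usually be overridden on the command line, since -D user properties take precedence over pom <properties> during interpolation (assuming nothing else in the build pins them), e.g.:

mvn -Dselenium.port=8444 -Dorg.selenium.maven-plugin.version=1.0.1 integration-test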


I've been running these a lot over the last two days.

They aren't very useful at the moment for doing pre-commit checks to catch any introduction of systemic issues, primarily for three reasons:

 - one, it takes 1 hour 40 mins on my laptop to run the suite. If I want a 'before my change' and an 'after my change' run, I have to run the suite twice to see a diff in test failures. The name 'snifftests' gives the impression that this is a quick testsuite to be run before doing a commit :)
 - two, at the moment many tests are failing - my last run: Tests run: 248, Failures: 74, Errors: 21, Skipped: 0 - which makes it difficult to determine whether any failures are due to my code changes. The same tests sometimes fail with a 'Failure' and sometimes with an 'Error', so the end report always looks different, making it a challenge to find the effective differences even with a diff tool.

 - three, some of the tests seem to fail randomly - more likely they are sensitive to initial conditions, which can change if some other test fails to do a proper cleanup. It can also happen when a test run is killed in the middle of execution. The situation is exacerbated by the fact that the tests are run in random order ...
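
If the surefire plugin version in use supports it, forcing a deterministic order would at least make such failures reproducible (the runOrder parameter is an assumption on my part - check the plugin docs for the version we use):

mvn -Dsurefire.runOrder=alphabetical integration-test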

Point number one could be addressed by making a set of simple-to-maintain tests that perform a few operations touching many aspects of the portal. These would go into the 'snifftests' module. The exhaustive mass of other, more detailed tests - which are undoubtedly also a burden to maintain - would go into 'alltests'.
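
With such a split, the quick module alone could be run via Maven's reactor selection, something like the following (assuming the module keeps its testsuite/selenium-snifftests path; -pl needs Maven 2.1+):

mvn -pl testsuite/selenium-snifftests integration-test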
 
In fact, there is already a distinction between functional tests and sniff tests:
- Sniff tests have "SNF" in the test name
- These tests can be run specifically by passing "-Dtest=Test*SNF*" to your Maven command (see the example after this list)
- We will shortly be implementing what we call "Smoke tests", which will exercise most GateIn functionality in fewer than 20 steps (these will be for testing installations on many different configurations)
- The folder names and the way the different tests are run will also change in SELEGEN 1.1 (expected in a week or so... see more improvements in the next points)
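
For example, running only the sniff tests would look something like this (the exact pattern syntax depends on the surefire version in use):

mvn -Dtest='Test*SNF*' integration-test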

Point number two could be addressed by creating some kind of final report that throws 'failures' and 'errors' into a single set and sorts it alphabetically.
- This will become a reality with SELEGEN 1.1 (we will use the Doxia Maven framework to create HTML reports)...
- See the attached "surefire-report.html" for an old example (I've been using it on my own for some time). The format is not perfect, but it is better than nothing for now.
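
Until then, a crude approximation can be scraped from the plain-text surefire reports (assuming the default target/surefire-reports output; the exact marker strings may vary by surefire version):

grep -hE '<<< (FAILURE|ERROR)!' target/surefire-reports/*.txt | sort -u

Sorted output like this also makes the 'before'/'after' diff from point one much more practical.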

For point number three, a slow workaround is to run another 'mvn install' in packaging/pkg.
Maybe we could add some cleanup mechanism, so that each test can define a tear-down sequence which goes through all of its steps while ignoring any errors. Removing test artifacts created during a test is already part of the existing tests, but if it were separated out it could be run explicitly (see the command below):
- We have a problem using test suites with our Maven approach; that is why each script is independent and cleans up after itself.
- This also lets us keep the artifacts of failed tests, to better analyse what happened.
- We would need to pass a specific scenario between each test... (any ideas on this would be helpful)

mvn -Pselenium-cleanup integration-test


Otherwise, thumbs up for the sheer number of these tests and the systematic approach ...
For that we can thank Hang Nguyen for her work on this!

I have attached two docs that can give you more visibility into how we work at eXo Vietnam.


- marko



_______________________________________________
gatein-dev mailing list
gatein-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/gatein-dev