[infinispan-dev] Hidden failures in the testsuite

Vitalii Chepeliuk vchepeli at redhat.com
Mon Aug 24 04:04:34 EDT 2015


Hey Sanne! Yep, you are right, ignoring the output is a BAD IDEA. I realized it's difficult to look through all the logs manually, so we should probably write a parser in Python or bash to grep them, and put it into the bin/ folder with the other scripts. That way we can at least run it after all the tests have finished and analyze the output. As for tests that "appear to be good", nobody knows why; it could be a TestNG/JUnit issue, since we mix the two a lot. This needs further discussion and analysis.
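
Something like the sketch below could go into bin/ as a starting point. It's just a rough idea, not tested against our actual layout; the log root and the patterns are assumptions and would need tuning to whatever the testsuite really writes:

#!/usr/bin/env python
# scan-test-logs.py -- rough sketch: walk a directory of test logs and
# print every line that looks like an unexpected exception or WARN/ERROR.
# The root directory and the patterns are assumptions, not the actual
# testsuite layout; adjust both before relying on it.
import os
import re
import sys

log_root = sys.argv[1] if len(sys.argv) > 1 else "."
suspect = re.compile(r"(Exception|ERROR|WARN)")

for dirpath, _, filenames in os.walk(log_root):
    for name in filenames:
        if not name.endswith(".log"):
            continue
        path = os.path.join(dirpath, name)
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                if suspect.search(line):
                    print("%s:%d: %s" % (path, lineno, line.rstrip()))

A second pass could then whitelist the exceptions that tests create on purpose, so only the genuinely suspicious ones are reported.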

Vitalii 

----- Original Message -----
From: "Sanne Grinovero" <sanne at infinispan.org>
To: "infinispan -Dev List" <infinispan-dev at lists.jboss.org>
Sent: Monday, 10 August 2015, 20:46:06
Subject: [infinispan-dev] Hidden failures in the testsuite

Hi all,
I just updated my local master fork and started the testsuite, as I
sometimes do.

It's great to see that the build was successful, and no tests
*appeared* to have failed.

But! Lazily scrolling up in the console, I see lots of exceptions
which don't look intentional (I'm aware that some tests
intentionally create error conditions). Also, some tests are extremely
verbose, which might be the reason nobody has noticed these.

Some examples:
 - org.infinispan.it.compatibility.EmbeddedRestHotRodTest seems to log
TRACE to the console (and probably the whole module does)
 - CDI tests such as org.infinispan.cdi.InfinispanExtensionRemote seem
to fail in great numbers because of some ClassNotFoundException(s)
and/or ResourceLoadingException(s)
 - OSGi integration tests seem to be all broken by some invalid
integration with Aries / Geronimo
 - OSGi integration tests dump a lot of unnecessary information to the
build console
 - the Infinispan Query tests log lots of WARNs too, around missing
configuration properties and, in some cases, concerning exceptions; I'm
pretty sure I had resolved those in the past; it seems some
refactoring was done without considering the log output.

Please don't ignore the output; if it's too verbose to watch, that
needs to be resolved too.

I also monitor the "expected execution time" of some modules I'm
interested in; that's been useful in some cases to figure out that
there was some regression.
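
(In case anyone wants to automate that: here's a rough sketch of pulling
per-module times out of the Maven surefire reports. It assumes the default
surefire report layout, target/surefire-reports/TEST-*.xml; adjust if we
configure the plugin differently.)

#!/usr/bin/env python
# report-test-times.py -- rough sketch: sum up test execution times from
# the default Maven surefire reports (target/surefire-reports/TEST-*.xml).
import os
import xml.etree.ElementTree as ET

total = 0.0
for dirpath, _, filenames in os.walk("."):
    if not dirpath.endswith(os.path.join("target", "surefire-reports")):
        continue
    for name in filenames:
        if name.startswith("TEST-") and name.endswith(".xml"):
            suite = ET.parse(os.path.join(dirpath, name)).getroot()
            # some surefire versions emit thousands separators in "time"
            elapsed = float((suite.get("time") or "0").replace(",", ""))
            total += elapsed
            print("%8.2fs  %s" % (elapsed, suite.get("name", name)))
print("%8.2fs  TOTAL" % total)

Diffing that output between runs would make execution-time regressions
visible without anyone having to remember the expected numbers.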

One big question: why is it that so many tests "appear to be good" but
are actually broken? I would like to understand that.

Thanks,
Sanne


