Recently I've had a chat with Galder, Will and Vittorio about how we
test the Hot Rod server module and the various clients. We also
discussed some of this in the past, but we now need to move forward with
a better strategy.
First up is the Hot Rod server module testsuite: it is the only part of
the code base that still uses Scala. Will has a partial port of it to
Java, but we're wondering whether completing that work is worthwhile,
since most of the tests in that testsuite, in particular those covering
the protocol itself, are duplicated by the Java Hot Rod client's
testsuite. The Java client also happens to be our reference client
implementation, and its testsuite is much more extensive.
The only downside of removing it is that verifying the server module
will require running the client testsuite instead of being
self-contained.
Next up is how we test clients.
The Java client, already mentioned above, runs all of its tests against
ad-hoc embedded servers. Some of these tests, in particular those
related to topology, start and stop new servers on the fly.
The server integration testsuite performs yet another set of tests,
some of which overlap with the above, but against the actual full-blown
server. It doesn't test topology changes.
The C++ client wraps the native client in a Java wrapper generated by
SWIG and runs the Java client testsuite, checking the results against a
blacklist of known failures. It also has a small number of native tests
which use the server distribution.
The Node.js client has its own home-grown testsuite which also uses the
server distribution.
Duplication aside (in some cases it is unavoidable), it is impossible
to say with confidence that each client is properly tested.
Since complete unification is impossible because of the different
testing harnesses used by the various platforms/languages, I propose the
following:
- we identify and group the tests according to their scope (basic
protocol ops, bulk ops, topology/failover, security, etc.). A client
that implements the functionality of a group MUST pass all of the tests
in that group with NO exceptions (see the grouping sketch after this
list)
- we assign a unique identifier to each group/test combination (e.g.
HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc.). These should be
collected in a "test book" (some kind of structured file) for
comparison with client test runs (one possible format is sketched
below)
- we refactor the Java client testsuite according to the above
grouping/naming strategy so that testsuites which use the wrapping
approach (e.g. C++ with SWIG) can consume it by directly specifying the
supported groups (see the group-selection example below)
- other clients get reorganized so that they support the above grouping
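To make the first two points concrete, here is a rough sketch of what
the grouping and naming could look like in the Java client testsuite,
assuming we stick with TestNG as the harness (class and method names
here are purely illustrative):

    import org.testng.annotations.Test;

    // The class-level group maps to a "test book" group; the comment
    // next to each method gives the test book identifier it covers.
    @Test(groups = "HR.BASIC")
    public class BasicProtocolOpsTest {

       @Test // HR.BASIC.PUT
       public void testPut() {
          // exercise a plain put against the server
       }

       @Test // HR.BASIC.PUT_FLAGS_SKIP_LOAD
       public void testPutSkipCacheLoad() {
          // exercise put with the SKIP_CACHE_LOAD flag
       }
    }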
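The test book itself could be as simple as a flat properties-style file
mapping each identifier to a one-line description, so that a client's
test run can be diffed against it. The file name, the identifiers
beyond the two already mentioned, and the format are just a suggestion:

    # hotrod-testbook.properties (hypothetical)
    HR.BASIC.PUT                 = put stores a value under a key
    HR.BASIC.PUT_FLAGS_SKIP_LOAD = put with SKIP_CACHE_LOAD bypasses the store
    HR.BULK.PUT_ALL              = putAll stores multiple entries
    HR.TOPOLOGY.FAILOVER         = client fails over when a node leaves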
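Consuming only the supported groups would then just be a matter of
passing the group names to the harness. With TestNG run through Maven
Surefire, for example, the SWIG-wrapped C++ client could do something
like:

    # run only the groups this client claims to implement
    mvn verify -Dgroups=HR.BASIC,HR.BULK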
I understand this is quite some work, but the current situation isn't
really sustainable.
Let me know what your thoughts are
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat