From: "Galder Zamarreño" <galder(a)redhat.com>
To: "infinispan -Dev List" <infinispan-dev(a)lists.jboss.org>
Sent: Friday, September 23, 2016 11:33:12 AM
Subject: Re: [infinispan-dev] Hot Rod testing
--
Galder Zamarreño
Infinispan, Red Hat
> On 15 Sep 2016, at 13:58, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
>
> How about turning the problem upside down and creating a TCK suite which
> runs on JUnit and has pluggable clients? The TCK suite would be
> responsible for bootstrapping servers, tearing them down and validating
> the results.
>
> The biggest advantage of this approach is that all those things are pretty
> well known in Java world (e.g. using Arquillian for managing server
> lifecycle or JUnit for assertions). But the biggest challenge is how to
> plug, for example, a JavaScript client into the suite - how do you call it
> from Java?
^ I thought about all of this when working on the JS client, and although,
like you, I thought this was the biggest hurdle, eventually I realised that
there are bigger issues than that:
1. How do you verify that a Javascript client works the way a real Javascript
program would use it?
IOW, even if you could call JS from Java, what you'd be verifying is that
whichever contorted way of calling JS from Java works, which doesn't
necessarily mean it works when a real JS program calls it.
I think the user workflow can be verified separately. Being able to verify the functional
behavior of clients written in multiple languages using a single test suite would be a
huge win, IMO. I agree with you though that this should be coupled with an actual end-user
test where the Javascript client is run against a real node server, a C++ client is
installed from RPMs and built into an application, etc., for a complete certification of a
client.
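To make the pluggable-client idea concrete, a single TCK could code against a small adapter interface, with each client (Java, C++, JS, ...) plugged in behind a thin shim. This is only a sketch: `TckClient`, `InMemoryTckClient` and `basicPutGet` are hypothetical names, and the in-memory stand-in exists purely to keep the example self-contained - a real run would delegate to an actual Hot Rod client, or shell out to a JS/C++ driver.

```java
import java.util.HashMap;
import java.util.Map;

// The adapter interface the TCK codes against; each client binding
// would implement it behind a shim. (Hypothetical names throughout.)
interface TckClient {
    void put(String key, String value);   // maps to the client's put op
    String get(String key);               // maps to the client's get op
}

// Stand-in used only to make this sketch self-contained.
class InMemoryTckClient implements TckClient {
    private final Map<String, String> store = new HashMap<>();
    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }
}

public class TckSuiteSketch {
    // One TCK check; a real suite would have one such method per test id.
    static boolean basicPutGet(TckClient client) {
        client.put("k1", "v1");
        return "v1".equals(client.get("k1"));
    }

    public static void main(String[] args) {
        TckClient client = new InMemoryTckClient();
        System.out.println(basicPutGet(client) ? "HR.BASIC.PUT: PASS" : "HR.BASIC.PUT: FAIL");
    }
}
```

The interesting part is that the suite itself never knows which language the client is written in - only the shim does.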
2. Development workflow
I can't really argue with this point. Any solution that uses a single test suite to
test all clients will by definition not feel native to developers. The question is
whether it makes sense to recreate the test suite in every language, which just
doesn't feel like it can scale.
Thanks,
Alan
The other problem is related to workflow: when you develop in a
scripting, dynamically typed language, the way you go about testing is
slightly different. Since you don't have the type checker to help, you're
almost forced to run your testsuite continuously, and the JS client tests I
developed were geared to make this possible.
To give an example: to make it possible to run tests continuously, the JS client
assumes you have a running node for local tests and a set of servers for
clustered tests (we provide a script for it). By having a running set of
servers, I can run tests continuously and very quickly. This is very different
from how Java-based testsuites work, where each test or testsuite starts the
required servers and then shuts them down. I'd be very upset if developing
my JS client required this kind of waste of time. Moreover, the JS client
tests are designed so that whatever they do, they go back to the initial state
when they finish. This happens for example with the failover tests, where I could
not simply kill the long-running servers; instead, the failover test starts a
bunch of servers which it kills as it goes along to test failover. The result
is that none of the servers started by the failover tests survive when
the test finishes.
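That "leave no servers behind" design boils down to tracking every server spawned during the test and tearing down the survivors in a finally block, so the environment is back to its initial state whatever the test did. A hedged sketch (`ServerHandle` is a hypothetical stand-in for whatever actually spawns and kills a server process):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a spawned server process.
class ServerHandle {
    private boolean alive = true;
    void kill() { alive = false; }
    boolean isAlive() { return alive; }
}

public class FailoverTestSketch {
    // Returns the number of spawned servers still alive afterwards
    // (always 0: the teardown guarantees initial state is restored).
    static int runFailoverScenario() {
        List<ServerHandle> spawned = new ArrayList<>();
        try {
            // Start a bunch of servers just for the failover scenario.
            for (int i = 0; i < 3; i++) spawned.add(new ServerHandle());
            // Kill servers as the test goes along to exercise failover.
            spawned.get(0).kill();
            spawned.get(1).kill();
            // ... assertions against the surviving topology would go here ...
        } finally {
            // Teardown: nothing spawned here survives the test.
            for (ServerHandle s : spawned) if (s.isAlive()) s.kill();
        }
        int survivors = 0;
        for (ServerHandle s : spawned) if (s.isAlive()) survivors++;
        return survivors;
    }

    public static void main(String[] args) {
        System.out.println("servers surviving the test: " + runFailoverScenario());
    }
}
```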
Maybe some day we'll have a Java-based testsuite that more easily allows
continuous testing. Scala, through SBT, does have something along these lines,
so I don't think it's necessarily impossible, but we're not there yet. And,
as I said above, you always have the first issue: testing how the user will
use things.
Cheers,
[1]
https://github.com/infinispan/js-client/blob/master/spec/infinispan_failo...
>
> Thanks
> Sebastian
>
> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes <gustavo(a)infinispan.org>
> wrote:
>
>
> On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero <sanne(a)infinispan.org>
> wrote:
> I was actually planning to start a similar topic, but from the point of
> view of user's testing needs.
>
> I've recently created Hibernate OGM support for Hot Rod, and it wasn't as
> easy to test as other NoSQL databases; luckily I have some knowledge and
> contacts on Infinispan ;) but I had to develop several helpers and refine
> the approach to testing over multiple iterations.
>
> I ended up developing a JUnit rule - handy for individual test runs in the
> IDE - a Maven lifecycle extension, and an Arquillian extension, which I
> needed to run both the Hot Rod server and a WildFly instance to host my
> client app.
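The server-lifecycle rule Sanne describes has the natural shape of JUnit 4's ExternalResource: start the server before the test statement, stop it after, even when the test throws. A self-contained sketch of that lifecycle (without the JUnit dependency, and with the actual server start/stop replaced by a flag so the example runs anywhere):

```java
public class HotRodServerRuleSketch {
    private boolean running;

    void before() { running = true; }   // would start the real Hot Rod server
    void after()  { running = false; }  // would stop it
    boolean isRunning() { return running; }

    // Mirrors how a JUnit rule wraps the test statement: before -> test -> after,
    // with the teardown guaranteed by the finally block.
    static boolean applyAround(HotRodServerRuleSketch rule, Runnable test) {
        rule.before();
        try {
            test.run();
            return true;
        } finally {
            rule.after();
        }
    }

    public static void main(String[] args) {
        HotRodServerRuleSketch rule = new HotRodServerRuleSketch();
        applyAround(rule, () -> System.out.println("server up during test: " + rule.isRunning()));
        System.out.println("server up after test: " + rule.isRunning());
    }
}
```

With real JUnit 4 on the classpath, the same two hooks would simply override `ExternalResource.before()`/`after()` and the rule would be declared with `@Rule` in each test class.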
>
> At some point I was also in trouble with conflicting dependencies so
> considered making a Maven plugin to manage the server lifecycle as a
> proper IT phase - I didn't ultimately make this as I found an easier
> solution but it would be great if Infinispan could provide such helpers to
> end users too.. Forking the ANT scripts from the Infinispan project to
> assemble and start my own (as you do..) seems quite cumbersome for users
> ;)
>
> Especially since the server is not even available via Maven coordinates.
>
> The server is available at [1]
>
> [1]
>
http://central.maven.org/maven2/org/infinispan/server/infinispan-server-b...
>
>
> I'm of course happy to contribute my battle-tested Test helpers to
> Infinispan, but they are meant for JUnit users.
> Finally, comparing to developing OGM integrations for other NoSQL stores..
> It's really hard work when there is no "viewer" of the cache content.
>
> We need some kind of interactive console to explore the stored data. I felt
> like I was driving blind: developing against a black box, where, when something
> doesn't work as expected, it's challenging to figure out whether the bug is in
> the storage method or the reading method, or whether the encoding isn't quite
> right or it's the query options being used.. sometimes it's the flags in use
> or the configuration properties (hell, I've been swearing a lot at
> some of these flags!)
>
> Thanks,
> Sanne
>
> On 15 Sep 2016 11:07, "Tristan Tarrant" <ttarrant(a)redhat.com>
> wrote:
> Recently I've had a chat with Galder, Will and Vittorio about how we
> test the Hot Rod server module and the various clients. We also
> discussed some of this in the past, but we now need to move forward with
> a better strategy.
>
> First up is the Hot Rod server module testsuite: it is the only part of
> the code which still uses Scala. Will has a partial port of it to Java,
> but we're wondering if it is worth completing that work, seeing that
> most of the tests in that testsuite, in particular those related to the
> protocol itself, are actually duplicated by the Java Hot Rod client's
> testsuite which also happens to be our reference implementation of a
> client and is much more extensive.
> The only downside of removing it is that verification will require
> running the client testsuite, instead of being self-contained.
>
> Next up is how we test clients.
>
> The Java client, partially described above, runs all of the tests
> against ad-hoc embedded servers. Some of these tests, in particular
> those related to topology, start and stop new servers on the fly.
>
> The server integration testsuite performs yet another set of tests, some
> of which overlap the above, but using the actual full-blown server. It
> doesn't test for topology changes.
>
> The C++ client wraps the native client in a Java wrapper generated by
> SWIG and runs the Java client testsuite. It then checks against a
> blacklist of known failures. It also has a small number of native tests
> which use the server distribution.
>
> The Node.js client has its own home-grown testsuite which also uses the
> server distribution.
>
> Duplication aside, which in some cases is unavoidable, it is impossible
> to confidently say that each client is properly tested.
>
> Since complete unification is impossible because of the different
> testing harnesses used by the various platforms/languages, I propose the
> following:
>
> - we identify and group the tests depending on their scope (basic
> protocol ops, bulk ops, topology/failover, security, etc). A client
> which implements the functionality of a group MUST pass all of the tests
> in that group with NO exceptions
> - we assign a unique identifier to each group/test combination (e.g.
> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
> collected in a "test book" (some kind of structured file) for comparison
> with client test runs
> - we refactor the Java client testsuite according to the above grouping
> / naming strategy so that testsuites which use the wrapping approach
> (e.g. C++ with SWIG) can consume it by directly specifying the supported
> groups
> - other clients get reorganized so that they support the above grouping
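To make the grouping concrete, here is a sketch of how the "test book" and group selection could work. Only `HR.BASIC.PUT` and `HR.BASIC.PUT_FLAGS_SKIP_LOAD` come from the proposal above; the other ids and the selection logic are made-up illustrations, not a finalized scheme:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TestBookSketch {
    // The "test book": a structured list of group/test ids.
    static final List<String> TEST_BOOK = Arrays.asList(
            "HR.BASIC.PUT",
            "HR.BASIC.PUT_FLAGS_SKIP_LOAD",
            "HR.BULK.GET_ALL",          // hypothetical id
            "HR.TOPOLOGY.FAILOVER");    // hypothetical id

    // "HR.BASIC.PUT" -> "HR.BASIC"
    static String groupOf(String testId) {
        int firstDot = testId.indexOf('.');
        int secondDot = testId.indexOf('.', firstDot + 1);
        return testId.substring(0, secondDot);
    }

    // The tests a client MUST pass, given the groups it claims to
    // implement: every test in a declared group, with no exceptions.
    static List<String> testsFor(Set<String> supportedGroups) {
        List<String> selected = new ArrayList<>();
        for (String id : TEST_BOOK)
            if (supportedGroups.contains(groupOf(id))) selected.add(id);
        return selected;
    }

    public static void main(String[] args) {
        // E.g. a client that implements basic and bulk ops but no failover:
        Set<String> groups = new HashSet<>(Arrays.asList("HR.BASIC", "HR.BULK"));
        System.out.println(testsFor(groups));
        // prints [HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, HR.BULK.GET_ALL]
    }
}
```

A client's test run report could then be diffed against exactly this selection to certify the groups it claims.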
>
> I understand this is quite some work, but the current situation isn't
> really sustainable.
>
> Let me know what your thoughts are
>
>
> Tristan
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev