[infinispan-dev] Hot Rod testing

Alan Field afield at redhat.com
Thu Sep 15 12:42:04 EDT 2016


I also like this idea of a JUnit-based TCK for all clients, if this is possible.
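
The part I keep coming back to is Sebastian's question about plugging non-Java clients in. One option might be a tiny driver interface that the TCK calls, with non-Java clients (Node.js, C++) sitting behind a small shim process that implements it. Purely a sketch - every name here is invented:

    // Hypothetical driver SPI the TCK would call; nothing like this exists yet.
    // Non-Java clients would be driven through a shim implementing this interface.
    public interface TckClientDriver {
        void start(String serverHost, int serverPort) throws Exception;
        void put(String cacheName, byte[] key, byte[] value) throws Exception;
        byte[] get(String cacheName, byte[] key) throws Exception;
        void stop() throws Exception;
    }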

> - we identify and group the tests depending on their scope (basic
> protocol ops, bulk ops, topology/failover, security, etc). A client
> which implements the functionality of a group MUST pass all of the tests
> in that group with NO exceptions

This makes sense to me, but I also agree that the hard part will be categorizing the tests into these buckets. Should the groups be divided by client intelligence as well? I'm just wondering about "dumb" clients like REST and Memcached.
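
If we do split by intelligence, maybe each test just carries two group dimensions, something like this (group names are invented, and I'm assuming we stay on TestNG for the Java suite):

    import org.testng.annotations.Test;

    // Sketch only: each test carries a scope group plus an "intelligence" group,
    // so a dumb client can run HR.BASIC but skip HR.TOPOLOGY entirely.
    public class PutTest {

       @Test(groups = {"HR.BASIC", "intelligence.basic"})
       public void put() {
          // basic protocol op
       }

       @Test(groups = {"HR.TOPOLOGY", "intelligence.hash-aware"})
       public void putAfterFailover() {
          // topology/failover op
       }
    }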

> - we assign a unique identifier to each group/test combination (e.g.
> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
> collected in a "test book" (some kind of structured file) for comparison
> with client test runs

Are these identifiers just used as the JUnit test group names?
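
Whichever way we map them (JUnit group names, TestNG groups, or plain tags), if the identifier doubles as the group name then the comparison against the "test book" can stay very simple. A toy sketch of what I mean, with the file names and the one-identifier-per-line format completely made up:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Set;
    import java.util.TreeSet;

    // Diff a client run against the test book: every identifier in the book
    // must appear in the list of tests the client actually executed.
    public class TestBookCheck {
       public static void main(String[] args) throws Exception {
          Set<String> expected = new TreeSet<>(Files.readAllLines(Paths.get("test-book.txt")));
          Set<String> executed = new TreeSet<>(Files.readAllLines(Paths.get("client-run.txt")));
          expected.removeAll(executed);
          if (!expected.isEmpty()) {
             System.err.println("Identifiers missing from the client run: " + expected);
             System.exit(1);
          }
       }
    }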

> - we refactor the Java client testsuite according to the above grouping
> / naming strategy so that testsuites which use the wrapping approach
> (i.e. C++ with SWIG) can consume it by directly specifying the supported
> groups

This makes sense to me as well.
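
For the wrapping consumers, I could imagine something as simple as a programmatic TestNG run that only enables the groups a given client claims to support. A rough sketch, with the suite file name made up:

    import java.util.Collections;

    import org.testng.TestNG;

    // How a wrapping testsuite (e.g. the SWIG-wrapped C++ client) might consume
    // the refactored Java suite: run it programmatically with only the groups
    // that client implements enabled.
    public class WrappedClientRunner {
       public static void main(String[] args) {
          TestNG testng = new TestNG();
          testng.setTestSuites(Collections.singletonList("hotrod-client-suite.xml"));
          testng.setGroups("HR.BASIC,HR.BULK");
          testng.run();
          System.exit(testng.getStatus());
       }
    }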

I think there are two other requirements: the client tests must use a real server distribution rather than the embedded server, and any non-duplicated tests from the server integration test suite have to be migrated to the client test suite. I also think this is an opportunity to inventory the client test suite and reduce it to the minimal set of tests that verify adherence to the protocol and the expected behavior beyond the protocol.
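
To make the "real distribution" requirement less painful, we could ship a small helper along the lines of what Sanne described. A bare-bones sketch (the script path inside the unpacked distribution is an assumption, and real code would poll the Hot Rod port rather than sleep):

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    // Boot a real server distribution for client tests instead of an embedded server.
    public class ServerDistributionSupport {
       private Process server;

       public void start(File serverHome) throws Exception {
          server = new ProcessBuilder(new File(serverHome, "bin/standalone.sh").getAbsolutePath())
                .inheritIO()
                .start();
          // Crude wait; a real helper would poll the Hot Rod port until it accepts connections.
          TimeUnit.SECONDS.sleep(10);
       }

       public void stop() {
          if (server != null) {
             server.destroy();
          }
       }
    }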

Thanks,
Alan

----- Original Message -----
> From: "Tristan Tarrant" <ttarrant at redhat.com>
> To: infinispan-dev at lists.jboss.org
> Sent: Thursday, September 15, 2016 12:27:54 PM
> Subject: Re: [infinispan-dev] Hot Rod testing
> 
> Anyway, I like the idea. Can we sketch a POC?
> 
> Tristan
> 
> 
> On 15/09/16 14:24, Tristan Tarrant wrote:
> > Whatever we choose, this solves only half of the problem: enumerating
> > and classifying the tests is the hard part.
> >
> > Tristan
> >
> > On 15/09/16 13:58, Sebastian Laskawiec wrote:
> >> How about turning the problem upside down and creating a TCK suite
> >> which runs on JUnit and has pluggable clients? The TCK suite would be
> >> responsible for bootstrapping servers, tearing them down and
> >> validating the results.
> >>
> >> The biggest advantage of this approach is that all those things are
> >> pretty well known in Java world (e.g. using Arquillian for managing
> >> server lifecycle or JUnit for assertions). But the biggest challenge
> >> is how to plug, for example, a JavaScript client into the suite - how
> >> do we call it from Java?
> >>
> >> Thanks
> >> Sebastian
> >>
> >> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes
> >> <gustavo at infinispan.org> wrote:
> >>
> >>
> >>
> >>     On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero
> >>     <sanne at infinispan.org> wrote:
> >>
> >>         I was actually planning to start a similar topic, but from the
> >>         point of view of user's testing needs.
> >>
> >>         I've recently created Hibernate OGM support for Hot Rod, and
> >>         it wasn't as easy to test as other NoSQL databases; luckily I
> >>         have some knowledge and contacts on Infinispan ;) but I had to
> >>         develop several helpers and refine the approach to testing
> >>         over multiple iterations.
> >>
> >>         I ended up developing a JUnit rule - handy for individual test
> >>         runs in the IDE - plus a Maven life cycle extension and an
> >>         Arquillian extension, which I needed to run both the Hot Rod
> >>         server and a WildFly instance hosting my client app.
> >>
> >>         At some point I was also in trouble with conflicting
> >>         dependencies, so I considered making a Maven plugin to manage the
> >>         server lifecycle as a proper IT phase - I didn't ultimately
> >>         make this as I found an easier solution but it would be great
> >>         if Infinispan could provide such helpers to end users too..
> >>         Forking the ANT scripts from the Infinispan project to
> >>         assemble and start my own (as you do..) seems quite cumbersome
> >>         for users ;)
> >>
> >>         Especially since the server is not even available via Maven
> >>         coordinates.
> >>
> >>     The server is available at [1]
> >>
> >>     [1]
> >> http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/
> >>
> >>         I'm of course happy to contribute my battle-tested Test
> >>         helpers to Infinispan, but they are meant for JUnit users.
> >>         Finally, compared to developing OGM integrations for other
> >>         NoSQL stores.. it's really hard work when there is no "viewer"
> >>         of the cache content.
> >>
> >>         We need some kind of interactive console to explore the stored
> >>         data. I felt like I was driving blind: developing against a
> >>         black box, where, when something doesn't work as expected, it's
> >>         challenging to figure out whether the bug is in the storage
> >>         method or in the reading method, or whether the encoding isn't
> >>         quite right, or the query options being used.. sometimes it's
> >>         the flags being used or the configuration properties (hell,
> >>         I've been swearing a lot at some of these flags!)
> >>
> >>         Thanks,
> >>         Sanne
> >>
> >>
> >>         On 15 Sep 2016 11:07, "Tristan Tarrant" <ttarrant at redhat.com> wrote:
> >>
> >>             Recently I've had a chat with Galder, Will and Vittorio
> >>             about how we
> >>             test the Hot Rod server module and the various clients. We
> >>             also
> >>             discussed some of this in the past, but we now need to
> >>             move forward with
> >>             a better strategy.
> >>
> >>             First up is the Hot Rod server module testsuite: it is the
> >>             only part of
> >>             the code which still uses Scala. Will has a partial port
> >>             of it to Java,
> >>             but we're wondering if it is worth completing that work,
> >>             seeing that
> >>             most of the tests in that testsuite, in particular those
> >>             related to the
> >>             protocol itself, are actually duplicated by the Java Hot
> >>             Rod client's
> >>             testsuite which also happens to be our reference
> >>             implementation of a
> >>             client and is much more extensive.
> >>             The only downside of removing it is that verification
> >>             will require
> >>             running the client testsuite, instead of being
> >> self-contained.
> >>
> >>             Next up is how we test clients.
> >>
> >>             The Java client, partially described above, runs all of
> >>             the tests
> >>             against ad-hoc embedded servers. Some of these tests, in
> >>             particular
> >>             those related to topology, start and stop new servers on
> >>             the fly.
> >>
> >>             The server integration testsuite performs yet another set
> >>             of tests, some
> >>             of which overlap the above, but using the actual
> >>             full-blown server. It
> >>             doesn't test for topology changes.
> >>
> >>             The C++ client wraps the native client in a Java wrapper
> >>             generated by
> >>             SWIG and runs the Java client testsuite. It then checks
> >>             against a
> >>             blacklist of known failures. It also has a small number of
> >>             native tests
> >>             which use the server distribution.
> >>
> >>             The Node.js client has its own home-grown testsuite which
> >>             also uses the
> >>             server distribution.
> >>
> >>             Duplication aside, which in some cases is unavoidable, it
> >>             is impossible
> >>             to confidently say that each client is properly tested.
> >>
> >>             Since complete unification is impossible because of the
> >>             different
> >>             testing harnesses used by the various platforms/languages,
> >>             I propose the
> >>             following:
> >>
> >>             - we identify and group the tests depending on their scope
> >>             (basic
> >>             protocol ops, bulk ops, topology/failover, security, etc).
> >>             A client
> >>             which implements the functionality of a group MUST pass
> >>             all of the tests
> >>             in that group with NO exceptions
> >>             - we assign a unique identifier to each group/test
> >>             combination (e.g.
> >>             HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These
> >>             should be
> >>             collected in a "test book" (some kind of structured file)
> >>             for comparison
> >>             with client test runs
> >>             - we refactor the Java client testsuite according to the
> >>             above grouping
> >>             / naming strategy so that testsuites which use the wrapping
> >>             approach
> >>             (i.e. C++ with SWIG) can consume it by directly specifying
> >>             the supported
> >>             groups
> >>             - other clients get reorganized so that they support the
> >>             above grouping
> >>
> >>             I understand this is quite some work, but the current
> >>             situation isn't
> >>             really sustainable.
> >>
> >>             Let me know what your thoughts are
> >>
> >>
> >>             Tristan
> >>             --
> >>             Tristan Tarrant
> >>             Infinispan Lead
> >>             JBoss, a division of Red Hat
> >
> >
> 
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> 
> 

