From ttarrant at redhat.com Tue Sep 1 04:55:20 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Tue, 1 Sep 2015 10:55:20 +0200
Subject: [infinispan-dev] Branching for 8.1
Message-ID: <55E567F8.1080803@redhat.com>

Hi guys,

I've revised the calendar for Infinispan releases in Jira [1].

Regarding micros, we should be releasing both 8.0.1.Final and 7.2.5.Final
next Monday (7th September), with future on-demand micros.

The 8.1.0.Final release is scheduled for 16th November, with the following
breakdown:

8.1.0.Alpha1   16th September
8.1.0.Alpha2   1st October
8.1.0.Beta1    15th October
8.1.0.Beta2    29th October
8.1.0.CR1      8th November
8.1.0.Final    16th November

[1] https://issues.jboss.org/plugins/servlet/project-config/ISPN/versions

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From pedro at infinispan.org Tue Sep 1 05:18:59 2015
From: pedro at infinispan.org (Pedro Ruivo)
Date: Tue, 1 Sep 2015 10:18:59 +0100
Subject: [infinispan-dev] Infinispan 8.0 is released!
Message-ID: <55E56D83.6080501@infinispan.org>

Dear community,

The final release of Infinispan 8 is finally available. Check out our blog
for the complete list of cool features introduced!

http://blog.infinispan.org/2015/08/infinispan-800final.html

Cheers,
The Infinispan Team.

From jholusa at redhat.com Wed Sep 2 04:40:34 2015
From: jholusa at redhat.com (Jiri Holusa)
Date: Wed, 2 Sep 2015 04:40:34 -0400 (EDT)
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <694240932.16714234.1441179080250.JavaMail.zimbra@redhat.com>
Message-ID: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com>

Hi all,

we've been thinking for a while about how to test the ISPN uber jars. The
current status is that we don't actually have many tests in the testsuite;
there are a few tests in the integrationtests/all-embedded-* modules that
are basically copies of the actual tests in the corresponding modules. We
think that this test coverage is not enough and, more importantly, that
these tests are duplicates.

The questions now are:
* which tests should be invoked with uber-jars? The whole ISPN testsuite?
Only the integrationtests module?
* how would it run? Create different Maven profiles for "classic" jars and
uber jars? Or try some Maven exclusion magic, if that is even possible?

Some time ago we had a discussion about this with Sebastian, who suggested
that running only the integrationtests module would be sufficient, because
uber-jars are really about packaging, not the functionality itself. But I
don't know if the test coverage at that level is sufficient; I would be
much more confident if we could run the whole ISPN testsuite against
uber-jars.

I'm opening this for wider discussion as we should agree on the way to do
it, so we can do it right :)

Cheers,
Jiri

From mgencur at redhat.com Wed Sep 2 04:50:09 2015
From: mgencur at redhat.com (Martin Gencur)
Date: Wed, 02 Sep 2015 10:50:09 +0200
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com>
Message-ID: <55E6B841.4090808@redhat.com>

Hi Jiri,
comments inline.

On 2.9.2015 10:40, Jiri Holusa wrote:
> Hi all,
>
> we've been thinking for a while about how to test the ISPN uber jars. The
> current status is that we don't actually have many tests in the
> testsuite; there are a few tests in the integrationtests/all-embedded-*
> modules that are basically copies of the actual tests in the
> corresponding modules. We think that this test coverage is not enough
> and, more importantly, that these tests are duplicates.
>
> The questions now are:
> * which tests should be invoked with uber-jars? The whole ISPN testsuite?
> Only the integrationtests module?

The goal is to run the whole test suite because, as you said, we don't
have enough tests in integrationtests/*. And we can't duplicate all the
test classes from the individual modules there.

> * how would it run? Create different Maven profiles for "classic" jars
> and uber jars? Or try some Maven exclusion magic, if that is even
> possible?
>
> Some time ago we had a discussion about this with Sebastian, who
> suggested that running only the integrationtests module would be
> sufficient, because uber-jars are really about packaging, not the
> functionality itself. But I don't know if the test coverage at that level
> is sufficient; I would be much more confident if we could run the whole
> ISPN testsuite against uber-jars.

Right. Uber-jars are about packaging, but you don't know that the
packaging is right until you try all the features and see that everything
works. There might be some classes missing (just for some particular
features), the same classes in different packages, or the Manifest.mf
might be corrupted so that something won't work in OSGi.

I'd prefer a separate Maven profile. IMO, exclusions are too error-prone.

Martin

> I'm opening this for wider discussion as we should agree on the way to
> do it, so we can do it right :)
>
> Cheers,
> Jiri
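Martin's point that broken packaging only shows up once a feature class is
actually loaded can at least be smoke-checked cheaply. The sketch below is
purely illustrative (the sampled class names are examples, not an agreed
list): it fails fast if the uber jar dropped or relocated a representative
class from a feature area.

    import java.util.Arrays;
    import java.util.List;

    public class UberJarSmokeCheck {

        // One public entry point per feature area; extend as needed.
        private static final List<String> REPRESENTATIVE_CLASSES = Arrays.asList(
                "org.infinispan.manager.DefaultCacheManager",
                "org.infinispan.client.hotrod.RemoteCacheManager",
                "org.infinispan.query.Search");

        public static void main(String[] args) {
            for (String className : REPRESENTATIVE_CLASSES) {
                try {
                    // Resolving the class exercises the uber jar's packaging.
                    Class.forName(className);
                } catch (ClassNotFoundException e) {
                    throw new AssertionError("Missing from uber jar: " + className, e);
                }
            }
            System.out.println("All representative classes resolved");
        }
    }

A check like this complements, rather than replaces, running the functional
tests against the uber jar, since it says nothing about manifest corruption
or behavioural differences.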
From galder at redhat.com Wed Sep 2 09:43:11 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Wed, 2 Sep 2015 09:43:11 -0400 (EDT)
Subject: [infinispan-dev] New blog post on Functional Map API: Working with single entries
In-Reply-To: <1789361391.23411642.1441201378300.JavaMail.zimbra@redhat.com>
Message-ID: <1089311380.23411666.1441201391991.JavaMail.zimbra@redhat.com>

Hi all,

I've just published a new blog post that continues the introduction of the
Functional Map API. This time, the blog focuses on working with single
entries:

http://blog.infinispan.org/2015/09/functional-map-api-working-with-single.html

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

From emmanuel at hibernate.org Wed Sep 2 13:54:35 2015
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Wed, 2 Sep 2015 19:54:35 +0200
Subject: [infinispan-dev] Building the website
Message-ID:

I have had my share of bad Ruby dependency experiences in the past, but for
the love of me, I cannot make the Infinispan website build on Mac OS X
10.10.5.

I've done

rake clean[all]
rake setup[local]

Got into this problem

https://gist.github.com/emmanuelbernard/6692c6f43237218d24fd

Which I fixed with

bundle config build.libv8 -- --with-system-v8
rake clean[all]
rake setup[local]

And now have this problem

https://gist.github.com/emmanuelbernard/b4531a12a1ee2105435a

Anyone had more success?

Emmanuel

From emmanuel at hibernate.org Thu Sep 3 03:36:52 2015
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Thu, 3 Sep 2015 09:36:52 +0200
Subject: [infinispan-dev] Building the website
In-Reply-To:
References:
Message-ID: <68F54909-2147-4CBD-B4F9-EAAAD613F47D@hibernate.org>

I forgot to mention the brew install v8 during my first error hoop. I also
tried brew install v8-315, no success.

> On 02 Sep 2015, at 19:54, Emmanuel Bernard wrote:
>
> I have had my share of bad Ruby dependency experiences in the past, but
> for the love of me, I cannot make the Infinispan website build on Mac OS
> X 10.10.5.
>
> I've done
>
> rake clean[all]
> rake setup[local]
>
> Got into this problem
>
> https://gist.github.com/emmanuelbernard/6692c6f43237218d24fd
>
> Which I fixed with
>
> bundle config build.libv8 -- --with-system-v8
> rake clean[all]
> rake setup[local]
>
> And now have this problem
>
> https://gist.github.com/emmanuelbernard/b4531a12a1ee2105435a
>
> Anyone had more success?
>
> Emmanuel

From galder at redhat.com Thu Sep 3 03:49:06 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Thu, 3 Sep 2015 03:49:06 -0400 (EDT)
Subject: [infinispan-dev] Blue-Green deployment scenario
In-Reply-To: <55D9D9D7.7000009@sweazer.com>
References: <55D9D9D7.7000009@sweazer.com>
Message-ID: <529260913.24033999.1441266546280.JavaMail.zimbra@redhat.com>

Hi Christian,

The question should be directed, if you've not already done so, to our user
forums: http://infinispan.org/community/

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Hello,
>
> I have been reading the rolling upgrade chapter [1] from the
> documentation and I have some questions.
>
> 1. The documentation states that in the target cluster, every cache
>    that should be migrated should use a CLI cache loader pointing to
>    the source cluster. I suppose that this can only be configured via
>    XML but not via the CLI or JMX? That would be bad because after a
>    node restart the cache loader would be enabled again.
> 2. What would the JMX URL look like if I wanted to connect to a secured
>    Wildfly over HTTP? I was thinking of
>    jmx:http-remoting-jmx://USER:PASSWORD@HOST:PORT/CACHEMANAGER/CACHE
> 3. What do I need to do to roll back to the source cluster after
>    switching a few nodes to the target cluster?
>
> Thanks in advance!
>
> Regards,
> Christian
>
> [1] http://infinispan.org/docs/7.2.x/user_guide/user_guide.html#_rolling_upgrades_for_infinispan_library_embedded_mode

From galder at redhat.com Thu Sep 3 05:53:26 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Thu, 3 Sep 2015 05:53:26 -0400 (EDT)
Subject: [infinispan-dev] Redis infinispan cache store
In-Reply-To:
References: <55B8A148.1090709@redhat.com> <55BB2B48.5080802@redhat.com> <55DEF2B4.80506@redhat.com>
Message-ID: <1555718029.24080898.1441274006823.JavaMail.zimbra@redhat.com>

Great stuff Simon, thanks for contributing that! :D

--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> I have added a license as well as configuration snippets on use. I think
> it's probably best for Infinispan if it were transferred to the
> Infinispan org.
>
> Looking forward to your feedback.
>
> Thanks,
> Simon
>
> On 27 August 2015 at 12:21, Tristan Tarrant wrote:
>
> > Thank you Simon, this is excellent news !
> >
> > On 27/08/2015 12:49, Simon Paulger wrote:
> > > Hi,
> > >
> > > This is now done and is available to see here:
> > > https://github.com/spaulg/infinispan-cachestore-redis. I used the
> > > remote store within the infinispan repo as a base of reference.
> >
> > I see there is no license associated with that repo. I think you should
> > add one.
> > Would you like the repo to become officially owned by the Infinispan
> > organization?
> >
> > > There are some points that may be worth further discussion. They are:
> > > 1. the cache loader size method return type is limited to int. Redis
> > > servers can hold much more than Integer.MAX_VALUE, and the Jedis
> > > client method for counting items on the Redis server returns longs
> > > for each server, which in addition must be totalled up across servers
> > > when using a Redis cluster topology. To get around this I am checking
> > > for a long over Integer.MAX_VALUE, logging a warning, and returning
> > > Integer.MAX_VALUE.
> >
> > This is a last-minute change we could do in Infinispan 8's
> > AdvancedCacheLoader.
> >
> > > 2. Redis handles expiration. I am using lifespan to immediately set
> > > the expiration of the cache entry in Redis, and when that lifespan is
> > > reached the item is immediately purged by Redis itself. This means
> > > there is no idle time, and there is no purge method implementation.
> >
> > Good :)
> >
> > > 3. A few unit tests around expiration had to be disabled as they
> > > require changes to time. As expiration is handled by Redis, I would
> > > have to change the system time to make Redis force expiration. For
> > > now, they are just disabled.
> >
> > Absolutely reasonable.
> >
> > > I have built it against the Jedis client. I also tried 2 other
> > > clients, lettuce and redisson, but felt that Jedis gave the best
> > > implementation as a) it didn't try to do too much (by this I mean
> > > running background monitoring threads that try to detect failure and
> > > perform automatic failover of Redis slaves) and b) it had all the API
> > > features I needed to make the implementation work efficiently.
> > >
> > > Jedis supports 3 main modes of operation: single server, Redis
> > > sentinel and Redis cluster. The Redis versions that should be
> > > supported are 2.8+ and 3.0+.
> > >
> > > I haven't tested this beyond the unit tests distributed with
> > > Infinispan, which start full Redis servers in single server, sentinel
> > > and cluster configurations to run the tests, but I am hoping to start
> > > working on getting integration into Wildfly 10, which I can test with
> > > a cache container for web sessions and a simple counter web app.
> >
> > I will take a look at the code.
> >
> > Thanks again for this awesome contribution.
> >
> > Tristan
> > --
> > Tristan Tarrant
> > Infinispan Lead
> > JBoss, a division of Red Hat
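The Integer.MAX_VALUE clamp Simon describes in point 1 boils down to
something like the following sketch (illustrative only, not the actual
connector code; Redis reports a long per server, which must be totalled
across a cluster):

    import java.util.logging.Logger;

    public class RedisStoreSize {

        private static final Logger log =
                Logger.getLogger(RedisStoreSize.class.getName());

        /**
         * AdvancedCacheLoader.size() is limited to int, but a Redis
         * deployment can hold more than Integer.MAX_VALUE entries.
         */
        public static int clampedSize(long totalDbSize) {
            if (totalDbSize > Integer.MAX_VALUE) {
                log.warning("Redis reports " + totalDbSize
                        + " entries; clamping size() to Integer.MAX_VALUE");
                return Integer.MAX_VALUE;
            }
            return (int) totalDbSize;
        }

        public static void main(String[] args) {
            System.out.println(clampedSize(5_000_000_000L)); // prints 2147483647
        }
    }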
From spaulger at codezen.co.uk Thu Sep 3 06:06:15 2015
From: spaulger at codezen.co.uk (Simon Paulger)
Date: Thu, 3 Sep 2015 11:06:15 +0100
Subject: [infinispan-dev] Redis infinispan cache store
In-Reply-To: <1555718029.24080898.1441274006823.JavaMail.zimbra@redhat.com>
References: <55B8A148.1090709@redhat.com> <55BB2B48.5080802@redhat.com> <55DEF2B4.80506@redhat.com> <1555718029.24080898.1441274006823.JavaMail.zimbra@redhat.com>
Message-ID: <394CFE5A-AEAE-47F9-9822-96AD75B4C569@codezen.co.uk>

You're welcome :)

Sent from my iPhone

> On 3 Sep 2015, at 10:53, Galder Zamarreno wrote:
>
> Great stuff Simon, thanks for contributing that! :D
> [...]

From galder at redhat.com Thu Sep 3 06:31:39 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Thu, 3 Sep 2015 06:31:39 -0400 (EDT)
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <55E6B841.4090808@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com>
Message-ID: <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com>

Good post Jiri, this got me thinking :)

Running the entire testsuite again with uber jars would add a lot of time
to the build.

Maybe we should have a set of tests that must be executed for sure, e.g.
like Wildfly's smoke tests [1]. We have a "functional" group, but right now
it covers pretty much all tests.

Such tests should live in a separate testsuite, so that we could add the
essential tests for *all* components. In a way, we've already done some of
this in integrationtests/, but it's not really well structured for this
aim.

Also, if we go down this path, something we should take advantage of (if
possible with JUnit/TestNG) is what Gustavo did with the Spark tests in
[2], where he used suites to make it faster to run things, by starting a
cache manager for distributed caches, running all distributed tests, etc.
In a way, I think we can already do this with the Arquillian Infinispan
integration, so Arquillian would probably be well suited for such a smoke
testsuite.

Thoughts?

Cheers,

[1] https://github.com/wildfly/wildfly#running-the-testsuite
[2] https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Hi Jiri, comments inline.
> [...]
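The "smoke tests" grouping Galder mentions maps naturally onto TestNG
groups, which the Infinispan testsuite already uses for its "functional"
group. A minimal sketch (the "smoke" group name and test body are
hypothetical):

    import org.testng.annotations.Test;

    public class BasicCacheSmokeTest {

        // A curated smoke group could then be selected on its own via the
        // build's group filtering, without running the whole functional
        // suite.
        @Test(groups = "smoke")
        public void testPutGet() {
            // minimal end-to-end check for one feature area
        }
    }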
From galder at redhat.com Thu Sep 3 06:34:27 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Thu, 3 Sep 2015 06:34:27 -0400 (EDT)
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com>
Message-ID: <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com>

Another interesting improvement here would be if you could run all these
smoke tests with an alternative implementation of AdvancedCache, e.g. one
based on the functional API.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Good post Jiri, this got me thinking :)
> [...]

From sanne at infinispan.org Thu Sep 3 06:54:39 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Thu, 3 Sep 2015 11:54:39 +0100
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com>
Message-ID:

Interesting subject.

We also have many tests which (ab)use inheritance to re-test the same API
semantics in slightly different configurations, like embedded/DIST and
embedded/REPL, sometimes becoming an @Override mess.

It would be far more useful to restructure the testsuite to have such
tests in a single class (no inheritance) and declare - maybe via
annotations? - which permutations of configuration parameters should be
valid.

Among those configuration permutations one would not have "just" different
replication models, but also things like:
- using the same API remotely (Hot Rod)
- using the same feature but within a WildFly embedded module
- using the uber jars vs small jars
- uber jars & remote..
- remote & embedded modules..
- remote, uber jars, in OSGi..

And finally combine with other options:
- A Query test using: remote client, using uber jars, in OSGi, but
switching JTA implementation, using a new experimental JGroups stack!

For example, many Core API and Query tests are copy/pasted into other
modules as "integration tests", etc., but we really should just run the
same one in a different environment.

This would make our code more maintainable, but also allow some neat
tricks, like specifying that some configurations should definitely be
tested in some test group (like Galder suggests, one could flag one of
these for "smoke tests", one for "nightly tests"), but you could also flag
some configuration settings as "should work, low priority for testing".

A smart testsuite could then use a randomizer to generate permutations of
configuration options for those low-priority tests which are not
essential; there are great examples of such testsuites in the Haskell
world, and Lucene and ElasticSearch do it too. A single random seed is
used for the whole run, and it's printed clearly at the start; a single
seed will deterministically define all parameters of the testsuite, so you
can reproduce it all by setting a specific seed when needing to debug a
failure.

http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html

Thanks,
Sanne

On 3 September 2015 at 11:34, Galder Zamarreno wrote:
> Another interesting improvement here would be if you could run all these
> smoke tests with an alternative implementation of AdvancedCache, e.g.
> one based on the functional API.
> [...]
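The single-seed randomization Sanne describes can be sketched in a few
lines. The configuration axes below are invented for illustration, but the
reproducibility property is exactly the one from the Lucene/ElasticSearch
runners he links to:

    import java.util.Random;

    public class SeededPermutations {

        enum CacheMode { LOCAL, REPL_SYNC, DIST_SYNC }
        enum Access { EMBEDDED, HOT_ROD }

        public static void main(String[] args) {
            // One seed deterministically defines the whole run; print it up
            // front so a failure can be replayed with -Dtest.seed=<value>.
            long seed = Long.getLong("test.seed", new Random().nextLong());
            System.out.println("Test seed: " + seed);
            Random random = new Random(seed);

            // Pick one permutation of the low-priority configuration axes.
            CacheMode mode = pick(random, CacheMode.values());
            Access access = pick(random, Access.values());
            System.out.println("Running with " + mode + " over " + access);
            // ... configure the cache manager accordingly and run the
            // shared test body here ...
        }

        private static <T> T pick(Random random, T[] values) {
            return values[random.nextInt(values.length)];
        }
    }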
From emmanuel at hibernate.org Thu Sep 3 08:12:01 2015
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Thu, 3 Sep 2015 14:12:01 +0200
Subject: [infinispan-dev] Building the website
In-Reply-To: <68F54909-2147-4CBD-B4F9-EAAAD613F47D@hibernate.org>
References: <68F54909-2147-4CBD-B4F9-EAAAD613F47D@hibernate.org>
Message-ID:

Thanks to your blatant lack of help, I ended up writing a docker image to
edit the website. I will contribute it as part of the initial change I
wanted to push.

I hate you all ! :)

Emmanuel

> On 03 Sep 2015, at 09:36, Emmanuel Bernard wrote:
>
> I forgot to mention the brew install v8 during my first error hoop. I
> also tried brew install v8-315, no success.
> [...]

From galder at redhat.com Thu Sep 3 09:41:27 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Thu, 3 Sep 2015 09:41:27 -0400 (EDT)
Subject: [infinispan-dev] Hidden failures in the testsuite
In-Reply-To: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com>
References: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com>
Message-ID: <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com>

Hi Sanne,

I've looked at the CDI and Compatibility issues, see below.

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Hey Sanne! Yep, you are right, ignoring output is a BAD IDEA. I realized
> that it's difficult to look through all the logs manually, so we should
> probably write some parser in python or bash to grep them and put it
> into the bin/ folder with the other scripts. So at least we could run
> this script after all the tests have run and analyze it somehow. And as
> for why tests "appear to be good" - nobody knows. It could be a
> testng/junit issue, as we mix them a lot. So this needs further
> discussion and analysis.
>
> Vitalii
>
> ----- Original Message -----
> From: "Sanne Grinovero"
> To: "infinispan -Dev List"
> Sent: Monday, 10 August 2015 at 20:46:06
> Subject: [infinispan-dev] Hidden failures in the testsuite
>
> Hi all,
> I just updated my local master fork and started the testsuite, as I
> sometimes do.
>
> It's great to see that the build was successful, and no tests
> *appeared* to have failed.
>
> But! Lazily scrolling up in the console, I see lots of exceptions which
> don't look intentional (I'm aware that some tests intentionally create
> error conditions). Also, some tests are extremely verbose, which might
> be the reason nobody noticed these.
>
> Some examples:
> - org.infinispan.it.compatibility.EmbeddedRestHotRodTest seems to log
> TRACE to the console (and probably the whole module)

^ I've run the compatibility testsuite manually and didn't have such an
issue with master:
https://gist.github.com/galderz/b59f1ed4599229022f27

Are you still having issues with this?

> - CDI tests such as org.infinispan.cdi.InfinispanExtensionRemote seem
> to fail in great number because of some ClassNotFoundException(s)
> and/or ResourceLoadingException(s)

^ Hmmmm, not seeing any of that either:
https://gist.github.com/galderz/1143078e6be8869cd602

Are you still having issues with this?

> - OSGi integration tests seem to be all broken by some invalid
> integration with Aries / Geronimo
> - OSGi integration tests dump a lot of unnecessary information to the
> build console
> - the Infinispan Query tests log lots of WARN too, around missing
> configuration properties and in some cases concerning exceptions; I'm
> pretty sure that I had resolved those in the past; it seems some
> refactorings were done w/o considering the log output.
>
> Please don't ignore the output; if it's too verbose to watch, that
> needs to be resolved too.
>
> I also monitor the "expected execution time" of some modules I'm
> interested in; that's been useful in some cases to figure out that
> there was a regression.
>
> One big question: why is it that so many tests "appear to be good" but
> are actually broken? I would like to understand that.
>
> Thanks,
> Sanne

From emmanuel at hibernate.org Thu Sep 3 11:45:19 2015
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Thu, 3 Sep 2015 17:45:19 +0200
Subject: [infinispan-dev] Building the website
In-Reply-To:
References: <68F54909-2147-4CBD-B4F9-EAAAD613F47D@hibernate.org>
Message-ID: <01F8E738-F361-42E1-A7CC-1B00764A9134@hibernate.org>

https://github.com/infinispan/infinispan.github.io/pull/17

> On 03 Sep 2015, at 14:12, Emmanuel Bernard wrote:
>
> Thanks to your blatant lack of help, I ended up writing a docker image
> to edit the website. I will contribute it as part of the initial change
> I wanted to push.
>
> I hate you all ! :)
>
> Emmanuel
> [...]

From dan.berindei at gmail.com Fri Sep 4 04:25:45 2015
From: dan.berindei at gmail.com (Dan Berindei)
Date: Fri, 4 Sep 2015 11:25:45 +0300
Subject: [infinispan-dev] HotRod C++ client build
Message-ID:

Hi Tristan

I installed cmake, valgrind, and swig on the RHEL CI agents, and the C++
client build seems to work.

Cheers
Dan

From ttarrant at redhat.com Fri Sep 4 08:29:42 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Fri, 4 Sep 2015 14:29:42 +0200
Subject: [infinispan-dev] Metrics and interceptors
Message-ID: <55E98EB6.9030008@redhat.com>

A recent issue with some refactoring of the PassivationInterceptor
affecting code that uses it directly (EAP's Infinispan subsystem) has got
me thinking about the fact that we have somewhat treated interceptors as a
form of API, since we do not provide another way of retrieving the metrics
collected by the interceptors, aside from the basic cache stats.
With the plan to eventually drop interceptors, these kinds of metrics
should be exposed through a more stable API (aside from maintaining
stability of the MBean side of things).

The org.infinispan.stats.Stats interface already partially covers that for
basic stats (see org.infinispan.AdvancedCache.getStats()), and I think it
should be extended or at least complemented to do this.

WDYT ?

Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
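One possible shape for the stable API Tristan is proposing, purely as a
sketch (not an agreed design): extend the existing Stats contract with a
generic accessor for the metrics that individual interceptors gather
today, so callers stop depending on the interceptor classes directly.

    import java.util.Collections;
    import java.util.Map;

    // Hypothetical extension of org.infinispan.stats.Stats.
    public interface ExtendedStats {

        // Basic counters in the style of the existing interface.
        long getHits();
        long getMisses();

        /**
         * Metrics currently collected by individual interceptors
         * (activation, passivation, invalidation, ...), keyed by metric
         * name for a given component, so that the interceptor classes
         * themselves no longer act as an accidental API.
         */
        default Map<String, Long> getComponentMetrics(String component) {
            return Collections.emptyMap();
        }
    }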
From ttarrant at redhat.com Fri Sep 4 10:56:58 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Fri, 4 Sep 2015 16:56:58 +0200
Subject: [infinispan-dev] Building the website
In-Reply-To:
References: <68F54909-2147-4CBD-B4F9-EAAAD613F47D@hibernate.org>
Message-ID: <55E9B13A.2020607@redhat.com>

It was a test to see if you were worthy of contributing :)

Tristan

On 03/09/2015 14:12, Emmanuel Bernard wrote:
> Thanks to your blatant lack of help, I ended up writing a docker image
> to edit the website. I will contribute it as part of the initial change
> I wanted to push.
>
> I hate you all ! :)
>
> Emmanuel
> [...]

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From galder at redhat.com Mon Sep 7 07:50:47 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Mon, 7 Sep 2015 07:50:47 -0400 (EDT)
Subject: [infinispan-dev] HotRod C++ client build
In-Reply-To:
References:
Message-ID: <1894572542.25715996.1441626647247.JavaMail.zimbra@redhat.com>

Thanks Dan :)

--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Hi Tristan
>
> I installed cmake, valgrind, and swig on the RHEL CI agents, and the
> C++ client build seems to work.
>
> Cheers
> Dan

From galder at redhat.com Mon Sep 7 07:52:47 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Mon, 7 Sep 2015 07:52:47 -0400 (EDT)
Subject: [infinispan-dev] Metrics and interceptors
In-Reply-To: <55E98EB6.9030008@redhat.com>
References: <55E98EB6.9030008@redhat.com>
Message-ID: <1060316398.25716227.1441626767732.JavaMail.zimbra@redhat.com>

Makes sense.

--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> A recent issue with some refactoring of the PassivationInterceptor
> [...]

From dan.berindei at gmail.com Mon Sep 7 09:39:57 2015
From: dan.berindei at gmail.com (Dan Berindei)
Date: Mon, 7 Sep 2015 16:39:57 +0300
Subject: [infinispan-dev] Metrics and interceptors
In-Reply-To: <1060316398.25716227.1441626767732.JavaMail.zimbra@redhat.com>
References: <55E98EB6.9030008@redhat.com> <1060316398.25716227.1441626767732.JavaMail.zimbra@redhat.com>
Message-ID:

+1

Dan

On Mon, Sep 7, 2015 at 2:52 PM, Galder Zamarreno wrote:
> Makes sense.
> [...]

From galder at redhat.com Mon Sep 7 12:26:05 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Mon, 7 Sep 2015 12:26:05 -0400 (EDT)
Subject: [infinispan-dev] XSite Hot Rod client failover wiki
In-Reply-To: <796873173.25801064.1441643120877.JavaMail.zimbra@redhat.com>
Message-ID: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com>

Hi all,

I've written a wiki page describing how XSite Hot Rod client failover
could work [1].

If you have any comments/doubts/questions, please reply :)

Cheers,

[1] https://github.com/infinispan/infinispan/wiki/XSite-Failover-for-Hot-Rod-clients
--
Galder Zamarreño
Infinispan, Red Hat
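For illustration, client-side configuration along the lines sketched in
the wiki could look roughly like this. This is a hypothetical sketch: the
addCluster/addClusterNode names mirror the proposed design rather than a
released API, and the hosts are made up.

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class XSiteFailoverClient {

        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            // Main site servers, tried first.
            builder.addServer().host("lon1.example.com").port(11222);
            // Named backup site the client switches to when the main
            // site becomes unreachable.
            builder.addCluster("NYC")
                   .addClusterNode("nyc1.example.com", 11222);

            RemoteCacheManager remoteCacheManager =
                    new RemoteCacheManager(builder.build());
            try {
                remoteCacheManager.getCache().put("key", "value");
            } finally {
                remoteCacheManager.stop();
            }
        }
    }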
From pedro at infinispan.org Mon Sep 7 12:39:29 2015
From: pedro at infinispan.org (Pedro Ruivo)
Date: Mon, 7 Sep 2015 17:39:29 +0100
Subject: [infinispan-dev] Metrics and interceptors
In-Reply-To:
References: <55E98EB6.9030008@redhat.com> <1060316398.25716227.1441626767732.JavaMail.zimbra@redhat.com>
Message-ID: <55EDBDC1.5030806@infinispan.org>

+1

Cheers,
Pedro

On 09/07/2015 02:39 PM, Dan Berindei wrote:
> +1
>
> Dan
> [...]

From rvansa at redhat.com Tue Sep 8 04:12:52 2015
From: rvansa at redhat.com (Radim Vansa)
Date: Tue, 8 Sep 2015 10:12:52 +0200
Subject: [infinispan-dev] XSite Hot Rod client failover wiki
In-Reply-To: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com>
References: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com>
Message-ID: <55EE9884.8040200@redhat.com>

1) Is it really desired to keep the site list in the client
configuration? It has always seemed great to me that I can provide only a
single Hot Rod server address to the client and it will figure out all the
other nodes. I can imagine a configuration with one or a few well-known
nodes (possibly with capacity factor 0), with the heavy lifting done by an
elastic cluster. Especially in AWS- or GCE-like environments this
simplifies the configuration. The same could hold for the backup sites,
though I understand that this has two downsides:

a) If the x-site interface is different from the interface accessible by
clients, we need a mechanism to publish the external-host:external-port
information
b) if this information is per-client, it's easy to set up the order of
backup sites (according to geographical location, to keep the cluster as
close as possible). If that's server-based, it may not be possible to
declare that accurately.

2) There should be a way to tell the clients that the original site is
back online without bringing down the backup site. However, that puts us
back at point 1b) - how should the client know that another online site is
actually closer if it does not have it on the list? Maybe having an
optional list that declares the priority, with site names only, would be
beneficial (the client would have foo.bar.sites=BRQ,LON,SFO but wouldn't
have to care about IP addresses).

Radim

On 09/07/2015 06:26 PM, Galder Zamarreno wrote:
> Hi all,
>
> I've written a wiki page describing how XSite Hot Rod client failover
> could work [1].
> [...]

--
Radim Vansa
JBoss Performance Team

From sanne at infinispan.org Wed Sep 9 06:33:25 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 9 Sep 2015 11:33:25 +0100
Subject: [infinispan-dev] Hidden failures in the testsuite
In-Reply-To: <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com>
References: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com> <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com>
Message-ID:

Hi all,
sorry for the slow reply, I can't get to test Infinispan very often lately.

On top of previously reported issues - which I still have - today I also
noticed this one:

[UnitTestTestNGListener] Test
testPutTimeout(org.infinispan.client.hotrod.ClientSocketReadTimeoutTest)
failed.
Sep 09, 2015 11:28:16 AM
io.netty.util.concurrent.SingleThreadEventExecutor$2 run
WARNING: Unexpected exception from an event executor:
java.lang.OutOfMemoryError: GC overhead limit exceeded

Unless surefire overrides it, all my Maven jobs are assigned 2GB of heap.
I know that's not huge, but I prefer it to be conservative so that it
serves as a "canary".

Is that known to be not enough anymore, or is it worth looking for memory
issues?

Thanks,
Sanne

On 3 September 2015 at 14:41, Galder Zamarreno wrote:
> Hi Sanne,
>
> I've looked at the CDI and Compatibility issues, see below.
> [...]

From dan.berindei at gmail.com Wed Sep 9 06:58:30 2015
From: dan.berindei at gmail.com (Dan Berindei)
Date: Wed, 9 Sep 2015 13:58:30 +0300
Subject: [infinispan-dev] Hidden failures in the testsuite
In-Reply-To:
References: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com> <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com>
Message-ID:

Sanne, the forked JVM that actually runs the tests uses only 1GB by
default:

-Xmx1024m -XX:MaxPermSize=256m

The CI builds use the same value, although they do enable compressed oops
explicitly and use smaller thread stacks:

env.MAVEN_FORK_OPTS = %maven_opts.memory.x64% %maven_opts.tuning%
maven_opts.memory.x64 = -XX:+UseCompressedOops -Xmx1024m -Xms256m
-XX:MaxPermSize=256m -Xss512k

That being said, you're probably seeing ISPN-5727:
https://github.com/infinispan/infinispan/pull/3696

Cheers
Dan

On Wed, Sep 9, 2015 at 1:33 PM, Sanne Grinovero wrote:
> Hi all,
> sorry for the slow reply, I can't get to test Infinispan very often
> lately.
>
> On top of previously reported issues - which I still have - today I
> also noticed this one:
>
> [UnitTestTestNGListener] Test
> testPutTimeout(org.infinispan.client.hotrod.ClientSocketReadTimeoutTest)
> failed.
> Sep 09, 2015 11:28:16 AM
> io.netty.util.concurrent.SingleThreadEventExecutor$2 run
> WARNING: Unexpected exception from an event executor:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
> Unless surefire overrides it, all my Maven jobs are assigned 2GB of
> heap. I know that's not huge, but I prefer it to be conservative so
> that it serves as a "canary".
>
> Is that known to be not enough anymore, or is it worth looking for
> memory issues?
>
> Thanks,
> Sanne
> [...]
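As a side note to Dan's breakdown of the fork options, the flags a
surefire-forked JVM actually received can be verified from inside the fork
with plain java.lang.management, which is handy when several layers
(MAVEN_OPTS, surefire argLine, CI properties) compete:

    import java.lang.management.ManagementFactory;

    public class PrintForkArgs {
        public static void main(String[] args) {
            // Prints the effective -Xmx / -XX flags of this JVM.
            for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
                System.out.println(arg);
            }
            System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() + " bytes");
        }
    }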
From ttarrant at redhat.com Wed Sep 9 07:23:24 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Wed, 9 Sep 2015 13:23:24 +0200
Subject: [infinispan-dev] Hidden failures in the testsuite
In-Reply-To: References: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com> <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com>
Message-ID: <55F016AC.3090700@redhat.com>

Dan, since Java 8, Compressed Oops are on by default, so that can be removed.
Tristan

On 09/09/2015 12:58, Dan Berindei wrote:
> Sanne, the forked JVM that actually runs the tests uses only 1GB by default:
>
>     -Xmx1024m -XX:MaxPermSize=256m
> [...]
> That being said, you're probably seeing ISPN-5727:
> https://github.com/infinispan/infinispan/pull/3696
> [...]

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From sanne at infinispan.org Wed Sep 9 07:32:18 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 9 Sep 2015 12:32:18 +0100
Subject: [infinispan-dev] Hidden failures in the testsuite
In-Reply-To: <55F016AC.3090700@redhat.com>
References: <101300424.14304090.1440403474589.JavaMail.zimbra@redhat.com> <505253668.24168193.1441287687745.JavaMail.zimbra@redhat.com> <55F016AC.3090700@redhat.com>
Message-ID:

Since MaxPermSize is meaningless in Java 8 and actually fails the build with Java 9, I've removed all references to this option, and also gave a bit more heap space to most tests (slightly less generous for the clustered integration tests):

https://github.com/infinispan/infinispan/pull/3701

On 9 September 2015 at 12:23, Tristan Tarrant wrote:
> Dan, since Java 8, Compressed Oops are on by default, so that can be removed.
>
> Tristan
> [...]

From galder at redhat.com Wed Sep 9 09:22:52 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Wed, 9 Sep 2015 09:22:52 -0400 (EDT)
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com>
Message-ID: <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com>

I agree pretty much with everything below:

* We overuse test overriding to run the same test with different configurations. I made that same mistake with the functional map API stuff :(

* I'm in favour of restructuring the testsuite, but I think we really need to start from scratch in a separate testsuite Maven project, since we could then add all the functional tests for everything (not only core, but also compatibility tests, etc.) and leave each project to test its own implementation details.
Adding this separation would open up the path to creating a testkit (as I explained last year in Berlin).

* I'm also in favour of defining the test once and running it with different configuration options automatically.

* I'm in favour too of randomising (need to check that link), but we also need some quickcheck-style tests [1], e.g. a test that verifies that put(K, V) works no matter the type of object passed in.

Cheers,

[1] https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> Interesting subject. We also have many tests which (ab)use inheritance
> to re-test the same API semantics in slightly different
> configurations, like embedded/DIST and embedded/REPL, sometimes
> becoming an @Override mess.
> It would be far more useful to restructure the testsuite to have such
> tests in a single class (no inheritance) and declare - maybe with
> annotations? - which permutations of configuration parameters should
> be valid.
>
> Among those configuration permutations one would not have "just"
> different replication models, but also things like
> - using the same API remotely (Hot Rod)
> - using the same feature but within a WildFly embedded module
> - using the uber jars vs small jars
> - uber jars & remote..
> - remote & embedded modules..
> - remote, uber jars, in OSGi..
>
> And finally combine with other options:
> - A Query test using: remote client, using uber jars, in OSGi, but
> switching JTA implementation, using a new experimental JGroups stack!
>
> For example many Core API and Query tests are copy/pasted into other
> modules as "integration tests", etc., but we really should just run
> the same one in a different environment.
>
> This would keep our code more maintainable, but also allow some neat
> tricks, like specifying that some configurations should definitely be
> tested in some test group (like Galder suggests, one could flag one of
> these for "smoke tests", one for "nightly tests"), but you could also
> want to flag some configuration settings as "should work, low
> priority for testing".
> A smart testsuite could then use a randomizer to generate permutations
> of configuration options for those low-priority tests which are not
> essential; there are great examples of such testsuites in the Haskell
> world, and Lucene and ElasticSearch do it too.
> A single random seed is used for the whole run, and it's printed
> clearly at the start; a single seed will deterministically define all
> parameters of the testsuite, so you can reproduce it all by setting a
> specific seed when needing to debug a failure.
>
> http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html
>
> Thanks,
> Sanne
>
> On 3 September 2015 at 11:34, Galder Zamarreno wrote:
> > Another interesting improvement here would be if you could run all these
> > smoke tests with an alternative implementation of AdvancedCache, e.g. one
> > based on the functional API.
> >
> > Cheers,
> > --
> > Galder Zamarreño
> > Infinispan, Red Hat
> >
> > ----- Original Message -----
> >> Good post Jiri, this got me thinking :)
> >>
> >> Running the entire testsuite again with uber jars would add a lot to
> >> the build time.
> >>
> >> Maybe we should have a set of tests that must be executed for sure, e.g. like
> >> WildFly's smoke tests [1]. We have a "functional" group, but right now it
> >> covers pretty much all tests.
> >>
> >> Such tests should live in a separate testsuite, so that we could add the
> >> essential tests for *all* components. In a way, we've already done some of
> >> this in integrationtests/ but it's not really well structured for this aim.
> >>
> >> Also, if we would go down this path, something we should take advantage of
> >> (if possible with JUnit/TestNG) is what Gustavo did with the Spark tests in
> >> [2], where he used suites to make it faster to run things, by starting a
> >> cache manager for distributed caches, running all distributed tests...etc.
> >> In a way, I think we can already do this with the Arquillian Infinispan
> >> integration, so Arquillian would probably be well suited for such a smoke
> >> testsuite.
> >>
> >> Thoughts?
> >>
> >> Cheers,
> >>
> >> [1] https://github.com/wildfly/wildfly#running-the-testsuite
> >> [2] https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >> ----- Original Message -----
> >>> Hi Jiri, comments inline.
> >>> [...]
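A hand-rolled sketch of the quickcheck-style property mentioned above, assuming an embedded Cache and no property-testing library (a fixed seed keeps failures reproducible; run with -ea so the assertions fire):

    import java.util.Random;
    import org.infinispan.Cache;

    public class PutRoundTripProperty {
        // Property: put(K, V) followed by get(K) returns an equal value,
        // whatever the value type happens to be.
        public static void check(Cache<String, Object> cache) {
            Random rnd = new Random(42); // fixed seed => reproducible run
            for (int i = 0; i < 1_000; i++) {
                Object value = rnd.nextBoolean() ? rnd.nextInt() : "v" + rnd.nextLong();
                cache.put("k" + i, value);
                assert value.equals(cache.get("k" + i)) : "round-trip failed for " + value;
            }
        }
    }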
From galder at redhat.com Wed Sep 9 10:00:11 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Wed, 9 Sep 2015 10:00:11 -0400 (EDT)
Subject: [infinispan-dev] XSite Hot Rod client failover wiki
In-Reply-To: <55EE9884.8040200@redhat.com>
References: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com> <55EE9884.8040200@redhat.com>
Message-ID: <1176127892.27080351.1441807211171.JavaMail.zimbra@redhat.com>

--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> 1) Is it really desired to keep the site list in client configuration?

Not sure if it was totally clear, but the entire site list would not be required. The client must contain at least one node's address in that site, so that it can then get the topology of the rest.

> It has always seemed to me great that I can provide only a single hotrod
> server address to the client and it will figure out all the other nodes.
> I can imagine a configuration with one or few well-known nodes (possibly
> with capacity factor 0), and the heavy lifting to be done by an elastic
> cluster. Especially in AWS or GCE like environments it simplifies the
> configuration. The same could hold for the backup sites, though I
> understand that this has two downsides:
> a) If the x-site interface is different from the interface accessible by
> clients, we need a mechanism to publish the external-host:external-port
> information

^ A server can define its external host and port, but we call them proxyHost and proxyPort.

> b) if this information is per-client, it's easy to set up the order of
> backup sites (according to geographical location, to keep the cluster as
> close as possible). If that's server based, it may not be possible to
> declare that accurately.

^ Not sure I understand what you mean by that.

> 2) There should be a way to tell the clients that the original site is
> back online, without bringing down the backup site.

^ That's easy to do: simply put the backup site offline and the client should bounce back to the original site.

> However, that puts
> us back to point 1b) - how should the client know that the other
> online site is actually closer, if it does not have it on the list.
> Maybe having an optional list that would declare the priority, with
> site names, would be beneficial (the client would have
> foo.bar.sites=BRQ,LON,SFO but wouldn't have to care about IP addresses).

^ I'm not sure about the real usability of that.

> Radim
>
> On 09/07/2015 06:26 PM, Galder Zamarreno wrote:
>> Hi all,
>>
>> I've written a wiki describing how XSite Hot Rod client failover could work
>> [1].
> > > > If you have any comments/doubts/question, please reply :) > > > > Cheers, > > > > [1] > > https://github.com/infinispan/infinispan/wiki/XSite-Failover-for-Hot-Rod-clients > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Wed Sep 9 11:48:22 2015 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 9 Sep 2015 17:48:22 +0200 Subject: [infinispan-dev] XSite Hot Rod client failover wiki In-Reply-To: <1176127892.27080351.1441807211171.JavaMail.zimbra@redhat.com> References: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com> <55EE9884.8040200@redhat.com> <1176127892.27080351.1441807211171.JavaMail.zimbra@redhat.com> Message-ID: <55F054C6.1090604@redhat.com> On 09/09/2015 04:00 PM, Galder Zamarreno wrote: > > -- > Galder Zamarre?o > Infinispan, Red Hat > > ----- Original Message ----- >> 1) Is it really desired to keep the site list in client configuration? > Not sure if it was totally clear but the entire site list would not be required. The client must contain at least 1 node's address in that site, so that it can then get the topology of the rest. OK, I've eventually convinced myself that it's not that bad idea to keep the addresses there. The mechanism of updating site list on the client can be out of scope for Infinispan. > >> It has always seemed to me great that I can provide only single hotrod >> server address to the client and it will figure out all the other nodes. >> I can imagine a configuration with one or few well-known nodes (possibly >> with capacity factor 0), and the heavy lifting to be done by an elastic >> cluster. Especially in AWS or GCE like environments it simplifies the >> configuration. The same could hold for the backup sites, though I >> understand that this has two downsides: >> a) If x-site interface is different from the interface accessible by >> clients, we need a mechanism to publish the external-host:external-port >> information > ^ A server can define it's external host and port, but we call them proxyHost and proxyPort. Not that it would matter, but [1] says that its configured as 'external-host' [1] https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_8_0.xsd#L147 > >> b) if this information is per-client, it's easy to set up the order of >> backup sites (according to geographical location, to keep the cluster as >> close as possible). If that's server based, it may not be possible to >> declare that accurately. > ^ Not sure I understand what you mean by that. You need to declare priority of sites to which the client should connect. If we keep the current configuration, the main site is obvious, but to which sites should it connect when the site BRQ fails, LON or SFO? > >> 2) There should be a way to tell the clients that the original site is >> back online, without bringing down the backup site. > ^ That's easy to do, simply put the backup site offline and the client should bounce back to original site. -1 That's what I meant by 'without bringing down the backup site'. 
From rvansa at redhat.com Wed Sep 9 11:54:27 2015
From: rvansa at redhat.com (Radim Vansa)
Date: Wed, 9 Sep 2015 17:54:27 +0200
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com>
Message-ID: <55F05633.2020307@redhat.com>

Any plans for tests that are just slightly different for different configurations? With inheritance it's simple - you just override the method. If you instead run that test on a huge matrix of configurations, you end up with a method containing a very complicated switch for certain configurations.

I am not asking sarcastically; I've run into a similar issue when implementing a similar thing in the 2LC testsuite.

Radim

On 09/09/2015 03:22 PM, Galder Zamarreno wrote:
> I agree pretty much with everything below:
> [...]
--
Radim Vansa
JBoss Performance Team

From ttarrant at redhat.com Wed Sep 9 12:24:06 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Wed, 9 Sep 2015 18:24:06 +0200
Subject: [infinispan-dev] XSite Hot Rod client failover wiki
In-Reply-To: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com>
References: <439866156.25801124.1441643165536.JavaMail.zimbra@redhat.com>
Message-ID: <55F05D26.70409@redhat.com>

Regarding the special server response to failover to another site, this could also be used in rolling upgrade scenarios, if it were possible to have the server send the new site addresses:

- create a new cluster, pointing it to the old one using the remote cache store
- the old cluster sends the failover response to the clients with the addresses of the new cluster

However, this requires the various sites to be able to pass their topology info between each other... but it is just a cache :)

Tristan

On 07/09/2015 18:26, Galder Zamarreno wrote:
> Hi all,
>
> I've written a wiki describing how XSite Hot Rod client failover could work [1].
>
> If you have any comments/doubts/questions, please reply :)
>
> Cheers,
>
> [1] https://github.com/infinispan/infinispan/wiki/XSite-Failover-for-Hot-Rod-clients

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From rory.odonnell at oracle.com Wed Sep 9 13:29:17 2015
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Wed, 9 Sep 2015 18:29:17 +0100
Subject: [infinispan-dev] Project Jigsaw: Early-Access Builds available on jdk9.java.net/jigsaw
Message-ID: <55F06C6D.4000109@oracle.com>

Hi Galder,

Early-access builds of JDK 9 with Project Jigsaw are available for download at jdk9.java.net/jigsaw. The EA builds contain the latest prototype implementation of JSR 376, the Java Platform Module System, as well as that of the JDK-specific APIs and tools described in JEP 261.

If you'd like to try out the EA builds, by far the most helpful things you can do are:

* Try to run existing applications, without change, on these builds to see whether the module system, or the modularization of the platform, breaks your code or identifies code that depends upon JDK-internal APIs or other unspecified aspects of the platform.

* Experiment with the module system itself, perhaps by following the quick start guide, and start thinking about how to migrate existing libraries and application components to modules. We hope to publish some specific migration tips shortly.

Please send usage questions and experience reports to the jigsaw-dev list. Specific suggestions about the design of the module system should be sent to the JSR 376 Expert Group's comments list. For more information please see Mark Reinhold's mail [1].

Rgds, Rory

[1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2015-September/004480.html

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
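For anyone wanting to experiment with the second suggestion, a minimal module declaration might look like this; the module and package names are invented for illustration, and with the EA builds any non-modularized dependency (such as Infinispan) simply stays on the classpath as part of the unnamed module:

    // module-info.java for a hypothetical application module.
    module org.example.cachedemo {
        exports org.example.cachedemo;
    }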
From sanne at infinispan.org Thu Sep 10 11:43:35 2015
From: sanne at infinispan.org (Sanne Grinovero)
Date: Thu, 10 Sep 2015 16:43:35 +0100
Subject: [infinispan-dev] Lucene 5 is coming: pitfalls to consider
In-Reply-To: References:
Message-ID:

A wrap-up on this subject.

Infinispan 8 is now based on Lucene 5.3 and all the problems I previously listed are dealt with in a mostly backwards-compatible way; this is what you need to know.

## Null Markers
One exception is null-marker tokens: when applied to a NumericField, they now have to be represented by a number of the matching type of the field; no big deal.

## Sorting
The bigger issue was sorting, and its need for appropriate metadata, so that we know at indexing time which fields could potentially be the target of a sorting query.

Our solution in Hibernate Search 5.5 is to provide a @SortableField annotation to allow users (and integrators like Infinispan Remote Query) to mark fields for this purpose, but we also fall back to a slower sorting strategy in case a query is run at runtime targeting a field which was not appropriately annotated.

But while you might think "great, I don't have any changes to make", especially if you don't need the extra performance boost that @SortableField would provide, make sure to start migrating your infrastructure to use this annotation, as the fallback strategy won't be maintained forever!

With the next version we'll - by default - refuse to use the fallback and throw a runtime exception, but still provide a configuration option to allow it. That would be a great time to make sure all your needs are covered by the new alternative metadata. After that, we will get rid of the fallback strategy.

Gunnar is going to publish a blog post with more details next week on the Hibernate blog: http://in.relation.to/ - please watch that space.

## Index encoding
Hibernate Search is including the backwards-compatible codecs. Infinispan could decide to include them too, if you prefer.

## Dynamic Analyzer choices
We managed to keep this feature even though Lucene doesn't allow it; we'll probably deprecate this like sorting, but I guess this doesn't require any upfront work from Infinispan.

Thanks,
Sanne
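A minimal sketch of the new sorting metadata, assuming the Hibernate Search 5.5 annotations described above:

    import org.hibernate.search.annotations.Analyze;
    import org.hibernate.search.annotations.Field;
    import org.hibernate.search.annotations.Indexed;
    import org.hibernate.search.annotations.SortableField;

    @Indexed
    public class Book {
        @Field(analyze = Analyze.NO) // sort fields should not be tokenized
        @SortableField               // indexes the doc values used for fast sorting
        String title;
    }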
From ttarrant at redhat.com Thu Sep 10 16:56:10 2015
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Thu, 10 Sep 2015 22:56:10 +0200
Subject: [infinispan-dev] Infinispan 8.0.1.Final and 7.2.5.Final
Message-ID: <55F1EE6A.60508@redhat.com>

Dear all,

we've just cooked two new point releases of Infinispan to address a number of issues.

The highlights for 8.0.1.Final are:

- ISPN-5717 Notify continuous query also when an entry expires
- ISPN-5591 Simple local cache without an interceptor stack. This is an extremely fast cache with very few features (no transactions, no indexing, no persistence, etc). Its primary intended usage is as a 2nd-level cache for Hibernate, but we're sure you can find lots of other applications for it, provided you don't require all the bells and whistles that come with our fully-fledged caches.
- Bump Hibernate Search to 5.5.0.CR1 and Lucene to 5.3.0
- A number of query fixes, including indexing and searching of null non-string properties, aggregation expressions in orderBy, and filters with both 'where' and 'having' in the same query
- ISPN-5731 Cannot use aggregation expression in orderBy

Read the complete release notes.

The highlights for 7.2.5.Final are:

- ISPN-5607 Preemptively invalidate near cache after writes
- ISPN-5670 Hot Rod server sets -1 for lifespan or maxIdle as default
- ISPN-5677 RemoteCache async methods use flags
- ISPN-5684 Make getAll work with compatibility mode in DIST

Read the complete release notes.

Visit http://infinispan.org to get it, learn how to use it, and help us improve it.

Enjoy!
The Infinispan team

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
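A minimal sketch of declaring one of the new simple caches, assuming the programmatic simpleCache(true) flag that shipped with this release:

    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class SimpleCacheDemo {
        public static void main(String[] args) {
            DefaultCacheManager cm = new DefaultCacheManager();
            // A local cache with the interceptor stack stripped away.
            cm.defineConfiguration("local-simple",
                  new ConfigurationBuilder().simpleCache(true).build());
            cm.getCache("local-simple").put("k", "v");
            cm.stop();
        }
    }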
From galder at redhat.com Fri Sep 11 10:19:02 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Fri, 11 Sep 2015 10:19:02 -0400 (EDT)
Subject: [infinispan-dev] Uber jars testing
In-Reply-To: <55F05633.2020307@redhat.com>
References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com>
Message-ID: <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com>

----- Original Message -----
> Any plans for tests that are just slightly different for different
> configurations? With inheritance, it's simple - you just override the
> method. If you just run that test on a huge matrix of configurations,
> you end up with a method with a very complicated switch for
> certain configurations.

^ I see what you are getting at here. Such differences do happen, and they can usually be divided into two kinds: the operations executed and the assertions. Sometimes the operations executed are slightly different, and sometimes the operations are the same but the assertions are slightly different.

I don't have specific ideas about how to solve this, but my gut feeling is something like this:

If we can write tests as objects/types, where we define the operations and the assertions, then all the test (testXXX) methods have to do is run these N objects against M configurations. With that in mind, running slightly different tests would be done by extending or composing the test objects/types, independently of the test classes themselves. To run these slight variations, we'd define a test class that runs the variations with the M configurations.

Note that I've not prototyped any of that and there are probably better ways to do this.
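For illustration, one possible shape for this idea; nothing like it exists in the testsuite, and all the names below are invented:

    import java.util.List;
    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.manager.DefaultCacheManager;

    // A test is a value: operations plus assertions, no inheritance needed.
    interface CacheTest {
        void run(Cache<Object, Object> cache) throws Exception;
    }

    class MatrixRunner {
        // Runs N test objects against M configurations; a "slightly different"
        // test is just another CacheTest composed from the common one.
        static void runAll(List<CacheTest> tests, List<Configuration> configs) throws Exception {
            DefaultCacheManager cm = new DefaultCacheManager();
            try {
                int i = 0;
                for (Configuration cfg : configs) {
                    String name = "cfg-" + i++;
                    cm.defineConfiguration(name, cfg);
                    for (CacheTest test : tests)
                        test.run(cm.getCache(name));
                }
            } finally {
                cm.stop();
            }
        }
    }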
>
> I am not asking sarcastically; I've run into a similar issue when
> implementing a similar thing in the 2LC testsuite.
>
> Radim
>
> On 09/09/2015 03:22 PM, Galder Zamarreno wrote:
> > I agree pretty much with everything below:
> > [...]

From galder at redhat.com Fri Sep 11 10:21:08 2015
From: galder at redhat.com (Galder Zamarreno)
Date: Fri, 11 Sep 2015 10:21:08 -0400 (EDT)
Subject: [infinispan-dev] Lucene 5 is coming: pitfalls to consider
In-Reply-To: References:
Message-ID: <1473049222.28352676.1441981268670.JavaMail.zimbra@redhat.com>

Any chance of cross-posting the info/post to the Infinispan blog?

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

----- Original Message -----
> A wrap-up on this subject.
> [...]
> > But while you might think "great, I don't have any change to do", > especially if you don't need the extra performance boost that > @SortableField would provide, make sure to start migrating > infrastructure to use this annotation as the fallback strategy won't > be maintained forever! > > With the next version we'll - by default - refuse to use the fallback > and get a runtime exception, but still provide a configuration option > to allow it. That would be a great time to make sure all your needs > are covered by the new alternative metadata. After that we will get > rid of the fallback strategy. > > Gunnar is going to publish a blog post with more details next week on > the Hibernate blog: http://in.relation.to/ , please watch that space. > > ## Index encoding > Hibernate Search is including the backwards compatible codecs. > Infinispan could decide to include them too, if you prefer. > > ## Dynamic Analyzer choices > We managed to keep this feature even if Lucene doesn't allow it, we'll > probably deprecate this like with sorting but I guess this doesn't > require any upfront work from Infinispan. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Fri Sep 11 10:28:57 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 11 Sep 2015 15:28:57 +0100 Subject: [infinispan-dev] Uber jars testing In-Reply-To: <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> Message-ID: +1 Galder for your abstraction, we might even need a DSL. An additional benefit would be that all "API functional tests" could be tested also in conditions such as running in a in a race with topology changes, instrumenting the timing of parallel code and network operations. As a DSL and using some Byteman we could automatically have it insert network issues or timing issues at specific critical points; such points follow a general pattern so this can be generalized w/o having to code tests for specific critical paths in each functional test independently. A simple example would be to kill a node after a command was sent to it, but before it replies: all the tests should be able to survive that. On 11 September 2015 at 15:19, Galder Zamarreno wrote: > ----- Original Message ----- >> Any plans for tests that are just slightly different for different >> configurations? With inheritance, it's simple - you just override the >> method. If you just run that test on a huge matrix of configurations, >> you end up with having a method with a very complicated switch for >> certain configurations. > > ^ I see what you are getting at here. Normally such differences sometimes happen and can be divided into two: operations executed and assertions. Sometimes the operations executed are slightly different, and sometimes the operations are the same, but assertions slightly different. 
> > I don't have specific ideas about how to solve this but my gut feeling is something like this: > > If we can write tests as objects/types, where we define the operations and the assertions, then all the tests (testXXX methods) have to do is run this N objects against M configurations. With that in mind, running slightly different tests would be done extending or composing the test object/types, independent of the test classes themselves. To run these slight variations, we'd define a test class that runs the variations with M configurations. > > Note that I've not prototyped any of that and there are probably better ways to do this. > >> >> I am not asking sarcastically, but I've run into similar issue when >> implementing similar thing in 2LC testsuite. >> >> Radim >> >> On 09/09/2015 03:22 PM, Galder Zamarreno wrote: >> > I agree pretty much with everything below: >> > >> > * We overuse test overriding to run the same test with different >> > configuration. I did that same mistake with the functional map API stuff >> > :( >> > >> > * I'm in favour of testsuite restructuring, but I think we really need to >> > start from scratch in a separate testsuite maven project, since we can >> > then add all functional test for all (not only core...etc, but also >> > compatibility tests...etc), and leave its project to test implementation >> > details? Adding this separation would open up the path to create a testkit >> > (as I explained last year in Berlin) >> > >> > * I'm also in favour in defining the test once and running it with >> > different configuration options automatically. >> > >> > * I'm in favour too of randomising (need to check that link) but also we >> > need some quickcheck style tests [1], e.g. a test that verifies that >> > put(K, V) works not matter the type of object passed in. >> > >> > Cheers, >> > >> > [1] >> > https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing >> > -- >> > Galder Zamarre?o >> > Infinispan, Red Hat >> > >> > ----- Original Message ----- >> >> Interesting subject. We also have many tests which (ab)use inheritance >> >> to re-test the same API semantics in slightly different >> >> configurations, like embedded/DIST and embedded/REPL, sometimes >> >> becoming an @Override mess. >> >> It would be far more useful to restructure the testsuite to have such >> >> tests in a single class (no inheritance) and declare - maybe >> >> annotations? - which permutations of configuration parameters should >> >> be valid. >> >> >> >> Among those configuration permutations one would not have "just" >> >> different replication models, but also things like >> >> - using the same API remotely (Hot Rod) >> >> - using the same feature but within a WildFly embedded module >> >> - using the uber jars vs small jars >> >> - uber jars & remote.. >> >> - remote & embedded modules.. >> >> - remote, uber jars, in OSGi.. >> >> >> >> And finally combine with other options: >> >> - A Query test using: remote client, using uber jars, in OSGi, but >> >> switching JTA implementation, using a new experimental JGroups stack! >> >> >> >> For example many Core API and Query tests are copy/pasted into other >> >> modules as "integration tests", etc.. but we really should just run >> >> the same one in a different environment. 
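Picking up Galder's test-as-object idea from above, a rough sketch of what he might mean; every type below is hypothetical, nothing like it exists in the testsuite today:

    import org.infinispan.Cache;

    // Hypothetical: operations and assertions defined once, as an object.
    interface CacheTestDefinition {
        void execute(Cache<String, String> cache); // the operations
        void verify(Cache<String, String> cache);  // the assertions
    }

    class PutGetDefinition implements CacheTestDefinition {
        public void execute(Cache<String, String> cache) {
            cache.put("k", "v");
        }
        public void verify(Cache<String, String> cache) {
            assert "v".equals(cache.get("k"));
        }
    }

A test class would then just run N definitions against M configurations, and a slight variation would extend or compose a definition rather than override a test method.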
>> >> >> >> This would keep our code better maintainable, but also allow some neat >> >> tricks like specify that some configurations should definitely be >> >> tested in some test group (like Galder suggests, one could flag one of >> >> these for "smoke tests", one for "nightly tests"), but you could also >> >> want to flag some configuration settings as a "should work, low >> >> priority for testing". >> >> A smart testsuite could then use a randomizer to generate permutations >> >> of configuration options for those low priority tests which are not >> >> essential; there are great examples of such testsuites in the Haskell >> >> world, and also Lucene and ElasticSearch do it. >> >> A single random seed is used for the whole run, and it's printed >> >> clearly at the start; a single seed will deterministically define all >> >> parameters of the testsuite, so you can reproduce it all by setting a >> >> specific seed when needing to debug a failure. >> >> >> >> http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html >> >> >> >> Thanks, >> >> Sanne >> >> >> >> On 3 September 2015 at 11:34, Galder Zamarreno wrote: >> >>> Another interesting improvement here would be if you could run all these >> >>> smoke tests with an alternative implementation of AdvancedCache, e.g. one >> >>> based with functional API. >> >>> >> >>> Cheers, >> >>> -- >> >>> Galder Zamarre?o >> >>> Infinispan, Red Hat >> >>> >> >>> ----- Original Message ----- >> >>>> Good post Jiri, this got me thinking :) >> >>>> >> >>>> Running the entire testsuite again with uber jars would add a lot of >> >>>> time >> >>>> to >> >>>> the build time. >> >>>> >> >>>> Maybe we should have a set of tests that must be executed for sure, e.g. >> >>>> like >> >>>> Wildfly's smoke tests [1]. We have "functional" group but right now it >> >>>> covers pretty much all tests. >> >>>> >> >>>> Such tests should live in a separate testsuite, so that we could add the >> >>>> essential tests for *all* components. In a way, we've already done some >> >>>> of >> >>>> this in integrationtests/ but it's not really well structured for this >> >>>> aim. >> >>>> >> >>>> Also, if we would go down this path, something we should take advantage >> >>>> of >> >>>> (if possible with JUnit/TestNG) is what Gustavo did with the Spark tests >> >>>> in >> >>>> [2], where he used suites to make it faster to run things, by starting a >> >>>> cache manager for distributed caches, running all distributed >> >>>> tests...etc. >> >>>> In a way, I think we can already do this with Arquillian Infinispan >> >>>> integration, so Arquillian would probably well suited for such smoke >> >>>> testsuite. >> >>>> >> >>>> Thoughts? >> >>>> >> >>>> Cheers, >> >>>> >> >>>> [1] https://github.com/wildfly/wildfly#running-the-testsuite >> >>>> [2] >> >>>> https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark >> >>>> -- >> >>>> Galder Zamarre?o >> >>>> Infinispan, Red Hat >> >>>> >> >>>> ----- Original Message ----- >> >>>>> Hi Jiri, comments inline. >> >>>>> >> >>>>> On 2.9.2015 10:40, Jiri Holusa wrote: >> >>>>>> Hi all, >> >>>>>> >> >>>>>> we've been thinking for a while, how to test ISPN uber jars. The >> >>>>>> current >> >>>>>> status is that we actually don't have many tests in the testsuite, >> >>>>>> there >> >>>>>> are few tests in integrationtests/all-embedded-* modules that are >> >>>>>> basically copies of the actual tests in corresponding modules. 
We >> >>>>>> think >> >>>>>> that this test coverage is not enough and more importantly, they are >> >>>>>> duplicates. >> >>>>>> >> >>>>>> The questions are now following: >> >>>>>> * which tests should be invoked with uber-jars? Whole ISPN testsuite? >> >>>>>> Only >> >>>>>> integrationtests module? >> >>>>> The goal is to run the whole test suite because, as you said, we don't >> >>>>> have enough tests in integrationtests/* And we can't duplicate all >> >>>>> test classes from individual modules here. >> >>>>> >> >>>>>> * how would it run? Create Maven different profiles for "classic" jars >> >>>>>> and >> >>>>>> uber jars? Or try to use some Maven exclusion magic if even possible? >> >>>>>> >> >>>>>> Some time ago, we had discussion about this with Sebastian, who >> >>>>>> suggested >> >>>>>> that running only integrationtests module would be sufficient, because >> >>>>>> uber-jars are really about packaging, not the functionality itself. >> >>>>>> But I >> >>>>>> don't know if the tests coverage is sufficient in that level, I would >> >>>>>> be >> >>>>>> much more confident if we could run the whole ISPN testsuite against >> >>>>>> uber-jars. >> >>>>> Right. Uber-jars are about packaging but you don't know that the >> >>>>> packiging is right until you try all the features and see that >> >>>>> everything works. There might be some classes missing (just for some >> >>>>> particular features), same classes in different packages, the >> >>>>> Manifest.mf might be corrupted and then something won't work in OSGi. >> >>>>> >> >>>>> >> >>>>> I'd prefer a separate Maven profile. IMO, exclusions are too >> >>>>> error-prone. >> >>>>> >> >>>>> >> >>>>> Martin >> >>>>>> I'm opening this for wider discussion as we should agree on the way >> >>>>>> how >> >>>>>> to >> >>>>>> do it, so we could do it right :) >> >>>>>> >> >>>>>> Cheers, >> >>>>>> Jiri >> >>>>>> >> >>>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Sep 11 10:34:16 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 11 Sep 2015 15:34:16 +0100 Subject: [infinispan-dev] Lucene 5 is coming: pitfalls to consider In-Reply-To: <1473049222.28352676.1441981268670.JavaMail.zimbra@redhat.com> References: <1473049222.28352676.1441981268670.JavaMail.zimbra@redhat.com> Message-ID: +1 for someone to do that :) Sorry I can't volunteer, this is my last day before going for 
holidays next two weeks.. see you all in Rome. What I wrote here was mostly targeting Infinispan developers and integrators; only the @SortableField is relevant to end users too: feel free to advertise our post on the matter, but it's not written yet! Next week. The Infinispan team should start thinking of exposing the equivalent of @SortableField for Hot Rod, in preparation for when we'll kill the old strategy (we might need to). I guess it would be more interesting to Infinispan users when you actually have that alternative to migrate to. Cheers, Sanne On 11 September 2015 at 15:21, Galder Zamarreno wrote: > Any chance of cross-posting the info/post to the Infinispan blog? > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > > ----- Original Message ----- >> A wrap up on this subject. >> >> Infinispan 8 is now based on Lucene 5.3 and all problems I previously >> listed are dealt with in a mostly-backwards compatible way; this is >> what you need to know. >> >> ## Null Markers >> One exception is null-marker tokens: when applied to a NumericField >> they now shall be represented by a number of the matching type of the >> field.. no big deal. >> >> ## Sorting >> The bigger issue was sorting, and its need for appropriate metadata, >> so that we'd know at indexing time which fields would potentially be >> the target for a sorting query. >> >> Our solution in Hibernate Search 5.5 is to provide a @SortableField >> annotation to allow users (and integrators like Infinispan Remote >> Query) to mark fields for this purpose, but also we're falling back to >> a slower sorting strategy in case at runtime a Query is run targeting >> wich a field which was not appropriately annotated. >> >> But while you might think "great, I don't have any change to do", >> especially if you don't need the extra performance boost that >> @SortableField would provide, make sure to start migrating >> infrastructure to use this annotation as the fallback strategy won't >> be maintained forever! >> >> With the next version we'll - by default - refuse to use the fallback >> and get a runtime exception, but still provide a configuration option >> to allow it. That would be a great time to make sure all your needs >> are covered by the new alternative metadata. After that we will get >> rid of the fallback strategy. >> >> Gunnar is going to publish a blog post with more details next week on >> the Hibernate blog: http://in.relation.to/ , please watch that space. >> >> ## Index encoding >> Hibernate Search is including the backwards compatible codecs. >> Infinispan could decide to include them too, if you prefer. >> >> ## Dynamic Analyzer choices >> We managed to keep this feature even if Lucene doesn't allow it, we'll >> probably deprecate this like with sorting but I guess this doesn't >> require any upfront work from Infinispan.
>> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Fri Sep 11 10:37:31 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 11 Sep 2015 16:37:31 +0200 Subject: [infinispan-dev] Lucene 5 is coming: pitfalls to consider In-Reply-To: References: <1473049222.28352676.1441981268670.JavaMail.zimbra@redhat.com> Message-ID: <55F2E72B.3050408@redhat.com> I'll take care of that. Our blog queue is quite long and I don't want to put out everything at once. Rate limiting mode: on. Tristan On 11/09/2015 16:34, Sanne Grinovero wrote: > +1 for someone to do that :) > Sorry I can't volunteer, this is my last day before going for holidays > next two weeks.. see you all in Rome. > > What I wrote here was mostly targeting Infinispan developers and > integrators; ony the @SortableField is relevant to end users too: feel > free to advertise our post on the matter, but it's not written yet! > Next week. > > The Infinispan team should start thinking of exposing the equivalent > of @SortableField for Hot Rod, in preparation for when we'll kill the > old strategy (we might need to). I guess it would be more interesting > to Infinispan users when you actually have that alternative to migrate > to. > > Cheers, > Sanne > > > On 11 September 2015 at 15:21, Galder Zamarreno wrote: >> Any chance of cross-posting the info/post to the Infinispan blog? >> >> Cheers, >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> ----- Original Message ----- >>> A wrap up on this subject. >>> >>> Infinispan 8 is now based on Lucene 5.3 and all problems I previously >>> listed are dealt with in a mostly-backwards compatible way; this is >>> what you need to know. >>> >>> ## Null Markers >>> One exception is null-marker tokens: when applied to a NumericField >>> they now shall be represented by a number of the matching type of the >>> field.. no big deal. >>> >>> ## Sorting >>> The bigger issue was sorting, and its need for appropriate metadata, >>> so that we'd know at indexing time which fields would potentially be >>> the target for a sorting query. >>> >>> Our solution in Hibernate Search 5.5 is to provide a @SortableField >>> annotation to allow users (and integrators like Infinispan Remote >>> Query) to mark fields for this purpose, but also we're falling back to >>> a slower sorting strategy in case at runtime a Query is run targeting >>> wich a field which was not appropriately annotated. >>> >>> But while you might think "great, I don't have any change to do", >>> especially if you don't need the extra performance boost that >>> @SortableField would provide, make sure to start migrating >>> infrastructure to use this annotation as the fallback strategy won't >>> be maintained forever! >>> >>> With the next version we'll - by default - refuse to use the fallback >>> and get a runtime exception, but still provide a configuration option >>> to allow it. That would be a great time to make sure all your needs >>> are covered by the new alternative metadata. After that we will get >>> rid of the fallback strategy. >>> >>> Gunnar is going to publish a blog post with more details next week on >>> the Hibernate blog: http://in.relation.to/ , please watch that space. 
>>> >>> ## Index encoding >>> Hibernate Search is including the backwards compatible codecs. >>> Infinispan could decide to include them too, if you prefer. >>> >>> ## Dynamic Analyzer choices >>> We managed to keep this feature even if Lucene doesn't allow it, we'll >>> probably deprecate this like with sorting but I guess this doesn't >>> require any upfront work from Infinispan. >>> >>> Thanks, >>> Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rvansa at redhat.com Fri Sep 11 10:54:44 2015 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 11 Sep 2015 16:54:44 +0200 Subject: [infinispan-dev] Uber jars testing In-Reply-To: References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> Message-ID: <55F2EB34.1040005@redhat.com> -0.1 for Byteman - although generally I am a fan of Byteman, it seems to me that the rules are too fragile, since IDE does not give any hints like "hey, there's a test that inserts some logic to the place you're modifying". IMO, using Byteman regularly will end up with many tests silently passing, since the timing is broken after few changes in the source. If we're going to use any instrumentation for testing, I'd consider putting annotation to the spot I want to hook. I know that this mixes the test code and actual processing, but makes any dirty tricks more obvious - and you can consider such annotation an abstract comment what's happening. Haven't prototyped that either :) Anyway, I'm eager to see how will the approach described by Galder work, can't imagine that fully atm. My $0.02 Radim On 09/11/2015 04:28 PM, Sanne Grinovero wrote: > +1 Galder for your abstraction, we might even need a DSL. > > An additional benefit would be that all "API functional tests" could > be tested also in conditions such as running in a in a race with > topology changes, instrumenting the timing of parallel code and > network operations. > As a DSL and using some Byteman we could automatically have it insert > network issues or timing issues at specific critical points; such > points follow a general pattern so this can be generalized w/o having > to code tests for specific critical paths in each functional test > independently. > A simple example would be to kill a node after a command was sent to > it, but before it replies: all the tests should be able to survive > that. > > > > On 11 September 2015 at 15:19, Galder Zamarreno wrote: >> ----- Original Message ----- >>> Any plans for tests that are just slightly different for different >>> configurations? With inheritance, it's simple - you just override the >>> method. 
If you just run that test on a huge matrix of configurations, >>> you end up with having a method with a very complicated switch for >>> certain configurations. >> ^ I see what you are getting at here. Normally such differences sometimes happen and can be divided into two: operations executed and assertions. Sometimes the operations executed are slightly different, and sometimes the operations are the same, but assertions slightly different. >> >> I don't have specific ideas about how to solve this but my gut feeling is something like this: >> >> If we can write tests as objects/types, where we define the operations and the assertions, then all the tests (testXXX methods) have to do is run this N objects against M configurations. With that in mind, running slightly different tests would be done extending or composing the test object/types, independent of the test classes themselves. To run these slight variations, we'd define a test class that runs the variations with M configurations. >> >> Note that I've not prototyped any of that and there are probably better ways to do this. >> >>> I am not asking sarcastically, but I've run into similar issue when >>> implementing similar thing in 2LC testsuite. >>> >>> Radim >>> >>> On 09/09/2015 03:22 PM, Galder Zamarreno wrote: >>>> I agree pretty much with everything below: >>>> >>>> * We overuse test overriding to run the same test with different >>>> configuration. I did that same mistake with the functional map API stuff >>>> :( >>>> >>>> * I'm in favour of testsuite restructuring, but I think we really need to >>>> start from scratch in a separate testsuite maven project, since we can >>>> then add all functional test for all (not only core...etc, but also >>>> compatibility tests...etc), and leave its project to test implementation >>>> details? Adding this separation would open up the path to create a testkit >>>> (as I explained last year in Berlin) >>>> >>>> * I'm also in favour in defining the test once and running it with >>>> different configuration options automatically. >>>> >>>> * I'm in favour too of randomising (need to check that link) but also we >>>> need some quickcheck style tests [1], e.g. a test that verifies that >>>> put(K, V) works not matter the type of object passed in. >>>> >>>> Cheers, >>>> >>>> [1] >>>> https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> >>>> ----- Original Message ----- >>>>> Interesting subject. We also have many tests which (ab)use inheritance >>>>> to re-test the same API semantics in slightly different >>>>> configurations, like embedded/DIST and embedded/REPL, sometimes >>>>> becoming an @Override mess. >>>>> It would be far more useful to restructure the testsuite to have such >>>>> tests in a single class (no inheritance) and declare - maybe >>>>> annotations? - which permutations of configuration parameters should >>>>> be valid. >>>>> >>>>> Among those configuration permutations one would not have "just" >>>>> different replication models, but also things like >>>>> - using the same API remotely (Hot Rod) >>>>> - using the same feature but within a WildFly embedded module >>>>> - using the uber jars vs small jars >>>>> - uber jars & remote.. >>>>> - remote & embedded modules.. >>>>> - remote, uber jars, in OSGi.. >>>>> >>>>> And finally combine with other options: >>>>> - A Query test using: remote client, using uber jars, in OSGi, but >>>>> switching JTA implementation, using a new experimental JGroups stack! 
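As a purely hypothetical sketch of what declaring such permutations on a single test class could look like (only CacheMode is a real Infinispan enum; the annotation and the string values are invented):

    @TestPermutations(
        cacheModes = {CacheMode.DIST_SYNC, CacheMode.REPL_SYNC},
        transports = {"embedded", "hotrod"},
        packaging  = {"small-jars", "uber-jars"}
    )
    public class PutGetSemanticsTest {
        // a single copy of the API semantics test; a smart runner expands
        // the declared permutations instead of requiring a subclass each
    }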
>>>>> >>>>> For example many Core API and Query tests are copy/pasted into other >>>>> modules as "integration tests", etc.. but we really should just run >>>>> the same one in a different environment. >>>>> >>>>> This would keep our code better maintainable, but also allow some neat >>>>> tricks like specify that some configurations should definitely be >>>>> tested in some test group (like Galder suggests, one could flag one of >>>>> these for "smoke tests", one for "nightly tests"), but you could also >>>>> want to flag some configuration settings as a "should work, low >>>>> priority for testing". >>>>> A smart testsuite could then use a randomizer to generate permutations >>>>> of configuration options for those low priority tests which are not >>>>> essential; there are great examples of such testsuites in the Haskell >>>>> world, and also Lucene and ElasticSearch do it. >>>>> A single random seed is used for the whole run, and it's printed >>>>> clearly at the start; a single seed will deterministically define all >>>>> parameters of the testsuite, so you can reproduce it all by setting a >>>>> specific seed when needing to debug a failure. >>>>> >>>>> http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html >>>>> >>>>> Thanks, >>>>> Sanne >>>>> >>>>> On 3 September 2015 at 11:34, Galder Zamarreno wrote: >>>>>> Another interesting improvement here would be if you could run all these >>>>>> smoke tests with an alternative implementation of AdvancedCache, e.g. one >>>>>> based with functional API. >>>>>> >>>>>> Cheers, >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>> ----- Original Message ----- >>>>>>> Good post Jiri, this got me thinking :) >>>>>>> >>>>>>> Running the entire testsuite again with uber jars would add a lot of >>>>>>> time >>>>>>> to >>>>>>> the build time. >>>>>>> >>>>>>> Maybe we should have a set of tests that must be executed for sure, e.g. >>>>>>> like >>>>>>> Wildfly's smoke tests [1]. We have "functional" group but right now it >>>>>>> covers pretty much all tests. >>>>>>> >>>>>>> Such tests should live in a separate testsuite, so that we could add the >>>>>>> essential tests for *all* components. In a way, we've already done some >>>>>>> of >>>>>>> this in integrationtests/ but it's not really well structured for this >>>>>>> aim. >>>>>>> >>>>>>> Also, if we would go down this path, something we should take advantage >>>>>>> of >>>>>>> (if possible with JUnit/TestNG) is what Gustavo did with the Spark tests >>>>>>> in >>>>>>> [2], where he used suites to make it faster to run things, by starting a >>>>>>> cache manager for distributed caches, running all distributed >>>>>>> tests...etc. >>>>>>> In a way, I think we can already do this with Arquillian Infinispan >>>>>>> integration, so Arquillian would probably well suited for such smoke >>>>>>> testsuite. >>>>>>> >>>>>>> Thoughts? >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> [1] https://github.com/wildfly/wildfly#running-the-testsuite >>>>>>> [2] >>>>>>> https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> Infinispan, Red Hat >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> Hi Jiri, comments inline. >>>>>>>> >>>>>>>> On 2.9.2015 10:40, Jiri Holusa wrote: >>>>>>>>> Hi all, >>>>>>>>> >>>>>>>>> we've been thinking for a while, how to test ISPN uber jars. 
The >>>>>>>>> current >>>>>>>>> status is that we actually don't have many tests in the testsuite, >>>>>>>>> there >>>>>>>>> are few tests in integrationtests/all-embedded-* modules that are >>>>>>>>> basically copies of the actual tests in corresponding modules. We >>>>>>>>> think >>>>>>>>> that this test coverage is not enough and more importantly, they are >>>>>>>>> duplicates. >>>>>>>>> >>>>>>>>> The questions are now following: >>>>>>>>> * which tests should be invoked with uber-jars? Whole ISPN testsuite? >>>>>>>>> Only >>>>>>>>> integrationtests module? >>>>>>>> The goal is to run the whole test suite because, as you said, we don't >>>>>>>> have enough tests in integrationtests/* And we can't duplicate all >>>>>>>> test classes from individual modules here. >>>>>>>> >>>>>>>>> * how would it run? Create Maven different profiles for "classic" jars >>>>>>>>> and >>>>>>>>> uber jars? Or try to use some Maven exclusion magic if even possible? >>>>>>>>> >>>>>>>>> Some time ago, we had discussion about this with Sebastian, who >>>>>>>>> suggested >>>>>>>>> that running only integrationtests module would be sufficient, because >>>>>>>>> uber-jars are really about packaging, not the functionality itself. >>>>>>>>> But I >>>>>>>>> don't know if the tests coverage is sufficient in that level, I would >>>>>>>>> be >>>>>>>>> much more confident if we could run the whole ISPN testsuite against >>>>>>>>> uber-jars. >>>>>>>> Right. Uber-jars are about packaging but you don't know that the >>>>>>>> packiging is right until you try all the features and see that >>>>>>>> everything works. There might be some classes missing (just for some >>>>>>>> particular features), same classes in different packages, the >>>>>>>> Manifest.mf might be corrupted and then something won't work in OSGi. >>>>>>>> >>>>>>>> >>>>>>>> I'd prefer a separate Maven profile. IMO, exclusions are too >>>>>>>> error-prone. 
>>>>>>>> >>>>>>>> Martin >>>>>>>>> I'm opening this for wider discussion as we should agree on the way >>>>>>>>> how >>>>>>>>> to >>>>>>>>> do it, so we could do it right :) >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Jiri >>>>>>>>> >>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Sep 11 11:41:02 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 11 Sep 2015 16:41:02 +0100 Subject: [infinispan-dev] Uber jars testing In-Reply-To: <55F2EB34.1040005@redhat.com> References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> <55F2EB34.1040005@redhat.com> Message-ID: Yes I understand, and I also use Byteman only as a last resort (sometimes it is). But while it might not be practical to maintain hundreds of tests using it and relying on specific internals - and in different ways - if you have a single general "test runner" which makes use of it you can limit the maintenance a lot, and it becomes feasible to test the tester to make sure your rules are still valid. And like you suggest, you'd only inject code at specific points, maybe marked with annotations. I didn't mean to suggest a specific way for doing it, I guess most of what we need can be done with a custom interceptor or JGroups protocols. The point is to inject failures at crucial points: nothing new, we do it all the time in many tests, but we don't reuse such patterns for other tests. TBH if I look into some cluster "resilience" tests I wrote myself I'm ashamed of the style and the assumptions I made some years ago, but there are many such code points scattered around so I'd rather centralize the logic and reuse one properly maintained runner.
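To make the idea concrete, a Byteman rule in the spirit of the "kill a node before it replies" example might look like the following; the target class and method are placeholders rather than real Infinispan internals, while killJVM() is a standard Byteman helper:

    # Illustrative rule only: crash the JVM after a command has been
    # received but before the reply is sent; tests should survive this.
    RULE kill node before reply
    CLASS org.example.CommandHandler
    METHOD sendReply
    AT ENTRY
    IF TRUE
    DO killJVM()
    ENDRULE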
On 11 September 2015 at 15:54, Radim Vansa wrote: > -0.1 for Byteman - although generally I am a fan of Byteman, it seems to > me that the rules are too fragile, since IDE does not give any hints > like "hey, there's a test that inserts some logic to the place you're > modifying". IMO, using Byteman regularly will end up with many tests > silently passing, since the timing is broken after few changes in the > source. > > If we're going to use any instrumentation for testing, I'd consider > putting annotation to the spot I want to hook. I know that this mixes > the test code and actual processing, but makes any dirty tricks more > obvious - and you can consider such annotation an abstract comment > what's happening. > > Haven't prototyped that either :) > > Anyway, I'm eager to see how will the approach described by Galder work, > can't imagine that fully atm. > > My $0.02 > > Radim > > On 09/11/2015 04:28 PM, Sanne Grinovero wrote: >> +1 Galder for your abstraction, we might even need a DSL. >> >> An additional benefit would be that all "API functional tests" could >> be tested also in conditions such as running in a in a race with >> topology changes, instrumenting the timing of parallel code and >> network operations. >> As a DSL and using some Byteman we could automatically have it insert >> network issues or timing issues at specific critical points; such >> points follow a general pattern so this can be generalized w/o having >> to code tests for specific critical paths in each functional test >> independently. >> A simple example would be to kill a node after a command was sent to >> it, but before it replies: all the tests should be able to survive >> that. >> >> >> >> On 11 September 2015 at 15:19, Galder Zamarreno wrote: >>> ----- Original Message ----- >>>> Any plans for tests that are just slightly different for different >>>> configurations? With inheritance, it's simple - you just override the >>>> method. If you just run that test on a huge matrix of configurations, >>>> you end up with having a method with a very complicated switch for >>>> certain configurations. >>> ^ I see what you are getting at here. Normally such differences sometimes happen and can be divided into two: operations executed and assertions. Sometimes the operations executed are slightly different, and sometimes the operations are the same, but assertions slightly different. >>> >>> I don't have specific ideas about how to solve this but my gut feeling is something like this: >>> >>> If we can write tests as objects/types, where we define the operations and the assertions, then all the tests (testXXX methods) have to do is run this N objects against M configurations. With that in mind, running slightly different tests would be done extending or composing the test object/types, independent of the test classes themselves. To run these slight variations, we'd define a test class that runs the variations with M configurations. >>> >>> Note that I've not prototyped any of that and there are probably better ways to do this. >>> >>>> I am not asking sarcastically, but I've run into similar issue when >>>> implementing similar thing in 2LC testsuite. >>>> >>>> Radim >>>> >>>> On 09/09/2015 03:22 PM, Galder Zamarreno wrote: >>>>> I agree pretty much with everything below: >>>>> >>>>> * We overuse test overriding to run the same test with different >>>>> configuration. 
I did that same mistake with the functional map API stuff >>>>> :( >>>>> >>>>> * I'm in favour of testsuite restructuring, but I think we really need to >>>>> start from scratch in a separate testsuite maven project, since we can >>>>> then add all functional test for all (not only core...etc, but also >>>>> compatibility tests...etc), and leave its project to test implementation >>>>> details? Adding this separation would open up the path to create a testkit >>>>> (as I explained last year in Berlin) >>>>> >>>>> * I'm also in favour in defining the test once and running it with >>>>> different configuration options automatically. >>>>> >>>>> * I'm in favour too of randomising (need to check that link) but also we >>>>> need some quickcheck style tests [1], e.g. a test that verifies that >>>>> put(K, V) works not matter the type of object passed in. >>>>> >>>>> Cheers, >>>>> >>>>> [1] >>>>> https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing >>>>> -- >>>>> Galder Zamarre?o >>>>> Infinispan, Red Hat >>>>> >>>>> ----- Original Message ----- >>>>>> Interesting subject. We also have many tests which (ab)use inheritance >>>>>> to re-test the same API semantics in slightly different >>>>>> configurations, like embedded/DIST and embedded/REPL, sometimes >>>>>> becoming an @Override mess. >>>>>> It would be far more useful to restructure the testsuite to have such >>>>>> tests in a single class (no inheritance) and declare - maybe >>>>>> annotations? - which permutations of configuration parameters should >>>>>> be valid. >>>>>> >>>>>> Among those configuration permutations one would not have "just" >>>>>> different replication models, but also things like >>>>>> - using the same API remotely (Hot Rod) >>>>>> - using the same feature but within a WildFly embedded module >>>>>> - using the uber jars vs small jars >>>>>> - uber jars & remote.. >>>>>> - remote & embedded modules.. >>>>>> - remote, uber jars, in OSGi.. >>>>>> >>>>>> And finally combine with other options: >>>>>> - A Query test using: remote client, using uber jars, in OSGi, but >>>>>> switching JTA implementation, using a new experimental JGroups stack! >>>>>> >>>>>> For example many Core API and Query tests are copy/pasted into other >>>>>> modules as "integration tests", etc.. but we really should just run >>>>>> the same one in a different environment. >>>>>> >>>>>> This would keep our code better maintainable, but also allow some neat >>>>>> tricks like specify that some configurations should definitely be >>>>>> tested in some test group (like Galder suggests, one could flag one of >>>>>> these for "smoke tests", one for "nightly tests"), but you could also >>>>>> want to flag some configuration settings as a "should work, low >>>>>> priority for testing". >>>>>> A smart testsuite could then use a randomizer to generate permutations >>>>>> of configuration options for those low priority tests which are not >>>>>> essential; there are great examples of such testsuites in the Haskell >>>>>> world, and also Lucene and ElasticSearch do it. >>>>>> A single random seed is used for the whole run, and it's printed >>>>>> clearly at the start; a single seed will deterministically define all >>>>>> parameters of the testsuite, so you can reproduce it all by setting a >>>>>> specific seed when needing to debug a failure. 
>>>>>> >>>>>> http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html >>>>>> >>>>>> Thanks, >>>>>> Sanne >>>>>> >>>>>> On 3 September 2015 at 11:34, Galder Zamarreno wrote: >>>>>>> Another interesting improvement here would be if you could run all these >>>>>>> smoke tests with an alternative implementation of AdvancedCache, e.g. one >>>>>>> based with functional API. >>>>>>> >>>>>>> Cheers, >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> Infinispan, Red Hat >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> Good post Jiri, this got me thinking :) >>>>>>>> >>>>>>>> Running the entire testsuite again with uber jars would add a lot of >>>>>>>> time >>>>>>>> to >>>>>>>> the build time. >>>>>>>> >>>>>>>> Maybe we should have a set of tests that must be executed for sure, e.g. >>>>>>>> like >>>>>>>> Wildfly's smoke tests [1]. We have "functional" group but right now it >>>>>>>> covers pretty much all tests. >>>>>>>> >>>>>>>> Such tests should live in a separate testsuite, so that we could add the >>>>>>>> essential tests for *all* components. In a way, we've already done some >>>>>>>> of >>>>>>>> this in integrationtests/ but it's not really well structured for this >>>>>>>> aim. >>>>>>>> >>>>>>>> Also, if we would go down this path, something we should take advantage >>>>>>>> of >>>>>>>> (if possible with JUnit/TestNG) is what Gustavo did with the Spark tests >>>>>>>> in >>>>>>>> [2], where he used suites to make it faster to run things, by starting a >>>>>>>> cache manager for distributed caches, running all distributed >>>>>>>> tests...etc. >>>>>>>> In a way, I think we can already do this with Arquillian Infinispan >>>>>>>> integration, so Arquillian would probably well suited for such smoke >>>>>>>> testsuite. >>>>>>>> >>>>>>>> Thoughts? >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> [1] https://github.com/wildfly/wildfly#running-the-testsuite >>>>>>>> [2] >>>>>>>> https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark >>>>>>>> -- >>>>>>>> Galder Zamarre?o >>>>>>>> Infinispan, Red Hat >>>>>>>> >>>>>>>> ----- Original Message ----- >>>>>>>>> Hi Jiri, comments inline. >>>>>>>>> >>>>>>>>> On 2.9.2015 10:40, Jiri Holusa wrote: >>>>>>>>>> Hi all, >>>>>>>>>> >>>>>>>>>> we've been thinking for a while, how to test ISPN uber jars. The >>>>>>>>>> current >>>>>>>>>> status is that we actually don't have many tests in the testsuite, >>>>>>>>>> there >>>>>>>>>> are few tests in integrationtests/all-embedded-* modules that are >>>>>>>>>> basically copies of the actual tests in corresponding modules. We >>>>>>>>>> think >>>>>>>>>> that this test coverage is not enough and more importantly, they are >>>>>>>>>> duplicates. >>>>>>>>>> >>>>>>>>>> The questions are now following: >>>>>>>>>> * which tests should be invoked with uber-jars? Whole ISPN testsuite? >>>>>>>>>> Only >>>>>>>>>> integrationtests module? >>>>>>>>> The goal is to run the whole test suite because, as you said, we don't >>>>>>>>> have enough tests in integrationtests/* And we can't duplicate all >>>>>>>>> test classes from individual modules here. >>>>>>>>> >>>>>>>>>> * how would it run? Create Maven different profiles for "classic" jars >>>>>>>>>> and >>>>>>>>>> uber jars? Or try to use some Maven exclusion magic if even possible? 
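For what it's worth, a rough sketch of the profile option, assuming the uber jar is consumed as an infinispan-embedded artifact (artifact ids and layout are illustrative):

    <!-- Sketch only: one profile per packaging flavour, so the same
         testsuite resolves either the classic jars or the uber jar. -->
    <profiles>
      <profile>
        <id>classic-jars</id>
        <activation><activeByDefault>true</activeByDefault></activation>
        <dependencies>
          <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-core</artifactId>
          </dependency>
        </dependencies>
      </profile>
      <profile>
        <id>uber-jars</id>
        <dependencies>
          <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-embedded</artifactId>
          </dependency>
        </dependencies>
      </profile>
    </profiles>

Running mvn verify -Puber-jars would then execute the same tests against the uber jar packaging.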
>>>>>>>>>> >>>>>>>>>> Some time ago, we had discussion about this with Sebastian, who >>>>>>>>>> suggested >>>>>>>>>> that running only integrationtests module would be sufficient, because >>>>>>>>>> uber-jars are really about packaging, not the functionality itself. >>>>>>>>>> But I >>>>>>>>>> don't know if the tests coverage is sufficient in that level, I would >>>>>>>>>> be >>>>>>>>>> much more confident if we could run the whole ISPN testsuite against >>>>>>>>>> uber-jars. >>>>>>>>> Right. Uber-jars are about packaging but you don't know that the >>>>>>>>> packiging is right until you try all the features and see that >>>>>>>>> everything works. There might be some classes missing (just for some >>>>>>>>> particular features), same classes in different packages, the >>>>>>>>> Manifest.mf might be corrupted and then something won't work in OSGi. >>>>>>>>> >>>>>>>>> >>>>>>>>> I'd prefer a separate Maven profile. IMO, exclusions are too >>>>>>>>> error-prone. >>>>>>>>> >>>>>>>>> >>>>>>>>> Martin >>>>>>>>>> I'm opening this for wider discussion as we should agree on the way >>>>>>>>>> how >>>>>>>>>> to >>>>>>>>>> do it, so we could do it right :) >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> Jiri >>>>>>>>>> >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Mon Sep 14 07:46:53 2015 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 14 Sep 2015 12:46:53 +0100 Subject: [infinispan-dev] Remove cache issues Message-ID: <55F6B3AD.2010709@infinispan.org> Hi, I found the following issues with _EmbeddedCacheManager.removeCache()_ while I was helping the LEADS devs. The method removes the cache from all the nodes in the cluster. #1 It has different behaviour in the invoker node. In the invoked node, it removes the configuration from _configurationOverrides_ field and from _cacheDependencyGraph_. In the remaining node, it doesn't. 
To think: it should remove from _cacheDependencyGraph_ in all the nodes but keep the configuration. #2 It tries to remove the cache remotely before locally. It could be done in parallel and it has a small issue: if a timeout occurs, it never tries to remove the cache locally. To think: can we send the request asynchronously? #3 When passivation is enabled, it first invokes _PassivationManager.passivateAll()_ and then _PersistenceManager.stop()_. The former will copy all the data in memory to the cache store and the latter will clear the cache store. We can skip the passivation. To think: create a _PassivationManager.skipPassivationOnStop()_ (similar to _PersistenceManager.setClearOnStop()_). Comments are welcome. Cheers, Pedro From ttarrant at redhat.com Mon Sep 14 11:46:19 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 14 Sep 2015 17:46:19 +0200 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-09-14 Message-ID: <55F6EBCB.6010101@redhat.com> Hi all, here are the meeting minutes from today's IRC meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-14-14.02.log.html Enjoy Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From dan.berindei at gmail.com Tue Sep 15 03:19:20 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 15 Sep 2015 10:19:20 +0300 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-09-14 In-Reply-To: <55F6EBCB.6010101@redhat.com> References: <55F6EBCB.6010101@redhat.com> Message-ID: Hi guys Sorry for missing the meeting yesterday, I was out most of the day. I started last week trying to help Bela with the TCP_NIO2 problems. I wasn't quite satisfied with JGroups' logging, so I tried to use the Chronon embedded with IntelliJ to debug the tests. In theory it's nice because you only have to reproduce a random failure once, and you can replay the trace as many times as you want, but I got IntelliJ to hang way too many times, so I had to give up. Bela fixed the correctness problem by himself, but the test suite is about 50% slower with TCP_NIO2, so we can't use it instead of TCP without further investigation. Then I got back to ISPN-5699, and after a lot of fiddling with the EntryFactoryImpl methods and fixing failing tests, this morning I finally issued the PR. This week, I need to look into a replicated-mode read performance regression, and then get back to the sequential interceptor interfaces. Cheers Dan On Mon, Sep 14, 2015 at 6:46 PM, Tristan Tarrant wrote: > Hi all, > > here are the meeting minutes from today's IRC meeting: > > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-14-14.02.log.html > > Enjoy > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue Sep 15 04:49:16 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 15 Sep 2015 10:49:16 +0200 Subject: [infinispan-dev] Fine-grained security proposals Message-ID: <55F7DB8C.8090206@redhat.com> Hi guys, I've created a wiki entry for fine-grained authorization. Please look at it: https://github.com/infinispan/infinispan/wiki/Fine-grained-security-for-caches And let me know your thoughts. Personally I'm not sure the second case is worth the effort, but it definitely has its advantages.
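The wiki page is the authoritative description, but in rough terms the callback flavour could be pictured like this; the interface below is entirely illustrative, only Subject and AuthorizationPermission are existing types:

    // Illustrative only -- not the actual proposal from the wiki page.
    public interface AuthorizationCallback<K> {
        // consulted by the cache before each operation on a given key
        boolean isPermitted(Subject subject,
                            AuthorizationPermission permission, K key);
    }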
Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From bban at redhat.com Tue Sep 15 05:12:27 2015 From: bban at redhat.com (Bela Ban) Date: Tue, 15 Sep 2015 05:12:27 -0400 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-09-14 In-Reply-To: References: <55F6EBCB.6010101@redhat.com> Message-ID: <55F7E0FB.60902@redhat.com> Hi Dan, what a coincidence, I tried Chronon as well, but outside of IDEA as it requires the commercial version and I only have the community version... However, using something like Chronon is certainly worth looking into, for all of us, as (as you mentioned) you only need to reproduce the error once and can then go back in time to see the stack, variables etc, so you know exactly what's going on. If someone gets Chronon (or any other time travelling debugger) to work, this would be worth a demo at our November meeting, and would add a tool of tremendous value to our common toolset! Any takers? On 09/15/2015 03:19 AM, Dan Berindei wrote: > Hi guys > > Sorry for missing the meeting yesterday, I was out most of the day. > > I started last week trying to help Bela with the TCP_NIO2 problems. I > wasn't quite satisfied with JGroups' logging, so I tried to use the > Chronon embedded with IntelliJ to debug the tests. In theory it's nice > because you only have to reproduce a random failure once, and you can > replay the trace as many times as you want, but I got IntelliJ to hang > way too many times, so I had to give up. Bela fixed the correctness > problem by himself, but the test suite is about 50% slower with > TCP_NIO2, so we can't use it instead of TCP without further > investigation. > > Then I got back to ISPN-5699, and after a lot of fiddling with the > EntryFactoryImpl methods and fixing failing tests, this morning I > finally issued the PR. > > This week, I need to look into a replicated-mode read performance > regression, and then get back to the sequential interceptor > interfaces. > > Cheers > Dan > > > On Mon, Sep 14, 2015 at 6:46 PM, Tristan Tarrant wrote: >> Hi all, >> >> here are the meeting minutes from today's IRC meeting: >> >> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-14-14.02.log.html >> >> Enjoy >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Tue Sep 15 06:02:26 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 15 Sep 2015 12:02:26 +0200 Subject: [infinispan-dev] TaskManager Message-ID: <55F7ECB2.9060909@redhat.com> Hi all, design morning! I have updated the wiki page on scripting execution to make it a bit more comprehensive. For this reason the title has changed a bit. https://github.com/infinispan/infinispan/wiki/Task-Execution-Design Please let me know your thoughts.
Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From dan.berindei at gmail.com Tue Sep 15 07:32:48 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 15 Sep 2015 14:32:48 +0300 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-09-14 In-Reply-To: <55F7E0FB.60902@redhat.com> References: <55F6EBCB.6010101@redhat.com> <55F7E0FB.60902@redhat.com> Message-ID: Hi Bela I was able to reuse the configuration created by IntelliJ and run my test in a loop outside the IDE, until I reproduced it. Unfortunately, when I was tracing all of JGroups, it was also harder to reproduce, the test JVM was crashing a lot, and I also had problems loading the trace in the IDE. I'll see if I can get a demo working for the November meeting, but this was how I managed to reproduce it: 1. Figure out where the chronon configuration and recording are stored, in my case: CHRONON_DIR=$HOME/.IntelliJIdea14/system/chronon-recordings/2015_09_08_FullSyncWriteSkewTotalOrderTest_testPut1/ 2. Copy the agent parameters from the IntelliJ console to MAVEN_FORK_OPTS (this is Infinispan-specific, if maven-surefire isn't forking you can use MAVEN_OPTS instead): export MAVEN_FORK_OPTS="-javaagent:$HOME/.IntelliJIdea14/config/plugins/chronon/lib/recorder/recorder-3.70.0.200.jar=$CHRONON_DIR/config.txt -agentpath:$HOME/.IntelliJIdea14/config/plugins/chronon/lib/recorder/native/librecorderagent64-3.0.7.so -noverify" 3. Run these in a loop until you get a test failure: rm -rf $CHRONON_DIR/*/ \ mvn test -pl core '-Dtest=org.infinispan.tx.totalorder.simple.dist.FullSyncWriteSkewTotalOrderTest#testPut' 4. Open $CHRONON_DIR in IntelliJ with Run -> Open Chronon Recording. 5. Enjoy debugging backwards in time... Well, I haven't got to the point where I enjoy it yet, but I think I made some progress :) Cheers Dan On Tue, Sep 15, 2015 at 12:12 PM, Bela Ban wrote: > Hi Dan, > > what a coincidence, I tried Chronon as well, but outside of IDEA as it > requires the commercial version and I only have the community version... > > However, using something like Chronos is certainly worth looking into, > for all of us, as (as you mentioned) you only need to reproduce the > error once and can then go back in time to see the stack, variables etc, > so you know exactly what's going on. > > If someone gets Chronos (or any other time travelling debugger) to work, > this would be worth a demo at our November meeting, and would add a tool > of tremendous value to our common toolset ! > > Any takers ? > > > On 09/15/2015 03:19 AM, Dan Berindei wrote: >> Hi guys >> >> Sorry for missing the meeting yesterday, I was out most of the day. >> >> I started last week trying to help Bela with the TCP_NIO2 problems. I >> wasn't quite satisfied with JGroups' logging, so I tried to use the >> Chronon embedded with IntelliJ to debug the tests. In theory it's nice >> because you only have to reproduce a random failure once, and you can >> replay the trace as many times as you want, but I got IntelliJ to hang >> way too many times, so I had to give up. Bela fixed the correctness >> problem by himself, but the test suite is about 50% slower with >> TCP_NIO2, so we can't use it instead of TCP without further >> investigation. >> >> Then I got back to ISPN-5699, and after a lot of fiddling with the >> EntryFactoryImpl methods and fixing failing tests, this morning I >> finally issued the PR. >> >> This week, I need to look into a replicated-mode read performance >> regression, and then get back to the sequential interceptor >> interfaces. 
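On Will's first point, the semantics are easy to illustrate; the task endpoint below is invented, only the cache read reflects the existing REST API shape:

    # Reading a key is idempotent and side-effect free, so GET fits:
    curl -X GET http://localhost:8080/rest/mycache/mykey

    # Executing a script can mutate caches and return different results
    # on every run, so it belongs behind POST (endpoint is illustrative):
    curl -X POST http://localhost:8080/rest/tasks/my-script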
>> >> Cheers >> Dan >> >> >> On Mon, Sep 14, 2015 at 6:46 PM, Tristan Tarrant wrote: >>> Hi all, >>> >>> here are the meeting minutes from today's IRC meeting: >>> >>> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-14-14.02.log.html >>> >>> Enjoy >>> >>> Tristan >>> -- >>> Tristan Tarrant >>> Infinispan Lead >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Tue Sep 15 08:25:59 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 15 Sep 2015 15:25:59 +0300 Subject: [infinispan-dev] Fine-grained security proposals In-Reply-To: <55F7DB8C.8090206@redhat.com> References: <55F7DB8C.8090206@redhat.com> Message-ID: Unless there is a big push for ACLs from users, I think we should stick to the callback version. Cheers Dan On Tue, Sep 15, 2015 at 11:49 AM, Tristan Tarrant wrote: > Hi guys, > I've created a wiki entry for fine-grained authorization. Please look at it: > > https://github.com/infinispan/infinispan/wiki/Fine-grained-security-for-caches > > And let me know your thoughts. Personally I'm not sure the second case > is worth the effort, but it definitely has its advantages. > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Tue Sep 15 08:46:53 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 15 Sep 2015 15:46:53 +0300 Subject: [infinispan-dev] Remove cache issues In-Reply-To: <55F6B3AD.2010709@infinispan.org> References: <55F6B3AD.2010709@infinispan.org> Message-ID: On Mon, Sep 14, 2015 at 2:46 PM, Pedro Ruivo wrote: > Hi, > > I found the following issues with _EmbeddedCacheManager.removeCache()_ > while I was helping the LEADS devs. The method removes the cache from > all the nodes in the cluster. > > #1 It has different behaviour in the invoker node. > > In the invoked node, it removes the configuration from > _configurationOverrides_ field and from _cacheDependencyGraph_. In the > remaining node, it doesn't. > > To think: it should remove from _cacheDependencyGraph_ in all the nodes > but keep the configuration. Galder added the _configurationOverrides_ removal for ISPN-3234, so I'm guessing JCache needs it. I guess the problem is that as long as the configuration exists, manager.getCache(name) will re-create the cache, and that doesn't fit with JCache. I'd rather remove the cache from _configurationOverrides_ everywhere, at least until we stop auto-spawning caches. > > #2 It tries to remove the cache remotely before locally. > > It could be done in parallel and it has a small issue: if a timeout > occurs, it never tries to remove the cache locally. > > To think: can we send the request asynchronously? 
+1 to do it asynchronously > > #3 When passivation is enabled, it first invokes > _PassivationManager.passivateAll()_ and then _PersistenceManager.stop()_. > > The former will copy all the data in memory to the cache store and the > latter will clear the cache store. We can skip the passivation. > > To think: create a _PassivationManager.skipPassivationOnStop()_ (similar > to _PersistenceManager.setClearOnStop()_). Personally I was never really happy with removeCache() always purging the store. Although it makes sense for temporary caches, it means we're missing a way to just stop a cache on all the nodes. Maybe add a (mutable?) configuration option AbstractStoreConfiguration.purgeOnStop that could be referenced by both the PassivationManager and the PersistenceManager instead? Cheers Dan From mudokonman at gmail.com Tue Sep 15 09:11:42 2015 From: mudokonman at gmail.com (William Burns) Date: Tue, 15 Sep 2015 13:11:42 +0000 Subject: [infinispan-dev] Fine-grained security proposals In-Reply-To: References: <55F7DB8C.8090206@redhat.com> Message-ID: From the perspective of liking to control things, I agree on the callback version. The callback implementation also seems like it will be much easier to implement and maintain as well :) Guessing we can put everything in SecureCache as well then. - Will On Tue, Sep 15, 2015 at 8:26 AM Dan Berindei wrote: > Unless there is a big push for ACLs from users, I think we should > stick to the callback version. > > Cheers > Dan > > > On Tue, Sep 15, 2015 at 11:49 AM, Tristan Tarrant > wrote: > > Hi guys, > > I've created a wiki entry for fine-grained authorization. Please look at > it: > > > > > https://github.com/infinispan/infinispan/wiki/Fine-grained-security-for-caches > > > > And let me know your thoughts. Personally I'm not sure the second case > > is worth the effort, but it definitely has its advantages. > > > > Tristan > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From mudokonman at gmail.com Tue Sep 15 09:29:07 2015 From: mudokonman at gmail.com (William Burns) Date: Tue, 15 Sep 2015 13:29:07 +0000 Subject: [infinispan-dev] TaskManager In-Reply-To: <55F7ECB2.9060909@redhat.com> References: <55F7ECB2.9060909@redhat.com> Message-ID: Since I can't comment on the wiki page, will send it here. 1. I am not sure what our other REST operations do, but imo we shouldn't be using a GET to run a script. A GET to me should be idempotent; in our case the script can modify underlying resources and return different results each time. 2. This TaskManager would also be used by other things such as map/reduce, distributed streams, right? - Will On Tue, Sep 15, 2015 at 6:02 AM Tristan Tarrant wrote: > Hi all, > > design morning ! > > I have updated the wiki page on scripting execution to make it a bit > more comprehensive. For this reason the title has changed a bit. > > https://github.com/infinispan/infinispan/wiki/Task-Execution-Design > > Please let me know your thoughts.
> > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Tue Sep 15 09:35:01 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 15 Sep 2015 15:35:01 +0200 Subject: [infinispan-dev] TaskManager In-Reply-To: References: <55F7ECB2.9060909@redhat.com> Message-ID: <55F81E85.9000506@redhat.com> On 15/09/2015 15:29, William Burns wrote: > Since I can't comment on the wiki page, will send it here. > > 1. I am not sure what our other REST operations do, but imo we shouldn't > be using a GET to run a script. A GET to me should be idempotent; in > our case the script can modify underlying resources and return different > results each time. Right. Then it will have to be a POST. I guess this would also need to be taken into account when we implement queries over REST. > 2. This TaskManager would also be used by other things such as > map/reduce, distributed streams, right? The intention is to initially provide two types of "task": scripting tasks and deployed tasks. The former you all know already whereas the latter will be packaged in a JAR and implement some kind of Task interface. I'll push a preview PR today with an initial design for these interfaces. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From vjuranek at redhat.com Tue Sep 15 11:16:42 2015 From: vjuranek at redhat.com (Vojtech Juranek) Date: Tue, 15 Sep 2015 17:16:42 +0200 Subject: [infinispan-dev] Fine-grained security proposals In-Reply-To: <55F7DB8C.8090206@redhat.com> References: <55F7DB8C.8090206@redhat.com> Message-ID: <4260825.mJhQjFb2yv@localhost.localdomain> Hi, > I've created a wiki entry for fine-grained authorization. could you specify the use cases you'd like to address with this feature? If I want to e.g. restrict access to entries only to the user who created the entry, the callback itself doesn't easily (*) solve the problem, as I need some additional entry metadata for the decision, etc. Btw: in the case of an auth. callback I see custom code as an advantage, as it gives me the freedom to implement my security policy as I like. And we can provide some common callbacks for users who don't want to implement it themselves. Thanks Vojta (*) I can probably e.g. encode some subject hash into the key and then allow/reject requests for an entry based on the hash of the requesting subject, but this is not a very nice solution
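To illustrate the kind of callback I have in mind - the interface shape and all names below are invented, just to make the discussion concrete:

import java.security.Principal;
import javax.security.auth.Subject;

// Hypothetical per-entry authorization callback (not part of the actual proposal).
interface EntryPermissionChecker<K, V> {
   boolean isAllowed(Subject subject, K key, V value);
}

// Example value wrapper: the application keeps the creator's principal name
// next to the actual value - this is the "additional entry metadata" I mentioned.
final class Owned<V> {
   final String owner;
   final V value;
   Owned(String owner, V value) { this.owner = owner; this.value = value; }
}

// Example policy: only the principal that created the entry may access it.
final class CreatorOnlyChecker<K, V> implements EntryPermissionChecker<K, Owned<V>> {
   @Override
   public boolean isAllowed(Subject subject, K key, Owned<V> value) {
      for (Principal p : subject.getPrincipals()) {
         if (p.getName().equals(value.owner)) {
            return true; // the caller is the entry's creator
         }
      }
      return false;
   }
}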
From bban at redhat.com Tue Sep 15 13:31:51 2015 From: bban at redhat.com (Bela Ban) Date: Tue, 15 Sep 2015 13:31:51 -0400 Subject: [infinispan-dev] Weekly IRC meeting minutes 2015-09-14 In-Reply-To: References: <55F6EBCB.6010101@redhat.com> <55F7E0FB.60902@redhat.com> Message-ID: <55F85607.6000007@redhat.com> Hey Dan, this would be great, as it would allow anyone who can reproduce a bug maybe 1 out of 10 times, to record the failed session and then send the captured data to a dev who could then replay it to see what went wrong. If you can make a demo of this and present this at the f2f in Berlin, that would be great ! I think this would be a very valuable tool in our toolset to handle almost non-reproducible bugs. On 09/15/2015 07:32 AM, Dan Berindei wrote: > Hi Bela > > I was able to reuse the configuration created by IntelliJ and run my > test in a loop outside the IDE, until I reproduced it. Unfortunately, > when I was tracing all of JGroups, it was also harder to reproduce, > the test JVM was crashing a lot, and I also had problems loading the > trace in the IDE. > > I'll see if I can get a demo working for the November meeting, but > this was how I managed to reproduce it: > > 1. Figure out where the chronon configuration and recording are > stored, in my case: > CHRONON_DIR=$HOME/.IntelliJIdea14/system/chronon-recordings/2015_09_08_FullSyncWriteSkewTotalOrderTest_testPut1/ > > 2. Copy the agent parameters from the IntelliJ console to > MAVEN_FORK_OPTS (this is Infinispan-specific, if maven-surefire isn't > forking you can use MAVEN_OPTS instead): > export MAVEN_FORK_OPTS="-javaagent:$HOME/.IntelliJIdea14/config/plugins/chronon/lib/recorder/recorder-3.70.0.200.jar=$CHRONON_DIR/config.txt > -agentpath:$HOME/.IntelliJIdea14/config/plugins/chronon/lib/recorder/native/librecorderagent64-3.0.7.so > -noverify" > > 3. Run these in a loop until you get a test failure: > rm -rf $CHRONON_DIR/*/ && > mvn test -pl core > '-Dtest=org.infinispan.tx.totalorder.simple.dist.FullSyncWriteSkewTotalOrderTest#testPut' > > 4. Open $CHRONON_DIR in IntelliJ with Run -> Open Chronon Recording. > > 5. Enjoy debugging backwards in time... Well, I haven't got to the > point where I enjoy it yet, but I think I made some progress :) > > Cheers > Dan > > On Tue, Sep 15, 2015 at 12:12 PM, Bela Ban wrote: >> Hi Dan, >> >> what a coincidence, I tried Chronon as well, but outside of IDEA as it >> requires the commercial version and I only have the community version... >> >> However, using something like Chronon is certainly worth looking into, >> for all of us, as (as you mentioned) you only need to reproduce the >> error once and can then go back in time to see the stack, variables etc, >> so you know exactly what's going on. >> >> If someone gets Chronon (or any other time travelling debugger) to work, >> this would be worth a demo at our November meeting, and would add a tool >> of tremendous value to our common toolset ! >> >> Any takers ? >> >> >> On 09/15/2015 03:19 AM, Dan Berindei wrote: >>> Hi guys >>> >>> Sorry for missing the meeting yesterday, I was out most of the day. >>> >>> I started last week trying to help Bela with the TCP_NIO2 problems. I >>> wasn't quite satisfied with JGroups' logging, so I tried to use the >>> Chronon embedded with IntelliJ to debug the tests.
In theory it's nice >>> because you only have to reproduce a random failure once, and you can >>> replay the trace as many times as you want, but I got IntelliJ to hang >>> way too many times, so I had to give up. Bela fixed the correctness >>> problem by himself, but the test suite is about 50% slower with >>> TCP_NIO2, so we can't use it instead of TCP without further >>> investigation. >>> >>> Then I got back to ISPN-5699, and after a lot of fiddling with the >>> EntryFactoryImpl methods and fixing failing tests, this morning I >>> finally issued the PR. >>> >>> This week, I need to look into a replicated-mode read performance >>> regression, and then get back to the sequential interceptor >>> interfaces. >>> >>> Cheers >>> Dan >>> >>> >>> On Mon, Sep 14, 2015 at 6:46 PM, Tristan Tarrant wrote: >>>> Hi all, >>>> >>>> here are the meeting minutes from today's IRC meeting: >>>> >>>> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-14-14.02.log.html >>>> >>>> Enjoy >>>> >>>> Tristan >>>> -- >>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Wed Sep 16 06:03:27 2015 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 16 Sep 2015 13:03:27 +0300 Subject: [infinispan-dev] Uber jars testing In-Reply-To: References: <511839617.16737762.1441183234404.JavaMail.zimbra@redhat.com> <55E6B841.4090808@redhat.com> <2076411012.24092427.1441276299249.JavaMail.zimbra@redhat.com> <1497354448.24093053.1441276467028.JavaMail.zimbra@redhat.com> <356567689.27056391.1441804972571.JavaMail.zimbra@redhat.com> <55F05633.2020307@redhat.com> <480833508.28352119.1441981142276.JavaMail.zimbra@redhat.com> <55F2EB34.1040005@redhat.com> Message-ID: Sanne, we don't really care about the various JGroups protocols, we use a DISCARD protocol in some tests but it's always just above the transport. Instead, a lot of tests need to block just before/after a certain interceptor. I think there are also tests that we should have, but are missing, because we don't have a good way to "split" the execution of a single interceptor and introduce a delay. IMO a generic test that runs a set of operations with a set of configurations without introducing any failures should be great for testing the "happy flow" (which, incidentally, I think is what the integration tests should focus on). And we could even use the random seed idea to reduce the number of tests that run each time.
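Something along these lines is what I mean by the seeded selection - just a sketch, the configuration names and the system property are made up:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeededConfigurationMatrix {
   // The same seed always yields the same subset, so a failing combination
   // can be reproduced by re-running with the seed printed at the start.
   static <C> List<C> pick(List<C> allConfigs, int howMany, long seed) {
      List<C> shuffled = new ArrayList<>(allConfigs);
      Collections.shuffle(shuffled, new Random(seed));
      return shuffled.subList(0, Math.min(howMany, shuffled.size()));
   }

   public static void main(String[] args) {
      long seed = Long.getLong("test.random.seed", System.nanoTime());
      System.out.println("Configuration matrix seed: " + seed);
      List<String> all = Arrays.asList("LOCAL", "REPL_SYNC", "DIST_SYNC", "INVALIDATION_SYNC");
      for (String config : pick(all, 2, seed)) {
         // Here we would run the generic happy-flow operations against a
         // cache built with this configuration.
         System.out.println("Running happy-flow suite against " + config);
      }
   }
}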
But once you start to introduce failures/delays at specific points, it becomes really hard to make the failure the same for all operations and configurations, because they really behave differently. Putting everything in a single class won't make those differences disappear. Back to Jiri's initial question, I think it would be best to have a separate module that runs the whole core/query/hotrod-client test suites with uber jars (assuming that's possible). Those tests don't take that long, because all 3 modules can run tests in parallel, so I'm not sure trying to find the right subset is worth it. Cheers Dan On Fri, Sep 11, 2015 at 6:41 PM, Sanne Grinovero wrote: > Yes I understand, and I also use Byteman only as last resort (sometimes it is). > > But while it might not be practical to maintain hundreds of tests > using it and relying on specific internals - and in different ways - > if you have a single general "test runner" which makes use of it you > can limit the maintenance a lot, and it becomes feasible to test the > tester to make sure your rules are still valid. > And like you suggest, you'd only inject code in specific points, maybe > marked with annotations. > I didn't mean to suggest a specific way for doing it, I guess most of > what we need can be done with a custom interceptor or JGroups > protocols. > > The point is to inject failures at crucial points: nothing new, we do > it all the time in many tests, but we don't reuse such patterns for > other tests. TBH if I look into some cluster "resilience" tests I > wrote myself I'm ashamed from reading the style and the assumptions > I'd make some years ago, but there are many such code points scattered > around so I'd rather centralize the logic and reuse one properly > maintained runner. > > > On 11 September 2015 at 15:54, Radim Vansa wrote: >> -0.1 for Byteman - although generally I am a fan of Byteman, it seems to >> me that the rules are too fragile, since IDE does not give any hints >> like "hey, there's a test that inserts some logic to the place you're >> modifying". IMO, using Byteman regularly will end up with many tests >> silently passing, since the timing is broken after few changes in the >> source. >> >> If we're going to use any instrumentation for testing, I'd consider >> putting annotation to the spot I want to hook. I know that this mixes >> the test code and actual processing, but makes any dirty tricks more >> obvious - and you can consider such annotation an abstract comment >> what's happening. >> >> Haven't prototyped that either :) >> >> Anyway, I'm eager to see how will the approach described by Galder work, >> can't imagine that fully atm. >> >> My $0.02 >> >> Radim >> >> On 09/11/2015 04:28 PM, Sanne Grinovero wrote: >>> +1 Galder for your abstraction, we might even need a DSL. >>> >>> An additional benefit would be that all "API functional tests" could >>> be tested also in conditions such as running in a in a race with >>> topology changes, instrumenting the timing of parallel code and >>> network operations. >>> As a DSL and using some Byteman we could automatically have it insert >>> network issues or timing issues at specific critical points; such >>> points follow a general pattern so this can be generalized w/o having >>> to code tests for specific critical paths in each functional test >>> independently. >>> A simple example would be to kill a node after a command was sent to >>> it, but before it replies: all the tests should be able to survive >>> that. 
>>> >>> >>> >>> On 11 September 2015 at 15:19, Galder Zamarreno wrote: >>>> ----- Original Message ----- >>>>> Any plans for tests that are just slightly different for different >>>>> configurations? With inheritance, it's simple - you just override the >>>>> method. If you just run that test on a huge matrix of configurations, >>>>> you end up with having a method with a very complicated switch for >>>>> certain configurations. >>>> ^ I see what you are getting at here. Normally such differences sometimes happen and can be divided into two: operations executed and assertions. Sometimes the operations executed are slightly different, and sometimes the operations are the same, but assertions slightly different. >>>> >>>> I don't have specific ideas about how to solve this but my gut feeling is something like this: >>>> >>>> If we can write tests as objects/types, where we define the operations and the assertions, then all the tests (testXXX methods) have to do is run this N objects against M configurations. With that in mind, running slightly different tests would be done extending or composing the test object/types, independent of the test classes themselves. To run these slight variations, we'd define a test class that runs the variations with M configurations. >>>> >>>> Note that I've not prototyped any of that and there are probably better ways to do this. >>>> >>>>> I am not asking sarcastically, but I've run into similar issue when >>>>> implementing similar thing in 2LC testsuite. >>>>> >>>>> Radim >>>>> >>>>> On 09/09/2015 03:22 PM, Galder Zamarreno wrote: >>>>>> I agree pretty much with everything below: >>>>>> >>>>>> * We overuse test overriding to run the same test with different >>>>>> configuration. I did that same mistake with the functional map API stuff >>>>>> :( >>>>>> >>>>>> * I'm in favour of testsuite restructuring, but I think we really need to >>>>>> start from scratch in a separate testsuite maven project, since we can >>>>>> then add all functional test for all (not only core...etc, but also >>>>>> compatibility tests...etc), and leave its project to test implementation >>>>>> details? Adding this separation would open up the path to create a testkit >>>>>> (as I explained last year in Berlin) >>>>>> >>>>>> * I'm also in favour in defining the test once and running it with >>>>>> different configuration options automatically. >>>>>> >>>>>> * I'm in favour too of randomising (need to check that link) but also we >>>>>> need some quickcheck style tests [1], e.g. a test that verifies that >>>>>> put(K, V) works not matter the type of object passed in. >>>>>> >>>>>> Cheers, >>>>>> >>>>>> [1] >>>>>> https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>> ----- Original Message ----- >>>>>>> Interesting subject. We also have many tests which (ab)use inheritance >>>>>>> to re-test the same API semantics in slightly different >>>>>>> configurations, like embedded/DIST and embedded/REPL, sometimes >>>>>>> becoming an @Override mess. >>>>>>> It would be far more useful to restructure the testsuite to have such >>>>>>> tests in a single class (no inheritance) and declare - maybe >>>>>>> annotations? - which permutations of configuration parameters should >>>>>>> be valid. 
>>>>>>> >>>>>>> Among those configuration permutations one would not have "just" >>>>>>> different replication models, but also things like >>>>>>> - using the same API remotely (Hot Rod) >>>>>>> - using the same feature but within a WildFly embedded module >>>>>>> - using the uber jars vs small jars >>>>>>> - uber jars & remote.. >>>>>>> - remote & embedded modules.. >>>>>>> - remote, uber jars, in OSGi.. >>>>>>> >>>>>>> And finally combine with other options: >>>>>>> - A Query test using: remote client, using uber jars, in OSGi, but >>>>>>> switching JTA implementation, using a new experimental JGroups stack! >>>>>>> >>>>>>> For example many Core API and Query tests are copy/pasted into other >>>>>>> modules as "integration tests", etc.. but we really should just run >>>>>>> the same one in a different environment. >>>>>>> >>>>>>> This would keep our code better maintainable, but also allow some neat >>>>>>> tricks like specify that some configurations should definitely be >>>>>>> tested in some test group (like Galder suggests, one could flag one of >>>>>>> these for "smoke tests", one for "nightly tests"), but you could also >>>>>>> want to flag some configuration settings as a "should work, low >>>>>>> priority for testing". >>>>>>> A smart testsuite could then use a randomizer to generate permutations >>>>>>> of configuration options for those low priority tests which are not >>>>>>> essential; there are great examples of such testsuites in the Haskell >>>>>>> world, and also Lucene and ElasticSearch do it. >>>>>>> A single random seed is used for the whole run, and it's printed >>>>>>> clearly at the start; a single seed will deterministically define all >>>>>>> parameters of the testsuite, so you can reproduce it all by setting a >>>>>>> specific seed when needing to debug a failure. >>>>>>> >>>>>>> http://blog.mikemccandless.com/2011/03/your-test-cases-should-sometimes-fail.html >>>>>>> >>>>>>> Thanks, >>>>>>> Sanne >>>>>>> >>>>>>> On 3 September 2015 at 11:34, Galder Zamarreno wrote: >>>>>>>> Another interesting improvement here would be if you could run all these >>>>>>>> smoke tests with an alternative implementation of AdvancedCache, e.g. one >>>>>>>> based with functional API. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> -- >>>>>>>> Galder Zamarre?o >>>>>>>> Infinispan, Red Hat >>>>>>>> >>>>>>>> ----- Original Message ----- >>>>>>>>> Good post Jiri, this got me thinking :) >>>>>>>>> >>>>>>>>> Running the entire testsuite again with uber jars would add a lot of >>>>>>>>> time >>>>>>>>> to >>>>>>>>> the build time. >>>>>>>>> >>>>>>>>> Maybe we should have a set of tests that must be executed for sure, e.g. >>>>>>>>> like >>>>>>>>> Wildfly's smoke tests [1]. We have "functional" group but right now it >>>>>>>>> covers pretty much all tests. >>>>>>>>> >>>>>>>>> Such tests should live in a separate testsuite, so that we could add the >>>>>>>>> essential tests for *all* components. In a way, we've already done some >>>>>>>>> of >>>>>>>>> this in integrationtests/ but it's not really well structured for this >>>>>>>>> aim. >>>>>>>>> >>>>>>>>> Also, if we would go down this path, something we should take advantage >>>>>>>>> of >>>>>>>>> (if possible with JUnit/TestNG) is what Gustavo did with the Spark tests >>>>>>>>> in >>>>>>>>> [2], where he used suites to make it faster to run things, by starting a >>>>>>>>> cache manager for distributed caches, running all distributed >>>>>>>>> tests...etc. 
>>>>>>>>> In a way, I think we can already do this with Arquillian Infinispan >>>>>>>>> integration, so Arquillian would probably well suited for such smoke >>>>>>>>> testsuite. >>>>>>>>> >>>>>>>>> Thoughts? >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> >>>>>>>>> [1] https://github.com/wildfly/wildfly#running-the-testsuite >>>>>>>>> [2] >>>>>>>>> https://github.com/infinispan/infinispan-spark/tree/master/src/test/scala/org/infinispan/spark >>>>>>>>> -- >>>>>>>>> Galder Zamarre?o >>>>>>>>> Infinispan, Red Hat >>>>>>>>> >>>>>>>>> ----- Original Message ----- >>>>>>>>>> Hi Jiri, comments inline. >>>>>>>>>> >>>>>>>>>> On 2.9.2015 10:40, Jiri Holusa wrote: >>>>>>>>>>> Hi all, >>>>>>>>>>> >>>>>>>>>>> we've been thinking for a while, how to test ISPN uber jars. The >>>>>>>>>>> current >>>>>>>>>>> status is that we actually don't have many tests in the testsuite, >>>>>>>>>>> there >>>>>>>>>>> are few tests in integrationtests/all-embedded-* modules that are >>>>>>>>>>> basically copies of the actual tests in corresponding modules. We >>>>>>>>>>> think >>>>>>>>>>> that this test coverage is not enough and more importantly, they are >>>>>>>>>>> duplicates. >>>>>>>>>>> >>>>>>>>>>> The questions are now following: >>>>>>>>>>> * which tests should be invoked with uber-jars? Whole ISPN testsuite? >>>>>>>>>>> Only >>>>>>>>>>> integrationtests module? >>>>>>>>>> The goal is to run the whole test suite because, as you said, we don't >>>>>>>>>> have enough tests in integrationtests/* And we can't duplicate all >>>>>>>>>> test classes from individual modules here. >>>>>>>>>> >>>>>>>>>>> * how would it run? Create Maven different profiles for "classic" jars >>>>>>>>>>> and >>>>>>>>>>> uber jars? Or try to use some Maven exclusion magic if even possible? >>>>>>>>>>> >>>>>>>>>>> Some time ago, we had discussion about this with Sebastian, who >>>>>>>>>>> suggested >>>>>>>>>>> that running only integrationtests module would be sufficient, because >>>>>>>>>>> uber-jars are really about packaging, not the functionality itself. >>>>>>>>>>> But I >>>>>>>>>>> don't know if the tests coverage is sufficient in that level, I would >>>>>>>>>>> be >>>>>>>>>>> much more confident if we could run the whole ISPN testsuite against >>>>>>>>>>> uber-jars. >>>>>>>>>> Right. Uber-jars are about packaging but you don't know that the >>>>>>>>>> packiging is right until you try all the features and see that >>>>>>>>>> everything works. There might be some classes missing (just for some >>>>>>>>>> particular features), same classes in different packages, the >>>>>>>>>> Manifest.mf might be corrupted and then something won't work in OSGi. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I'd prefer a separate Maven profile. IMO, exclusions are too >>>>>>>>>> error-prone. 
>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Martin >>>>>>>>>>> I'm opening this for wider discussion as we should agree on the way >>>>>>>>>>> how >>>>>>>>>>> to >>>>>>>>>>> do it, so we could do it right :) >>>>>>>>>>> >>>>>>>>>>> Cheers, >>>>>>>>>>> Jiri >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> infinispan-dev mailing list >>>>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss Performance Team >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Wed Sep 16 13:15:56 2015 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 16 Sep 2015 18:15:56 +0100 Subject: [infinispan-dev] Remove cache issues In-Reply-To: References: <55F6B3AD.2010709@infinispan.org> Message-ID: <55F9A3CC.2010708@infinispan.org> On 09/15/2015 01:46 PM, Dan Berindei wrote: > On Mon, Sep 14, 2015 at 2:46 PM, Pedro Ruivo wrote: >> Hi, >> >> I found the following issues with _EmbeddedCacheManager.removeCache()_ >> while I was helping the LEADS devs. The method removes the cache from >> all the nodes in the cluster. >> >> #1 It has different behaviour in the invoker node. >> >> In the invoked node, it removes the configuration from >> _configurationOverrides_ field and from _cacheDependencyGraph_. In the >> remaining node, it doesn't. >> >> To think: it should remove from _cacheDependencyGraph_ in all the nodes >> but keep the configuration. > > Galder added the _configurationOverrides_ removal for ISPN-3234, so > I'm guessing JCache needs it. You are right :) > > I guess the problem is that as long as the configuration exists, > manager.getCache(name) will re-create the cache, and that doesn't fit > with JCache. I'd rather remove the cache from _configurationOverrides_ > everywhere, at least until we stop auto-spawning caches. > >> >> #2 It tries to remove the cache remotely before locally. 
>> >> It could be done in parallel and it has a small issue: if a timeout >> occurs, it never tries to remove the cache locally. >> >> To think: can we send the request asynchronously? > > +1 to do it asynchronously Well, the JCache docs don't say anything about it but, to be safe, I'll leave it synchronous. > >> >> #3 When passivation is enabled, it first invokes >> _PassivationManager.passivateAll()_ and then _PersistenceManager.stop()_. >> >> The former will copy all the data in memory to the cache store and the >> latter will clear the cache store. We can skip the passivation. >> >> To think: create a _PassivationManager.skipPassivationOnStop()_ (similar >> to _PersistenceManager.setClearOnStop()_). > > Personally I was never really happy with removeCache() always purging > the store. Although it makes sense for temporary caches, it means > we're missing a way to just stop a cache on all the nodes. > > Maybe add a (mutable?) configuration option > AbstractStoreConfiguration.purgeOnStop that could be referenced by > both the PassivationManager and the PersistenceManager instead? It sounds weird to have a per-store purgeOnStop. The PassivationManager will invoke the write() in PersistenceManager and it doesn't know if it is stopping or not. I've put an option in PassivationManager (the same way as clearOnStop). > > Cheers > Dan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Mon Sep 21 10:37:19 2015 From: galder at redhat.com (Galder Zamarreno) Date: Mon, 21 Sep 2015 10:37:19 -0400 (EDT) Subject: [infinispan-dev] Weekly meeting on IRC Message-ID: <1996039233.34024716.1442846239009.JavaMail.zimbra@redhat.com> Hi all, Here's the transcript from the IRC meeting we had earlier today: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2015/infinispan.2015-09-21-14.02.html Cheers, -- Galder Zamarreño Infinispan, Red Hat From ttarrant at redhat.com Wed Sep 23 05:43:43 2015 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 23 Sep 2015 11:43:43 +0200 Subject: [infinispan-dev] Infinispan 8.1.0.Alpha1 Message-ID: <5602744F.4090403@redhat.com> Dear all, release early, release often ! The first Alpha release of Infinispan 8.1 is out. As is traditional, it is codenamed after a beer. This time it is "Mahou" ! The highlights for 8.1.0.Alpha1 are: ISPN-5781 - Upgrade server to WildFly 10.0.0.CR1 ISPN-5742 - Add global persistent state path configuration We're working on lots of cool things for 8.1 Final due at the end of November, so be sure to check our roadmap to see what's coming. Get it, learn how to use it, help us improve it. Enjoy ! http://infinispan.org The Infinispan team -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From sanne at infinispan.org Wed Sep 30 18:16:33 2015 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 1 Oct 2015 00:16:33 +0200 Subject: [infinispan-dev] Lambdas & Batching Message-ID: A local cache with batching enabled produces this: java.lang.IllegalArgumentException: Cannot create a transactional context without a valid Transaction instance.
at org.infinispan.context.TransactionalInvocationContextFactory.createInvocationContext(TransactionalInvocationContextFactory.java:69)
at org.infinispan.context.TransactionalInvocationContextFactory.createInvocationContext(TransactionalInvocationContextFactory.java:63)
at org.infinispan.functional.impl.ReadWriteMapImpl.eval(ReadWriteMapImpl.java:56)
at org.infinispan.lucene.impl.FileListOperations.addFileName(FileListOperations.java:60) (<-- experimental uncommitted code here)
I'm guessing the eval implementation needs the "auto-transaction-start" semantics which we normally have for other operations in a batching cache... right? But I wonder about the usefulness of having a short-lived batching context when all I'm doing is sending a lambda to a specific entry: wouldn't it be even better to treat this as a no-context operation? Thanks, Sanne
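P.S. for reference, this is roughly the pattern that triggers it, reduced to a minimum - written from memory, so treat the package names and setup as approximate:

import org.infinispan.commons.api.functional.FunctionalMap.ReadWriteMap;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.functional.impl.FunctionalMapImpl;
import org.infinispan.functional.impl.ReadWriteMapImpl;
import org.infinispan.manager.DefaultCacheManager;

public class BatchingEvalRepro {
   public static void main(String[] args) {
      // Local cache with invocation batching enabled, which makes it transactional
      ConfigurationBuilder cfg = new ConfigurationBuilder();
      cfg.invocationBatching().enable();
      DefaultCacheManager cm = new DefaultCacheManager();
      cm.defineConfiguration("batching", cfg.build());

      ReadWriteMap<String, String> rw = ReadWriteMapImpl.create(
            FunctionalMapImpl.create(cm.<String, String>getCache("batching").getAdvancedCache()));

      // No transaction or batch is in flight here, so eval() blows up as above
      rw.eval("fileList", view -> { view.set("segments_1"); return null; }).join();
   }
}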