From rvansa at redhat.com Fri Sep 2 05:45:29 2016 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 2 Sep 2016 11:45:29 +0200 Subject: [infinispan-dev] Unwrapping exceptions In-Reply-To: <57C58075.3060303@redhat.com> References: <57C45A84.9060801@redhat.com> <57C468B4.1070000@redhat.com> <57C58075.3060303@redhat.com> Message-ID: <57C94A39.3000103@redhat.com> On 08/30/2016 02:47 PM, Radim Vansa wrote: > On 08/30/2016 02:16 PM, Dan Berindei wrote: >> On Tue, Aug 30, 2016 at 1:11 PM, Sanne Grinovero wrote: >>> Yes please make sure that the kind of end users exceptions are the >>> same for the client, regardless if the originator happens to be an >>> owner as well. >>> >>> It's valuable to know that the exception happened on another node, but >>> the exception type (and primary message) should be the same. >>> >> Is it really a problem if the local exceptions are wrapped in >> CacheException, and the remote ones are wrapped in RemoteException? >> RemoteException is a subtype of CacheException, so `catch >> (CacheException e)` works with all of them. >> >> Future.get() wraps all exceptions in ExecutionException, >> CompletableFuture.join() wraps all exceptions in CompletionException. >> So we'd be an outlier if we *didn't* wrap user exceptions. > Okay, the exceptions can be wrapped in one (and always exactly one) > level of CacheException. Not as convenient for filtering (try-catch > block vs. instanceofs on e.getCause()), but makes (enough) sense. I'll > adapt the PR. Hmm, it's not possible to use only one remote exception - if the command fails on backup (before failing on primary), we should keep the hierarchy as RemoteException (from primary) caused by RemoteException (from backup) caused by (actual failure). Radim >>> Bonus points to not have exceptions at all :) >> Error codes FTW? ;) >> >>> In Elasticsearch they developed a new scripting language for a use >>> case similar to our "lambda execution" which basically restricts in >>> the language itself what is safe to do vs what you can't do. >>> I'm not sure about developing a new language but from this point of >>> view it's brilliant.. >>> >> I think you impose the same kind of restrictions at the bytecode >> level, with something like JaQue [1]. Still, considering that you also >> need a way to ship the lambda to the server, a DSL doesn't sound too >> bad. >> >> [1]: https://github.com/TrigerSoft/jaque >> >>> On 29 August 2016 at 17:54, Radim Vansa wrote: >>>> The intention was not to protect the user from knowing where the code >>>> was executed, but rather simplify exception handling when he wants to >>>> handle different exceptions from his code (though, throwing exception on >>>> remote node is not too efficient). And the argument was that he does not >>>> *need* to know it. >>>> >> But does the user really need to know that it was an exception in >> their lambda vs an exception in Infinispan itself? Most of the time, >> there's nothing you can do about it anyway... >> >>>> As for the debugging aid, it could make sense to add the remote stack >>>> trace to suppressed exceptions, though I don't think that it will be of >>>> any use to him. >>>> >> If we throw the exact exception that the lambda raised on the remote >> node, the user is going to see *only* the remote stack trace. >> >> TBH we have the same problem with RemoteExceptions now: instead of >> having a stack trace pointing to the user code calling into >> Infinispan, our stack trace points to the Infinispan code handling the >> response. 
But at least we have a chance to "fix" the stack trace of >> the wrapper exception in AsyncInterceptorChainImpl.invoke so that it >> has the caller's stack trace instead. If we don't have a wrapper, >> that's no longer possible. >> >> Cheers >> Dan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Radim Vansa JBoss Performance Team From rory.odonnell at oracle.com Mon Sep 5 13:39:11 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 5 Sep 2016 18:39:11 +0100 Subject: [infinispan-dev] Early Access builds of JDK 9 b134 are available on java.net Message-ID: Hi Galder, Early Access b134 for JDK 9 is available on java.net; a summary of changes is listed here. There have been a number of fixes, since the last availability email, to bugs reported by Open Source projects: * 8156841 sun.security.pkcs11.SunPKCS11 poller thread retains a strong reference to the context class loader * 8146961 Fix PermGen memory leaks caused by static final Exceptions * 8163353 NPE in ConcurrentHashMap.removeAll() * 8160328 ClassCastException: sun.awt.image.BufImgSurfaceData cannot be cast to sun.java2d.xr.XRSurfaceData after xrandr change output Secondly, there are a number of interesting items to bring to our attention: * JDK 9 Rampdown Phase 1: Process proposal [1] * The Java team has published the "Oracle JRE and JDK Cryptographic Roadmap" [2] java.com/cryptoroadmap * The Quality Report for September 2016 is now available [3], thank you for your continued support! Highlights from the Quality Report for September: * 21 new Open Source projects have joined the Outreach program * Projects filed 35 new issues in the JDK Bug System, this is almost double the number of bugs in the previous six months! * Continuing to provide excellent feedback via the OpenJDK dev mailing lists Thank you! Rgds, Rory [1] http://mail.openjdk.java.net/pipermail/jdk9-dev/2016-August/004777.html [2] java.com/cryptoroadmap [3] https://wiki.openjdk.java.net/display/quality/Quality+Outreach+Report+September+2016 -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160905/86c9b5db/attachment.html From bban at redhat.com Tue Sep 6 11:14:48 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 6 Sep 2016 17:14:48 +0200 Subject: [infinispan-dev] Doing my part to shed weight... Message-ID: <57CEDD68.4070802@redhat.com> I'm currently training for a half-marathon in November and need to lose some weight, so I thought my baby (JGroups) might also benefit from losing a few pounds, so here goes... :-) Trying to reduce the number of threads and thread pools created: [1], [2], [3]. This will all be in 4.0. Cheers, [1] https://issues.jboss.org/browse/JGRP-2047 [2] https://issues.jboss.org/browse/JGRP-2099 [3] https://issues.jboss.org/browse/JGRP-2100 -- Bela Ban, JGroups lead (http://www.jgroups.org) From afield at redhat.com Tue Sep 6 11:36:30 2016 From: afield at redhat.com (Alan Field) Date: Tue, 6 Sep 2016 11:36:30 -0400 (EDT) Subject: [infinispan-dev] Doing my part to shed weight...
In-Reply-To: <57CEDD68.4070802@redhat.com> References: <57CEDD68.4070802@redhat.com> Message-ID: <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Bela Ban" > To: infinispan-dev at lists.jboss.org > Sent: Tuesday, September 6, 2016 11:14:48 AM > Subject: [infinispan-dev] Doing my part to shed weight... > > I'm currently training for a half-marathon in November and need to lose > some weight, so I thought my baby (JGroups) might also benefit from > losing a few pounds, so here goes... :-) Does this mean fewer steaks and more tofu?! :-) > Trying to reduce the number of threads and thread pools created: [1], > [2], [3]. > > This will all be in 4.0. > Cheers, > > [1] https://issues.jboss.org/browse/JGRP-2047 > [2] https://issues.jboss.org/browse/JGRP-2099 > [3] https://issues.jboss.org/browse/JGRP-2100 > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Tue Sep 6 12:06:44 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 6 Sep 2016 18:06:44 +0200 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> Message-ID: <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> On 06/09/16 17:36, Alan Field wrote: > > ----- Original Message ----- >> From: "Bela Ban" >> To: infinispan-dev at lists.jboss.org >> Sent: Tuesday, September 6, 2016 11:14:48 AM >> Subject: [infinispan-dev] Doing my part to shed weight... >> >> I'm currently training for a half-marathon in November and need to lose >> some weight, so I thought my baby (JGroups) might also benefit from >> losing a few pounds, so here goes... :-) > Does this mean fewer steaks and more tofu?! :-) Steaks are fine, it's the Mai Tai's which will need cutting down. Tristan From bban at redhat.com Tue Sep 6 12:08:10 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 6 Sep 2016 18:08:10 +0200 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> Message-ID: <57CEE9EA.8010800@redhat.com> On 06/09/16 18:06, Tristan Tarrant wrote: > On 06/09/16 17:36, Alan Field wrote: >> >> ----- Original Message ----- >>> From: "Bela Ban" >>> To: infinispan-dev at lists.jboss.org >>> Sent: Tuesday, September 6, 2016 11:14:48 AM >>> Subject: [infinispan-dev] Doing my part to shed weight... >>> >>> I'm currently training for a half-marathon in November and need to lose >>> some weight, so I thought my baby (JGroups) might also benefit from >>> losing a few pounds, so here goes... :-) >> Does this mean fewer steaks and more tofu?! :-) > Steaks are fine, it's the Mai Tai's which will need cutting down. and the beers to flush down the mai tais... :-) > Tristan -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Tue Sep 6 13:34:23 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 6 Sep 2016 18:34:23 +0100 Subject: [infinispan-dev] Doing my part to shed weight... 
In-Reply-To: <57CEE9EA.8010800@redhat.com> References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> <57CEE9EA.8010800@redhat.com> Message-ID: Ah, from the premises I thought that you'd make the JGroups jar lose weight by removing some junk files from it.. :P Like the Maven settings configuration file ? On 6 Sep 2016 17:09, "Bela Ban" wrote: > > > On 06/09/16 18:06, Tristan Tarrant wrote: > > On 06/09/16 17:36, Alan Field wrote: > >> > >> ----- Original Message ----- > >>> From: "Bela Ban" > >>> To: infinispan-dev at lists.jboss.org > >>> Sent: Tuesday, September 6, 2016 11:14:48 AM > >>> Subject: [infinispan-dev] Doing my part to shed weight... > >>> > >>> I'm currently training for a half-marathon in November and need to lose > >>> some weight, so I thought my baby (JGroups) might also benefit from > >>> losing a few pounds, so here goes... :-) > >> Does this mean fewer steaks and more tofu?! :-) > > Steaks are fine, it's the Mai Tai's which will need cutting down. > > and the beers to flush down the mai tais... :-) > > > > Tristan > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160906/ff365595/attachment.html From bban at redhat.com Wed Sep 7 02:56:07 2016 From: bban at redhat.com (Bela Ban) Date: Wed, 7 Sep 2016 08:56:07 +0200 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> <57CEE9EA.8010800@redhat.com> Message-ID: <57CFBA07.5090906@redhat.com> You mean pom.xml? :-) On 06/09/16 19:34, Sanne Grinovero wrote: > Ah, from the premises I thought that you'd make the JGroups jar lose > weight by removing some junk files from it.. :P Like the Maven settings > configuration file ? > > > On 6 Sep 2016 17:09, "Bela Ban" > wrote: > > > > On 06/09/16 18:06, Tristan Tarrant wrote: > > On 06/09/16 17:36, Alan Field wrote: > >> > >> ----- Original Message ----- > >>> From: "Bela Ban" > > >>> To: infinispan-dev at lists.jboss.org > > >>> Sent: Tuesday, September 6, 2016 11:14:48 AM > >>> Subject: [infinispan-dev] Doing my part to shed weight... > >>> > >>> I'm currently training for a half-marathon in November and need > to lose > >>> some weight, so I thought my baby (JGroups) might also benefit from > >>> losing a few pounds, so here goes... :-) > >> Does this mean fewer steaks and more tofu?! :-) > > Steaks are fine, it's the Mai Tai's which will need cutting down. > > and the beers to flush down the mai tais... 
:-) > > > > Tristan > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Wed Sep 7 12:45:07 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 7 Sep 2016 17:45:07 +0100 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: <57CFBA07.5090906@redhat.com> References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> <57CEE9EA.8010800@redhat.com> <57CFBA07.5090906@redhat.com> Message-ID: On 7 September 2016 at 07:56, Bela Ban wrote: > You mean pom.xml? :-) I meant the "settings.xml" which contains a - quoting your file - "Example of Maven settings.xml file" and is bundled in JGroups 3.6.10.Final (not sure about other versions). But it's just an example, I suspect you could also remove other files like the "INSTALL.html", README, LICENSE, and the various example configuration files, especially as they might all clash with user configuration files. > > On 06/09/16 19:34, Sanne Grinovero wrote: >> Ah, from the premises I thought that you'd make the JGroups jar lose >> weight by removing some junk files from it.. :P Like the Maven settings >> configuration file ? >> >> >> On 6 Sep 2016 17:09, "Bela Ban" > > wrote: >> >> >> >> On 06/09/16 18:06, Tristan Tarrant wrote: >> > On 06/09/16 17:36, Alan Field wrote: >> >> >> >> ----- Original Message ----- >> >>> From: "Bela Ban" > >> >>> To: infinispan-dev at lists.jboss.org >> >> >>> Sent: Tuesday, September 6, 2016 11:14:48 AM >> >>> Subject: [infinispan-dev] Doing my part to shed weight... >> >>> >> >>> I'm currently training for a half-marathon in November and need >> to lose >> >>> some weight, so I thought my baby (JGroups) might also benefit from >> >>> losing a few pounds, so here goes... :-) >> >> Does this mean fewer steaks and more tofu?! :-) >> > Steaks are fine, it's the Mai Tai's which will need cutting down. >> >> and the beers to flush down the mai tais... :-) >> >> >> > Tristan >> >> -- >> Bela Ban, JGroups lead (http://www.jgroups.org) >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Thu Sep 8 02:37:43 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 8 Sep 2016 08:37:43 +0200 Subject: [infinispan-dev] Combining AS modules Message-ID: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Hi all, we currently distribute two separate packages for WildFly modules: embedded and client. Unfortunately we only list the former on the download page. My proposal is to combine the two packages into one. 
WDYT ? Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From gustavo at infinispan.org Thu Sep 8 02:59:45 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 8 Sep 2016 07:59:45 +0100 Subject: [infinispan-dev] Combining AS modules In-Reply-To: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: +1 On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant wrote: > Hi all, > > we currently distribute two separate packages for WildFly modules: > embedded and client. Unfortunately we only list the former on the > download page. > My proposal is to combine the two packages into one. WDYT ? > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160908/e15f5dc9/attachment-0001.html From gustavo at infinispan.org Thu Sep 8 03:08:26 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 8 Sep 2016 08:08:26 +0100 Subject: [infinispan-dev] Combining AS modules In-Reply-To: References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: Actually, looking at the current modules zip listed in the website [1], I can see that protostream and hotrod-client modules are are there, together with embedded modules (core, lucene-directoty, etc), isn't that correct? What is in the other unlisted "client" module? [1] http://downloads.jboss.org/infinispan/9.0.0.Alpha4/infinispan-as-embedded-modules-9.0.0.Alpha4.zip On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes wrote: > +1 > > > On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant > wrote: > >> Hi all, >> >> we currently distribute two separate packages for WildFly modules: >> embedded and client. Unfortunately we only list the former on the >> download page. >> My proposal is to combine the two packages into one. WDYT ? >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160908/66bac1c8/attachment.html From slaskawi at redhat.com Thu Sep 8 03:17:03 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 8 Sep 2016 09:17:03 +0200 Subject: [infinispan-dev] Combining AS modules In-Reply-To: References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: Technically I'm all for it. The question that I have is how many users/clients do we have who wants to use only RemoteCacheManager? Thanks Sebastian On Thu, Sep 8, 2016 at 9:08 AM, Gustavo Fernandes wrote: > Actually, looking at the current modules zip listed in the website [1], I > can see that protostream and hotrod-client modules are are there, > together with embedded modules (core, lucene-directoty, etc), isn't that > correct? What is in the other unlisted "client" module? 
> > > [1] http://downloads.jboss.org/infinispan/9.0.0.Alpha4/ > infinispan-as-embedded-modules-9.0.0.Alpha4.zip > > On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes > wrote: > >> +1 >> >> >> On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant >> wrote: >> >>> Hi all, >>> >>> we currently distribute two separate packages for WildFly modules: >>> embedded and client. Unfortunately we only list the former on the >>> download page. >>> My proposal is to combine the two packages into one. WDYT ? >>> >>> Tristan >>> -- >>> Tristan Tarrant >>> Infinispan Lead >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160908/1cca3c1e/attachment.html From bban at redhat.com Thu Sep 8 04:12:49 2016 From: bban at redhat.com (Bela Ban) Date: Thu, 8 Sep 2016 10:12:49 +0200 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> <57CEE9EA.8010800@redhat.com> <57CFBA07.5090906@redhat.com> Message-ID: <57D11D81.4060904@redhat.com> On 07/09/16 18:45, Sanne Grinovero wrote: > On 7 September 2016 at 07:56, Bela Ban wrote: >> You mean pom.xml? :-) > > I meant the "settings.xml" which contains a - quoting your file - > "Example of Maven settings.xml file" and is bundled in JGroups > 3.6.10.Final (not sure about other versions). Done (in 4.0) > But it's just an example, I suspect you could also remove other files > like the "INSTALL.html", This was removed in 4.0 anyway README, LICENSE, Done > and the various example files, especially as they might all clash with user > configuration files. I don't want to remove sample configs; as I usually suggest people copy them from the JAR rename them and make their modifications. >> >> On 06/09/16 19:34, Sanne Grinovero wrote: >>> Ah, from the premises I thought that you'd make the JGroups jar lose >>> weight by removing some junk files from it.. :P Like the Maven settings >>> configuration file ? >>> >>> >>> On 6 Sep 2016 17:09, "Bela Ban" >> > wrote: >>> >>> >>> >>> On 06/09/16 18:06, Tristan Tarrant wrote: >>> > On 06/09/16 17:36, Alan Field wrote: >>> >> >>> >> ----- Original Message ----- >>> >>> From: "Bela Ban" > >>> >>> To: infinispan-dev at lists.jboss.org >>> >>> >>> Sent: Tuesday, September 6, 2016 11:14:48 AM >>> >>> Subject: [infinispan-dev] Doing my part to shed weight... >>> >>> >>> >>> I'm currently training for a half-marathon in November and need >>> to lose >>> >>> some weight, so I thought my baby (JGroups) might also benefit from >>> >>> losing a few pounds, so here goes... :-) >>> >> Does this mean fewer steaks and more tofu?! :-) >>> > Steaks are fine, it's the Mai Tai's which will need cutting down. >>> >>> and the beers to flush down the mai tais... 
:-) >>> >>> >>> > Tristan >>> >>> -- >>> Bela Ban, JGroups lead (http://www.jgroups.org) >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> -- >> Bela Ban, JGroups lead (http://www.jgroups.org) >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From sanne at infinispan.org Thu Sep 8 04:39:49 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 8 Sep 2016 09:39:49 +0100 Subject: [infinispan-dev] Doing my part to shed weight... In-Reply-To: <57D11D81.4060904@redhat.com> References: <57CEDD68.4070802@redhat.com> <657621503.9435222.1473176190583.JavaMail.zimbra@redhat.com> <6a383282-78f5-86f8-81d5-00d6fed13b70@infinispan.org> <57CEE9EA.8010800@redhat.com> <57CFBA07.5090906@redhat.com> <57D11D81.4060904@redhat.com> Message-ID: On 8 September 2016 at 09:12, Bela Ban wrote: > > > On 07/09/16 18:45, Sanne Grinovero wrote: >> On 7 September 2016 at 07:56, Bela Ban wrote: >>> You mean pom.xml? :-) >> >> I meant the "settings.xml" which contains a - quoting your file - >> "Example of Maven settings.xml file" and is bundled in JGroups >> 3.6.10.Final (not sure about other versions). > > Done (in 4.0) Thanks! >> But it's just an example, I suspect you could also remove other files >> like the "INSTALL.html", > > This was removed in 4.0 anyway > > README, LICENSE, > > Done > >> and the various example files, especially as they might all clash with user >> configuration files. > > I don't want to remove sample configs; as I usually suggest people copy > them from the JAR rename them and make their modifications. > >>> >>> On 06/09/16 19:34, Sanne Grinovero wrote: >>>> Ah, from the premises I thought that you'd make the JGroups jar lose >>>> weight by removing some junk files from it.. :P Like the Maven settings >>>> configuration file ? >>>> >>>> >>>> On 6 Sep 2016 17:09, "Bela Ban" >>> > wrote: >>>> >>>> >>>> >>>> On 06/09/16 18:06, Tristan Tarrant wrote: >>>> > On 06/09/16 17:36, Alan Field wrote: >>>> >> >>>> >> ----- Original Message ----- >>>> >>> From: "Bela Ban" > >>>> >>> To: infinispan-dev at lists.jboss.org >>>> >>>> >>> Sent: Tuesday, September 6, 2016 11:14:48 AM >>>> >>> Subject: [infinispan-dev] Doing my part to shed weight... >>>> >>> >>>> >>> I'm currently training for a half-marathon in November and need >>>> to lose >>>> >>> some weight, so I thought my baby (JGroups) might also benefit from >>>> >>> losing a few pounds, so here goes... :-) >>>> >> Does this mean fewer steaks and more tofu?! :-) >>>> > Steaks are fine, it's the Mai Tai's which will need cutting down. >>>> >>>> and the beers to flush down the mai tais... 
:-) >>>> >>>> >>>> > Tristan >>>> >>>> -- >>>> Bela Ban, JGroups lead (http://www.jgroups.org) >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> -- >>> Bela Ban, JGroups lead (http://www.jgroups.org) >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Thu Sep 8 07:22:32 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 8 Sep 2016 12:22:32 +0100 Subject: [infinispan-dev] Combining AS modules In-Reply-To: References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: +1 But please also add a couple of "user facing" modules which expose all the necessary components as a single unit. For my use case in Hibernate OGM I needed to list all of: (figured this list out by trial & error) Would be nice to have a single module ID for "remote client" and a single module ID for "embedded usage", document these clearly, and then mark the other modules as private API. Thanks, Sanne On 8 September 2016 at 08:17, Sebastian Laskawiec wrote: > Technically I'm all for it. > > The question that I have is how many users/clients do we have who wants to > use only RemoteCacheManager? > > Thanks > Sebastian > > On Thu, Sep 8, 2016 at 9:08 AM, Gustavo Fernandes > wrote: >> >> Actually, looking at the current modules zip listed in the website [1], I >> can see that protostream and hotrod-client modules are are there, >> together with embedded modules (core, lucene-directoty, etc), isn't that >> correct? What is in the other unlisted "client" module? >> >> >> [1] >> http://downloads.jboss.org/infinispan/9.0.0.Alpha4/infinispan-as-embedded-modules-9.0.0.Alpha4.zip >> >> On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes >> wrote: >>> >>> +1 >>> >>> >>> On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant >>> wrote: >>>> >>>> Hi all, >>>> >>>> we currently distribute two separate packages for WildFly modules: >>>> embedded and client. Unfortunately we only list the former on the >>>> download page. >>>> My proposal is to combine the two packages into one. WDYT ? 
>>>> >>>> Tristan >>>> -- >>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Thu Sep 8 07:58:53 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 8 Sep 2016 12:58:53 +0100 Subject: [infinispan-dev] Combining AS modules In-Reply-To: References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: I actually just finished debugging some LinkageError issues and discovered that the embedded zip contains a full copy of the remote-client libraries as well, with at least one difference (in how River is being exported). So currently be aware that if you unzip both the embedded modules and the remove modules, Infinispan (in either mode) fails to start because of Linkage issues. Opened: - https://issues.jboss.org/browse/ISPN-7006 On 8 September 2016 at 12:22, Sanne Grinovero wrote: > +1 > > But please also add a couple of "user facing" modules which expose all > the necessary components as a single unit. > > For my use case in Hibernate OGM I needed to list all of: > > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > > (figured this list out by trial & error) > > Would be nice to have a single module ID for "remote client" and a > single module ID for "embedded usage", document these clearly, and > then mark the other modules as private API. > > > > > > Thanks, > Sanne > > > On 8 September 2016 at 08:17, Sebastian Laskawiec wrote: >> Technically I'm all for it. >> >> The question that I have is how many users/clients do we have who wants to >> use only RemoteCacheManager? >> >> Thanks >> Sebastian >> >> On Thu, Sep 8, 2016 at 9:08 AM, Gustavo Fernandes >> wrote: >>> >>> Actually, looking at the current modules zip listed in the website [1], I >>> can see that protostream and hotrod-client modules are are there, >>> together with embedded modules (core, lucene-directoty, etc), isn't that >>> correct? What is in the other unlisted "client" module? >>> >>> >>> [1] >>> http://downloads.jboss.org/infinispan/9.0.0.Alpha4/infinispan-as-embedded-modules-9.0.0.Alpha4.zip >>> >>> On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes >>> wrote: >>>> >>>> +1 >>>> >>>> >>>> On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant >>>> wrote: >>>>> >>>>> Hi all, >>>>> >>>>> we currently distribute two separate packages for WildFly modules: >>>>> embedded and client. Unfortunately we only list the former on the >>>>> download page. >>>>> My proposal is to combine the two packages into one. WDYT ? 
>>>>> >>>>> Tristan >>>>> -- >>>>> Tristan Tarrant >>>>> Infinispan Lead >>>>> JBoss, a division of Red Hat >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Thu Sep 8 08:06:55 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 8 Sep 2016 14:06:55 +0200 Subject: [infinispan-dev] Combining AS modules In-Reply-To: References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> Message-ID: <3100a336-3b96-36ca-970a-3a39ae67c195@infinispan.org> Ok, I've checked: the as-modules/client module doesn't really have any reason to exist, since the as-modules/embedded one contains everything already (and is misnamed). Also the problem about the single module is only on the remote side, which doesn't re-export any of the APIs. The embedded "org.infinispan.main:x.y" module does it correctly, but I don't like the name (as it create confusion with the usual "main" slot"). My proposal: - remove as-modules/client - move as-modules/embedded to become as-modules - for symmetry with the uberjars, have an org.infinispan.embedded and org.infinispan.remote modules which re-export the appropriate APIs. https://issues.jboss.org/browse/ISPN-7007 Tristan On 08/09/16 13:22, Sanne Grinovero wrote: > +1 > > But please also add a couple of "user facing" modules which expose all > the necessary components as a single unit. > > For my use case in Hibernate OGM I needed to list all of: > > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > slot="${infinispan.module.slot}" /> > > (figured this list out by trial & error) > > Would be nice to have a single module ID for "remote client" and a > single module ID for "embedded usage", document these clearly, and > then mark the other modules as private API. > > > > > > Thanks, > Sanne > > > On 8 September 2016 at 08:17, Sebastian Laskawiec wrote: >> Technically I'm all for it. >> >> The question that I have is how many users/clients do we have who wants to >> use only RemoteCacheManager? >> >> Thanks >> Sebastian >> >> On Thu, Sep 8, 2016 at 9:08 AM, Gustavo Fernandes >> wrote: >>> Actually, looking at the current modules zip listed in the website [1], I >>> can see that protostream and hotrod-client modules are are there, >>> together with embedded modules (core, lucene-directoty, etc), isn't that >>> correct? What is in the other unlisted "client" module? >>> >>> >>> [1] >>> http://downloads.jboss.org/infinispan/9.0.0.Alpha4/infinispan-as-embedded-modules-9.0.0.Alpha4.zip >>> >>> On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes >>> wrote: >>>> +1 >>>> >>>> >>>> On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant >>>> wrote: >>>>> Hi all, >>>>> >>>>> we currently distribute two separate packages for WildFly modules: >>>>> embedded and client. Unfortunately we only list the former on the >>>>> download page. >>>>> My proposal is to combine the two packages into one. WDYT ? 
>>>>> >>>>> Tristan >>>>> -- >>>>> Tristan Tarrant >>>>> Infinispan Lead >>>>> JBoss, a division of Red Hat >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From wfink at redhat.com Thu Sep 8 09:15:25 2016 From: wfink at redhat.com (Wolf Fink) Date: Thu, 8 Sep 2016 15:15:25 +0200 Subject: [infinispan-dev] Combining AS modules In-Reply-To: <3100a336-3b96-36ca-970a-3a39ae67c195@infinispan.org> References: <284b0e77-befb-d4fb-cc1d-906751b29cd5@redhat.com> <3100a336-3b96-36ca-970a-3a39ae67c195@infinispan.org> Message-ID: Would that take affect for JDG as well in a future release? On Thu, Sep 8, 2016 at 2:06 PM, Tristan Tarrant wrote: > Ok, I've checked: > > the as-modules/client module doesn't really have any reason to exist, > since the as-modules/embedded one contains everything already (and is > misnamed). > Also the problem about the single module is only on the remote side, > which doesn't re-export any of the APIs. The embedded > "org.infinispan.main:x.y" module does it correctly, but I don't like the > name (as it create confusion with the usual "main" slot"). > > My proposal: > - remove as-modules/client > - move as-modules/embedded to become as-modules > - for symmetry with the uberjars, have an org.infinispan.embedded and > org.infinispan.remote modules which re-export the appropriate APIs. > > https://issues.jboss.org/browse/ISPN-7007 > > Tristan > > > > On 08/09/16 13:22, Sanne Grinovero wrote: > > +1 > > > > But please also add a couple of "user facing" modules which expose all > > the necessary components as a single unit. > > > > For my use case in Hibernate OGM I needed to list all of: > > > > > slot="${infinispan.module.slot}" /> > > > slot="${infinispan.module.slot}" /> > > > slot="${infinispan.module.slot}" /> > > > slot="${infinispan.module.slot}" /> > > > slot="${infinispan.module.slot}" /> > > > > (figured this list out by trial & error) > > > > Would be nice to have a single module ID for "remote client" and a > > single module ID for "embedded usage", document these clearly, and > > then mark the other modules as private API. > > > > > > > > > > > > Thanks, > > Sanne > > > > > > On 8 September 2016 at 08:17, Sebastian Laskawiec > wrote: > >> Technically I'm all for it. > >> > >> The question that I have is how many users/clients do we have who wants > to > >> use only RemoteCacheManager? > >> > >> Thanks > >> Sebastian > >> > >> On Thu, Sep 8, 2016 at 9:08 AM, Gustavo Fernandes < > gustavo at infinispan.org> > >> wrote: > >>> Actually, looking at the current modules zip listed in the website > [1], I > >>> can see that protostream and hotrod-client modules are are there, > >>> together with embedded modules (core, lucene-directoty, etc), isn't > that > >>> correct? What is in the other unlisted "client" module? 
> >>> > >>> > >>> [1] > >>> http://downloads.jboss.org/infinispan/9.0.0.Alpha4/ > infinispan-as-embedded-modules-9.0.0.Alpha4.zip > >>> > >>> On Thu, Sep 8, 2016 at 7:59 AM, Gustavo Fernandes < > gustavo at infinispan.org> > >>> wrote: > >>>> +1 > >>>> > >>>> > >>>> On Thu, Sep 8, 2016 at 7:37 AM, Tristan Tarrant > >>>> wrote: > >>>>> Hi all, > >>>>> > >>>>> we currently distribute two separate packages for WildFly modules: > >>>>> embedded and client. Unfortunately we only list the former on the > >>>>> download page. > >>>>> My proposal is to combine the two packages into one. WDYT ? > >>>>> > >>>>> Tristan > >>>>> -- > >>>>> Tristan Tarrant > >>>>> Infinispan Lead > >>>>> JBoss, a division of Red Hat > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160908/3061ec1d/attachment.html From slaskawi at redhat.com Mon Sep 12 02:57:54 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 12 Sep 2016 08:57:54 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: After investigating ALPN [1] and HTTP/2 [2] support I revisited this feature to see how everything fits together. Just as a reminder - the idea behind multi-tenant router is to implement a component which will have references to all deployed Hot Rod and REST servers (Memcached and WebSockets are out of the scope at this point) [3] and will be able to forward requests to proper instance. Since we'd like to create an ALPN-based, polyglot client at some point, I believe the router concept should be a little bit more generic. It should be able to use SNI for routing as well as negotiate the protocol using ALPN or even switch to different protocol using HTTP 1.1/Upgrade header. Having this in mind, I would like to rebase multi-tenancy feature and slightly modify router endpoint configuration to something like this: With this configuration, the router should be really flexible and extendable. If there will be no negative comments, I'll start working on that tomorrow. Thanks Sebastian [1] https://issues.jboss.org/browse/ISPN-6899 [2] https://issues.jboss.org/browse/ISPN-6676 [3] https://github.com/infinispan/infinispan/wiki/Multi-tenancy-for-Hotrod-Server On Mon, Jul 18, 2016 at 9:14 AM, Sebastian Laskawiec wrote: > Hey! > > Dan pointed out a very interesting thing [1] - we could use host header > for multi-tenant REST endpoints. 
Although I really like the idea (this > header was introduced to support this kind of use cases), it might be a bit > problematic from security point of view (if someone forgets to set it, > he'll be talking to someone else Cache Container). > > What do you think about this? Should we implement this (now or later)? > > I vote for yes and implement it in 9.1 (or 9.0 if there is enough time). > > Thanks > Sebastian > > On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec > wrote: > >> Hey! >> >> The multi-tenancy support for Hot Rod and REST has been implemented [2]. >> Since the PR is gigantic, I marked some interesting places for review so >> you might want to skip boilerplate parts. >> >> The Memcached and WebSockets implementations are currently out of scope. >> If you would like us to implement them, please vote on the following >> tickets: >> >> - Memcached https://issues.jboss.org/browse/ISPN-6639 >> - Web Sockets https://issues.jboss.org/browse/ISPN-6638 >> >> Thanks >> Sebastian >> >> [2] https://github.com/infinispan/infinispan/pull/4348 >> >> On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec > > wrote: >> >>> Hey Galder! >>> >>> Comments inlined. >>> >>> Thanks >>> Sebastian >>> >>> On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o >>> wrote: >>> >>>> Hi all, >>>> >>>> Sorry for the delay getting back on this. >>>> >>>> The addition of a new component does not worry me so much. It has the >>>> advantage of implementing it once independent of the backend endpoint, >>>> whether HR or Rest. >>>> >>>> What I'm struggling to understand is what protocol the clients will use >>>> to talk to the router. It seems wasteful having to build two protocols at >>>> this level, e.g. one at TCP level and one at REST level. If you're going to >>>> end up building two protocols, the benefit of the router component >>>> dissapears and then you might as well embedded the two routing protocols >>>> within REST and HR directly. >>>> >>> >>> I think I wasn't clear enough in the design how the routing works... >>> >>> In your scenario - both servers (hotrod and rest) will start >>> EmbeddedCacheManagers internally but none of them will start Netty >>> transport. The only transport that will be turned on is the router. The >>> router will be responsible for recognizing the request type (if HTTP - find >>> proper REST server, if HotRod protocol - find proper HotRod) and attaching >>> handlers at the end of the pipeline. >>> >>> Regarding to custom protocol (this usecase could be used with Hotrod >>> clients which do not use SSL (so SNI routing is not possible)), you and >>> Tristan got me thinking whether we really need it. Maybe we should require >>> SSL+SNI when using HotRod protocol with no exceptions? The thing that >>> bothers me is that SSL makes the whole setup twice slower: >>> https://gist.github.com/slaskawi/51f76b0658b9ee0c9351bd17224b1b >>> a2#file-gistfile1-txt-L1753-L1754 >>> >>> >>>> >>>> In other words, for the router component to make sense, I think it >>>> should: >>>> >>>> 1. Clients, no matter whether HR or REST, to use 1 single protocol to >>>> the router. The natural thing here would be HTTP/2 or similar protocol. >>>> >>> >>> Yes, that's the goal. >>> >>> >>>> 2. The router then talks HR or REST to the backend. Here the router >>>> uses TCP or HTTP protocol based on the backend needs. >>>> >>> >>> It's even simpler - it just uses the backend's Netty Handlers. 
>>> >>> Since the SNI implementation is ready, please have a look: >>> https://github.com/infinispan/infinispan/pull/4348 >>> >>> >>>> >>>> ^ The above implies that HR client has to talk TCP when using HR server >>>> directly or HTTP/2 when using it via router, but I don't think this is too >>>> bad and it gives us some experience working with HTTP/2 besides the work >>>> Anton is carrying out as part of GSoC. >>> >>> >>>> Cheers, >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> >>>> > On 11 May 2016, at 10:38, Sebastian Laskawiec >>>> wrote: >>>> > >>>> > Hey Tristan! >>>> > >>>> > If I understood you correctly, you're suggesting to enhance the >>>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >>>> shared transport and by that I mean started on the same Netty server). >>>> > >>>> > Yes, that also could work but I'm not convinced if we won't loose >>>> some configuration flexibility. >>>> > >>>> > Let's consider a configuration file - https://gist.github.com/ >>>> slaskawi/c85105df571eeb56b12752d7f5777ce9, how for example use >>>> authentication for CacheContainer cc1 (and not for cc2) and encryption for >>>> cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using >>>> this kind of different options makes sense in terms of multi tenancy. And >>>> please note that if we start a new Netty server for each CacheContainer - >>>> we almost ended up with the router I proposed. >>>> > >>>> > The second argument for using a router is extracting the routing >>>> logic into a separate module. Otherwise we would probably end up with >>>> several if(isMultiTenent()) statements in Hotrod as well as REST server. >>>> Extracting this has also additional advantage that we limit changes in >>>> those modules (actually there will be probably 2 changes #1 we should be >>>> able to start a ProtocolServer without starting a Netty server (the Router >>>> will do it in multi tenant configuration) and #2 collect Netty handlers >>>> from ProtocolServer). >>>> > >>>> > To sum it up - the router's implementation seems to be more >>>> complicated but in the long run I think it might be worth it. >>>> > >>>> > I also wrote the summary of the above here: >>>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>> for-Hotrod-Server#alternative-approach >>>> > >>>> > @Galder - you wrote a huge part of the Hot Rod server - I would love >>>> to hear your opinion as well. >>>> > >>>> > Thanks >>>> > Sebastian >>>> > >>>> > >>>> > >>>> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant < >>>> ttarrant at redhat.com> wrote: >>>> > Not sure I like the introduction of another component at the front. >>>> > >>>> > My original idea for allowing the client to choose the container was: >>>> > >>>> > - with TLS: use SNI to choose the container >>>> > - without TLS: enhance the PING operation of the Hot Rod protocol to >>>> > also take the server name. This would need to be a requirement when >>>> > exposing multiple containers over the same endpoint. >>>> > >>>> > From a client API perspective, there would be no difference between >>>> the >>>> > above two approaches: just specify the server name and depending on >>>> the >>>> > transport, select the right one. >>>> > >>>> > Tristan >>>> > >>>> > On 29/04/2016 17:29, Sebastian Laskawiec wrote: >>>> > > Dear Community, >>>> > > >>>> > > Please have a look at the design of Multi tenancy support for >>>> Infinispan >>>> > > [1]. I would be more than happy to get some feedback from you. 
>>>> > > >>>> > > Highlights: >>>> > > >>>> > > * The implementation will be based on a Router (which will be >>>> built >>>> > > based on Netty) >>>> > > * Multiple Hot Rod and REST servers will be attached to the router >>>> > > which in turn will be attached to the endpoint >>>> > > * The router will operate on a binary protocol when using Hot Rod >>>> > > clients and path-based routing when using REST >>>> > > * Memcached will be out of scope >>>> > > * The router will support SSL+SNI >>>> > > >>>> > > Thanks >>>> > > Sebastian >>>> > > >>>> > > [1] >>>> > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>> for-Hotrod-Server >>>> > > >>>> > > >>>> > > _______________________________________________ >>>> > > infinispan-dev mailing list >>>> > > infinispan-dev at lists.jboss.org >>>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> > > >>>> > >>>> > -- >>>> > Tristan Tarrant >>>> > Infinispan Lead >>>> > JBoss, a division of Red Hat >>>> > _______________________________________________ >>>> > infinispan-dev mailing list >>>> > infinispan-dev at lists.jboss.org >>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> > >>>> > _______________________________________________ >>>> > infinispan-dev mailing list >>>> > infinispan-dev at lists.jboss.org >>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160912/01ba9bcd/attachment-0001.html From gabovantonnikolaevich at gmail.com Mon Sep 12 05:10:25 2016 From: gabovantonnikolaevich at gmail.com (Anton Gabov) Date: Mon, 12 Sep 2016 12:10:25 +0300 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Sebastian, correct me if I'm wrong. As I understand, client will have Router instance, which has info about servers, caches in these servers and support protocols (HotRod, HTTP/1, HTTP/2). So, I have some questions: 1) Will Router keep all connections up or close connection after the request? For instance, client need to make request for some server. It creates connection, make request and close connection (or we keep connection and leave it opened). 2) How update from HTTP/2 to HotRod can be done? I cannot imagine this situation, but I would like to know it :) 3) Can Router be configurated programmatically or only by xml configuration? Best wishes, Anton. 2016-09-12 9:57 GMT+03:00 Sebastian Laskawiec : > After investigating ALPN [1] and HTTP/2 [2] support I revisited this > feature to see how everything fits together. > > Just as a reminder - the idea behind multi-tenant router is to implement a > component which will have references to all deployed Hot Rod and REST > servers (Memcached and WebSockets are out of the scope at this point) [3] > and will be able to forward requests to proper instance. > > Since we'd like to create an ALPN-based, polyglot client at some point, I > believe the router concept should be a little bit more generic. It should > be able to use SNI for routing as well as negotiate the protocol using ALPN > or even switch to different protocol using HTTP 1.1/Upgrade header. 
Having > this in mind, I would like to rebase multi-tenancy feature and slightly > modify router endpoint configuration to something like this: > > > > > > > > > > > > > > > > > > > > > > > With this configuration, the router should be really flexible and > extendable. > > If there will be no negative comments, I'll start working on that > tomorrow. > > Thanks > Sebastian > > [1] https://issues.jboss.org/browse/ISPN-6899 > [2] https://issues.jboss.org/browse/ISPN-6676 > [3] https://github.com/infinispan/infinispan/wiki/ > Multi-tenancy-for-Hotrod-Server > > On Mon, Jul 18, 2016 at 9:14 AM, Sebastian Laskawiec > wrote: > >> Hey! >> >> Dan pointed out a very interesting thing [1] - we could use host header >> for multi-tenant REST endpoints. Although I really like the idea (this >> header was introduced to support this kind of use cases), it might be a bit >> problematic from security point of view (if someone forgets to set it, >> he'll be talking to someone else Cache Container). >> >> What do you think about this? Should we implement this (now or later)? >> >> I vote for yes and implement it in 9.1 (or 9.0 if there is enough time). >> >> Thanks >> Sebastian >> >> On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec > > wrote: >> >>> Hey! >>> >>> The multi-tenancy support for Hot Rod and REST has been implemented [2]. >>> Since the PR is gigantic, I marked some interesting places for review so >>> you might want to skip boilerplate parts. >>> >>> The Memcached and WebSockets implementations are currently out of scope. >>> If you would like us to implement them, please vote on the following >>> tickets: >>> >>> - Memcached https://issues.jboss.org/browse/ISPN-6639 >>> - Web Sockets https://issues.jboss.org/browse/ISPN-6638 >>> >>> Thanks >>> Sebastian >>> >>> [2] https://github.com/infinispan/infinispan/pull/4348 >>> >>> On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec < >>> slaskawi at redhat.com> wrote: >>> >>>> Hey Galder! >>>> >>>> Comments inlined. >>>> >>>> Thanks >>>> Sebastian >>>> >>>> On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> Sorry for the delay getting back on this. >>>>> >>>>> The addition of a new component does not worry me so much. It has the >>>>> advantage of implementing it once independent of the backend endpoint, >>>>> whether HR or Rest. >>>>> >>>>> What I'm struggling to understand is what protocol the clients will >>>>> use to talk to the router. It seems wasteful having to build two protocols >>>>> at this level, e.g. one at TCP level and one at REST level. If you're going >>>>> to end up building two protocols, the benefit of the router component >>>>> dissapears and then you might as well embedded the two routing protocols >>>>> within REST and HR directly. >>>>> >>>> >>>> I think I wasn't clear enough in the design how the routing works... >>>> >>>> In your scenario - both servers (hotrod and rest) will start >>>> EmbeddedCacheManagers internally but none of them will start Netty >>>> transport. The only transport that will be turned on is the router. The >>>> router will be responsible for recognizing the request type (if HTTP - find >>>> proper REST server, if HotRod protocol - find proper HotRod) and attaching >>>> handlers at the end of the pipeline. >>>> >>>> Regarding to custom protocol (this usecase could be used with Hotrod >>>> clients which do not use SSL (so SNI routing is not possible)), you and >>>> Tristan got me thinking whether we really need it. 
Maybe we should require >>>> SSL+SNI when using HotRod protocol with no exceptions? The thing that >>>> bothers me is that SSL makes the whole setup twice slower: >>>> https://gist.github.com/slaskawi/51f76b0658b9ee0c935 >>>> 1bd17224b1ba2#file-gistfile1-txt-L1753-L1754 >>>> >>>> >>>>> >>>>> In other words, for the router component to make sense, I think it >>>>> should: >>>>> >>>>> 1. Clients, no matter whether HR or REST, to use 1 single protocol to >>>>> the router. The natural thing here would be HTTP/2 or similar protocol. >>>>> >>>> >>>> Yes, that's the goal. >>>> >>>> >>>>> 2. The router then talks HR or REST to the backend. Here the router >>>>> uses TCP or HTTP protocol based on the backend needs. >>>>> >>>> >>>> It's even simpler - it just uses the backend's Netty Handlers. >>>> >>>> Since the SNI implementation is ready, please have a look: >>>> https://github.com/infinispan/infinispan/pull/4348 >>>> >>>> >>>>> >>>>> ^ The above implies that HR client has to talk TCP when using HR >>>>> server directly or HTTP/2 when using it via router, but I don't think this >>>>> is too bad and it gives us some experience working with HTTP/2 besides the >>>>> work Anton is carrying out as part of GSoC. >>>> >>>> >>>>> Cheers, >>>>> -- >>>>> Galder Zamarre?o >>>>> Infinispan, Red Hat >>>>> >>>>> > On 11 May 2016, at 10:38, Sebastian Laskawiec >>>>> wrote: >>>>> > >>>>> > Hey Tristan! >>>>> > >>>>> > If I understood you correctly, you're suggesting to enhance the >>>>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >>>>> shared transport and by that I mean started on the same Netty server). >>>>> > >>>>> > Yes, that also could work but I'm not convinced if we won't loose >>>>> some configuration flexibility. >>>>> > >>>>> > Let's consider a configuration file - https://gist.github.com/slaska >>>>> wi/c85105df571eeb56b12752d7f5777ce9, how for example use >>>>> authentication for CacheContainer cc1 (and not for cc2) and encryption for >>>>> cc1 (and not for cc1)? Both are tied to hotrod-connector. I think using >>>>> this kind of different options makes sense in terms of multi tenancy. And >>>>> please note that if we start a new Netty server for each CacheContainer - >>>>> we almost ended up with the router I proposed. >>>>> > >>>>> > The second argument for using a router is extracting the routing >>>>> logic into a separate module. Otherwise we would probably end up with >>>>> several if(isMultiTenent()) statements in Hotrod as well as REST server. >>>>> Extracting this has also additional advantage that we limit changes in >>>>> those modules (actually there will be probably 2 changes #1 we should be >>>>> able to start a ProtocolServer without starting a Netty server (the Router >>>>> will do it in multi tenant configuration) and #2 collect Netty handlers >>>>> from ProtocolServer). >>>>> > >>>>> > To sum it up - the router's implementation seems to be more >>>>> complicated but in the long run I think it might be worth it. >>>>> > >>>>> > I also wrote the summary of the above here: >>>>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>>> for-Hotrod-Server#alternative-approach >>>>> > >>>>> > @Galder - you wrote a huge part of the Hot Rod server - I would love >>>>> to hear your opinion as well. >>>>> > >>>>> > Thanks >>>>> > Sebastian >>>>> > >>>>> > >>>>> > >>>>> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant < >>>>> ttarrant at redhat.com> wrote: >>>>> > Not sure I like the introduction of another component at the front. 
>>>>> > >>>>> > My original idea for allowing the client to choose the container was: >>>>> > >>>>> > - with TLS: use SNI to choose the container >>>>> > - without TLS: enhance the PING operation of the Hot Rod protocol to >>>>> > also take the server name. This would need to be a requirement when >>>>> > exposing multiple containers over the same endpoint. >>>>> > >>>>> > From a client API perspective, there would be no difference between >>>>> the >>>>> > above two approaches: just specify the server name and depending on >>>>> the >>>>> > transport, select the right one. >>>>> > >>>>> > Tristan >>>>> > >>>>> > On 29/04/2016 17:29, Sebastian Laskawiec wrote: >>>>> > > Dear Community, >>>>> > > >>>>> > > Please have a look at the design of Multi tenancy support for >>>>> Infinispan >>>>> > > [1]. I would be more than happy to get some feedback from you. >>>>> > > >>>>> > > Highlights: >>>>> > > >>>>> > > * The implementation will be based on a Router (which will be >>>>> built >>>>> > > based on Netty) >>>>> > > * Multiple Hot Rod and REST servers will be attached to the >>>>> router >>>>> > > which in turn will be attached to the endpoint >>>>> > > * The router will operate on a binary protocol when using Hot Rod >>>>> > > clients and path-based routing when using REST >>>>> > > * Memcached will be out of scope >>>>> > > * The router will support SSL+SNI >>>>> > > >>>>> > > Thanks >>>>> > > Sebastian >>>>> > > >>>>> > > [1] >>>>> > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>>> for-Hotrod-Server >>>>> > > >>>>> > > >>>>> > > _______________________________________________ >>>>> > > infinispan-dev mailing list >>>>> > > infinispan-dev at lists.jboss.org >>>>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> > > >>>>> > >>>>> > -- >>>>> > Tristan Tarrant >>>>> > Infinispan Lead >>>>> > JBoss, a division of Red Hat >>>>> > _______________________________________________ >>>>> > infinispan-dev mailing list >>>>> > infinispan-dev at lists.jboss.org >>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> > >>>>> > _______________________________________________ >>>>> > infinispan-dev mailing list >>>>> > infinispan-dev at lists.jboss.org >>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> >>> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160912/7d51a477/attachment-0001.html From slaskawi at redhat.com Mon Sep 12 09:03:41 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 12 Sep 2016 15:03:41 +0200 Subject: [infinispan-dev] Multi tenancy support for Infinispan In-Reply-To: References: <5731A2D6.1020300@redhat.com> Message-ID: Hey Anton! Just to clarify - the router is a concept implemented in Infinispan *Server*. In the endpoint to be 100% precise. Each server will have this component up and running and it will take the incomming TCP connection and pass it to proper NettyServer or RestServer instance (after choosing proper tenant or negotiating protocol with ALPN once we have the implementation). 
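As a rough illustration of that dispatch step, here is a minimal sketch in plain Java (all names invented, this is not the actual Netty-based implementation): the router keeps a single listening transport and hands each accepted connection to whichever per-tenant endpoint is registered for the SNI host name presented during the TLS handshake.

    import java.net.Socket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: "TenantRouter" and "Endpoint" are invented names, not Infinispan classes.
    public class TenantRouter {

       // Anything able to take over an accepted connection, e.g. a Hot Rod or REST endpoint.
       public interface Endpoint {
          void accept(Socket connection);
       }

       private final Map<String, Endpoint> endpointsBySniHost = new ConcurrentHashMap<>();

       public void register(String sniHostName, Endpoint endpoint) {
          endpointsBySniHost.put(sniHostName, endpoint);
       }

       // Called once the TLS layer has extracted the SNI host name of a new connection.
       public void route(String sniHostName, Socket connection) {
          Endpoint endpoint = endpointsBySniHost.get(sniHostName);
          if (endpoint == null) {
             throw new IllegalArgumentException("No tenant registered for " + sniHostName);
          }
          endpoint.accept(connection);
       }
    }

In the real server the accept step is of course a matter of appending the tenant's Netty handlers to the channel pipeline rather than handing over a raw socket.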
On the client side we will need something a little but different. A polyglot client will need to look into available protocol implementations (let's imagine we have a client which supports Hot Rod and HTTP/2 protocol) during the TLS handshake and pick the best one. For the sake of this example - Hot Rod could have a higher priority because it's faster. I assume your questions are slightly missed (since they assume the router on a client side) but let me try to answer them... Thanks Sebastian On Mon, Sep 12, 2016 at 11:10 AM, Anton Gabov < gabovantonnikolaevich at gmail.com> wrote: > Sebastian, correct me if I'm wrong. > > As I understand, client will have Router instance, which has info about > servers, caches in these servers and support protocols (HotRod, HTTP/1, > HTTP/2). > > So, I have some questions: > 1) Will Router keep all connections up or close connection after the > request? For instance, client need to make request for some server. It > creates connection, make request and close connection (or we keep > connection and leave it opened). > I believe it should keep (or pool) them. Moreover when considering Kubernetes we need to go through an Ingress [1]. Plus there are also PetSets [2]. I've heard some rumors that the routing for them might use SNI. So we might need to use TLS/SNI differently depending on scenario and possibly holding more than one connection per server. Unfortunately I can not confirm this at this stage. [1] http://kubernetes.io/docs/user-guide/ingress/ [2] http://kubernetes.io/docs/user-guide/petset/ > 2) How update from HTTP/2 to HotRod can be done? I cannot imagine this > situation, but I would like to know it :) > We can not upgrade since since HTTP/2 doesn't support the upgrade procedure. However you can upgrade from HTTP 1.1 using the Upgrade header [3] or negotiate using HTTP/2 using ALPN [4]. The same approach might be used to upgrade (or negotiate) any TCP based protocol (including HTTP for REST, Memcached since it's plain text or Hot Rod). [3] https://http2.github.io/http2-spec/#rfc.section.3.2 [4] https://tools.ietf.org/html/rfc7301 > 3) Can Router be configurated programmatically or only by xml > configuration? > Since this is a server component - only XML will be available for the client*. [*] But if you look carefully, the implementation allows you to bootstrap everything from java using proper ConfigurationBuilders. Of course they should be used only internally. > > Best wishes, > Anton. > > 2016-09-12 9:57 GMT+03:00 Sebastian Laskawiec : > >> After investigating ALPN [1] and HTTP/2 [2] support I revisited this >> feature to see how everything fits together. >> >> Just as a reminder - the idea behind multi-tenant router is to implement >> a component which will have references to all deployed Hot Rod and REST >> servers (Memcached and WebSockets are out of the scope at this point) [3] >> and will be able to forward requests to proper instance. >> >> Since we'd like to create an ALPN-based, polyglot client at some point, I >> believe the router concept should be a little bit more generic. It should >> be able to use SNI for routing as well as negotiate the protocol using ALPN >> or even switch to different protocol using HTTP 1.1/Upgrade header. Having >> this in mind, I would like to rebase multi-tenancy feature and slightly >> modify router endpoint configuration to something like this: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> With this configuration, the router should be really flexible and >> extendable. 
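The XML Sebastian refers to above was stripped by the list's attachment scrubbing, so purely as an illustration of the kind of declaration being described (the element and attribute names below are guesses, not the real schema), a multi-tenant router endpoint could look roughly like this:

    <router-connector hotrod-socket-binding="hotrod" rest-socket-binding="rest">
       <multi-tenancy>
          <hotrod name="hotrod-tenant-1">
             <sni host-name="tenant-1.example.com" security-realm="Tenant1Realm"/>
          </hotrod>
          <rest name="rest-tenant-1">
             <prefix path="tenant-1"/>
          </rest>
       </multi-tenancy>
    </router-connector>

The point stands either way: the routing rules (SNI host names for Hot Rod, path prefixes for REST) live under the router element, while the per-tenant connectors themselves stay as they are.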
>> >> If there will be no negative comments, I'll start working on that >> tomorrow. >> >> Thanks >> Sebastian >> >> [1] https://issues.jboss.org/browse/ISPN-6899 >> [2] https://issues.jboss.org/browse/ISPN-6676 >> [3] https://github.com/infinispan/infinispan/wiki/Multi- >> tenancy-for-Hotrod-Server >> >> On Mon, Jul 18, 2016 at 9:14 AM, Sebastian Laskawiec > > wrote: >> >>> Hey! >>> >>> Dan pointed out a very interesting thing [1] - we could use host header >>> for multi-tenant REST endpoints. Although I really like the idea (this >>> header was introduced to support this kind of use cases), it might be a bit >>> problematic from security point of view (if someone forgets to set it, >>> he'll be talking to someone else Cache Container). >>> >>> What do you think about this? Should we implement this (now or later)? >>> >>> I vote for yes and implement it in 9.1 (or 9.0 if there is enough time). >>> >>> Thanks >>> Sebastian >>> >>> On Wed, Jun 29, 2016 at 8:55 AM, Sebastian Laskawiec < >>> slaskawi at redhat.com> wrote: >>> >>>> Hey! >>>> >>>> The multi-tenancy support for Hot Rod and REST has been implemented >>>> [2]. Since the PR is gigantic, I marked some interesting places for review >>>> so you might want to skip boilerplate parts. >>>> >>>> The Memcached and WebSockets implementations are currently out of >>>> scope. If you would like us to implement them, please vote on the following >>>> tickets: >>>> >>>> - Memcached https://issues.jboss.org/browse/ISPN-6639 >>>> - Web Sockets https://issues.jboss.org/browse/ISPN-6638 >>>> >>>> Thanks >>>> Sebastian >>>> >>>> [2] https://github.com/infinispan/infinispan/pull/4348 >>>> >>>> On Thu, May 26, 2016 at 4:51 PM, Sebastian Laskawiec < >>>> slaskawi at redhat.com> wrote: >>>> >>>>> Hey Galder! >>>>> >>>>> Comments inlined. >>>>> >>>>> Thanks >>>>> Sebastian >>>>> >>>>> On Wed, May 25, 2016 at 10:52 AM, Galder Zamarre?o >>>>> wrote: >>>>> >>>>>> Hi all, >>>>>> >>>>>> Sorry for the delay getting back on this. >>>>>> >>>>>> The addition of a new component does not worry me so much. It has the >>>>>> advantage of implementing it once independent of the backend endpoint, >>>>>> whether HR or Rest. >>>>>> >>>>>> What I'm struggling to understand is what protocol the clients will >>>>>> use to talk to the router. It seems wasteful having to build two protocols >>>>>> at this level, e.g. one at TCP level and one at REST level. If you're going >>>>>> to end up building two protocols, the benefit of the router component >>>>>> dissapears and then you might as well embedded the two routing protocols >>>>>> within REST and HR directly. >>>>>> >>>>> >>>>> I think I wasn't clear enough in the design how the routing works... >>>>> >>>>> In your scenario - both servers (hotrod and rest) will start >>>>> EmbeddedCacheManagers internally but none of them will start Netty >>>>> transport. The only transport that will be turned on is the router. The >>>>> router will be responsible for recognizing the request type (if HTTP - find >>>>> proper REST server, if HotRod protocol - find proper HotRod) and attaching >>>>> handlers at the end of the pipeline. >>>>> >>>>> Regarding to custom protocol (this usecase could be used with Hotrod >>>>> clients which do not use SSL (so SNI routing is not possible)), you and >>>>> Tristan got me thinking whether we really need it. Maybe we should require >>>>> SSL+SNI when using HotRod protocol with no exceptions? 
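For readers wondering what the client side of the SSL+SNI requirement amounts to, in plain JSSE terms (generic Java, not the Hot Rod client API) it is just a matter of naming the wanted tenant during the handshake, for example:

    import java.util.Collections;
    import javax.net.ssl.SNIHostName;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLParameters;
    import javax.net.ssl.SSLSocket;

    // Generic JSSE sketch: the client names the tenant it wants via SNI; the server-side
    // router reads that name from the ClientHello and picks the matching tenant's endpoint.
    public class SniClientSketch {
       public static void main(String[] args) throws Exception {
          SSLSocket socket = (SSLSocket) SSLContext.getDefault()
                .getSocketFactory().createSocket("server.example.com", 11222);
          SSLParameters params = socket.getSSLParameters();
          params.setServerNames(Collections.singletonList(new SNIHostName("tenant-1")));
          socket.setSSLParameters(params);
          socket.startHandshake();
          // ... the Hot Rod conversation would then continue over this socket ...
       }
    }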
The thing that >>>>> bothers me is that SSL makes the whole setup twice slower: >>>>> https://gist.github.com/slaskawi/51f76b0658b9ee0c935 >>>>> 1bd17224b1ba2#file-gistfile1-txt-L1753-L1754 >>>>> >>>>> >>>>>> >>>>>> In other words, for the router component to make sense, I think it >>>>>> should: >>>>>> >>>>>> 1. Clients, no matter whether HR or REST, to use 1 single protocol to >>>>>> the router. The natural thing here would be HTTP/2 or similar protocol. >>>>>> >>>>> >>>>> Yes, that's the goal. >>>>> >>>>> >>>>>> 2. The router then talks HR or REST to the backend. Here the router >>>>>> uses TCP or HTTP protocol based on the backend needs. >>>>>> >>>>> >>>>> It's even simpler - it just uses the backend's Netty Handlers. >>>>> >>>>> Since the SNI implementation is ready, please have a look: >>>>> https://github.com/infinispan/infinispan/pull/4348 >>>>> >>>>> >>>>>> >>>>>> ^ The above implies that HR client has to talk TCP when using HR >>>>>> server directly or HTTP/2 when using it via router, but I don't think this >>>>>> is too bad and it gives us some experience working with HTTP/2 besides the >>>>>> work Anton is carrying out as part of GSoC. >>>>> >>>>> >>>>>> Cheers, >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>> > On 11 May 2016, at 10:38, Sebastian Laskawiec >>>>>> wrote: >>>>>> > >>>>>> > Hey Tristan! >>>>>> > >>>>>> > If I understood you correctly, you're suggesting to enhance the >>>>>> ProtocolServer to support multiple EmbeddedCacheManagers (probably with >>>>>> shared transport and by that I mean started on the same Netty server). >>>>>> > >>>>>> > Yes, that also could work but I'm not convinced if we won't loose >>>>>> some configuration flexibility. >>>>>> > >>>>>> > Let's consider a configuration file - >>>>>> https://gist.github.com/slaskawi/c85105df571eeb56b12752d7f5777ce9, >>>>>> how for example use authentication for CacheContainer cc1 (and not for cc2) >>>>>> and encryption for cc1 (and not for cc1)? Both are tied to >>>>>> hotrod-connector. I think using this kind of different options makes sense >>>>>> in terms of multi tenancy. And please note that if we start a new Netty >>>>>> server for each CacheContainer - we almost ended up with the router I >>>>>> proposed. >>>>>> > >>>>>> > The second argument for using a router is extracting the routing >>>>>> logic into a separate module. Otherwise we would probably end up with >>>>>> several if(isMultiTenent()) statements in Hotrod as well as REST server. >>>>>> Extracting this has also additional advantage that we limit changes in >>>>>> those modules (actually there will be probably 2 changes #1 we should be >>>>>> able to start a ProtocolServer without starting a Netty server (the Router >>>>>> will do it in multi tenant configuration) and #2 collect Netty handlers >>>>>> from ProtocolServer). >>>>>> > >>>>>> > To sum it up - the router's implementation seems to be more >>>>>> complicated but in the long run I think it might be worth it. >>>>>> > >>>>>> > I also wrote the summary of the above here: >>>>>> https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>>>> for-Hotrod-Server#alternative-approach >>>>>> > >>>>>> > @Galder - you wrote a huge part of the Hot Rod server - I would >>>>>> love to hear your opinion as well. >>>>>> > >>>>>> > Thanks >>>>>> > Sebastian >>>>>> > >>>>>> > >>>>>> > >>>>>> > On Tue, May 10, 2016 at 10:59 AM, Tristan Tarrant < >>>>>> ttarrant at redhat.com> wrote: >>>>>> > Not sure I like the introduction of another component at the front. 
>>>>>> > >>>>>> > My original idea for allowing the client to choose the container >>>>>> was: >>>>>> > >>>>>> > - with TLS: use SNI to choose the container >>>>>> > - without TLS: enhance the PING operation of the Hot Rod protocol to >>>>>> > also take the server name. This would need to be a requirement when >>>>>> > exposing multiple containers over the same endpoint. >>>>>> > >>>>>> > From a client API perspective, there would be no difference >>>>>> between the >>>>>> > above two approaches: just specify the server name and depending on >>>>>> the >>>>>> > transport, select the right one. >>>>>> > >>>>>> > Tristan >>>>>> > >>>>>> > On 29/04/2016 17:29, Sebastian Laskawiec wrote: >>>>>> > > Dear Community, >>>>>> > > >>>>>> > > Please have a look at the design of Multi tenancy support for >>>>>> Infinispan >>>>>> > > [1]. I would be more than happy to get some feedback from you. >>>>>> > > >>>>>> > > Highlights: >>>>>> > > >>>>>> > > * The implementation will be based on a Router (which will be >>>>>> built >>>>>> > > based on Netty) >>>>>> > > * Multiple Hot Rod and REST servers will be attached to the >>>>>> router >>>>>> > > which in turn will be attached to the endpoint >>>>>> > > * The router will operate on a binary protocol when using Hot >>>>>> Rod >>>>>> > > clients and path-based routing when using REST >>>>>> > > * Memcached will be out of scope >>>>>> > > * The router will support SSL+SNI >>>>>> > > >>>>>> > > Thanks >>>>>> > > Sebastian >>>>>> > > >>>>>> > > [1] >>>>>> > > https://github.com/infinispan/infinispan/wiki/Multi-tenancy- >>>>>> for-Hotrod-Server >>>>>> > > >>>>>> > > >>>>>> > > _______________________________________________ >>>>>> > > infinispan-dev mailing list >>>>>> > > infinispan-dev at lists.jboss.org >>>>>> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > > >>>>>> > >>>>>> > -- >>>>>> > Tristan Tarrant >>>>>> > Infinispan Lead >>>>>> > JBoss, a division of Red Hat >>>>>> > _______________________________________________ >>>>>> > infinispan-dev mailing list >>>>>> > infinispan-dev at lists.jboss.org >>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >>>>>> > _______________________________________________ >>>>>> > infinispan-dev mailing list >>>>>> > infinispan-dev at lists.jboss.org >>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>> >>>>> >>>> >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160912/2d015893/attachment-0001.html From ttarrant at redhat.com Tue Sep 13 06:49:14 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 13 Sep 2016 12:49:14 +0200 Subject: [infinispan-dev] Cache creation over Hot Rod / REST Message-ID: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Hi guys, I have put together a wiki page describing how cache creation over remote protocols should be implemented. 
https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod Comments are welcome Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From gustavo at infinispan.org Tue Sep 13 07:22:10 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Tue, 13 Sep 2016 12:22:10 +0100 Subject: [infinispan-dev] Cache creation over Hot Rod / REST In-Reply-To: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> References: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Message-ID: I'm wondering that when the admin client is ready [1], it'd overlap in terms of feature with this new HR operation. Having it in Hot Rod can be convenient for simple cases, but OTOH implies having to support it in all clients and potentially on other remote protocols. Not a big deal, I had the impression that [1] would be the single place to create remote caches. Since we are here, [1] implies a java client, making it very cumbersome for example for a Node.js user to create caches, so maybe we should have a Rest endpoint to do admin operations similar to [2], easily consumable by all clients and allowing for advanced operations. Thoughts? [1] https://github.com/infinispan/infinispan/wiki/Remote-Admin-C lient-Library [2] https://docs.jboss.org/author/display/WFLY10/The+HTTP+management+API Cheers, Gustavo On Tue, Sep 13, 2016 at 11:49 AM, Tristan Tarrant wrote: > Hi guys, > > I have put together a wiki page describing how cache creation over > remote protocols should be implemented. > > https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160913/29d509e4/attachment.html From sanne at infinispan.org Tue Sep 13 07:42:39 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 13 Sep 2016 12:42:39 +0100 Subject: [infinispan-dev] Cache creation over Hot Rod / REST In-Reply-To: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> References: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Message-ID: Thanks Tristan, that's a very welcome feature! Regarding the suggested API: - RemoteCache RemoteCacheManager.createCache(String cacheName, String configurationName, boolean temporary) I like the basic idea, I'm not sold on the "temporary" boolean. I think it might be confusing as it's not clear that the "temporaneousity" refers to the configuration change being persisted or not, rather than introducing some notion of a "temporary cache"; for example I might expect the Cache to be destroyed when my client disconnects. Maybe simply change the argument name to "persistInConfiguration" | "modifyConfigurationfiles" | .. ? I wouldn't mind to see this feature materialize in a simpler form which simply doesn't expose the capability to modify configuration files at all. So you could consider a: - RemoteCache RemoteCacheManager.createCache(String cacheName, String baseConfigurationName); More importantly, I'd like to see a symmetrical method to destroy the created caches; particularly useful for execution of integration tests. 
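Putting the two proposed signatures together with the symmetrical removal mentioned above, user code might eventually read as follows; none of this API exists yet, and the remove method name is invented here purely for illustration:

    // Sketch of the proposed (not yet existing) API discussed in this thread;
    // "removeCache" is an invented name for the symmetrical destroy operation.
    RemoteCacheManager rcm = new RemoteCacheManager();

    // create a cache from a named server-side base configuration, without
    // persisting the new definition into the server's configuration files:
    RemoteCache<String, String> sessions = rcm.createCache("sessions", "replicated-template");

    // ... exercise the cache, e.g. from an integration test ...

    // symmetrical cleanup once the test is done:
    rcm.removeCache("sessions");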
Thanks, Sanne On 13 September 2016 at 11:49, Tristan Tarrant wrote: > Hi guys, > > I have put together a wiki page describing how cache creation over > remote protocols should be implemented. > > https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Sep 13 07:56:02 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 13 Sep 2016 13:56:02 +0200 Subject: [infinispan-dev] Cache creation over Hot Rod / REST In-Reply-To: References: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Message-ID: <57D7E952.9030807@redhat.com> Could we just get a coordinates (IP, port... not sure if anything else is needed) for the admin client through Hot Rod operation, and use it to initialize the RACL? That way, the operation would be exposed as remoteCacheManager.admin().createCache(...). JCache could wrap the simplified RACL client, too. Radim On 09/13/2016 01:22 PM, Gustavo Fernandes wrote: > I'm wondering that when the admin client is ready [1], it'd overlap in > terms of feature with this new HR operation. > > Having it in Hot Rod can be convenient for simple cases, but OTOH > implies having to support it in all clients and > potentially on other remote protocols. > > Not a big deal, I had the impression that [1] would be the single > place to create remote caches. > > Since we are here, [1] implies a java client, making it very > cumbersome for example for a Node.js user to create caches, so maybe > we should have a Rest endpoint to do admin operations similar to [2], > easily consumable by all clients and allowing for > advanced operations. > > Thoughts? > > > [1] > https://github.com/infinispan/infinispan/wiki/Remote-Admin-Client-Library > > [2] > https://docs.jboss.org/author/display/WFLY10/The+HTTP+management+API > > > Cheers, > Gustavo > > On Tue, Sep 13, 2016 at 11:49 AM, Tristan Tarrant > wrote: > > Hi guys, > > I have put together a wiki page describing how cache creation over > remote protocols should be implemented. > > https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod > > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Tue Sep 13 08:15:32 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 13 Sep 2016 14:15:32 +0200 Subject: [infinispan-dev] Cache creation over Hot Rod / REST In-Reply-To: References: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Message-ID: <10fd2f08-6b25-de3e-7105-c2aed463da38@redhat.com> On 13/09/16 13:22, Gustavo Fernandes wrote: > I'm wondering that when the admin client is ready [1], it'd overlap in > terms of feature with this new HR operation. 
The main problem here is security: the admin endpoint will most likely have completely different security requirements (certificate, credentials, authorization) than the cache endpoint (be it Hot Rod, REST or whatever) and probably exposed on a separate network interface. And in the case of a domain controller scenario, on an entirely different host, which might not even be visible to clients. > Not a big deal, I had the impression that [1] would be the single > place to create remote caches. > > Since we are here, [1] implies a java client, making it very > cumbersome for example for a Node.js user to create caches, so maybe > we should have a Rest endpoint to do admin operations similar to [2], > easily consumable by all clients and allowing for > advanced operations. While I agree that this does add some burden on the clients, the operation is simple enough. Much simpler than implementing the RACL would be. Also, the "REST" endpoint already exists: the management endpoint on http[s]://xxx:9990 allows you to send DMR requests. That's what the console uses. Tristan > > Thoughts? > > > [1] > https://github.com/infinispan/infinispan/wiki/Remote-Admin-Client-Library > > [2] > https://docs.jboss.org/author/display/WFLY10/The+HTTP+management+API > > > Cheers, > Gustavo > > On Tue, Sep 13, 2016 at 11:49 AM, Tristan Tarrant > wrote: > > Hi guys, > > I have put together a wiki page describing how cache creation over > remote protocols should be implemented. > > https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod > > > Comments are welcome > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Tue Sep 13 08:18:27 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 13 Sep 2016 14:18:27 +0200 Subject: [infinispan-dev] Cache creation over Hot Rod / REST In-Reply-To: References: <54449ec9-2d6c-cae6-c6e6-506eff305e08@redhat.com> Message-ID: On 13/09/16 13:42, Sanne Grinovero wrote: > Thanks Tristan, that's a very welcome feature! > > Regarding the suggested API: > - RemoteCache RemoteCacheManager.createCache(String cacheName, String > configurationName, boolean temporary) > > I like the basic idea, I'm not sold on the "temporary" boolean. > I think it might be confusing as it's not clear that the > "temporaneousity" refers to the configuration change being persisted > or not, rather than introducing some notion of a "temporary cache"; > for example I might expect the Cache to be destroyed when my client > disconnects. Yes, I agree. Something which would be impossible to enforce anyway :) > Maybe simply change the argument name to "persistInConfiguration" | > "modifyConfigurationfiles" | .. ? I like "persistInConfiguration". > I wouldn't mind to see this feature materialize in a simpler form > which simply doesn't expose the capability to modify configuration > files at all. > > So you could consider a: > - RemoteCache RemoteCacheManager.createCache(String cacheName, String > baseConfigurationName); Yes, that needs to be there. 
> More importantly, I'd like to see a symmetrical method to destroy the > created caches; particularly useful for execution of integration > tests. Yep. Tristan > > Thanks, > Sanne > > > On 13 September 2016 at 11:49, Tristan Tarrant wrote: >> Hi guys, >> >> I have put together a wiki page describing how cache creation over >> remote protocols should be implemented. >> >> https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod >> >> Comments are welcome >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Wed Sep 14 10:02:32 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 14 Sep 2016 15:02:32 +0100 Subject: [infinispan-dev] Hot Rod clients & payload size estimates Message-ID: Hi all, I just noticed that the payload size estimates are ignored by the Hot Rod client when using the ProtoStreamMarshaller. That sounds good, as the estimating strategies are typically based on dumb statistics, while in case we're using protobuf-encoded data we have a schema which implies the ProtoStreamMarshaller could know better.. however this marshaller doesn't seem to try to optimise the buffer size at all. Some types in protobuf will still need variable length encoding for obvious reasons (e.g. encoding a String type), but some message types could have a pre-computed constant size. Even for those messages which include some variable length fields we could define at least: - a minimum size - try to guess a reasonable default size depending on the type and number of fields - a per-message-type sampler? With the OGM Grid Dialect for Hot Rod I'm sending several different kinds of objects over the wire, which implies wildly different buffer sizes. Would be nice to be able to control this better? Thanks, Sanne From ttarrant at redhat.com Thu Sep 15 06:06:36 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 15 Sep 2016 12:06:36 +0200 Subject: [infinispan-dev] Hot Rod testing Message-ID: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Recently I've had a chat with Galder, Will and Vittorio about how we test the Hot Rod server module and the various clients. We also discussed some of this in the past, but we now need to move forward with a better strategy. First up is the Hot Rod server module testsuite: it is the only part of the code which still uses Scala. Will has a partial port of it to Java, but we're wondering if it is worth completing that work, seeing that most of the tests in that testsuite, in particular those related to the protocol itself, are actually duplicated by the Java Hot Rod client's testsuite which also happens to be our reference implementation of a client and is much more extensive. The only downside of removing it is that verification will require running the client testsuite, instead of being self-contained. Next up is how we test clients. The Java client, partially described above, runs all of the tests against ad-hoc embedded servers. Some of these tests, in particular those related to topology, start and stop new servers on the fly.
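For anyone who has not seen that pattern, an ad-hoc embedded server test boils down to something like the following sketch (the exact builder and lifecycle methods have shifted a bit between versions, so treat this as an outline rather than copy-paste code):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;
    import org.infinispan.server.hotrod.HotRodServer;
    import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;

    // Outline of an "ad-hoc embedded server" test: an in-process Hot Rod server wrapping
    // an EmbeddedCacheManager, exercised through the regular Java Hot Rod client.
    public class AdHocServerSketch {
       public static void main(String[] args) throws Exception {
          EmbeddedCacheManager cacheManager = new DefaultCacheManager();
          cacheManager.defineConfiguration("test", new ConfigurationBuilder().build());

          HotRodServer server = new HotRodServer();
          server.start(new HotRodServerConfigurationBuilder()
                .host("127.0.0.1").port(11222).build(), cacheManager);

          RemoteCacheManager remoteCacheManager = new RemoteCacheManager(
                new org.infinispan.client.hotrod.configuration.ConfigurationBuilder()
                      .addServer().host("127.0.0.1").port(11222).build());
          RemoteCache<String, String> cache = remoteCacheManager.getCache("test");
          cache.put("key", "value");

          remoteCacheManager.stop();
          server.stop();
          cacheManager.stop();
       }
    }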
The server integration testsuite performs yet another set of tests, some of which overlap the above, but using the actual full-blown server. It doesn't test for topology changes. The C++ client wraps the native client in a Java wrapper generated by SWIG and runs the Java client testsuite. It then checks against a blacklist of known failures. It also has a small number of native tests which use the server distribution. The Node.js client has its own home-grown testsuite which also uses the server distribution. Duplication aside, which in some cases is unavoidable, it is impossible to confidently say that each client is properly tested. Since complete unification is impossible because of the different testing harnesses used by the various platforms/languages, I propose the following: - we identify and group the tests depending on their scope (basic protocol ops, bulk ops, topology/failover, security, etc). A client which implements the functionality of a group MUST pass all of the tests in that group with NO exceptions - we assign a unique identifier to each group/test combination (e.g. HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be collected in a "test book" (some kind of structured file) for comparison with client test runs - we refactor the Java client testsuite according to the above grouping / naming strategy so that testsuite which use the wrapping approach (i.e. C++ with SWIG) can consume it by directly specifying the supported groups - other clients get reorganized so that they support the above grouping I understand this is quite some work, but the current situation isn't really sustainable. Let me know what your thoughts are Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From sanne at infinispan.org Thu Sep 15 07:33:02 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 15 Sep 2016 12:33:02 +0100 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: I was actually planning to start a similar topic, but from the point of view of user's testing needs. I've recently created Hibernate OGM support for Hot Rod, and it wasn't as easy as other NoSQL databases to test; luckily I have some knowledge and contact on Infinispan ;) but I had to develop several helpers and refine the approach to testing over multiple iterations. I ended up developing a JUnit rule - handy for individual test runs in the IDE - and with a Maven life cycle extension and also with an Arquillian extension, which I needed to run both the Hot Rod server and start a Wildfly instance to host my client app. At some point I was also in trouble with conflicting dependencies so considered making a Maven plugin to manage the server lifecycle as a proper IT phase - I didn't ultimately make this as I found an easier solution but it would be great if Infinispan could provide such helpers to end users too.. Forking the ANT scripts from the Infinispan project to assemble and start my own (as you do..) seems quite cumbersome for users ;) Especially the server is not even available via Maven coordinates. I'm of course happy to contribute my battle-tested Test helpers to Infinispan, but they are meant for JUnit users. Finally, comparing to developing OGM integrations for other NoSQL stores.. It's really hard work when there is no "viewer" of the cache content. 
We need some kind of interactive console to explore the stored data, I felt like driving blind: developing based on black box, when something doesn't work as expected it's challenging to figure if one has a bug with the storage method rather than the reading method, or maybe the encoding not quite right or the query options being used.. sometimes it's the used flags or the configuration properties (hell, I've been swearing a lot at some of these flags!) Thanks, Sanne On 15 Sep 2016 11:07, "Tristan Tarrant" wrote: > Recently I've had a chat with Galder, Will and Vittorio about how we > test the Hot Rod server module and the various clients. We also > discussed some of this in the past, but we now need to move forward with > a better strategy. > > First up is the Hot Rod server module testsuite: it is the only part of > the code which still uses Scala. Will has a partial port of it to Java, > but we're wondering if it is worth completing that work, seeing that > most of the tests in that testsuite, in particular those related to the > protocol itself, are actually duplicated by the Java Hot Rod client's > testsuite which also happens to be our reference implementation of a > client and is much more extensive. > The only downside of removing it is that verification will require > running the client testsuite, instead of being self-contained. > > Next up is how we test clients. > > The Java client, partially described above, runs all of the tests > against ad-hoc embedded servers. Some of these tests, in particular > those related to topology, start and stop new servers on the fly. > > The server integration testsuite performs yet another set of tests, some > of which overlap the above, but using the actual full-blown server. It > doesn't test for topology changes. > > The C++ client wraps the native client in a Java wrapper generated by > SWIG and runs the Java client testsuite. It then checks against a > blacklist of known failures. It also has a small number of native tests > which use the server distribution. > > The Node.js client has its own home-grown testsuite which also uses the > server distribution. > > Duplication aside, which in some cases is unavoidable, it is impossible > to confidently say that each client is properly tested. > > Since complete unification is impossible because of the different > testing harnesses used by the various platforms/languages, I propose the > following: > > - we identify and group the tests depending on their scope (basic > protocol ops, bulk ops, topology/failover, security, etc). A client > which implements the functionality of a group MUST pass all of the tests > in that group with NO exceptions > - we assign a unique identifier to each group/test combination (e.g. > HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be > collected in a "test book" (some kind of structured file) for comparison > with client test runs > - we refactor the Java client testsuite according to the above grouping > / naming strategy so that testsuite which use the wrapping approach > (i.e. C++ with SWIG) can consume it by directly specifying the supported > groups > - other clients get reorganized so that they support the above grouping > > I understand this is quite some work, but the current situation isn't > really sustainable. 
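To make the quoted "test book" idea a little more concrete, one possible shape for such a structured file could be the following; this is purely illustrative, only the group names and the two HR.BASIC identifiers echo the proposal itself, the rest are made up:

    # hotrod-test-book.yaml (hypothetical name and format): each client declares which
    # groups it implements and must then pass every test listed in those groups.
    groups:
      HR.BASIC:
        - HR.BASIC.PUT
        - HR.BASIC.PUT_FLAGS_SKIP_LOAD
        - HR.BASIC.GET
      HR.BULK:
        - HR.BULK.GET_ALL
        - HR.BULK.PUT_ALL
      HR.TOPOLOGY:
        - HR.TOPOLOGY.FAILOVER_ON_SERVER_STOP
      HR.SECURITY:
        - HR.SECURITY.AUTH_DIGEST_MD5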
> > Let me know what your thoughts are > > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160915/826cf383/attachment.html From gustavo at infinispan.org Thu Sep 15 07:52:49 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Thu, 15 Sep 2016 12:52:49 +0100 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero wrote: > I was actually planning to start a similar topic, but from the point of > view of user's testing needs. > > I've recently created Hibernate OGM support for Hot Rod, and it wasn't as > easy as other NoSQL databases to test; luckily I have some knowledge and > contact on Infinispan ;) but I had to develop several helpers and refine > the approach to testing over multiple iterations. > > I ended up developing a JUnit rule - handy for individual test runs in the > IDE - and with a Maven life cycle extension and also with an Arquillian > extension, which I needed to run both the Hot Rod server and start a > Wildfly instance to host my client app. > > At some point I was also in trouble with conflicting dependencies so > considered making a Maven plugin to manage the server lifecycle as a proper > IT phase - I didn't ultimately make this as I found an easier solution but > it would be great if Infinispan could provide such helpers to end users > too.. Forking the ANT scripts from the Infinispan project to assemble and > start my own (as you do..) seems quite cumbersome for users ;) > > Especially the server is not even available via Maven coordinates*.* > The server is available at [1] [1] http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ > I'm of course happy to contribute my battle-tested Test helpers to > Infinispan, but they are meant for JUnit users. > Finally, comparing to developing OGM integrations for other NoSQL stores.. > It's really hard work when there is no "viewer" of the cache content. > We need some kind of interactive console to explore the stored data, I felt > like driving blind: developing based on black box, when something doesn't > work as expected it's challenging to figure if one has a bug with the > storage method rather than the reading method, or maybe the encoding not > quite right or the query options being used.. sometimes it's the used flags > or the configuration properties (hell, I've been swearing a lot at some of > these flags!) > > Thanks, > Sanne > > On 15 Sep 2016 11:07, "Tristan Tarrant" wrote: > >> Recently I've had a chat with Galder, Will and Vittorio about how we >> test the Hot Rod server module and the various clients. We also >> discussed some of this in the past, but we now need to move forward with >> a better strategy. >> >> First up is the Hot Rod server module testsuite: it is the only part of >> the code which still uses Scala. 
Will has a partial port of it to Java, >> but we're wondering if it is worth completing that work, seeing that >> most of the tests in that testsuite, in particular those related to the >> protocol itself, are actually duplicated by the Java Hot Rod client's >> testsuite which also happens to be our reference implementation of a >> client and is much more extensive. >> The only downside of removing it is that verification will require >> running the client testsuite, instead of being self-contained. >> >> Next up is how we test clients. >> >> The Java client, partially described above, runs all of the tests >> against ad-hoc embedded servers. Some of these tests, in particular >> those related to topology, start and stop new servers on the fly. >> >> The server integration testsuite performs yet another set of tests, some >> of which overlap the above, but using the actual full-blown server. It >> doesn't test for topology changes. >> >> The C++ client wraps the native client in a Java wrapper generated by >> SWIG and runs the Java client testsuite. It then checks against a >> blacklist of known failures. It also has a small number of native tests >> which use the server distribution. >> >> The Node.js client has its own home-grown testsuite which also uses the >> server distribution. >> >> Duplication aside, which in some cases is unavoidable, it is impossible >> to confidently say that each client is properly tested. >> >> Since complete unification is impossible because of the different >> testing harnesses used by the various platforms/languages, I propose the >> following: >> >> - we identify and group the tests depending on their scope (basic >> protocol ops, bulk ops, topology/failover, security, etc). A client >> which implements the functionality of a group MUST pass all of the tests >> in that group with NO exceptions >> - we assign a unique identifier to each group/test combination (e.g. >> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be >> collected in a "test book" (some kind of structured file) for comparison >> with client test runs >> - we refactor the Java client testsuite according to the above grouping >> / naming strategy so that testsuite which use the wrapping approach >> (i.e. C++ with SWIG) can consume it by directly specifying the supported >> groups >> - other clients get reorganized so that they support the above grouping >> >> I understand this is quite some work, but the current situation isn't >> really sustainable. >> >> Let me know what your thoughts are >> >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160915/1654d1c3/attachment-0001.html From slaskawi at redhat.com Thu Sep 15 07:58:21 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 15 Sep 2016 13:58:21 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: How about turning the problem upside down and creating a TCK suite which runs on JUnit and has pluggable clients? The TCK suite would be responsible for bootstrapping servers, turning them down and validating the results. The biggest advantage of this approach is that all those things are pretty well known in Java world (e.g. using Arquillian for managing server lifecycle or JUnit for assertions). But the biggest challenge is how to plug for example a JavaScript client into the suite? How to call it from Java. Thanks Sebastian On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes wrote: > > > On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero > wrote: > >> I was actually planning to start a similar topic, but from the point of >> view of user's testing needs. >> >> I've recently created Hibernate OGM support for Hot Rod, and it wasn't as >> easy as other NoSQL databases to test; luckily I have some knowledge and >> contact on Infinispan ;) but I had to develop several helpers and refine >> the approach to testing over multiple iterations. >> >> I ended up developing a JUnit rule - handy for individual test runs in >> the IDE - and with a Maven life cycle extension and also with an Arquillian >> extension, which I needed to run both the Hot Rod server and start a >> Wildfly instance to host my client app. >> >> At some point I was also in trouble with conflicting dependencies so >> considered making a Maven plugin to manage the server lifecycle as a proper >> IT phase - I didn't ultimately make this as I found an easier solution but >> it would be great if Infinispan could provide such helpers to end users >> too.. Forking the ANT scripts from the Infinispan project to assemble and >> start my own (as you do..) seems quite cumbersome for users ;) >> >> Especially the server is not even available via Maven coordinates*.* >> > The server is available at [1] > > [1] http://central.maven.org/maven2/org/infinispan/server/ > infinispan-server-build/9.0.0.Alpha4/ > > > >> I'm of course happy to contribute my battle-tested Test helpers to >> Infinispan, but they are meant for JUnit users. >> Finally, comparing to developing OGM integrations for other NoSQL >> stores.. It's really hard work when there is no "viewer" of the cache >> content. >> > We need some kind of interactive console to explore the stored data, I >> felt like driving blind: developing based on black box, when something >> doesn't work as expected it's challenging to figure if one has a bug with >> the storage method rather than the reading method, or maybe the encoding >> not quite right or the query options being used.. sometimes it's the used >> flags or the configuration properties (hell, I've been swearing a lot at >> some of these flags!) >> >> Thanks, >> Sanne >> >> On 15 Sep 2016 11:07, "Tristan Tarrant" wrote: >> >>> Recently I've had a chat with Galder, Will and Vittorio about how we >>> test the Hot Rod server module and the various clients. We also >>> discussed some of this in the past, but we now need to move forward with >>> a better strategy. 
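Before the quoted proposal continues, here is a rough sketch of the plug-in point Sebastian's TCK idea implies; all names are invented, and non-Java clients would sit behind the same interface via a thin adapter that forwards each call to an external driver process:

    import java.util.Map;

    // Invented names: the JUnit-based TCK would only ever talk to this interface.
    // The Java client implements it in-process; a C++ or Node.js binding could be
    // plugged in through an adapter that forwards each call to an external process.
    public interface TckClient extends AutoCloseable {
       void connect(String host, int port);
       void put(String cacheName, byte[] key, byte[] value);
       byte[] get(String cacheName, byte[] key);
       Map<String, String> stats(String cacheName);
       @Override
       void close();
    }

The TCK itself would then bootstrap the servers (for example via Arquillian, as Sebastian suggests), run every test in the groups a given client declares, and never care which language sits behind the interface.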
>>> >>> First up is the Hot Rod server module testsuite: it is the only part of >>> the code which still uses Scala. Will has a partial port of it to Java, >>> but we're wondering if it is worth completing that work, seeing that >>> most of the tests in that testsuite, in particular those related to the >>> protocol itself, are actually duplicated by the Java Hot Rod client's >>> testsuite which also happens to be our reference implementation of a >>> client and is much more extensive. >>> The only downside of removing it is that verification will require >>> running the client testsuite, instead of being self-contained. >>> >>> Next up is how we test clients. >>> >>> The Java client, partially described above, runs all of the tests >>> against ad-hoc embedded servers. Some of these tests, in particular >>> those related to topology, start and stop new servers on the fly. >>> >>> The server integration testsuite performs yet another set of tests, some >>> of which overlap the above, but using the actual full-blown server. It >>> doesn't test for topology changes. >>> >>> The C++ client wraps the native client in a Java wrapper generated by >>> SWIG and runs the Java client testsuite. It then checks against a >>> blacklist of known failures. It also has a small number of native tests >>> which use the server distribution. >>> >>> The Node.js client has its own home-grown testsuite which also uses the >>> server distribution. >>> >>> Duplication aside, which in some cases is unavoidable, it is impossible >>> to confidently say that each client is properly tested. >>> >>> Since complete unification is impossible because of the different >>> testing harnesses used by the various platforms/languages, I propose the >>> following: >>> >>> - we identify and group the tests depending on their scope (basic >>> protocol ops, bulk ops, topology/failover, security, etc). A client >>> which implements the functionality of a group MUST pass all of the tests >>> in that group with NO exceptions >>> - we assign a unique identifier to each group/test combination (e.g. >>> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be >>> collected in a "test book" (some kind of structured file) for comparison >>> with client test runs >>> - we refactor the Java client testsuite according to the above grouping >>> / naming strategy so that testsuite which use the wrapping approach >>> (i.e. C++ with SWIG) can consume it by directly specifying the supported >>> groups >>> - other clients get reorganized so that they support the above grouping >>> >>> I understand this is quite some work, but the current situation isn't >>> really sustainable. >>> >>> Let me know what your thoughts are >>> >>> >>> Tristan >>> -- >>> Tristan Tarrant >>> Infinispan Lead >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160915/c0124bd0/attachment.html From ttarrant at redhat.com Thu Sep 15 08:22:12 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 15 Sep 2016 14:22:12 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: <1bc915f3-8b51-68f9-7381-2047650abbfe@redhat.com> On 15/09/16 13:33, Sanne Grinovero wrote: > Especially the server is not even available via Maven coordinates. You didn't try hard enough: org.infinispan.server:infinispan-server:9.0.0.Alpha4:zip:bin org.infinispan.server infinispan-server 9.0.0.Alpha4 zip bin :) Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Thu Sep 15 08:24:30 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 15 Sep 2016 14:24:30 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> Whatever we choose, this solves only half of the problem: enumerating and classifying the tests is the hard part. Tristan On 15/09/16 13:58, Sebastian Laskawiec wrote: > How about turning the problem upside down and creating a TCK suite > which runs on JUnit and has pluggable clients? The TCK suite would be > responsible for bootstrapping servers, turning them down and > validating the results. > > The biggest advantage of this approach is that all those things are > pretty well known in Java world (e.g. using Arquillian for managing > server lifecycle or JUnit for assertions). But the biggest challenge > is how to plug for example a JavaScript client into the suite? How to > call it from Java. > > Thanks > Sebastian > > On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes > > wrote: > > > > On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero > > wrote: > > I was actually planning to start a similar topic, but from the > point of view of user's testing needs. > > I've recently created Hibernate OGM support for Hot Rod, and > it wasn't as easy as other NoSQL databases to test; luckily I > have some knowledge and contact on Infinispan ;) but I had to > develop several helpers and refine the approach to testing > over multiple iterations. > > I ended up developing a JUnit rule - handy for individual test > runs in the IDE - and with a Maven life cycle extension and > also with an Arquillian extension, which I needed to run both > the Hot Rod server and start a Wildfly instance to host my > client app. > > At some point I was also in trouble with conflicting > dependencies so considered making a Maven plugin to manage the > server lifecycle as a proper IT phase - I didn't ultimately > make this as I found an easier solution but it would be great > if Infinispan could provide such helpers to end users too.. > Forking the ANT scripts from the Infinispan project to > assemble and start my own (as you do..) seems quite cumbersome > for users ;) > > Especially the server is not even available via Maven > coordinates/./ > > The server is available at [1] > > [1] > http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ > > > I'm of course happy to contribute my battle-tested Test > helpers to Infinispan, but they are meant for JUnit users. > Finally, comparing to developing OGM integrations for other > NoSQL stores.. It's really hard work when there is no "viewer" > of the cache content. 
> > We need some kind of interactive console to explore the stored > data, I felt like driving blind: developing based on black > box, when something doesn't work as expected it's challenging > to figure if one has a bug with the storage method rather than > the reading method, or maybe the encoding not quite right or > the query options being used.. sometimes it's the used flags > or the configuration properties (hell, I've been swearing a > lot at some of these flags!) > > Thanks, > Sanne > > > On 15 Sep 2016 11:07, "Tristan Tarrant" > wrote: > > Recently I've had a chat with Galder, Will and Vittorio > about how we > test the Hot Rod server module and the various clients. We > also > discussed some of this in the past, but we now need to > move forward with > a better strategy. > > First up is the Hot Rod server module testsuite: it is the > only part of > the code which still uses Scala. Will has a partial port > of it to Java, > but we're wondering if it is worth completing that work, > seeing that > most of the tests in that testsuite, in particular those > related to the > protocol itself, are actually duplicated by the Java Hot > Rod client's > testsuite which also happens to be our reference > implementation of a > client and is much more extensive. > The only downside of removing it is that verification > will require > running the client testsuite, instead of being self-contained. > > Next up is how we test clients. > > The Java client, partially described above, runs all of > the tests > against ad-hoc embedded servers. Some of these tests, in > particular > those related to topology, start and stop new servers on > the fly. > > The server integration testsuite performs yet another set > of tests, some > of which overlap the above, but using the actual > full-blown server. It > doesn't test for topology changes. > > The C++ client wraps the native client in a Java wrapper > generated by > SWIG and runs the Java client testsuite. It then checks > against a > blacklist of known failures. It also has a small number of > native tests > which use the server distribution. > > The Node.js client has its own home-grown testsuite which > also uses the > server distribution. > > Duplication aside, which in some cases is unavoidable, it > is impossible > to confidently say that each client is properly tested. > > Since complete unification is impossible because of the > different > testing harnesses used by the various platforms/languages, > I propose the > following: > > - we identify and group the tests depending on their scope > (basic > protocol ops, bulk ops, topology/failover, security, etc). > A client > which implements the functionality of a group MUST pass > all of the tests > in that group with NO exceptions > - we assign a unique identifier to each group/test > combination (e.g. > HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These > should be > collected in a "test book" (some kind of structured file) > for comparison > with client test runs > - we refactor the Java client testsuite according to the > above grouping > / naming strategy so that testsuite which use the wrapping > approach > (i.e. C++ with SWIG) can consume it by directly specifying > the supported > groups > - other clients get reorganized so that they support the > above grouping > > I understand this is quite some work, but the current > situation isn't > really sustainable. 
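A concrete shape for that enumeration, whether the "test book" ends up as a structured file or a shared class, could be as small as a registry of group/identifier pairs. Only HR.BASIC.PUT and HR.BASIC.PUT_FLAGS_SKIP_LOAD below come from the proposal itself; the class name and the remaining entries are made up purely for illustration:

    public enum HotRodTestBook {
        // basic protocol operations
        HR_BASIC_PUT("HR.BASIC", "HR.BASIC.PUT"),
        HR_BASIC_PUT_FLAGS_SKIP_LOAD("HR.BASIC", "HR.BASIC.PUT_FLAGS_SKIP_LOAD"),
        // bulk operations
        HR_BULK_GET_ALL("HR.BULK", "HR.BULK.GET_ALL"),
        // topology / failover
        HR_FAILOVER_KILL_PRIMARY("HR.FAILOVER", "HR.FAILOVER.KILL_PRIMARY");

        public final String group;
        public final String id;

        HotRodTestBook(String group, String id) {
            this.group = group;
            this.id = id;
        }
    }

Comparing two clients then becomes a diff of which identifiers each run reports as passed.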
> > Let me know what your thoughts are > > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Thu Sep 15 12:27:54 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 15 Sep 2016 18:27:54 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> Message-ID: <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> Anyway, I like the idea. Can we sketch a POC ? Tristan On 15/09/16 14:24, Tristan Tarrant wrote: > Whatever we choose, this solves only half of the problem: enumerating > and classifying the tests is the hard part. > > Tristan > > On 15/09/16 13:58, Sebastian Laskawiec wrote: >> How about turning the problem upside down and creating a TCK suite >> which runs on JUnit and has pluggable clients? The TCK suite would be >> responsible for bootstrapping servers, turning them down and >> validating the results. >> >> The biggest advantage of this approach is that all those things are >> pretty well known in Java world (e.g. using Arquillian for managing >> server lifecycle or JUnit for assertions). But the biggest challenge >> is how to plug for example a JavaScript client into the suite? How to >> call it from Java. >> >> Thanks >> Sebastian >> >> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes >> > wrote: >> >> >> >> On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero >> > wrote: >> >> I was actually planning to start a similar topic, but from the >> point of view of user's testing needs. >> >> I've recently created Hibernate OGM support for Hot Rod, and >> it wasn't as easy as other NoSQL databases to test; luckily I >> have some knowledge and contact on Infinispan ;) but I had to >> develop several helpers and refine the approach to testing >> over multiple iterations. >> >> I ended up developing a JUnit rule - handy for individual test >> runs in the IDE - and with a Maven life cycle extension and >> also with an Arquillian extension, which I needed to run both >> the Hot Rod server and start a Wildfly instance to host my >> client app. >> >> At some point I was also in trouble with conflicting >> dependencies so considered making a Maven plugin to manage the >> server lifecycle as a proper IT phase - I didn't ultimately >> make this as I found an easier solution but it would be great >> if Infinispan could provide such helpers to end users too.. >> Forking the ANT scripts from the Infinispan project to >> assemble and start my own (as you do..) 
seems quite cumbersome >> for users ;) >> >> Especially the server is not even available via Maven >> coordinates/./ >> >> The server is available at [1] >> >> [1] >> http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ >> >> >> I'm of course happy to contribute my battle-tested Test >> helpers to Infinispan, but they are meant for JUnit users. >> Finally, comparing to developing OGM integrations for other >> NoSQL stores.. It's really hard work when there is no "viewer" >> of the cache content. >> >> We need some kind of interactive console to explore the stored >> data, I felt like driving blind: developing based on black >> box, when something doesn't work as expected it's challenging >> to figure if one has a bug with the storage method rather than >> the reading method, or maybe the encoding not quite right or >> the query options being used.. sometimes it's the used flags >> or the configuration properties (hell, I've been swearing a >> lot at some of these flags!) >> >> Thanks, >> Sanne >> >> >> On 15 Sep 2016 11:07, "Tristan Tarrant" > > wrote: >> >> Recently I've had a chat with Galder, Will and Vittorio >> about how we >> test the Hot Rod server module and the various clients. We >> also >> discussed some of this in the past, but we now need to >> move forward with >> a better strategy. >> >> First up is the Hot Rod server module testsuite: it is the >> only part of >> the code which still uses Scala. Will has a partial port >> of it to Java, >> but we're wondering if it is worth completing that work, >> seeing that >> most of the tests in that testsuite, in particular those >> related to the >> protocol itself, are actually duplicated by the Java Hot >> Rod client's >> testsuite which also happens to be our reference >> implementation of a >> client and is much more extensive. >> The only downside of removing it is that verification >> will require >> running the client testsuite, instead of being >> self-contained. >> >> Next up is how we test clients. >> >> The Java client, partially described above, runs all of >> the tests >> against ad-hoc embedded servers. Some of these tests, in >> particular >> those related to topology, start and stop new servers on >> the fly. >> >> The server integration testsuite performs yet another set >> of tests, some >> of which overlap the above, but using the actual >> full-blown server. It >> doesn't test for topology changes. >> >> The C++ client wraps the native client in a Java wrapper >> generated by >> SWIG and runs the Java client testsuite. It then checks >> against a >> blacklist of known failures. It also has a small number of >> native tests >> which use the server distribution. >> >> The Node.js client has its own home-grown testsuite which >> also uses the >> server distribution. >> >> Duplication aside, which in some cases is unavoidable, it >> is impossible >> to confidently say that each client is properly tested. >> >> Since complete unification is impossible because of the >> different >> testing harnesses used by the various platforms/languages, >> I propose the >> following: >> >> - we identify and group the tests depending on their scope >> (basic >> protocol ops, bulk ops, topology/failover, security, etc). >> A client >> which implements the functionality of a group MUST pass >> all of the tests >> in that group with NO exceptions >> - we assign a unique identifier to each group/test >> combination (e.g. >> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). 
These >> should be >> collected in a "test book" (some kind of structured file) >> for comparison >> with client test runs >> - we refactor the Java client testsuite according to the >> above grouping >> / naming strategy so that testsuite which use the wrapping >> approach >> (i.e. C++ with SWIG) can consume it by directly specifying >> the supported >> groups >> - other clients get reorganized so that they support the >> above grouping >> >> I understand this is quite some work, but the current >> situation isn't >> really sustainable. >> >> Let me know what your thoughts are >> >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From afield at redhat.com Thu Sep 15 12:42:04 2016 From: afield at redhat.com (Alan Field) Date: Thu, 15 Sep 2016 12:42:04 -0400 (EDT) Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> Message-ID: <1258153056.11617352.1473957724218.JavaMail.zimbra@redhat.com> I also like this idea for a Unit-Based TCK for all clients, if this is possible. > - we identify and group the tests depending on their scope (basic > protocol ops, bulk ops, topology/failover, security, etc). A client > which implements the functionality of a group MUST pass all of the tests > in that group with NO exceptions This makes sense to me, but I also agree that the hard part will be in categorizing the tests into these buckets. Should the groups be divided by intelligence as well? I'm just wondering about "dumb" clients like REST and Memcached. > - we assign a unique identifier to each group/test combination (e.g. > HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be > collected in a "test book" (some kind of structured file) for comparison > with client test runs Are these identifiers just used as the JUNit test group names? > - we refactor the Java client testsuite according to the above grouping > / naming strategy so that testsuite which use the wrapping approach > (i.e. C++ with SWIG) can consume it by directly specifying the supported > groups This makes sense to me as well. I think the other requirements here are that the client tests must use a real server distribution and not the embedded server. Any non-duplicated tests from the server integration test suite have to be migrated to the client test suite as well. 
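If the identifiers double as group names, a client-side test can carry them directly on the annotation, and a wrapping run (e.g. the SWIG-based C++ one) simply selects the groups it claims to support. A rough sketch using TestNG, which the Java client testsuite already uses; the class name is invented and a server distribution is assumed to be already listening on localhost:11222:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
    import org.testng.annotations.AfterClass;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.Test;
    import static org.testng.AssertJUnit.assertEquals;

    public class BasicOpsIT {

        private RemoteCacheManager rcm;
        private RemoteCache<String, String> cache;

        @BeforeClass
        public void connect() {
            // talks to an externally started server, not an embedded one
            rcm = new RemoteCacheManager(new ConfigurationBuilder()
                    .addServer().host("127.0.0.1").port(11222).build());
            cache = rcm.getCache();
        }

        @Test(groups = {"HR.BASIC", "HR.BASIC.PUT"})
        public void put() {
            cache.put("k", "v");
            assertEquals("v", cache.get("k"));
        }

        @AfterClass
        public void disconnect() {
            rcm.stop();
        }
    }

Selecting what runs for a particular client is then just a matter of which group names are passed to the TestNG invocation.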
I think this also is an opportunity to inventory the client test suite and reduce it to the most minimal number of tests that verify the adherence to the protocol and expected behavior beyond the protocol. Thanks, Alan ----- Original Message ----- > From: "Tristan Tarrant" > To: infinispan-dev at lists.jboss.org > Sent: Thursday, September 15, 2016 12:27:54 PM > Subject: Re: [infinispan-dev] Hot Rod testing > > Anyway, I like the idea. Can we sketch a POC ? > > Tristan > > > On 15/09/16 14:24, Tristan Tarrant wrote: > > Whatever we choose, this solves only half of the problem: enumerating > > and classifying the tests is the hard part. > > > > Tristan > > > > On 15/09/16 13:58, Sebastian Laskawiec wrote: > >> How about turning the problem upside down and creating a TCK suite > >> which runs on JUnit and has pluggable clients? The TCK suite would be > >> responsible for bootstrapping servers, turning them down and > >> validating the results. > >> > >> The biggest advantage of this approach is that all those things are > >> pretty well known in Java world (e.g. using Arquillian for managing > >> server lifecycle or JUnit for assertions). But the biggest challenge > >> is how to plug for example a JavaScript client into the suite? How to > >> call it from Java. > >> > >> Thanks > >> Sebastian > >> > >> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes > >> > wrote: > >> > >> > >> > >> On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero > >> > wrote: > >> > >> I was actually planning to start a similar topic, but from the > >> point of view of user's testing needs. > >> > >> I've recently created Hibernate OGM support for Hot Rod, and > >> it wasn't as easy as other NoSQL databases to test; luckily I > >> have some knowledge and contact on Infinispan ;) but I had to > >> develop several helpers and refine the approach to testing > >> over multiple iterations. > >> > >> I ended up developing a JUnit rule - handy for individual test > >> runs in the IDE - and with a Maven life cycle extension and > >> also with an Arquillian extension, which I needed to run both > >> the Hot Rod server and start a Wildfly instance to host my > >> client app. > >> > >> At some point I was also in trouble with conflicting > >> dependencies so considered making a Maven plugin to manage the > >> server lifecycle as a proper IT phase - I didn't ultimately > >> make this as I found an easier solution but it would be great > >> if Infinispan could provide such helpers to end users too.. > >> Forking the ANT scripts from the Infinispan project to > >> assemble and start my own (as you do..) seems quite cumbersome > >> for users ;) > >> > >> Especially the server is not even available via Maven > >> coordinates/./ > >> > >> The server is available at [1] > >> > >> [1] > >> http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ > >> > >> > >> I'm of course happy to contribute my battle-tested Test > >> helpers to Infinispan, but they are meant for JUnit users. > >> Finally, comparing to developing OGM integrations for other > >> NoSQL stores.. It's really hard work when there is no "viewer" > >> of the cache content. > >> > >> We need some kind of interactive console to explore the stored > >> data, I felt like driving blind: developing based on black > >> box, when something doesn't work as expected it's challenging > >> to figure if one has a bug with the storage method rather than > >> the reading method, or maybe the encoding not quite right or > >> the query options being used.. 
sometimes it's the used flags > >> or the configuration properties (hell, I've been swearing a > >> lot at some of these flags!) > >> > >> Thanks, > >> Sanne > >> > >> > >> On 15 Sep 2016 11:07, "Tristan Tarrant" >> > wrote: > >> > >> Recently I've had a chat with Galder, Will and Vittorio > >> about how we > >> test the Hot Rod server module and the various clients. We > >> also > >> discussed some of this in the past, but we now need to > >> move forward with > >> a better strategy. > >> > >> First up is the Hot Rod server module testsuite: it is the > >> only part of > >> the code which still uses Scala. Will has a partial port > >> of it to Java, > >> but we're wondering if it is worth completing that work, > >> seeing that > >> most of the tests in that testsuite, in particular those > >> related to the > >> protocol itself, are actually duplicated by the Java Hot > >> Rod client's > >> testsuite which also happens to be our reference > >> implementation of a > >> client and is much more extensive. > >> The only downside of removing it is that verification > >> will require > >> running the client testsuite, instead of being > >> self-contained. > >> > >> Next up is how we test clients. > >> > >> The Java client, partially described above, runs all of > >> the tests > >> against ad-hoc embedded servers. Some of these tests, in > >> particular > >> those related to topology, start and stop new servers on > >> the fly. > >> > >> The server integration testsuite performs yet another set > >> of tests, some > >> of which overlap the above, but using the actual > >> full-blown server. It > >> doesn't test for topology changes. > >> > >> The C++ client wraps the native client in a Java wrapper > >> generated by > >> SWIG and runs the Java client testsuite. It then checks > >> against a > >> blacklist of known failures. It also has a small number of > >> native tests > >> which use the server distribution. > >> > >> The Node.js client has its own home-grown testsuite which > >> also uses the > >> server distribution. > >> > >> Duplication aside, which in some cases is unavoidable, it > >> is impossible > >> to confidently say that each client is properly tested. > >> > >> Since complete unification is impossible because of the > >> different > >> testing harnesses used by the various platforms/languages, > >> I propose the > >> following: > >> > >> - we identify and group the tests depending on their scope > >> (basic > >> protocol ops, bulk ops, topology/failover, security, etc). > >> A client > >> which implements the functionality of a group MUST pass > >> all of the tests > >> in that group with NO exceptions > >> - we assign a unique identifier to each group/test > >> combination (e.g. > >> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These > >> should be > >> collected in a "test book" (some kind of structured file) > >> for comparison > >> with client test runs > >> - we refactor the Java client testsuite according to the > >> above grouping > >> / naming strategy so that testsuite which use the wrapping > >> approach > >> (i.e. C++ with SWIG) can consume it by directly specifying > >> the supported > >> groups > >> - other clients get reorganized so that they support the > >> above grouping > >> > >> I understand this is quite some work, but the current > >> situation isn't > >> really sustainable. 
> >> > >> Let me know what your thoughts are > >> > >> > >> Tristan > >> -- > >> Tristan Tarrant > >> Infinispan Lead > >> JBoss, a division of Red Hat > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From vrigamon at redhat.com Thu Sep 15 12:47:27 2016 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Thu, 15 Sep 2016 12:47:27 -0400 (EDT) Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> Message-ID: <1304229521.86892974.1473958047804.JavaMail.zimbra@redhat.com> I feel, but I'm not sure, that we first need to define what we want to test: I mean enumerate and organize the requirements could probably be the right starting point. Of course Sebastian's approach could be right if we can imagine a tool that can enforce a requirement's organizational model. Vittorio ----- Original Message ----- From: "Tristan Tarrant" To: infinispan-dev at lists.jboss.org Sent: Thursday, September 15, 2016 6:27:54 PM Subject: Re: [infinispan-dev] Hot Rod testing Anyway, I like the idea. Can we sketch a POC ? Tristan On 15/09/16 14:24, Tristan Tarrant wrote: > Whatever we choose, this solves only half of the problem: enumerating > and classifying the tests is the hard part. > > Tristan > > On 15/09/16 13:58, Sebastian Laskawiec wrote: >> How about turning the problem upside down and creating a TCK suite >> which runs on JUnit and has pluggable clients? The TCK suite would be >> responsible for bootstrapping servers, turning them down and >> validating the results. >> >> The biggest advantage of this approach is that all those things are >> pretty well known in Java world (e.g. using Arquillian for managing >> server lifecycle or JUnit for assertions). But the biggest challenge >> is how to plug for example a JavaScript client into the suite? How to >> call it from Java. >> >> Thanks >> Sebastian >> >> On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes >> > wrote: >> >> >> >> On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero >> > wrote: >> >> I was actually planning to start a similar topic, but from the >> point of view of user's testing needs. 
>> >> I've recently created Hibernate OGM support for Hot Rod, and >> it wasn't as easy as other NoSQL databases to test; luckily I >> have some knowledge and contact on Infinispan ;) but I had to >> develop several helpers and refine the approach to testing >> over multiple iterations. >> >> I ended up developing a JUnit rule - handy for individual test >> runs in the IDE - and with a Maven life cycle extension and >> also with an Arquillian extension, which I needed to run both >> the Hot Rod server and start a Wildfly instance to host my >> client app. >> >> At some point I was also in trouble with conflicting >> dependencies so considered making a Maven plugin to manage the >> server lifecycle as a proper IT phase - I didn't ultimately >> make this as I found an easier solution but it would be great >> if Infinispan could provide such helpers to end users too.. >> Forking the ANT scripts from the Infinispan project to >> assemble and start my own (as you do..) seems quite cumbersome >> for users ;) >> >> Especially the server is not even available via Maven >> coordinates/./ >> >> The server is available at [1] >> >> [1] >> http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ >> >> >> I'm of course happy to contribute my battle-tested Test >> helpers to Infinispan, but they are meant for JUnit users. >> Finally, comparing to developing OGM integrations for other >> NoSQL stores.. It's really hard work when there is no "viewer" >> of the cache content. >> >> We need some kind of interactive console to explore the stored >> data, I felt like driving blind: developing based on black >> box, when something doesn't work as expected it's challenging >> to figure if one has a bug with the storage method rather than >> the reading method, or maybe the encoding not quite right or >> the query options being used.. sometimes it's the used flags >> or the configuration properties (hell, I've been swearing a >> lot at some of these flags!) >> >> Thanks, >> Sanne >> >> >> On 15 Sep 2016 11:07, "Tristan Tarrant" > > wrote: >> >> Recently I've had a chat with Galder, Will and Vittorio >> about how we >> test the Hot Rod server module and the various clients. We >> also >> discussed some of this in the past, but we now need to >> move forward with >> a better strategy. >> >> First up is the Hot Rod server module testsuite: it is the >> only part of >> the code which still uses Scala. Will has a partial port >> of it to Java, >> but we're wondering if it is worth completing that work, >> seeing that >> most of the tests in that testsuite, in particular those >> related to the >> protocol itself, are actually duplicated by the Java Hot >> Rod client's >> testsuite which also happens to be our reference >> implementation of a >> client and is much more extensive. >> The only downside of removing it is that verification >> will require >> running the client testsuite, instead of being >> self-contained. >> >> Next up is how we test clients. >> >> The Java client, partially described above, runs all of >> the tests >> against ad-hoc embedded servers. Some of these tests, in >> particular >> those related to topology, start and stop new servers on >> the fly. >> >> The server integration testsuite performs yet another set >> of tests, some >> of which overlap the above, but using the actual >> full-blown server. It >> doesn't test for topology changes. 
>> >> The C++ client wraps the native client in a Java wrapper >> generated by >> SWIG and runs the Java client testsuite. It then checks >> against a >> blacklist of known failures. It also has a small number of >> native tests >> which use the server distribution. >> >> The Node.js client has its own home-grown testsuite which >> also uses the >> server distribution. >> >> Duplication aside, which in some cases is unavoidable, it >> is impossible >> to confidently say that each client is properly tested. >> >> Since complete unification is impossible because of the >> different >> testing harnesses used by the various platforms/languages, >> I propose the >> following: >> >> - we identify and group the tests depending on their scope >> (basic >> protocol ops, bulk ops, topology/failover, security, etc). >> A client >> which implements the functionality of a group MUST pass >> all of the tests >> in that group with NO exceptions >> - we assign a unique identifier to each group/test >> combination (e.g. >> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These >> should be >> collected in a "test book" (some kind of structured file) >> for comparison >> with client test runs >> - we refactor the Java client testsuite according to the >> above grouping >> / naming strategy so that testsuite which use the wrapping >> approach >> (i.e. C++ with SWIG) can consume it by directly specifying >> the supported >> groups >> - other clients get reorganized so that they support the >> above grouping >> >> I understand this is quite some work, but the current >> situation isn't >> really sustainable. >> >> Let me know what your thoughts are >> >> >> Tristan >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu Sep 15 13:48:58 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 Sep 2016 20:48:58 +0300 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <1258153056.11617352.1473957724218.JavaMail.zimbra@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <649bf650-0b89-03d9-06c3-66e6fe62134c@infinispan.org> <9ebb5285-7c83-ee11-addf-581a4a566759@redhat.com> <1258153056.11617352.1473957724218.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Sep 15, 2016 at 7:42 PM, Alan Field wrote: > I also like this idea for a Unit-Based TCK for all clients, if this is possible. > >> - we identify and group the tests depending on their scope (basic >> protocol ops, bulk ops, topology/failover, security, etc). 
A client >> which implements the functionality of a group MUST pass all of the tests >> in that group with NO exceptions > > This makes sense to me, but I also agree that the hard part will be in categorizing the tests into these buckets. Should the groups be divided by intelligence as well? I'm just wondering about "dumb" clients like REST and Memcached. > >> - we assign a unique identifier to each group/test combination (e.g. >> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be >> collected in a "test book" (some kind of structured file) for comparison >> with client test runs > > Are these identifiers just used as the JUNit test group names? > >> - we refactor the Java client testsuite according to the above grouping >> / naming strategy so that testsuite which use the wrapping approach >> (i.e. C++ with SWIG) can consume it by directly specifying the supported >> groups > > This makes sense to me as well. > > I think the other requirements here are that the client tests must use a real server distribution and not the embedded server. Any non-duplicated tests from the server integration test suite have to be migrated to the client test suite as well. I think this also is an opportunity to inventory the client test suite and reduce it to the most minimal number of tests that verify the adherence to the protocol and expected behavior beyond the protocol. > Reducing the number of tests may not be so easy... remember that we need to test all versions of the protocol, not just the latest one. And we still need to test stuff that's not explicitly in the protocol, especially around state transfer/server crashes and around query (which the protocol says almost nothing about). More importantly, if I have to rebuild the entire server distribution every time I make a change in the HR server, then I'm pretty sure I won't touch the HR server again :) Cheers Dan From sanne at infinispan.org Mon Sep 19 07:39:30 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 Sep 2016 12:39:30 +0100 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? Message-ID: Hi all, I'm testing a concurrent update protocol based on Hot Rod client's support for versioned entries. I would love to be able to write a test having multiple client endpoints, each connecting to a specific server. Of course HR's "smart routing" prevents me from controlling this explicitly.. is there a way to control it? For example having servers {A, B} connected in cluster I'd like to create two clients {A', B}', each one connected exclusively to one of them and not aware of the other server node. (A <-> A', B <-> B'). Thanks, Sanne From ttarrant at redhat.com Mon Sep 19 08:52:49 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 19 Sep 2016 14:52:49 +0200 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? In-Reply-To: References: Message-ID: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Currently thee Java client always sets the client intelligence header to 3 (topo + ch aware). We could add a configuration property so that you could specify 1 (basic). The alternative requires playing with the transport and playing with the "failed servers" but that is messy ! Tristan On 19/09/16 13:39, Sanne Grinovero wrote: > Hi all, > > I'm testing a concurrent update protocol based on Hot Rod client's > support for versioned entries. 
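Stripped of the test harness, the optimistic update being exercised boils down to the client's versioned operations, roughly like this (a sketch rather than the actual test code; it assumes the key already exists):

    import org.infinispan.client.hotrod.MetadataValue;
    import org.infinispan.client.hotrod.RemoteCache;

    final class VersionedUpdate {

        // Retries an optimistic update of a single key until no concurrent
        // writer has invalidated the version we read.
        static void append(RemoteCache<String, String> cache, String key, String suffix) {
            while (true) {
                MetadataValue<String> current = cache.getWithMetadata(key);
                String next = current.getValue() + suffix;
                // succeeds only if the server-side version still matches the
                // one returned by getWithMetadata above
                if (cache.replaceWithVersion(key, next, current.getVersion())) {
                    return;
                }
            }
        }
    }

replaceWithVersion returning false is the signal that another client won the race, which is exactly the situation such a test needs to provoke from two distinct servers.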
> > I would love to be able to write a test having multiple client > endpoints, each connecting to a specific server. Of course HR's "smart > routing" prevents me from controlling this explicitly.. is there a way > to control it? > > For example having servers {A, B} connected in cluster I'd like to > create two clients {A', B}', each one connected exclusively to one of > them and not aware of the other server node. (A <-> A', B <-> B'). > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From gustavo at infinispan.org Mon Sep 19 09:14:42 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 19 Sep 2016 14:14:42 +0100 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? In-Reply-To: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> References: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Message-ID: You can try to generate keys that hash to specific servers and use them. >From the Hot Rod client you can get .getCacheTopologyInfo that gives you the segment ownership for the servers in the cluster, and to target a specific server you'd craft a key that maps to a specific segment owned by that server. I'd imagine the implementation to be similar to [1] but for client-server mode. [1] https://github.com/infinispan/infinispan/blob/master/core/sr c/test/java/org/infinispan/distribution/MagicKey.java On Mon, Sep 19, 2016 at 1:52 PM, Tristan Tarrant wrote: > Currently thee Java client always sets the client intelligence header to > 3 (topo + ch aware). We could add a configuration property so that you > could specify 1 (basic). > The alternative requires playing with the transport and playing with the > "failed servers" but that is messy ! > > Tristan > > On 19/09/16 13:39, Sanne Grinovero wrote: > > Hi all, > > > > I'm testing a concurrent update protocol based on Hot Rod client's > > support for versioned entries. > > > > I would love to be able to write a test having multiple client > > endpoints, each connecting to a specific server. Of course HR's "smart > > routing" prevents me from controlling this explicitly.. is there a way > > to control it? > > > > For example having servers {A, B} connected in cluster I'd like to > > create two clients {A', B}', each one connected exclusively to one of > > them and not aware of the other server node. (A <-> A', B <-> B'). > > > > Thanks, > > Sanne > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160919/36395d43/attachment.html From sanne at infinispan.org Mon Sep 19 09:19:08 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 Sep 2016 14:19:08 +0100 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? 
In-Reply-To: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> References: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Message-ID: On 19 September 2016 at 13:52, Tristan Tarrant wrote: > Currently thee Java client always sets the client intelligence header to > 3 (topo + ch aware). We could add a configuration property so that you > could specify 1 (basic). > The alternative requires playing with the transport and playing with the > "failed servers" but that is messy ! I'd also need the client to not "autodiscover" the nodes I didn't explicitly list. Would setting a "1" in this intelligence header be enough for that? Thanks! Sanne > > Tristan > > On 19/09/16 13:39, Sanne Grinovero wrote: >> Hi all, >> >> I'm testing a concurrent update protocol based on Hot Rod client's >> support for versioned entries. >> >> I would love to be able to write a test having multiple client >> endpoints, each connecting to a specific server. Of course HR's "smart >> routing" prevents me from controlling this explicitly.. is there a way >> to control it? >> >> For example having servers {A, B} connected in cluster I'd like to >> create two clients {A', B}', each one connected exclusively to one of >> them and not aware of the other server node. (A <-> A', B <-> B'). >> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Mon Sep 19 09:23:17 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 Sep 2016 14:23:17 +0100 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? In-Reply-To: References: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Message-ID: On 19 September 2016 at 14:14, Gustavo Fernandes wrote: > You can try to generate keys that hash to specific servers and use them. > From the Hot Rod client you can > get .getCacheTopologyInfo that gives you the segment ownership for the > servers in the cluster, and to target > a specific server you'd craft a key that maps to a specific segment owned by > that server. I'd imagine the > implementation to be similar to [1] but for client-server mode. In my case all writes happen on the same key: I want to verify my understanding of the versioning semantics is good enough to resolve conflicting writes on the same key, even though each client might physically connect to a different owner because of the topology being dynamic or network issues. > > [1] > https://github.com/infinispan/infinispan/blob/master/core/src/test/java/org/infinispan/distribution/MagicKey.java > > > On Mon, Sep 19, 2016 at 1:52 PM, Tristan Tarrant > wrote: >> >> Currently thee Java client always sets the client intelligence header to >> 3 (topo + ch aware). We could add a configuration property so that you >> could specify 1 (basic). >> The alternative requires playing with the transport and playing with the >> "failed servers" but that is messy ! >> >> Tristan >> >> On 19/09/16 13:39, Sanne Grinovero wrote: >> > Hi all, >> > >> > I'm testing a concurrent update protocol based on Hot Rod client's >> > support for versioned entries. >> > >> > I would love to be able to write a test having multiple client >> > endpoints, each connecting to a specific server. 
Of course HR's "smart >> > routing" prevents me from controlling this explicitly.. is there a way >> > to control it? >> > >> > For example having servers {A, B} connected in cluster I'd like to >> > create two clients {A', B}', each one connected exclusively to one of >> > them and not aware of the other server node. (A <-> A', B <-> B'). >> > >> > Thanks, >> > Sanne >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Sep 19 09:24:27 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 19 Sep 2016 15:24:27 +0200 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? In-Reply-To: References: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Message-ID: Yes, it means the client is not topology aware either. Tristan On 19/09/16 15:19, Sanne Grinovero wrote: > On 19 September 2016 at 13:52, Tristan Tarrant wrote: >> Currently thee Java client always sets the client intelligence header to >> 3 (topo + ch aware). We could add a configuration property so that you >> could specify 1 (basic). >> The alternative requires playing with the transport and playing with the >> "failed servers" but that is messy ! > I'd also need the client to not "autodiscover" the nodes I didn't > explicitly list. > Would setting a "1" in this intelligence header be enough for that? > > Thanks! > Sanne > > >> Tristan >> >> On 19/09/16 13:39, Sanne Grinovero wrote: >>> Hi all, >>> >>> I'm testing a concurrent update protocol based on Hot Rod client's >>> support for versioned entries. >>> >>> I would love to be able to write a test having multiple client >>> endpoints, each connecting to a specific server. Of course HR's "smart >>> routing" prevents me from controlling this explicitly.. is there a way >>> to control it? >>> >>> For example having servers {A, B} connected in cluster I'd like to >>> create two clients {A', B}', each one connected exclusively to one of >>> them and not aware of the other server node. (A <-> A', B <-> B'). >>> >>> Thanks, >>> Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From rory.odonnell at oracle.com Tue Sep 20 06:26:55 2016 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 20 Sep 2016 11:26:55 +0100 Subject: [infinispan-dev] Early Access build 136 for JDK 9 & JDK 9 with Project Jigsaw are available on java.net Message-ID: Hi Galder, Early Access b136 for JDK 9 is available on java.net, summary of changes are listed here . 
Early Access b136 (#5506) for JDK 9 with Project Jigsaw is available on java.net, summary of changes are listed here . There have been a number of fixes to bugs reported by Open Source projects since the last availability email : * 8165723 - b136 - core-libs JarFile::isMultiRelease() method returns false when it should return true * 8165116 - b136 - xml redirect function is not allowed even with enableExtensionFunctions NOTE:- Build 135 included a fix for JDK-8161016 which *has introduced a behavioral change to HttpURLConnection, more info:* The behavior of HttpURLConnection when using a ProxySelector has been modified with this JDK release. Currently, HttpURLConnection.connect() call would fallback to a DIRECT connection attempt if the configured proxy/proxies failed to make a connection. This release introduces a change whereby no DIRECT connection will be attempted in such a scenario. Instead, the HttpURLConnection.connect() method will fail and throw an IOException which occurred from the last proxy tested. This behavior now matches with the HTTP connections made by popular web browsers. But this change will bring compatibility issues for the applications expecting a DIRECT connection when a proxy server is down or when wrong proxies are provided. * JDK 9 Outreach Survey* In order to encourage and receive additional feedback from developers testing their applications with JDK 9, the OpenJDK Quality Outreach effort has put together a very brief survey about your experiences with JDK 9 so far. It is available at***https://www.surveymonkey.de/r/JDK9EA* We would love to hear feedback from you! Rgds,Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160920/39cbdd6f/attachment.html From ttarrant at redhat.com Wed Sep 21 10:48:01 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 21 Sep 2016 16:48:01 +0200 Subject: [infinispan-dev] Testing a Hot Rod client app, with smart routing disabled? In-Reply-To: References: <2f840fa5-9abb-6032-225a-1b9477c01129@infinispan.org> Message-ID: Sanne, this should work for you: https://github.com/infinispan/infinispan/pull/4561 You're welcome :) Tristan On 19/09/16 15:24, Tristan Tarrant wrote: > Yes, it means the client is not topology aware either. > > Tristan > > On 19/09/16 15:19, Sanne Grinovero wrote: >> On 19 September 2016 at 13:52, Tristan Tarrant >> wrote: >>> Currently thee Java client always sets the client intelligence >>> header to >>> 3 (topo + ch aware). We could add a configuration property so that you >>> could specify 1 (basic). >>> The alternative requires playing with the transport and playing with >>> the >>> "failed servers" but that is messy ! >> I'd also need the client to not "autodiscover" the nodes I didn't >> explicitly list. >> Would setting a "1" in this intelligence header be enough for that? >> >> Thanks! >> Sanne >> >> >>> Tristan >>> >>> On 19/09/16 13:39, Sanne Grinovero wrote: >>>> Hi all, >>>> >>>> I'm testing a concurrent update protocol based on Hot Rod client's >>>> support for versioned entries. >>>> >>>> I would love to be able to write a test having multiple client >>>> endpoints, each connecting to a specific server. Of course HR's "smart >>>> routing" prevents me from controlling this explicitly.. is there a way >>>> to control it? 
>>>> >>>> For example having servers {A, B} connected in cluster I'd like to >>>> create two clients {A', B}', each one connected exclusively to one of >>>> them and not aware of the other server node. (A <-> A', B <-> B'). >>>> >>>> Thanks, >>>> Sanne >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Wed Sep 21 12:08:40 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 21 Sep 2016 18:08:40 +0200 Subject: [infinispan-dev] Client listener failovers when connectivity drops (ISPN-7031) Message-ID: Hi all, Re: https://issues.jboss.org/browse/ISPN-7031 A potential solution here would be for the client listener, if it receives a "Connection reset by peer" IOException, to try to fail over all listeners connected to that node. The first tricky aspect is how to make sure that if you have N connected listeners, only one of them fails over the connected listeners, and hence avoid all connected listeners trying to failover all of them. A simple solution here would be for each listener to try to failover itself. Even more tricky is how to deal with the situation when failover fails. E.g. imagine you have only one server and connectivity drops. The connect is reset and the failover fails since there's no other servers. What does the listener do about it? One thing it could do is failover itself periodically until it works, but this is not ideal. Another option would be to avoid any failover until the client sends an operation and gets a connection. The latter option has a bigger chance of missing events but we have the state receiving option for those who must receive state. Any other ideas? Cheers, -- Galder Zamarre?o Infinispan, Red Hat From ttarrant at redhat.com Thu Sep 22 02:53:36 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 22 Sep 2016 08:53:36 +0200 Subject: [infinispan-dev] Weekly IRC meeting logs 2016-09-19 Message-ID: Here are the logs for this week?s IRC meeting http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2016/infinispan.2016-09-19-14.02.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From vjuranek at redhat.com Thu Sep 22 09:25:08 2016 From: vjuranek at redhat.com (Vojtech Juranek) Date: Thu, 22 Sep 2016 15:25:08 +0200 Subject: [infinispan-dev] Ceph cache store Message-ID: <1481975.UHkj8RgnTN@localhost.localdomain> Hi, I've implemented initial version of Ceph [1] cache store [2]. Cache entries are stored into Ceph pools [3], one pool per cache if not configured otherwise. The cache store leverages librados [4] java binding for direct communication with Ceph cluster/RADOS (see e.g. Ceph architecture overview [5] for high-level understanding what is difference between accessing RADOS via RADOS gateway or POSIX file system client and librados). Would be there any interest in such cache store? If yes, any recommendations for improvements are welcome. 
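For anyone wanting to try it from embedded code, a custom store can be attached to a cache through the generic store hook, along these lines. This is only a sketch: the store class and the two property names are placeholders standing in for whatever the store module actually uses:

    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.cache.CustomStoreConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class CephStoreDemo {
        public static void main(String[] args) {
            ConfigurationBuilder cfg = new ConfigurationBuilder();
            cfg.persistence()
               .addStore(CustomStoreConfigurationBuilder.class)
               // placeholder: substitute the store class shipped by the cachestore-ceph module
               .customStoreClass(CephStore.class)
               // illustrative properties: monitor address and target pool name
               .addProperty("monitorHost", "192.168.1.10")
               .addProperty("pool", "infinispan");

            DefaultCacheManager cm = new DefaultCacheManager();
            try {
                cm.defineConfiguration("ceph-backed", cfg.build());
                cm.getCache("ceph-backed").put("k", "v");
            } finally {
                cm.stop();
            }
        }
    }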
Thanks Vojta [1] http://ceph.com/ [2] https://github.com/vjuranek/infinispan-cachestore-ceph [3] http://docs.ceph.com/docs/jewel/rados/operations/pools/ [4] http://docs.ceph.com/docs/hammer/rados/api/librados-intro/ [5] http://docs.ceph.com/docs/hammer/architecture/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160922/29219ba6/attachment.bin From ttarrant at redhat.com Thu Sep 22 11:23:53 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 22 Sep 2016 17:23:53 +0200 Subject: [infinispan-dev] Ceph cache store In-Reply-To: <1481975.UHkj8RgnTN@localhost.localdomain> References: <1481975.UHkj8RgnTN@localhost.localdomain> Message-ID: <5fc760e7-5a0b-cfad-4abf-9058cbe33b96@redhat.com> On 22/09/16 15:25, Vojtech Juranek wrote: > Hi, > I've implemented initial version of Ceph [1] cache store [2]. Cache entries Nice one Vojtech ! Now you have given me a reason to install and learn about Ceph, although I don't think I have the exabyte-scale capacity :) Are there any recommendations / patterns on how Ceph should be used to make better use of its features ? Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From vjuranek at redhat.com Thu Sep 22 18:03:29 2016 From: vjuranek at redhat.com (Vojtech Juranek) Date: Fri, 23 Sep 2016 00:03:29 +0200 Subject: [infinispan-dev] Ceph cache store In-Reply-To: <5fc760e7-5a0b-cfad-4abf-9058cbe33b96@redhat.com> References: <1481975.UHkj8RgnTN@localhost.localdomain> <5fc760e7-5a0b-cfad-4abf-9058cbe33b96@redhat.com> Message-ID: <11259337.Te71sn6Byk@localhost.localdomain> > I don't think I have the exabyte-scale capacity :) AFAIK this is not mandatory, cluster with capacity of dozen of petabytes should be fine for the initial testing and learning :-) > Are there any recommendations / patterns on how Ceph should be used to > make better use of its features ? you can find some general performance tuning tips like [1], but I'm not aware of any recommended usage patterns. However, I'm Ceph beginner, so maybe it's just my ignorance. As for ceph-ispn specifically, I'd like to learn more about CRUSH algorithm and CRUSH map options [2] if it would be possible to map ISPN segment to specified Ceph primary OSD, which would allow us to run ISPN node and it's appropriate primary OSD on the same machine (similar thing we do in ISPN-Spark integration), which should result into better performance. [1] http://tracker.ceph.com/projects/ceph/wiki/7_Best_Practices_to_Maximize_Your_Ceph_Cluster's_Performance [2] http://docs.ceph.com/docs/jewel/rados/operations/crush-map/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160923/d5a2a573/attachment.bin From slaskawi at redhat.com Fri Sep 23 09:02:08 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 23 Sep 2016 15:02:08 +0200 Subject: [infinispan-dev] Ceph cache store In-Reply-To: <1481975.UHkj8RgnTN@localhost.localdomain> References: <1481975.UHkj8RgnTN@localhost.localdomain> Message-ID: Great job Vojtech! The only thing that comes into my mind is to test it with Kubernetes/OpenShift Ceph volumes [6]. 
But I guess this is OpenShift/Kubernetes configuration rather than Ceph CacheStore itself. Thanks Sebastian [6] http://kubernetes.io/docs/user-guide/volumes/#cephfs On Thu, Sep 22, 2016 at 3:25 PM, Vojtech Juranek wrote: > Hi, > I've implemented initial version of Ceph [1] cache store [2]. Cache entries > are stored into Ceph pools [3], one pool per cache if not configured > otherwise. The cache store leverages librados [4] java binding for direct > communication with Ceph cluster/RADOS (see e.g. Ceph architecture overview > [5] > for high-level understanding what is difference between accessing RADOS via > RADOS gateway or POSIX file system client and librados). > > Would be there any interest in such cache store? If yes, any > recommendations > for improvements are welcome. > > Thanks > Vojta > > [1] http://ceph.com/ > [2] https://github.com/vjuranek/infinispan-cachestore-ceph > [3] http://docs.ceph.com/docs/jewel/rados/operations/pools/ > [4] http://docs.ceph.com/docs/hammer/rados/api/librados-intro/ > [5] http://docs.ceph.com/docs/hammer/architecture/ > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160923/94b2ea79/attachment.html From galder at redhat.com Fri Sep 23 11:33:12 2016 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 23 Sep 2016 17:33:12 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> Message-ID: <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> -- Galder Zamarre?o Infinispan, Red Hat > On 15 Sep 2016, at 13:58, Sebastian Laskawiec wrote: > > How about turning the problem upside down and creating a TCK suite which runs on JUnit and has pluggable clients? The TCK suite would be responsible for bootstrapping servers, turning them down and validating the results. > > The biggest advantage of this approach is that all those things are pretty well known in Java world (e.g. using Arquillian for managing server lifecycle or JUnit for assertions). But the biggest challenge is how to plug for example a JavaScript client into the suite? How to call it from Java. ^ I thought about all of this when working on the JS client, and although like you, I thought this was the biggest hurdle, eventually I realised that there are bigger issues than that: 1. How do you verify that a Javascript client works the way a Javascript program would use it? IOW, even if you could call JS from Java, what you'd be verifying is that whichever contorsionate way of calling JS from Java works, which might not necessarily mean it works when a real JS program calls it. 2. Development workflow The other side problem is related to workflow: when you develop in a scripting, dynamically typed language, the way you go about testing is slightly different. Since you don't have the type checker to help, you're almost forced to run your testsuite continuously, and the JS client tests I developed were geared to make this possible. To give an example: to make being able to run test continously, the JS client assumes you have a running node for local tests and a set of servers for clustered tests (we provide a script for it). By having a running set of servers, I can very quickly run tests continously. 
This is very different to how Java-based testsuites work where each test or testsuites starts the required servers and then shuts them down. I'd be very upset if developing my JS client required this kind of waste of time. Moreover, the JS client tests are designed so that whatever they do, they go back to initial state when they finish. This happens for example with failover tests where I could not simply kill running servers, and instead the failover test starts a bunch servers which it kills as it goes along to test failover. The result is that none of the tests started by failover tests end up surviving when the test finishes. Maybe some day we'll have a Java-based testsuite that more easily allows continous testing. Scala, through SBT, do have something along this lines, so I don't think it's necessarily impossible, but we're not there yet. And, as I said above, you always have the first issue: testing how the user will use things. Cheers, [1] https://github.com/infinispan/js-client/blob/master/spec/infinispan_failover_spec.js > > Thanks > Sebastian > > On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes wrote: > > > On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero wrote: > I was actually planning to start a similar topic, but from the point of view of user's testing needs. > > I've recently created Hibernate OGM support for Hot Rod, and it wasn't as easy as other NoSQL databases to test; luckily I have some knowledge and contact on Infinispan ;) but I had to develop several helpers and refine the approach to testing over multiple iterations. > > I ended up developing a JUnit rule - handy for individual test runs in the IDE - and with a Maven life cycle extension and also with an Arquillian extension, which I needed to run both the Hot Rod server and start a Wildfly instance to host my client app. > > At some point I was also in trouble with conflicting dependencies so considered making a Maven plugin to manage the server lifecycle as a proper IT phase - I didn't ultimately make this as I found an easier solution but it would be great if Infinispan could provide such helpers to end users too.. Forking the ANT scripts from the Infinispan project to assemble and start my own (as you do..) seems quite cumbersome for users ;) > > Especially the server is not even available via Maven coordinates. > > The server is available at [1] > > [1] http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ > > > I'm of course happy to contribute my battle-tested Test helpers to Infinispan, but they are meant for JUnit users. > Finally, comparing to developing OGM integrations for other NoSQL stores.. It's really hard work when there is no "viewer" of the cache content. > > We need some kind of interactive console to explore the stored data, I felt like driving blind: developing based on black box, when something doesn't work as expected it's challenging to figure if one has a bug with the storage method rather than the reading method, or maybe the encoding not quite right or the query options being used.. sometimes it's the used flags or the configuration properties (hell, I've been swearing a lot at some of these flags!) > > Thanks, > Sanne > > On 15 Sep 2016 11:07, "Tristan Tarrant" wrote: > Recently I've had a chat with Galder, Will and Vittorio about how we > test the Hot Rod server module and the various clients. We also > discussed some of this in the past, but we now need to move forward with > a better strategy. 
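The JUnit rule Sanne mentions above is not shown in the thread; a toy version of the same idea, managing only an embedded CacheManager rather than a full Hot Rod server plus WildFly, might look like this.

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.junit.rules.ExternalResource;

public class EmbeddedInfinispanRule extends ExternalResource {
   private DefaultCacheManager cacheManager;

   @Override
   protected void before() {
      cacheManager = new DefaultCacheManager();
      ConfigurationBuilder cfg = new ConfigurationBuilder();
      cfg.clustering().cacheMode(CacheMode.LOCAL);
      cacheManager.defineConfiguration("test-cache", cfg.build());
   }

   @Override
   protected void after() {
      if (cacheManager != null) {
         cacheManager.stop();
      }
   }

   public DefaultCacheManager getCacheManager() {
      return cacheManager;
   }
}

A test class would then declare @Rule public EmbeddedInfinispanRule ispn = new EmbeddedInfinispanRule(); and fetch caches from it, which is what makes individual runs from the IDE cheap.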
> > First up is the Hot Rod server module testsuite: it is the only part of > the code which still uses Scala. Will has a partial port of it to Java, > but we're wondering if it is worth completing that work, seeing that > most of the tests in that testsuite, in particular those related to the > protocol itself, are actually duplicated by the Java Hot Rod client's > testsuite which also happens to be our reference implementation of a > client and is much more extensive. > The only downside of removing it is that verification will require > running the client testsuite, instead of being self-contained. > > Next up is how we test clients. > > The Java client, partially described above, runs all of the tests > against ad-hoc embedded servers. Some of these tests, in particular > those related to topology, start and stop new servers on the fly. > > The server integration testsuite performs yet another set of tests, some > of which overlap the above, but using the actual full-blown server. It > doesn't test for topology changes. > > The C++ client wraps the native client in a Java wrapper generated by > SWIG and runs the Java client testsuite. It then checks against a > blacklist of known failures. It also has a small number of native tests > which use the server distribution. > > The Node.js client has its own home-grown testsuite which also uses the > server distribution. > > Duplication aside, which in some cases is unavoidable, it is impossible > to confidently say that each client is properly tested. > > Since complete unification is impossible because of the different > testing harnesses used by the various platforms/languages, I propose the > following: > > - we identify and group the tests depending on their scope (basic > protocol ops, bulk ops, topology/failover, security, etc). A client > which implements the functionality of a group MUST pass all of the tests > in that group with NO exceptions > - we assign a unique identifier to each group/test combination (e.g. > HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be > collected in a "test book" (some kind of structured file) for comparison > with client test runs > - we refactor the Java client testsuite according to the above grouping > / naming strategy so that testsuite which use the wrapping approach > (i.e. C++ with SWIG) can consume it by directly specifying the supported > groups > - other clients get reorganized so that they support the above grouping > > I understand this is quite some work, but the current situation isn't > really sustainable. 
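One way to attach the proposed identifiers to the existing Java client testsuite, which is TestNG based, would be plain TestNG groups; the HR.BASIC.* names below merely follow the naming scheme suggested above and are not an agreed convention.

import static org.testng.AssertJUnit.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.testng.annotations.Test;

public class BasicOpsTckTest {

   @Test(groups = "HR.BASIC")
   public void putReturnsPreviousValue() { // candidate identifier: HR.BASIC.PUT
      // a wrapped client (Java, or C++/JS behind an adapter) would be exercised here
      assertEquals(null, fakeClientPut("k", "v1"));
      assertEquals("v1", fakeClientPut("k", "v2"));
   }

   // stand-in for a real Hot Rod client call, so the sketch stays self-contained
   private final Map<String, String> store = new HashMap<>();

   private String fakeClientPut(String key, String value) {
      return store.put(key, value);
   }
}

Running only one group then becomes something like mvn verify -Dgroups=HR.BASIC, and the "test book" would just be the authoritative list of such identifiers.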
> > Let me know what your thoughts are > > > Tristan > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From afield at redhat.com Fri Sep 23 13:06:11 2016 From: afield at redhat.com (Alan Field) Date: Fri, 23 Sep 2016 13:06:11 -0400 (EDT) Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: <814975603.1983442.1474650371159.JavaMail.zimbra@redhat.com> Hey Galder, ----- Original Message ----- > From: "Galder Zamarre?o" > To: "infinispan -Dev List" > Sent: Friday, September 23, 2016 11:33:12 AM > Subject: Re: [infinispan-dev] Hot Rod testing > > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 15 Sep 2016, at 13:58, Sebastian Laskawiec wrote: > > > > How about turning the problem upside down and creating a TCK suite which > > runs on JUnit and has pluggable clients? The TCK suite would be > > responsible for bootstrapping servers, turning them down and validating > > the results. > > > > The biggest advantage of this approach is that all those things are pretty > > well known in Java world (e.g. using Arquillian for managing server > > lifecycle or JUnit for assertions). But the biggest challenge is how to > > plug for example a JavaScript client into the suite? How to call it from > > Java. > > ^ I thought about all of this when working on the JS client, and although > like you, I thought this was the biggest hurdle, eventually I realised that > there are bigger issues than that: > > 1. How do you verify that a Javascript client works the way a Javascript > program would use it? > IOW, even if you could call JS from Java, what you'd be verifying is that > whichever contorsionate way of calling JS from Java works, which might not > necessarily mean it works when a real JS program calls it. I think the user workflow can be verified separately. Being able to verify the functional behavior of clients written in multiple languages using a single test suite would be a huge win, IMO. I agree with you though that this should be coupled with an actual end-user test where the Javascript client is run against a real node server, a C++ client is installed from RPMs and built into an application, etc for a complete certification of a client. > 2. Development workflow I can't really argue with this point. Any solution that uses a single test suite to test all clients will by definition not feel native to developers. The question is whether it makes sense to recreate the test suite in every language which just doesn't feel like it can scale. Thanks, Alan > The other side problem is related to workflow: when you develop in a > scripting, dynamically typed language, the way you go about testing is > slightly different. 
Since you don't have the type checker to help, you're > almost forced to run your testsuite continuously, and the JS client tests I > developed were geared to make this possible. > > To give an example: to make being able to run test continously, the JS client > assumes you have a running node for local tests and a set of servers for > clustered tests (we provide a script for it). By having a running set of > servers, I can very quickly run tests continously. This is very different to > how Java-based testsuites work where each test or testsuites starts the > required servers and then shuts them down. I'd be very upset if developing > my JS client required this kind of waste of time. Moreover, the JS client > tests are designed so that whatever they do, they go back to initial state > when they finish. This happens for example with failover tests where I could > not simply kill running servers, and instead the failover test starts a > bunch servers which it kills as it goes along to test failover. The result > is that none of the tests started by failover tests end up surviving when > the test finishes. > > Maybe some day we'll have a Java-based testsuite that more easily allows > continous testing. Scala, through SBT, do have something along this lines, > so I don't think it's necessarily impossible, but we're not there yet. And, > as I said above, you always have the first issue: testing how the user will > use things. > > Cheers, > > [1] > https://github.com/infinispan/js-client/blob/master/spec/infinispan_failover_spec.js > > > > > Thanks > > Sebastian > > > > On Thu, Sep 15, 2016 at 1:52 PM, Gustavo Fernandes > > wrote: > > > > > > On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero > > wrote: > > I was actually planning to start a similar topic, but from the point of > > view of user's testing needs. > > > > I've recently created Hibernate OGM support for Hot Rod, and it wasn't as > > easy as other NoSQL databases to test; luckily I have some knowledge and > > contact on Infinispan ;) but I had to develop several helpers and refine > > the approach to testing over multiple iterations. > > > > I ended up developing a JUnit rule - handy for individual test runs in the > > IDE - and with a Maven life cycle extension and also with an Arquillian > > extension, which I needed to run both the Hot Rod server and start a > > Wildfly instance to host my client app. > > > > At some point I was also in trouble with conflicting dependencies so > > considered making a Maven plugin to manage the server lifecycle as a > > proper IT phase - I didn't ultimately make this as I found an easier > > solution but it would be great if Infinispan could provide such helpers to > > end users too.. Forking the ANT scripts from the Infinispan project to > > assemble and start my own (as you do..) seems quite cumbersome for users > > ;) > > > > Especially the server is not even available via Maven coordinates. > > > > The server is available at [1] > > > > [1] > > http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/ > > > > > > I'm of course happy to contribute my battle-tested Test helpers to > > Infinispan, but they are meant for JUnit users. > > Finally, comparing to developing OGM integrations for other NoSQL stores.. > > It's really hard work when there is no "viewer" of the cache content. 
> > > > We need some kind of interactive console to explore the stored data, I felt > > like driving blind: developing based on black box, when something doesn't > > work as expected it's challenging to figure if one has a bug with the > > storage method rather than the reading method, or maybe the encoding not > > quite right or the query options being used.. sometimes it's the used > > flags or the configuration properties (hell, I've been swearing a lot at > > some of these flags!) > > > > Thanks, > > Sanne > > > > On 15 Sep 2016 11:07, "Tristan Tarrant" wrote: > > Recently I've had a chat with Galder, Will and Vittorio about how we > > test the Hot Rod server module and the various clients. We also > > discussed some of this in the past, but we now need to move forward with > > a better strategy. > > > > First up is the Hot Rod server module testsuite: it is the only part of > > the code which still uses Scala. Will has a partial port of it to Java, > > but we're wondering if it is worth completing that work, seeing that > > most of the tests in that testsuite, in particular those related to the > > protocol itself, are actually duplicated by the Java Hot Rod client's > > testsuite which also happens to be our reference implementation of a > > client and is much more extensive. > > The only downside of removing it is that verification will require > > running the client testsuite, instead of being self-contained. > > > > Next up is how we test clients. > > > > The Java client, partially described above, runs all of the tests > > against ad-hoc embedded servers. Some of these tests, in particular > > those related to topology, start and stop new servers on the fly. > > > > The server integration testsuite performs yet another set of tests, some > > of which overlap the above, but using the actual full-blown server. It > > doesn't test for topology changes. > > > > The C++ client wraps the native client in a Java wrapper generated by > > SWIG and runs the Java client testsuite. It then checks against a > > blacklist of known failures. It also has a small number of native tests > > which use the server distribution. > > > > The Node.js client has its own home-grown testsuite which also uses the > > server distribution. > > > > Duplication aside, which in some cases is unavoidable, it is impossible > > to confidently say that each client is properly tested. > > > > Since complete unification is impossible because of the different > > testing harnesses used by the various platforms/languages, I propose the > > following: > > > > - we identify and group the tests depending on their scope (basic > > protocol ops, bulk ops, topology/failover, security, etc). A client > > which implements the functionality of a group MUST pass all of the tests > > in that group with NO exceptions > > - we assign a unique identifier to each group/test combination (e.g. > > HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be > > collected in a "test book" (some kind of structured file) for comparison > > with client test runs > > - we refactor the Java client testsuite according to the above grouping > > / naming strategy so that testsuite which use the wrapping approach > > (i.e. C++ with SWIG) can consume it by directly specifying the supported > > groups > > - other clients get reorganized so that they support the above grouping > > > > I understand this is quite some work, but the current situation isn't > > really sustainable. 
> > > > Let me know what your thoughts are > > > > > > Tristan > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Mon Sep 26 03:36:40 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 26 Sep 2016 09:36:40 +0200 Subject: [infinispan-dev] Fine grained maps Message-ID: <57E8D008.3000106@redhat.com> Hi all, I have realized that fine grained maps don't work reliably with write-skew check. This happens because WSC tries to load the entry from DC/cache-store, compare versions and store it, assuming that this happens atomically as the entry is locked. However, as fine grained maps can lock two different keys and modify the same entry, there is a risk that the check & store won't be atomic. Right now, the update itself won't be lost, because fine grained maps use DeltaAwareCacheEntries which apply the updates DC's lock (there can be some problems when passivation is used, though, [1] hopefully deals with them). I have figured this out when trying to update the DeltaAware handling to support more than just atomic maps - yes, there are special branches for atomic maps in the code, which is quite ugly design-wise, IMO. My intention is to do similar things like WSC for replaying the deltas, but this, obviously, needs some atomicity. IIUC, fine-grained locking was introduced back in 5.1 because of deadlocks in the lock-acquisition algorithm; the purpose was not to improve concurrency. Luckily, the days of deadlocks are far back, now we can get the cluster stuck in more complex ways :) Therefore, with a correctness-first approach, in optimistic caches I would lock just the main key (not the composite keys). The prepare-commit should be quite fast anyway, and I don't see how this could affect users (counter-examples are welcome) but slightly reduced concurrency. In pessimistic caches we have to be more cautious, since users manipulate the locks directly and reason about them more. Therefore, we need to lock the composite keys during transaction runtime, but in addition to that, during the commit itself we should lock the main key for the duration of the commit if necessary - pessimistic caches don't sport WSC, but I was looking for some atomicity options for deltas. An alternative would be to piggyback on DC's locking scheme, however, this is quite unsuitable for the optimistic case with a RPC between WSC and DC store. In addition to that, it doesn't fit into our async picture and we would send complex compute functions into the DC, instead of decoupled lock/unlock. I could also devise another layer of locking, but that's just madness. 
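For readers less familiar with the API under discussion, this is the usage pattern in question - a minimal sketch assuming a transactional cache whose setup is omitted. Two concurrent calls with different subKey values lock different composite keys, yet at commit time they both race on the single underlying cache entry, which is where the write-skew check stops being atomic.

import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMapLookup;
import org.infinispan.atomic.FineGrainedAtomicMap;

public class FineGrainedMapUsage {
   public static void updateSubKey(Cache<String, Object> cache, String subKey) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      boolean ok = false;
      try {
         FineGrainedAtomicMap<String, String> map =
               AtomicMapLookup.getFineGrainedAtomicMap(cache, "shopping-cart");
         map.put(subKey, "updated"); // locks only the composite key ("shopping-cart", subKey)
         ok = true;
      } finally {
         if (ok) tm.commit(); else tm.rollback(); // the whole "shopping-cart" entry is written at commit
      }
   }
}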
I am adding Sanne to recipients as OGM is probably the most important consumer of atomic hash maps. WDYT? Radim [1] https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Mon Sep 26 07:31:14 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 26 Sep 2016 13:31:14 +0200 Subject: [infinispan-dev] Interesting article - "A New High Throughput Java Executor Service" Message-ID: Just stumbled upon - https://dzone.com/articles/a-new-high-throughput-java-executor-service Project's web page: http://executorservice.org/ Sources: https://github.com/vmlens/executor-service Maybe that's something we could experiment with? Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160926/6d438cce/attachment.html From bban at redhat.com Mon Sep 26 07:34:58 2016 From: bban at redhat.com (Bela Ban) Date: Mon, 26 Sep 2016 13:34:58 +0200 Subject: [infinispan-dev] Interesting article - "A New High Throughput Java Executor Service" In-Reply-To: References: Message-ID: <57E907E2.9060103@redhat.com> I already ran an experiment, replacing the default executor we have in JGroups. The result was abysmally slow and the executor used a LOT of CPU, bringing things almost to a standstill. Looking at the code, they're using a lot of busy waiting/spinning On 26/09/16 13:31, Sebastian Laskawiec wrote: > Just stumbled upon - > https://dzone.com/articles/a-new-high-throughput-java-executor-service > > Project's web page: http://executorservice.org/ > Sources: https://github.com/vmlens/executor-service > > Maybe that's something we could experiment with? > > Thanks > Sebastian > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Mon Sep 26 09:00:44 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 26 Sep 2016 15:00:44 +0200 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57E8D008.3000106@redhat.com> References: <57E8D008.3000106@redhat.com> Message-ID: <57E91BFC.9060604@redhat.com> Using infinispan-dev as a debugging duck... The pessimistic case is somewhat precarious. Since during the 1PC commit we cannot set the order by synchronizing on the primary owner, we should lock on all owners. However, this opens the possibility to lock locally and do an RPC to lock remotely (since we lock in LockingInterceptor and DistributionInterceptor is below that), which leads to the well-known deadlocks. So we could move the locking into a new interceptor below DI; however, the idea is that the WSC load should happen in EntryWrappingInterceptor/CacheLoaderInterceptor, as this is the place to load stuff into context, and we need to lock it before these, which are above DI :-/ So the only way I could think of is to move the replication of PrepareCommand in pessimistic caches above PessimisticLockingInterceptor. And that's rather big for my taste. And in order to prevent deadlocks due to different ordering of locked keys, we have to order the keys as in optimistic caches. However, if the user locks the keys that atomic maps use explicitly, he could lock them in a different order and that would lead to deadlocks! Phew. Almost lost my appetite for such changes.
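The "order the keys as in optimistic caches" remark boils down to acquiring locks in one agreed total order, so that two transactions touching the same set of keys cannot deadlock regardless of the order the application touched them in; purely as an illustration:

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DeterministicLockOrder {
   // any stable total order works; hashCode is used only to keep the sketch short,
   // real code would need a tie-breaker for colliding hashes
   public static List<Object> orderForLocking(List<Object> keys) {
      return keys.stream()
                 .sorted(Comparator.comparingInt(Object::hashCode))
                 .collect(Collectors.toList());
   }
}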
Radim PS: non-tx caches aren't that complex, the situation there is quite similar to optimistic caches. PPS: (Repl|Dist)WriteSkewAtomicMapAPITests have testConcurrentTx disabled, instead of requiring the WSC to be thrown :-/ On 09/26/2016 09:36 AM, Radim Vansa wrote: > Hi all, > > I have realized that fine grained maps don't work reliably with > write-skew check. This happens because WSC tries to load the entry from > DC/cache-store, compare versions and store it, assuming that this > happens atomically as the entry is locked. However, as fine grained maps > can lock two different keys and modify the same entry, there is a risk > that the check & store won't be atomic. Right now, the update itself > won't be lost, because fine grained maps use DeltaAwareCacheEntries > which apply the updates DC's lock (there can be some problems when > passivation is used, though, [1] hopefully deals with them). > > I have figured this out when trying to update the DeltaAware handling to > support more than just atomic maps - yes, there are special branches for > atomic maps in the code, which is quite ugly design-wise, IMO. My > intention is to do similar things like WSC for replaying the deltas, but > this, obviously, needs some atomicity. > > IIUC, fine-grained locking was introduced back in 5.1 because of > deadlocks in the lock-acquisition algorithm; the purpose was not to > improve concurrency. Luckily, the days of deadlocks are far back, now we > can get the cluster stuck in more complex ways :) Therefore, with a > correctness-first approach, in optimistic caches I would lock just the > main key (not the composite keys). The prepare-commit should be quite > fast anyway, and I don't see how this could affect users > (counter-examples are welcome) but slightly reduced concurrency. > > In pessimistic caches we have to be more cautious, since users > manipulate the locks directly and reason about them more. Therefore, we > need to lock the composite keys during transaction runtime, but in > addition to that, during the commit itself we should lock the main key > for the duration of the commit if necessary - pessimistic caches don't > sport WSC, but I was looking for some atomicity options for deltas. > > An alternative would be to piggyback on DC's locking scheme, however, > this is quite unsuitable for the optimistic case with a RPC between WSC > and DC store. In addition to that, it doesn't fit into our async picture > and we would send complex compute functions into the DC, instead of > decoupled lock/unlock. I could also devise another layer of locking, but > that's just madness. > > I am adding Sanne to recipients as OGM is probably the most important > consumer of atomic hash maps. > > WDYT? > > Radim > > [1] > https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e > -- Radim Vansa JBoss Performance Team From tobia.loschiavo at gmail.com Mon Sep 26 09:56:43 2016 From: tobia.loschiavo at gmail.com (matroska) Date: Mon, 26 Sep 2016 06:56:43 -0700 (MST) Subject: [infinispan-dev] Inspect local cache values Message-ID: <1474898203305-4031182.post@n3.nabble.com> Hi, I am trying to write a demo with Infinispan in order to make my company to use it. I have configured a distributed cache but I would like to print the content of the local values stored in memory (so the node memory). I cannot find any api to do that. Could you please suggest me how to show the local cache on nodes? 
Thanks Tobia -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Inspect-local-cache-values-tp4031182.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From rvansa at redhat.com Mon Sep 26 10:09:19 2016 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 26 Sep 2016 16:09:19 +0200 Subject: [infinispan-dev] Inspect local cache values In-Reply-To: <1474898203305-4031182.post@n3.nabble.com> References: <1474898203305-4031182.post@n3.nabble.com> Message-ID: <57E92C0F.6020608@redhat.com> Hi Tobia, for user questions, please use forum [1]. As per your question, use cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL) and call entrySet() or stream() on this object. Radim [1] https://developer.jboss.org/en/infinispan/ On 09/26/2016 03:56 PM, matroska wrote: > Hi, > > I am trying to write a demo with Infinispan in order to make my company to > use it. I have configured a distributed cache but I would like to print the > content of the local values stored in memory (so the node memory). I cannot > find any api to do that. Could you please suggest me how to show the local > cache on nodes? > > Thanks > Tobia > > > > -- > View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Inspect-local-cache-values-tp4031182.html > Sent from the Infinispan Developer List mailing list archive at Nabble.com. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From pedro at infinispan.org Mon Sep 26 13:23:28 2016 From: pedro at infinispan.org (Pedro Ruivo) Date: Mon, 26 Sep 2016 18:23:28 +0100 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57E8D008.3000106@redhat.com> References: <57E8D008.3000106@redhat.com> Message-ID: On 26-09-2016 08:36, Radim Vansa wrote: > Hi all, > > I have realized that fine grained maps don't work reliably with > write-skew check. This happens because WSC tries to load the entry from > DC/cache-store, compare versions and store it, assuming that this > happens atomically as the entry is locked. However, as fine grained maps > can lock two different keys and modify the same entry, there is a risk > that the check & store won't be atomic. Right now, the update itself > won't be lost, because fine grained maps use DeltaAwareCacheEntries > which apply the updates DC's lock (there can be some problems when > passivation is used, though, [1] hopefully deals with them). Aren't you getting a ClassCastException (re: ISPN-2729)? BTW, why not removing the FineGrainedAtomicMap? The grouping API should provide similar semantics and it would be simpler to handle/use. Also, it provides method to return and remove all keys associated to the group (AdvancedCache#getGroup() and AdvancedCache#removeGroup()). > > I have figured this out when trying to update the DeltaAware handling to > support more than just atomic maps - yes, there are special branches for > atomic maps in the code, which is quite ugly design-wise, IMO. My > intention is to do similar things like WSC for replaying the deltas, but > this, obviously, needs some atomicity. > > IIUC, fine-grained locking was introduced back in 5.1 because of > deadlocks in the lock-acquisition algorithm; the purpose was not to > improve concurrency. 
Luckily, the days of deadlocks are far back, now we > can get the cluster stuck in more complex ways :) Therefore, with a > correctness-first approach, in optimistic caches I would lock just the > main key (not the composite keys). The prepare-commit should be quite > fast anyway, and I don't see how this could affect users > (counter-examples are welcome) but slightly reduced concurrency. > > In pessimistic caches we have to be more cautious, since users > manipulate the locks directly and reason about them more. Therefore, we > need to lock the composite keys during transaction runtime, but in > addition to that, during the commit itself we should lock the main key > for the duration of the commit if necessary - pessimistic caches don't > sport WSC, but I was looking for some atomicity options for deltas. > > An alternative would be to piggyback on DC's locking scheme, however, > this is quite unsuitable for the optimistic case with a RPC between WSC > and DC store. In addition to that, it doesn't fit into our async picture > and we would send complex compute functions into the DC, instead of > decoupled lock/unlock. I could also devise another layer of locking, but > that's just madness. > > I am adding Sanne to recipients as OGM is probably the most important > consumer of atomic hash maps. > > WDYT? > > Radim > > [1] > https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e > From mwigglesworth at redhat.com Mon Sep 26 15:13:49 2016 From: mwigglesworth at redhat.com (Martes Wigglesworth) Date: Mon, 26 Sep 2016 15:13:49 -0400 (EDT) Subject: [infinispan-dev] To which issue do I assocate my PR work for a quickstart update? In-Reply-To: <2087229218.2490474.1474916199035.JavaMail.zimbra@redhat.com> Message-ID: <140347556.2492426.1474917229423.JavaMail.zimbra@redhat.com> Greetings all. For the quickstart PR work, what issue should I reference, when the work is done? The relevant information on each issue is listed below, as was updated by jbossbot, when posting to infinispan channel on freenode. (2:24:44 PM) jbossbot: jira [ ISPN-7037 ] Create quickstart that demonstrates library mode node security using callbackhandlers [ New (Unresolved) Feature Request , Minor , Demos and Tutorials/Tasks , Martes Wigglesworth ] https://issues.jboss.org/browse/ISPN-7037 (2:24:44 PM) jbossbot: jira [ JDG-523 ] An updated secure library mode quickstart should be created which demonstrates node security using callbackhandlers for authenticating new entrants to a jgroups cluster. [ New (Unresolved) Enhancement , Minor , unspecified , Tristan Tarrant ] https://issues.jboss.org/browse/JDG-523 Respectfully, Martes G Wigglesworth Consultant - Middleware Engineer Red Hat Consulting Red Hat, Inc. Office Phone: 804 343 6084 - 8136084 Office Email: mwiggles at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160926/4063950e/attachment-0001.html From mwiggles at redhat.com Mon Sep 26 15:16:36 2016 From: mwiggles at redhat.com (Martes Wigglesworth) Date: Mon, 26 Sep 2016 15:16:36 -0400 (EDT) Subject: [infinispan-dev] To which issue do I assocate my PR work for a quickstart update? In-Reply-To: <140347556.2492426.1474917229423.JavaMail.zimbra@redhat.com> Message-ID: <1813341396.2492636.1474917396291.JavaMail.zimbra@redhat.com> Greetings all. For the quickstart PR work, what issue should I reference, when the work is done? 
The relevant information on each issue is listed below, as was updated by jbossbot, when posting to infinispan channel on freenode. (2:24:44 PM) jbossbot: jira [ ISPN-7037 ] Create quickstart that demonstrates library mode node security using callbackhandlers [ New (Unresolved) Feature Request , Minor , Demos and Tutorials/Tasks , Martes Wigglesworth ] https://issues.jboss.org/browse/ISPN-7037 (2:24:44 PM) jbossbot: jira [ JDG-523 ] An updated secure library mode quickstart should be created which demonstrates node security using callbackhandlers for authenticating new entrants to a jgroups cluster. [ New (Unresolved) Enhancement , Minor , unspecified , Tristan Tarrant ] https://issues.jboss.org/browse/JDG-523 Respectfully, Martes G Wigglesworth Consultant - Middleware Engineer Red Hat Consulting Red Hat, Inc. Office Phone: 804 343 6084 - 8136084 Office Email: mwiggles at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160926/1df4064f/attachment.html From dan.berindei at gmail.com Mon Sep 26 16:51:52 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 26 Sep 2016 23:51:52 +0300 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57E8D008.3000106@redhat.com> References: <57E8D008.3000106@redhat.com> Message-ID: On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: > Hi all, > > I have realized that fine grained maps don't work reliably with > write-skew check. This happens because WSC tries to load the entry from > DC/cache-store, compare versions and store it, assuming that this > happens atomically as the entry is locked. However, as fine grained maps > can lock two different keys and modify the same entry, there is a risk > that the check & store won't be atomic. Right now, the update itself > won't be lost, because fine grained maps use DeltaAwareCacheEntries > which apply the updates DC's lock (there can be some problems when > passivation is used, though, [1] hopefully deals with them). > I had a hard time understanding what the problem is, but then I realized it's because I was assuming we keep a separate version for each subkey. After I realized it's not implemented like that, I also found a couple of bugs I filed for it a long time ago: https://issues.jboss.org/browse/ISPN-3123 https://issues.jboss.org/browse/ISPN-5584 > I have figured this out when trying to update the DeltaAware handling to > support more than just atomic maps - yes, there are special branches for > atomic maps in the code, which is quite ugly design-wise, IMO. My > intention is to do similar things like WSC for replaying the deltas, but > this, obviously, needs some atomicity. > Yes, for all the bugs in the AtomicMaps, it's even harder implementing a DeltaAware that is not an AtomicMap... But I don't see any reason to do that anyway, I'd rather work on making the functional stuff work with transactions. > IIUC, fine-grained locking was introduced back in 5.1 because of > deadlocks in the lock-acquisition algorithm; the purpose was not to > improve concurrency. Luckily, the days of deadlocks are far back, now we > can get the cluster stuck in more complex ways :) Therefore, with a > correctness-first approach, in optimistic caches I would lock just the > main key (not the composite keys). The prepare-commit should be quite > fast anyway, and I don't see how this could affect users > (counter-examples are welcome) but slightly reduced concurrency. 
> I don't remember what initial use case for FineGrainedAtomicMaps was, but I agree with Pedro that it's a bit long in the tooth now. The only advantage of FGAM over grouping is that getGroup(key) needs to iterate over the entire data container/store, so it can be a lot slower when you have lots of small groups. But if you need to work with all the subkeys in the every transaction, you should probably be using a regular AtomicMap instead. IMO we should deprecate FineGrainedAtomicMap and implement it as a regular AtomicMap. > In pessimistic caches we have to be more cautious, since users > manipulate the locks directly and reason about them more. Therefore, we > need to lock the composite keys during transaction runtime, but in > addition to that, during the commit itself we should lock the main key > for the duration of the commit if necessary - pessimistic caches don't > sport WSC, but I was looking for some atomicity options for deltas. > -1 to implicitly locking the main key. If a DeltaAware implementation wants to support partial locking, then it should take care of the atomicity of the merge operation itself. If it doesn't want to support partial locking, then it shouldn't use AdvancedCache.applyDelta(). It's a bit unfortunate that applyDelta() looks like a method that anyone can call, but it should only be called by the DeltaAware implementation itself. However, I agree that implementing a DeltaAware partial locking correctly in all possible configurations is nigh impossible. So it would be much better if we also deprecate applyDelta() and start ignoring the `locksToAcquire` parameter. > An alternative would be to piggyback on DC's locking scheme, however, > this is quite unsuitable for the optimistic case with a RPC between WSC > and DC store. In addition to that, it doesn't fit into our async picture > and we would send complex compute functions into the DC, instead of > decoupled lock/unlock. I could also devise another layer of locking, but > that's just madness. > -10 to piggyback on DC locking, and -100 to a new locking layer. I think you could lock the main key by executing a LockControlCommand(CACHE_MODE_LOCAL) from PessimisticLockingInterceptor.visitPrepareCommand, before passing the PrepareCommand to the next interceptor. But please don't do it! > I am adding Sanne to recipients as OGM is probably the most important > consumer of atomic hash maps. > > WDYT? > > Radim > > [1] > https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Sep 27 03:28:17 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 27 Sep 2016 09:28:17 +0200 Subject: [infinispan-dev] Fine grained maps In-Reply-To: References: <57E8D008.3000106@redhat.com> Message-ID: <57EA1F91.8060202@redhat.com> To Pedro: I have figured out that it shouldn't work rather theoretically, so I haven't crashed into ISPN-2729. On 09/26/2016 10:51 PM, Dan Berindei wrote: > On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: >> Hi all, >> >> I have realized that fine grained maps don't work reliably with >> write-skew check. This happens because WSC tries to load the entry from >> DC/cache-store, compare versions and store it, assuming that this >> happens atomically as the entry is locked. 
However, as fine grained maps >> can lock two different keys and modify the same entry, there is a risk >> that the check & store won't be atomic. Right now, the update itself >> won't be lost, because fine grained maps use DeltaAwareCacheEntries >> which apply the updates DC's lock (there can be some problems when >> passivation is used, though, [1] hopefully deals with them). >> > I had a hard time understanding what the problem is, but then I > realized it's because I was assuming we keep a separate version for > each subkey. After I realized it's not implemented like that, I also > found a couple of bugs I filed for it a long time ago: > > https://issues.jboss.org/browse/ISPN-3123 > https://issues.jboss.org/browse/ISPN-5584 > >> I have figured this out when trying to update the DeltaAware handling to >> support more than just atomic maps - yes, there are special branches for >> atomic maps in the code, which is quite ugly design-wise, IMO. My >> intention is to do similar things like WSC for replaying the deltas, but >> this, obviously, needs some atomicity. >> > Yes, for all the bugs in the AtomicMaps, it's even harder implementing > a DeltaAware that is not an AtomicMap... > > But I don't see any reason to do that anyway, I'd rather work on > making the functional stuff work with transactions. Yes, I would rather focus on functional stuff too, but the Delta* stuff gets into my way all the time, so I was trying to remove that. However, though we could deprecate fine grained maps (+1!) we have to keep it working as OGM uses that. I am awaiting some details from Sanne that could suggest alternative solution, but he's on PTO now. > >> IIUC, fine-grained locking was introduced back in 5.1 because of >> deadlocks in the lock-acquisition algorithm; the purpose was not to >> improve concurrency. Luckily, the days of deadlocks are far back, now we >> can get the cluster stuck in more complex ways :) Therefore, with a >> correctness-first approach, in optimistic caches I would lock just the >> main key (not the composite keys). The prepare-commit should be quite >> fast anyway, and I don't see how this could affect users >> (counter-examples are welcome) but slightly reduced concurrency. >> > I don't remember what initial use case for FineGrainedAtomicMaps was, > but I agree with Pedro that it's a bit long in the tooth now. The only > advantage of FGAM over grouping is that getGroup(key) needs to iterate > over the entire data container/store, so it can be a lot slower when > you have lots of small groups. But if you need to work with all the > subkeys in the every transaction, you should probably be using a > regular AtomicMap instead. Iterating through whole container seems like a very limiting factor to me, but I would keep AtomicMaps and let them be implemented through deltas/functional commands (preferred), but use the standard locking mechanisms instead of fine-grained insanity. > > IMO we should deprecate FineGrainedAtomicMap and implement it as a > regular AtomicMap. > >> In pessimistic caches we have to be more cautious, since users >> manipulate the locks directly and reason about them more. Therefore, we >> need to lock the composite keys during transaction runtime, but in >> addition to that, during the commit itself we should lock the main key >> for the duration of the commit if necessary - pessimistic caches don't >> sport WSC, but I was looking for some atomicity options for deltas. >> > -1 to implicitly locking the main key. 
If a DeltaAware implementation > wants to support partial locking, then it should take care of the > atomicity of the merge operation itself. If it doesn't want to support > partial locking, then it shouldn't use AdvancedCache.applyDelta(). > It's a bit unfortunate that applyDelta() looks like a method that > anyone can call, but it should only be called by the DeltaAware > implementation itself. As I have mentioned in my last mail, I found that it's not that easy, so I am not implementing that. But it's not about taking care of atomicity of the merge - applying merge can be synchronized, but you have to do that with the entry stored in DC when the entry is about to be stored in DC - and this is the only moment you can squeeze the WSC inl, because the DeltaAware can't know anything about WSCs. That's the DC locking piggyback you -10. > > However, I agree that implementing a DeltaAware partial locking > correctly in all possible configurations is nigh impossible. So it > would be much better if we also deprecate applyDelta() and start > ignoring the `locksToAcquire` parameter. > >> An alternative would be to piggyback on DC's locking scheme, however, >> this is quite unsuitable for the optimistic case with a RPC between WSC >> and DC store. In addition to that, it doesn't fit into our async picture >> and we would send complex compute functions into the DC, instead of >> decoupled lock/unlock. I could also devise another layer of locking, but >> that's just madness. >> > -10 to piggyback on DC locking, and -100 to a new locking layer. > > I think you could lock the main key by executing a > LockControlCommand(CACHE_MODE_LOCAL) from > PessimisticLockingInterceptor.visitPrepareCommand, before passing the > PrepareCommand to the next interceptor. But please don't do it! Okay, I'll just wait until someone tells me why the heck anyone needs fine grained, discuss how to avoid it and then deprecate it :) Radim > >> I am adding Sanne to recipients as OGM is probably the most important >> consumer of atomic hash maps. >> >> WDYT? >> >> Radim >> >> [1] >> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Tue Sep 27 05:35:19 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 27 Sep 2016 11:35:19 +0200 Subject: [infinispan-dev] Interesting article - "A New High Throughput Java Executor Service" In-Reply-To: <57E907E2.9060103@redhat.com> References: <57E907E2.9060103@redhat.com> Message-ID: Ouch :( The article looked promising :D Thanks a lot for checking! On Mon, Sep 26, 2016 at 1:34 PM, Bela Ban wrote: > I already ran an experiment, replacing the default executor we have in > JGroups. The result was abysmally slow and the executor used a LOT of > CPU, bringing things almost to a standstill. 
> > Looking at the code, they're using a lot of busy waiting/spinning > > On 26/09/16 13:31, Sebastian Laskawiec wrote: > > Just stumbled upon - > > https://dzone.com/articles/a-new-high-throughput-java-executor-service > > > > Project's web page: http://executorservice.org/ > > Sources: https://github.com/vmlens/executor-service > > > > Maybe that's something we could experiment with? > > > > Thanks > > Sebastian > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160927/935e85ea/attachment-0001.html From bban at redhat.com Tue Sep 27 05:37:58 2016 From: bban at redhat.com (Bela Ban) Date: Tue, 27 Sep 2016 11:37:58 +0200 Subject: [infinispan-dev] Interesting article - "A New High Throughput Java Executor Service" In-Reply-To: References: <57E907E2.9060103@redhat.com> Message-ID: <57EA3DF6.3030604@redhat.com> That should not prevent you from running your own tests. Maybe that software was created with a different use case in mind... On 27/09/16 11:35, Sebastian Laskawiec wrote: > Ouch :( The article looked promising :D > > Thanks a lot for checking! > > On Mon, Sep 26, 2016 at 1:34 PM, Bela Ban > wrote: > > I already ran an experiment, replacing the default executor we have in > JGroups. The result was abysmally slow and the executor used a LOT of > CPU, bringing things almost to a standstill. > > Looking at the code, they're using a lot of busy waiting/spinning > > On 26/09/16 13:31, Sebastian Laskawiec wrote: > > Just stumbled upon - > > > https://dzone.com/articles/a-new-high-throughput-java-executor-service > > > > > Project's web page: http://executorservice.org/ > > Sources: https://github.com/vmlens/executor-service > > > > > Maybe that's something we could experiment with? > > > > Thanks > > Sebastian > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From gustavo at infinispan.org Tue Sep 27 06:24:15 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Tue, 27 Sep 2016 11:24:15 +0100 Subject: [infinispan-dev] Interesting article - "A New High Throughput Java Executor Service" In-Reply-To: <57EA3DF6.3030604@redhat.com> References: <57E907E2.9060103@redhat.com> <57EA3DF6.3030604@redhat.com> Message-ID: The doc clearly states that "The tradeoff is that latency is much higher than that of the standard JDK executor service", and looking at the benchmarks, it's a 10x penalty. 
Gustavo On 27 Sep 2016 10:38, "Bela Ban" wrote: > That should not prevent you from running your own tests. Maybe that > software was created with a different use case in mind... > > On 27/09/16 11:35, Sebastian Laskawiec wrote: > > Ouch :( The article looked promising :D > > > > Thanks a lot for checking! > > > > On Mon, Sep 26, 2016 at 1:34 PM, Bela Ban > > wrote: > > > > I already ran an experiment, replacing the default executor we have > in > > JGroups. The result was abysmally slow and the executor used a LOT of > > CPU, bringing things almost to a standstill. > > > > Looking at the code, they're using a lot of busy waiting/spinning > > > > On 26/09/16 13:31, Sebastian Laskawiec wrote: > > > Just stumbled upon - > > > > > https://dzone.com/articles/a-new-high-throughput-java- > executor-service > > executor-service> > > > > > > Project's web page: http://executorservice.org/ > > > Sources: https://github.com/vmlens/executor-service > > > > > > > > Maybe that's something we could experiment with? > > > > > > Thanks > > > Sebastian > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > -- > > Bela Ban, JGroups lead (http://www.jgroups.org) > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160927/28884b48/attachment.html From sanne at infinispan.org Tue Sep 27 07:35:54 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 27 Sep 2016 12:35:54 +0100 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57EA1F91.8060202@redhat.com> References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> Message-ID: Hibernate OGM is using the FGAMs and needs their "fine-grained" semantics as it's being used to store what is perceived possibly as independent user entries; having these entries share a same lock would introduce possibilities of deadlock depending on the end user's business logic; we're an abstraction layer and these semantics are being relied on. I believe a good analogy to this is the comparison of databases implementing row-level locking vs. page-level locking; such changes in granularity would make end users loose hair so I'll not change our usage from FGAMS to AMs. We can maybe move to grouping or to queries; this wasn't considered back then as neither grouping nor (unindexed) queries were available. Both grouping and queries have their drawbacks though, so we can take this in consideration on the OGM team but it's going to take some time; So feel free deprecate FGAMs but please don't remove them yet. For the record, OGM never uses write skew checks on FGAMs, and also Hibernate OGM / Hot Rod doesn't use FGAMs (whe use them only in embedded mode, when transactions are available). 
Thanks, Sanne On 27 September 2016 at 08:28, Radim Vansa wrote: > To Pedro: I have figured out that it shouldn't work rather > theoretically, so I haven't crashed into ISPN-2729. > > On 09/26/2016 10:51 PM, Dan Berindei wrote: >> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: >>> Hi all, >>> >>> I have realized that fine grained maps don't work reliably with >>> write-skew check. This happens because WSC tries to load the entry from >>> DC/cache-store, compare versions and store it, assuming that this >>> happens atomically as the entry is locked. However, as fine grained maps >>> can lock two different keys and modify the same entry, there is a risk >>> that the check & store won't be atomic. Right now, the update itself >>> won't be lost, because fine grained maps use DeltaAwareCacheEntries >>> which apply the updates DC's lock (there can be some problems when >>> passivation is used, though, [1] hopefully deals with them). >>> >> I had a hard time understanding what the problem is, but then I >> realized it's because I was assuming we keep a separate version for >> each subkey. After I realized it's not implemented like that, I also >> found a couple of bugs I filed for it a long time ago: >> >> https://issues.jboss.org/browse/ISPN-3123 >> https://issues.jboss.org/browse/ISPN-5584 >> >>> I have figured this out when trying to update the DeltaAware handling to >>> support more than just atomic maps - yes, there are special branches for >>> atomic maps in the code, which is quite ugly design-wise, IMO. My >>> intention is to do similar things like WSC for replaying the deltas, but >>> this, obviously, needs some atomicity. >>> >> Yes, for all the bugs in the AtomicMaps, it's even harder implementing >> a DeltaAware that is not an AtomicMap... >> >> But I don't see any reason to do that anyway, I'd rather work on >> making the functional stuff work with transactions. > > Yes, I would rather focus on functional stuff too, but the Delta* stuff > gets into my way all the time, so I was trying to remove that. However, > though we could deprecate fine grained maps (+1!) we have to keep it > working as OGM uses that. I am awaiting some details from Sanne that > could suggest alternative solution, but he's on PTO now. > >> >>> IIUC, fine-grained locking was introduced back in 5.1 because of >>> deadlocks in the lock-acquisition algorithm; the purpose was not to >>> improve concurrency. Luckily, the days of deadlocks are far back, now we >>> can get the cluster stuck in more complex ways :) Therefore, with a >>> correctness-first approach, in optimistic caches I would lock just the >>> main key (not the composite keys). The prepare-commit should be quite >>> fast anyway, and I don't see how this could affect users >>> (counter-examples are welcome) but slightly reduced concurrency. >>> >> I don't remember what initial use case for FineGrainedAtomicMaps was, >> but I agree with Pedro that it's a bit long in the tooth now. The only >> advantage of FGAM over grouping is that getGroup(key) needs to iterate >> over the entire data container/store, so it can be a lot slower when >> you have lots of small groups. But if you need to work with all the >> subkeys in the every transaction, you should probably be using a >> regular AtomicMap instead. 
> > Iterating through whole container seems like a very limiting factor to > me, but I would keep AtomicMaps and let them be implemented through > deltas/functional commands (preferred), but use the standard locking > mechanisms instead of fine-grained insanity. > >> >> IMO we should deprecate FineGrainedAtomicMap and implement it as a >> regular AtomicMap. >> >>> In pessimistic caches we have to be more cautious, since users >>> manipulate the locks directly and reason about them more. Therefore, we >>> need to lock the composite keys during transaction runtime, but in >>> addition to that, during the commit itself we should lock the main key >>> for the duration of the commit if necessary - pessimistic caches don't >>> sport WSC, but I was looking for some atomicity options for deltas. >>> >> -1 to implicitly locking the main key. If a DeltaAware implementation >> wants to support partial locking, then it should take care of the >> atomicity of the merge operation itself. If it doesn't want to support >> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >> It's a bit unfortunate that applyDelta() looks like a method that >> anyone can call, but it should only be called by the DeltaAware >> implementation itself. > > As I have mentioned in my last mail, I found that it's not that easy, so > I am not implementing that. But it's not about taking care of atomicity > of the merge - applying merge can be synchronized, but you have to do > that with the entry stored in DC when the entry is about to be stored in > DC - and this is the only moment you can squeeze the WSC inl, because > the DeltaAware can't know anything about WSCs. That's the DC locking > piggyback you -10. > >> >> However, I agree that implementing a DeltaAware partial locking >> correctly in all possible configurations is nigh impossible. So it >> would be much better if we also deprecate applyDelta() and start >> ignoring the `locksToAcquire` parameter. >> >>> An alternative would be to piggyback on DC's locking scheme, however, >>> this is quite unsuitable for the optimistic case with a RPC between WSC >>> and DC store. In addition to that, it doesn't fit into our async picture >>> and we would send complex compute functions into the DC, instead of >>> decoupled lock/unlock. I could also devise another layer of locking, but >>> that's just madness. >>> >> -10 to piggyback on DC locking, and -100 to a new locking layer. >> >> I think you could lock the main key by executing a >> LockControlCommand(CACHE_MODE_LOCAL) from >> PessimisticLockingInterceptor.visitPrepareCommand, before passing the >> PrepareCommand to the next interceptor. But please don't do it! > > Okay, I'll just wait until someone tells me why the heck anyone needs > fine grained, discuss how to avoid it and then deprecate it :) > > Radim > >> >>> I am adding Sanne to recipients as OGM is probably the most important >>> consumer of atomic hash maps. >>> >>> WDYT? 
>>> >>> Radim >>> >>> [1] >>> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Sep 27 09:33:47 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 27 Sep 2016 15:33:47 +0200 Subject: [infinispan-dev] Fine grained maps In-Reply-To: References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> Message-ID: <57EA753B.7050204@redhat.com> On 09/27/2016 01:35 PM, Sanne Grinovero wrote: > Hibernate OGM is using the FGAMs and needs their "fine-grained" > semantics as it's being used to store what is perceived possibly as > independent user entries; having these entries share a same lock would > introduce possibilities of deadlock depending on the end user's > business logic; we're an abstraction layer and these semantics are > being relied on. > I believe a good analogy to this is the comparison of databases > implementing row-level locking vs. page-level locking; such changes in > granularity would make end users loose hair so I'll not change our > usage from FGAMS to AMs. I am not asking you to move from FGAMs to AMs as-is - I am trying to precisely asses your needs and provide an option that suits them best, while not creating malfunctioning configurations and breaking encapsulation. OGM uses configuration by default (not sure if you support changing that to pessimistic). With this configuration, there's no risk of deadlocks within Infinispan, no matter what order you use to read/write entries in a transaction. All locks are acquired in a deterministic order only during transaction commit, that appears to OGM (or the user) as a single atomic operation. So what fine-grained gives you now is that you don't have to wait until the tx1.commit() call completes before tx2.commit() can start. So I see the performance impact upon highly concurrent modification of single entry, but from the end-user perspective it should not have any visible effect (least lose hair :)). Another important feature of AtomicMaps (but FGAM and AM have it in common) is that they send only the delta over the wire - such behaviour must be preserved, of course, but functional API should supersede the hacks in place now. > We can maybe move to grouping or to queries; this wasn't considered > back then as neither grouping nor (unindexed) queries were available. > Both grouping and queries have their drawbacks though, so we can take > this in consideration on the OGM team but it's going to take some > time; > > So feel free deprecate FGAMs but please don't remove them yet. We wouldn't remove the interface in 9.0; we would return something that does what you need but differently. > > For the record, OGM never uses write skew checks on FGAMs, and also > Hibernate OGM / Hot Rod doesn't use FGAMs (whe use them only in > embedded mode, when transactions are available). 
And that's the only reason why it was not fixed sooner :) Radim > > Thanks, > Sanne > > > On 27 September 2016 at 08:28, Radim Vansa wrote: >> To Pedro: I have figured out that it shouldn't work rather >> theoretically, so I haven't crashed into ISPN-2729. >> >> On 09/26/2016 10:51 PM, Dan Berindei wrote: >>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: >>>> Hi all, >>>> >>>> I have realized that fine grained maps don't work reliably with >>>> write-skew check. This happens because WSC tries to load the entry from >>>> DC/cache-store, compare versions and store it, assuming that this >>>> happens atomically as the entry is locked. However, as fine grained maps >>>> can lock two different keys and modify the same entry, there is a risk >>>> that the check & store won't be atomic. Right now, the update itself >>>> won't be lost, because fine grained maps use DeltaAwareCacheEntries >>>> which apply the updates DC's lock (there can be some problems when >>>> passivation is used, though, [1] hopefully deals with them). >>>> >>> I had a hard time understanding what the problem is, but then I >>> realized it's because I was assuming we keep a separate version for >>> each subkey. After I realized it's not implemented like that, I also >>> found a couple of bugs I filed for it a long time ago: >>> >>> https://issues.jboss.org/browse/ISPN-3123 >>> https://issues.jboss.org/browse/ISPN-5584 >>> >>>> I have figured this out when trying to update the DeltaAware handling to >>>> support more than just atomic maps - yes, there are special branches for >>>> atomic maps in the code, which is quite ugly design-wise, IMO. My >>>> intention is to do similar things like WSC for replaying the deltas, but >>>> this, obviously, needs some atomicity. >>>> >>> Yes, for all the bugs in the AtomicMaps, it's even harder implementing >>> a DeltaAware that is not an AtomicMap... >>> >>> But I don't see any reason to do that anyway, I'd rather work on >>> making the functional stuff work with transactions. >> Yes, I would rather focus on functional stuff too, but the Delta* stuff >> gets into my way all the time, so I was trying to remove that. However, >> though we could deprecate fine grained maps (+1!) we have to keep it >> working as OGM uses that. I am awaiting some details from Sanne that >> could suggest alternative solution, but he's on PTO now. >> >>>> IIUC, fine-grained locking was introduced back in 5.1 because of >>>> deadlocks in the lock-acquisition algorithm; the purpose was not to >>>> improve concurrency. Luckily, the days of deadlocks are far back, now we >>>> can get the cluster stuck in more complex ways :) Therefore, with a >>>> correctness-first approach, in optimistic caches I would lock just the >>>> main key (not the composite keys). The prepare-commit should be quite >>>> fast anyway, and I don't see how this could affect users >>>> (counter-examples are welcome) but slightly reduced concurrency. >>>> >>> I don't remember what initial use case for FineGrainedAtomicMaps was, >>> but I agree with Pedro that it's a bit long in the tooth now. The only >>> advantage of FGAM over grouping is that getGroup(key) needs to iterate >>> over the entire data container/store, so it can be a lot slower when >>> you have lots of small groups. But if you need to work with all the >>> subkeys in the every transaction, you should probably be using a >>> regular AtomicMap instead. 
>> Iterating through whole container seems like a very limiting factor to >> me, but I would keep AtomicMaps and let them be implemented through >> deltas/functional commands (preferred), but use the standard locking >> mechanisms instead of fine-grained insanity. >> >>> IMO we should deprecate FineGrainedAtomicMap and implement it as a >>> regular AtomicMap. >>> >>>> In pessimistic caches we have to be more cautious, since users >>>> manipulate the locks directly and reason about them more. Therefore, we >>>> need to lock the composite keys during transaction runtime, but in >>>> addition to that, during the commit itself we should lock the main key >>>> for the duration of the commit if necessary - pessimistic caches don't >>>> sport WSC, but I was looking for some atomicity options for deltas. >>>> >>> -1 to implicitly locking the main key. If a DeltaAware implementation >>> wants to support partial locking, then it should take care of the >>> atomicity of the merge operation itself. If it doesn't want to support >>> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >>> It's a bit unfortunate that applyDelta() looks like a method that >>> anyone can call, but it should only be called by the DeltaAware >>> implementation itself. >> As I have mentioned in my last mail, I found that it's not that easy, so >> I am not implementing that. But it's not about taking care of atomicity >> of the merge - applying merge can be synchronized, but you have to do >> that with the entry stored in DC when the entry is about to be stored in >> DC - and this is the only moment you can squeeze the WSC inl, because >> the DeltaAware can't know anything about WSCs. That's the DC locking >> piggyback you -10. >> >>> However, I agree that implementing a DeltaAware partial locking >>> correctly in all possible configurations is nigh impossible. So it >>> would be much better if we also deprecate applyDelta() and start >>> ignoring the `locksToAcquire` parameter. >>> >>>> An alternative would be to piggyback on DC's locking scheme, however, >>>> this is quite unsuitable for the optimistic case with a RPC between WSC >>>> and DC store. In addition to that, it doesn't fit into our async picture >>>> and we would send complex compute functions into the DC, instead of >>>> decoupled lock/unlock. I could also devise another layer of locking, but >>>> that's just madness. >>>> >>> -10 to piggyback on DC locking, and -100 to a new locking layer. >>> >>> I think you could lock the main key by executing a >>> LockControlCommand(CACHE_MODE_LOCAL) from >>> PessimisticLockingInterceptor.visitPrepareCommand, before passing the >>> PrepareCommand to the next interceptor. But please don't do it! >> Okay, I'll just wait until someone tells me why the heck anyone needs >> fine grained, discuss how to avoid it and then deprecate it :) >> >> Radim >> >>>> I am adding Sanne to recipients as OGM is probably the most important >>>> consumer of atomic hash maps. >>>> >>>> WDYT? 
>>>> >>>> Radim >>>> >>>> [1] >>>> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e >>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Tue Sep 27 10:47:35 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 27 Sep 2016 17:47:35 +0300 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57EA1F91.8060202@redhat.com> References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> Message-ID: On Tue, Sep 27, 2016 at 10:28 AM, Radim Vansa wrote: > To Pedro: I have figured out that it shouldn't work rather > theoretically, so I haven't crashed into ISPN-2729. > > On 09/26/2016 10:51 PM, Dan Berindei wrote: >> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: >>> Hi all, >>> >>> I have realized that fine grained maps don't work reliably with >>> write-skew check. This happens because WSC tries to load the entry from >>> DC/cache-store, compare versions and store it, assuming that this >>> happens atomically as the entry is locked. However, as fine grained maps >>> can lock two different keys and modify the same entry, there is a risk >>> that the check & store won't be atomic. Right now, the update itself >>> won't be lost, because fine grained maps use DeltaAwareCacheEntries >>> which apply the updates DC's lock (there can be some problems when >>> passivation is used, though, [1] hopefully deals with them). >>> >> I had a hard time understanding what the problem is, but then I >> realized it's because I was assuming we keep a separate version for >> each subkey. After I realized it's not implemented like that, I also >> found a couple of bugs I filed for it a long time ago: >> >> https://issues.jboss.org/browse/ISPN-3123 >> https://issues.jboss.org/browse/ISPN-5584 >> >>> I have figured this out when trying to update the DeltaAware handling to >>> support more than just atomic maps - yes, there are special branches for >>> atomic maps in the code, which is quite ugly design-wise, IMO. My >>> intention is to do similar things like WSC for replaying the deltas, but >>> this, obviously, needs some atomicity. >>> >> Yes, for all the bugs in the AtomicMaps, it's even harder implementing >> a DeltaAware that is not an AtomicMap... >> >> But I don't see any reason to do that anyway, I'd rather work on >> making the functional stuff work with transactions. > > Yes, I would rather focus on functional stuff too, but the Delta* stuff > gets into my way all the time, so I was trying to remove that. However, > though we could deprecate fine grained maps (+1!) we have to keep it > working as OGM uses that. 
I am awaiting some details from Sanne that > could suggest alternative solution, but he's on PTO now. > >> >>> IIUC, fine-grained locking was introduced back in 5.1 because of >>> deadlocks in the lock-acquisition algorithm; the purpose was not to >>> improve concurrency. Luckily, the days of deadlocks are far back, now we >>> can get the cluster stuck in more complex ways :) Therefore, with a >>> correctness-first approach, in optimistic caches I would lock just the >>> main key (not the composite keys). The prepare-commit should be quite >>> fast anyway, and I don't see how this could affect users >>> (counter-examples are welcome) but slightly reduced concurrency. >>> >> I don't remember what initial use case for FineGrainedAtomicMaps was, >> but I agree with Pedro that it's a bit long in the tooth now. The only >> advantage of FGAM over grouping is that getGroup(key) needs to iterate >> over the entire data container/store, so it can be a lot slower when >> you have lots of small groups. But if you need to work with all the >> subkeys in the every transaction, you should probably be using a >> regular AtomicMap instead. > > Iterating through whole container seems like a very limiting factor to > me, but I would keep AtomicMaps and let them be implemented through > deltas/functional commands (preferred), but use the standard locking > mechanisms instead of fine-grained insanity. > Indeed, it can be limiting, especially when you have one small group that's iterated over all the time and one large group that's never iterated in the same cache. I was hoping it would be good enough as a bridge until we have the functional API working with transactions, but based on Sanne's comments I guess I was wrong :) >> >> IMO we should deprecate FineGrainedAtomicMap and implement it as a >> regular AtomicMap. >> >>> In pessimistic caches we have to be more cautious, since users >>> manipulate the locks directly and reason about them more. Therefore, we >>> need to lock the composite keys during transaction runtime, but in >>> addition to that, during the commit itself we should lock the main key >>> for the duration of the commit if necessary - pessimistic caches don't >>> sport WSC, but I was looking for some atomicity options for deltas. >>> >> -1 to implicitly locking the main key. If a DeltaAware implementation >> wants to support partial locking, then it should take care of the >> atomicity of the merge operation itself. If it doesn't want to support >> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >> It's a bit unfortunate that applyDelta() looks like a method that >> anyone can call, but it should only be called by the DeltaAware >> implementation itself. > > As I have mentioned in my last mail, I found that it's not that easy, so > I am not implementing that. But it's not about taking care of atomicity > of the merge - applying merge can be synchronized, but you have to do > that with the entry stored in DC when the entry is about to be stored in > DC - and this is the only moment you can squeeze the WSC inl, because > the DeltaAware can't know anything about WSCs. That's the DC locking > piggyback you -10. > I think you're making it harder than it should be, because you're trying to come up with a generic solution that works with any possible data structure. But if a data structure is not suitable for fine-grained locking, it should just use regular locking instead (locksToAcquire = {mainKey}). E.g. 
any ordered structure is out of the question for fine-grained locking, but it should be possible to implement a fine-grained set/bag without any new locking in core. As you may have seen from ISPN-3123 and ISPN-5584, I think the problem with FGAM is that it's not granular enough: we shouldn't throw WriteSkewExceptions just because two transactions modify the same FGAM, we should only throw the WriteSkewException when both transaction modify the same subkey. >> >> However, I agree that implementing a DeltaAware partial locking >> correctly in all possible configurations is nigh impossible. So it >> would be much better if we also deprecate applyDelta() and start >> ignoring the `locksToAcquire` parameter. >> >>> An alternative would be to piggyback on DC's locking scheme, however, >>> this is quite unsuitable for the optimistic case with a RPC between WSC >>> and DC store. In addition to that, it doesn't fit into our async picture >>> and we would send complex compute functions into the DC, instead of >>> decoupled lock/unlock. I could also devise another layer of locking, but >>> that's just madness. >>> >> -10 to piggyback on DC locking, and -100 to a new locking layer. >> >> I think you could lock the main key by executing a >> LockControlCommand(CACHE_MODE_LOCAL) from >> PessimisticLockingInterceptor.visitPrepareCommand, before passing the >> PrepareCommand to the next interceptor. But please don't do it! > > Okay, I'll just wait until someone tells me why the heck anyone needs > fine grained, discuss how to avoid it and then deprecate it :) > > Radim > >> >>> I am adding Sanne to recipients as OGM is probably the most important >>> consumer of atomic hash maps. >>> >>> WDYT? >>> >>> Radim >>> >>> [1] >>> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue Sep 27 10:51:25 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 27 Sep 2016 16:51:25 +0200 Subject: [infinispan-dev] Func API in tx cache Message-ID: <57EA876D.6060300@redhat.com> Hi, seems I'll have to implement the functional stuff on tx caches [1][2] if I want to get rid of DeltaAware et al. The general idea is quite simple - ReadOnly* commands should behave very similar to non-tx mode, WriteOnly* commands will be just added as modifications to the PrepareCommand and ReadWrite* commands will be both added to modifications list, and sent to remote nodes where the result won't be stored yet. The results of operations should not be stored into transactional context - the command will execute remotely (if the owners are remote) unless the value was read by Get* beforehand. With repeatable-reads isolation, the situation gets more complicated. 
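(For reference, the API calls that generate those commands look roughly like the sketch below. This is only a sketch: the interfaces live under org.infinispan.commons.api.functional in 8.x and are being reorganised for 9.x, and the *Impl factories used here are internal classes rather than public entry points.)

import java.util.concurrent.CompletableFuture;
import org.infinispan.AdvancedCache;
import org.infinispan.commons.api.functional.EntryView.ReadWriteEntryView;
import org.infinispan.commons.api.functional.EntryView.WriteEntryView;
import org.infinispan.commons.api.functional.FunctionalMap.ReadWriteMap;
import org.infinispan.commons.api.functional.FunctionalMap.WriteOnlyMap;
import org.infinispan.functional.impl.FunctionalMapImpl;
import org.infinispan.functional.impl.ReadWriteMapImpl;
import org.infinispan.functional.impl.WriteOnlyMapImpl;

public class FunctionalTxSketch {
    void sketch(AdvancedCache<String, Integer> cache) {
        FunctionalMapImpl<String, Integer> fmap = FunctionalMapImpl.create(cache);
        WriteOnlyMap<String, Integer> wo = WriteOnlyMapImpl.create(fmap);
        ReadWriteMap<String, Integer> rw = ReadWriteMapImpl.create(fmap);

        // WriteOnly*: no value is returned, so in a tx it can simply be queued
        // as a modification and shipped with the PrepareCommand
        CompletableFuture<Void> w =
            wo.eval("counter", 0, (Integer v, WriteEntryView<Integer> view) -> view.set(v));

        // ReadWrite*: returns a result computed from the current entry, so it
        // must run (on the owner) during the tx and be replayed at commit
        CompletableFuture<Integer> r =
            rw.eval("counter", (ReadWriteEntryView<String, Integer> view) -> {
                int next = view.find().orElse(0) + 1;
                view.set(next);
                return next;
            });
    }
}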
If we use ReadOnly* that performs identity lookup (effectively the same as Get*) and the entry was modified in during the transaction, we can return two different results - so a read committed semantics. With write skew check enabled, we could at least fail the transaction at the end (the check would be performed reads as well if the transaction contains functional reads), but we cannot rely on WSC always on with RR. Retrieving the whole entry and applying the functional command is not a viable solution, IMO - that would completely defy the purpose of using functional command. A possible solution would be to send the global transaction ID with those read commands and keep a remote transactional context with read entries for the duration of transaction on remote nodes, too. However, if we do a Read* command to primary owner, it's possible that further Get* command will hit backup. So, we could go to all owners with Read* already during the transaction (slowing down functional reads considerably), or read only from primary owner (which slows down Get*s even if we don't use functional APIs - this makes it a no-go). I am not 100% sure how a transaction transfer during ST will get into that. We could also do it the ostrich way - "Yes we've promissed RR but Func will be only RC". I'll probably do that in the first draft anyway. Comments & opinions appreciated. Radim [1] https://issues.jboss.org/browse/ISPN-5806 [2] https://issues.jboss.org/browse/ISPN-6573 -- Radim Vansa JBoss Performance Team From rvansa at redhat.com Tue Sep 27 11:15:47 2016 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 27 Sep 2016 17:15:47 +0200 Subject: [infinispan-dev] Fine grained maps In-Reply-To: References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> Message-ID: <57EA8D23.7000003@redhat.com> On 09/27/2016 04:47 PM, Dan Berindei wrote: > On Tue, Sep 27, 2016 at 10:28 AM, Radim Vansa wrote: >> To Pedro: I have figured out that it shouldn't work rather >> theoretically, so I haven't crashed into ISPN-2729. >> >> On 09/26/2016 10:51 PM, Dan Berindei wrote: >>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: >>>> Hi all, >>>> >>>> I have realized that fine grained maps don't work reliably with >>>> write-skew check. This happens because WSC tries to load the entry from >>>> DC/cache-store, compare versions and store it, assuming that this >>>> happens atomically as the entry is locked. However, as fine grained maps >>>> can lock two different keys and modify the same entry, there is a risk >>>> that the check & store won't be atomic. Right now, the update itself >>>> won't be lost, because fine grained maps use DeltaAwareCacheEntries >>>> which apply the updates DC's lock (there can be some problems when >>>> passivation is used, though, [1] hopefully deals with them). >>>> >>> I had a hard time understanding what the problem is, but then I >>> realized it's because I was assuming we keep a separate version for >>> each subkey. After I realized it's not implemented like that, I also >>> found a couple of bugs I filed for it a long time ago: >>> >>> https://issues.jboss.org/browse/ISPN-3123 >>> https://issues.jboss.org/browse/ISPN-5584 >>> >>>> I have figured this out when trying to update the DeltaAware handling to >>>> support more than just atomic maps - yes, there are special branches for >>>> atomic maps in the code, which is quite ugly design-wise, IMO. My >>>> intention is to do similar things like WSC for replaying the deltas, but >>>> this, obviously, needs some atomicity. 
>>>> >>> Yes, for all the bugs in the AtomicMaps, it's even harder implementing >>> a DeltaAware that is not an AtomicMap... >>> >>> But I don't see any reason to do that anyway, I'd rather work on >>> making the functional stuff work with transactions. >> Yes, I would rather focus on functional stuff too, but the Delta* stuff >> gets into my way all the time, so I was trying to remove that. However, >> though we could deprecate fine grained maps (+1!) we have to keep it >> working as OGM uses that. I am awaiting some details from Sanne that >> could suggest alternative solution, but he's on PTO now. >> >>>> IIUC, fine-grained locking was introduced back in 5.1 because of >>>> deadlocks in the lock-acquisition algorithm; the purpose was not to >>>> improve concurrency. Luckily, the days of deadlocks are far back, now we >>>> can get the cluster stuck in more complex ways :) Therefore, with a >>>> correctness-first approach, in optimistic caches I would lock just the >>>> main key (not the composite keys). The prepare-commit should be quite >>>> fast anyway, and I don't see how this could affect users >>>> (counter-examples are welcome) but slightly reduced concurrency. >>>> >>> I don't remember what initial use case for FineGrainedAtomicMaps was, >>> but I agree with Pedro that it's a bit long in the tooth now. The only >>> advantage of FGAM over grouping is that getGroup(key) needs to iterate >>> over the entire data container/store, so it can be a lot slower when >>> you have lots of small groups. But if you need to work with all the >>> subkeys in the every transaction, you should probably be using a >>> regular AtomicMap instead. >> Iterating through whole container seems like a very limiting factor to >> me, but I would keep AtomicMaps and let them be implemented through >> deltas/functional commands (preferred), but use the standard locking >> mechanisms instead of fine-grained insanity. >> > Indeed, it can be limiting, especially when you have one small group > that's iterated over all the time and one large group that's never > iterated in the same cache. I was hoping it would be good enough as a > bridge until we have the functional API working with transactions, but > based on Sanne's comments I guess I was wrong :) > >>> IMO we should deprecate FineGrainedAtomicMap and implement it as a >>> regular AtomicMap. >>> >>>> In pessimistic caches we have to be more cautious, since users >>>> manipulate the locks directly and reason about them more. Therefore, we >>>> need to lock the composite keys during transaction runtime, but in >>>> addition to that, during the commit itself we should lock the main key >>>> for the duration of the commit if necessary - pessimistic caches don't >>>> sport WSC, but I was looking for some atomicity options for deltas. >>>> >>> -1 to implicitly locking the main key. If a DeltaAware implementation >>> wants to support partial locking, then it should take care of the >>> atomicity of the merge operation itself. If it doesn't want to support >>> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >>> It's a bit unfortunate that applyDelta() looks like a method that >>> anyone can call, but it should only be called by the DeltaAware >>> implementation itself. >> As I have mentioned in my last mail, I found that it's not that easy, so >> I am not implementing that. 
But it's not about taking care of atomicity >> of the merge - applying merge can be synchronized, but you have to do >> that with the entry stored in DC when the entry is about to be stored in >> DC - and this is the only moment you can squeeze the WSC inl, because >> the DeltaAware can't know anything about WSCs. That's the DC locking >> piggyback you -10. >> > I think you're making it harder than it should be, because you're > trying to come up with a generic solution that works with any possible > data structure. I am trying to work with two concepts: (Copyable)DeltaAware interface, and ApplyDeltaCommand with its set of locks. Atomic maps are higher-level concept and there should not be any Ugh!s [1]. In any implementation of DeltaAwareCacheEntry.commit(), you'll have to atomically load the (Im)mutableCacheEntry from DC/cache store and store it into DC. Yes, you could do a load, copy (because the delta is modifying), apply delta/run WSC, compare&set in DC, but that's quite annoying loop implemented through exception handling <- I haven't proposed this and focused on changing the locking scheme. [1] https://github.com/infinispan/jdg/blob/3aaa3d85fe9a90ee3c371b44ff5e5b36414c69fd/core/src/main/java/org/infinispan/container/entries/ReadCommittedEntry.java#L150 > But if a data structure is not suitable for > fine-grained locking, it should just use regular locking instead > (locksToAcquire = {mainKey}). > > E.g. any ordered structure is out of the question for fine-grained > locking, but it should be possible to implement a fine-grained set/bag > without any new locking in core. > > As you may have seen from ISPN-3123 and ISPN-5584, I think the problem > with FGAM is that it's not granular enough: we shouldn't throw > WriteSkewExceptions just because two transactions modify the same > FGAM, we should only throw the WriteSkewException when both > transaction modify the same subkey. You're right that WSC is not fine-grained enough, and at this point you can't solve that generally - how do you apply WSC on DeltaAware when you know that it locks certain keys? And you would add the static call to WSCHelper into DeltaAwareCacheEntry, class made for storing value & interacting with DC? > >>> However, I agree that implementing a DeltaAware partial locking >>> correctly in all possible configurations is nigh impossible. So it >>> would be much better if we also deprecate applyDelta() and start >>> ignoring the `locksToAcquire` parameter. >>> >>>> An alternative would be to piggyback on DC's locking scheme, however, >>>> this is quite unsuitable for the optimistic case with a RPC between WSC >>>> and DC store. In addition to that, it doesn't fit into our async picture >>>> and we would send complex compute functions into the DC, instead of >>>> decoupled lock/unlock. I could also devise another layer of locking, but >>>> that's just madness. >>>> >>> -10 to piggyback on DC locking, and -100 to a new locking layer. >>> >>> I think you could lock the main key by executing a >>> LockControlCommand(CACHE_MODE_LOCAL) from >>> PessimisticLockingInterceptor.visitPrepareCommand, before passing the >>> PrepareCommand to the next interceptor. But please don't do it! >> Okay, I'll just wait until someone tells me why the heck anyone needs >> fine grained, discuss how to avoid it and then deprecate it :) >> >> Radim >> >>>> I am adding Sanne to recipients as OGM is probably the most important >>>> consumer of atomic hash maps. >>>> >>>> WDYT? 
>>>> >>>> Radim >>>> >>>> [1] >>>> https://github.com/infinispan/infinispan/pull/4564/commits/2eeb7efbd4e1ea3e7f45ff2b443691b78ad4ae8e >>>> >>>> -- >>>> Radim Vansa >>>> JBoss Performance Team >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Tue Sep 27 12:21:27 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 27 Sep 2016 18:21:27 +0200 Subject: [infinispan-dev] Documentation mini-sprint Message-ID: <43a25ed6-71da-97f3-b0ba-43544a34db05@redhat.com> Hi all, I have created a PR [1] which contains a TODO list for our documentation. This makes the review process public and uses the familiar GitHub tooling. I would like everybody to dedicate some time before the end of this week in looking at the items and making proposals to improve the docs. On Friday I will merge the TODO list to master. Starting on Monday we will dedicate 3 days on doing as much as possible to reduce that list to zero. Documentation PRs will need to remove the corresponding lines from the TODO list. Thanks Tristan [1] https://github.com/infinispan/infinispan/pull/4572 -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Tue Sep 27 14:42:28 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 27 Sep 2016 20:42:28 +0200 Subject: [infinispan-dev] Documentation mini-sprint In-Reply-To: <43a25ed6-71da-97f3-b0ba-43544a34db05@redhat.com> References: <43a25ed6-71da-97f3-b0ba-43544a34db05@redhat.com> Message-ID: Oh, and since Sebastian asked for clear, and well defined goals, I have come up with a good one: let's make the documentation not suck ! Tristan On 27/09/16 18:21, Tristan Tarrant wrote: > Hi all, > > I have created a PR [1] which contains a TODO list for our > documentation. This makes the review process public and uses the > familiar GitHub tooling. > > I would like everybody to dedicate some time before the end of this week > in looking at the items and making proposals to improve the docs. > > On Friday I will merge the TODO list to master. > > Starting on Monday we will dedicate 3 days on doing as much as possible > to reduce that list to zero. Documentation PRs will need to remove the > corresponding lines from the TODO list. > > Thanks > > Tristan > > > [1] https://github.com/infinispan/infinispan/pull/4572 -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Wed Sep 28 02:23:48 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 28 Sep 2016 08:23:48 +0200 Subject: [infinispan-dev] Documentation mini-sprint In-Reply-To: References: <43a25ed6-71da-97f3-b0ba-43544a34db05@redhat.com> Message-ID: hahaha I knew I will get hit by this boomerang :D Anyway - thanks for this TODO list. 
I like the idea of removing items from it with proper commits. On Tue, Sep 27, 2016 at 8:42 PM, Tristan Tarrant wrote: > Oh, and since Sebastian asked for clear, and well defined goals, I have > come up with a good one: > > let's make the documentation not suck ! > > Tristan > > On 27/09/16 18:21, Tristan Tarrant wrote: > > Hi all, > > > > I have created a PR [1] which contains a TODO list for our > > documentation. This makes the review process public and uses the > > familiar GitHub tooling. > > > > I would like everybody to dedicate some time before the end of this week > > in looking at the items and making proposals to improve the docs. > > > > On Friday I will merge the TODO list to master. > > > > Starting on Monday we will dedicate 3 days on doing as much as possible > > to reduce that list to zero. Documentation PRs will need to remove the > > corresponding lines from the TODO list. > > > > Thanks > > > > Tristan > > > > > > [1] https://github.com/infinispan/infinispan/pull/4572 > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160928/bac7967f/attachment.html From dan.berindei at gmail.com Wed Sep 28 03:09:44 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 28 Sep 2016 10:09:44 +0300 Subject: [infinispan-dev] Fine grained maps In-Reply-To: <57EA8D23.7000003@redhat.com> References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> <57EA8D23.7000003@redhat.com> Message-ID: On Tue, Sep 27, 2016 at 6:15 PM, Radim Vansa wrote: > On 09/27/2016 04:47 PM, Dan Berindei wrote: >> On Tue, Sep 27, 2016 at 10:28 AM, Radim Vansa wrote: >>> To Pedro: I have figured out that it shouldn't work rather >>> theoretically, so I haven't crashed into ISPN-2729. >>> >>> On 09/26/2016 10:51 PM, Dan Berindei wrote: >>>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: ... >>>>> In pessimistic caches we have to be more cautious, since users >>>>> manipulate the locks directly and reason about them more. Therefore, we >>>>> need to lock the composite keys during transaction runtime, but in >>>>> addition to that, during the commit itself we should lock the main key >>>>> for the duration of the commit if necessary - pessimistic caches don't >>>>> sport WSC, but I was looking for some atomicity options for deltas. >>>>> >>>> -1 to implicitly locking the main key. If a DeltaAware implementation >>>> wants to support partial locking, then it should take care of the >>>> atomicity of the merge operation itself. If it doesn't want to support >>>> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >>>> It's a bit unfortunate that applyDelta() looks like a method that >>>> anyone can call, but it should only be called by the DeltaAware >>>> implementation itself. >>> As I have mentioned in my last mail, I found that it's not that easy, so >>> I am not implementing that. But it's not about taking care of atomicity >>> of the merge - applying merge can be synchronized, but you have to do >>> that with the entry stored in DC when the entry is about to be stored in >>> DC - and this is the only moment you can squeeze the WSC inl, because >>> the DeltaAware can't know anything about WSCs. That's the DC locking >>> piggyback you -10. 
>>> >> I think you're making it harder than it should be, because you're >> trying to come up with a generic solution that works with any possible >> data structure. > > I am trying to work with two concepts: (Copyable)DeltaAware interface, > and ApplyDeltaCommand with its set of locks. Atomic maps are > higher-level concept and there should not be any Ugh!s [1]. > We can't look at ApplyDeltaCommand in isolation, as the user of the data structure should never call applyDelta() directly. All DeltaAware structures should have a user-facing proxy that knows the internal structure and calls applyDelta(), following the rules that we set. The way I see it, these are the 2 low-level concepts, both using the DeltaAware interface: 1) Coarse-grained DeltaAware (high-level equivalent: AtomicMap). These use PutKeyValueCommand(with Flag.DELTA_WRITE set in the constructor), although I do have a stale branch trying to make them use ApplyDeltaCommand(locksToAcquire=null). These are merged in the invocation context, using only the regular locks. 2) Fine-grained DeltaAware (high-level equivalent: FineGrainedAtomicMap). These use use ApplyDeltaCommand(locksToAcquire=CompositeKey*) and DeltaAwareCacheEntry, and must be able to merge concurrent updates to separate subkeys without losing updates. Currently both types of data structures can implement either DeltaAware or CopyableDeltaAware, but non-copyable DeltaAware breaks transaction isolation (and listeners, and query), so we should require all DeltaAwares to be copyable. (As a bridge, we can use serialization+deserialization for those that are not.) In theory, applyDelta() can also apply a delta without first issuing a read for the key, e.g. a set of counters or the DeltaAwareList used by our old M/R framework. I don't think we have any tests for this scenario now, so I'd prohibit it explicitly. > In any implementation of DeltaAwareCacheEntry.commit(), you'll have to > atomically load the (Im)mutableCacheEntry from DC/cache store and store > it into DC. Yes, you could do a load, copy (because the delta is > modifying), apply delta/run WSC, compare&set in DC, but that's quite > annoying loop implemented through exception handling <- I haven't > proposed this and focused on changing the locking scheme. > I'm ok with doing the merge while holding the DC lock for fine-grained DeltaAware, as we do now. For coarse-grained DeltaAware, we can keep doing the merge in the context entry. I'm not convinced we need new locking for the write-skew check in either case. > [1] > https://github.com/infinispan/jdg/blob/3aaa3d85fe9a90ee3c371b44ff5e5b36414c69fd/core/src/main/java/org/infinispan/container/entries/ReadCommittedEntry.java#L150 > >> But if a data structure is not suitable for >> fine-grained locking, it should just use regular locking instead >> (locksToAcquire = {mainKey}). >> >> E.g. any ordered structure is out of the question for fine-grained >> locking, but it should be possible to implement a fine-grained set/bag >> without any new locking in core. >> >> As you may have seen from ISPN-3123 and ISPN-5584, I think the problem >> with FGAM is that it's not granular enough: we shouldn't throw >> WriteSkewExceptions just because two transactions modify the same >> FGAM, we should only throw the WriteSkewException when both >> transaction modify the same subkey. > > You're right that WSC is not fine-grained enough, and at this point you > can't solve that generally - how do you apply WSC on DeltaAware when you > know that it locks certain keys? 
And you would add the static call to > WSCHelper into DeltaAwareCacheEntry, class made for storing value & > interacting with DC? > I'm not sure how we should do it, but we'd almost certainly need a new kind of metadata that holds a map of subkeys to versions instead of a single version. I'd try to modify EntryWrappingInterceptor to add dummy ClusteredRepeatableReadEntries in the context for all the `locksToAcquire` composite keys during ApplyDeltaCommand. I think we can add all the subkey versions to the transaction's versionsSeenMap the moment we read the DeltaAware, and let the prepare perform the regular WSC for those fake entries. Of course, those fake entries couldn't load their version from the data container/persistence, so we'd also have to move the version loading to the EntryWrappingInterceptor, but I think you've already started working on that. Cheers Dan From rvansa at redhat.com Thu Sep 29 04:29:09 2016 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 29 Sep 2016 10:29:09 +0200 Subject: [infinispan-dev] Fine grained maps In-Reply-To: References: <57E8D008.3000106@redhat.com> <57EA1F91.8060202@redhat.com> <57EA8D23.7000003@redhat.com> Message-ID: <57ECD0D5.8040703@redhat.com> On 09/28/2016 09:09 AM, Dan Berindei wrote: > On Tue, Sep 27, 2016 at 6:15 PM, Radim Vansa wrote: >> On 09/27/2016 04:47 PM, Dan Berindei wrote: >>> On Tue, Sep 27, 2016 at 10:28 AM, Radim Vansa wrote: >>>> To Pedro: I have figured out that it shouldn't work rather >>>> theoretically, so I haven't crashed into ISPN-2729. >>>> >>>> On 09/26/2016 10:51 PM, Dan Berindei wrote: >>>>> On Mon, Sep 26, 2016 at 10:36 AM, Radim Vansa wrote: > ... >>>>>> In pessimistic caches we have to be more cautious, since users >>>>>> manipulate the locks directly and reason about them more. Therefore, we >>>>>> need to lock the composite keys during transaction runtime, but in >>>>>> addition to that, during the commit itself we should lock the main key >>>>>> for the duration of the commit if necessary - pessimistic caches don't >>>>>> sport WSC, but I was looking for some atomicity options for deltas. >>>>>> >>>>> -1 to implicitly locking the main key. If a DeltaAware implementation >>>>> wants to support partial locking, then it should take care of the >>>>> atomicity of the merge operation itself. If it doesn't want to support >>>>> partial locking, then it shouldn't use AdvancedCache.applyDelta(). >>>>> It's a bit unfortunate that applyDelta() looks like a method that >>>>> anyone can call, but it should only be called by the DeltaAware >>>>> implementation itself. >>>> As I have mentioned in my last mail, I found that it's not that easy, so >>>> I am not implementing that. But it's not about taking care of atomicity >>>> of the merge - applying merge can be synchronized, but you have to do >>>> that with the entry stored in DC when the entry is about to be stored in >>>> DC - and this is the only moment you can squeeze the WSC inl, because >>>> the DeltaAware can't know anything about WSCs. That's the DC locking >>>> piggyback you -10. >>>> >>> I think you're making it harder than it should be, because you're >>> trying to come up with a generic solution that works with any possible >>> data structure. >> I am trying to work with two concepts: (Copyable)DeltaAware interface, >> and ApplyDeltaCommand with its set of locks. Atomic maps are >> higher-level concept and there should not be any Ugh!s [1]. 
>> > We can't look at ApplyDeltaCommand in isolation, as the user of the > data structure should never call applyDelta() directly. All DeltaAware > structures should have a user-facing proxy that knows the internal > structure and calls applyDelta(), following the rules that we set. > > The way I see it, these are the 2 low-level concepts, both using the > DeltaAware interface: > > 1) Coarse-grained DeltaAware (high-level equivalent: AtomicMap). > These use PutKeyValueCommand(with Flag.DELTA_WRITE set in the > constructor), although I do have a stale branch trying to make them > use ApplyDeltaCommand(locksToAcquire=null). > These are merged in the invocation context, using only the regular locks. > > 2) Fine-grained DeltaAware (high-level equivalent: FineGrainedAtomicMap). > These use use ApplyDeltaCommand(locksToAcquire=CompositeKey*) and > DeltaAwareCacheEntry, and must be able to merge concurrent updates to > separate subkeys without losing updates. > > Currently both types of data structures can implement either > DeltaAware or CopyableDeltaAware, but non-copyable DeltaAware breaks > transaction isolation (and listeners, and query), so we should require > all DeltaAwares to be copyable. (As a bridge, we can use > serialization+deserialization for those that are not.) > > In theory, applyDelta() can also apply a delta without first issuing a > read for the key, e.g. a set of counters or the DeltaAwareList used by > our old M/R framework. I don't think we have any tests for this > scenario now, so I'd prohibit it explicitly. > >> In any implementation of DeltaAwareCacheEntry.commit(), you'll have to >> atomically load the (Im)mutableCacheEntry from DC/cache store and store >> it into DC. Yes, you could do a load, copy (because the delta is >> modifying), apply delta/run WSC, compare&set in DC, but that's quite >> annoying loop implemented through exception handling <- I haven't >> proposed this and focused on changing the locking scheme. >> > I'm ok with doing the merge while holding the DC lock for fine-grained > DeltaAware, as we do now. For coarse-grained DeltaAware, we can keep > doing the merge in the context entry. > > I'm not convinced we need new locking for the write-skew check in either case. "...doing the merge while holding DC lock..." means that it's already second phase of 2PC. Therefore, if the WSC fails, you can't rollback the transaction. Unless you want to do a RPC while holding DC lock. Btw., I guess that you don't have any motivation to move persistence loading from ClusteredRepeatableReadEntry to persistence-related interceptors? >> [1] >> https://github.com/infinispan/jdg/blob/3aaa3d85fe9a90ee3c371b44ff5e5b36414c69fd/core/src/main/java/org/infinispan/container/entries/ReadCommittedEntry.java#L150 >> >>> But if a data structure is not suitable for >>> fine-grained locking, it should just use regular locking instead >>> (locksToAcquire = {mainKey}). >>> >>> E.g. any ordered structure is out of the question for fine-grained >>> locking, but it should be possible to implement a fine-grained set/bag >>> without any new locking in core. >>> >>> As you may have seen from ISPN-3123 and ISPN-5584, I think the problem >>> with FGAM is that it's not granular enough: we shouldn't throw >>> WriteSkewExceptions just because two transactions modify the same >>> FGAM, we should only throw the WriteSkewException when both >>> transaction modify the same subkey. 
>> You're right that WSC is not fine-grained enough, and at this point you >> can't solve that generally - how do you apply WSC on DeltaAware when you >> know that it locks certain keys? And you would add the static call to >> WSCHelper into DeltaAwareCacheEntry, class made for storing value & >> interacting with DC? >> > I'm not sure how we should do it, but we'd almost certainly need a new > kind of metadata that holds a map of subkeys to versions instead of a > single version. > > I'd try to modify EntryWrappingInterceptor to add dummy > ClusteredRepeatableReadEntries in the context for all the > `locksToAcquire` composite keys during ApplyDeltaCommand. I think we > can add all the subkey versions to the transaction's versionsSeenMap > the moment we read the DeltaAware, and let the prepare perform the > regular WSC for those fake entries. It's not limited to ApplyDeltaCommand - this command should modify the metadata, adding the subkeys, but you have to record the versions seen upon first read (e.g. Get*). But ok, you can check the subkeys in the metadata. You also have to deal with non-existent versions of subkeys for existing keys later on as you don't know about subkeys added concurrently. > > Of course, those fake entries couldn't load their version from the > data container/persistence, so we'd also have to move the version > loading to the EntryWrappingInterceptor, but I think you've already > started working on that. No; I've dropped my refactoring efforts for now, all what I've done is in [1] - please review when time permits. [1] https://github.com/infinispan/infinispan/pull/4564 Radim > > Cheers > Dan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From vjuranek at redhat.com Thu Sep 29 05:13:16 2016 From: vjuranek at redhat.com (Vojtech Juranek) Date: Thu, 29 Sep 2016 11:13:16 +0200 Subject: [infinispan-dev] Ceph cache store In-Reply-To: References: <1481975.UHkj8RgnTN@localhost.localdomain> Message-ID: <1542540.0oL6v8NXfi@localhost.localdomain> Hi Sebastian, sorry for late reply. > The only thing that comes into my mind is to test it with > Kubernetes/OpenShift Ceph volumes [6]. I'm not very familiar with k8s and its doc page doesn't provide any detail how it works under the hood, but AFAICT (looking on the source code [1]), it uses Ceph FS, not directly librados which is used by cache store, so IMHO there's not much to test. Single file store or soft index files store would be more appropriate to test with k8s Ceph volume. But what I'd like to do definitely in the future is some performance comparison between Ceph store using directly librados, cloud store using Ceph via RadosGW and single file store using Ceph via CephFS. Thanks Vojta [1] https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cephfs/cephfs.go -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. 
Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160929/753ea62a/attachment-0001.bin From slaskawi at redhat.com Thu Sep 29 10:36:55 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 29 Sep 2016 16:36:55 +0200 Subject: [infinispan-dev] Ceph cache store In-Reply-To: <1542540.0oL6v8NXfi@localhost.localdomain> References: <1481975.UHkj8RgnTN@localhost.localdomain> <1542540.0oL6v8NXfi@localhost.localdomain> Message-ID: Ok, sounds good. Thanks Vojtech! On Thu, Sep 29, 2016 at 11:13 AM, Vojtech Juranek wrote: > Hi Sebastian, > sorry for late reply. > > > The only thing that comes into my mind is to test it with > > Kubernetes/OpenShift Ceph volumes [6]. > > I'm not very familiar with k8s and its doc page doesn't provide any detail > how > it works under the hood, but AFAICT (looking on the source code [1]), it > uses > Ceph FS, not directly librados which is used by cache store, so IMHO > there's > not much to test. Single file store or soft index files store would be more > appropriate to test with k8s Ceph volume. > > But what I'd like to do definitely in the future is some performance > comparison between Ceph store using directly librados, cloud store using > Ceph > via RadosGW and single file store using Ceph via CephFS. > > Thanks > Vojta > > [1] > https://github.com/kubernetes/kubernetes/blob/master/pkg/ > volume/cephfs/cephfs.go > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160929/f5388e01/attachment.html From slaskawi at redhat.com Fri Sep 30 02:53:32 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 30 Sep 2016 08:53:32 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense Message-ID: Hey! A while ago I asked Radim and Dan about these kind of constructs [1]: private boolean trace = logger.isTraceEnabled(); //stored in a field ... called in some method ... if(trace) logger.tracef(...); ... At first they seemed wrong to me, because if one changes logging level (using JMX for example), the code won't notice it. I also though it's quite ok to use tracef directly, because JIT will inline and optimize it. Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef indeed checks if the logging level is enabled but since JBoss Logging may use different backends, the check is not trivial and is not inlined (at least with default settings). The performance results look like this: Benchmark Mode Cnt Score Error Units MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 ops/s MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 ops/s So if you even see a construct like this: logger.debuf or logger.tracef - make sure you check if the logging level is enabled (and the check result is stored in a field). That was a bit surprising and interesting lesson :D Thanks Sebastian [1] https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 [2] https://github.com/slaskawi/jboss-logging-perf-test -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/59a29117/attachment.html From wfink at redhat.com Fri Sep 30 03:43:01 2016 From: wfink at redhat.com (Wolf Fink) Date: Fri, 30 Sep 2016 09:43:01 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: Message-ID: I understand the impact of this, but we should keep in mind that there are some important points where it is worse if you can't change the logging on the fly for a few moments to check something and switch back. For the test my understanding is that you use - the logger.tracef direct - check logger.isTraceEnabled() first I see the variable stored but not used - or am I wrong and the benchmark test do something extra? So interesting would be the difference between - log.trace("xyz") - if(log.isTraceEnabled) log.trace("xyz") - log.tracef("xyz %s", var) - if(log.isTraceEnabled) log.tracef("xyz %s",var) and the construct with storing the log level in a static field - boolean isTrace=log.isTraceEnabled() if(isTrace) log.tracef("xyz %s",var) On Fri, Sep 30, 2016 at 8:53 AM, Sebastian Laskawiec wrote: > Hey! > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > ... called in some method ... > if(trace) > logger.tracef(...); > ... > > At first they seemed wrong to me, because if one changes logging level > (using JMX for example), the code won't notice it. I also though it's quite > ok to use tracef directly, because JIT will inline and optimize it. > > Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef > indeed checks if the logging level is enabled but since JBoss Logging may > use different backends, the check is not trivial and is not inlined (at > least with default settings). The performance results look like this: > Benchmark Mode Cnt Score Error Units > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 > ops/s > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 > ops/s > > So if you even see a construct like this: logger.debuf or logger.tracef - > make sure you check if the logging level is enabled (and the check result > is stored in a field). > > That was a bit surprising and interesting lesson :D > > Thanks > Sebastian > > [1] https://github.com/infinispan/infinispan/pull/ > 4538#discussion_r80666086 > [2] https://github.com/slaskawi/jboss-logging-perf-test > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/74194acb/attachment.html From emmanuel at hibernate.org Fri Sep 30 04:11:40 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 30 Sep 2016 10:11:40 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <814975603.1983442.1474650371159.JavaMail.zimbra@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> <814975603.1983442.1474650371159.JavaMail.zimbra@redhat.com> Message-ID: <20160930081140.GB41213@hibernate.org> >> 1. How do you verify that a Javascript client works the way a Javascript >> program would use it? 
>> IOW, even if you could call JS from Java, what you'd be verifying is that >> whichever contorsionate way of calling JS from Java works, which might not >> necessarily mean it works when a real JS program calls it. > >I think the user workflow can be verified separately. Being able to verify the functional behavior of clients written in multiple languages using a single test suite would be a huge win, IMO. I agree with you though that this should be coupled with an actual end-user test where the Javascript client is run against a real node server, a C++ client is installed from RPMs and built into an application, etc for a complete certification of a client. > That was my thinking too, often TCK based tools also have a separate test suite. You could have a common TCK for behavior and a separate test suite for each client to make sure it works as expected between the chair and the API. From emmanuel at hibernate.org Fri Sep 30 04:13:10 2016 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 30 Sep 2016 10:13:10 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: <20160930081310.GC41213@hibernate.org> On Fri 2016-09-23 17:33, Galder Zamarre?o wrote: >Maybe some day we'll have a Java-based testsuite that more easily allows continous testing. Scala, through SBT, do have something along this lines, so I don't think it's necessarily impossible, but we're not there yet. And, as I said above, you always have the first issue: testing how the user will use things. This reminded me of Infinitest https://infinitest.github.io Which bring continuous testing to your IDEs (for Java). From rvansa at redhat.com Fri Sep 30 04:36:17 2016 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 30 Sep 2016 10:36:17 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: Message-ID: <57EE2401.5040203@redhat.com> Wolf, the isTraceEnabled() is called only once during class initialization (if that's a static field) or instance creation, but it is usually stored as final field and therefore the JVM is likely to optimize the calls. It's possible to change final fields, and in this case it's not as unsafe (the only risk is not logging something and the cost related to recompiling the class), but the problematic part is finding them :) In Infinispan, you get close to all logging if you inspect all classes in component registry (and global component registry). It's not as easy as setting the level through JMX, though. R. On 09/30/2016 09:43 AM, Wolf Fink wrote: > I understand the impact of this, but we should keep in mind that there > are some important points where it is worse if you can't change the > logging on the fly for a few moments to check something and switch back. > > For the test my understanding is that you use > - the logger.tracef direct > - check logger.isTraceEnabled() first > > I see the variable stored but not used - or am I wrong and the > benchmark test do something extra? 
> > > So interesting would be the difference between > - log.trace("xyz") > - if(log.isTraceEnabled) log.trace("xyz") > - log.tracef("xyz %s", var) > - if(log.isTraceEnabled) log.tracef("xyz %s",var) > and the construct with storing the log level in a static field > - boolean isTrace=log.isTraceEnabled() > if(isTrace) log.tracef("xyz %s",var) > > > On Fri, Sep 30, 2016 at 8:53 AM, Sebastian Laskawiec > > wrote: > > Hey! > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > ... called in some method ... > if(trace) > logger.tracef(...); > ... > > At first they seemed wrong to me, because if one changes logging > level (using JMX for example), the code won't notice it. I also > though it's quite ok to use tracef directly, because JIT will > inline and optimize it. > > Unfortunately my benchmarks [2] show that I was wrong. > Logger#tracef indeed checks if the logging level is enabled but > since JBoss Logging may use different backends, the check is not > trivial and is not inlined (at least with default settings). The > performance results look like this: > Benchmark Mode Cnt Score Error Units > MyBenchmark.noVariable thrpt 20 *717252060.124* ? > 13420522.229 ops/s > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? > 50214969.572 ops/s > > So if you even see a construct like this: logger.debuf or > logger.tracef - make sure you check if the logging level is > enabled (and the check result is stored in a field). > > That was a bit surprising and interesting lesson :D > > Thanks > Sebastian > > [1] > https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 > > [2] https://github.com/slaskawi/jboss-logging-perf-test > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Fri Sep 30 04:40:59 2016 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 30 Sep 2016 10:40:59 +0200 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: On 23/09/16 17:33, Galder Zamarre?o wrote: > ^ I thought about all of this when working on the JS client, and although like you, I thought this was the biggest hurdle, eventually I realised that there are bigger issues than that: > > 1. How do you verify that a Javascript client works the way a Javascript program would use it? > IOW, even if you could call JS from Java, what you'd be verifying is that whichever contorsionate way of calling JS from Java works, which might not necessarily mean it works when a real JS program calls it. If a specific language API wants to "feel native" in its environment that is fine, and there should be local tests to exercise that, but from a protocol compliance point of view this is irrelevant. We need to verify that: - for each Hot Rod operation and variant (e.g. flags, metadata) the client is sending the correct request. 
- the client should also be able to correctly process the response, again with different variations (result, not found, errors, metadata) - for the different client intelligence levels the client should be able to correctly process the returned headers (topology, hashing, etc) - the client should correctly react to topology changes and failover - the client should correctly react to events and fire the appropriate listeners - the client should be able to correctly handle encryption handshaking and report error situations properly - the client should be able to correctly handle authentication and report error situations properly for the client-supported mechanisms Additionally client might wish to test for the following, but this is not part of the protocol specification: - marshalling - async methods - site failover - language-specific synctactic sugar Also, to provide a common ground for the server configuration used by both types of tests (TCK and client-specific), we should really use docker containers with appropriately named configs together with common scripts that recreate the test scenarios, so that each testsuite doesn't have to reinvent the wheel. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Fri Sep 30 05:07:55 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 30 Sep 2016 11:07:55 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: <57EE2401.5040203@redhat.com> References: <57EE2401.5040203@redhat.com> Message-ID: Yes, exactly - Radim is correct. I added also one test - "if(logger.isTraceEnabled()) logger.trace(...)". The results look like the following: Benchmark Mode Cnt Score Error Units MyBenchmark.noVariable thrpt 20 725265655.062 ? 1777607.124 ops/s *MyBenchmark.withIsTraceEnabledCheck thrpt 20 725116016.785 ? 2812327.685 ops/s* MyBenchmark.withVariable thrpt 20 2415571350.540 ? 7052276.025 ops/s Both results (logger.tracef(...) and if(logger.isTraceEnabled()) logger.tracef(...)) look exactly the same. This is expected because logger.tracef checks if proper level is enabled before processing input. Thanks Sebastian On Fri, Sep 30, 2016 at 10:36 AM, Radim Vansa wrote: > Wolf, the isTraceEnabled() is called only once during class > initialization (if that's a static field) or instance creation, but it > is usually stored as final field and therefore the JVM is likely to > optimize the calls. > > It's possible to change final fields, and in this case it's not as > unsafe (the only risk is not logging something and the cost related to > recompiling the class), but the problematic part is finding them :) In > Infinispan, you get close to all logging if you inspect all classes in > component registry (and global component registry). It's not as easy as > setting the level through JMX, though. > > R. > > On 09/30/2016 09:43 AM, Wolf Fink wrote: > > I understand the impact of this, but we should keep in mind that there > > are some important points where it is worse if you can't change the > > logging on the fly for a few moments to check something and switch back. > > > > For the test my understanding is that you use > > - the logger.tracef direct > > - check logger.isTraceEnabled() first > > > > I see the variable stored but not used - or am I wrong and the > > benchmark test do something extra? 
> > > > > > So interesting would be the difference between > > - log.trace("xyz") > > - if(log.isTraceEnabled) log.trace("xyz") > > - log.tracef("xyz %s", var) > > - if(log.isTraceEnabled) log.tracef("xyz %s",var) > > and the construct with storing the log level in a static field > > - boolean isTrace=log.isTraceEnabled() > > if(isTrace) log.tracef("xyz %s",var) > > > > > > On Fri, Sep 30, 2016 at 8:53 AM, Sebastian Laskawiec > > > wrote: > > > > Hey! > > > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > > > ... called in some method ... > > if(trace) > > logger.tracef(...); > > ... > > > > At first they seemed wrong to me, because if one changes logging > > level (using JMX for example), the code won't notice it. I also > > though it's quite ok to use tracef directly, because JIT will > > inline and optimize it. > > > > Unfortunately my benchmarks [2] show that I was wrong. > > Logger#tracef indeed checks if the logging level is enabled but > > since JBoss Logging may use different backends, the check is not > > trivial and is not inlined (at least with default settings). The > > performance results look like this: > > Benchmark Mode Cnt Score Error Units > > MyBenchmark.noVariable thrpt 20 *717252060.124* ? > > 13420522.229 ops/s > > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? > > 50214969.572 ops/s > > > > So if you even see a construct like this: logger.debuf or > > logger.tracef - make sure you check if the logging level is > > enabled (and the check result is stored in a field). > > > > That was a bit surprising and interesting lesson :D > > > > Thanks > > Sebastian > > > > [1] > > https://github.com/infinispan/infinispan/pull/4538# > discussion_r80666086 > > 4538#discussion_r80666086> > > [2] https://github.com/slaskawi/jboss-logging-perf-test > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/b63714a5/attachment.html From gustavo at infinispan.org Fri Sep 30 05:14:15 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Fri, 30 Sep 2016 10:14:15 +0100 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: On Fri, Sep 30, 2016 at 9:40 AM, Tristan Tarrant wrote: > On 23/09/16 17:33, Galder Zamarre?o wrote: > > ^ I thought about all of this when working on the JS client, and > although like you, I thought this was the biggest hurdle, eventually I > realised that there are bigger issues than that: > > > > 1. How do you verify that a Javascript client works the way a Javascript > program would use it? 
> > IOW, even if you could call JS from Java, what you'd be verifying is > that whichever contorsionate way of calling JS from Java works, which might > not necessarily mean it works when a real JS program calls it. > If a specific language API wants to "feel native" in its environment > that is fine, and there should be local tests to exercise that, but from > a protocol compliance point of view this is irrelevant. We need to > verify that: > > - for each Hot Rod operation and variant (e.g. flags, metadata) the > client is sending the correct request. > - the client should also be able to correctly process the response, > again with different variations (result, not found, errors, metadata) > - for the different client intelligence levels the client should be able > to correctly process the returned headers (topology, hashing, etc) > - the client should correctly react to topology changes and failover > - the client should correctly react to events and fire the appropriate > listeners > - the client should be able to correctly handle encryption handshaking > and report error situations properly > - the client should be able to correctly handle authentication and > report error situations properly for the client-supported mechanisms > I wonder if something like Haxe [1] could help here in defining a language agnostic TCK (maybe an skeleton?) that gets compiled to several platforms. Each platform's testsuite would them "implement" the spec and of course would be free to add 'native' tests as well. There's also a unit test framework built on top of [1], worth exploring [1] https://haxe.org/ [2] https://github.com/massiveinteractive/MassiveUnit/ > Additionally client might wish to test for the following, but this is > not part of the protocol specification: > > - marshalling > - async methods > - site failover > - language-specific synctactic sugar > > Also, to provide a common ground for the server configuration used by > both types of tests (TCK and client-specific), we should really use > docker containers with appropriately named configs together with common > scripts that recreate the test scenarios, so that each testsuite doesn't > have to reinvent the wheel. > > +1 for docker, as it no longer requires the hack of having VirtualBox on non-Linux platforms. >From my experience, most of the testing cases don't even need huge pre-canned XMLs, all configurations can be achieved by runtime manipulation of the server model. Cheers, Gustavo Tristan > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/3a6a3cb3/attachment-0001.html From ion at infinispan.org Fri Sep 30 05:16:32 2016 From: ion at infinispan.org (Ion Savin) Date: Fri, 30 Sep 2016 12:16:32 +0300 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: <06a716ef-c038-2639-8e1e-71ac8a3c54ce@infinispan.org> Hi all, > - for each Hot Rod operation and variant (e.g. flags, metadata) the > client is sending the correct request. 
> - the client should also be able to correctly process the response, > again with different variations (result, not found, errors, metadata) > - for the different client intelligence levels the client should be able > to correctly process the returned headers (topology, hashing, etc) > - the client should correctly react to topology changes and failover > - the client should correctly react to events and fire the appropriate > listeners > - the client should be able to correctly handle encryption handshaking > and report error situations properly > - the client should be able to correctly handle authentication and > report error situations properly for the client-supported mechanisms At least for some of this cases this approach could work for protocol level client tests: Implement a tool (single process) which mocks the server side, can accept multiple connections from clients to simulate a cluster and can verify that the interaction with the client matches a predefined script. There could be a separate script for each HR version / intelligence level. The script is interpreted by the mock and not dependent on any of the languages in which the clients are implemented. All assertions are done in this tool and not the client (e.g. to test get() generate a random value and expect the client to do a put() on another key with the value it got using get()). For each HR client implement a client app in that language which interacts with the mock as prescribed by the script. This is very similar to how financial institution automate certification for FIX protocol implementations / integration work. -- Ion Savin From gustavo at infinispan.org Fri Sep 30 05:58:27 2016 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Fri, 30 Sep 2016 10:58:27 +0100 Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: > > > > I wonder if something like Haxe [1] could help here in defining a language > agnostic > TCK (maybe an skeleton?) that gets compiled to several platforms. Each > platform's > testsuite would them "implement" the spec and of course would be free to > add > 'native' tests as well. There's also a unit test framework built on top of > [1], worth exploring > > [1] https://haxe.org/ > [2] https://github.com/massiveinteractive/MassiveUnit/ > > This is an idea of how to use it: 1) Define an interface using the Haxe language (just assume syntax is correct): interface IHotRodClient { get(Object k) put(Object k, Object value) etc } 2) Write the TCK in terms of that interface. The Haxe language has lots of libraries, including unit tests: class TCK { test1( ) { ... } test2( ) { ... } etc void Main(IHotRodClient client) new TCK(client).run() } 3) Cross compile the TCK and distribute it as jar, dll, js, etc 4) Each Hot Rod client consumes the artifact above 5) Each Hot Rod runs the TCK passing its implementation of IHotRodClient 6) Profit My 2p, Gustavo > >> >> Tristan >> >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/2cf4fc15/attachment.html From david.lloyd at redhat.com Fri Sep 30 07:53:06 2016 From: david.lloyd at redhat.com (David M. Lloyd) Date: Fri, 30 Sep 2016 06:53:06 -0500 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: Message-ID: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: > Hey! > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > ... called in some method ... > if(trace) > logger.tracef(...); > ... > > At first they seemed wrong to me, because if one changes logging level > (using JMX for example), the code won't notice it. I also though it's > quite ok to use tracef directly, because JIT will inline and optimize it. > > Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef > indeed checks if the logging level is enabled but since JBoss Logging > may use different backends, the check is not trivial and is not inlined > (at least with default settings). What backend where you using with your test? > The performance results look like this: > Benchmark Mode Cnt Score Error Units > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 ops/s > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 ops/s > > So if you even see a construct like this: logger.debuf or logger.tracef > - make sure you check if the logging level is enabled (and the check > result is stored in a field). > > That was a bit surprising and interesting lesson :D > > Thanks > Sebastian > > [1] https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 > [2] https://github.com/slaskawi/jboss-logging-perf-test > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- - DML From wfink at redhat.com Fri Sep 30 08:14:20 2016 From: wfink at redhat.com (Wolf Fink) Date: Fri, 30 Sep 2016 14:14:20 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> Message-ID: Ok, thanks for clarifying it. So there is a factor of 3 for the test if no trace is enabled, just for checking. It makes sense to use it. But my concern is still that it is sometimes good to have the possibility to enable debug for some important information in production just on the fly and switch it of to prevent from throtteling the server by that log statements or restart the server. We have the same issue in EAP but here a restart is not that bad as here you don't have to load the cache or rebalance the cluster for stop/start. - Wolf On Fri, Sep 30, 2016 at 1:53 PM, David M. Lloyd wrote: > On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: > > Hey! > > > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > > > ... called in some method ... > > if(trace) > > logger.tracef(...); > > ... > > > > At first they seemed wrong to me, because if one changes logging level > > (using JMX for example), the code won't notice it. I also though it's > > quite ok to use tracef directly, because JIT will inline and optimize it. > > > > Unfortunately my benchmarks [2] show that I was wrong. 
Logger#tracef > > indeed checks if the logging level is enabled but since JBoss Logging > > may use different backends, the check is not trivial and is not inlined > > (at least with default settings). > > What backend where you using with your test? > > > The performance results look like this: > > Benchmark Mode Cnt Score Error > Units > > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 > ops/s > > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 > ops/s > > > > So if you even see a construct like this: logger.debuf or logger.tracef > > - make sure you check if the logging level is enabled (and the check > > result is stored in a field). > > > > That was a bit surprising and interesting lesson :D > > > > Thanks > > Sebastian > > > > [1] https://github.com/infinispan/infinispan/pull/4538# > discussion_r80666086 > > [2] https://github.com/slaskawi/jboss-logging-perf-test > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > - DML > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/34cf7c7f/attachment.html From slaskawi at redhat.com Fri Sep 30 08:40:19 2016 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 30 Sep 2016 14:40:19 +0200 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> Message-ID: Hey David! It's Java Util Logging (so the JDKLogger implementation). Thanks Sebastian On Fri, Sep 30, 2016 at 1:53 PM, David M. Lloyd wrote: > On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: > > Hey! > > > > A while ago I asked Radim and Dan about these kind of constructs [1]: > > > > private boolean trace = logger.isTraceEnabled(); //stored in a field > > > > ... called in some method ... > > if(trace) > > logger.tracef(...); > > ... > > > > At first they seemed wrong to me, because if one changes logging level > > (using JMX for example), the code won't notice it. I also though it's > > quite ok to use tracef directly, because JIT will inline and optimize it. > > > > Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef > > indeed checks if the logging level is enabled but since JBoss Logging > > may use different backends, the check is not trivial and is not inlined > > (at least with default settings). > > What backend where you using with your test? > > > The performance results look like this: > > Benchmark Mode Cnt Score Error > Units > > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 > ops/s > > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 > ops/s > > > > So if you even see a construct like this: logger.debuf or logger.tracef > > - make sure you check if the logging level is enabled (and the check > > result is stored in a field). 
> > > > That was a bit surprising and interesting lesson :D > > > > Thanks > > Sebastian > > > > [1] https://github.com/infinispan/infinispan/pull/4538# > discussion_r80666086 > > [2] https://github.com/slaskawi/jboss-logging-perf-test > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > - DML > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/6c690327/attachment-0001.html From dan.berindei at gmail.com Fri Sep 30 08:41:58 2016 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 30 Sep 2016 15:41:58 +0300 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> Message-ID: I should stress that we only cache `isTraceEnabled()` in a static field. Debug logging can still be enabled or disabled on the fly. Cheers Dan On Fri, Sep 30, 2016 at 3:14 PM, Wolf Fink wrote: > Ok, > > thanks for clarifying it. > > So there is a factor of 3 for the test if no trace is enabled, just for > checking. > It makes sense to use it. > But my concern is still that it is sometimes good to have the possibility to > enable debug for some important information in production just on the fly > and switch it of to prevent from throtteling the server by that log > statements or restart the server. > We have the same issue in EAP but here a restart is not that bad as here you > don't have to load the cache or rebalance the cluster for stop/start. > > - Wolf > > On Fri, Sep 30, 2016 at 1:53 PM, David M. Lloyd > wrote: >> >> On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: >> > Hey! >> > >> > A while ago I asked Radim and Dan about these kind of constructs [1]: >> > >> > private boolean trace = logger.isTraceEnabled(); //stored in a field >> > >> > ... called in some method ... >> > if(trace) >> > logger.tracef(...); >> > ... >> > >> > At first they seemed wrong to me, because if one changes logging level >> > (using JMX for example), the code won't notice it. I also though it's >> > quite ok to use tracef directly, because JIT will inline and optimize >> > it. >> > >> > Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef >> > indeed checks if the logging level is enabled but since JBoss Logging >> > may use different backends, the check is not trivial and is not inlined >> > (at least with default settings). >> >> What backend where you using with your test? >> >> > The performance results look like this: >> > Benchmark Mode Cnt Score Error >> > Units >> > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 >> > ops/s >> > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 >> > ops/s >> > >> > So if you even see a construct like this: logger.debuf or logger.tracef >> > - make sure you check if the logging level is enabled (and the check >> > result is stored in a field). 
>> > >> > That was a bit surprising and interesting lesson :D >> > >> > Thanks >> > Sebastian >> > >> > [1] >> > https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 >> > [2] https://github.com/slaskawi/jboss-logging-perf-test >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> >> -- >> - DML >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Fri Sep 30 09:23:32 2016 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 30 Sep 2016 14:23:32 +0100 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> Message-ID: this discussion appears on this mailing list approximately every 2 years :) On 30 September 2016 at 13:41, Dan Berindei wrote: > I should stress that we only cache `isTraceEnabled()` in a static > field. Debug logging can still be enabled or disabled on the fly. > > > Cheers > Dan > > > On Fri, Sep 30, 2016 at 3:14 PM, Wolf Fink wrote: >> Ok, >> >> thanks for clarifying it. >> >> So there is a factor of 3 for the test if no trace is enabled, just for >> checking. >> It makes sense to use it. >> But my concern is still that it is sometimes good to have the possibility to >> enable debug for some important information in production just on the fly >> and switch it of to prevent from throtteling the server by that log >> statements or restart the server. >> We have the same issue in EAP but here a restart is not that bad as here you >> don't have to load the cache or rebalance the cluster for stop/start. >> >> - Wolf >> >> On Fri, Sep 30, 2016 at 1:53 PM, David M. Lloyd >> wrote: >>> >>> On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: >>> > Hey! >>> > >>> > A while ago I asked Radim and Dan about these kind of constructs [1]: >>> > >>> > private boolean trace = logger.isTraceEnabled(); //stored in a field >>> > >>> > ... called in some method ... >>> > if(trace) >>> > logger.tracef(...); >>> > ... >>> > >>> > At first they seemed wrong to me, because if one changes logging level >>> > (using JMX for example), the code won't notice it. I also though it's >>> > quite ok to use tracef directly, because JIT will inline and optimize >>> > it. >>> > >>> > Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef >>> > indeed checks if the logging level is enabled but since JBoss Logging >>> > may use different backends, the check is not trivial and is not inlined >>> > (at least with default settings). >>> >>> What backend where you using with your test? >>> >>> > The performance results look like this: >>> > Benchmark Mode Cnt Score Error >>> > Units >>> > MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 >>> > ops/s >>> > MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 >>> > ops/s >>> > >>> > So if you even see a construct like this: logger.debuf or logger.tracef >>> > - make sure you check if the logging level is enabled (and the check >>> > result is stored in a field). 
>>> > >>> > That was a bit surprising and interesting lesson :D >>> > >>> > Thanks >>> > Sebastian >>> > >>> > [1] >>> > https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 >>> > [2] https://github.com/slaskawi/jboss-logging-perf-test >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> >>> -- >>> - DML >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dereed at redhat.com Fri Sep 30 12:16:51 2016 From: dereed at redhat.com (Dennis Reed) Date: Fri, 30 Sep 2016 11:16:51 -0500 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> Message-ID: <57EE8FF3.9020303@redhat.com> As Wolf noted, caching the trace flag is bad when trying to debug issues. Don't do it! It's not worth breaking the logging semantics for a nano-second level performance difference. (if your trace is being called enough for that tiny impact to make any real difference, that trace logging is going to be WAY too verbose to be of any use anyways). If I see it done, I'm going to open a BZ. -Dennis On 09/30/2016 08:23 AM, Sanne Grinovero wrote: > this discussion appears on this mailing list approximately every 2 years :) > > On 30 September 2016 at 13:41, Dan Berindei wrote: >> I should stress that we only cache `isTraceEnabled()` in a static >> field. Debug logging can still be enabled or disabled on the fly. >> >> >> Cheers >> Dan >> >> >> On Fri, Sep 30, 2016 at 3:14 PM, Wolf Fink wrote: >>> Ok, >>> >>> thanks for clarifying it. >>> >>> So there is a factor of 3 for the test if no trace is enabled, just for >>> checking. >>> It makes sense to use it. >>> But my concern is still that it is sometimes good to have the possibility to >>> enable debug for some important information in production just on the fly >>> and switch it of to prevent from throtteling the server by that log >>> statements or restart the server. >>> We have the same issue in EAP but here a restart is not that bad as here you >>> don't have to load the cache or rebalance the cluster for stop/start. >>> >>> - Wolf >>> >>> On Fri, Sep 30, 2016 at 1:53 PM, David M. Lloyd >>> wrote: >>>> >>>> On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: >>>>> Hey! >>>>> >>>>> A while ago I asked Radim and Dan about these kind of constructs [1]: >>>>> >>>>> private boolean trace = logger.isTraceEnabled(); //stored in a field >>>>> >>>>> ... called in some method ... >>>>> if(trace) >>>>> logger.tracef(...); >>>>> ... >>>>> >>>>> At first they seemed wrong to me, because if one changes logging level >>>>> (using JMX for example), the code won't notice it. I also though it's >>>>> quite ok to use tracef directly, because JIT will inline and optimize >>>>> it. >>>>> >>>>> Unfortunately my benchmarks [2] show that I was wrong. 
Logger#tracef >>>>> indeed checks if the logging level is enabled but since JBoss Logging >>>>> may use different backends, the check is not trivial and is not inlined >>>>> (at least with default settings). >>>> >>>> What backend where you using with your test? >>>> >>>>> The performance results look like this: >>>>> Benchmark Mode Cnt Score Error >>>>> Units >>>>> MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 >>>>> ops/s >>>>> MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 >>>>> ops/s >>>>> >>>>> So if you even see a construct like this: logger.debuf or logger.tracef >>>>> - make sure you check if the logging level is enabled (and the check >>>>> result is stored in a field). >>>>> >>>>> That was a bit surprising and interesting lesson :D >>>>> >>>>> Thanks >>>>> Sebastian >>>>> >>>>> [1] >>>>> https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 >>>>> [2] https://github.com/slaskawi/jboss-logging-perf-test >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> -- >>>> - DML >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From afield at redhat.com Fri Sep 30 13:26:24 2016 From: afield at redhat.com (Alan Field) Date: Fri, 30 Sep 2016 13:26:24 -0400 (EDT) Subject: [infinispan-dev] Hot Rod testing In-Reply-To: References: <5f8345e3-8b2d-821d-c94f-3ca82855215e@redhat.com> <60258398-4DCE-42CE-A807-EDDE269E22D0@redhat.com> Message-ID: <920330703.902075.1475256384443.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Gustavo Fernandes" > To: "infinispan -Dev List" > Sent: Friday, September 30, 2016 5:58:27 AM > Subject: Re: [infinispan-dev] Hot Rod testing > > I wonder if something like Haxe [1] could help here in defining a language > > agnostic > > > TCK (maybe an skeleton?) that gets compiled to several platforms. Each > > platform's > > > testsuite would them "implement" the spec and of course would be free to > > add > > > 'native' tests as well. There's also a unit test framework built on top of > > [1], worth exploring > > > [1] https://haxe.org/ > > > [2] https://github.com/massiveinteractive/MassiveUnit/ > > This is an idea of how to use it: > 1) Define an interface using the Haxe language (just assume syntax is > correct): > interface IHotRodClient { > get(Object k) > put(Object k, Object value) > etc > } > 2) Write the TCK in terms of that interface. The Haxe language has lots of > libraries, including unit tests: > class TCK { > test1( ) { ... } > test2( ) { ... 
} > etc > void Main(IHotRodClient client) > new TCK(client).run() > } > 3) Cross compile the TCK and distribute it as jar, dll, js, etc > 4) Each Hot Rod client consumes the artifact above > 5) Each Hot Rod runs the TCK passing its implementation of IHotRodClient > 6) Profit It takes 6 steps to profit?! I think the idea of writing the TCK once and being able to generate the code in the native language of the client is a great idea. The issue will be when we have a Hot Rod client in a language that Haxe doesn't support. (Go?) Thanks, Alan > My 2p, > Gustavo > > > Tristan > > > > > > -- > > > > > > Tristan Tarrant > > > > > > Infinispan Lead > > > > > > JBoss, a division of Red Hat > > > > > > _______________________________________________ > > > > > > infinispan-dev mailing list > > > > > > infinispan-dev at lists.jboss.org > > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20160930/f0bfae37/attachment.html From david.lloyd at redhat.com Fri Sep 30 15:15:45 2016 From: david.lloyd at redhat.com (David M. Lloyd) Date: Fri, 30 Sep 2016 14:15:45 -0500 Subject: [infinispan-dev] if (trace) logger.tracef - it makes sense In-Reply-To: <57EE8FF3.9020303@redhat.com> References: <50e73fd2-49b8-55ae-a7b4-91345b44a7e9@redhat.com> <57EE8FF3.9020303@redhat.com> Message-ID: The performance problem that this trick is meant to resolve is really a problem in the logging backend. It *should* be faster inside of WildFly, where JBoss LogManager is used, because that project just checks a single volatile field for the level check... and the path to that code *should* be inline-friendly. On 09/30/2016 11:16 AM, Dennis Reed wrote: > As Wolf noted, caching the trace flag is bad when trying to debug issues. > > Don't do it! It's not worth breaking the logging semantics for a > nano-second level performance difference. (if your trace is being > called enough for that tiny impact to make any real difference, that > trace logging is going to be WAY too verbose to be of any use anyways). > > If I see it done, I'm going to open a BZ. > > -Dennis > > > On 09/30/2016 08:23 AM, Sanne Grinovero wrote: >> this discussion appears on this mailing list approximately every 2 years :) >> >> On 30 September 2016 at 13:41, Dan Berindei wrote: >>> I should stress that we only cache `isTraceEnabled()` in a static >>> field. Debug logging can still be enabled or disabled on the fly. >>> >>> >>> Cheers >>> Dan >>> >>> >>> On Fri, Sep 30, 2016 at 3:14 PM, Wolf Fink wrote: >>>> Ok, >>>> >>>> thanks for clarifying it. >>>> >>>> So there is a factor of 3 for the test if no trace is enabled, just for >>>> checking. >>>> It makes sense to use it. >>>> But my concern is still that it is sometimes good to have the possibility to >>>> enable debug for some important information in production just on the fly >>>> and switch it of to prevent from throtteling the server by that log >>>> statements or restart the server. >>>> We have the same issue in EAP but here a restart is not that bad as here you >>>> don't have to load the cache or rebalance the cluster for stop/start. >>>> >>>> - Wolf >>>> >>>> On Fri, Sep 30, 2016 at 1:53 PM, David M. 
Lloyd >>>> wrote: >>>>> >>>>> On 09/30/2016 01:53 AM, Sebastian Laskawiec wrote: >>>>>> Hey! >>>>>> >>>>>> A while ago I asked Radim and Dan about these kind of constructs [1]: >>>>>> >>>>>> private boolean trace = logger.isTraceEnabled(); //stored in a field >>>>>> >>>>>> ... called in some method ... >>>>>> if(trace) >>>>>> logger.tracef(...); >>>>>> ... >>>>>> >>>>>> At first they seemed wrong to me, because if one changes logging level >>>>>> (using JMX for example), the code won't notice it. I also though it's >>>>>> quite ok to use tracef directly, because JIT will inline and optimize >>>>>> it. >>>>>> >>>>>> Unfortunately my benchmarks [2] show that I was wrong. Logger#tracef >>>>>> indeed checks if the logging level is enabled but since JBoss Logging >>>>>> may use different backends, the check is not trivial and is not inlined >>>>>> (at least with default settings). >>>>> >>>>> What backend where you using with your test? >>>>> >>>>>> The performance results look like this: >>>>>> Benchmark Mode Cnt Score Error >>>>>> Units >>>>>> MyBenchmark.noVariable thrpt 20 *717252060.124* ? 13420522.229 >>>>>> ops/s >>>>>> MyBenchmark.withVariable thrpt 20 *2358360244.627* ? 50214969.572 >>>>>> ops/s >>>>>> >>>>>> So if you even see a construct like this: logger.debuf or logger.tracef >>>>>> - make sure you check if the logging level is enabled (and the check >>>>>> result is stored in a field). >>>>>> >>>>>> That was a bit surprising and interesting lesson :D >>>>>> >>>>>> Thanks >>>>>> Sebastian >>>>>> >>>>>> [1] >>>>>> https://github.com/infinispan/infinispan/pull/4538#discussion_r80666086 >>>>>> [2] https://github.com/slaskawi/jboss-logging-perf-test >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>> >>>>> -- >>>>> - DML >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- - DML
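To picture the backend difference David describes, where the level check is just one volatile field read, here is a rough sketch of the technique; it is an illustration only, not the actual JBoss LogManager code:

final class LevelGate {
    private static final int TRACE = 400;      // arbitrary value: lower means more verbose
    private volatile int effectiveLevel = 600; // arbitrary value meaning "less verbose than trace"

    boolean isTraceEnabled() {
        // a single volatile read: cheap, branch-predictable and easy to inline
        return effectiveLevel <= TRACE;
    }

    void setLevel(int newLevel) {
        // takes effect immediately for all threads, so trace can still be toggled at runtime
        effectiveLevel = newLevel;
    }
}

With a gate like this the per-call check stays cheap and the level can still be changed on the fly; the benchmark earlier in the thread used the JDK logger backend, where Sebastian measured the uncached check as roughly three times slower than the cached field.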