From jholusa at redhat.com Wed May 3 09:14:36 2017 From: jholusa at redhat.com (Jiri Holusa) Date: Wed, 3 May 2017 09:14:36 -0400 (EDT) Subject: [infinispan-dev] Documentation code snippets In-Reply-To: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> Message-ID: <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> Moving this to infinispan-dev. I've just issued a PR [1], where I set up the code snippet generation. It was actually pretty easy. I started implementing it for the configuration part of the documentation and I came across the following findings/issues. There were more votes for option 2 (see the previous mail for details; in summary, using the existing testsuite), hence I started with that. Pretty soon I ran into the following issues: * XML configuration - since we want to have the element there in the configuration, I have to do one XML file per configuration code snippet -> the number of files will grow and will mess up the "normal" testsuite * IMHO the biggest problem - our testsuite is usually not written with "documentation simplicity" in mind. For example, in the testsuite we barely (= never) do "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", we obtain the cache manager by some helper method. While this is great for testing, you don't want to have this in documentation, as it should be simple and straightforward. Another example would be [2]. Look at the programmatic configuration snippets. In the testsuite, we usually don't have such a trivial setup written so comprehensibly in one place. * When you want to introduce a new code snippet, how can you be sure that the snippet is not somewhere in the testsuite already, but written a bit differently? I encountered this right from the beginning, searching the test classes and looking for a "good enough" code snippet that I could use. 
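[As an aside for readers unfamiliar with the mechanism being discussed: the Hibernate approach relies on AsciiDoctor's tagged-region include. A minimal sketch, in which the file path, tag name, and snippet contents are all hypothetical:

```asciidoc
// In the test source, the snippet is delimited by tag comments:
//
//   // tag::create-cache-manager[]
//   EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
//   // end::create-cache-manager[]
//
// The .adoc file then includes only that region:
[source,java]
----
include::../test/java/org/infinispan/docs/GettingStartedTest.java[tags=create-cache-manager]
----
```

Because the snippet lives in a compiled test, a breaking API change fails the build instead of silently rotting in the docs.]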
Altogether, it seems to me that it will mess up the testsuite quite a bit, make the maintenance of the documentation harder and significantly prolong the time needed for writing new documentation. What do you think? How about we go the same way as Hibernate (option 1 in the first email) - creating a separate documentation testsuite that is as simple as possible, descriptive and straightforward. I don't really care which option we choose, I will implement it either way, but I wanted to show that there are some pitfalls to option 2 as well :( Cheers, Jiri [1] https://github.com/infinispan/infinispan/pull/5115 [2] http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically ----- Forwarded Message ----- > From: "Jiri Holusa" > To: "infinispan-internal" > Sent: Friday, April 7, 2017 6:33:53 PM > Subject: [infinispan-internal] Documentation code snippets > > Hi everybody, > > during the documentation review for JDG 7.1 GA, I came across this little > thing. > > Having a good documentation is IMHO crucial for people to like our technology > and the key point is having code snippets in the documentation up to date > and working. During review of my parts, I found out many and many outdated > code snippets, either non-compilable or using deprecated methods. I would > like to eliminate this issue in the future, so it would make our > documentation better and also remove burden when doing documentation review. > > I did some research and I found out that Hibernate team (thanks Radim, Sanne > for the information) does a very cool thing and that is that the code > snippets are taken right from testsuite. This way they know that the code > snippet can always compile and also make sure that it's working properly. I > would definitely love to see the same in Infinispan. 
> > It works extremely simply that you mark by comment in the test the part, you > want to include in the documentation, see an example here for the AsciiDoc > part [1] and here for the test part [2]. There are two ways of how to > organize that: > 1) create a separate "documentation testsuite", with as simple as possible > test classes - Hibernate team does it this way. Pros: documentation is > easily separated. Cons: possible duplication. > 2) use existing testsuite, marking the parts in the existing testsuite. Pros: > no duplication. Cons: documentation snippets are spread all across the > testsuite. > > I would definitely volunteer to make this happen in Infinispan > documentation. > > What do you guys think about it? > > Cheers, > Jiri > > [1] > https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc > [2] > https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java > > From stkousso at redhat.com Wed May 3 10:20:57 2017 From: stkousso at redhat.com (Stelios Kousouris) Date: Wed, 3 May 2017 15:20:57 +0100 Subject: [infinispan-dev] Simplest way to check the validity of connection to Remote Cache In-Reply-To: <2098058988.4079799.1490016558856.JavaMail.zimbra@redhat.com> References: <92895211.4077569.1490016241640.JavaMail.zimbra@redhat.com> <2098058988.4079799.1490016558856.JavaMail.zimbra@redhat.com> Message-ID: Hi guys, Off the back of reading this (got some time on my hands today): do we have a "quickstart" reference architecture for externalizing state from JDV into JDG? I know it takes a bit more than a simple how-to to get a scalable architecture, but I just wondered if we have even the basics available out there? On Mon, Mar 20, 2017 at 1:29 PM, Ramesh Reddy wrote: > Hi, > > Is there call I can make on the cache API like ping to check the validity > of the remote connection? 
In OpenShift JDV is having issues with keeping > the connections fresh to JDG when node count goes to zero and comes back up. > > Thank you. > > Ramesh.. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170503/c564c613/attachment-0001.html From emmanuel at hibernate.org Wed May 3 12:02:36 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 3 May 2017 18:02:36 +0200 Subject: [infinispan-dev] TLS/SNI support for Relay protocol In-Reply-To: References: Message-ID: <20170503160236.GC79190@hibernate.org> Sebastian, Do you know if OpenShift has or plans to have some VPN or VPN like capabilities to bridge two "cross site" projects? It would probably be a faster and more generic solution than going through HTTP. Emmanuel On Tue 17-04-25 13:04, Sebastian Laskawiec wrote: >Hey Bela! > >I've been thinking about Cross Site Replication using Relay protocol on >Kubernetes/OpenShift. Most of the installations should use Federation [1] >but I can also imagine a custom installation with two sites (let's call >them X and Y) and totally separate networks. In that case, the flow through >Kubernetes/OpenShift might look like the following: > >Site X, Pod 1 (sending relay message) ---> sending packets ---> the >Internet ---> Site Y, Ingress/Route ---> Service ---> Site Y, Pod 1 > >Ingress/Routes and Services are Kubernetes/OpenShift "things". The former >acts as a reverse proxy and the latter as a load balancer. > >Unfortunately Ingress/Routes don't have good support for custom protocols >using TCP (they were designed with HTTP in mind). The only way to make it >work is to use TLS with SNI [2][3]. 
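[As an illustration of the client side of that, setting the SNI hostname is plain JSSE; a sketch using the example FQDN from this thread (wiring the resulting SSLParameters into the actual transport is omitted):

```java
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLParameters;

public class SniParams {
    // Build SSLParameters that advertise the given FQDN as the SNI hostname
    // during the TLS handshake; a router such as an OpenShift Route inspects
    // this value to choose the backend (passthrough termination).
    public static SSLParameters forHost(String fqdn) {
        SSLParameters params = new SSLParameters();
        params.setServerNames(
                Collections.<SNIServerName>singletonList(new SNIHostName(fqdn)));
        return params;
    }

    public static void main(String[] args) {
        SSLParameters p = forHost("infinispan-app-2-myproject.site-x.com");
        SNIHostName name = (SNIHostName) p.getServerNames().get(0);
        System.out.println(name.getAsciiName());
    }
}
```

In a real setup these parameters would be applied to the SSLEngine or SSLSocket the transport creates, before the handshake starts.]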
So we would need to encrypt all traffic >with TLS and use Application FQDN (a fully qualified application name, so >something like this: infinispan-app-2-myproject.*site-x*.com) as SNI >Hostname. Note that FQDN for both sites might be slightly different - >Infinispan on site X might want to use FQDN containing site Y in its name >and vice versa. > >I was wondering if it is possible to configure JGroups this way. If not, >are there any plans to do so? > >Thanks, >Sebastian > >[1] https://kubernetes.io/docs/concepts/cluster-administration/federation/ >[2] https://www.ietf.org/rfc/rfc3546.txt >[3] Look for "Passthrough Termination" >https://docs.openshift.com/enterprise/3.2/architecture/core_concepts/routes.html#secured-routes >-- > >SEBASTIAN ŁASKAWIEC > >INFINISPAN DEVELOPER > >Red Hat EMEA > >_______________________________________________ >infinispan-dev mailing list >infinispan-dev at lists.jboss.org >https://lists.jboss.org/mailman/listinfo/infinispan-dev From dereed at redhat.com Wed May 3 12:07:36 2017 From: dereed at redhat.com (Dennis Reed) Date: Wed, 3 May 2017 12:07:36 -0400 Subject: [infinispan-dev] Documentation code snippets In-Reply-To: <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> Message-ID: <2ee8b0df-3d2c-e98f-bfb7-4dc1cf20d5d6@redhat.com> Definitely #1. They serve two completely separate purposes. I'm glad to see this, as incorrect examples in documentation are a pet peeve of mine. :) -Dennis On 05/03/2017 09:14 AM, Jiri Holusa wrote: > Moving this to infinispan-dev. > > I've just issued a PR [1], where I setup the code snippets generation. It was actually pretty easy. I started implementing it for the configuration part of the documentation and I came across following findings/issues. 
> > There were more votes for option 2 (see the previous mail for detail, in summary using existing testsuite), hence I started with that. Pretty shortly I see following issues: > * XML configuration - since we want to have the element there in the configuration, I have to do one XML file per one configuration code snippet -> the number of files will grow and will mess up the "normal" testsuite > * IMHO biggest problem - our testsuite is usually not written in "documentation simplicity". For example, in testsuite we barely (= never) do "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", we obtain the cache manager by some helper method. While this is great for testing, you don't want to have this in documentation as it should be simple and straightforward. Another example would be [2]. Look at the programmatic configuration snippets. In the testsuite, we usually don't have that trivial setup, not so comprehensively written somewhere. > * When you want to introduce a new code snippet, how can you be sure that the snippet is not somewhere in the testsuite already, but written a bit differently? I encountered this right from the beginning, search the test classes and looking for "good enough" code snippet that I could use. > > Together it seems to me that it will mess up the testsuite quite a bit, make the maintenance of documentation harder and will significantly prolong the time needed for writing new documentation. What do you think? How about we went the same way as Hibernate (option 1 in first email) - creating separate documentation testsuite that is as simple as possible, descriptive and straightforward. 
> > I don't really care, which option we choose, I will implement it either way, but I wanted to show that there are some pitfalls of the option 2 as well :( > > Cheers, > Jiri > > [1] https://github.com/infinispan/infinispan/pull/5115 > [2] http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically > > > > ----- Forwarded Message ----- >> From: "Jiri Holusa" >> To: "infinispan-internal" >> Sent: Friday, April 7, 2017 6:33:53 PM >> Subject: [infinispan-internal] Documentation code snippets >> >> Hi everybody, >> >> during the documentation review for JDG 7.1 GA, I came across this little >> thing. >> >> Having a good documentation is IMHO crucial for people to like our technology >> and the key point is having code snippets in the documentation up to date >> and working. During review of my parts, I found out many and many outdated >> code snippets, either non-compilable or using deprecated methods. I would >> like to eliminate this issue in the future, so it would make our >> documentation better and also remove burden when doing documentation review. >> >> I did some research and I found out that Hibernate team (thanks Radim, Sanne >> for the information) does a very cool thing and that is that the code >> snippets are taken right from testsuite. This way they know that the code >> snippet can always compile and also make sure that it's working properly. I >> would definitely love to see the same in Infinispan. >> >> It works extremely simply that you mark by comment in the test the part, you >> want to include in the documentation, see an example here for the AsciiDoc >> part [1] and here for the test part [2]. There are two ways of how to >> organize that: >> 1) create a separate "documentation testsuite", with as simple as possible >> test classes - Hibernate team does it this way. Pros: documentation is >> easily separated. Cons: possible duplication. 
>> 2) use existing testsuite, marking the parts in the existing testsuite. Pros: >> no duplication. Cons: documentation snippets are spread all across the >> testsuite. >> >> I would definitely volunteer to make this happen in Infinispan >> documentation. >> >> What do you guys think about it? >> >> Cheers, >> Jiri >> >> [1] >> https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc >> [2] >> https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java >> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From rareddy at redhat.com Wed May 3 13:12:18 2017 From: rareddy at redhat.com (Ramesh Reddy) Date: Wed, 3 May 2017 13:12:18 -0400 (EDT) Subject: [infinispan-dev] Simplest way to check the validity of connection to Remote Cache In-Reply-To: References: <92895211.4077569.1490016241640.JavaMail.zimbra@redhat.com> <2098058988.4079799.1490016558856.JavaMail.zimbra@redhat.com> Message-ID: <1671271310.4633337.1493831538647.JavaMail.zimbra@redhat.com> In JDV there is not much information in terms of user-level state; however, at the application level JDV can have a lot of state, like temp tables, materialization of views, cached source data, etc. JDV already has a translator for JDG, with which we can do materialization. We also just implemented a new translator that requires no defined marshallers and can work with portable objects in Infinispan as the contents of a table. This enables JDV to externalize state into Infinispan much more easily now. A simple video of this integration can be viewed at [1] [1] https://youtu.be/kQa2Q7ceUgU Ramesh.. ----- Original Message ----- > HI guys, > At the back of reading this (got some time on my hands today). 
Do we have a > "quickstart" ref archirecture of externalizing state from JDV into JDG? i > know it requires a bit more than simply how to in order to get a scalable > architecture but just wondered if we have even the basics available out > there? > On Mon, Mar 20, 2017 at 1:29 PM, Ramesh Reddy < rareddy at redhat.com > wrote: > > Hi, > > > Is there call I can make on the cache API like ping to check the validity > > of > > the remote connection? In OpenShift JDV is having issues with keeping the > > connections fresh to JDG when node count goes to zero and comes back up. > > > Thank you. > > > Ramesh.. > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170503/38e0a4af/attachment.html From galder at redhat.com Thu May 4 07:26:35 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 4 May 2017 13:26:35 +0200 Subject: [infinispan-dev] All jars must go? Message-ID: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Hi all, As you might already know, there have been big debates about the upcoming Java 9 module system. Recently Stephen Colebourne, creator of Joda-Time, posted his thoughts [1]. Stephen mentions some potential problems with all jars, since no two modules should have the same package. We know from past experience that using these jars as dependencies in Maven creates all sorts of problems, but with the new JPMS they might not even work? Have we tried all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded, since Java offers no publishing mechanism itself. 
I mean, for the problem Stephen mentions to appear, you'd have to have, at runtime, an all jar alongside the individual jars, in which case it would fail. But as long as Maven does not enforce this in their repos, I think it's fine. If Maven starts enforcing this in the jars that are stored in Maven repos then yeah, we have a big problem. Thoughts? Cheers, [1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html -- Galder Zamarreño Infinispan, Red Hat From sanne at infinispan.org Thu May 4 11:50:16 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 4 May 2017 11:50:16 -0400 Subject: [infinispan-dev] All jars must go? In-Reply-To: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> References: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Message-ID: N.B. one problem many are not aware of is that - unlike with OSGi - the restriction in Jigsaw also applies to private packages, i.e. packages you're using within the jar but have no intention to "export" (make public). So having this sorted out for OSGi doesn't mean that it will work fine with Jigsaw. I suspect we didn't test this; as far as I know we've only tested running and compiling within JDK9, but Infinispan itself is not defining module descriptors; i.e. it's not modularized. It's very likely that when we'll want to "modularize it" we'll have to change APIs. Thanks, Sanne On 4 May 2017 at 07:26, Galder Zamarreño wrote: > Hi all, > > As you might already know, there's been big debates about upcoming Java 9 module system. > > Recently Stephen Colebourne, creator Joda time, posted his thoughts [1]. > > Stephen mentions some potential problems with all jars since no two modules should have same package. We know from past experience that using these jars as dependencies in Maven create all sorts of problems, but with the new JPMS they might not even work? > > Have we tried all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded since Java offers no publishing itself. 
I mean, for that Stephen mentions to appear, you'd have to at runtime have an all jar and then individual jars, in which case it would fail. But as long as Maven does not enforce this in their repos, I think it's fine. If Maven starts enforcing this in the jars that are stored in Maven repos then yeah, we have a big problem. > > Thoughts? > > Cheers, > > [1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu May 4 12:03:30 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 4 May 2017 19:03:30 +0300 Subject: [infinispan-dev] All jars must go? In-Reply-To: References: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Message-ID: If it's just private packages, then we won't have to change the API ;) Personally I'm more worried about how our externalizers for JDK classes are going to work: it's going to be hard to say we support Java 9 and at the same time ask users to add a bunch of --add-opens [1] to their JVM arguments. I stopped updating the POM comment at some point, but most other requirements for access to private JDK fields seem to come from WildFly/Pax Exam. [1]: https://github.com/infinispan/infinispan/blob/master/parent/pom.xml#L1614 Cheers Dan On Thu, May 4, 2017 at 6:50 PM, Sanne Grinovero wrote: > N.B. one problem many are not aware of is that - unlike with OSGi - > the restriction in Jigsaw also applies to private packages, e.g. > packages you're using within the jar but have no intention to "export" > make public. > > So having this sorted out for OSGi doesn't mean that it will work fine > with Jigsaw. > > I suspect we didn't test this, as far as I know we've only tested > running and compiling withing JDK9 but Infinispan itself is not > defining module descriptors; i.e. 
it's not modularized. > > It's very likely that when we'll want to "modularize it" we'll have to > change APIs. > > Thanks, > Sanne > > > On 4 May 2017 at 07:26, Galder Zamarre?o wrote: >> Hi all, >> >> As you might already know, there's been big debates about upcoming Java 9 module system. >> >> Recently Stephen Colebourne, creator Joda time, posted his thoughts [1]. >> >> Stephen mentions some potential problems with all jars since no two modules should have same package. We know from past experience that using these jars as dependencies in Maven create all sorts of problems, but with the new JPMS they might not even work? >> >> Have we tried all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded since Java offers no publishing itself. I mean, for that Stephen mentions to appear, you'd have to at runtime have an all jar and then individual jars, in which case it would fail. But as long as Maven does not enforce this in their repos, I think it's fine. If Maven starts enforcing this in the jars that are stored in Maven repos then yeah, we have a big problem. >> >> Thoughts? >> >> Cheers, >> >> [1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu May 4 12:11:24 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 4 May 2017 19:11:24 +0300 Subject: [infinispan-dev] All jars must go? 
In-Reply-To: References: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Message-ID: Forgot to answer the initial question: we do have integration tests that use uber jars: integrationtests/all-embedded-it integrationtests/all-embedded-query-it integrationtests/all-remote-it We don't have a Java 9 build in Jenkins ATM, but they ran fine with jdk9-ea-164 on my machine. Cheers Dan On Thu, May 4, 2017 at 7:03 PM, Dan Berindei wrote: > If it's just private packages, then we won't have to change the API ;) > > Personally I'm more worried about how our externalizers for JDK > classes are going to work: it's going to be hard to say we support > Java 9 and at the same time ask users to add a bunch of --add-opens > [1] to their JVM arguments. I stopped updating the POM comment at some > point, but most other requirements for access to private JDK fields > seem to come from WildFly/Pax Exam. > > [1]: https://github.com/infinispan/infinispan/blob/master/parent/pom.xml#L1614 > > Cheers > Dan > > > On Thu, May 4, 2017 at 6:50 PM, Sanne Grinovero wrote: >> N.B. one problem many are not aware of is that - unlike with OSGi - >> the restriction in Jigsaw also applies to private packages, e.g. >> packages you're using within the jar but have no intention to "export" >> make public. >> >> So having this sorted out for OSGi doesn't mean that it will work fine >> with Jigsaw. >> >> I suspect we didn't test this, as far as I know we've only tested >> running and compiling withing JDK9 but Infinispan itself is not >> defining module descriptors; i.e. it's not modularized. >> >> It's very likely that when we'll want to "modularize it" we'll have to >> change APIs. >> >> Thanks, >> Sanne >> >> >> On 4 May 2017 at 07:26, Galder Zamarre?o wrote: >>> Hi all, >>> >>> As you might already know, there's been big debates about upcoming Java 9 module system. >>> >>> Recently Stephen Colebourne, creator Joda time, posted his thoughts [1]. 
>>> >>> Stephen mentions some potential problems with all jars since no two modules should have same package. We know from past experience that using these jars as dependencies in Maven create all sorts of problems, but with the new JPMS they might not even work? >>> >>> Have we tried all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded since Java offers no publishing itself. I mean, for that Stephen mentions to appear, you'd have to at runtime have an all jar and then individual jars, in which case it would fail. But as long as Maven does not enforce this in their repos, I think it's fine. If Maven starts enforcing this in the jars that are stored in Maven repos then yeah, we have a big problem. >>> >>> Thoughts? >>> >>> Cheers, >>> >>> [1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Thu May 4 12:22:22 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 4 May 2017 12:22:22 -0400 Subject: [infinispan-dev] All jars must go? In-Reply-To: References: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Message-ID: On 4 May 2017 at 12:03, Dan Berindei wrote: > If it's just private packages, then we won't have to change the API ;) Sorry if that was confusing: I meant to remind there are at least 2 problems to take in consideration. Right they are not strongly correlated other than being consequences of Jigsaw. 
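[The split-package restriction under discussion can be sketched with two hypothetical module descriptors; module and package names here are invented, and this is illustrative rather than buildable:

```java
// module-info.java in the regular infinispan-commons jar
module infinispan.commons {
    exports org.infinispan.commons.util;
}

// module-info.java in a hypothetical "all" (uber) jar repackaging the same classes
module infinispan.all {
    // The same package now appears in two modules: JPMS refuses to resolve
    // both on the module path, whether or not the package is exported.
    exports org.infinispan.commons.util;
}
```

Unlike OSGi, the check also covers non-exported (private) packages, which is exactly why fixing this for OSGi does not fix it for Jigsaw.]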
> > Personally I'm more worried about how our externalizers for JDK > classes are going to work: it's going to be hard to say we support > Java 9 and at the same time ask users to add a bunch of --add-opens > [1] to their JVM arguments. I stopped updating the POM comment at some > point, but most other requirements for access to private JDK fields > seem to come from WildFly/Pax Exam. Using "add-opens" is not the only option, and I agree it's not desirable - especially for embedded users. Read the Hibernate blog for some alternatives, but hey yes the APIs will have to change ;) - http://in.relation.to/2017/04/11/accessing-private-state-of-java-9-modules/ Thanks, Sanne > > [1]: https://github.com/infinispan/infinispan/blob/master/parent/pom.xml#L1614 > > Cheers > Dan > > > On Thu, May 4, 2017 at 6:50 PM, Sanne Grinovero wrote: >> N.B. one problem many are not aware of is that - unlike with OSGi - >> the restriction in Jigsaw also applies to private packages, e.g. >> packages you're using within the jar but have no intention to "export" >> make public. >> >> So having this sorted out for OSGi doesn't mean that it will work fine >> with Jigsaw. >> >> I suspect we didn't test this, as far as I know we've only tested >> running and compiling withing JDK9 but Infinispan itself is not >> defining module descriptors; i.e. it's not modularized. >> >> It's very likely that when we'll want to "modularize it" we'll have to >> change APIs. >> >> Thanks, >> Sanne >> >> >> On 4 May 2017 at 07:26, Galder Zamarre?o wrote: >>> Hi all, >>> >>> As you might already know, there's been big debates about upcoming Java 9 module system. >>> >>> Recently Stephen Colebourne, creator Joda time, posted his thoughts [1]. >>> >>> Stephen mentions some potential problems with all jars since no two modules should have same package. We know from past experience that using these jars as dependencies in Maven create all sorts of problems, but with the new JPMS they might not even work? 
>>> >>> Have we tried all jars in Java 9? I'm wondering whether Stephen's problems with all jars are truly founded since Java offers no publishing itself. I mean, for that Stephen mentions to appear, you'd have to at runtime have an all jar and then individual jars, in which case it would fail. But as long as Maven does not enforce this in their repos, I think it's fine. If Maven starts enforcing this in the jars that are stored in Maven repos then yeah, we have a big problem. >>> >>> Thoughts? >>> >>> Cheers, >>> >>> [1] http://blog.joda.org/2017/04/java-se-9-jpms-module-naming.html >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu May 4 12:31:51 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 4 May 2017 19:31:51 +0300 Subject: [infinispan-dev] All jars must go? In-Reply-To: References: <4002ADDB-0409-407B-8021-79E716141F70@redhat.com> Message-ID: On Thu, May 4, 2017 at 7:22 PM, Sanne Grinovero wrote: > On 4 May 2017 at 12:03, Dan Berindei wrote: >> >> Personally I'm more worried about how our externalizers for JDK >> classes are going to work: it's going to be hard to say we support >> Java 9 and at the same time ask users to add a bunch of --add-opens >> [1] to their JVM arguments. I stopped updating the POM comment at some >> point, but most other requirements for access to private JDK fields >> seem to come from WildFly/Pax Exam. 
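[The kind of private-field access an externalizer for a JDK type needs can be demonstrated with plain reflection; a sketch, where 'element' is an implementation detail of the list class behind Collections.singletonList (accurate for current JDKs but not API):

```java
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.List;

public class JdkFieldProbe {
    // Attempt to read the private 'element' field of the JDK's singleton
    // list class, the sort of access a reflective externalizer performs.
    // On Java 8 this succeeds; under JPMS it needs
    // --add-opens java.base/java.util=ALL-UNNAMED, and without that flag
    // recent JDKs throw InaccessibleObjectException.
    public static String probe() {
        List<String> list = Collections.singletonList("x");
        try {
            Field f = list.getClass().getDeclaredField("element");
            f.setAccessible(true);
            return "accessible: " + f.get(list);
        } catch (ReflectiveOperationException | RuntimeException e) {
            return "blocked: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

Running this on different JDKs shows exactly the tension described above: the same code is fine on 8 and needs JVM flags on a modularized runtime.]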
> > Using "add-opens" is not the only option, and I agree it's not > desirable - especially for embedded users. > > Read the Hibernate blog for some alternatives, but hey yes the APIs > will have to change ;) > - http://in.relation.to/2017/04/11/accessing-private-state-of-java-9-modules/ > Both methods seem to require the cooperation of the module containing the POJOs. In our case those modules are in the JDK, and I doubt Oracle will be so kind as to open everything to org.infinispan.core ;) Dan From slaskawi at redhat.com Fri May 5 09:48:43 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 05 May 2017 13:48:43 +0000 Subject: [infinispan-dev] Documentation code snippets In-Reply-To: <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> Message-ID: Hey Jiri, Very good investigation. I was all for option #2 (use existing testsuite) but now I'm leaning towards option #1 (separate testsuite). I believe there are 3 main parts to be tested and synced with documentation - Hot Rod Client, Infinispan Server and Embedded Mode. The first two can be tested together, I think. To some extent this is already implemented in ExampleConfigsIT [1]. The Embedded Mode is much harder to test in my opinion, since the tests are spread all around the repo. I guess this will be the main challenge of this task. Thanks, Sebastian [1] https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/configs/ExampleConfigsIT.java On Wed, May 3, 2017 at 3:20 PM Jiri Holusa wrote: > Moving this to infinispan-dev. > > I've just issued a PR [1], where I setup the code snippets generation. It > was actually pretty easy. I started implementing it for the configuration > part of the documentation and I came across following findings/issues. 
> > There were more votes for option 2 (see the previous mail for detail, in > summary using existing testsuite), hence I started with that. Pretty > shortly I see following issues: > * XML configuration - since we want to have the element there > in the configuration, I have to do one XML file per one configuration code > snippet -> the number of files will grow and will mess up the "normal" > testsuite > * IMHO biggest problem - our testsuite is usually not written in > "documentation simplicity". For example, in testsuite we barely (= never) > do "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", > we obtain the cache manager by some helper method. While this is great for > testing, you don't want to have this in documentation as it should be > simple and straightforward. Another example would be [2]. Look at the > programmatic configuration snippets. In the testsuite, we usually don't > have that trivial setup, not so comprehensively written somewhere. > * When you want to introduce a new code snippet, how can you be sure that > the snippet is not somewhere in the testsuite already, but written a bit > differently? I encountered this right from the beginning, search the test > classes and looking for "good enough" code snippet that I could use. > > Together it seems to me that it will mess up the testsuite quite a bit, > make the maintenance of documentation harder and will significantly prolong > the time needed for writing new documentation. What do you think? How about > we went the same way as Hibernate (option 1 in first email) - creating > separate documentation testsuite that is as simple as possible, descriptive > and straightforward. 
> > I don't really care, which option we choose, I will implement it either > way, but I wanted to show that there are some pitfalls of the option 2 as > well :( > > Cheers, > Jiri > > [1] https://github.com/infinispan/infinispan/pull/5115 > [2] > http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically > > > > ----- Forwarded Message ----- > > From: "Jiri Holusa" > > To: "infinispan-internal" > > Sent: Friday, April 7, 2017 6:33:53 PM > > Subject: [infinispan-internal] Documentation code snippets > > > > Hi everybody, > > > > during the documentation review for JDG 7.1 GA, I came across this little > > thing. > > > > Having a good documentation is IMHO crucial for people to like our > technology > > and the key point is having code snippets in the documentation up to date > > and working. During review of my parts, I found out many and many > outdated > > code snippets, either non-compilable or using deprecated methods. I would > > like to eliminate this issue in the future, so it would make our > > documentation better and also remove burden when doing documentation > review. > > > > I did some research and I found out that Hibernate team (thanks Radim, > Sanne > > for the information) does a very cool thing and that is that the code > > snippets are taken right from testsuite. This way they know that the code > > snippet can always compile and also make sure that it's working > properly. I > > would definitely love to see the same in Infinispan. > > > > It works extremely simply that you mark by comment in the test the part, > you > > want to include in the documentation, see an example here for the > AsciiDoc > > part [1] and here for the test part [2]. There are two ways of how to > > organize that: > > 1) create a separate "documentation testsuite", with as simple as > possible > > test classes - Hibernate team does it this way. Pros: documentation is > > easily separated. Cons: possible duplication. 
> > 2) use existing testsuite, marking the parts in the existing testsuite. > Pros: > > no duplication. Cons: documentation snippets are spread all across the > > testsuite. > > > > I would definitely volunteer to make this happen in Infinispan > > documentation. > > > > What do you guys think about it? > > > > Cheers, > > Jiri > > > > [1] > > > https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc > > [2] > > > https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170505/4f337b75/attachment-0001.html From jholusa at redhat.com Fri May 5 10:43:56 2017 From: jholusa at redhat.com (Jiri Holusa) Date: Fri, 5 May 2017 10:43:56 -0400 (EDT) Subject: [infinispan-dev] Documentation code snippets In-Reply-To: References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> Message-ID: <1540974116.4432448.1493995436154.JavaMail.zimbra@redhat.com> Hi Sebastian, yes, you're right. I think the best way would be to go with option 2 making it comprehensive, clean and transparent. I will issue another preview PR soon that would contain some part from Hot Rod Client, ISPN Serven and Embedded mode snippets making it as an example, what it would look like in the end. If anybody else has other opinions, please jump in, thanks. 
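[Editor's aside] The Hibernate mechanism described above marks snippet regions in test sources with AsciiDoc-style `// tag::name[]` / `// end::name[]` comments, and the documentation build pulls out the lines in between. A minimal, self-contained sketch of that extraction step (class and tag names are illustrative, not the actual Hibernate/Infinispan tooling):

```java
import java.util.ArrayList;
import java.util.List;

public class SnippetExtractor {

    // Collect the lines between "// tag::<name>[]" and "// end::<name>[]",
    // excluding the marker lines themselves.
    public static List<String> extract(List<String> sourceLines, String tag) {
        List<String> snippet = new ArrayList<>();
        boolean inside = false;
        for (String line : sourceLines) {
            String trimmed = line.trim();
            if (trimmed.equals("// tag::" + tag + "[]")) {
                inside = true;            // start collecting after the opening marker
            } else if (trimmed.equals("// end::" + tag + "[]")) {
                inside = false;           // stop at the closing marker
            } else if (inside) {
                snippet.add(line);
            }
        }
        return snippet;
    }

    public static void main(String[] args) {
        List<String> testSource = List.of(
            "public void testCreateManager() {",
            "    // tag::create-manager[]",
            "    EmbeddedCacheManager cacheManager = new DefaultCacheManager(\"infinispan.xml\");",
            "    // end::create-manager[]",
            "}");
        extract(testSource, "create-manager").forEach(System.out::println);
    }
}
```

Because the markers live in compiled, executed test code, a snippet that stops compiling or starts using deprecated API fails the build instead of silently rotting in the docs.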
Jiri ----- Original Message ----- > From: "Sebastian Laskawiec" > To: "infinispan -Dev List" > Sent: Friday, May 5, 2017 3:48:43 PM > Subject: Re: [infinispan-dev] Documentation code snippets > > Hey Jiri, > > Very good investigation. I was all for option #2 (use existing testsuite) but > now I'm leaning towards option #1 (separate testsuite). > > I believe there are 3 main parts to be tested and synced with documentation - > Hot Rod Client, Infinispan Server and Embedded Mode. The first two can be > tested together I think. To some extend this is already implemented in > ExampleConfigsIT [1]. The Embedded Mode is much harder to test in my > opinion, since the tests are spread all around the repo. I guess this will > be the main challenge of this task. > > Thanks, > Sebastian > > [1] > https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/configs/ExampleConfigsIT.java > > On Wed, May 3, 2017 at 3:20 PM Jiri Holusa < jholusa at redhat.com > wrote: > > > Moving this to infinispan-dev. > > I've just issued a PR [1], where I setup the code snippets generation. It was > actually pretty easy. I started implementing it for the configuration part > of the documentation and I came across following findings/issues. > > There were more votes for option 2 (see the previous mail for detail, in > summary using existing testsuite), hence I started with that. Pretty shortly > I see following issues: > * XML configuration - since we want to have the element there in > the configuration, I have to do one XML file per one configuration code > snippet -> the number of files will grow and will mess up the "normal" > testsuite > * IMHO biggest problem - our testsuite is usually not written in > "documentation simplicity". For example, in testsuite we barely (= never) do > "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", we > obtain the cache manager by some helper method. 
While this is great for > testing, you don't want to have this in documentation as it should be simple > and straightforward. Another example would be [2]. Look at the programmatic > configuration snippets. In the testsuite, we usually don't have that trivial > setup, not so comprehensively written somewhere. > * When you want to introduce a new code snippet, how can you be sure that the > snippet is not somewhere in the testsuite already, but written a bit > differently? I encountered this right from the beginning, search the test > classes and looking for "good enough" code snippet that I could use. > > Together it seems to me that it will mess up the testsuite quite a bit, make > the maintenance of documentation harder and will significantly prolong the > time needed for writing new documentation. What do you think? How about we > went the same way as Hibernate (option 1 in first email) - creating separate > documentation testsuite that is as simple as possible, descriptive and > straightforward. > > I don't really care, which option we choose, I will implement it either way, > but I wanted to show that there are some pitfalls of the option 2 as well :( > > Cheers, > Jiri > > [1] https://github.com/infinispan/infinispan/pull/5115 > [2] > http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically > > > > ----- Forwarded Message ----- > > From: "Jiri Holusa" < jholusa at redhat.com > > > To: "infinispan-internal" < infinispan-internal at redhat.com > > > Sent: Friday, April 7, 2017 6:33:53 PM > > Subject: [infinispan-internal] Documentation code snippets > > > > Hi everybody, > > > > during the documentation review for JDG 7.1 GA, I came across this little > > thing. > > > > Having a good documentation is IMHO crucial for people to like our > > technology > > and the key point is having code snippets in the documentation up to date > > and working. 
During review of my parts, I found out many and many outdated > > code snippets, either non-compilable or using deprecated methods. I would > > like to eliminate this issue in the future, so it would make our > > documentation better and also remove burden when doing documentation > > review. > > > > I did some research and I found out that Hibernate team (thanks Radim, > > Sanne > > for the information) does a very cool thing and that is that the code > > snippets are taken right from testsuite. This way they know that the code > > snippet can always compile and also make sure that it's working properly. I > > would definitely love to see the same in Infinispan. > > > > It works extremely simply that you mark by comment in the test the part, > > you > > want to include in the documentation, see an example here for the AsciiDoc > > part [1] and here for the test part [2]. There are two ways of how to > > organize that: > > 1) create a separate "documentation testsuite", with as simple as possible > > test classes - Hibernate team does it this way. Pros: documentation is > > easily separated. Cons: possible duplication. > > 2) use existing testsuite, marking the parts in the existing testsuite. > > Pros: > > no duplication. Cons: documentation snippets are spread all across the > > testsuite. > > > > I would definitely volunteer to make this happen in Infinispan > > documentation. > > > > What do you guys think about it? 
> > > > Cheers, > > Jiri > > > > [1] > > https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc > > [2] > > https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- > > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Fri May 5 11:12:31 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 5 May 2017 17:12:31 +0200 Subject: [infinispan-dev] Documentation code snippets In-Reply-To: <1540974116.4432448.1493995436154.JavaMail.zimbra@redhat.com> References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> <1540974116.4432448.1493995436154.JavaMail.zimbra@redhat.com> Message-ID: <756a428e-a25d-1840-843c-6623f912f95d@redhat.com> +1 for #2 from me too Tristan On 5/5/17 4:43 PM, Jiri Holusa wrote: > Hi Sebastian, > > yes, you're right. I think the best way would be to go with option 2 making it comprehensive, clean and transparent. I will issue another preview PR soon that would contain some part from Hot Rod Client, ISPN Serven and Embedded mode snippets making it as an example, what it would look like in the end. > > If anybody else has other opinions, please jump in, thanks. > Jiri > > > ----- Original Message ----- >> From: "Sebastian Laskawiec" >> To: "infinispan -Dev List" >> Sent: Friday, May 5, 2017 3:48:43 PM >> Subject: Re: [infinispan-dev] Documentation code snippets >> >> Hey Jiri, >> >> Very good investigation. 
I was all for option #2 (use existing testsuite) but >> now I'm leaning towards option #1 (separate testsuite). >> >> I believe there are 3 main parts to be tested and synced with documentation - >> Hot Rod Client, Infinispan Server and Embedded Mode. The first two can be >> tested together I think. To some extend this is already implemented in >> ExampleConfigsIT [1]. The Embedded Mode is much harder to test in my >> opinion, since the tests are spread all around the repo. I guess this will >> be the main challenge of this task. >> >> Thanks, >> Sebastian >> >> [1] >> https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/configs/ExampleConfigsIT.java >> >> On Wed, May 3, 2017 at 3:20 PM Jiri Holusa < jholusa at redhat.com > wrote: >> >> >> Moving this to infinispan-dev. >> >> I've just issued a PR [1], where I setup the code snippets generation. It was >> actually pretty easy. I started implementing it for the configuration part >> of the documentation and I came across following findings/issues. >> >> There were more votes for option 2 (see the previous mail for detail, in >> summary using existing testsuite), hence I started with that. Pretty shortly >> I see following issues: >> * XML configuration - since we want to have the element there in >> the configuration, I have to do one XML file per one configuration code >> snippet -> the number of files will grow and will mess up the "normal" >> testsuite >> * IMHO biggest problem - our testsuite is usually not written in >> "documentation simplicity". For example, in testsuite we barely (= never) do >> "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", we >> obtain the cache manager by some helper method. While this is great for >> testing, you don't want to have this in documentation as it should be simple >> and straightforward. Another example would be [2]. Look at the programmatic >> configuration snippets. 
In the testsuite, we usually don't have that trivial >> setup, not so comprehensively written somewhere. >> * When you want to introduce a new code snippet, how can you be sure that the >> snippet is not somewhere in the testsuite already, but written a bit >> differently? I encountered this right from the beginning, search the test >> classes and looking for "good enough" code snippet that I could use. >> >> Together it seems to me that it will mess up the testsuite quite a bit, make >> the maintenance of documentation harder and will significantly prolong the >> time needed for writing new documentation. What do you think? How about we >> went the same way as Hibernate (option 1 in first email) - creating separate >> documentation testsuite that is as simple as possible, descriptive and >> straightforward. >> >> I don't really care, which option we choose, I will implement it either way, >> but I wanted to show that there are some pitfalls of the option 2 as well :( >> >> Cheers, >> Jiri >> >> [1] https://github.com/infinispan/infinispan/pull/5115 >> [2] >> http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically >> >> >> >> ----- Forwarded Message ----- >>> From: "Jiri Holusa" < jholusa at redhat.com > >>> To: "infinispan-internal" < infinispan-internal at redhat.com > >>> Sent: Friday, April 7, 2017 6:33:53 PM >>> Subject: [infinispan-internal] Documentation code snippets >>> >>> Hi everybody, >>> >>> during the documentation review for JDG 7.1 GA, I came across this little >>> thing. >>> >>> Having a good documentation is IMHO crucial for people to like our >>> technology >>> and the key point is having code snippets in the documentation up to date >>> and working. During review of my parts, I found out many and many outdated >>> code snippets, either non-compilable or using deprecated methods. 
I would >>> like to eliminate this issue in the future, so it would make our >>> documentation better and also remove burden when doing documentation >>> review. >>> >>> I did some research and I found out that Hibernate team (thanks Radim, >>> Sanne >>> for the information) does a very cool thing and that is that the code >>> snippets are taken right from testsuite. This way they know that the code >>> snippet can always compile and also make sure that it's working properly. I >>> would definitely love to see the same in Infinispan. >>> >>> It works extremely simply that you mark by comment in the test the part, >>> you >>> want to include in the documentation, see an example here for the AsciiDoc >>> part [1] and here for the test part [2]. There are two ways of how to >>> organize that: >>> 1) create a separate "documentation testsuite", with as simple as possible >>> test classes - Hibernate team does it this way. Pros: documentation is >>> easily separated. Cons: possible duplication. >>> 2) use existing testsuite, marking the parts in the existing testsuite. >>> Pros: >>> no duplication. Cons: documentation snippets are spread all across the >>> testsuite. >>> >>> I would definitely volunteer to make this happen in Infinispan >>> documentation. >>> >>> What do you guys think about it? 
>>> >>> Cheers, >>> Jiri >>> >>> [1] >>> https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc >>> [2] >>> https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java >>> >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- >> >> >> SEBASTIAN ?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red Hat EMEA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From afield at redhat.com Fri May 5 11:30:01 2017 From: afield at redhat.com (Alan Field) Date: Fri, 5 May 2017 11:30:01 -0400 (EDT) Subject: [infinispan-dev] Documentation code snippets In-Reply-To: <756a428e-a25d-1840-843c-6623f912f95d@redhat.com> References: <316848336.12640070.1491582833099.JavaMail.zimbra@redhat.com> <765772517.3607015.1493817276720.JavaMail.zimbra@redhat.com> <1540974116.4432448.1493995436154.JavaMail.zimbra@redhat.com> <756a428e-a25d-1840-843c-6623f912f95d@redhat.com> Message-ID: <1021847491.5401626.1493998201079.JavaMail.zimbra@redhat.com> As confusing as this is, I agree with Tristan! :-) ----- Original Message ----- > From: "Tristan Tarrant" > To: "infinispan -Dev List" , "Jiri Holusa" > Sent: Friday, May 5, 2017 11:12:31 AM > Subject: Re: [infinispan-dev] Documentation code snippets > > +1 for #2 from me too > > Tristan > > On 5/5/17 4:43 PM, Jiri Holusa wrote: > > Hi Sebastian, > > > > yes, you're right. 
I think the best way would be to go with option 2 making > > it comprehensive, clean and transparent. I will issue another preview PR > > soon that would contain some part from Hot Rod Client, ISPN Serven and > > Embedded mode snippets making it as an example, what it would look like in > > the end. > > > > If anybody else has other opinions, please jump in, thanks. > > Jiri > > > > > > ----- Original Message ----- > >> From: "Sebastian Laskawiec" > >> To: "infinispan -Dev List" > >> Sent: Friday, May 5, 2017 3:48:43 PM > >> Subject: Re: [infinispan-dev] Documentation code snippets > >> > >> Hey Jiri, > >> > >> Very good investigation. I was all for option #2 (use existing testsuite) > >> but > >> now I'm leaning towards option #1 (separate testsuite). > >> > >> I believe there are 3 main parts to be tested and synced with > >> documentation - > >> Hot Rod Client, Infinispan Server and Embedded Mode. The first two can be > >> tested together I think. To some extend this is already implemented in > >> ExampleConfigsIT [1]. The Embedded Mode is much harder to test in my > >> opinion, since the tests are spread all around the repo. I guess this will > >> be the main challenge of this task. > >> > >> Thanks, > >> Sebastian > >> > >> [1] > >> https://github.com/infinispan/infinispan/blob/master/server/integration/testsuite/src/test/java/org/infinispan/server/test/configs/ExampleConfigsIT.java > >> > >> On Wed, May 3, 2017 at 3:20 PM Jiri Holusa < jholusa at redhat.com > wrote: > >> > >> > >> Moving this to infinispan-dev. > >> > >> I've just issued a PR [1], where I setup the code snippets generation. It > >> was > >> actually pretty easy. I started implementing it for the configuration part > >> of the documentation and I came across following findings/issues. > >> > >> There were more votes for option 2 (see the previous mail for detail, in > >> summary using existing testsuite), hence I started with that. 
Pretty > >> shortly > >> I see following issues: > >> * XML configuration - since we want to have the element there > >> in > >> the configuration, I have to do one XML file per one configuration code > >> snippet -> the number of files will grow and will mess up the "normal" > >> testsuite > >> * IMHO biggest problem - our testsuite is usually not written in > >> "documentation simplicity". For example, in testsuite we barely (= never) > >> do > >> "EmbeddedCacheManager cacheManager = new DefaultCacheManager("...");", we > >> obtain the cache manager by some helper method. While this is great for > >> testing, you don't want to have this in documentation as it should be > >> simple > >> and straightforward. Another example would be [2]. Look at the > >> programmatic > >> configuration snippets. In the testsuite, we usually don't have that > >> trivial > >> setup, not so comprehensively written somewhere. > >> * When you want to introduce a new code snippet, how can you be sure that > >> the > >> snippet is not somewhere in the testsuite already, but written a bit > >> differently? I encountered this right from the beginning, search the test > >> classes and looking for "good enough" code snippet that I could use. > >> > >> Together it seems to me that it will mess up the testsuite quite a bit, > >> make > >> the maintenance of documentation harder and will significantly prolong the > >> time needed for writing new documentation. What do you think? How about we > >> went the same way as Hibernate (option 1 in first email) - creating > >> separate > >> documentation testsuite that is as simple as possible, descriptive and > >> straightforward. 
> >> > >> I don't really care, which option we choose, I will implement it either > >> way, > >> but I wanted to show that there are some pitfalls of the option 2 as well > >> :( > >> > >> Cheers, > >> Jiri > >> > >> [1] https://github.com/infinispan/infinispan/pull/5115 > >> [2] > >> http://infinispan.org/docs/stable/user_guide/user_guide.html#configuring_caches_programmatically > >> > >> > >> > >> ----- Forwarded Message ----- > >>> From: "Jiri Holusa" < jholusa at redhat.com > > >>> To: "infinispan-internal" < infinispan-internal at redhat.com > > >>> Sent: Friday, April 7, 2017 6:33:53 PM > >>> Subject: [infinispan-internal] Documentation code snippets > >>> > >>> Hi everybody, > >>> > >>> during the documentation review for JDG 7.1 GA, I came across this little > >>> thing. > >>> > >>> Having a good documentation is IMHO crucial for people to like our > >>> technology > >>> and the key point is having code snippets in the documentation up to date > >>> and working. During review of my parts, I found out many and many > >>> outdated > >>> code snippets, either non-compilable or using deprecated methods. I would > >>> like to eliminate this issue in the future, so it would make our > >>> documentation better and also remove burden when doing documentation > >>> review. > >>> > >>> I did some research and I found out that Hibernate team (thanks Radim, > >>> Sanne > >>> for the information) does a very cool thing and that is that the code > >>> snippets are taken right from testsuite. This way they know that the code > >>> snippet can always compile and also make sure that it's working properly. > >>> I > >>> would definitely love to see the same in Infinispan. > >>> > >>> It works extremely simply that you mark by comment in the test the part, > >>> you > >>> want to include in the documentation, see an example here for the > >>> AsciiDoc > >>> part [1] and here for the test part [2]. 
There are two ways of how to > >>> organize that: > >>> 1) create a separate "documentation testsuite", with as simple as > >>> possible > >>> test classes - Hibernate team does it this way. Pros: documentation is > >>> easily separated. Cons: possible duplication. > >>> 2) use existing testsuite, marking the parts in the existing testsuite. > >>> Pros: > >>> no duplication. Cons: documentation snippets are spread all across the > >>> testsuite. > >>> > >>> I would definitely volunteer to make this happen in Infinispan > >>> documentation. > >>> > >>> What do you guys think about it? > >>> > >>> Cheers, > >>> Jiri > >>> > >>> [1] > >>> https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc > >>> [2] > >>> https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java > >>> > >>> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> -- > >> > >> > >> SEBASTIAN ?ASKAWIEC > >> > >> INFINISPAN DEVELOPER > >> > >> Red Hat EMEA > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From wfink at redhat.com Fri May 5 12:54:32 2017 From: wfink at redhat.com (Wolf Fink) Date: Fri, 5 May 2017 18:54:32 +0200 Subject: [infinispan-dev] 
Unwrapping exceptions In-Reply-To: References: Message-ID: +1 for Sanne I would expect the same and I suppose one or more customers will raise this issue On Fri, Apr 28, 2017 at 12:56 PM, Sanne Grinovero wrote: > Personally as a user I'd expect to have a CacheException raised if and > only if it's not caused by my own code. > > Imagine my own lambda is explicitly throwing an exception of a type of > my choice, it would be nice to receive that error and not a different > one. > > On 28 April 2017 at 11:38, Katia Aresti wrote: > > Hi all ! > > > > Radim pointed me to this thread discussing the exceptions launched by the > > lambda executed by the user. > > > > So, I've came accros this problem right now with the compute method. > > > > ComputeIfAbsent is used by the QueryCache [1] > > > > This method is now a Command, so when the wrapped lambda throws an > > exception, [2], the expected exception is the one raised by the lambda. > But > > with my modifications, this exception is wrapped in a CacheException. > > > > I discussed with Adrien yesterday, and IMHO and his, a CacheException is > not > > the same thing as the exception raised inside the lambda. Moreover, in > this > > particular case, I don't know if users some code could be broken if we > make > > the user get a CacheException that wrappes the ParseException instead of > the > > ParseException itself. > > > > How can I fix the problem ? > > Should we correct the tests and say that, from now on, CacheException > will > > be raised ? > > Should we handle this CacheException in the QueryCache class when > > computeIfAbsent is called ? > > Should we propagate the lambda's exception as it is ? 
> > > > Katia > > > > [1] > > https://github.com/infinispan/infinispan/blob/master/query/ > src/main/java/org/infinispan/query/dsl/embedded/impl/QueryCache.java#L79 > > [2] > > https://github.com/infinispan/infinispan/blob/master/query/ > src/test/java/org/infinispan/query/dsl/embedded/ > QueryDslConditionsTest.java#L1913 > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170505/8fe0aeb6/attachment.html From pedro at infinispan.org Fri May 5 12:56:43 2017 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 5 May 2017 17:56:43 +0100 Subject: [infinispan-dev] Hot Rod Transactions, a design document Message-ID: <92f3b1f6-f191-4356-e2f0-9e97cd0bfb85@infinispan.org> Hi all, I've created a document describing Hot Rod transactions. For now, only Synchronization enlistment will be supported and full XA support will be implemented in the future (and in another document). The document is here: https://github.com/infinispan/infinispan-designs/pull/6 Take a look and give feedback. Cheers, Pedro From karesti at redhat.com Sat May 6 03:57:04 2017 From: karesti at redhat.com (Katia Aresti) Date: Sat, 6 May 2017 09:57:04 +0200 Subject: [infinispan-dev] Unwrapping exceptions In-Reply-To: References: Message-ID: Thank you for your thoughts ! I do agree with you. Dan, do you have any objection to this ? 
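[Editor's aside] The behaviour Sanne and Wolf expect, that the user's own exception surfaces rather than a wrapper, can be achieved by unwrapping the wrapper's cause at the API boundary. A stdlib-only sketch of that pattern (the `CacheWrapperException` stand-in is hypothetical; Infinispan's real `CacheException` lives in its commons module):

```java
import java.util.function.Function;

// Illustrative stand-in for a cache-layer wrapper exception.
class CacheWrapperException extends RuntimeException {
    CacheWrapperException(Throwable cause) { super(cause); }
}

public class UnwrapDemo {

    // The command layer wraps anything the user lambda throws...
    static <T, R> R invokeCommand(Function<T, R> userLambda, T arg) {
        try {
            return userLambda.apply(arg);
        } catch (RuntimeException e) {
            throw new CacheWrapperException(e);
        }
    }

    // ...and the API entry point unwraps it, so callers see their own exception.
    static <T, R> R computeLike(Function<T, R> userLambda, T arg) {
        try {
            return invokeCommand(userLambda, arg);
        } catch (CacheWrapperException e) {
            Throwable cause = e.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;  // propagate the user's exception as-is
            }
            throw e;                             // genuine cache failure: keep the wrapper
        }
    }

    public static void main(String[] args) {
        try {
            computeLike(k -> { throw new IllegalStateException("parse error"); }, "key");
        } catch (IllegalStateException expected) {
            System.out.println("caught user exception: " + expected.getMessage());
        }
    }
}
```

With this shape, a `ParseException`-style failure raised inside a `computeIfAbsent` lambda would reach the caller unchanged, while infrastructure failures would still arrive wrapped.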
Katia On Fri, May 5, 2017 at 6:54 PM, Wolf Fink wrote: > +1 for Sanne > I would expect the same and I suppose one or more customers will raise > this issue > > On Fri, Apr 28, 2017 at 12:56 PM, Sanne Grinovero > wrote: > >> Personally as a user I'd expect to have a CacheException raised if and >> only if it's not caused by my own code. >> >> Imagine my own lambda is explicitly throwing an exception of a type of >> my choice, it would be nice to receive that error and not a different >> one. >> >> On 28 April 2017 at 11:38, Katia Aresti wrote: >> > Hi all ! >> > >> > Radim pointed me to this thread discussing the exceptions launched by >> the >> > lambda executed by the user. >> > >> > So, I've came accros this problem right now with the compute method. >> > >> > ComputeIfAbsent is used by the QueryCache [1] >> > >> > This method is now a Command, so when the wrapped lambda throws an >> > exception, [2], the expected exception is the one raised by the lambda. >> But >> > with my modifications, this exception is wrapped in a CacheException. >> > >> > I discussed with Adrien yesterday, and IMHO and his, a CacheException >> is not >> > the same thing as the exception raised inside the lambda. Moreover, in >> this >> > particular case, I don't know if users some code could be broken if we >> make >> > the user get a CacheException that wrappes the ParseException instead >> of the >> > ParseException itself. >> > >> > How can I fix the problem ? >> > Should we correct the tests and say that, from now on, CacheException >> will >> > be raised ? >> > Should we handle this CacheException in the QueryCache class when >> > computeIfAbsent is called ? >> > Should we propagate the lambda's exception as it is ? 
>> > >> > Katia >> > >> > [1] >> > https://github.com/infinispan/infinispan/blob/master/query/s >> rc/main/java/org/infinispan/query/dsl/embedded/impl/QueryCache.java#L79 >> > [2] >> > https://github.com/infinispan/infinispan/blob/master/query/s >> rc/test/java/org/infinispan/query/dsl/embedded/QueryDslCondi >> tionsTest.java#L1913 >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170506/9bc639f4/attachment.html From slaskawi at redhat.com Mon May 8 03:57:40 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 8 May 2017 09:57:40 +0200 Subject: [infinispan-dev] Exposing cluster deployed in the cloud Message-ID: Hey guys! A while ago I started working on exposing Infinispan Cluster which is hosted in Kubernetes to the outside world: [image: pasted1] I'm currently struggling to get solution like this into the platform [1] but in the meantime I created a very simple POC and I'm testing it locally [2]. There are two main problems with the scenario described above: 1. Infinispan server announces internal addresses (172.17.x.x) to the client. The client needs to remap them into external ones (172.29.x.x). 2. A custom Consistent Hash needs to be supplied to the Hot Rod client. When accessing cache, the Hot Rod Client needs to calculate server id for internal address and then map it to the external one. 
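The remapping described in the two points above could be sketched as a small mapper that the client consults just before opening a connection. To be clear, none of these names (`ServerAddressMapper`, `StaticMapper`) exist in the Hot Rod client today — this is only a hedged sketch of the proposed idea:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of internal <-> external address remapping for a
// Hot Rod client running outside the Kubernetes cluster. All type names
// here are made up; the real API shape is still under discussion.
public class AddressMapperSketch {

    /** Maps cluster-internal addresses announced by the server to
     *  externally reachable ones, and back. */
    public interface ServerAddressMapper {
        InetSocketAddress toExternal(InetSocketAddress internal);
        InetSocketAddress toInternal(InetSocketAddress external);
    }

    /** Simple static-table implementation for illustration. */
    public static final class StaticMapper implements ServerAddressMapper {
        private final Map<InetSocketAddress, InetSocketAddress> internalToExternal;
        private final Map<InetSocketAddress, InetSocketAddress> externalToInternal;

        public StaticMapper(Map<InetSocketAddress, InetSocketAddress> internalToExternal) {
            this.internalToExternal = internalToExternal;
            this.externalToInternal = new HashMap<>();
            internalToExternal.forEach((in, ex) -> externalToInternal.put(ex, in));
        }

        public InetSocketAddress toExternal(InetSocketAddress internal) {
            // Fall back to the announced address when no mapping is known
            return internalToExternal.getOrDefault(internal, internal);
        }

        public InetSocketAddress toInternal(InetSocketAddress external) {
            return externalToInternal.getOrDefault(external, external);
        }
    }

    public static void main(String[] args) {
        // The consistent hash is still computed over the internal address;
        // the mapper is applied just before opening the socket.
        Map<InetSocketAddress, InetSocketAddress> mapping = new HashMap<>();
        mapping.put(InetSocketAddress.createUnresolved("172.17.0.5", 11222),
                    InetSocketAddress.createUnresolved("172.29.0.5", 11222));
        ServerAddressMapper mapper = new StaticMapper(mapping);

        InetSocketAddress announced = InetSocketAddress.createUnresolved("172.17.0.5", 11222);
        System.out.println(mapper.toExternal(announced).getHostString()); // prints 172.29.0.5
    }
}
```

A static table only illustrates the mapping step; in practice the table itself would have to come from some discovery mechanism, which is the harder half of the problem.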
If there are no strong opinions on this, I plan to implement it shortly. There will be an additional method in the Hot Rod Client configuration (ConfigurationBuilder#addServerMapping(String mappingClass)) which will be responsible for mapping external addresses to internal and vice-versa. Thoughts? Thanks, Sebastian [1] https://github.com/kubernetes/community/pull/446 [2] https://github.com/slaskawi/external-ip-proxy -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/2172700b/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: pasted1 Type: image/png Size: 36647 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/2172700b/attachment-0001.png From gustavo at infinispan.org Mon May 8 05:08:00 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 8 May 2017 10:08:00 +0100 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: References: Message-ID: Questions inlined: On Mon, May 8, 2017 at 8:57 AM, Sebastian Laskawiec wrote: > Hey guys! > > A while ago I started working on exposing Infinispan Cluster which is > hosted in Kubernetes to the outside world: > What about SNI, wasn't this scenario the reason why it was implemented, IOW to allow HR clients to access an ispn hosted in the cloud? > > [image: pasted1] > > I'm currently struggling to get solution like this into the platform [1] > but in the meantime I created a very simple POC and I'm testing it locally > [2]. > What does "application" mean in the diagram? Are those different pods, or single containers part of a pod? There isn't much doc available at [2], how does it work? > > There are two main problems with the scenario described above: > > 1. Infinispan server announces internal addresses (172.17.x.x) to the > client.
The client needs to remap them into external ones (172.29.x.x). > > How would the external address be allocated, e.g. during scaling up and down and how the HR client would know how to map them correctly? > > 1. A custom Consistent Hash needs to be supplied to the Hot Rod > client. When accessing cache, the Hot Rod Client needs to calculate server > id for internal address and then map it to the external one. > > If there will be no strong opinions regarding to this, I plan to implement > this shortly. There will be additional method in Hot Rod Client > configuration (ConfigurationBuilder#addServerMapping(String > mappingClass)) which will be responsible for mapping external addresses to > internal and vice-versa. > > Thoughts? > > Thanks, > Sebastian > > [1] https://github.com/kubernetes/community/pull/446 > [2] https://github.com/slaskawi/external-ip-proxy > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/6836e2b0/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: pasted1 Type: image/png Size: 36647 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/6836e2b0/attachment-0001.png From galder at redhat.com Mon May 8 07:10:44 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 8 May 2017 13:10:44 +0200 Subject: [infinispan-dev] HotRod client TCK In-Reply-To: <7bba7850-52fa-1b98-45da-603f1443cc34@redhat.com> References: <7bba7850-52fa-1b98-45da-603f1443cc34@redhat.com> Message-ID: <3C4C9C5E-1450-4052-9AB9-9681A0B57695@redhat.com> I think there's some value in Radim's suggestion. 
The email was not fully clear to me initially but after reading a few times I understood what he was referring to. @Radim, correct me if I'm wrong... Right now clients verify that they behave as expected, e.g. JS client uses its asserts, Java client uses other asserts. What Radim is trying to say is that there needs to be a way to verify they work adequately independent of their implementations. So, the only way to do that is to verify it at the server level. Not sure what exactly he means by the fake server, but more than a fake server, I'd be more inclined to modify the server so that it can somehow act as a TCK verifier. This is to avoid having to reimplement transport logic, protocol decoder...etc in a new fake server. Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 11 Apr 2017, at 15:57, Radim Vansa wrote: > > Since these tests use real server(s), many of them test not only the > client behaviour (generating correct commands according to the > protocol), but server, too. While this is practical (we need to test > server somehow, too), there's nothing all the tests across languages > will have physically in common and all comparison is prone to human error. > > If we want to test various implementations of the client, maybe it would > make sense to give the clients a fake server that will have just a > scenario of expected commands to receive and pre-defined responses. We > could use audit log to generate such scenario based on the actual Java > tests. > > But then we'd have to test the actual behaviour on server, and we'd need > a way to issue the commands. > > Just my 2c > > Radim > > On 04/11/2017 02:33 PM, Martin Gencur wrote: >> Hello all, >> we have been working on https://issues.jboss.org/browse/ISPN-7120. >> >> Anna has finished the first step from the JIRA - collecting information >> about tests in the Java HotRod client test suite (including server >> integration tests) and it is now prepared for wider review. >> >> She created a spreadsheet [1].
The spread sheet includes for each Java >> test its name, the suggested target package in the TCK, whether to >> include it in the TCK or not, and some other notes. The suggested >> package also poses grouping for the tests (e.g. tck.query, tck.near, >> tck.xsite, ...) >> >> Let me add that right now the goal is not to create a true TCK [2]. The >> goal is to make sure that all implementations of the HotRod protocol >> have sufficient test coverage and possibly the same server side of the >> client-server test (including the server version and configuration). >> >> What are the next step? >> >> * Please review the list (at least a quick look) and see if some of the >> tests which are NOT suggested for the TCK should be added or vice versa. >> * I suppose the next step would then be to check other implementations >> (C#, C++, NodeJS, ..) and identify tests which are missing there (there >> will surely be some). >> * Gradually implement the missing tests in the other implementations >> Note: Here we should ensure that the server is configured in the same >> way for all implementations. One way to achieve this (thanks Anna for >> suggestion!) is to have a shell/batch scripts for CLI which would be >> executed before the tests. This can probably be done for all impls. and >> both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes >> useless because it uses Creaper (Java) and we need a language-neutral >> solution for configuring the server. >> >> Some other notes: >> * there are some duplicated tests in hotrod-client and server >> integration test suites, in this case it probably makes sense to only >> include in the TCK the server integration test >> * tests from the hotrod-client module which are supposed to be part of >> the TCK should be copied to the server integration test suite one day >> (possibly later) >> >> Please let us know what you think. 
>> >> Thanks, >> Martin >> >> >> [1] >> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0 >> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit >> [3] https://github.com/infinispan/infinispan/pull/5012 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon May 8 07:32:13 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 8 May 2017 13:32:13 +0200 Subject: [infinispan-dev] HotRod client TCK In-Reply-To: References: Message-ID: Btw, thanks Anna for working on this! I've had a look at the list and I have some questions: * HotRodAsyncReplicationTest: I don't think it should be a client TCK test. There's nothing the client does differently compared to executing against a sync repl cache. If anything, it's a server TCK test since it verifies that a put sent by a HR client gets replicated. The same applies to all the test of local vs REPl vs DIST tests. * LockingTest: same story, this is a client+server integration test, I don't think it's a client TCK test. If anything, it's a server TCK test. It verifies that if a client sends a put, the entry is locked. * MixedExpiry*Test: it's dependant on the server configuration, not really a client TCK test IMO. I think the only client TCK tests that deal with expiry should only verify that the entry is expirable if the client decides to make it expirable. * ClientListenerRemoveOnStopTest: Not sure this is a client TCK test. Yeah, it verifies that the client removes its listeners on stop, but it's not a Hot Rod protocol TCK test. 
Going back to what Radim said, how are you going to verify each client does this? What we can verify for all clients easily is they send the commands to remove the client listeners to the server. Maybe for these and below client specific logic related tests, as Martin suggested, we go with the approach of just verifying that tests exist. * Protobuf marshaller tests: client specific and testing client-side marshalling logic. Same reasons above. * Near caching tests: client specific and testing client-side near caching logic. Same issues above. * Topology change tests: I consider these TCK tests cos you could think that if the server sends a new topology, the client's next command should have the ID of this topology in its header. * Failover/Retry tests: client specific and testing client-side retry logic. Same issues above, how do you verify it works across the board for all clients? * Socket timeout tests: again these are client specific... I think in general it'd be a good idea to try to verify somehow most of the TCK via some server-side logic, as Radim hinted, and where that's not possible, revert to just verifying the client has tests to cover certain scenarios. Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 11 Apr 2017, at 14:33, Martin Gencur wrote: > > Hello all, > we have been working on https://issues.jboss.org/browse/ISPN-7120. > > Anna has finished the first step from the JIRA - collecting information > about tests in the Java HotRod client test suite (including server > integration tests) and it is now prepared for wider review. > > She created a spreadsheet [1]. The spread sheet includes for each Java > test its name, the suggested target package in the TCK, whether to > include it in the TCK or not, and some other notes. The suggested > package also poses grouping for the tests (e.g. tck.query, tck.near, > tck.xsite, ...) > > Let me add that right now the goal is not to create a true TCK [2].
The > goal is to make sure that all implementations of the HotRod protocol > have sufficient test coverage and possibly the same server side of the > client-server test (including the server version and configuration). > > What are the next step? > > * Please review the list (at least a quick look) and see if some of the > tests which are NOT suggested for the TCK should be added or vice versa. > * I suppose the next step would then be to check other implementations > (C#, C++, NodeJS, ..) and identify tests which are missing there (there > will surely be some). > * Gradually implement the missing tests in the other implementations > Note: Here we should ensure that the server is configured in the same > way for all implementations. One way to achieve this (thanks Anna for > suggestion!) is to have a shell/batch scripts for CLI which would be > executed before the tests. This can probably be done for all impls. and > both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes > useless because it uses Creaper (Java) and we need a language-neutral > solution for configuring the server. > > Some other notes: > * there are some duplicated tests in hotrod-client and server > integration test suites, in this case it probably makes sense to only > include in the TCK the server integration test > * tests from the hotrod-client module which are supposed to be part of > the TCK should be copied to the server integration test suite one day > (possibly later) > > Please let us know what you think. 
> > Thanks, > Martin > > > [1] > https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0 > [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit > [3] https://github.com/infinispan/infinispan/pull/5012 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon May 8 07:45:52 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 8 May 2017 14:45:52 +0300 Subject: [infinispan-dev] HotRod client TCK In-Reply-To: References: Message-ID: On Mon, May 8, 2017 at 2:32 PM, Galder Zamarre?o wrote: > Btw, thanks Anna for working on this! > > I've had a look at the list and I have some questions: > > * HotRodAsyncReplicationTest: I don't think it should be a client TCK test. There's nothing the client does differently compared to executing against a sync repl cache. If anything, it's a server TCK test since it verifies that a put sent by a HR client gets replicated. The same applies to all the test of local vs REPl vs DIST tests. > > * LockingTest: same story, this is a client+server integration test, I don't think it's a client TCK test. If anything, it's a server TCK test. It verifies that if a client sends a put, the entry is locked. > > * MixedExpiry*Test: it's dependant on the server configuration, not really a client TCK test IMO. I think the only client TCK tests that deal with expiry should only verify that the entry is expirable if the client decides to make it expirable. > I think they should be included, because this is part of the HotRod wire specification: * +0x0002+ = use cache-level configured default lifespan * +0x0004+ = use cache-level configured default max idle > * ClientListenerRemoveOnStopTest: Not sure this is a client TCK test. Yeah, it verifies that the client removes its listeners on stop, but it's not a Hot Rod protocol TCK test. 
Going back to what Radim said, how are you going to verify each client does this? What we can verify for all clients easily is they send the commands to remove the client servers to the server. Maybe for these and below client specific logic related tests, as Martin suggesteds, we go with the approach of just verifying that tests exist. > > * Protobuf marshaller tests: client specific and testing client-side marshalling logic. Same reasons above. > > * Near caching tests: client specific and testing client-side near caching logic. Same issues above. > > * Topology change tests: I consider these TCK tests cos you could think that if the server sends a new topology, the client's next command should have the ID of this topology in its header. > > * Failover/Retry tests: client specific and testing client-side retry logic. Same issues above, how do you verify it works accross the board for all clients? > > * Socket timeout tests: again these are client specific... > > I think in general it'd be a good idea to try to verify somehow most of the TCK via some server-side logic, as Radim hinted, and where that's not possible, revert to just verifying the client has tests to cover certain scenarios. +1 Dan > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 11 Apr 2017, at 14:33, Martin Gencur wrote: >> >> Hello all, >> we have been working on https://issues.jboss.org/browse/ISPN-7120. >> >> Anna has finished the first step from the JIRA - collecting information >> about tests in the Java HotRod client test suite (including server >> integration tests) and it is now prepared for wider review. >> >> She created a spreadsheet [1]. The spread sheet includes for each Java >> test its name, the suggested target package in the TCK, whether to >> include it in the TCK or not, and some other notes. The suggested >> package also poses grouping for the tests (e.g. tck.query, tck.near, >> tck.xsite, ...) 
>> >> Let me add that right now the goal is not to create a true TCK [2]. The >> goal is to make sure that all implementations of the HotRod protocol >> have sufficient test coverage and possibly the same server side of the >> client-server test (including the server version and configuration). >> >> What are the next step? >> >> * Please review the list (at least a quick look) and see if some of the >> tests which are NOT suggested for the TCK should be added or vice versa. >> * I suppose the next step would then be to check other implementations >> (C#, C++, NodeJS, ..) and identify tests which are missing there (there >> will surely be some). >> * Gradually implement the missing tests in the other implementations >> Note: Here we should ensure that the server is configured in the same >> way for all implementations. One way to achieve this (thanks Anna for >> suggestion!) is to have a shell/batch scripts for CLI which would be >> executed before the tests. This can probably be done for all impls. and >> both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes >> useless because it uses Creaper (Java) and we need a language-neutral >> solution for configuring the server. >> >> Some other notes: >> * there are some duplicated tests in hotrod-client and server >> integration test suites, in this case it probably makes sense to only >> include in the TCK the server integration test >> * tests from the hotrod-client module which are supposed to be part of >> the TCK should be copied to the server integration test suite one day >> (possibly later) >> >> Please let us know what you think. 
>> >> Thanks, >> Martin >> >> >> [1] >> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0 >> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit >> [3] https://github.com/infinispan/infinispan/pull/5012 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From vrigamon at redhat.com Mon May 8 07:49:42 2017 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Mon, 8 May 2017 13:49:42 +0200 Subject: [infinispan-dev] My weekly report Message-ID: Hi team, I won't be able to be in today's weekly meeting. My updates for the last two weeks: JENKINS worked on Jenkins to set up the build pipeline for the C++ and C# clients. This task is completed but we still have these open points: the windows machine needs a manual start up at the moment, but we want it to be automatic. need to study how to expose the produced release artifacts 8.1.1 worked on code cleanup for a 0.0.1 release. I'm collecting all the changes here: https://github.com/rigazilla/cpp-client/tree/HRCPP-373/warning I would like to clean up the SChannel socket implementation (windows) but I need to get a deeper knowledge of the windows security api. I'm working on this currently -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/b8b41966/attachment.html From galder at redhat.com Mon May 8 09:58:57 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 8 May 2017 15:58:57 +0200 Subject: [infinispan-dev] to be a command, or not to be a command, that is the question In-Reply-To: References: Message-ID: Hey Katia, Sorry for delay replying back! I'm surprised there has not been more feedback. My position on this is well known around the team, so let me summarise it: My feeling has always been that we have too many commands and we should reduce number of commands. Part of the functional map experiment was to show with a subset of commands, all sorts of front end operations could be exposed. So, I'm on Radim's side on this. By passing functions/lambdas, we get a lot of flexibility with very little cost. IOW, we can add more operations by just passing in different lambdas to existing commands. However, it is true that having different front API methods that only differ in the lambda makes it initially hard to potentially do different things for each, but couldn't that be solved with some kind of enum? Although enums are useful, they're a bit limited, e.g. don't take params, so since you've done Scala before, maybe this could be solved with some Scala-like sealed trait for each front end operation type? I used something like a sealed trait for implementing a more flexible flag system for functional map API called org.infinispan.commons.api.functional.Param The problem I have with adding more commands is the explosion that it provokes in terms of code, with all the required visit* method impls all over the place...etc. I personally think that the lack of a more flexible command architecture is what has stopped us from adding front-end operations more quickly (e.g. counters, multi-maps...etc). 
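The trade-off between one generic lambda-taking command and many dedicated command classes can be sketched outside of Infinispan's real command and functional APIs — every name below is made up for illustration. A single generic read-write operation takes a lambda over an entry view, and the ConcurrentMap-style methods become just different lambdas over it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// Toy illustration only (not Infinispan's actual API): one generic
// read-write "command" taking a lambda, with compute-style front-end
// operations expressed as different lambdas on top of it.
public class GenericCommandSketch<K, V> {

    /** Minimal entry view a lambda can read and write. */
    public interface EntryView<K, V> {
        K key();
        V get();
        void set(V value);
        void remove();
    }

    private final Map<K, V> store = new HashMap<>();

    /** The one generic command: evaluate a function against the entry. */
    public <R> R evalReadWrite(K key, Function<EntryView<K, V>, R> fn) {
        // In a real cache, locking/replication/visitor plumbing would live
        // here once, instead of once per dedicated command class.
        EntryView<K, V> view = new EntryView<K, V>() {
            public K key() { return key; }
            public V get() { return store.get(key); }
            public void set(V value) { store.put(key, value); }
            public void remove() { store.remove(key); }
        };
        return fn.apply(view);
    }

    /** computeIfAbsent as a lambda over the generic command. */
    public V computeIfAbsent(K key, Function<K, V> mapping) {
        return evalReadWrite(key, view -> {
            V current = view.get();
            if (current != null) return current;
            V computed = mapping.apply(key);
            if (computed != null) view.set(computed);
            return computed;
        });
    }

    /** merge as another lambda over the same command. */
    public V merge(K key, V value, BiFunction<V, V, V> remapping) {
        return evalReadWrite(key, view -> {
            V current = view.get();
            V merged = current == null ? value : remapping.apply(current, value);
            if (merged == null) view.remove(); else view.set(merged);
            return merged;
        });
    }
}
```

In this shape, adding merge or computeIfPresent costs one more lambda rather than one more command class plus visit* methods across the interceptor stack — which is the balance being argued for here.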
IMO, working with generic commands that take lambdas is a way to strike a balance between adding front-end operations quickly and not resulting in a huge explosion of commands. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 20 Apr 2017, at 16:06, Katia Aresti wrote: > > Hi all > > Well, nobody spoke, so I consider that everybody agrees that I can take a decision like a big girl by myself ! :) > > I'm going to add 3 new commands, for merge, compute&computeIfPresent and computeIfAbsent. So I won't use the actual existing commands for the implementation : ReadWriteKeyCommand and ReadWriteKeyValueCommand even if I'm a DRY person and I love reusing code, I'm a KISS person too. > > I tested the implementation using these functional commands and IMHO : > - merge and compute methods worth their own commands, they are very useful and we might want to adjust/optimize them individually > - there are some technical issues related to the TypeConverterDelegatingAdvancedCache that makes me modify these existing functional commands with some hacky code that, for me, should be kept in commands like merge or compute with the correct documentation. They don't belong to a generic command. > - Functional API is experimental right now. It might be non experimental in the near future, but we might decide to move to another thing. The 3 commands are already "coded" in my branches (not everything reviewed yet but soon). If one day we decide to change/simplify or we find a nice way to get rid of commands with a more generic one, removing and simplifying should be less painful than adding commands for these methods. > > That's all ! > > Cheers > > Katia > > > > On Wed, Apr 12, 2017 at 12:11 PM, Katia Aresti wrote: > Hi all, > > As you might know I'm working since my arrival, among other things, on ISPN-5728 Jira [1], where the idea is to override the default ConcurrentMap methods that are missing in CacheImpl (merge, replaceAll, compute ... 
) > > I've created a pull-request [2] for compute, computeIfAbsent and computeIfPresent methods, creating two new commands. By the way, I did the same thing for the merge method in a branch that I haven't pull requested yet. > > There is an opposite view between Radim and Will concerning the implementation of these methods. To make it short : > In one side Will considers compute/merge best implementation should be as a new Command (so what is already done) > In the other side, Radim considers adding another command is not necessary as we could simple implement these methods using ReadWriteKeyCommand > > The detailed discussion and arguments of both sides is on GitHub [2] > > Before moving forward and making any choice by myself, I would like to hear your opinions. For the record, it doesn't bother me redoing everything if most people think like Radim because working on commands has helped me to learn and understand more about infinispan internals, so this hasn't been a waste of time for me. > > Katia > > [1] https://issues.jboss.org/browse/ISPN-5728 > [2] https://github.com/infinispan/infinispan/pull/5046 > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Mon May 8 10:10:42 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 08 May 2017 14:10:42 +0000 Subject: [infinispan-dev] Openshift blogposts In-Reply-To: <43d6a13d-90b9-3458-dc5e-db6d3885ba70@redhat.com> References: <43d6a13d-90b9-3458-dc5e-db6d3885ba70@redhat.com> Message-ID: Hey Radim, Moving to dev mailing list. Comments inlined. Thanks, Sebastian On Tue, May 2, 2017 at 5:28 PM Radim Vansa wrote: > Hi Sebastian, > > I am currently getting acquainted with OpenShift so I have been reading > your blogposts about that. 
Couple of questions: > > http://blog.infinispan.org/2016/10/openshift-and-node-affinity.html > > - so you need to have different deployment config for each rack/site? > Yes. A while ago I read an article about managing scheduler using labels: https://blog.openshift.com/deploying-applications-to-specific-nodes/ So I think it can be optimized to 1 DeploymentConfig + some magic in spec.template. But that's only my intuition. I haven't played with this yet. > > http://blog.infinispan.org/2017/03/checking-infinispan-cluster-health-and.html > > maxUnavailable: 1 and maxSurge: 1 don't sound too good to me - if you > can't fit all the data into single pod, you need to set maxUnavailable: > 0 (to not bring any nodes down before the rolling upgrade completes) and > maxSurge: 100% to have enough nodes started. + Some post-hook to make > sure all data are in new cluster before you bring down the old one. Am I > missing something? > Before answering those questions, let me show you two examples: - maxUnavailable: 1, maxSurge 1 - - oc logs transactions-repository-2-deploy -f 1. --> Scaling up transactions-repository-2 from 0 to 3, scaling down transactions-repository-1 from 3 to 0 (keep 2 pods available, don't exceed 4 pods) 2. * Scaling transactions-repository-2 up to 1* 3. * Scaling transactions-repository-1 down to 2* 4. Scaling transactions-repository-2 up to 2 5. Scaling transactions-repository-1 down to 1 6. Scaling transactions-repository-2 up to 3 7. Scaling transactions-repository-1 down to 0 8. --> Success - maxUnavailable: 0, maxSurge 100% - oc logs transactions-repository-3-deploy -f 1. --> Scaling up transactions-repository-3 from 0 to 3, scaling down transactions-repository-2 from 3 to 0 (keep 3 pods available, don't exceed 6 pods) 2. Scaling transactions-repository-3 up to 3 3. * Scaling transactions-repository-2 down to 1 * 4. * Scaling transactions-repository-2 down to 0* 5. --> Success So we are talking about Kubernetes Rolling Update here. 
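For reference, the two runs above correspond — as a rough sketch, with only the two parameters shown differing — to DeploymentConfig rolling strategy settings like:

```yaml
# Sketch of the two strategies discussed; values are the only difference.
strategy:
  type: Rolling
  rollingParams:
    maxUnavailable: 1   # first run: swap pods one at a time
    maxSurge: 1
# versus, for the "spin up the whole new cluster first" variant:
#   maxUnavailable: 0
#   maxSurge: 100%
```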
You have a new version of your deployment (e.g. with updated parameters, labels etc) and you want to update your deployment in Kubernetes (do not confuse it with Infinispan Rolling Upgrade, where the intention is to roll out a new Infinispan cluster). The former approach (maxUnavailable: 1, maxSurge 1) allocates an additional Infinispan node for greater cluster capacity. Then it scales the old cluster down. This results in sending a KILL [1] signal to the Pod so it gets a chance to shut down gracefully. As a side effect, this also triggers a cluster rebalance (since 1 node leaves the cluster). And we go like this on and on until we replace the old cluster with the new one. The latter approach spins a new cluster up. Then Kubernetes sends a KILL signal to *all* old cluster members. Both approaches should work if configured correctly (the former relies heavily on readiness probes and the latter on moving data off the node after receiving the KILL signal). However I would assume the latter generates much more network traffic in a short period of time, which I consider a bit more risky. Regarding a hook which ensures all data has been migrated - I'm not sure how to build such a hook. The main idea is to keep the cluster in an operational state so that none of the clients would notice the rollout. It works like a charm with the former approach. [1] https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods > Radim > > -- > Radim Vansa > JBoss Performance Team > > -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170508/73ff7629/attachment.html From belaban at mailbox.org Mon May 8 11:14:53 2017 From: belaban at mailbox.org (Bela Ban) Date: Mon, 8 May 2017 17:14:53 +0200 Subject: [infinispan-dev] Running an Infinispan cluster on Kubernetes / Google Container Engine Message-ID: <57caf25e-ce03-4829-9804-b74fe8a0c627@mailbox.org> FYI: http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html -- Bela Ban | http://www.jgroups.org From slaskawi at redhat.com Tue May 9 04:53:20 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 09 May 2017 08:53:20 +0000 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: References: Message-ID: Hey Gustavo, Comments inlined. Thanks, Sebastian On Mon, May 8, 2017 at 11:13 AM Gustavo Fernandes wrote: > Questions inlined: > > On Mon, May 8, 2017 at 8:57 AM, Sebastian Laskawiec > wrote: > >> Hey guys! >> >> A while ago I started working on exposing Infinispan Cluster which is >> hosted in Kubernetes to the outside world: >> > > > What about SNI, wasn't this scenario the reason why it was implemented, > IOW to allow HR clients to access an ispn hosted in the cloud? > The short answer is no. There are at least two major disadvantages of using SNI to connect to a Pod: 1. You still need to pass a FQDN in the SNI field. A FQDN looks like this [1]: transactions-repository-1-myproject.192.168.0.17.nip.io. This allows you to send TCP packets to a desired Route. In order to reach a specific Pod (assuming one among many), you need to get through a Route and a Service. So it seems you will need a "Pod <-> Service <-> Route" combination for each Pod. Ouch!! 2. TLS slows everything down (by ~50% in my benchmark) Also, your statement that SNI is needed to access an Infinispan Server hosted in the cloud is misleading. I think it originated a year ago, and it wasn't quite accurate even then.
You can create a Service per Pod and expose it using a LoadBalancer or a NodePort. In my experience, creating a LoadBalancer per Pod is much simpler than creating a Clustered Service + Route combination and enforcing TLS/SNI. [1] https://github.com/slaskawi/presentations/blob/master/2017_multi_tenancy/cache-checker/src/main/java/org/infinispan/microservices/Main.java#L29 > > > >> >> [image: pasted1] >> >> I'm currently struggling to get a solution like this into the platform [1] >> but in the meantime I created a very simple POC and I'm testing it locally >> [2]. >> > > What does "application" mean in the diagram? Are those different pods, or > single containers part of a pod? > Those are Pods. Sorry, I made this image too generic. > > There isn't much doc available at [2], how does it work? > What I'm trying to solve here is accessing the data using the shortest possible path - a "single hop", as we used to call it. In order to do that, the client and all the servers need to have the same consistent hash (which the client obtains from one of the servers). The problem is that this obtained consistent hash contains the internal IP addresses used by the servers to form a cluster. Those addresses are not reachable by the client - it needs to use external ones. So the idea is to let the client use the consistent hash with internal addresses but, right before sending a get request, remap the internal address to the external one. I haven't tried it but looking at the code it shouldn't be that hard. > > >> >> There are two main problems with the scenario described above: >> >> 1. Infinispan server announces internal addresses (172.17.x.x) to the >> client. The client needs to remap them into external ones (172.29.x.x). >> >> > How would the external address be allocated, e.g. during scaling up and > down, and how would the HR client know how to map them correctly? > This is the discovery part of the problem and it is pretty hard to solve.
For Kubernetes we can add a 3rd party REST service which will expose this information. I'm experimenting with this approach in my solution: https://github.com/slaskawi/external-ip-proxy/blob/master/Main.go#L57 (later this week I plan to also expose runtime configuration with the internal <-> external mapping). Unfortunately, the same problem also exists in some OpenStack configurations (OpenStack also uses internal/external addresses), so a custom REST service would be needed there as well. But this is very low priority to me. > > >> >> 1. A custom Consistent Hash needs to be supplied to the Hot Rod >> client. When accessing the cache, the Hot Rod client needs to calculate the server >> id for the internal address and then map it to the external one. >> >> If there are no strong opinions on this, I plan to >> implement it shortly. There will be an additional method in the Hot Rod client >> configuration (ConfigurationBuilder#addServerMapping(String mappingClass)) >> which will be responsible for mapping external addresses to internal and >> vice-versa. >> >> Thoughts? >> >> Thanks, >> Sebastian >> >> [1] https://github.com/kubernetes/community/pull/446 >> [2] https://github.com/slaskawi/external-ip-proxy >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170509/b438b21a/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed...
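The single-hop remapping idea discussed in this thread can be sketched as a toy model: the client keeps using the consistent hash it obtained from the server (internal addresses), picks the owner for a key, and remaps that owner to the externally reachable address just before connecting. All names below are hypothetical, not Infinispan API; in the real proposal the mapping would be supplied via something like the ConfigurationBuilder#addServerMapping(...) hook, backed by a discovery service.

```java
import java.util.List;
import java.util.Map;

// Toy model of the proposed client-side address remapping. The mapping itself
// would come from a discovery REST service; here it is just a Map.
public class SingleHopRouter {
    private final List<String> internalOwners;            // consistent-hash view (internal IPs)
    private final Map<String, String> internalToExternal; // discovered mapping

    public SingleHopRouter(List<String> internalOwners, Map<String, String> internalToExternal) {
        this.internalOwners = internalOwners;
        this.internalToExternal = internalToExternal;
    }

    // Stand-in for the real consistent hash: pick the owner by key hash.
    String internalOwner(String key) {
        return internalOwners.get(Math.floorMod(key.hashCode(), internalOwners.size()));
    }

    // Address the client actually connects to for a single-hop read.
    public String route(String key) {
        String internal = internalOwner(key);
        return internalToExternal.getOrDefault(internal, internal);
    }

    // Reverse direction, e.g. to compare a known external endpoint against the
    // server's internal topology view.
    public String toInternal(String external) {
        return internalToExternal.entrySet().stream()
                .filter(e -> e.getValue().equals(external))
                .map(Map.Entry::getKey).findFirst().orElse(external);
    }

    public static void main(String[] args) {
        SingleHopRouter r = new SingleHopRouter(List.of("172.17.0.2"),
                Map.of("172.17.0.2", "172.29.0.2"));
        System.out.println(r.route("user-42")); // 172.29.0.2
    }
}
```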
Name: pasted1 Type: image/png Size: 36647 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170509/b438b21a/attachment-0001.png From ttarrant at redhat.com Tue May 9 08:03:50 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 9 May 2017 14:03:50 +0200 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: References: Message-ID: Sebastian, are you familiar with Hot Rod's proxyHost/proxyPort [1]? On the server side it is configured using the external-host / external-port attributes on the topology-state-transfer element [2]. [1] https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43 [2] https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203 On 5/8/17 9:57 AM, Sebastian Laskawiec wrote: > Hey guys! > > A while ago I started working on exposing an Infinispan cluster which is > hosted in Kubernetes to the outside world: > > pasted1 > > I'm currently struggling to get a solution like this into the platform [1] > but in the meantime I created a very simple POC and I'm testing it > locally [2]. > > There are two main problems with the scenario described above: > > 1. Infinispan server announces internal addresses (172.17.x.x) to the > client. The client needs to remap them into external ones (172.29.x.x). > 2. A custom Consistent Hash needs to be supplied to the Hot Rod client. > When accessing the cache, the Hot Rod client needs to calculate the server > id for the internal address and then map it to the external one. > > If there are no strong opinions on this, I plan to > implement it shortly. There will be an additional method in the Hot Rod > client configuration (ConfigurationBuilder#addServerMapping(String > mappingClass)) which will be responsible for mapping external addresses > to internal and vice-versa.
> > Thoughts? > > Thanks, > Sebastian > > [1] https://github.com/kubernetes/community/pull/446 > [2] https://github.com/slaskawi/external-ip-proxy > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Tue May 9 08:24:23 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 9 May 2017 14:24:23 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: Hi all, Tristan and I had a chat yesterday and I've distilled the contents of the discussion and the feedback here into a JIRA [1]. The JIRA contains several subtasks to handle these aspects: 1. Remove the auth check in the server's CacheDecodeContext. 2. The default server configuration should require authentication at all entry points. 3. Provide an unauthenticated configuration that users can easily switch to. 4. Remove the default username+passwords in the docker image and instead show an info/warn message when these are not provided. 5. Add the capability to easily pass app user role groups to the docker image, so that it's easy to add authorization on top of the server. Cheers, [1] https://issues.jboss.org/browse/ISPN-7811 -- Galder Zamarreño Infinispan, Red Hat > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: > > That is caused by not wrapping the calls in PrivilegedActions in all the > correct places and is a bug. > > Tristan > > On 19/04/2017 11:34, Sebastian Laskawiec wrote: >> The proposal looks ok to me. >> >> But I would also like to highlight one thing - it seems you can't access >> secured cache properties using CLI.
This seems wrong to me (if you can >> invoke the cli, in 99,99% of the cases you have access to the machine, >> so you can do whatever you want). It also breaks healthchecks in Docker >> image. >> >> I would like to make sure we will address those concerns. >> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant > > wrote: >> >> Currently the "protected cache access" security is implemented as >> follows: >> >> - if authorization is enabled || client is on loopback >> allow >> >> The first check also implies that authentication needs to be in place, >> as the authorization checks need a valid Subject. >> >> Unfortunately authorization is very heavy-weight and actually overkill >> even for "normal" secure usage. >> >> My proposal is as follows: >> - the "default" configuration files are "secure" by default >> - provide clearly marked "unsecured" configuration files, which the user >> can use >> - drop the "protected cache" check completely >> >> And definitely NO to a dev switch. >> >> Tristan >> >> On 19/04/2017 10:05, Galder Zamarre?o wrote: >>> Agree with Wolf. Let's keep it simple by just providing extra >> configuration files for dev/unsecure envs. >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>>> On 15 Apr 2017, at 12:57, Wolf Fink > > wrote: >>>> >>>> I would think a "switch" can have other impacts as you need to >> check it in the code - and might have security leaks here >>>> >>>> So what is wrong with some configurations which are the default >> and secured. >>>> and a "*-dev or *-unsecure" configuration to start easy. >>>> Also this can be used in production if there is no need for security >>>> >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >> > wrote: >>>> I still think it would be better to create an extra switch to >> run infinispan in "development mode". 
This means no authentication, >> no encryption, possibly with JGroups stack tuned for fast discovery >> (especially in Kubernetes) and a big warning saying "You are in >> development mode, do not use this in production". >>>> >>>> Just something very easy to get you going. >>>> >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >> > wrote: >>>> >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >> > wrote: >>>>> >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >> > wrote: >>>>> Hi all, >>>>> >>>>> As per some discussions we had yesterday on IRC w/ Tristan, >> Gustavo and Sebastian, I've created a docker image snapshot that >> reverts the change stop protected caches from requiring security >> enabled [1]. >>>>> >>>>> In other words, I've removed [2]. The reason for temporarily >> doing that is because with the change as is, the changes required >> for a default server distro require that the entire cache manager's >> security is enabled. This is in turn creates a lot of problems with >> health and running checks used by Kubernetes/OpenShift amongst other >> things. >>>>> >>>>> Judging from our discussions on IRC, the idea is for such >> change to be present in 9.0.1, but I'd like to get final >> confirmation from Tristan et al. >>>>> >>>>> >>>>> +1 >>>>> >>>>> Regarding the "security by default" discussion, I think we >> should ship configurations cloud.xml, clustered.xml and >> standalone.xml with security enabled and disabled variants, and let >> users >>>>> decide which one to pick based on the use case. >>>> >>>> I think that's a better idea. >>>> >>>> We could by default have a secured one, but switching to an >> insecure configuration should be doable with minimal effort, e.g. >> just switching config file. >>>> >>>> As highlighted above, any secured configuration should work >> out-of-the-box with our docker images, e.g. WRT healthy/running checks. 
>>>> >>>> Cheers, >>>> >>>>> >>>>> Gustavo. >>>>> >>>>> >>>>> Cheers, >>>>> >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >> (9.0.1-SNAPSHOT tag for anyone interested) >>>>> [2] >> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 >>>>> -- >>>>> Galder Zamarre?o >>>>> Infinispan, Red Hat >>>>> >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant > > wrote: >>>>>> >>>>>> Dear all, >>>>>> >>>>>> after a mini chat on IRC, I wanted to bring this to >> everybody's attention. >>>>>> >>>>>> We should make the Hot Rod endpoint require authentication in the >>>>>> out-of-the-box configuration. >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >>>>>> mechanism against the ApplicationRealm and require users to >> run the >>>>>> add-user script. >>>>>> This would achieve two goals: >>>>>> - secure out-of-the-box configuration, which is always a good idea >>>>>> - access to the "protected" schema and script caches which is >> prevented >>>>>> when not on loopback on non-authenticated endpoints. 
>>>>>> >>>>>> Tristan >>>>>> -- >>>>>> Tristan Tarrant >>>>>> Infinispan Lead >>>>>> JBoss, a division of Red Hat >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> -- >>>> SEBASTIAN ?ASKAWIEC >>>> INFINISPAN DEVELOPER >>>> Red Hat EMEA >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> -- >> Tristan Tarrant >> Infinispan Lead >> JBoss, a division of Red Hat >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> >> SEBASTIAN?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red HatEMEA >> >> >> >> >> >> _______________________________________________ >> 
infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From karesti at redhat.com Tue May 9 09:21:07 2017 From: karesti at redhat.com (Katia Aresti) Date: Tue, 9 May 2017 15:21:07 +0200 Subject: [infinispan-dev] Need to understand, ClusteredCacheWithElasticsearchIndexManagerIT Message-ID: Hi all, I'm really struggling with something in order to finish the compute methods. I added a test in *ClusteredCacheWithElasticsearchIndexManagerIT*:

public void testToto() throws Exception {
   SearchManager searchManager = Search.getSearchManager(cache2);
   QueryBuilder queryBuilder = searchManager
         .buildQueryBuilderForClass(Person.class)
         .get();
   Query allQuery = queryBuilder.all().createQuery();

   String key = "newGoat";
   Person person4 = new Person(key, "eats something", 42);
   cache2.putIfAbsent(key, person4);
   StaticTestingErrorHandler.assertAllGood(cache1, cache2);

   List found = searchManager.getQuery(allQuery, Person.class).list();
   assertEquals(1, found.size());
   assertTrue(found.contains(person4));
}

I put some logs in the processPutKeyValueCommand method in the *QueryInterceptor* to explain what is happening. *2 threads* Sometimes two threads get involved. = Thread 72 First (or second) call It happens from a non-local node, so shouldModifyIndexes says "no, you should not modify any index" because the IndexModificationStrategy is set to "LOCAL ONLY".
[1]

72 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-19565
72 should modify false
72 previousValue null
72 putValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null} // value in the command
72 contextValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null} // value in the invocation context

= Thread 48 Second (or first) call The origin is null, and this is considered as LOCAL in the SingleKeyNonTxInvocationContext. [2] In this case, the index is modified correctly: the value in the context has already been set up by the PutKeyValueCommand and the index gets correctly updated.

48 ctx.getOrigin() = null
48 should modify true
48 previousValue null
48 putValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
48 contextValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}

And everything is ok. Everything is fine too in the case of a compute method instead of the put method. But sometimes, this is not executed like that. *3 threads* What is a bit weirder to me is this second scenario, where the commands are executed both from non-local nodes (A and B), and so the index is not updated. But just later, another thread gets involved and calls the QueryInterceptor with an invocation context where the command has not been executed (the value is not inside the context and the debugger does not enter the perform method, which has happened just twice before). This call seems to come from a callback in the QueueAsyncInvocationStage.
80 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-65110
80 should modify false
80 prev null
80 putValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
80 contextValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}

38 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-35919
38 should modify false
38 prev null
38 putValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
38 contextValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}

48 ctx.getOrigin() = null
48 should modify true
48 prev null
48 putValue Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
48 contextValue null

This execution works perfectly with PutKeyValueCommand, but it doesn't work with compute. The "computed value" is not carried inside the command as it is for put, replace and others; it is computed in the perform method (if needed). So, the first time the command is executed on A, the computed value is in the context, but the index is not updated. On the second call, executed on B, the value is in the context, but the index is not updated. The magic callback is executed, but the computed value is nowhere to be found, because the command is not executed a third time, so the context value is null. Can somebody please shed some light on this and explain what I am missing? Other tests are failing for the same reason, like org.infinispan.query.blackbox.ClusteredCacheWithInfinispanDirectoryTest. Thank you very much for your help! Katia [1] https://github.com/infinispan/infinispan/blob/master/query/src/main/java/org/infinispan/query/backend/IndexModificationStrategy.java#L50 [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/context/SingleKeyNonTxInvocationContext.java#L39 -------------- next part -------------- An HTML attachment was scrubbed...
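The asymmetry Katia describes — put carrying its value in the command versus compute only producing it inside perform() — can be reduced to a toy model (class names are invented for illustration, not the actual Infinispan commands):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Toy model: a put-style command carries its value, so a later interceptor
// pass can read it even when the command is not re-executed; a compute-style
// command only produces its value inside perform(), so a pass that skips
// perform() sees null.
public class CommandValueModel {
    public static class PutCommand {
        final String value; // travels with the command
        public PutCommand(String value) { this.value = value; }
        public String valueForIndexing() { return value; }
    }

    public static class ComputeCommand {
        final BiFunction<String, String, String> fn;
        String computed; // only set once perform() runs
        public ComputeCommand(BiFunction<String, String, String> fn) { this.fn = fn; }
        public void perform(String key, Map<String, String> cache) {
            computed = fn.apply(key, cache.get(key));
            cache.put(key, computed);
        }
        public String valueForIndexing() { return computed; } // null if perform() was skipped
    }

    public static void main(String[] args) {
        ComputeCommand c = new ComputeCommand((k, old) -> "computed");
        System.out.println(c.valueForIndexing()); // null: perform() has not run yet
        c.perform("key", new HashMap<>());
        System.out.println(c.valueForIndexing()); // computed
    }
}
```

In the third scenario above, the indexing pass plays the role of a caller that never invokes perform(), which is why the compute variant hands it nothing.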
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170509/b059956e/attachment-0001.html From mudokonman at gmail.com Tue May 9 09:59:26 2017 From: mudokonman at gmail.com (William Burns) Date: Tue, 09 May 2017 13:59:26 +0000 Subject: [infinispan-dev] RemoteCache putAll javadoc outdated? In-Reply-To: <867143645.1182388.1493120478494.JavaMail.zimbra@redhat.com> References: <1968302460.1182274.1493120425518.JavaMail.zimbra@redhat.com> <867143645.1182388.1493120478494.JavaMail.zimbra@redhat.com> Message-ID: Yeah, this should be updated. And to be honest, since this is a public API, it probably shouldn't carry details like this at all, as they are implementation specific. We can get this fixed up though :) On Tue, Apr 25, 2017 at 7:41 AM Galder Zamarreno wrote: > Hey Will, > > Have we forgotten to update the RemoteCache.putAll javadoc after > implementing ISPN-5266 and related jiras? > > https://github.com/infinispan/infinispan/blob/master/client/hotrod-client/src/main/java/org/infinispan/client/hotrod/RemoteCache.java#L248 > > We're definitely not doing a remote call for each entry in the map anymore > ;) > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed...
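For readers following along, the behaviour the updated javadoc would need to describe is roughly this: instead of one remote call per entry, putAll groups the entries by their owning server and issues one batched call per server. A minimal sketch (the owner function below is a stand-in for the real consistent hash, and the class is illustrative, not Infinispan code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: group a putAll payload by owning server so that one batched remote
// call per server replaces one remote call per entry.
public class PutAllRouting {
    public static Map<Integer, Map<String, String>> groupByOwner(Map<String, String> entries,
                                                                 int numServers) {
        Map<Integer, Map<String, String>> perServer = new HashMap<>();
        for (Map.Entry<String, String> e : entries.entrySet()) {
            // Stand-in for the real consistent hash's owner calculation.
            int owner = Math.floorMod(e.getKey().hashCode(), numServers);
            perServer.computeIfAbsent(owner, o -> new HashMap<>()).put(e.getKey(), e.getValue());
        }
        return perServer; // one remote putAll per inner map, not per entry
    }

    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("k1", "v1");
        m.put("k2", "v2");
        System.out.println(PutAllRouting.groupByOwner(m, 2));
    }
}
```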
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170509/7c9904a9/attachment.html From rvansa at redhat.com Tue May 9 13:13:48 2017 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 9 May 2017 13:13:48 -0400 Subject: [infinispan-dev] HotRod client TCK In-Reply-To: <3C4C9C5E-1450-4052-9AB9-9681A0B57695@redhat.com> References: <7bba7850-52fa-1b98-45da-603f1443cc34@redhat.com> <3C4C9C5E-1450-4052-9AB9-9681A0B57695@redhat.com> Message-ID: On 05/08/2017 07:10 AM, Galder Zamarreño wrote: > I think there's some value in Radim's suggestion. The email was not fully clear to me initially, but after reading it a few times I understood what he was referring to. @Radim, correct me if I'm wrong... > > Right now clients verify that they behave as expected, e.g. the JS client uses its asserts, the Java client uses other asserts. What Radim is trying to say is that there needs to be a way to verify they work adequately independent of their implementations. > > So, the only way to do that is to verify it at the server level. Not sure what exactly he means by the fake server, but more than a fake server, I'd be more inclined to modify the server so that it can somehow act as a TCK verifier. This is to avoid having to reimplement transport logic, protocol decoder, etc. in a new fake server. I think you got the idea. I am not trying to push any particular implementation of the "fake server" - you could just tweak the existing one, but the purest and most deterministic approach would be having a script that could look like:

expect connection A to serverX/any server
expect receive <... bytes> from A
send to A <... bytes>
expect connection A closed

Implementing a server that interprets such a script isn't that complex; you don't have to deal with a protocol decoder (what's transport logic on a server?), because you just expect and send bytes.
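A minimal interpreter for that kind of scenario script can be sketched as follows. Real transport (sockets) is deliberately left out, and the class is illustrative rather than any existing Infinispan test utility; the point is that the fake server needs no protocol decoder, only byte-for-byte comparison against the script:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a scripted "fake server": a scenario is a list of steps, each
// either expecting exact bytes from the client or sending predefined bytes
// back. Bytes are represented as hex strings for readability.
public class ScriptedFakeServer {
    private final Deque<String[]> steps = new ArrayDeque<>(); // {op, hex}

    public ScriptedFakeServer expect(String hex) { steps.add(new String[]{"expect", hex}); return this; }
    public ScriptedFakeServer send(String hex)   { steps.add(new String[]{"send", hex});   return this; }

    // Called when the client writes bytes to the server.
    public void onReceive(String hex) {
        String[] step = steps.poll();
        if (step == null || !step[0].equals("expect") || !step[1].equals(hex))
            throw new AssertionError("unexpected bytes from client: " + hex);
    }

    // Called when the client waits for a response.
    public String nextResponse() {
        String[] step = steps.poll();
        if (step == null || !step[0].equals("send"))
            throw new AssertionError("client read, but script has nothing to send");
        return step[1];
    }

    public boolean finished() { return steps.isEmpty(); }

    public static void main(String[] args) {
        ScriptedFakeServer s = new ScriptedFakeServer().expect("a0 19 01").send("a1 19 00");
        s.onReceive("a0 19 01");
        System.out.println(s.nextResponse()); // a1 19 00
    }
}
```

Such scenarios could, as Radim suggests earlier in the thread, be generated from the audit log of the existing Java tests and then replayed against every client implementation.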
Radim > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 11 Apr 2017, at 15:57, Radim Vansa wrote: >> >> Since these tests use real server(s), many of them test not only the >> client behaviour (generating correct commands according to the >> protocol), but server, too. While this is practical (we need to test >> server somehow, too), there's nothing all the tests across languages >> will have physically in common and all comparison is prone to human error. >> >> If we want to test various implementations of the client, maybe it would >> make sense to give the clients a fake server that will have just a >> scenario of expected commands to receive and pre-defined responses. We >> could use audit log to generate such scenario based on the actual Java >> tests. >> >> But then we'd have to test the actual behaviour on server, and we'd need >> a way to issue the commands. >> >> Just my 2c >> >> Radim >> >> On 04/11/2017 02:33 PM, Martin Gencur wrote: >>> Hello all, >>> we have been working on https://issues.jboss.org/browse/ISPN-7120. >>> >>> Anna has finished the first step from the JIRA - collecting information >>> about tests in the Java HotRod client test suite (including server >>> integration tests) and it is now prepared for wider review. >>> >>> She created a spreadsheet [1]. The spread sheet includes for each Java >>> test its name, the suggested target package in the TCK, whether to >>> include it in the TCK or not, and some other notes. The suggested >>> package also poses grouping for the tests (e.g. tck.query, tck.near, >>> tck.xsite, ...) >>> >>> Let me add that right now the goal is not to create a true TCK [2]. The >>> goal is to make sure that all implementations of the HotRod protocol >>> have sufficient test coverage and possibly the same server side of the >>> client-server test (including the server version and configuration). >>> >>> What are the next step? 
>>> >>> * Please review the list (at least a quick look) and see if some of the >>> tests which are NOT suggested for the TCK should be added or vice versa. >>> * I suppose the next step would then be to check other implementations >>> (C#, C++, NodeJS, ..) and identify tests which are missing there (there >>> will surely be some). >>> * Gradually implement the missing tests in the other implementations >>> Note: Here we should ensure that the server is configured in the same >>> way for all implementations. One way to achieve this (thanks Anna for >>> suggestion!) is to have a shell/batch scripts for CLI which would be >>> executed before the tests. This can probably be done for all impls. and >>> both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes >>> useless because it uses Creaper (Java) and we need a language-neutral >>> solution for configuring the server. >>> >>> Some other notes: >>> * there are some duplicated tests in hotrod-client and server >>> integration test suites, in this case it probably makes sense to only >>> include in the TCK the server integration test >>> * tests from the hotrod-client module which are supposed to be part of >>> the TCK should be copied to the server integration test suite one day >>> (possibly later) >>> >>> Please let us know what you think. 
>>> >>> Thanks, >>> Martin >>> >>> >>> [1] >>> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0 >>> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit >>> [3] https://github.com/infinispan/infinispan/pull/5012 >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From rvansa at redhat.com Tue May 9 14:39:13 2017 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 9 May 2017 14:39:13 -0400 Subject: [infinispan-dev] to be a command, or not to be a command, that is the question In-Reply-To: References: Message-ID: On 05/08/2017 09:58 AM, Galder Zamarre?o wrote: > Hey Katia, > > Sorry for delay replying back! I'm surprised there has not been more feedback. My position on this is well known around the team, so let me summarise it: > > My feeling has always been that we have too many commands and we should reduce number of commands. Part of the functional map experiment was to show with a subset of commands, all sorts of front end operations could be exposed. So, I'm on Radim's side on this. By passing functions/lambdas, we get a lot of flexibility with very little cost. IOW, we can add more operations by just passing in different lambdas to existing commands. 
> > However, it is true that having different front API methods that only differ in the lambda makes it initially hard to potentially do different things for each, but couldn't that be solved with some kind of enum? > > Although enums are useful, they're a bit limited, e.g. don't take params, so since you've done Scala before, maybe this could be solved with some Scala-like sealed trait for each front end operation type? I used something like a sealed trait for implementing a more flexible flag system for functional map API called org.infinispan.commons.api.functional.Param Do I understand correctly that you're suggesting to add a enum to ReadWriteKeyValueCommand that will say "behave like eval (current)/compute*/merge"? How is that different from just wrapping the 'user function' into adapting function (with registered externalizer == marshalling to just 1-2 bytes)? Handling such enum in interceptors is not better that having additional visitX method. And not handling that does not allow you to apply optimizations which Katia has named as reason #1 to have the separate commands. > The problem I have with adding more commands is the explosion that it provokes in terms of code, with all the required visit* method impls all over the place...etc. > > I personally think that the lack of a more flexible command architecture is what has stopped us from adding front-end operations more quickly (e.g. counters, multi-maps...etc). IMO, working with generic commands that take lambdas is a way to strike a balance between adding front-end operations quickly and not resulting in a huge explosion of commands. So your final verdict is -1 to separate commands? R. PS: besides DRY, I vote for the use of functional commands is that it would encourage us to fix the rest of the parts that might not be working properly - e.g. 
QueryInterceptor was not updated with the functional stuff (but QI is broken in more ways [1]) [1] https://issues.jboss.org/browse/ISPN-7806 > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 20 Apr 2017, at 16:06, Katia Aresti wrote: >> >> Hi all >> >> Well, nobody spoke, so I consider that everybody agrees that I can take a decision like a big girl by myself ! :) >> >> I'm going to add 3 new commands, for merge, compute&computeIfPresent and computeIfAbsent. So I won't use the actual existing commands for the implementation : ReadWriteKeyCommand and ReadWriteKeyValueCommand even if I'm a DRY person and I love reusing code, I'm a KISS person too. >> >> I tested the implementation using these functional commands and IMHO : >> - merge and compute methods worth their own commands, they are very useful and we might want to adjust/optimize them individually >> - there are some technical issues related to the TypeConverterDelegatingAdvancedCache that makes me modify these existing functional commands with some hacky code that, for me, should be kept in commands like merge or compute with the correct documentation. They don't belong to a generic command. >> - Functional API is experimental right now. It might be non experimental in the near future, but we might decide to move to another thing. The 3 commands are already "coded" in my branches (not everything reviewed yet but soon). If one day we decide to change/simplify or we find a nice way to get rid of commands with a more generic one, removing and simplifying should be less painful than adding commands for these methods. >> >> That's all ! >> >> Cheers >> >> Katia >> >> >> >> On Wed, Apr 12, 2017 at 12:11 PM, Katia Aresti wrote: >> Hi all, >> >> As you might know I'm working since my arrival, among other things, on ISPN-5728 Jira [1], where the idea is to override the default ConcurrentMap methods that are missing in CacheImpl (merge, replaceAll, compute ... 
)
>>
>> I've created a pull-request [2] for the compute, computeIfAbsent and computeIfPresent methods, creating two new commands. By the way, I did the same thing for the merge method in a branch that I haven't pull-requested yet.
>>
>> There is an opposing view between Radim and Will concerning the implementation of these methods. To make it short:
>> On one side, Will considers that compute/merge are best implemented as new Commands (which is what is already done).
>> On the other side, Radim considers that adding another command is not necessary, as we could simply implement these methods using ReadWriteKeyCommand.
>>
>> The detailed discussion and arguments of both sides are on GitHub [2].
>>
>> Before moving forward and making any choice by myself, I would like to hear your opinions. For the record, it doesn't bother me redoing everything if most people think like Radim, because working on commands has helped me learn and understand more about Infinispan internals, so this hasn't been a waste of time for me.
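Radim's alternative (a single generic lambda-taking primitive rather than a dedicated command per method) can be sketched on a plain ConcurrentHashMap. Illustration only: EvalSketch, eval and the layered compute/merge below are hypothetical stand-ins, not Infinispan internals; a real ReadWriteKeyValueCommand would also have to marshall the lambda and deal with the invocation context.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;
import java.util.function.Function;

/**
 * Sketch of the trade-off discussed in the thread, on a plain map rather
 * than Infinispan internals: one generic "eval" entry point taking a lambda
 * can express compute and merge, at the cost of the interceptors no longer
 * being able to tell the operations apart (the point raised against it).
 */
public final class EvalSketch {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // The one generic primitive: read the current value, write the lambda's result.
    public String eval(String key, Function<String, String> f) {
        return store.compute(key, (k, v) -> f.apply(v));
    }

    // compute(key, remapping) layered on eval by wrapping the user function.
    public String compute(String key, BiFunction<String, String, String> remapping) {
        return eval(key, v -> remapping.apply(key, v));
    }

    // merge(key, value, remapping) layered on eval as well.
    public String merge(String key, String value, BiFunction<String, String, String> remapping) {
        return eval(key, v -> v == null ? value : remapping.apply(v, value));
    }

    public String get(String key) {
        return store.get(key);
    }
}
```

The wrapping shows why the generic route avoids new visit* methods, but also why an interceptor that only sees a ReadWriteKeyValueCommand-style eval cannot special-case merge or compute without extra information.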
>>
>> Katia
>>
>> [1] https://issues.jboss.org/browse/ISPN-5728
>> [2] https://github.com/infinispan/infinispan/pull/5046
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Radim Vansa
JBoss Performance Team

From gustavo at infinispan.org Wed May 10 04:09:28 2017
From: gustavo at infinispan.org (Gustavo Fernandes)
Date: Wed, 10 May 2017 09:09:28 +0100
Subject: [infinispan-dev] HotRod client TCK
In-Reply-To: <3C4C9C5E-1450-4052-9AB9-9681A0B57695@redhat.com>
References: <7bba7850-52fa-1b98-45da-603f1443cc34@redhat.com> <3C4C9C5E-1450-4052-9AB9-9681A0B57695@redhat.com>
Message-ID:

On Mon, May 8, 2017 at 12:10 PM, Galder Zamarreño wrote:
> I think there's some value in Radim's suggestion. The email was not fully clear to me initially, but after reading it a few times I understood what he was referring to. @Radim, correct me if I'm wrong...
>
> Right now clients verify that they behave as expected, e.g. the JS client uses its asserts, the Java client uses other asserts. What Radim is trying to say is that there needs to be a way to verify they work adequately, independent of their implementations.
>
> So, the only way to do that is to verify it at the server level. Not sure what exactly he means by the fake server, but more than a fake server, I'd be more inclined to modify the server so that it can somehow act as a TCK verifier.

We had a thread about Hot Rod testing last year, and another possible strategy is to use real unmodified servers and have the TCK written once in a neutral language, compiled/distributed to each of the clients, where the tests would run as part of the build.
More details on [1]

[1] http://infinispan-developer-list.980875.n3.nabble.com/infinispan-dev-Hot-Rod-testing-tt4031152.html#a4031213

Gustavo

> This is to avoid having to reimplement the transport logic, protocol decoder...etc in a new fake server.
>
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
> > On 11 Apr 2017, at 15:57, Radim Vansa wrote:
> >
> > Since these tests use real server(s), many of them test not only the client behaviour (generating correct commands according to the protocol), but the server, too. While this is practical (we need to test the server somehow, too), there's nothing all the tests across languages will have physically in common, and all comparison is prone to human error.
> >
> > If we want to test various implementations of the client, maybe it would make sense to give the clients a fake server that will have just a scenario of expected commands to receive and pre-defined responses. We could use the audit log to generate such a scenario based on the actual Java tests.
> >
> > But then we'd have to test the actual behaviour on the server, and we'd need a way to issue the commands.
> >
> > Just my 2c
> >
> > Radim
> >
> > On 04/11/2017 02:33 PM, Martin Gencur wrote:
> >> Hello all,
> >> we have been working on https://issues.jboss.org/browse/ISPN-7120.
> >>
> >> Anna has finished the first step from the JIRA - collecting information about tests in the Java HotRod client test suite (including server integration tests) - and it is now prepared for wider review.
> >>
> >> She created a spreadsheet [1]. The spreadsheet includes, for each Java test, its name, the suggested target package in the TCK, whether to include it in the TCK or not, and some other notes. The suggested package also provides a grouping for the tests (e.g. tck.query, tck.near, tck.xsite, ...)
> >>
> >> Let me add that right now the goal is not to create a true TCK [2].
> >> The goal is to make sure that all implementations of the HotRod protocol have sufficient test coverage, and possibly the same server side of the client-server test (including the server version and configuration).
> >>
> >> What are the next steps?
> >>
> >> * Please review the list (at least a quick look) and see if some of the tests which are NOT suggested for the TCK should be added, or vice versa.
> >> * I suppose the next step would then be to check the other implementations (C#, C++, NodeJS, ..) and identify tests which are missing there (there will surely be some).
> >> * Gradually implement the missing tests in the other implementations.
> >> Note: Here we should ensure that the server is configured in the same way for all implementations. One way to achieve this (thanks Anna for the suggestion!) is to have shell/batch scripts for the CLI which would be executed before the tests. This can probably be done for all impls. and both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes useless because it uses Creaper (Java) and we need a language-neutral solution for configuring the server.
> >>
> >> Some other notes:
> >> * there are some duplicated tests in the hotrod-client and server integration test suites; in this case it probably makes sense to only include the server integration test in the TCK
> >> * tests from the hotrod-client module which are supposed to be part of the TCK should be copied to the server integration test suite one day (possibly later)
> >>
> >> Please let us know what you think.
> >>
> >> Thanks,
> >> Martin
> >>
> >> [1] https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0
> >> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit
> >> [3] https://github.com/infinispan/infinispan/pull/5012
> >
> > --
> > Radim Vansa
> > JBoss Performance Team

From mgencur at redhat.com Wed May 10 06:37:26 2017
From: mgencur at redhat.com (Martin Gencur)
Date: Wed, 10 May 2017 12:37:26 +0200
Subject: [infinispan-dev] HotRod client TCK
In-Reply-To: References: Message-ID: <770be0f7-c8e6-e782-df00-a6006da66693@redhat.com>

Hi,
thanks for looking at the list of tests and thanks for the suggestions. We'll incorporate them into the final list of tests.

What Radim suggests has some advantages and some drawbacks, but I see this as an addition to the client TCK. This approach can verify that the client sends some predefined commands with predefined values, but does that really verify that the user will get the expected results? I'm not so sure. There can be some client-side logic that does other modifications. Here I see room for a lot of missed bugs. I'd say we need real client-side tests which verify the client behavior from the user perspective.
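Radim's fake-server idea as Martin describes it (a scenario of expected commands with pre-defined responses, possibly generated from the audit log of a real Java test run) can be reduced to a small sketch. All names here are hypothetical; a real implementation would sit behind the Hot Rod transport and match decoded protocol operations, not strings.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of a scripted fake server: an ordered list of expected client
 * commands, each paired with a canned response. The client under test
 * passes only if it sends exactly the scripted commands, in order.
 */
public final class FakeServerScenario {
    private static final class Step {
        final String expectedCommand;
        final String cannedResponse;
        Step(String command, String response) {
            this.expectedCommand = command;
            this.cannedResponse = response;
        }
    }

    private final Queue<Step> script = new ArrayDeque<>();

    // Build the scenario, e.g. from an audit log of a Java client test run.
    public FakeServerScenario expect(String command, String response) {
        script.add(new Step(command, response));
        return this;
    }

    // Called by the transport layer for each decoded client command.
    public String receive(String command) {
        Step step = script.poll();
        if (step == null || !step.expectedCommand.equals(command)) {
            throw new AssertionError("unexpected command: " + command
                  + ", expected: " + (step == null ? "<end of script>" : step.expectedCommand));
        }
        return step.cannedResponse;
    }

    // True once the client has sent every command the scenario demanded.
    public boolean finished() {
        return script.isEmpty();
    }
}
```

This captures what such a verifier can and cannot do, matching Martin's objection: it checks the wire-level conversation, but says nothing about client-side logic applied before or after it.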
Let me also add that I see the Java client test suite as an etalon (reference standard). The server-side behavior has been tested through the Java test suite, and other clients don't need to test that again, IMO. The goal is to test the client side. Having a pre-defined configuration for the server that would be used in all client implementation tests should provide some common ground for the tests.

As to the real TCK suggested by Gustavo, I remember the discussion; we discussed that at the clustering meeting last year. Since the Java HotRod client test suite has about 1500 tests (maybe more now?), we would need to rewrite all the tests in the new language. And I'm not sure running those tests with various implementations would be without problems. I'd love to see this working, but I'm afraid that we don't have the time and resources to do this any time soon.

Martin

On 8.5.2017 13:32, Galder Zamarreño wrote:
> I think in general it'd be a good idea to try to verify somehow most of the TCK via some server-side logic, as Radim hinted, and where that's not possible, revert to just verifying that the client has tests to cover certain scenarios.

From gustavo at infinispan.org Wed May 10 09:33:38 2017
From: gustavo at infinispan.org (Gustavo Fernandes)
Date: Wed, 10 May 2017 14:33:38 +0100
Subject: [infinispan-dev] Need to understand, ClusteredCacheWithElasticsearchIndexManagerIT
In-Reply-To: References: Message-ID:

The test fails every time NodeB (cache2) happens to *not be* the primary owner of 'newGoat', and needs to forward the command to NodeA.
Sequence of events:

1) [NodeB] Compute is called, key is 'newGoat'
2) [NodeB] Command gets visited by the QueryInterceptor, which suspends the execution
3) [NodeB] Owner for 'newGoat' is NodeA, so NodeB forwards the command to NodeA

4) [NodeA] Command gets visited by the QueryInterceptor, which suspends the execution
5) [NodeA] perform() is called in the Compute command
6) [NodeA] Command is then sent to NodeB, which is a backup owner

7) [NodeB] Command gets visited by the QueryInterceptor, which suspends the execution
8) [NodeB] perform() is called in the compute command
9) [NodeB] QueryInterceptor resumes execution. Since the command originated remotely, no indexing is done (due to Index.LOCAL)

10) [NodeA] Receives the response from the call done in 6)
11) [NodeA] Resumes execution of the QueryInterceptor from 4)
12) [NodeA] Since the command originated remotely, no indexing is done (due to Index.LOCAL)

13) [NodeB] Receives the response from 3). At this point *the computed value is available* as the return type of the remote invocation
14) [NodeB] Resumes the QueryInterceptor invocation from 2)
15) [NodeB] processComputes is then executed, but since the computedValue is not available in the command itself nor in the context, indexing is skipped since there is no value to index or remove

Looking at the method visitComputeCommand, the variable "rv" stores the return value from the command, but it is not being used; instead, stateBeforeCompute is used, which is always null in this scenario, because it is evaluated at 2), which is before the key exists in the data container:

return invokeNextThenAccept(ctx, command, (rCtx, rCommand, rv) -> processComputeCommand(((ComputeCommand) rCommand), rCtx, stateBeforeCompute, null));

Gustavo

On Tue, May 9, 2017 at 2:21 PM, Katia Aresti wrote:
> Hi all,
>
> I'm really struggling with something in order to finish the compute
> > I added a test in *ClusteredCacheWithElasticsearchIndexManagerIT* > > public void testToto() throws Exception { > SearchManager searchManager = Search.getSearchManager(cache2); > QueryBuilder queryBuilder = searchManager > .buildQueryBuilderForClass(Person.class) > .get(); > Query allQuery = queryBuilder.all().createQuery(); > > String key = "newGoat"; > Person person4 = new Person(key, "eats something", 42); > > cache2.putIfAbsent(key, person4); > StaticTestingErrorHandler.assertAllGood(cache1, cache2); > > List found = searchManager.getQuery(allQuery, Person.class).list(); > assertEquals(1, found.size()); > assertTrue(found.contains(person4)); > } > > I put some logs in the processPutKeyValueCommand method in the > *QueryInterceptor* to explain what is happening. > > *2 threads* > Sometimes two threads get involved. > > = Thread 72 First (or second) call > It happens from a non local Node. The so the shouldModifyIndexes says > "no, you should not modify any index" because the > IndexModificationStrategy is set to "LOCAL ONLY". [1] > > 72 ctx.getOrigin() = ClusteredCacheWithElasticsearc > hIndexManagerIT-NodeB-19565 > 72 should modify false > 72 previousValue null > 72 putValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} // value in the command > 72 contextValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} //value in the invocation context > > = Thread 48 Second (or first) call > the origin is null, and this is considered as a LOCAL in the > SingleKeyNonTxInvocationContext. [2] In this case, the index is modified > correctly, the value in the context has already been set up by the > PutKeyValueCommand and the index get's correctly updated. 
> > 48 ctx.getOrigin() = null > 48 should modify true > 48 previousValue null > 48 putValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > 48 contextValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > > And everything is ok. Everything is fine too in the case of a compute > method instead of the put method. > > But sometimes, this is not executed like that. > > *3 threads* > > What is a bit more weird to me is this second scenario where the commands > are executed both from non local nodes (A and B). And so the index is not > updated. > But just later, another thread get's involved and calls the > QueryInterceptor with a invocation context where the command has not been > executed (the value is not inside the context and the debugger does not > enter in the perform method, this has happened just twice before). This > call is coming like from a callback? in the QueueAsyncInvocationStage. > > 80 ctx.getOrigin() = ClusteredCacheWithElasticsearc > hIndexManagerIT-NodeA-65110 > 80 should modify false > 80 prev null > 80 putValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > 80 contextValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > > 38 ctx.getOrigin() = ClusteredCacheWithElasticsearc > hIndexManagerIT-NodeB-35919 > 38 should modify false > 38 prev null > 38 putValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > 38 contextValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > > 48 ctx.getOrigin() = null > 48 should modify true > 48 prev null > 48 putValue Person{name='newGoat', blurb='eats something', age=42, > dateOfGraduation=null} > 48 contextValue null > > > This execution works perfectly with PutKeyValueCommand. But don't work wth > compute. > > The "computed value" is not inside the Command like put, replace or > others. 
> It is computed in the perform method (if needed). So, the first time the command is executed in A, the computed value is in the context, but the index is not updated. Second call, executed in B, value in the context, but the index is not updated. The magic callback is executed, but the computed value is nowhere, because the command is not executed a third time, so the context is null.
>
> Can somebody please shed some light on this and explain to me what I am missing? Other tests are failing for the same problem, like org.infinispan.query.blackbox.ClusteredCacheWithInfinispanDirectoryTest
>
> Thank you very much for your help !
>
> Katia
>
> [1] https://github.com/infinispan/infinispan/blob/master/query/src/main/java/org/infinispan/query/backend/IndexModificationStrategy.java#L50
> [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/context/SingleKeyNonTxInvocationContext.java#L39

From karesti at redhat.com Thu May 11 08:54:26 2017
From: karesti at redhat.com (Katia Aresti)
Date: Thu, 11 May 2017 14:54:26 +0200
Subject: [infinispan-dev] Need to understand, ClusteredCacheWithElasticsearchIndexManagerIT
In-Reply-To: References: Message-ID:

Hi Gustavo, thanks for the help !

Indeed, I stopped using the rv variable because I realised it was not always the compute command result, and somehow I decided I could not take for granted in the code that this is the compute command result value. But I think this is the case only when the PrepareCommand is called in tx mode for this particular case.
What confused me a lot in this matter are commands like RemoveCommand, for example:

private void processRemoveCommand(final RemoveCommand command, final InvocationContext ctx, final Object valueRemoved, TransactionContext transactionContext)

When this method is called from the visitPrepareCommand method, we indeed pass the previous value, found just before the call in the cache.

But afterwards, when the same method is called from visitRemoveCommand, we pass the rv parameter, which is the RemoveCommand perform method's return value.

So, having a look at the RemoveCommand, I can see we sometimes indeed return the previous value. But depending on the overload it might return a boolean instead.

cache2.remove("newGoat");

*LOGS*

Thread => 79
ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-15125
valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
Thread => 38
ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-23459
valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
Thread => 47
ctx.getOrigin() => null
valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
*removeFromIndexes method is called !!!*

But if we call remove with a specific value, the method removeFromIndexes is never called.

cache2.remove("newGoat", person4);

*LOGS*

Thread => 79
ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-22063
valueRemoved => true
Thread => 38
ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-47249
valueRemoved => true
Thread => 47
ctx.getOrigin() => null
valueRemoved => true

But in both cases the remove seems to be working, because these assertions pass:

found = searchManager.getQuery(allQuery, Person.class).list();
assertEquals(0, found.size());

I'm a bit confused about all this, but we can chat on IRC or bluejeans.
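The differing valueRemoved logs above follow directly from the ConcurrentMap contract, independent of Infinispan: remove(key) returns the previous value, while the conditional remove(key, value) returns a boolean, so an interceptor that treats rv as the removed value sees true for the conditional variant. A plain-Java illustration (no Infinispan involved; RemoveReturnValues is a made-up name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Shows the two remove overloads' return types on a plain ConcurrentHashMap:
 * the unconditional remove hands back the previous value, the conditional
 * remove hands back only a success flag.
 */
public final class RemoveReturnValues {
    public static Object[] demo() {
        Map<String, String> cache = new ConcurrentHashMap<>();

        cache.put("newGoat", "person4");
        Object unconditional = cache.remove("newGoat");          // previous value: "person4"

        cache.put("newGoat", "person4");
        Object conditional = cache.remove("newGoat", "person4"); // Boolean.TRUE, not the value

        return new Object[] { unconditional, conditional };
    }
}
```

So a visit* implementation that needs the removed entry for index cleanup cannot rely on rv alone for the conditional overload; it has to look the value up elsewhere (command or context).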
There might be some bugs concerning this interceptor; Radim has already opened an issue on this matter.

Katia

On Wed, May 10, 2017 at 3:33 PM, Gustavo Fernandes wrote:
> The test fails every time NodeB (cache2) happens to *not be* the primary owner of 'newGoat', and needs to forward the command to NodeA. Sequence of events:
>
> 1) [NodeB] Compute is called, key is 'newGoat'
> 2) [NodeB] Command gets visited by the QueryInterceptor, that suspends the execution
> 3) [NodeB] Owner for 'newGoat' is nodeA, so nodeB forwards the command to nodeA
>
> 4) [NodeA] Command gets visited by the QueryInterceptor, that suspends the execution
> 5) [NodeA] perform() is called in the Compute command
> 6) [NodeA] Command is then sent to NodeB, which is a backup owner
>
> 7) [NodeB] Command gets visited by the QueryInterceptor, that suspends the execution
> 8) [NodeB] perform() is called in the compute command
> 9) [NodeB] QueryInterceptor resumes execution. Since command was originated remotely, no indexing is done (due to Index.LOCAL)
>
> 9) [NodeA] Receive response from the call done on 6)
> 10)[NodeA] resumes execution from the QueryInterceptor from 4)
> 11)[NodeA] Since command was originated remotely, no indexing is done (due to Index.LOCAL)
>
> 12)[NodeB] receives response from 3).
[...]

From gustavo at infinispan.org Thu May 11 09:49:50 2017
From: gustavo at infinispan.org (Gustavo Fernandes)
Date: Thu, 11 May 2017 14:49:50 +0100
Subject: [infinispan-dev] Need to understand, ClusteredCacheWithElasticsearchIndexManagerIT
In-Reply-To: References: Message-ID:

On Thu, May 11, 2017 at 1:54 PM, Katia Aresti wrote:
> Hi Gustavo, thanks for the help !
>
> Indeed, I stopped using the rv variable because I realised it was not always the compute command result, and somehow I decided I could not take for granted in the code that this is the compute command result value. But I think this the case only when the PrepareCommand is called on tx mode for this particular case.
>
> What confused me a lot in this matter are commands like RemoveCommand, for example.
>
> private void processRemoveCommand(final RemoveCommand command, final InvocationContext ctx, final Object valueRemoved, TransactionContext transactionContext)
>
> When this method is called from the visitPrepareCommand method, we indeed pass the previous value found just before the call in the cache.
> But afterwards, when the same method is called from visitRemoveCommand, we pass the rv parameter, which is the RemoveCommand perform method's return value.
>
> So, having a look at the RemoveCommand, I can see we sometimes indeed return the prev value. But depending on the overload it might return a boolean instead.
>
> cache2.remove("newGoat");
>
> *LOGS*
>
> Thread => 79
> ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-15125
> valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
> Thread => 38
> ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-23459
> valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
> Thread => 47
> ctx.getOrigin() => null
> valueRemoved => Person{name='newGoat', blurb='eats something', age=42, dateOfGraduation=null}
> *removeFromIndexes method is called !!!*
>
> But if we call remove with a specific value, the method removeFromIndexes is never called.
>
> cache2.remove("newGoat", person4);
>
> *LOGS*
>
> Thread => 79
> ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-22063
> valueRemoved => true
> Thread => 38
> ctx.getOrigin() => ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-47249
> valueRemoved => true
> Thread => 47
> ctx.getOrigin() => null
> valueRemoved => true
>
> But in both cases the remove seems to be working, because these assertions pass:
>
> found = searchManager.getQuery(allQuery, Person.class).list();
> assertEquals(0, found.size());
>
> I'm a bit confused about all this, but we can chat on IRC or bluejeans.

It seems to be working, since query results are correct, but the value is not actually removed from the indexes.
https://issues.jboss.org/browse/ISPN-7825 Gustavo > Katia > > > On Wed, May 10, 2017 at 3:33 PM, Gustavo Fernandes > wrote: > >> The test fails every time NodeB (cache2) happens to *not be* the primary >> owner of 'newGoat', and needs to forward the command to NodeA. Sequence of >> events: >> >> 1) [NodeB] Compute is called, key is 'newGoat' >> 2) [NodeB] Command gets visited by the QueryInterceptor, that suspends >> the execution >> 3) [NodeB] Owner for 'newGoat' is nodeA, so nodeB forwards the command to >> nodeA >> >> 4) [NodeA] Command gets visited by the QueryInterceptor, that suspends >> the execution >> 5) [NodeA] perform() is called in the Compute command >> 6) [NodeA] Command is then sent to NodeB, which is a backup owner >> >> 7) [NodeB] Command gets visited by the QueryInterceptor, that suspends >> the execution >> 8) [NodeB] perform() is called in the compute command >> 9) [NodeB] QueryInterceptor resumes execution. Since command was >> originated remotely, no indexing is done (due to Index.LOCAL) >> >> 9) [NodeA] Receive response from the call done on 6) >> 10)[NodeA] resumes execution from the QueryInterceptor from 4) >> 11)[NodeA] Since command was originated remotely, no indexing is done >> (due to Index.LOCAL) >> >> 12)[NodeB] receives response from 3). 
At this point *the computed value >> is available* as the return type of the remote invocation >> 13)[NodeB] resumes the QueryInterceptor invocation from 2) >> 14)[NodeB] processComputes is then executed, but since the computedValue >> is not available in the command itself nor in the context, indexing is >> skipped since there is no value to index or remove >> >> >> Looking at the method visitComputCommand, the variable "rv" stores the >> return value from the command, but it's not being used, instead the >> stateBeforeCompute is used which is always null in this scenario, >> because it is evaluated on 2) which is before the the key exists in the >> data container: >> >> return invokeNextThenAccept(ctx, command, (rCtx, rCommand, rv) -> processComputeCommand(((ComputeCommand) rCommand), rCtx, stateBeforeCompute, null)); >> >> >> Gustavo >> >> >> On Tue, May 9, 2017 at 2:21 PM, Katia Aresti wrote: >> >>> Hi all, >>> >>> I'm really struggling with something in order to finish the compute >>> methods. >>> >>> I added a test in *ClusteredCacheWithElasticsearchIndexManagerIT* >>> >>> public void testToto() throws Exception { >>> SearchManager searchManager = Search.getSearchManager(cache2); >>> QueryBuilder queryBuilder = searchManager >>> .buildQueryBuilderForClass(Person.class) >>> .get(); >>> Query allQuery = queryBuilder.all().createQuery(); >>> >>> String key = "newGoat"; >>> Person person4 = new Person(key, "eats something", 42); >>> >>> cache2.putIfAbsent(key, person4); >>> StaticTestingErrorHandler.assertAllGood(cache1, cache2); >>> >>> List found = searchManager.getQuery(allQuery, Person.class).list(); >>> assertEquals(1, found.size()); >>> assertTrue(found.contains(person4)); >>> } >>> >>> I put some logs in the processPutKeyValueCommand method in the >>> *QueryInterceptor* to explain what is happening. >>> >>> *2 threads* >>> Sometimes two threads get involved. >>> >>> = Thread 72 First (or second) call >>> It happens from a non local Node. 
So the shouldModifyIndexes says >>> "no, you should not modify any index" because the >>> IndexModificationStrategy is set to "LOCAL ONLY". [1] >>> >>> 72 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-19565 >>> 72 should modify false >>> 72 previousValue null >>> 72 putValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} // value in the command >>> 72 contextValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} // value in the invocation context >>> >>> = Thread 48 Second (or first) call >>> The origin is null, and this is considered LOCAL in the >>> SingleKeyNonTxInvocationContext. [2] In this case, the index is >>> modified correctly, the value in the context has already been set up by the >>> PutKeyValueCommand and the index gets correctly updated. >>> >>> 48 ctx.getOrigin() = null >>> 48 should modify true >>> 48 previousValue null >>> 48 putValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> 48 contextValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> >>> And everything is ok. Everything is fine too in the case of a compute >>> method instead of the put method. >>> >>> But sometimes, this is not executed like that. >>> >>> *3 threads* >>> >>> What is a bit more weird to me is this second scenario where the >>> commands are executed both from non-local nodes (A and B). And so the index >>> is not updated. >>> But just later, another thread gets involved and calls the >>> QueryInterceptor with an invocation context where the command has not been >>> executed (the value is not inside the context and the debugger does not >>> enter the perform method; this has happened just twice before). This >>> call seems to come from a callback in the QueueAsyncInvocationStage.
>>> >>> 80 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeA-65110 >>> 80 should modify false >>> 80 prev null >>> 80 putValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> 80 contextValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> >>> 38 ctx.getOrigin() = ClusteredCacheWithElasticsearchIndexManagerIT-NodeB-35919 >>> 38 should modify false >>> 38 prev null >>> 38 putValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> 38 contextValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> >>> 48 ctx.getOrigin() = null >>> 48 should modify true >>> 48 prev null >>> 48 putValue Person{name='newGoat', blurb='eats something', age=42, >>> dateOfGraduation=null} >>> 48 contextValue null >>> >>> >>> This execution works perfectly with PutKeyValueCommand. But it doesn't work >>> with compute. >>> >>> The "computed value" is not inside the Command like put, replace or >>> others. It is computed in the perform method (if needed). So, the first >>> time the command is executed in A, the computed value is in the context, >>> but the index is not updated. Second call, executed in B, value in context, >>> but the index is not updated. The magic callback is executed, but the >>> computed value is nowhere to be found because the command is not executed a third time, >>> so the context is null. >>> >>> Can somebody please shed some light on this and explain what >>> I am missing? Other tests are failing for the same problem, >>> like org.infinispan.query.blackbox.ClusteredCacheWithInfinispanDirectoryTest >>> >>> Thank you very much for your help !
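Katia's core observation — a put-style command carries its value, while a compute-style command only produces it inside perform() — can be modeled with plain JDK code. This is an illustrative sketch, not the actual Infinispan command classes; the names Put, Compute and valueForIndexing are hypothetical:

```java
import java.util.function.UnaryOperator;

public class CommandValueDemo {
    interface Command { String valueForIndexing(); }

    // Put-style command: the value travels inside the command itself.
    static final class Put implements Command {
        private final String value;
        Put(String value) { this.value = value; }
        public String valueForIndexing() { return value; }
    }

    // Compute-style command: the value only exists once perform() has run.
    static final class Compute implements Command {
        private final UnaryOperator<String> remappingFunction;
        private String computed; // stays null until perform()
        Compute(UnaryOperator<String> fn) { this.remappingFunction = fn; }
        void perform(String currentValue) { computed = remappingFunction.apply(currentValue); }
        public String valueForIndexing() { return computed; }
    }

    public static void main(String[] args) {
        System.out.println(new Put("v1").valueForIndexing()); // v1 -- always present

        Compute compute = new Compute(current -> "computedValue");
        // An interceptor inspecting the command before (or without) perform() sees nothing:
        System.out.println(compute.valueForIndexing()); // null
        compute.perform(null);
        System.out.println(compute.valueForIndexing()); // computedValue
    }
}
```

So an interceptor that only ever sees the command before perform() runs — or on a node where perform() never runs — has no value to index, which matches the logs above.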
>>> >>> Katia >>> >>> [1] https://github.com/infinispan/infinispan/blob/master/query/src/main/java/org/infinispan/query/backend/IndexModificationStrategy.java#L50 >>> [2] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/context/SingleKeyNonTxInvocationContext.java#L39 >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170511/b878bf97/attachment-0001.html From galder at redhat.com Fri May 12 04:08:22 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 12 May 2017 10:08:22 +0200 Subject: [infinispan-dev] Running an Infinispan cluster on Kubernetes / Google Container Engine In-Reply-To: <57caf25e-ce03-4829-9804-b74fe8a0c627@mailbox.org> References: <57caf25e-ce03-4829-9804-b74fe8a0c627@mailbox.org> Message-ID: <533DE20B-40E8-4D38-9F2A-A2C9CAE04B5C@redhat.com> Awesome!!!
Can't wait to try it out :) -- Galder Zamarreño Infinispan, Red Hat > On 8 May 2017, at 17:14, Bela Ban wrote: > > FYI: http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html > -- > Bela Ban | http://www.jgroups.org > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Fri May 12 06:41:48 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 12 May 2017 10:41:48 +0000 Subject: [infinispan-dev] Running an Infinispan cluster on Kubernetes / Google Container Engine In-Reply-To: <533DE20B-40E8-4D38-9F2A-A2C9CAE04B5C@redhat.com> References: <57caf25e-ce03-4829-9804-b74fe8a0c627@mailbox.org> <533DE20B-40E8-4D38-9F2A-A2C9CAE04B5C@redhat.com> Message-ID: You should have no problems running it. Just remember to use proper OPENSHIFT_KUBE_PING_NAMESPACE and OPENSHIFT_KUBE_PING_LABELS to enable discovery :) Perhaps you could also try out our Infinispan Embedded tutorial: https://github.com/infinispan/infinispan-simple-tutorials/tree/master/kubernetes On Fri, May 12, 2017 at 10:47 AM Galder Zamarreño wrote: > Awesome!!! Can't wait to try it out :) > > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 8 May 2017, at 17:14, Bela Ban wrote: > > > > FYI: > http://belaban.blogspot.ch/2017/05/running-infinispan-cluster-with.html > > -- > > Bela Ban | http://www.jgroups.org > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170512/14a30ef3/attachment.html From galder at redhat.com Mon May 15 04:59:33 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 15 May 2017 10:59:33 +0200 Subject: [infinispan-dev] to be a command, or not to be a command, that is the question In-Reply-To: References: Message-ID: <66CF9675-C7CC-4421-9694-463C0DDE6073@redhat.com> -- Galder Zamarreño Infinispan, Red Hat > On 9 May 2017, at 20:39, Radim Vansa wrote: > > On 05/08/2017 09:58 AM, Galder Zamarreño wrote: >> Hey Katia, >> >> Sorry for delay replying back! I'm surprised there has not been more feedback. My position on this is well known around the team, so let me summarise it: >> >> My feeling has always been that we have too many commands and we should reduce number of commands. Part of the functional map experiment was to show with a subset of commands, all sorts of front end operations could be exposed. So, I'm on Radim's side on this. By passing functions/lambdas, we get a lot of flexibility with very little cost. IOW, we can add more operations by just passing in different lambdas to existing commands. >> >> However, it is true that having different front API methods that only differ in the lambda makes it initially hard to potentially do different things for each, but couldn't that be solved with some kind of enum? >> >> Although enums are useful, they're a bit limited, e.g. don't take params, so since you've done Scala before, maybe this could be solved with some Scala-like sealed trait for each front end operation type? I used something like a sealed trait for implementing a more flexible flag system for functional map API called org.infinispan.commons.api.functional.Param > > Do I understand correctly that you're suggesting to add an enum to > ReadWriteKeyValueCommand that will say "behave like eval
(current)/compute*/merge"? How is that different from just wrapping the > 'user function' into an adapting function (with registered externalizer == > marshalling to just 1-2 bytes)? > > Handling such an enum in interceptors is no better than having an additional > visitX method. And not handling that does not allow you to apply > optimizations which Katia has named as reason #1 to have the separate > commands. TBH, ideally I wouldn't like to have any enums at all since that defeats the purpose of having commands that have transparent lambdas. The commands themselves, whether Read-Only, Read-Write, Write-Only, should be enough distinction to do what you need to do... However, in real life, I'm not 100% sure if that'd be enough to do what we do... Maybe better than enums, there could be special lambda-bearing commands. > >> The problem I have with adding more commands is the explosion that it provokes in terms of code, with all the required visit* method impls all over the place...etc. >> >> I personally think that the lack of a more flexible command architecture is what has stopped us from adding front-end operations more quickly (e.g. counters, multi-maps...etc). IMO, working with generic commands that take lambdas is a way to strike a balance between adding front-end operations quickly and not resulting in a huge explosion of commands. > > So your final verdict is -1 to separate commands? Yeah. However, I'd say that this is all semi-internal implementation detail and we can change relatively easily. So even if work has already been done using separate commands, we should be able to change that down the line. I call it semi-internal because, since our interceptor stack is configurable by the user, an advanced user might some day add an interceptor that visits a certain command... Cheers, > > R. > > PS: besides DRY, another reason I vote for the use of functional commands is that it > would encourage us to fix the rest of the parts that might not be > working properly - e.g.
QueryInterceptor was not updated with the > functional stuff (but QI is broken in more ways [1]) > > [1] https://issues.jboss.org/browse/ISPN-7806 > >> >> Cheers, >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >>> On 20 Apr 2017, at 16:06, Katia Aresti wrote: >>> >>> Hi all >>> >>> Well, nobody spoke, so I consider that everybody agrees that I can take a decision like a big girl by myself ! :) >>> >>> I'm going to add 3 new commands, for merge, compute&computeIfPresent and computeIfAbsent. So I won't use the actual existing commands for the implementation : ReadWriteKeyCommand and ReadWriteKeyValueCommand even if I'm a DRY person and I love reusing code, I'm a KISS person too. >>> >>> I tested the implementation using these functional commands and IMHO : >>> - the merge and compute methods are worth their own commands, they are very useful and we might want to adjust/optimize them individually >>> - there are some technical issues related to the TypeConverterDelegatingAdvancedCache that make me modify these existing functional commands with some hacky code that, for me, should be kept in commands like merge or compute with the correct documentation. They don't belong in a generic command. >>> - Functional API is experimental right now. It might be non experimental in the near future, but we might decide to move to something else. The 3 commands are already "coded" in my branches (not everything reviewed yet but soon). If one day we decide to change/simplify or we find a nice way to get rid of commands with a more generic one, removing and simplifying should be less painful than adding commands for these methods. >>> >>> That's all ! >>> >>> Cheers >>> >>> Katia >>> >>> >>> >>> On Wed, Apr 12, 2017 at 12:11 PM, Katia Aresti wrote: >>> Hi all, >>> >>> As you might know, I've been working since my arrival, among other things, on the ISPN-5728 Jira [1], where the idea is to override the default ConcurrentMap methods that are missing in CacheImpl (merge, replaceAll, compute ...
) >>> >>> I've created a pull-request [2] for the compute, computeIfAbsent and computeIfPresent methods, creating two new commands. By the way, I did the same thing for the merge method in a branch that I haven't pull requested yet. >>> >>> There is an opposite view between Radim and Will concerning the implementation of these methods. To make it short : >>> On one side, Will considers that compute/merge are best implemented as new Commands (which is what is already done) >>> On the other side, Radim considers that adding another command is not necessary, as we could simply implement these methods using ReadWriteKeyCommand >>> >>> The detailed discussion and arguments of both sides are on GitHub [2] >>> >>> Before moving forward and making any choice by myself, I would like to hear your opinions. For the record, it doesn't bother me redoing everything if most people think like Radim because working on commands has helped me to learn and understand more about Infinispan internals, so this hasn't been a waste of time for me.
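Since the goal of ISPN-5728 is to mirror the java.util.concurrent.ConcurrentMap contract, the expected semantics of these methods can be checked against the JDK's own ConcurrentHashMap — a plain-JDK sketch for reference, not Infinispan code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapContractDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

        // computeIfAbsent: runs the function only when the key has no mapping
        map.computeIfAbsent("hits", k -> 0);             // hits -> 0

        // compute: remaps atomically; returning null would remove the entry
        map.compute("hits", (k, v) -> v + 1);            // hits -> 1

        // computeIfPresent: runs only when a mapping already exists
        map.computeIfPresent("misses", (k, v) -> v + 1); // no-op, key absent

        // merge: inserts the value if absent, otherwise combines old and new
        map.merge("hits", 10, Integer::sum);             // hits -> 11

        System.out.println(map.get("hits"));             // 11
        System.out.println(map.containsKey("misses"));   // false
    }
}
```

Whichever command shape wins the debate, it needs to preserve exactly these per-key atomic semantics, only cluster-wide.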
>>> >>> Katia >>> >>> [1] https://issues.jboss.org/browse/ISPN-5728 >>> [2] https://github.com/infinispan/infinispan/pull/5046 >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavo at infinispan.org Mon May 15 05:09:52 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 15 May 2017 10:09:52 +0100 Subject: [infinispan-dev] Deprecation of Index.LOCAL Message-ID: Hi, the Index.LOCAL setting was introduced eons ago to allow indexing to occur once cluster-wide; thus it's recommended when using an IndexManager such as InfinispanIndexManager and ElasticsearchIndexManager that is shared among all nodes. Furthermore, Index.LOCAL suits ClusteredQueries [1] where each node has its own "private" index and the query is broadcast to each individual node, and aggregated in the caller before returning the results. The issue with Index.LOCAL is when a command is originated in a NON_OWNER (this happens in DIST caches), where there is no context available, which prevents obtaining the previous values needed by certain commands. This makes fixing [2] complex as it requires fiddling with more than a couple of interceptors, and it'd require remote fetching of values. This extra fetch could be avoided if indexing always occurs in the owners.
tl;dr The proposal is to deprecate Index.LOCAL, and map it internally to Index.PRIMARY_OWNER. Everything should work as before, except if someone is relying on finding a certain entry indexed in a specific local index where the put was issued: the ClusteredQuery test suite does that, but I don't think this is a realistic use case. Any objections? Thanks, Gustavo [1] http://infinispan.org/docs/stable/user_guide/user_guide.html#query.clustered-query-api [2] https://issues.jboss.org/browse/ISPN-7806 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170515/495356a1/attachment.html From sanne at infinispan.org Mon May 15 06:12:08 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 15 May 2017 11:12:08 +0100 Subject: [infinispan-dev] Deprecation of Index.LOCAL In-Reply-To: References: Message-ID: +1 to deprecate it We might need some replacement for internal optimisations and building blocks; like you suggested, it's at least useful for tests, but we shouldn't expose all this complexity. BTW these options were introduced before myself and Adrian, so I guess nobody is around anymore to explain why they were originally introduced. Thanks, Sanne On 15 May 2017 at 10:09, Gustavo Fernandes wrote: > Hi, the Index.LOCAL setting was introduced eons ago to allow indexing to > occur once cluster-wide; > thus it's recommended when using an IndexManager such as > InfinispanIndexManager and ElasticsearchIndexManager that is shared among > all nodes. > > Furthermore, Index.LOCAL suits ClusteredQueries [1] where each node has its > own "private" index and query is broadcasted to each individual node, and > aggregated in the caller before returning the results. > > The issue with Index.LOCAL is when a command is originated in a NON_OWNER > (this happens in DIST caches), where there is no context available that > prevents obtention of previous values needed certain commands.
This makes > fixing [2] complex as it requires fiddling with more than a couple of > interceptors, and it'd require remote fetching of values. This extra fetch > could be avoided if indexing always occurs in the owners. > > > tl;dr > > The proposal is to deprecate Index.LOCAL, and map it internally to > Index.PRIMARY_OWNER > Everything should work as before, except if someone is relying to find a > certain entry indexed in a specific local index where the put was issued: > the ClusteredQuery test suite does that, but I don't think this is a > realistic use case. > > Any objections? > > Thanks, > Gustavo > > > [1] > http://infinispan.org/docs/stable/user_guide/user_guide.html#query.clustered-query-api > [2] https://issues.jboss.org/browse/ISPN-7806 > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From belaban at mailbox.org Mon May 15 06:33:40 2017 From: belaban at mailbox.org (Bela Ban) Date: Mon, 15 May 2017 12:33:40 +0200 Subject: [infinispan-dev] S3_PING and GOOGLE_PING deprecated In-Reply-To: References: Message-ID: Hi folks, I've deprecated S3_PING [1] and GOOGLE_PING [3]. The replacements are NATIVE_S3_PING [2] and GOOGLE_PING2 [4]. S3_PING started out as a copy of sample code by Amazon a long time ago, and - because I never wanted Amazon's dependencies - is quite fat, as it contains a bunch of dependent classes. While Amazon's client code has evolved and become more robust, S3_PING has never changed much, so I suspect it contains a number of bugs and is possibly also not very efficient. GOOGLE_PING is a subclass of S3_PING and uses Google's AWS compatibility library. Thus, GOOGLE_PING inherits all of S3_PING's deficiencies. A couple of weeks ago, I therefore created NATIVE_S3_PING (a port of Zalando's original protocol of the same name for 3.x), which uses the AWS client SDK to access S3 storage.
This week, I also committed a first version of GOOGLE_PING2, which does not extend S3_PING any longer, but instead uses Google's client library to access Google Cloud Storage directly. The benefit of using an official client library instead of the copy&paste kludge that's called S3_PING is that client libs are maintained / updated / offer new functionality / yada yada yada. Since both protocols have dependencies, they're located in the jgroups-extras repo (https://github.com/jgroups-extras). I want the core JGroups repo to be free of any dependencies. S3_PING and GOOGLE_PING will be removed from JGroups in the next major version (5.x). If you have any concerns that functionality currently present in either S3_PING or GOOGLE_PING is not available in NATIVE_S3_PING or GOOGLE_PING2, let me know. Cheers, [1] http://www.jgroups.org/manual4/index.html#_s3_ping [2] https://github.com/jgroups-extras/native-s3-ping [3] http://www.jgroups.org/manual4/index.html#_google_ping [4] https://github.com/jgroups-extras/jgroups-google From anistor at redhat.com Mon May 15 10:01:13 2017 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 15 May 2017 17:01:13 +0300 Subject: [infinispan-dev] Deprecation of Index.LOCAL In-Reply-To: References: Message-ID: <2403466a-3639-ad5d-6c7d-d3edeaa7bfc8@redhat.com> +1 to kill it. On 05/15/2017 12:09 PM, Gustavo Fernandes wrote: > Hi, the Index.LOCAL setting was introduced eons ago to allow indexing > to occur once cluster-wide; > thus it's recommended when using an IndexManager such as > InfinispanIndexManager and ElasticsearchIndexManager that is shared > among all nodes. > > Furthermore, Index.LOCAL suits ClusteredQueries [1] where each node > has its own "private" index and query is broadcasted to each > individual node, and aggregated in the caller before returning the > results.
> > The issue with Index.LOCAL is when a command is originated in a > NON_OWNER (this happens in DIST caches), where there is no context > available that prevents obtention of previous values needed certain > commands. This makes fixing [2] complex as it requires fiddling with > more than a couple of interceptors, and it'd require remote fetching > of values. This extra fetch could be avoided if indexing always occurs > in the owners. > > > tl;dr > > The proposal is to deprecate Index.LOCAL, and map it internally to > Index.PRIMARY_OWNER > Everything should work as before, except if someone is relying to find > a certain entry indexed in a specific local index where the put was > issued: the ClusteredQuery test suite does that, but I don't think > this is a realistic use case. > > Any objections? > > Thanks, > Gustavo > > > [1] > http://infinispan.org/docs/stable/user_guide/user_guide.html#query.clustered-query-api > [2] https://issues.jboss.org/browse/ISPN-7806 > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170515/d5014bc0/attachment.html From ttarrant at redhat.com Mon May 15 11:19:37 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 15 May 2017 17:19:37 +0200 Subject: [infinispan-dev] Weekly meeting logs 2017-05-15 Message-ID: Hi all, the weekly IRC meeting logs are available here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-15-14.02.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Tue May 16 05:05:52 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 16 May 2017 09:05:52 +0000 Subject: [infinispan-dev] REST Refactoring - breaking changes Message-ID: Hey guys! 
I'm working on REST Server refactoring and I changed some of the previous behavior. Bearing in mind that we are implementing this in a minor release, I tried to make those changes really cosmetic:
- RestEASY as well as the Servlet API have been removed from modules and the BOM. If your app relied on them, you'll need to specify them separately in your pom.
- The previous implementation picked application/text as the default content type. I replaced it with text/plain with charset, which is more precise and seems to be more widely adopted.
- Putting an entry without any TTL or Idle Time made it live forever (which was BTW aligned with the docs). I switched to server-configured defaults in this case. If you want an entry that lives forever, just specify 0 or -1 there.
- Requesting an entry with the wrong MIME type (imagine it was stored using application/octet-stream and now you're requesting text/plain) caused a Bad Request. I switched it to Not Acceptable, which was designed specifically to cover this type of use case.
- In compatibility mode the server often tried to "guess" the MIME type (the decision was often between text/plain and application/octet-stream). I honestly think it was a wrong move and made the server-side code very hard to read and its results hard to predict. Now the server always returns text/plain by default. If you want to get a byte stream back, just add `Accept: application/octet-stream`.
- The server can be started with port 0. This way you are 100% sure that it will start using a unique port without colliding with any other service.
- The REST server hosts an HTML page if queried using GET on the default context. I think it was a bug that it didn't work correctly before.
- The UTF-8 charset is now the default. You may always ask the server to return a different encoding using the Accept header. The charset is not returned with binary MIME types.
- If a HEAD request results in an error, a message will be returned to the client.
Even though this behavior breaks Commons HTTP Client (HEAD requests are handled slightly differently and cause the client to hang if a payload is returned), I think it's beneficial to tell the user what went wrong. It's worth mentioning that the Jetty/Netty HTTP clients work correctly.
- RestServer doesn't implement Lifecycle now. The protocol server doesn't support a start() method without any arguments. You always need to specify a configuration + an Embedded Cache Manager.
Even though it's a long list, I think all those changes were worth it. Please let me know if you don't agree. Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170516/4740bc17/attachment.html From galder at redhat.com Tue May 16 09:03:31 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 16 May 2017 15:03:31 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 Message-ID: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> I'm on the move, not sure if Paul/Radim saw my replies: galderz, rvansa: Hey guys - is there a plan for Hibernate & ISPN 9? pferraro: Galder has been working on that pferraro: though I haven't seen any results but a list of stuff that needs to be changed galderz: which Hibernate branch are you targeting? pferraro: 5.2, but there are minute differences between 5.x in terms of the parts that need love to get Infinispan 9 support *** Mode change: +v vblagoje on #infinispan by ChanServ (ChanServ at services.) rvansa: are you suggesting that 5.0 or 5.1 branches will be adapted to additionally support infinispan 9? how is that possible? > pferraro: i'm working on it as we speak... > pferraro: down to 16 failuresd > pferraro: i started a couple of months ago, but had talks/demos to prepare > pferraro: i've got back to working on it this week ...
> pferraro: rvansa > rvansa: minute differences my ass ;p > pferraro: did you see my replies? > i got disconnected while replying... hmm - no - I didn't galderz: ^ > pferraro: so, working on the HB + I9 integration as we speak > pferraro: i started a couple of months back but had talks/demos to prepare and had to put that aside > pferraro: i'm down to 16 failures > pferraro: serious refactoring required of the integration to get it to compile and the tests to pass > pferraro: need to switch to async interceptor stack in 2lc integration and get all the subtle changes right > pferraro: it's a painstaking job basically > pferraro: i'm working on https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 > pferraro: i can't remember where i branched off, but it's a branch that steve had since master was focused on 5.x > pferraro: i've no idea when/where we'll integrate this, but one thing is for sure: it's nowhere near backwards compatible > actually, fixed one this morning, so down to 15 failures > pferraro: any suggestions/wishes? > is anyone out there? ;) Cheers, -- Galder Zamarre?o Infinispan, Red Hat From paul.ferraro at redhat.com Tue May 16 11:06:55 2017 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 16 May 2017 11:06:55 -0400 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> Message-ID: Thanks Galder. I read through the infinispan-dev thread on the subject, but I'm not sure what was concluded regarding the eventual home for this code. Once the testsuite passes, is the plan to commit to hibernate master? If so, I will likely fork these changes into a WF module (and adapt it for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 until Hibernate6 is integrated. Radim - one thing you mentioned on that infinispan-dev thread puzzled me: you said that invalidation mode offers no benefits over replication. How is that possible? 
Can you elaborate? Paul On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: > I'm on the move, not sure if Paul/Radim saw my replies: > > galderz, rvansa: Hey guys - is there a plan for Hibernate & > ISPN 9? > pferraro: Galder has been working on that > pferraro: though I haven't seen any results but a list of > stuff that needs to be changed > galderz: which Hibernate branch are you targeting? > pferraro: 5.2, but there are minute differences between 5.x > in terms of the parts that need love to get Infinispan 9 support > *** Mode change: +v vblagoje on #infinispan by ChanServ > (ChanServ at services.) > rvansa: are you suggesting that 5.0 or 5.1 branches will be > adapted to additionally support infinispan 9? how is that > possible? >> pferraro: i'm working on it as we speak... >> pferraro: down to 16 failuresd >> pferraro: i started a couple of months ago, but had talks/demos to > prepare >> pferraro: i've got back to working on it this week > ... >> pferraro: rvansa >> rvansa: minute differences my ass ;p >> pferraro: did you see my replies? >> i got disconnected while replying... 
> hmm - no - I didn't > galderz: ^ >> pferraro: so, working on the HB + I9 integration as we speak >> pferraro: i started a couple of months back but had talks/demos to > prepare and had to put that aside >> pferraro: i'm down to 16 failures >> pferraro: serious refactoring required of the integration to get it > to compile and the tests to pass >> pferraro: need to switch to async interceptor stack in 2lc > integration and get all the subtle changes right >> pferraro: it's a painstaking job basically >> pferraro: i'm working on > https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >> pferraro: i can't remember where i branched off, but it's a branch > that steve had since master was focused on 5.x >> pferraro: i've no idea when/where we'll integrate this, but one > thing is for sure: it's nowhere near backwards compatible >> actually, fixed one this morning, so down to 15 failures >> pferraro: any suggestions/wishes? >> is anyone out there? ;) > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > From rvansa at redhat.com Wed May 17 03:28:02 2017 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 17 May 2017 09:28:02 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> Message-ID: <25a96578-3294-6bc0-d7c0-19793c2eb75e@redhat.com> On 05/16/2017 05:06 PM, Paul Ferraro wrote: > Thanks Galder. I read through the infinispan-dev thread on the > subject, but I'm not sure what was concluded regarding the eventual > home for this code. > Once the testsuite passes, is the plan to commit to hibernate master? > If so, I will likely fork these changes into a WF module (and adapt it > for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 > until Hibernate6 is integrated. > > Radim - one thing you mentioned on that infinispan-dev thread puzzled > me: you said that invalidation mode offers no benefits over > replication. How is that possible? Can you elaborate? 
I have worded that a bit incorrectly - it offers no benefits in terms of the number of RPCs you have to execute. Yes, keeping the replication semantics, replication should hold the cached data on all nodes instead of on only a single node. The thing is that 2LC currently tweaks the actual mode so extensively that the maintenance burden is too much on 2LC itself. I perceive the difference between cache modes in the algorithm by which operations are routed through the cluster, not only in how these entries are stored (though that's just the other face of the coin). And while I've mostly kept the 'storage' part, the routing has changed very much in order to support transparent repeatable-read isolation which I expect from a DB. (Note: while Infinispan claims to support RR, the meaning is different from traditional DBs' RR - actually it's a hybrid between RR and snapshot isolation [1]) [1] https://github.com/infinispan/infinispan/blob/master/documentation/src/main/asciidoc/glossary/glossary.asciidoc#repeatable-read > > Paul > > On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño wrote: >> I'm on the move, not sure if Paul/Radim saw my replies: >> >> galderz, rvansa: Hey guys - is there a plan for Hibernate & >> ISPN 9? >> pferraro: Galder has been working on that >> pferraro: though I haven't seen any results but a list of >> stuff that needs to be changed >> galderz: which Hibernate branch are you targeting? >> pferraro: 5.2, but there are minute differences between 5.x >> in terms of the parts that need love to get Infinispan 9 support >> *** Mode change: +v vblagoje on #infinispan by ChanServ >> (ChanServ at services.) >> rvansa: are you suggesting that 5.0 or 5.1 branches will be >> adapted to additionally support infinispan 9? how is that >> possible? >>> pferraro: i'm working on it as we speak... >>> pferraro: down to 16 failuresd >>> pferraro: i started a couple of months ago, but had talks/demos to >> prepare >>> pferraro: i've got back to working on it this week >> ... 
>>> pferraro: rvansa >>> rvansa: minute differences my ass ;p >>> pferraro: did you see my replies? >>> i got disconnected while replying... >> hmm - no - I didn't >> galderz: ^ >>> pferraro: so, working on the HB + I9 integration as we speak >>> pferraro: i started a couple of months back but had talks/demos to >> prepare and had to put that aside >>> pferraro: i'm down to 16 failures >>> pferraro: serious refactoring required of the integration to get it >> to compile and the tests to pass >>> pferraro: need to switch to async interceptor stack in 2lc >> integration and get all the subtle changes right >>> pferraro: it's a painstaking job basically >>> pferraro: i'm working on >> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>> pferraro: i can't remember where i branched off, but it's a branch >> that steve had since master was focused on 5.x >>> pferraro: i've no idea when/where we'll integrate this, but one >> thing is for sure: it's nowhere near backwards compatible >>> actually, fixed one this morning, so down to 15 failures >>> pferraro: any suggestions/wishes? >>> is anyone out there? ;) >> Cheers, >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Wed May 17 10:56:25 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 17 May 2017 16:56:25 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) Message-ID: Hey all, Infinispan has historically had two ways of performing live migration between two clusters, via Hot Rod and via REST. We do not currently provide an offline migration, although we do have a cache store migration tool. Gustavo has recently made several changes to the Hot Rod implementation which have improved it greatly. The REST implementation is still not robust enough, but I think we can abandon it and just focus on the Hot Rod one even for servers using REST. 
The following is a list of stuff, mostly compiled by Gustavo, that needs to be done to make everything smooth and robust:
1) Need a way to automate client redirection to the new cluster. I've often referred to this as L4 client intelligence, which can also be used for server-assisted cross-site failover.
2) Need a way to "rollback" the process in case of failures during the migration: redirecting the clients back to the original cluster without data loss. This would use the above L4 strategy.
3) Expose metrics and progress.
4) Expose a way to cancel the process.
5) Expose a container-wide migration process which can be applied to all caches instead of one cache at a time.
6) The migration process should also take care of automatically configuring the endpoints / remote cache stores at the beginning of the process and removing any changes at the end.
7) Provide a future-proof format for the entries.
8) Implement dump and restore capabilities which can export the contents of a cluster to a file (compressed, encrypted, etc) or a collection of files (one per cache).
Anything else? Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Thu May 18 07:30:58 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 18 May 2017 11:30:58 +0000 Subject: [infinispan-dev] To Optional or not to Optional? Message-ID: Hey! In our past we had a couple of discussions about whether we should or should not use Optionals [1][2]. The main argument against it was performance. On one hand we risk additional object allocation (the Optional itself) and wrong inlining decisions taken by the C2 compiler [3]. On the other hand we all probably "feel" that both of those things shouldn't be a problem and should be optimized by C2. Another argument was that Optional doesn't give us anything, but as I checked, we introduced nearly 80 NullPointerException bugs in two years [4]. So we might consider Optional as a way of fighting those things. 
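To make the NPE-fighting argument concrete (hypothetical names below, not code from our tree): a null-returning lookup lets the failure surface far away from the missing entry, while an Optional return type forces the caller to decide what absence means:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class NpeVsOptional {
    static final Map<String, String> names = new HashMap<>();

    // Null-returning style: nothing in the signature reminds the caller
    // that the entry might be missing.
    static String findName(String id) {
        return names.get(id);
    }

    // Optional-returning style: absence is part of the method contract.
    static Optional<String> lookupName(String id) {
        return Optional.ofNullable(names.get(id));
    }

    public static void main(String[] args) {
        try {
            // The forgotten null check blows up here, far from the real cause.
            System.out.println(findName("42").toUpperCase());
        } catch (NullPointerException e) {
            System.out.println("NPE raised far from the missing entry");
        }
        // With Optional, the missing case must be handled at the call site.
        System.out.println(lookupName("42").map(String::toUpperCase).orElse("<unknown>"));
    }
}
```

The point is only that the second signature makes "no value" impossible to overlook, not that Optional removes the need for discipline.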
The final argument that I've seen was about the lack of higher-order functions, which is simply not true since we have #map, #filter and #flatMap functions. You can do pretty amazing things with those. I decided to check the performance when refactoring the REST interface. I created a PR with Optionals [5], ran performance tests, removed all Optionals and reran the tests. You will be surprised by the results [6]:

Test case                    With Optionals [%]        Without Optionals
                             Run 1    Run 2    Avg     Run 1    Run 2    Avg
Non-TX reads 10 threads
  Throughput                 32.54    32.87    32.71   31.74    34.04    32.89
  Response time             -24.12   -24.63   -24.38  -24.37   -25.69   -25.03
Non-TX reads 100 threads
  Throughput                  6.48   -12.79    -3.16   -7.06    -6.14    -6.60
  Response time              -6.15    14.93     4.39    7.88     6.49     7.19
Non-TX writes 10 threads
  Throughput                  9.21     7.60     8.41    4.66     7.15     5.91
  Response time              -8.92    -7.11    -8.02   -5.29    -6.93    -6.11
Non-TX writes 100 threads
  Throughput                  2.53     1.65     2.09   -1.16     4.67     1.76
  Response time              -2.13    -1.79    -1.96    0.91    -4.67    -1.88

I also created JMH + Flight Recorder tests and again, the results showed no evidence of slowdown caused by Optionals [7]. Now please take those results with a grain of salt since they tend to drift by a factor of +/-5% (sometimes even more). *But it's very clear the performance results are very similar if not the same.* Having those numbers at hand, do we want to have Optionals in the Infinispan codebase or not? And if not, let's state it very clearly (and write it into the contributing guide): it's because we don't like them. Not because of performance. 
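For completeness, here is what the higher-order functions look like in practice (a made-up config lookup, nothing from the PR itself):

```java
import java.util.Map;
import java.util.Optional;

class OptionalChaining {
    // Hypothetical config map standing in for a cache.
    static final Map<String, String> config = Map.of("timeout", "30", "name", "");

    static Optional<Integer> timeoutSeconds(String key) {
        return Optional.ofNullable(config.get(key))   // absent keys become Optional.empty()
                .map(String::trim)                    // applied only when a value is present
                .filter(s -> !s.isEmpty())            // blank values are treated as absent
                .flatMap(OptionalChaining::parseInt); // chain another Optional-producing step
    }

    static Optional<Integer> parseInt(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(timeoutSeconds("timeout").orElse(-1)); // 30
        System.out.println(timeoutSeconds("name").orElse(-1));    // -1 (blank value)
        System.out.println(timeoutSeconds("missing").orElse(-1)); // -1 (absent key)
    }
}
```

Absent keys, blank values and unparsable numbers all collapse into Optional.empty() without a single explicit null check.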
Thanks, Sebastian [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 [5] https://github.com/infinispan/infinispan/pull/5094 [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing [7] https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170518/0311868f/attachment-0001.html From karesti at redhat.com Thu May 18 08:01:23 2017 From: karesti at redhat.com (Katia Aresti) Date: Thu, 18 May 2017 14:01:23 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: Hi Sebastian, First thing to say, you impress me with all this work! Thank you very much! I've been working with Scala for almost three years, and I really appreciate the functional code style. This involves the use of Optionals among other things you mention like map, flatMap etc. Looking at the performance test, it seems Optionals are not an issue, so it's more a matter of coding style and design in most of the cases. After my experience with Scala, I believe that while Optionals do indeed avoid NullPointerException, they introduce NoSuchElementException. The coding style with functional programming is more than a matter of Optionals yes or no: it's very different from imperative programming and this will be hard to do really "as it should" in the Infinispan code base. 
So in the end, there will be moments where we will be calling "get" on an empty Optional, leading to the kind of bugs you listed before involving NullPointerException. At least, this is the case in my experience, especially coming from years of Java coding and with such a huge code base. But I think it's good to use Optionals, especially on return types (not in method parameters). So +1 to use Optionals, and +1 to decide clearly how, when and following which coding style rules we should introduce them in our public APIs, internal APIs and codebase in general. My 2 cents, Katia On Thu, May 18, 2017 at 1:30 PM, Sebastian Laskawiec wrote: > Hey! > > In our past we had a couple of discussions about whether we should or > should not use Optionals [1][2]. The main argument against it was > performance. > > On one hand we risk additional object allocation (the Optional itself) and > wrong inlining decisions taken by C2 compiler [3]. On the other hand we all > probably "feel" that both of those things shouldn't be a problem and should > be optimized by C2. Another argument was the Optional's doesn't give us > anything but as I checked, we introduced nearly 80 NullPointerException > bugs in two years [4]. So we might consider Optional as a way of fighting > those things. The final argument that I've seen was about lack of higher > order functions which is simply not true since we have #map, #filter and > #flatmap functions. You can do pretty amazing things with this. > > I decided to check the performance when refactoring REST interface. I > created a PR with Optionals [5], ran performance tests, removed all > Optionals and reran tests. 
You will be surprised by the results [6]: > > Test case > With Optionals [%] Without Optionals > Run 1 > Run > 2 > > Avg Run 1 > Run > 2 > > Avg > > Non-TX reads 10 threads > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 > > Non-TX reads 100 threads > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 > Response time -6.15 14.93 4.39 7.88 6.49 7.19 > > Non-TX writes 10 threads > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 > > Non-TX writes 100 threads > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 > > I also created JMH + Flight Recorder tests and again, the results showed > no evidence of slow down caused by Optionals [7]. > > Now please take those results with a grain of salt since they tend to > drift by a factor of +/-5% (sometimes even more). *But it's very clear > the performance results are very similar if not the same.* > > Having those numbers at hand, do we want to have Optionals in Infinispan > codebase or not? And if not, let's state it very clearly (and write it into > contributing guide), it's because we don't like them. Not because of > performance. 
> > Thanks, > Sebastian > > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016- > August/016796.html > [3] http://vanillajava.blogspot.ro/2015/01/java- > lambdas-and-low-latency.html > [4] https://issues.jboss.org/issues/?jql=project%20%3D% > 20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E% > 20%22NullPointerException%22%20AND%20created%20%3E%3D% > 202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > [5] https://github.com/infinispan/infinispan/pull/5094 > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/ > 1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > [7] https://github.com/infinispan/infinispan/pull/ > 5094#issuecomment-296970673 > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170518/420480cd/attachment.html From sanne at infinispan.org Thu May 18 08:35:52 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 18 May 2017 13:35:52 +0100 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: Hi Sebastian, sorry but I think you've been wasting time, I hope it was fun :) This is not the right methodology to "settle" the matter (unless you want Radim's eyes to get bloody..). Any change in such a complex system will only affect the performance metrics if you're actually addressing the dominant bottleneck. 
In some cases it might be CPU, like if your system is at 90%+ CPU then it's likely that reviewing the code to use less CPU would be beneficial; but even that can be counter-productive, for example if you're having contention caused by optimistic locking and you fail to address that while making something else "faster" the performance loss on the optimistic lock might become asymptotic. A good reason to avoid excessive usage of Optional (and *excessive* doesn't mean a couple dozen in millions of lines of code..) is to not run out of eden space, especially for all the code running in interpreted mode. In your case you've been benchmarking a hugely complex beast, not least over REST! When running the REST Server I doubt that allocation in eden is your main problem. You just happened to have a couple Optionals on your path; sure performance changed but there's not enough data here to figure out what exactly happened:
- did it change at all or was it just because of a lucky optimisation? (The JIT will always optimise stuff differently even when re-running the same code)
- did the overall picture improve because this code became much *less* slow?
The real complexity in benchmarking is to accurately understand why it changed; this should also tell you why it didn't change more, or less.. To be fair I actually agree that it's very likely that C2 can make any performance penalty disappear.. that's totally possible, although it's unlikely to be faster than just reading the field (assuming we don't need to do branching because of null-checks but C2 can optimise that as well). Still this requires the code to be optimised by JIT first, so it won't prevent us from creating a gazillion instances if we abuse its usage irresponsibly. Fighting internal NPEs is a matter of writing better code; I'm not against some "Optional" being strategically placed but I believe it's much nicer for most internal code to just avoid null, use "final", and initialize things aggressively. 
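To sketch what that null-free internal style could look like (an invented class, not Infinispan code): final fields, aggressive initialization and fail-fast checks internally, with Optional appearing only at the API surface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

final class ComponentRegistry {
    // final + initialized aggressively: the collection itself can never be null.
    private final List<String> components = new ArrayList<>();

    void register(String name) {
        if (name == null) {
            // fail fast at the boundary instead of letting null creep inward
            throw new IllegalArgumentException("name must not be null");
        }
        components.add(name);
    }

    // Optional only here, where "not found" is a legitimate answer for callers.
    Optional<String> find(String prefix) {
        return components.stream()
                .filter(c -> c.startsWith(prefix))
                .findFirst();
    }

    public static void main(String[] args) {
        ComponentRegistry registry = new ComponentRegistry();
        registry.register("transport");
        System.out.println(registry.find("trans").orElse("<none>")); // transport
        System.out.println(registry.find("jmx").orElse("<none>"));   // <none>
    }
}
```

Internals never juggle Optional instances on hot paths; the only allocation happens at the public lookup.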
Sure use Optional where it makes sense, probably most on APIs and SPIs, but please don't go overboard with it in internals. That's all I said in the original debate. In case you want to benchmark the impact of Optional make a JMH based microbenchmark - that's interesting to see what C2 is capable of - but even so that's not going to tell you much about the impact of patching thousands of lines of code all around Infinispan. And it will need some peer review before it can tell you anything at all ;) It's actually a very challenging topic, as we produce libraries meant for "anyone to use" and don't get to set the hardware specification requirements, so it's hard to predict if we should optimise the system for this/that resource consumption. Some people will have plenty of CPU and have problems with us needing too much memory, some others will have the opposite.. the real challenge is in making internals "elastic" to such factors and adaptable without making it too hard to tune. Thanks, Sanne On 18 May 2017 at 12:30, Sebastian Laskawiec wrote: > Hey! > > In our past we had a couple of discussions about whether we should or > should not use Optionals [1][2]. The main argument against it was > performance. > > On one hand we risk additional object allocation (the Optional itself) and > wrong inlining decisions taken by C2 compiler [3]. On the other hand we all > probably "feel" that both of those things shouldn't be a problem and should > be optimized by C2. Another argument was the Optional's doesn't give us > anything but as I checked, we introduced nearly 80 NullPointerException > bugs in two years [4]. So we might consider Optional as a way of fighting > those things. The final argument that I've seen was about lack of higher > order functions which is simply not true since we have #map, #filter and > #flatmap functions. You can do pretty amazing things with this. > > I decided to check the performance when refactoring REST interface. 
I > created a PR with Optionals [5], ran performance tests, removed all > Optionals and reran tests. You will be surprised by the results [6]: > > Test case > With Optionals [%] Without Optionals > Run 1 > Run > 2 > > Avg Run 1 > Run > 2 > > Avg > > Non-TX reads 10 threads > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 > > Non-TX reads 100 threads > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 > Response time -6.15 14.93 4.39 7.88 6.49 7.19 > > Non-TX writes 10 threads > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 > > Non-TX writes 100 threads > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 > > I also created JMH + Flight Recorder tests and again, the results showed > no evidence of slow down caused by Optionals [7]. > > Now please take those results with a grain of salt since they tend to > drift by a factor of +/-5% (sometimes even more). *But it's very clear > the performance results are very similar if not the same.* > > Having those numbers at hand, do we want to have Optionals in Infinispan > codebase or not? And if not, let's state it very clearly (and write it into > contributing guide), it's because we don't like them. Not because of > performance. 
> > Thanks, > Sebastian > > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016- > August/016796.html > [3] http://vanillajava.blogspot.ro/2015/01/java- > lambdas-and-low-latency.html > [4] https://issues.jboss.org/issues/?jql=project%20%3D% > 20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E% > 20%22NullPointerException%22%20AND%20created%20%3E%3D% > 202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > [5] https://github.com/infinispan/infinispan/pull/5094 > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/ > 1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > [7] https://github.com/infinispan/infinispan/pull/ > 5094#issuecomment-296970673 > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170518/6569ff1a/attachment-0001.html From emmanuel at hibernate.org Thu May 18 11:42:42 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 18 May 2017 17:42:42 +0200 Subject: [infinispan-dev] How to Build a Non-Volatile Memory DBMS References: <2AF7DB44-A619-4880-9FC4-EEE2C90BD14B@redhat.com> Message-ID: <4AB31E81-1345-49A8-B977-97ABC245E8F3@hibernate.org> https://www.cs.cmu.edu/~jarulraj/pages/sigmod_2017_tutorial.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170518/ea7b2e3f/attachment.html From rory.odonnell at oracle.com Fri May 19 06:29:41 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Fri, 19 May 2017 11:29:41 +0100 Subject: [infinispan-dev] JDK 9 EA Build 170 is available on jdk.java.net Message-ID: <2477156c-5125-af37-be1e-a756cbb6f48a@oracle.com> Hi Galder, *JDK 9 Early Access* build 170 is available at the new location: jdk.java.net/9/. A summary of all the changes in this build is listed here. Changes introduced since the last availability email that may be of interest:
* b168 - JDK-8175814: Update default HttpClient protocol version and optional request version (related to JEP 110: HTTP/2 Client).
* b169 - JDK-8178380: Module system implementation refresh (5/2017) - changes in command line options.
* b170 - JDK-8177153: LambdaMetafactory has default constructor. Incompatible change, release note: JDK-8180035.
*New Proposal* - Mark Reinhold has asked for comments on the jigsaw-dev mailing list [1]: Proposal: Allow illegal reflective access by default in JDK 9. In short, the existing "big kill switch" of the `--permit-illegal-access` option [1] will become the default behavior of the JDK 9 run-time system, though without as many warnings. The current behavior of JDK 9, in which illegal reflective-access operations from code on the class path are not permitted, will become the default in a future release. Nothing will change at compile time. Rgds, Rory [1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012673.html -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170519/e218cd99/attachment.html From vjuranek at redhat.com Fri May 19 06:50:15 2017 From: vjuranek at redhat.com (Vojtech Juranek) Date: Fri, 19 May 2017 12:50:15 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: References: Message-ID: <1497917.UsyfH1jcIx@localhost> On Wednesday, May 17, 2017 16:56:25 CEST Tristan Tarrant wrote: > 2) Need a way to "rollback" the process in case of failures during the > migration: redirecting the clients back to the original cluster without > data loss. This would use the above L4 strategy. it's not only about redirecting clients - IIRC newly created entries on the target cluster are not propagated back to the source cluster during a rolling upgrade, so we also need to somehow sync this new data back to the source cluster during the rollback to avoid data loss. The same applies to the "cancel process" feature -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170519/b85b27a5/attachment.bin From gustavo at infinispan.org Fri May 19 07:05:00 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Fri, 19 May 2017 12:05:00 +0100 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: <1497917.UsyfH1jcIx@localhost> References: <1497917.UsyfH1jcIx@localhost> Message-ID: On Fri, May 19, 2017 at 11:50 AM, Vojtech Juranek wrote: > On Wednesday, May 17, 2017 16:56:25 CEST Tristan Tarrant wrote: > > 2) Need a way to "rollback" the process in case of failures during the > > migration: redirecting the clients back to the original cluster without > > data loss. This would use the above L4 strategy. 
> > it's not only about redirecting clients - IIRC newly created entries on > target > cluster are not propagated back to source cluster during rolling upgrade, After the latest changes, new entries written to the target cluster are supposed to be propagated back to the source [1]. Did you find any issue with it? [1] https://issues.jboss.org/browse/ISPN-7586 Thanks, Gustavo > so > we need also somehow sync these new data back to source cluster during the > rollback to avoid data losses. Same applies to "cancel process" feature > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170519/60ba4d43/attachment.html From wfink at redhat.com Fri May 19 07:17:22 2017 From: wfink at redhat.com (Wolf Fink) Date: Fri, 19 May 2017 13:17:22 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: <1497917.UsyfH1jcIx@localhost> References: <1497917.UsyfH1jcIx@localhost> Message-ID: +1 for Vojtech - yes, the clients currently need to be moved to the new cluster in one shot, that was discussed before, and that complicates the migration because most of the customers are not able to make that happen. So there is a small possibility of inconsistency if clients connected to the old server update entries that the new server has already migrated. I see two options:
1) the source server needs to actively propagate updates to the target
2) with the new L4 strategy all clients are moved automatically to the target, so the source is not updated. 
I only see a small possibility for this to happen during the switch:
- a client might still have a request to the source until other clients are moved to the target and have already accessed the key
- a new client connects with old properties; here we need to ensure that the first request is redirected to the target and does not update the source
On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek wrote: > On Wednesday, May 17, 2017 16:56:25 CEST Tristan Tarrant wrote: > > 2) Need a way to "rollback" the process in case of failures during the > > migration: redirecting the clients back to the original cluster without > > data loss. This would use the above L4 strategy. > > it's not only about redirecting clients - IIRC newly created entries on > target > cluster are not propagated back to source cluster during rolling upgrade, > so > we need also somehow sync these new data back to source cluster during the > rollback to avoid data losses. Same applies to "cancel process" feature > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170519/e03e84ab/attachment.html From vjuranek at redhat.com Mon May 22 02:45:56 2017 From: vjuranek at redhat.com (Vojtech Juranek) Date: Mon, 22 May 2017 08:45:56 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: References: <1497917.UsyfH1jcIx@localhost> Message-ID: <2834990.dUSs2InnQi@localhost.localdomain> On Friday, May 19, 2017 13:05:00 CEST Gustavo Fernandes wrote: > After the latest changes, new entries written to the target cluster are > supposed to propagated back to the source [1]. I thought that when the rolling upgrade is in progress, entries written to target cluster are not written to source cluster. 
If this is incorrect, what does L70 in DistCacheWriterInterceptor [1] actually do? Thanks [1] https://github.com/infinispan/infinispan/blob/master/core/src/main/java/ org/infinispan/interceptors/impl/DistCacheWriterInterceptor.java#L70 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: This is a digitally signed message part. Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/606ab984/attachment-0001.bin From gustavo at infinispan.org Mon May 22 03:35:44 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 22 May 2017 08:35:44 +0100 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: <2834990.dUSs2InnQi@localhost.localdomain> References: <1497917.UsyfH1jcIx@localhost> <2834990.dUSs2InnQi@localhost.localdomain> Message-ID: On Mon, May 22, 2017 at 7:45 AM, Vojtech Juranek wrote: > On Friday, May 19, 2017 13:05:00 CEST Gustavo Fernandes wrote: > > After the latest changes, new entries written to the target cluster are > > supposed to propagated back to the source [1]. > > I thought that when the rolling upgrade is in progress, entries written to > target cluster are not written to source cluster. During a RU, there are two agents writing data to the target cluster: the user and the RU process itself. Since the remote store in the target cluster is not supposed to be used in read-only mode anymore, all data changes by the user in the target cluster during rolling upgrade are propagated remotely to the source. OTOH, data written by the Rolling Upgrader itself is not written back to the source, since it is merely doing the migration. If this is incorrect, what 
> > This line basically says: if data is being written by the rolling upgrade process itself, write it to the stores (excluding the remoteStore), regardless of whether the command is conditional or not. > Thanks > > [1] https://github.com/infinispan/infinispan/blob/master/core/ > src/main/java/ > org/infinispan/interceptors/impl/DistCacheWriterInterceptor.java#L70 > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/7664ebcf/attachment.html From slaskawi at redhat.com Mon May 22 04:16:49 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 22 May 2017 08:16:49 +0000 Subject: [infinispan-dev] Infinispan Spring Boot Starters 1.0.0.Final released Message-ID: Hey! I'm happy to announce that Infinispan Spring Boot Starters 1.0.0.Final have been released. Change-list:
* [https://github.com/infinispan/infinispan-spring-boot/pull/27] Infinispan 9.0.0.Final is used by default
* [https://github.com/infinispan/infinispan-spring-boot/pull/25] Added metadata description. Thanks a lot Luca for contributing this!
* [https://github.com/infinispan/infinispan-spring-boot/pull/26] Added more documentation
You can grab the bits from JBoss Repository [1] after the sync is complete. In the meantime, grab them from here [2]. [1] https://repository.jboss.org/nexus/content/repositories/public-jboss/org/infinispan/infinispan-spring-boot-starter/1.0.0.Final/ [2] https://origin-repository.jboss.org/nexus/content/repositories/public-jboss/org/infinispan/infinispan-spring-boot-starter/1.0.0.Final/ Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/29374a13/attachment.html

From karesti at redhat.com Mon May 22 05:13:53 2017
From: karesti at redhat.com (Katia Aresti)
Date: Mon, 22 May 2017 11:13:53 +0200
Subject: [infinispan-dev] Concurrency API
Message-ID:

Hi all,

I've been working last week on a concurrent API design, trying to keep the scope small (but not too small). I've come up with a proposal (WIP) while having a look at what competitors do today. I've shared a small design proposal concerning an Infinispan Global API Object too. TBH nothing very exotic, but I believe that this object could be a stepping stone for user API usability.

Please, be kind with me, it's not super detailed (forgive me Radim!) but I would like to share it already so you can add your thoughts/suggestions through comments and help me improve it before heading into any implementation.

Here is the PR: https://github.com/infinispan/infinispan-designs/pull/8

Cheers

Katia

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/6ccdbb51/attachment.html

From vjuranek at redhat.com Mon May 22 05:20:27 2017
From: vjuranek at redhat.com (Vojtech Juranek)
Date: Mon, 22 May 2017 11:20:27 +0200
Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc)
In-Reply-To:
References: <2834990.dUSs2InnQi@localhost.localdomain>
Message-ID: <11418335.0MUhlAvVF5@localhost.localdomain>

> This line basically says: if data is being written by the rolling upgrade
> process itself, write it to the stores (excluding the remoteStore),
> regardless if the command is conditional or not.

ok, makes sense to me now, thanks for the explanation!

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 473 bytes
Desc: This is a digitally signed message part.
Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/2138d860/attachment.bin

From galder at redhat.com Mon May 22 05:47:55 2017
From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=)
Date: Mon, 22 May 2017 11:47:55 +0200
Subject: [infinispan-dev] REST Refactoring - breaking changes
In-Reply-To:
References:
Message-ID:

All look good to me :) Thanks Sebastian!
--
Galder Zamarreño
Infinispan, Red Hat

> On 16 May 2017, at 11:05, Sebastian Laskawiec wrote:
>
> Hey guys!
>
> I'm working on REST Server refactoring and I changed some of the previous behavior. Keeping in mind that we are implementing this in a minor release, I tried to make those changes really cosmetic:
> * RestEASY as well as the Servlet API have been removed from modules and the BOM. If your app relied on them, you'll need to specify them separately in your pom.
> * The previous implementation picked application/text as the default content type. I replaced it with text/plain with charset, which is more precise and seems to be more widely adopted.
> * Putting an entry without any TTL nor Idle Time made it live forever (which was BTW aligned with the docs). I switched to server-configured defaults in this case. If you want to have an entry that lives forever, just specify 0 or -1 there.
> * Requesting an entry with the wrong mime type (imagine it was stored using application/octet-stream and now you're requesting text/plain) caused Bad Request. Now I switched it to Not Acceptable, which was designed specifically to cover this type of use case.
> * In compatibility mode the server often tried to "guess" the mimetype (the decision was often between text/plain and application/octet-stream). I honestly think it was a wrong move and made the server-side code very hard to read and predict what the result would be. Now the server always returns text/plain by default. If you want to get a byte stream back, just add `Accept: application/octet-stream`.
> * The server can be started with port 0.
This way you are 100% sure that it will start using a unique port without colliding with any other service.
> * The REST server hosts an HTML page if queried using GET on the default context. I think it was a bug that it didn't work correctly before.
> * UTF-8 charset is now the default. You may always ask the server to return a different encoding using the Accept header. The charset is not returned with binary mime types.
> * If a HEAD request results in an error, a message will be returned to the client. Even though this behavior breaks Commons HTTP Client (HEAD requests are handled slightly differently and cause the client to hang if a payload is returned), I think it's beneficial to tell the user what went wrong. It's worth mentioning that the Jetty/Netty HTTP clients work correctly.
> * RestServer doesn't implement Lifecycle now. The protocol server doesn't support a start() method without any arguments. You always need to specify the configuration + Embedded Cache Manager.
> Even though it's a long list, I think all those changes were worth it. Please let me know if you don't agree.
>
> Thanks,
> Sebastian
>
> --
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From slaskawi at redhat.com Mon May 22 07:52:23 2017
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Mon, 22 May 2017 11:52:23 +0000
Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc)
In-Reply-To:
References: <1497917.UsyfH1jcIx@localhost>
Message-ID:

On Fri, May 19, 2017 at 1:18 PM Wolf Fink wrote:

> +1 for Vojtech
>
> yes, the clients need to be moved to the new cluster in one shot currently,
> that was discussed before.
> And it makes the migration because most of the customers are not able to
> make that happen.
> So there is a small possibility of inconsistency if clients connect to the
> old server and update entries until the new server has already migrated them.
>
> I see two options
> 1)
> the source server needs to actively propagate updates to the target
> 2)
> with the new L4 strategy all clients are moved automatically to the
> target. So the source is not updated.
> I only see a small possibility for this to happen during the switch
> - a client might still have a request to the source until other clients
> are moved to the target and have already accessed the key
> - a new client connects with old properties, here we need to ensure that
> the first request is redirected to the target and does not update the source

Could you please tell me what L4 means in this context? Are you referring to L4 routing/switching (transport level) or the new Hot Rod client intelligence?

In Kubernetes/OpenShift, governing an Infinispan cluster by a Load Balancer could do the trick. If all clients use the Service URL, once Kubernetes kills all "old" Pods, all TCP socket connections will break and the clients will retry. This will result in a massive load of error messages, but the clients will eventually connect to the new cluster.

> On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek wrote:
>
>> On středa 17. května 2017 16:56:25 CEST Tristan Tarrant wrote:
>> > 2) Need a way to "rollback" the process in case of failures during the
>> > migration: redirecting the clients back to the original cluster without
>> > data loss. This would use the above L4 strategy.
>>
>> it's not only about redirecting clients - IIRC newly created entries on target
>> cluster are not propagated back to source cluster during rolling upgrade, so
>> we need also somehow sync these new data back to source cluster during the
>> rollback to avoid data losses.
Same applies to "cancel process" feature >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/7ccfd96f/attachment-0001.html From slaskawi at redhat.com Mon May 22 08:50:20 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 22 May 2017 12:50:20 +0000 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: References: Message-ID: Hey Tristan! I checked this part and it won't do the trick. The problem is that the server does not know which address is used for exposing its services. Moreover, this address can change with time. Thanks, Sebastian On Tue, May 9, 2017 at 3:28 PM Tristan Tarrant wrote: > Sebastian, > are you familiar with Hot Rod's proxyHost/proxyPort [1]. In server it is > configured using external-host / external-port attributes on the > topology-state-transfer element [2] > > > > [1] > > https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43 > [2] > > https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203 > > > On 5/8/17 9:57 AM, Sebastian Laskawiec wrote: > > Hey guys! 
> > > > A while ago I started working on exposing Infinispan Cluster which is > > hosted in Kubernetes to the outside world: > > > > pasted1 > > > > I'm currently struggling to get solution like this into the platform [1] > > but in the meantime I created a very simple POC and I'm testing it > > locally [2]. > > > > There are two main problems with the scenario described above: > > > > 1. Infinispan server announces internal addresses (172.17.x.x) to the > > client. The client needs to remap them into external ones > (172.29.x.x). > > 2. A custom Consistent Hash needs to be supplied to the Hot Rod client. > > When accessing cache, the Hot Rod Client needs to calculate server > > id for internal address and then map it to the external one. > > > > If there will be no strong opinions regarding to this, I plan to > > implement this shortly. There will be additional method in Hot Rod > > Client configuration (ConfigurationBuilder#addServerMapping(String > > mappingClass)) which will be responsible for mapping external addresses > > to internal and vice-versa. > > > > Thoughts? > > > > Thanks, > > Sebastian > > > > [1] https://github.com/kubernetes/community/pull/446 > > [2] https://github.com/slaskawi/external-ip-proxy > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/d47ecf77/attachment.html From mgencur at redhat.com Mon May 22 08:59:57 2017 From: mgencur at redhat.com (Martin Gencur) Date: Mon, 22 May 2017 14:59:57 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: References: <1497917.UsyfH1jcIx@localhost> Message-ID: <0d7a315e-980b-b8b2-5743-61a686619ff4@redhat.com> Hi Wolf, that was exactly my thought. Clients redirected to the target cluster do not get updates written by other clients in the source cluster during the rolling upgrade process. It is because the clients in target cluster won't read the data through the remote cache store if they already have the requested key in the local memory. Is there a BZ/JIRA for this? Martin On 19.5.2017 13:17, Wolf Fink wrote: > +1 for Vojtech > > yes the client's need to moved to the new cluster in one shot current, > that was discussed before. > And it makes the migration because most of the customers are not able > to make that happen. > So there is a small possibility of inconsistence if clients connect to > the old server update entries until the new server already migrated it. > > I see two options > 1) > source server need to propagate active to target on update > 2) > with the new L4 strategy all clients are moved automatically to the > target. So the source is not updated. > I only see a small possibility for this to happen during switch > - a client might still have a request to the source until other > clients are moved to target and already accessed the key > - a new client connects with old properties, here we need to ensure > that the first request is redirected to the target and not update the > source > > On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek > wrote: > > On st?eda 17. 
kv?tna 2017 16:56:25 CEST Tristan Tarrant wrote: > > 2) Need a way to "rollback" the process in case of failures > during the > > migration: redirecting the clients back to the original cluster > without > > data loss. This would use the above L4 strategy. > > it's not only about redirecting clients - IIRC newly created > entries on target > cluster are not propagated back to source cluster during rolling > upgrade, so > we need also somehow sync these new data back to source cluster > during the > rollback to avoid data losses. Same applies to "cancel process" > feature > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/47351a94/attachment.html From ttarrant at redhat.com Mon May 22 09:36:34 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 22 May 2017 15:36:34 +0200 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: References: Message-ID: <31605850-eeea-7805-7f1b-93117aa235b6@redhat.com> We would need to provide a way to supply the external address at runtime, e.g. via JMX. Tristan On 5/22/17 2:50 PM, Sebastian Laskawiec wrote: > Hey Tristan! > > I checked this part and it won't do the trick. The problem is that the > server does not know which address is used for exposing its services. > Moreover, this address can change with time. > > Thanks, > Sebastian > > On Tue, May 9, 2017 at 3:28 PM Tristan Tarrant > wrote: > > Sebastian, > are you familiar with Hot Rod's proxyHost/proxyPort [1]. 
In server it is > configured using external-host / external-port attributes on the > topology-state-transfer element [2] > > > > [1] > https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43 > [2] > https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203 > > > On 5/8/17 9:57 AM, Sebastian Laskawiec wrote: > > Hey guys! > > > > A while ago I started working on exposing Infinispan Cluster which is > > hosted in Kubernetes to the outside world: > > > > pasted1 > > > > I'm currently struggling to get solution like this into the > platform [1] > > but in the meantime I created a very simple POC and I'm testing it > > locally [2]. > > > > There are two main problems with the scenario described above: > > > > 1. Infinispan server announces internal addresses (172.17.x.x) > to the > > client. The client needs to remap them into external ones > (172.29.x.x). > > 2. A custom Consistent Hash needs to be supplied to the Hot Rod > client. > > When accessing cache, the Hot Rod Client needs to calculate > server > > id for internal address and then map it to the external one. > > > > If there will be no strong opinions regarding to this, I plan to > > implement this shortly. There will be additional method in Hot Rod > > Client configuration (ConfigurationBuilder#addServerMapping(String > > mappingClass)) which will be responsible for mapping external > addresses > > to internal and vice-versa. > > > > Thoughts? 
> > > > Thanks,
> > Sebastian
> >
> > [1] https://github.com/kubernetes/community/pull/446
> > [2] https://github.com/slaskawi/external-ip-proxy
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

> --
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

From galder at redhat.com Mon May 22 09:50:27 2017
From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=)
Date: Mon, 22 May 2017 15:50:27 +0200
Subject: [infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!
In-Reply-To: <2872808.gplZBrWE3V@dhcp-10-40-5-95.brq.redhat.com>
References: <2000E4DB-7A7F-45E4-8833-7EB3A1C60DF0@redhat.com> <2872808.gplZBrWE3V@dhcp-10-40-5-95.brq.redhat.com>
Message-ID: <1E62C79A-6E30-47F3-B177-D1A981EE2AEC@redhat.com>

Hey Vojtech,

Really cool demo!!

As you know, we've created an organization called `infinispan-demos` to keep Infinispan-related demos.

Can you transfer that demo to the infinispan-demos organization?

https://help.github.com/articles/about-repository-transfers/

Cheers,
--
Galder Zamarreño
Infinispan, Red Hat

> On 12 Apr 2017, at 09:13, Vojtech Juranek wrote:
>
> Thanks for sharing, nice demos!
> On a similar data processing note, here [1] is my demo from DevConf on how to use
> ISPN in a machine learning pipeline (here the data is not processed directly in
> ISPN but in TensorFlow)
>
> [1] https://github.com/vjuranek/tf-ispn-demo
>
> On pátek 7. dubna 2017 10:48:23 CEST Galder Zamarreño wrote:
>> Hi all,
>>
>> I've just got back from Devoxx France where Emmanuel and I presented about
>> in-memory data grid use cases, and during this talk we presented a couple
>> of demos on using Infinispan for offline analytics and real-time data
>> processing.
>>
>> I've just created a new blog post with some very quick instructions for you
>> to run these demos:
>> http://blog.infinispan.org/2017/04/in-memory-data-grid-patterns-demos-from.html
>>
>> Give them a try and let us know what you think!
>>
>> Cheers,
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From galder at redhat.com Mon May 22 09:52:12 2017
From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=)
Date: Mon, 22 May 2017 15:52:12 +0200
Subject: [infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!
In-Reply-To: <1E62C79A-6E30-47F3-B177-D1A981EE2AEC@redhat.com>
References: <2000E4DB-7A7F-45E4-8833-7EB3A1C60DF0@redhat.com> <2872808.gplZBrWE3V@dhcp-10-40-5-95.brq.redhat.com> <1E62C79A-6E30-47F3-B177-D1A981EE2AEC@redhat.com>
Message-ID: <4B5D3FA9-F763-4DB1-8847-A413B40D3E6F@redhat.com>

Another thing, isn't the package.json file missing dependencies?

https://github.com/vjuranek/tf-ispn-demo/blob/master/nodejs-consumer/package.json

It should have the infinispan dependency, 0.4.0 or higher.
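To make that concrete, a dependencies entry along the lines Galder describes might look roughly like this (a sketch only: the name and version fields are invented placeholders, and the `^0.4.0` range is an assumption based on "0.4.0 or higher"):

```json
{
  "name": "nodejs-consumer",
  "version": "0.0.1",
  "dependencies": {
    "infinispan": "^0.4.0"
  }
}
```

With that entry in place, `npm install` pulls the Infinispan Node.js client into the demo.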
Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 22 May 2017, at 15:50, Galder Zamarre?o wrote: > > Hey Vojtech, > > Really cool demo!! > > As you know, we've created an organization to keep infinispan related demos called `infinispan-demos` > > Can you transfer that demo to the infinispan-demos organization? > > https://help.github.com/articles/about-repository-transfers/ > > Cheers, > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 12 Apr 2017, at 09:13, Vojtech Juranek wrote: >> >> Thanks for sharing, nice demos! >> >> On a similar data processing note, here [1] is my demo from DevConf how to use >> ISPN in machine learning pipeline (here the data is not processed direcly in >> ISPN but in TensorFlow) >> >> [1] https://github.com/vjuranek/tf-ispn-demo >> >> On p?tek 7. dubna 2017 10:48:23 CEST Galder Zamarre?o wrote: >>> Hi all, >>> >>> I've just got back from Devoxx France where Emmanuel and I presented about >>> in-memory data grid use cases, and during this talk we presented a couple >>> of demos on using Infinispan for offline analytics and real-time data >>> processing. >>> >>> I've just created a new blog post with some very quick instructions for you >>> to run these demos: >>> http://blog.infinispan.org/2017/04/in-memory-data-grid-patterns-demos-from. >>> html >>> >>> Give them a try and let us know what you think! 
>>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From gustavo at infinispan.org Mon May 22 10:12:31 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Mon, 22 May 2017 15:12:31 +0100 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: <0d7a315e-980b-b8b2-5743-61a686619ff4@redhat.com> References: <1497917.UsyfH1jcIx@localhost> <0d7a315e-980b-b8b2-5743-61a686619ff4@redhat.com> Message-ID: On Mon, May 22, 2017 at 1:59 PM, Martin Gencur wrote: > Hi Wolf, > that was exactly my thought. Clients redirected to the target cluster do > not get updates written by other clients in the source cluster during the > rolling upgrade process. It is because the clients in target cluster won't > read the data through the remote cache store if they already have the > requested key in the local memory. > No, it won't, the source cluster is not supposed to be written to during Rolling Upgrade. That's why the "L4" will prevent that. As per Wolf's comments: > a client might still have a request to the source until other clients are moved to target and already accessed the key The source cluster will enter a "redirect" mode. Every new client and new operation will be sent to the new cluster. Ongoing operations will need to be re-done in the new cluster. > a new client connects with old properties, here we need to ensure that the first request is redirected to the target and not update the source The old server will be in "redirect" mode, this new client will be redirected to the new cluster. 
After the RU completes, this new client will not be able to connect anymore since the old cluster will have been destroyed. Is there a BZ/JIRA for this? > It will follow soon. In the meantime, please make sure clients are pointing to the new server. Gustavo > Martin > > > On 19.5.2017 13:17, Wolf Fink wrote: > > +1 for Vojtech > > yes the client's need to moved to the new cluster in one shot current, > that was discussed before. > And it makes the migration because most of the customers are not able to > make that happen. > So there is a small possibility of inconsistence if clients connect to the > old server update entries until the new server already migrated it. > > I see two options > 1) > source server need to propagate active to target on update > 2) > with the new L4 strategy all clients are moved automatically to the > target. So the source is not updated. > I only see a small possibility for this to happen during switch > - a client might still have a request to the source until other clients > are moved to target and already accessed the key > - a new client connects with old properties, here we need to ensure that > the first request is redirected to the target and not update the source > > On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek > wrote: > >> On st?eda 17. kv?tna 2017 16:56:25 CEST Tristan Tarrant wrote: >> > 2) Need a way to "rollback" the process in case of failures during the >> > migration: redirecting the clients back to the original cluster without >> > data loss. This would use the above L4 strategy. >> >> it's not only about redirecting clients - IIRC newly created entries on >> target >> cluster are not propagated back to source cluster during rolling upgrade, >> so >> we need also somehow sync these new data back to source cluster during the >> rollback to avoid data losses. 
Same applies to "cancel process" feature >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > > _______________________________________________ > infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/efffd05c/attachment.html From galder at redhat.com Mon May 22 11:00:28 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 22 May 2017 17:00:28 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: I think Sanne's right here, any differences in such large scale test are hard to decipher. Also, as mentioned in a previous email, my view on its usage is same as Sanne's: * Definitely in APIs/SPIs. * Be gentle with it internals. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 18 May 2017, at 14:35, Sanne Grinovero wrote: > > Hi Sebastian, > > sorry but I think you've been wasting time, I hope it was fun :) This is not the right methodology to "settle" the matter (unless you want Radim's eyes to get bloody..). > > Any change in such a complex system will only affect the performance metrics if you're actually addressing the dominant bottleneck. 
> In some cases it might be CPU: if your system is at 90%+ CPU then it's likely that reviewing the code to use less CPU would be beneficial; but even that can be counter-productive, for example if you're having contention caused by optimistic locking and you fail to address that while making something else "faster", the performance loss on the optimistic lock might become asymptotic.
>
> A good reason to avoid excessive usage of Optional (and *excessive* doesn't mean a couple dozen in a million lines of code..) is to not run out of eden space, especially for all the code running in interpreted mode.
>
> In your case you've been benchmarking a hugely complex beast, not least over REST! When running the REST Server I doubt that allocation in eden is your main problem. You just happened to have a couple of Optionals on your path; sure, performance changed, but there's not enough data this way to figure out what exactly happened:
> - did it change at all or was it just because of a lucky optimisation? (The JIT will always optimise stuff differently even when re-running the same code)
> - did the overall picture improve because this code became much *less* slow?
>
> The real complexity in benchmarking is to accurately understand why it changed; this should also tell you why it didn't change more, or less..
>
> To be fair I actually agree that it's very likely that C2 can make any performance penalty disappear.. that's totally possible, although it's unlikely to be faster than just reading the field (assuming we don't need to do branching because of null-checks, but C2 can optimise that as well).
> Still, this requires the code to be optimised by the JIT first, so it won't prevent us from creating a gazillion of instances if we abuse its usage irresponsibly.
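The allocation point above can be made concrete with a tiny sketch (class and method names invented for illustration): each `Optional.ofNullable` call on a non-null value allocates a fresh wrapper object, while `Optional.empty()` is a shared singleton, so the extra allocation pressure only shows up on hot paths that carry non-null values.

```java
import java.util.Optional;

public class OptionalAllocationSketch {

    // Plain null-check style: no wrapper object is created.
    static String describePlain(String value) {
        return value != null ? value.toUpperCase() : "n/a";
    }

    // Optional style: every non-null value is wrapped in a new Optional
    // instance (Optional.empty() is a shared singleton, so nulls are free).
    static String describeOptional(String value) {
        return Optional.ofNullable(value)
                .map(String::toUpperCase)
                .orElse("n/a");
    }

    public static void main(String[] args) {
        // Both styles produce identical results; the difference being
        // debated is purely allocation behaviour on hot paths.
        System.out.println(describePlain("infinispan"));     // INFINISPAN
        System.out.println(describeOptional("infinispan"));  // INFINISPAN
        System.out.println(describePlain(null));             // n/a
        System.out.println(describeOptional(null));          // n/a
    }
}
```

Whether the JIT's escape analysis elides those wrapper allocations in practice is exactly what a microbenchmark would have to establish.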
> Fighting internal NPEs is a matter of writing better code; I'm not against some "Optional" being strategically placed, but I believe it's much nicer for most internal code to just avoid null, use "final", and initialize things aggressively.
>
> Sure, use Optional where it makes sense, mostly on APIs and SPIs, but please don't go overboard with it in internals. That's all I said in the original debate.
>
> In case you want to benchmark the impact of Optional, make a JMH-based microbenchmark - that's interesting to see what C2 is capable of - but even so that's not going to tell you much about the impact of patching thousands of lines of code all around Infinispan. And it will need some peer review before it can tell you anything at all ;)
>
> It's actually a very challenging topic: as we produce libraries meant for "anyone to use" and don't get to set the hardware specification requirements, it's hard to predict if we should optimise the system for this or that resource consumption. Some people will have plenty of CPU and have problems with us needing too much memory, some others will have the opposite.. the real challenge is in making internals "elastic" to such factors and adaptable without making it too hard to tune.
>
> Thanks,
> Sanne
>
> On 18 May 2017 at 12:30, Sebastian Laskawiec wrote:
> Hey!
>
> In our past we had a couple of discussions about whether we should or should not use Optionals [1][2]. The main argument against it was performance.
>
> On one hand we risk additional object allocation (the Optional itself) and wrong inlining decisions taken by the C2 compiler [3]. On the other hand we all probably "feel" that both of those things shouldn't be a problem and should be optimized by C2. Another argument was that Optional doesn't give us anything, but as I checked, we introduced nearly 80 NullPointerException bugs in two years [4]. So we might consider Optional as a way of fighting those things.
The final argument that I've seen was about lack of higher order functions which is simply not true since we have #map, #filter and #flatmap functions. You can do pretty amazing things with this.
>
> I decided to check the performance when refactoring REST interface. I created a PR with Optionals [5], ran performance tests, removed all Optionals and reran tests. You will be surprised by the results [6]:
>
>                              With Optionals [%]       Without Optionals
> Test case                    Run 1   Run 2   Avg      Run 1   Run 2   Avg
> Non-TX reads 10 threads
>   Throughput                 32.54   32.87   32.71    31.74   34.04   32.89
>   Response time             -24.12  -24.63  -24.38   -24.37  -25.69  -25.03
> Non-TX reads 100 threads
>   Throughput                  6.48  -12.79   -3.16    -7.06   -6.14   -6.60
>   Response time              -6.15   14.93    4.39     7.88    6.49    7.19
> Non-TX writes 10 threads
>   Throughput                  9.21    7.60    8.41     4.66    7.15    5.91
>   Response time              -8.92   -7.11   -8.02    -5.29   -6.93   -6.11
> Non-TX writes 100 threads
>   Throughput                  2.53    1.65    2.09    -1.16    4.67    1.76
>   Response time              -2.13   -1.79   -1.96     0.91   -4.67   -1.88
>
> I also created JMH + Flight Recorder tests and again, the results showed no evidence of slow down caused by Optionals [7].
>
> Now please take those results with a grain of salt since they tend to drift by a factor of +/-5% (sometimes even more). But it's very clear the performance results are very similar if not the same.
>
> Having those numbers at hand, do we want to have Optionals in Infinispan codebase or not? And if not, let's state it very clearly (and write it into contributing guide), it's because we don't like them. Not because of performance.
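As an aside for readers, the #map/#filter/#flatMap chaining mentioned above composes like this (a hedged sketch: `findConfig`, the property names, and the parsing step are invented for illustration, not Infinispan API):

```java
import java.util.Optional;

public class OptionalChaining {

    // Hypothetical lookup used only for illustration.
    static Optional<String> findConfig(String name) {
        return "rest.port".equals(name) ? Optional.of("8080") : Optional.empty();
    }

    static Optional<Integer> port(String name) {
        return findConfig(name)
                .map(String::trim)             // transform the value if present
                .filter(s -> !s.isEmpty())     // drop blank values
                .flatMap(s -> {                // parse, staying inside Optional
                    try {
                        return Optional.of(Integer.parseInt(s));
                    } catch (NumberFormatException e) {
                        return Optional.empty();
                    }
                });
    }

    public static void main(String[] args) {
        // The whole pipeline short-circuits to empty if any stage fails.
        System.out.println(port("rest.port").orElse(-1)); // 8080
        System.out.println(port("missing").orElse(-1));   // -1
    }
}
```

The point of the chain is that none of the intermediate stages needs an explicit null check; an absent value simply flows through as `Optional.empty()`.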
> > Thanks, > Sebastian > > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html > [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > [5] https://github.com/infinispan/infinispan/pull/5094 > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > [7] https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 > -- > SEBASTIAN ŁASKAWIEC > INFINISPAN DEVELOPER > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From wfink at redhat.com Mon May 22 12:59:55 2017 From: wfink at redhat.com (Wolf Fink) Date: Mon, 22 May 2017 18:59:55 +0200 Subject: [infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc) In-Reply-To: References: <1497917.UsyfH1jcIx@localhost> Message-ID: This is what Tristan mentioned as "L4 client intelligence", which means Hot Rod, not the network. On Mon, May 22, 2017 at 1:52 PM, Sebastian Laskawiec wrote: > > > On Fri, May 19, 2017 at 1:18 PM Wolf Fink wrote: > >> +1 for Vojtech >> >> yes, the clients currently need to be moved to the new cluster in one shot; that was discussed before. >> And that makes the migration hard, because most of the customers are not able to make that happen. >> So there is a small possibility of inconsistency if clients connected to the old server update entries that the new server has already migrated. 
>> >> I see two options >> 1) >> the source server needs to actively propagate updates to the target >> 2) >> with the new L4 strategy all clients are moved automatically to the >> target. So the source is not updated. >> I only see a small possibility for this to happen during the switch >> - a client might still have a request in flight to the source until other clients >> are moved to the target and have already accessed the key >> - a new client connects with old properties; here we need to ensure that >> the first request is redirected to the target and does not update the source >> > > Could you please tell me what L4 means in this context? Are you referring > to L4 routing/switching (transport level) or the new Hot Rod client > intelligence? > > In Kubernetes/OpenShift, fronting an Infinispan cluster with a load balancer > could do the trick. If all clients use the Service URL, then once Kubernetes > kills all the "old" Pods, all TCP socket connections will break and the clients > will retry. This will result in a massive load of error messages, but the > clients will eventually connect to the new cluster. > > >> >> On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek >> wrote: >> >>> On Wednesday, 17 May 2017 at 16:56:25 CEST, Tristan Tarrant wrote: >>> > 2) Need a way to "rollback" the process in case of failures during the >>> > migration: redirecting the clients back to the original cluster without >>> > data loss. This would use the above L4 strategy. >>> >>> it's not only about redirecting clients - IIRC newly created entries on >>> the target >>> cluster are not propagated back to the source cluster during a rolling >>> upgrade, so >>> we also need to somehow sync this new data back to the source cluster during >>> the >>> rollback to avoid data loss. 
Same applies to "cancel process" feature >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170522/fc2e0f17/attachment-0001.html From ttarrant at redhat.com Tue May 23 03:59:37 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 23 May 2017 09:59:37 +0200 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-05-22 Message-ID: <0d92d80f-6360-d74d-add9-005e02d6ac38@redhat.com> Hi all, the logs for this week's meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-22-14.01.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Tue May 23 07:45:17 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 23 May 2017 11:45:17 +0000 Subject: [infinispan-dev] Exposing cluster deployed in the cloud In-Reply-To: <31605850-eeea-7805-7f1b-93117aa235b6@redhat.com> References: <31605850-eeea-7805-7f1b-93117aa235b6@redhat.com> Message-ID: I think the external/internal address translation should be provided by the user. I'm working on a prototype here: https://github.com/slaskawi/infinispan/commit/eeeeae7b567fd84946cba90153d7abf2dd0d6641 I will tidy it up and send a pull request later this week. 
On Mon, May 22, 2017 at 4:49 PM Tristan Tarrant wrote: > We would need to provide a way to supply the external address at > runtime, e.g. via JMX. > > Tristan > > On 5/22/17 2:50 PM, Sebastian Laskawiec wrote: > > Hey Tristan! > > > > I checked this part and it won't do the trick. The problem is that the > > server does not know which address is used for exposing its services. > > Moreover, this address can change with time. > > > > Thanks, > > Sebastian > > > > On Tue, May 9, 2017 at 3:28 PM Tristan Tarrant > > wrote: > > > > Sebastian, > > are you familiar with Hot Rod's proxyHost/proxyPort [1]? In server > it is > > configured using the external-host / external-port attributes on the > > topology-state-transfer element [2] > > > > > > > > [1] > > https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43 > > [2] > > https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203 > > > > > > On 5/8/17 9:57 AM, Sebastian Laskawiec wrote: > > > Hey guys! > > > > > > A while ago I started working on exposing an Infinispan cluster > which is > > > hosted in Kubernetes to the outside world: > > > > > > [inline image scrubbed from archive] > > > > > > I'm currently struggling to get a solution like this into the > > platform [1] > > > but in the meantime I created a very simple POC and I'm testing it > > > locally [2]. > > > > > > There are two main problems with the scenario described above: > > > > > > 1. The Infinispan server announces internal addresses (172.17.x.x) > > to the > > > client. The client needs to remap them into external ones > > (172.29.x.x). > > > 2. A custom Consistent Hash needs to be supplied to the Hot Rod > > client. > > > When accessing the cache, the Hot Rod client needs to calculate the > > server > > > id for the internal address and then map it to the external one. 
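[Editorial note: the internal-to-external remapping described in problem 1 above could be sketched roughly as follows. The `AddressMapper` class and its method names are hypothetical, invented for illustration; they are not the actual Hot Rod client API nor the `addServerMapping` proposal itself.]

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the client learns internal addresses (e.g. 172.17.x.x)
// from the server's topology update and remaps them to externally reachable
// ones (e.g. 172.29.x.x) before opening connections.
public class AddressMapper {

    private final Map<InetSocketAddress, InetSocketAddress> internalToExternal =
            new HashMap<>();

    public void addMapping(InetSocketAddress internal, InetSocketAddress external) {
        internalToExternal.put(internal, external);
    }

    // Falls back to the announced address when no mapping is configured,
    // so clients inside the cluster network keep working unchanged.
    public InetSocketAddress toExternal(InetSocketAddress internal) {
        return internalToExternal.getOrDefault(internal, internal);
    }
}
```

Problem 2 (consistent hashing) is what makes this harder than a plain proxy: the hash must still be computed against the internal address the server announced, with the translation applied only at connect time.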
> > > > > > If there will be no strong opinions regarding to this, I plan to > > > implement this shortly. There will be additional method in Hot Rod > > > Client configuration (ConfigurationBuilder#addServerMapping(String > > > mappingClass)) which will be responsible for mapping external > > addresses > > > to internal and vice-versa. > > > > > > Thoughts? > > > > > > Thanks, > > > Sebastian > > > > > > [1] https://github.com/kubernetes/community/pull/446 > > > [2] https://github.com/slaskawi/external-ip-proxy > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org infinispan-dev at lists.jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > > > > SEBASTIAN?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red HatEMEA > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/3945fd79/attachment.html From slaskawi at redhat.com Tue May 23 07:54:28 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 23 May 2017 11:54:28 +0000 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: Hey! So I think we have no extreme naysayers to Optional. So let me try to sum up what we have achieved so far: - In a macro-scale benchmark based on the REST interface, using Optionals didn't lower performance. - +1 for using it in public APIs, especially for those using a functional style. - Creating lots of Optional instances might add some pressure on the GC, so we need to be careful when using them in hot code paths. In such cases it is required to run a micro-scale benchmark to make sure the performance didn't drop. The microbenchmark should also be followed by a macro-scale benchmark - PerfJobAck. Also, keep an eye on Eden space in such cases. If you agree with me, and there is no hard evidence that using Optional degrades performance significantly, I would like to issue a pull request and put those findings into the contributing guide [1]. Thanks, Sebastian [1] https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño wrote: > I think Sanne's right here; any differences in such a large-scale test are > hard to decipher. > > Also, as mentioned in a previous email, my view on its usage is the same as > Sanne's: > > * Definitely in APIs/SPIs. > * Be gentle with it in internals. > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 18 May 2017, at 14:35, Sanne Grinovero wrote: > > > > Hi Sebastian, > > > > sorry but I think you've been wasting time, I hope it was fun :) This is > not the right methodology to "settle" the matter (unless you want Radim's > eyes to get bloody..). 
> > > > Any change in such a complex system will only affect the performance > metrics if you're actually addressing the dominant bottleneck. In some > cases it might be CPU, like if your system is at 90%+ CPU then it's likely > that reviewing the code to use less CPU would be beneficial; but even that > can be counter-productive, for example if you're having contention caused > by optimistic locking and you fail to address that while making something > else "faster" the performance loss on the optimistic lock might become > asymptotic. > > > > A good reason to avoid excessive usage of Optional (and *excessive* > doesn't mean a couple dozen in a millions lines of code..) is to not run > out of eden space, especially for all the code running in interpreted mode. > > > > In your case you've been benchmarking a hugely complex beast, not least > over REST! When running the REST Server I doubt that allocation in eden is > your main problem. You just happened to have a couple Optionals on your > path; sure performance changed but there's no enough data in this way to > figure out what exactly happened: > > - did it change at all or was it just because of a lucky optimisation? > (The JIT will always optimise stuff differently even when re-running the > same code) > > - did the overall picture improve because this code became much *less* > slower? > > > > The real complexity in benchmarking is to accurately understand why it > changed; this should also tell you why it didn't change more, or less.. > > > > To be fair I actually agree that it's very likely that C2 can make any > performance penalty disappear.. that's totally possible, although it's > unlikely to be faster than just reading the field (assuming we don't need > to do branching because of null-checks but C2 can optimise that as well). > > Still this requires the code to be optimised by JIT first, so it won't > prevent us from creating a gazillion of instances if we abuse its usage > irresponsibly. 
Fighting internal NPEs is a matter of writing better code; > I'm not against some "Optional" being strategically placed but I believe > it's much nicer for most internal code to just avoid null, use "final", and > initialize things aggressively. > > > > Sure use Optional where it makes sense, probably most on APIs and SPIs, > but please don't go overboard with it in internals. That's all I said in > the original debate. > > > > In case you want to benchmark the impact of Optional make a JMH based > microbenchmark - that's interesting to see what C2 is capable of - but even > so that's not going to tell you much on the impact it would have to patch > thousands of code all around Infinispan. And it will need some peer review > before it can tell you anything at all ;) > > > > It's actually a very challenging topic, as we produce libraries meant > for "anyone to use" and don't get to set the hardware specification > requirements it's hard to predict if we should optimise the system for > this/that resource consumption. Some people will have plenty of CPU and > have problems with us needing too much memory, some others will have the > opposite.. the real challenge is in making internals "elastic" to such > factors and adaptable without making it too hard to tune. > > > > Thanks, > > Sanne > > > > > > > > On 18 May 2017 at 12:30, Sebastian Laskawiec > wrote: > > Hey! > > > > In our past we had a couple of discussions about whether we should or > should not use Optionals [1][2]. The main argument against it was > performance. > > > > On one hand we risk additional object allocation (the Optional itself) > and wrong inlining decisions taken by C2 compiler [3]. On the other hand we > all probably "feel" that both of those things shouldn't be a problem and > should be optimized by C2. Another argument was the Optional's doesn't give > us anything but as I checked, we introduced nearly 80 NullPointerException > bugs in two years [4]. 
So we might consider Optional as a way of fighting > those things. The final argument that I've seen was about lack of higher > order functions which is simply not true since we have #map, #filter and > #flatmap functions. You can do pretty amazing things with this. > > > > I decided to check the performance when refactoring REST interface. I > created a PR with Optionals [5], ran performance tests, removed all > Optionals and reran tests. You will be surprised by the results [6]: > > > > Test case > > With Optionals [%] Without Optionals > > Run 1 Run 2 Avg Run 1 Run 2 Avg > > Non-TX reads 10 threads > > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 > > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 > > Non-TX reads 100 threads > > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 > > Response time -6.15 14.93 4.39 7.88 6.49 7.19 > > Non-TX writes 10 threads > > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 > > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 > > Non-TX writes 100 threads > > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 > > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 > > > > I also created JMH + Flight Recorder tests and again, the results showed > no evidence of slow down caused by Optionals [7]. > > > > Now please take those results with a grain of salt since they tend to > drift by a factor of +/-5% (sometimes even more). But it's very clear the > performance results are very similar if not the same. > > > > Having those numbers at hand, do we want to have Optionals in Infinispan > codebase or not? And if not, let's state it very clearly (and write it into > contributing guide), it's because we don't like them. Not because of > performance. 
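[Editorial note: the JMH-based microbenchmark Sanne suggests in the quoted thread could be structured like the sketch below. Illustrative only: in a real benchmark these methods would carry `@Benchmark` annotations and feed a `Blackhole` under the JMH harness; the lookup table is invented for demonstration.]

```java
import java.util.Optional;

// Sketch of a microbenchmark pair: the same lookup written with an explicit
// null check and with Optional, so JMH could compare allocation rate and
// inlining behavior between the two. Plain methods here for readability.
public class OptionalBench {

    private static final String[] TABLE = {"a", null, "c", null};

    static String withNullCheck(int i) {
        String v = TABLE[i & 3];
        return v != null ? v.toUpperCase() : "MISS";
    }

    static String withOptional(int i) {
        // Optional.ofNullable allocates an Optional per call unless C2
        // eliminates it via escape analysis - exactly what JMH would measure.
        return Optional.ofNullable(TABLE[i & 3])
                .map(String::toUpperCase)
                .orElse("MISS");
    }
}
```

The point of keeping both variants semantically identical is that any throughput difference JMH reports can then be attributed to the Optional wrapping itself rather than to the logic.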
> > > > Thanks, > > Sebastian > > > > [1] > http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > > [2] > http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html > > [3] > http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html > > [4] > https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > > [5] https://github.com/infinispan/infinispan/pull/5094 > > [6] > https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > > [7] > https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 > > -- > > SEBASTIAN ŁASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/376afb4e/attachment-0001.html From galder at redhat.com Tue May 23 09:07:32 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 23 May 2017 15:07:32 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> Message-ID: Hi all, I've just finished integrating Infinispan with an HB 6.x branch Steve had; all tests pass now [1]. Yeah, we didn't commit to a final location for these changes. As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in the Hibernate main repo; 6.x is just a branch that Steve has. These are the options available to us: 1. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 6.x branch. 2. Integrate the 9.x provider as part of a second Infinispan module in the Hibernate 5.x branch. 3. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 5.x branch. This is problematic since the provider is not backwards compatible. 4. Integrate the 9.x provider in Infinispan and deliver it as part of Infinispan rather than Hibernate. I'm not sure which one I prefer the most, TBH... 1. is the ideal solution, but it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ Thoughts? [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 -- Galder Zamarreño Infinispan, Red Hat > On 16 May 2017, at 17:06, Paul Ferraro wrote: > > Thanks Galder. I read through the infinispan-dev thread on the > subject, but I'm not sure what was concluded regarding the eventual > home for this code. > Once the testsuite passes, is the plan to commit to hibernate master? > If so, I will likely fork these changes into a WF module (and adapt it > for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 > until Hibernate6 is integrated. 
> > Radim - one thing you mentioned on that infinispan-dev thread puzzled > me: you said that invalidation mode offers no benefits over > replication. How is that possible? Can you elaborate? > > Paul > > On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: >> I'm on the move, not sure if Paul/Radim saw my replies: >> >> galderz, rvansa: Hey guys - is there a plan for Hibernate & >> ISPN 9? >> pferraro: Galder has been working on that >> pferraro: though I haven't seen any results but a list of >> stuff that needs to be changed >> galderz: which Hibernate branch are you targeting? >> pferraro: 5.2, but there are minute differences between 5.x >> in terms of the parts that need love to get Infinispan 9 support >> *** Mode change: +v vblagoje on #infinispan by ChanServ >> (ChanServ at services.) >> rvansa: are you suggesting that 5.0 or 5.1 branches will be >> adapted to additionally support infinispan 9? how is that >> possible? >>> pferraro: i'm working on it as we speak... >>> pferraro: down to 16 failuresd >>> pferraro: i started a couple of months ago, but had talks/demos to >> prepare >>> pferraro: i've got back to working on it this week >> ... >>> pferraro: rvansa >>> rvansa: minute differences my ass ;p >>> pferraro: did you see my replies? >>> i got disconnected while replying... 
>> hmm - no - I didn't >> galderz: ^ >>> pferraro: so, working on the HB + I9 integration as we speak >>> pferraro: i started a couple of months back but had talks/demos to >> prepare and had to put that aside >>> pferraro: i'm down to 16 failures >>> pferraro: serious refactoring required of the integration to get it >> to compile and the tests to pass >>> pferraro: need to switch to async interceptor stack in 2lc >> integration and get all the subtle changes right >>> pferraro: it's a painstaking job basically >>> pferraro: i'm working on >> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>> pferraro: i can't remember where i branched off, but it's a branch >> that steve had since master was focused on 5.x >>> pferraro: i've no idea when/where we'll integrate this, but one >> thing is for sure: it's nowhere near backwards compatible >>> actually, fixed one this morning, so down to 15 failures >>> pferraro: any suggestions/wishes? >>> is anyone out there? ;) >> >> Cheers, >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> From galder at redhat.com Tue May 23 09:10:12 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 23 May 2017 15:10:12 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> Message-ID: One final thing, [1] requires ISPN-7853 fix, which will be part of 9.0.1. I know the branch currently points to 9.1.0-SNAPSHOT. That was just simply cos I tested out the fix in master first. Cheers, -- Galder Zamarre?o Infinispan, Red Hat > On 23 May 2017, at 15:07, Galder Zamarre?o wrote: > > Hi all, > > I've just finished integrating Infinispan with a HB 6.x branch Steve had, all tests pass now [1]. > > Yeah, we didn't commit on the final location for these changes. > > As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has. 
> > These are the options availble to us: > > 1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x branch. > > 2. Integrate 9.x provider as part of a second Infinispan module in Hibernate 5.x branch. > > 3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x branch. This is problematic for since the provider is not backwards compatible. > > 4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan rather than Hibernate. > > I'm not sure which one I prefer the most TBH... 1. is the ideal solution but doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ > > Thoughts? > > [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 > -- > Galder Zamarre?o > Infinispan, Red Hat > >> On 16 May 2017, at 17:06, Paul Ferraro wrote: >> >> Thanks Galder. I read through the infinispan-dev thread on the >> subject, but I'm not sure what was concluded regarding the eventual >> home for this code. >> Once the testsuite passes, is the plan to commit to hibernate master? >> If so, I will likely fork these changes into a WF module (and adapt it >> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 >> until Hibernate6 is integrated. >> >> Radim - one thing you mentioned on that infinispan-dev thread puzzled >> me: you said that invalidation mode offers no benefits over >> replication. How is that possible? Can you elaborate? >> >> Paul >> >> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: >>> I'm on the move, not sure if Paul/Radim saw my replies: >>> >>> galderz, rvansa: Hey guys - is there a plan for Hibernate & >>> ISPN 9? >>> pferraro: Galder has been working on that >>> pferraro: though I haven't seen any results but a list of >>> stuff that needs to be changed >>> galderz: which Hibernate branch are you targeting? 
>>> pferraro: 5.2, but there are minute differences between 5.x >>> in terms of the parts that need love to get Infinispan 9 support >>> *** Mode change: +v vblagoje on #infinispan by ChanServ >>> (ChanServ at services.) >>> rvansa: are you suggesting that 5.0 or 5.1 branches will be >>> adapted to additionally support infinispan 9? how is that >>> possible? >>>> pferraro: i'm working on it as we speak... >>>> pferraro: down to 16 failuresd >>>> pferraro: i started a couple of months ago, but had talks/demos to >>> prepare >>>> pferraro: i've got back to working on it this week >>> ... >>>> pferraro: rvansa >>>> rvansa: minute differences my ass ;p >>>> pferraro: did you see my replies? >>>> i got disconnected while replying... >>> hmm - no - I didn't >>> galderz: ^ >>>> pferraro: so, working on the HB + I9 integration as we speak >>>> pferraro: i started a couple of months back but had talks/demos to >>> prepare and had to put that aside >>>> pferraro: i'm down to 16 failures >>>> pferraro: serious refactoring required of the integration to get it >>> to compile and the tests to pass >>>> pferraro: need to switch to async interceptor stack in 2lc >>> integration and get all the subtle changes right >>>> pferraro: it's a painstaking job basically >>>> pferraro: i'm working on >>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>>> pferraro: i can't remember where i branched off, but it's a branch >>> that steve had since master was focused on 5.x >>>> pferraro: i've no idea when/where we'll integrate this, but one >>> thing is for sure: it's nowhere near backwards compatible >>>> actually, fixed one this morning, so down to 15 failures >>>> pferraro: any suggestions/wishes? >>>> is anyone out there? 
;) >>> >>> Cheers, >>> -- >>> Galder Zamarreño >>> Infinispan, Red Hat >>> > From remerson at redhat.com Tue May 23 09:10:32 2017 From: remerson at redhat.com (Ryan Emerson) Date: Tue, 23 May 2017 09:10:32 -0400 (EDT) Subject: [infinispan-dev] Infinispan 9.0.1.Final Released In-Reply-To: <679877690.11477031.1495544562454.JavaMail.zimbra@redhat.com> Message-ID: <1320493520.11484797.1495545032491.JavaMail.zimbra@redhat.com> Dear all, Infinispan 9.0.1.Final has been released: http://blog.infinispan.org/2017/05/infinispan-901final-released.html Cheers Ryan From slaskawi at redhat.com Tue May 23 09:19:58 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 23 May 2017 13:19:58 +0000 Subject: [infinispan-dev] KUBE_PING 0.9.3 released Message-ID: Hey! I'm happy to announce that JGroups KUBE_PING 0.9.3 has been released. The major changes include: - Fixed releasing connections for the embedded HTTP server - Fixed JGroups 3/4 compatibility issues - Fixed the test suite - Fixed `Message.setSrc` compatibility issues - Updated documentation The bits can be downloaded from [1] as soon as the sync completes. Please download them from [2] in the meantime. I would also like to recommend a recent blog post by Bela Ban [3]. KUBE_PING was completely revamped (no embedded HTTP server, reduced dependencies) and we plan to use the new 1.0.0 version in Infinispan soon! If you'd like to try it out, grab it from here [4]. 
Thanks, Sebastian [1] https://repository.jboss.org/nexus/content/repositories/public-jboss/org/jgroups/kubernetes/kubernetes/0.9.3/ [2] https://origin-repository.jboss.org/nexus/content/repositories/public-jboss/org/jgroups/kubernetes/kubernetes/0.9.3/ [3] http://belaban.blogspot.com/2017/05/running-infinispan-cluster-with.html [4] https://repository.jboss.org/nexus/content/repositories/public-jboss/org/jgroups/kubernetes/kubernetes/1.0.0-SNAPSHOT/ -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/865661e1/attachment.html From rvansa at redhat.com Tue May 23 09:46:48 2017 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 23 May 2017 15:46:48 +0200 Subject: [infinispan-dev] REST Refactoring - breaking changes In-Reply-To: References: Message-ID: <95f6b3f1-7f25-712e-b5fb-9e196cc93591@redhat.com> On 05/16/2017 11:05 AM, Sebastian Laskawiec wrote: > Hey guys! > > I'm working on the REST server refactoring and I changed some of the > previous behavior. Keeping in mind that we are implementing this in a > minor release, I tried to make those changes really cosmetic: > > * RESTEasy as well as the Servlet API have been removed from the modules and > BOM. If your app relied on them, you'll need to specify them > separately in your pom. > * The previous implementation picked application/text as the default > content type. I replaced it with text/plain with a charset, which is > more precise and seems to be more widely adopted. > * Putting an entry without any TTL or Idle Time made it live > forever (which was BTW aligned with the docs). I switched to > server-configured defaults in this case. If you want to have an > entry that lives forever, just specify 0 or -1 there. > * Requesting an entry with the wrong mime type (imagine it was stored > using application/octet-stream and now you're requesting > text/plain) caused Bad Request. 
Now I switched it to Not Acceptable, > which was designed specifically to cover this type of use case. > * In compatibility mode the server often tried to "guess" the > mime type (the decision was often between text/plain and > application/octet-stream). I honestly think it was a wrong move > and made the server-side code very hard to read and its result hard to predict. > Now the server always returns text/plain by > default. If you want to get a byte stream back, just add `Accept: > application/octet-stream`. > * The server can be started with port 0. This way you are 100% sure > that it will start on a unique port without colliding with any > other service. > How can the client know the port number, then? Is the actual port exposed through JMX? > * The REST server hosts an HTML page if queried using GET on the default > context. I think it was a bug that it didn't work correctly before. > Did it return 404? What's on that page? Do we expose keys/values/entries anywhere in the REST endpoint? > * UTF-8 is now the default charset. You may always ask the server to > return a different encoding using the Accept header. The charset is not > returned with binary mime types. > * If a HEAD request results in an error, a message will be returned > to the client. Even though this behavior breaks Commons HTTP > Client (HEAD requests are handled slightly differently and a returned > payload causes the client to hang), I think it's > beneficial to tell the user what went wrong. It's worth mentioning > that the Jetty/Netty HTTP clients work correctly. > * RestServer doesn't implement Lifecycle now. The protocol server > doesn't support a start() method without any arguments. You always > need to specify a configuration + an Embedded Cache Manager. > > Even though it's a long list, I think all those changes were worth it. > Please let me know if you don't agree. A couple of other questions: * do we accept GET with a Range header on keys? 
What about delta-updating entries with Content-Range on PUTs? * For PUTs/POSTs, do we return 200/201/204 according to the spec? (modified/created/modified) * Do we have any way to execute a replace (or the other prev-value returning ops) through REST using a single request? For example, let DELETE return the previous entity (it should return 200 & entity, or 204 and no response) * Do we handle OPTIONS in any way? Radim > > Thanks, > Sebastian > > -- > > SEBASTIAN ŁASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From dan.berindei at gmail.com Tue May 23 09:58:37 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 23 May 2017 15:58:37 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: I wouldn't say I'm an extreme naysayer, but I do have 2 issues with Optional: 1. Performance becomes harder to quantify: the allocations may or may not be eliminated, and a change in one part of the code may change how allocations are eliminated in a completely different part of the code. 2. My personal opinion is it's just ugly... instead of having one field that could be null or non-null, you now have a field that could be null, Optional.empty(), or Optional.of(something). Cheers Dan On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec wrote: > Hey! > > So I think we have no extreme naysayers to Optional. So let me try to sum > up what we have achieved so far: > > - In a macro-scale benchmark based on the REST interface, using Optionals > didn't lower the performance. > - +1 for using it in public APIs, especially for those using > functional style. > - Creating lots of Optional instances might add some pressure on GC, > so we need to be careful when using them in hot code paths.
In such cases > it is required to run a micro-scale benchmark to make sure the performance > didn't drop. The microbenchmark should also be followed by a macro-scale > benchmark - PerfJobAck. Also, keep an eye on Eden space in such cases. > > If you agree with me, and there is no hard evidence that using Optional > degrades performance significantly, I would like to issue a pull request and > put those findings into the contributing guide [1]. > > Thanks, > Sebastian > > [1] https://github.com/infinispan/infinispan/tree/ > master/documentation/src/main/asciidoc/contributing > > On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño > wrote: > >> I think Sanne's right here, any differences in such a large-scale test are >> hard to decipher. >> >> Also, as mentioned in a previous email, my view on its usage is the same as >> Sanne's: >> >> * Definitely in APIs/SPIs. >> * Be gentle with it in internals. >> >> Cheers, >> -- >> Galder Zamarreño >> Infinispan, Red Hat >> >> > On 18 May 2017, at 14:35, Sanne Grinovero wrote: >> > >> > Hi Sebastian, >> > >> > sorry but I think you've been wasting time, I hope it was fun :) This >> is not the right methodology to "settle" the matter (unless you want >> Radim's eyes to get bloody..). >> > >> > Any change in such a complex system will only affect the performance >> metrics if you're actually addressing the dominant bottleneck. In some >> cases it might be CPU, like if your system is at 90%+ CPU then it's likely >> that reviewing the code to use less CPU would be beneficial; but even that >> can be counter-productive, for example if you're having contention caused >> by optimistic locking and you fail to address that while making something >> else "faster" the performance loss on the optimistic lock might become >> asymptotic. >> > >> > A good reason to avoid excessive usage of Optional (and *excessive* >> doesn't mean a couple dozen in a million lines of code..)
is to not run >> out of eden space, especially for all the code running in interpreted mode. >> > >> > In your case you've been benchmarking a hugely complex beast, not least >> over REST! When running the REST Server I doubt that allocation in eden is >> your main problem. You just happened to have a couple Optionals on your >> path; sure, performance changed, but there's not enough data here to >> figure out what exactly happened: >> > - did it change at all or was it just because of a lucky optimisation? >> (The JIT will always optimise stuff differently even when re-running the >> same code) >> > - did the overall picture improve because this code became much *less* >> slow? >> > >> > The real complexity in benchmarking is to accurately understand why it >> changed; this should also tell you why it didn't change more, or less.. >> > >> > To be fair I actually agree that it's very likely that C2 can make any >> performance penalty disappear.. that's totally possible, although it's >> unlikely to be faster than just reading the field (assuming we don't need >> to do branching because of null-checks but C2 can optimise that as well). >> > Still this requires the code to be optimised by the JIT first, so it won't >> prevent us from creating a gazillion instances if we abuse its usage >> irresponsibly. Fighting internal NPEs is a matter of writing better code; >> I'm not against some "Optional" being strategically placed but I believe >> it's much nicer for most internal code to just avoid null, use "final", and >> initialize things aggressively. >> > >> > Sure, use Optional where it makes sense, probably most on APIs and SPIs, >> but please don't go overboard with it in internals. That's all I said in >> the original debate.
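[Editor's note: Sanne's guideline above, Optional at the API/SPI boundary and plain references internally, can be sketched roughly as follows. CacheFacade and its methods are invented for illustration and are not Infinispan API.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical wrapper showing the split discussed above:
// plain nullable references internally, Optional only at the public API.
public class CacheFacade {
    // Internal state: a plain map, no Optional allocated per entry.
    private final Map<String, String> store = new HashMap<>();

    public void put(String key, String value) {
        store.put(key, value);
    }

    // Public API: the Optional return type documents possible absence,
    // so callers never need to defend against null.
    public Optional<String> find(String key) {
        return Optional.ofNullable(store.get(key));
    }

    public static void main(String[] args) {
        CacheFacade cache = new CacheFacade();
        cache.put("k", "v");
        if (!cache.find("k").orElse("").equals("v")) throw new AssertionError();
        if (cache.find("missing").isPresent()) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The single Optional.ofNullable call at the boundary keeps per-entry allocation out of the internal hot path.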
>> > >> > In case you want to benchmark the impact of Optional make a JMH based >> microbenchmark - that's interesting to see what C2 is capable of - but even >> so that's not going to tell you much on the impact it would have to patch >> thousands of code all around Infinispan. And it will need some peer review >> before it can tell you anything at all ;) >> > >> > It's actually a very challenging topic, as we produce libraries meant >> for "anyone to use" and don't get to set the hardware specification >> requirements it's hard to predict if we should optimise the system for >> this/that resource consumption. Some people will have plenty of CPU and >> have problems with us needing too much memory, some others will have the >> opposite.. the real challenge is in making internals "elastic" to such >> factors and adaptable without making it too hard to tune. >> > >> > Thanks, >> > Sanne >> > >> > >> > >> > On 18 May 2017 at 12:30, Sebastian Laskawiec >> wrote: >> > Hey! >> > >> > In our past we had a couple of discussions about whether we should or >> should not use Optionals [1][2]. The main argument against it was >> performance. >> > >> > On one hand we risk additional object allocation (the Optional itself) >> and wrong inlining decisions taken by C2 compiler [3]. On the other hand we >> all probably "feel" that both of those things shouldn't be a problem and >> should be optimized by C2. Another argument was the Optional's doesn't give >> us anything but as I checked, we introduced nearly 80 NullPointerException >> bugs in two years [4]. So we might consider Optional as a way of fighting >> those things. The final argument that I've seen was about lack of higher >> order functions which is simply not true since we have #map, #filter and >> #flatmap functions. You can do pretty amazing things with this. >> > >> > I decided to check the performance when refactoring REST interface. 
I >> created a PR with Optionals [5], ran performance tests, removed all >> Optionals and reran tests. You will be surprised by the results [6]: >> > >> > Test case >> > With Optionals [%] Without Optionals >> > Run 1 Run 2 Avg Run 1 Run 2 Avg >> > Non-TX reads 10 threads >> > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 >> > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 >> > Non-TX reads 100 threads >> > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 >> > Response time -6.15 14.93 4.39 7.88 6.49 7.19 >> > Non-TX writes 10 threads >> > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 >> > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 >> > Non-TX writes 100 threads >> > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 >> > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 >> > >> > I also created JMH + Flight Recorder tests and again, the results >> showed no evidence of slow down caused by Optionals [7]. >> > >> > Now please take those results with a grain of salt since they tend to >> drift by a factor of +/-5% (sometimes even more). But it's very clear the >> performance results are very similar if not the same. >> > >> > Having those numbers at hand, do we want to have Optionals in >> Infinispan codebase or not? And if not, let's state it very clearly (and >> write it into contributing guide), it's because we don't like them. Not >> because of performance. 
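[Editor's note: the higher-order functions Sebastian mentions (#map, #filter, #flatMap) compose as shown below; the media-type parsing is an invented example, not REST server code.]

```java
import java.util.Optional;

public class OptionalChaining {
    public static void main(String[] args) {
        Optional<String> header = Optional.of("  text/plain; charset=UTF-8  ");

        // map and filter chain without any explicit null checks.
        Optional<String> mediaType = header
                .map(String::trim)
                .map(v -> v.split(";")[0])
                .filter(v -> !v.isEmpty());

        if (!mediaType.orElse("").equals("text/plain")) throw new AssertionError();

        // flatMap flattens the nested Optional produced by an
        // Optional-returning function.
        Optional<Integer> length = mediaType.flatMap(v -> Optional.of(v.length()));
        if (length.orElse(0) != 10) throw new AssertionError();
        System.out.println("ok");
    }
}
```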
>> > >> > Thanks, >> > Sebastian >> > >> > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017- >> March/017370.html >> > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016- >> August/016796.html >> > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and- >> low-latency.html >> > [4] https://issues.jboss.org/issues/?jql=project%20%3D% >> 20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E% >> 20%22NullPointerException%22%20AND%20created%20%3E%3D% >> 202015-04-27%20AND%20created%20%3C%3D%202017-04-27 >> > [5] https://github.com/infinispan/infinispan/pull/5094 >> > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/ >> 1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing >> > [7] https://github.com/infinispan/infinispan/pull/5094# >> issuecomment-296970673 >> > -- >> > SEBASTIAN ?ASKAWIEC >> > INFINISPAN DEVELOPER >> > Red Hat EMEA >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/585f560a/attachment.html From karesti at redhat.com Tue May 23 11:45:50 2017 From: karesti at redhat.com (Katia Aresti) Date: Tue, 23 May 2017 17:45:50 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: Dan, I disagree with point 2 where you say "You now have a field that could be null, Optional.empty(), or Optional.of(something)" This is the point of Optional. You shouldn't have a field that has these 3 possible values, just two of them = Some or None. If the field is mutable, it should be initialised to Optional.empty(). In the case of an API, Optional implicitly says that the return value can be empty, but when you return a "normal" object, either the user reads the doc, or they will have bugs or boilerplate code defending against the possible null value (even if this API will never ever return null) :o) Cheers On Tue, May 23, 2017 at 3:58 PM, Dan Berindei wrote: > I wouldn't say I'm an extreme naysayer, but I do have 2 issues with > Optional: > > 1. Performance becomes harder to quantify: the allocations may or may not > be eliminated, and a change in one part of the code may change how > allocations are eliminated in a completely different part of the code. > 2. My personal opinion is it's just ugly... instead of having one field > that could be null or non-null, you now have a field that could be null, > Optional.empty(), or Optional.of(something). > > Cheers > Dan > > > > On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec > wrote: > >> Hey! >> >> So I think we have no extreme naysayers to Optional. So let me try to sum >> up what we have achieved so: >> >> - In macroscale benchmark based on REST interface using Optionals >> didn't lower the performance. >> - +1 for using it in public APIs, especially for those using >> functional style.
>> - Creating lots of Optional instances might add some pressure on GC, >> so we need to be careful when using them in hot code paths. In such cases >> it is required to run a micro scale benchamark to make sure the performance >> didn't drop. The microbenchmark should also be followed by macro scale >> benchamrk - PerfJobAck. Also, keep an eye on Eden space in such cases. >> >> If you agree with me, and there are no hard evidence that using Optional >> degrade performance significantly, I would like to issue a pull request and >> put those findings into contributing guide [1]. >> >> Thanks, >> Sebastian >> >> [1] https://github.com/infinispan/infinispan/tree/master/ >> documentation/src/main/asciidoc/contributing >> >> On Mon, May 22, 2017 at 6:36 PM Galder Zamarre?o >> wrote: >> >>> I think Sanne's right here, any differences in such large scale test are >>> hard to decipher. >>> >>> Also, as mentioned in a previous email, my view on its usage is same as >>> Sanne's: >>> >>> * Definitely in APIs/SPIs. >>> * Be gentle with it internals. >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> > On 18 May 2017, at 14:35, Sanne Grinovero >>> wrote: >>> > >>> > Hi Sebastian, >>> > >>> > sorry but I think you've been wasting time, I hope it was fun :) This >>> is not the right methodology to "settle" the matter (unless you want >>> Radim's eyes to get bloody..). >>> > >>> > Any change in such a complex system will only affect the performance >>> metrics if you're actually addressing the dominant bottleneck. In some >>> cases it might be CPU, like if your system is at 90%+ CPU then it's likely >>> that reviewing the code to use less CPU would be beneficial; but even that >>> can be counter-productive, for example if you're having contention caused >>> by optimistic locking and you fail to address that while making something >>> else "faster" the performance loss on the optimistic lock might become >>> asymptotic. 
>>> > >>> > A good reason to avoid excessive usage of Optional (and *excessive* >>> doesn't mean a couple dozen in a millions lines of code..) is to not run >>> out of eden space, especially for all the code running in interpreted mode. >>> > >>> > In your case you've been benchmarking a hugely complex beast, not >>> least over REST! When running the REST Server I doubt that allocation in >>> eden is your main problem. You just happened to have a couple Optionals on >>> your path; sure performance changed but there's no enough data in this way >>> to figure out what exactly happened: >>> > - did it change at all or was it just because of a lucky >>> optimisation? (The JIT will always optimise stuff differently even when >>> re-running the same code) >>> > - did the overall picture improve because this code became much >>> *less* slower? >>> > >>> > The real complexity in benchmarking is to accurately understand why it >>> changed; this should also tell you why it didn't change more, or less.. >>> > >>> > To be fair I actually agree that it's very likely that C2 can make any >>> performance penalty disappear.. that's totally possible, although it's >>> unlikely to be faster than just reading the field (assuming we don't need >>> to do branching because of null-checks but C2 can optimise that as well). >>> > Still this requires the code to be optimised by JIT first, so it won't >>> prevent us from creating a gazillion of instances if we abuse its usage >>> irresponsibly. Fighting internal NPEs is a matter of writing better code; >>> I'm not against some "Optional" being strategically placed but I believe >>> it's much nicer for most internal code to just avoid null, use "final", and >>> initialize things aggressively. >>> > >>> > Sure use Optional where it makes sense, probably most on APIs and >>> SPIs, but please don't go overboard with it in internals. That's all I said >>> in the original debate. 
>>> > >>> > In case you want to benchmark the impact of Optional make a JMH based >>> microbenchmark - that's interesting to see what C2 is capable of - but even >>> so that's not going to tell you much on the impact it would have to patch >>> thousands of code all around Infinispan. And it will need some peer review >>> before it can tell you anything at all ;) >>> > >>> > It's actually a very challenging topic, as we produce libraries meant >>> for "anyone to use" and don't get to set the hardware specification >>> requirements it's hard to predict if we should optimise the system for >>> this/that resource consumption. Some people will have plenty of CPU and >>> have problems with us needing too much memory, some others will have the >>> opposite.. the real challenge is in making internals "elastic" to such >>> factors and adaptable without making it too hard to tune. >>> > >>> > Thanks, >>> > Sanne >>> > >>> > >>> > >>> > On 18 May 2017 at 12:30, Sebastian Laskawiec >>> wrote: >>> > Hey! >>> > >>> > In our past we had a couple of discussions about whether we should or >>> should not use Optionals [1][2]. The main argument against it was >>> performance. >>> > >>> > On one hand we risk additional object allocation (the Optional itself) >>> and wrong inlining decisions taken by C2 compiler [3]. On the other hand we >>> all probably "feel" that both of those things shouldn't be a problem and >>> should be optimized by C2. Another argument was the Optional's doesn't give >>> us anything but as I checked, we introduced nearly 80 NullPointerException >>> bugs in two years [4]. So we might consider Optional as a way of fighting >>> those things. The final argument that I've seen was about lack of higher >>> order functions which is simply not true since we have #map, #filter and >>> #flatmap functions. You can do pretty amazing things with this. >>> > >>> > I decided to check the performance when refactoring REST interface. 
I >>> created a PR with Optionals [5], ran performance tests, removed all >>> Optionals and reran tests. You will be surprised by the results [6]: >>> > >>> > Test case >>> > With Optionals [%] Without Optionals >>> > Run 1 Run 2 Avg Run 1 Run 2 Avg >>> > Non-TX reads 10 threads >>> > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 >>> > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 >>> > Non-TX reads 100 threads >>> > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 >>> > Response time -6.15 14.93 4.39 7.88 6.49 7.19 >>> > Non-TX writes 10 threads >>> > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 >>> > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 >>> > Non-TX writes 100 threads >>> > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 >>> > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 >>> > >>> > I also created JMH + Flight Recorder tests and again, the results >>> showed no evidence of slow down caused by Optionals [7]. >>> > >>> > Now please take those results with a grain of salt since they tend to >>> drift by a factor of +/-5% (sometimes even more). But it's very clear the >>> performance results are very similar if not the same. >>> > >>> > Having those numbers at hand, do we want to have Optionals in >>> Infinispan codebase or not? And if not, let's state it very clearly (and >>> write it into contributing guide), it's because we don't like them. Not >>> because of performance. 
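[Editor's note: Katia's convention from earlier in the thread can be sketched as follows; ConnectionHolder is an invented class, not Infinispan code. The field is initialised eagerly to Optional.empty(), so only two states ("empty" and "present") are ever observable, answering Dan's three-states objection.]

```java
import java.util.Optional;

public class ConnectionHolder {
    // Never null: initialised to empty, reassigned only via Optional.of.
    private Optional<String> address = Optional.empty();

    public void connect(String addr) {
        this.address = Optional.of(addr); // Optional.of rejects null arguments
    }

    public Optional<String> address() {
        return address;
    }

    public static void main(String[] args) {
        ConnectionHolder holder = new ConnectionHolder();
        // Before any assignment the field is empty, not null: no NPE possible.
        if (holder.address() == null) throw new AssertionError();
        if (holder.address().isPresent()) throw new AssertionError();
        holder.connect("localhost:8080");
        if (!holder.address().orElse("").equals("localhost:8080")) throw new AssertionError();
        System.out.println("ok");
    }
}
```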
>>> > >>> > Thanks, >>> > Sebastian >>> > >>> > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/ >>> 017370.html >>> > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/ >>> 016796.html >>> > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low- >>> latency.html >>> > [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN% >>> 20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20% >>> 22NullPointerException%22%20AND%20created%20%3E%3D%202015- >>> 04-27%20AND%20created%20%3C%3D%202017-04-27 >>> > [5] https://github.com/infinispan/infinispan/pull/5094 >>> > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was >>> 0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing >>> > [7] https://github.com/infinispan/infinispan/pull/5094#issuecomm >>> ent-296970673 >>> > -- >>> > SEBASTIAN ?ASKAWIEC >>> > INFINISPAN DEVELOPER >>> > Red Hat EMEA >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> >> SEBASTIAN ?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red Hat EMEA >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was 
scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/27da71a1/attachment-0001.html From vrigamon at redhat.com Tue May 23 11:45:51 2017 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Tue, 23 May 2017 17:45:51 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: +1 to Dan's opinion On Tue, May 23, 2017 at 3:58 PM, Dan Berindei wrote: > I wouldn't say I'm an extreme naysayer, but I do have 2 issues with > Optional: > > 1. Performance becomes harder to quantify: the allocations may or may not > be eliminated, and a change in one part of the code may change how > allocations are eliminated in a completely different part of the code. > 2. My personal opinion is it's just ugly... instead of having one field > that could be null or non-null, you now have a field that could be null, > Optional.empty(), or Optional.of(something). > > Cheers > Dan > > > > On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec > wrote: > >> Hey! >> >> So I think we have no extreme naysayers to Optional. So let me try to sum >> up what we have achieved so: >> >> - In macroscale benchmark based on REST interface using Optionals >> didn't lower the performance. >> - +1 for using it in public APIs, especially for those using >> functional style. >> - Creating lots of Optional instances might add some pressure on GC, >> so we need to be careful when using them in hot code paths. In such cases >> it is required to run a micro scale benchamark to make sure the performance >> didn't drop. The microbenchmark should also be followed by macro scale >> benchamrk - PerfJobAck. Also, keep an eye on Eden space in such cases. >> >> If you agree with me, and there are no hard evidence that using Optional >> degrade performance significantly, I would like to issue a pull request and >> put those findings into contributing guide [1]. 
>> >> Thanks, >> Sebastian >> >> [1] https://github.com/infinispan/infinispan/tree/master/ >> documentation/src/main/asciidoc/contributing >> >> On Mon, May 22, 2017 at 6:36 PM Galder Zamarre?o >> wrote: >> >>> I think Sanne's right here, any differences in such large scale test are >>> hard to decipher. >>> >>> Also, as mentioned in a previous email, my view on its usage is same as >>> Sanne's: >>> >>> * Definitely in APIs/SPIs. >>> * Be gentle with it internals. >>> >>> Cheers, >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> > On 18 May 2017, at 14:35, Sanne Grinovero >>> wrote: >>> > >>> > Hi Sebastian, >>> > >>> > sorry but I think you've been wasting time, I hope it was fun :) This >>> is not the right methodology to "settle" the matter (unless you want >>> Radim's eyes to get bloody..). >>> > >>> > Any change in such a complex system will only affect the performance >>> metrics if you're actually addressing the dominant bottleneck. In some >>> cases it might be CPU, like if your system is at 90%+ CPU then it's likely >>> that reviewing the code to use less CPU would be beneficial; but even that >>> can be counter-productive, for example if you're having contention caused >>> by optimistic locking and you fail to address that while making something >>> else "faster" the performance loss on the optimistic lock might become >>> asymptotic. >>> > >>> > A good reason to avoid excessive usage of Optional (and *excessive* >>> doesn't mean a couple dozen in a millions lines of code..) is to not run >>> out of eden space, especially for all the code running in interpreted mode. >>> > >>> > In your case you've been benchmarking a hugely complex beast, not >>> least over REST! When running the REST Server I doubt that allocation in >>> eden is your main problem. 
You just happened to have a couple Optionals on >>> your path; sure performance changed but there's no enough data in this way >>> to figure out what exactly happened: >>> > - did it change at all or was it just because of a lucky >>> optimisation? (The JIT will always optimise stuff differently even when >>> re-running the same code) >>> > - did the overall picture improve because this code became much >>> *less* slower? >>> > >>> > The real complexity in benchmarking is to accurately understand why it >>> changed; this should also tell you why it didn't change more, or less.. >>> > >>> > To be fair I actually agree that it's very likely that C2 can make any >>> performance penalty disappear.. that's totally possible, although it's >>> unlikely to be faster than just reading the field (assuming we don't need >>> to do branching because of null-checks but C2 can optimise that as well). >>> > Still this requires the code to be optimised by JIT first, so it won't >>> prevent us from creating a gazillion of instances if we abuse its usage >>> irresponsibly. Fighting internal NPEs is a matter of writing better code; >>> I'm not against some "Optional" being strategically placed but I believe >>> it's much nicer for most internal code to just avoid null, use "final", and >>> initialize things aggressively. >>> > >>> > Sure use Optional where it makes sense, probably most on APIs and >>> SPIs, but please don't go overboard with it in internals. That's all I said >>> in the original debate. >>> > >>> > In case you want to benchmark the impact of Optional make a JMH based >>> microbenchmark - that's interesting to see what C2 is capable of - but even >>> so that's not going to tell you much on the impact it would have to patch >>> thousands of code all around Infinispan. 
And it will need some peer review >>> before it can tell you anything at all ;) >>> > >>> > It's actually a very challenging topic, as we produce libraries meant >>> for "anyone to use" and don't get to set the hardware specification >>> requirements it's hard to predict if we should optimise the system for >>> this/that resource consumption. Some people will have plenty of CPU and >>> have problems with us needing too much memory, some others will have the >>> opposite.. the real challenge is in making internals "elastic" to such >>> factors and adaptable without making it too hard to tune. >>> > >>> > Thanks, >>> > Sanne >>> > >>> > >>> > >>> > On 18 May 2017 at 12:30, Sebastian Laskawiec >>> wrote: >>> > Hey! >>> > >>> > In our past we had a couple of discussions about whether we should or >>> should not use Optionals [1][2]. The main argument against it was >>> performance. >>> > >>> > On one hand we risk additional object allocation (the Optional itself) >>> and wrong inlining decisions taken by C2 compiler [3]. On the other hand we >>> all probably "feel" that both of those things shouldn't be a problem and >>> should be optimized by C2. Another argument was the Optional's doesn't give >>> us anything but as I checked, we introduced nearly 80 NullPointerException >>> bugs in two years [4]. So we might consider Optional as a way of fighting >>> those things. The final argument that I've seen was about lack of higher >>> order functions which is simply not true since we have #map, #filter and >>> #flatmap functions. You can do pretty amazing things with this. >>> > >>> > I decided to check the performance when refactoring REST interface. I >>> created a PR with Optionals [5], ran performance tests, removed all >>> Optionals and reran tests. 
You will be surprised by the results [6]: >>> > >>> > Test case >>> > With Optionals [%] Without Optionals >>> > Run 1 Run 2 Avg Run 1 Run 2 Avg >>> > Non-TX reads 10 threads >>> > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 >>> > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 >>> > Non-TX reads 100 threads >>> > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 >>> > Response time -6.15 14.93 4.39 7.88 6.49 7.19 >>> > Non-TX writes 10 threads >>> > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 >>> > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 >>> > Non-TX writes 100 threads >>> > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 >>> > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 >>> > >>> > I also created JMH + Flight Recorder tests and again, the results >>> showed no evidence of slow down caused by Optionals [7]. >>> > >>> > Now please take those results with a grain of salt since they tend to >>> drift by a factor of +/-5% (sometimes even more). But it's very clear the >>> performance results are very similar if not the same. >>> > >>> > Having those numbers at hand, do we want to have Optionals in >>> Infinispan codebase or not? And if not, let's state it very clearly (and >>> write it into contributing guide), it's because we don't like them. Not >>> because of performance. 
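[Editor's note: a toy illustration of Dan's point 1, with invented method names. This is NOT a benchmark (JMH would be needed for that, as Sanne notes elsewhere in the thread); it only shows that the two styles compute the same result, while the Optional form allocates an object per call unless the JIT's escape analysis eliminates it.]

```java
import java.util.Optional;

public class HotPath {
    // Allocates an Optional per call unless escape analysis removes it.
    static String viaOptional(String raw) {
        return Optional.ofNullable(raw).map(String::trim).orElse("default");
    }

    // Never allocates; the null check is explicit.
    static String viaNullCheck(String raw) {
        return raw == null ? "default" : raw.trim();
    }

    public static void main(String[] args) {
        // Both forms must agree; only the allocation behaviour differs.
        if (!viaOptional(" x ").equals(viaNullCheck(" x "))) throw new AssertionError();
        if (!viaOptional(null).equals("default")) throw new AssertionError();
        if (!viaNullCheck(null).equals("default")) throw new AssertionError();
        System.out.println("ok");
    }
}
```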
>>> > >>> > Thanks, >>> > Sebastian >>> > >>> > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/ >>> 017370.html >>> > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/ >>> 016796.html >>> > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low- >>> latency.html >>> > [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN% >>> 20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20% >>> 22NullPointerException%22%20AND%20created%20%3E%3D%202015- >>> 04-27%20AND%20created%20%3C%3D%202017-04-27 >>> > [5] https://github.com/infinispan/infinispan/pull/5094 >>> > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was >>> 0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing >>> > [7] https://github.com/infinispan/infinispan/pull/5094#issuecomm >>> ent-296970673 >>> > -- >>> > SEBASTIAN ?ASKAWIEC >>> > INFINISPAN DEVELOPER >>> > Red Hat EMEA >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> >> SEBASTIAN ?ASKAWIEC >> >> INFINISPAN DEVELOPER >> >> Red Hat EMEA >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, 
Italy vrigamon at redhat.com irc: rigazilla -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170523/dd68879c/attachment-0001.html From belaban at mailbox.org Tue May 23 12:47:35 2017 From: belaban at mailbox.org (Bela Ban) Date: Tue, 23 May 2017 18:47:35 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: <441f3a85-111d-acef-d68c-794b672f06bd@mailbox.org> Actually, I'm an extreme naysayer! I actually voiced concerns, so I'm wondering where your assumption that there are no naysayers is coming from... :-) On 23/05/17 1:54 PM, Sebastian Laskawiec wrote: > Hey! > > So I think we have no extreme naysayers to Optional. So let me try to > sum up what we have achieved so far: > > * In a macro-scale benchmark based on the REST interface, using Optionals > didn't lower the performance. > * +1 for using it in public APIs, especially for those using > functional style. > * Creating lots of Optional instances might add some pressure on GC, > so we need to be careful when using them in hot code paths. In > such cases it is required to run a micro-scale benchmark to make > sure the performance didn't drop. The microbenchmark should also > be followed by a macro-scale benchmark - PerfJobAck. Also, keep an > eye on Eden space in such cases. > > If you agree with me, and there is no hard evidence that using > Optional degrades performance significantly, I would like to issue a > pull request and put those findings into the contributing guide [1]. > > Thanks, > Sebastian > > [1] > https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing > > On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño > wrote: > > I think Sanne's right here, any differences in such a large scale > test are hard to decipher. > > Also, as mentioned in a previous email, my view on its usage is the > same as Sanne's: > > * Definitely in APIs/SPIs. 
> * Be gentle with it in internals. > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 18 May 2017, at 14:35, Sanne Grinovero > wrote: > > > > Hi Sebastian, > > > > sorry but I think you've been wasting time, I hope it was fun :) This is not the right methodology to "settle" the matter (unless you want Radim's eyes to get bloody..). > > > > Any change in such a complex system will only affect the performance metrics if you're actually addressing the dominant bottleneck. In some cases it might be CPU, like if your system is at 90%+ CPU then it's likely that reviewing the code to use less CPU would be beneficial; but even that can be counter-productive, for example if you're having contention caused by optimistic locking and you fail to address that while making something else "faster", the performance loss on the optimistic lock might become asymptotic. > > > > A good reason to avoid excessive usage of Optional (and *excessive* doesn't mean a couple dozen in millions of lines of code..) is to not run out of eden space, especially for all the code running in interpreted mode. > > > > In your case you've been benchmarking a hugely complex beast, not least over REST! When running the REST Server I doubt that allocation in eden is your main problem. You just happened to have a couple of Optionals on your path; sure, performance changed, but there's not enough data here to figure out what exactly happened: > > - did it change at all or was it just because of a lucky optimisation? (The JIT will always optimise stuff differently even when re-running the same code) > > - did the overall picture improve because this code became much *less* slow? > > > > The real complexity in benchmarking is to accurately understand why it changed; this should also tell you why it didn't change more, or less.. > > > > To be fair I actually agree that it's very likely that C2 can make any performance penalty disappear.. 
that's totally possible, > although it's unlikely to be faster than just reading the field > (assuming we don't need to do branching because of null-checks but > C2 can optimise that as well). > > Still this requires the code to be optimised by JIT first, so it > won't prevent us from creating a gazillion of instances if we > abuse its usage irresponsibly. Fighting internal NPEs is a matter > of writing better code; I'm not against some "Optional" being > strategically placed but I believe it's much nicer for most > internal code to just avoid null, use "final", and initialize > things aggressively. > > > > Sure use Optional where it makes sense, probably most on APIs > and SPIs, but please don't go overboard with it in internals. > That's all I said in the original debate. > > > > In case you want to benchmark the impact of Optional make a JMH > based microbenchmark - that's interesting to see what C2 is > capable of - but even so that's not going to tell you much on the > impact it would have to patch thousands of code all around > Infinispan. And it will need some peer review before it can tell > you anything at all ;) > > > > It's actually a very challenging topic, as we produce libraries > meant for "anyone to use" and don't get to set the hardware > specification requirements it's hard to predict if we should > optimise the system for this/that resource consumption. Some > people will have plenty of CPU and have problems with us needing > too much memory, some others will have the opposite.. the real > challenge is in making internals "elastic" to such factors and > adaptable without making it too hard to tune. > > > > Thanks, > > Sanne > > > > > > > > On 18 May 2017 at 12:30, Sebastian Laskawiec > > wrote: > > Hey! > > > > In our past we had a couple of discussions about whether we > should or should not use Optionals [1][2]. The main argument > against it was performance. 
> > > > On one hand we risk additional object allocation (the Optional > itself) and wrong inlining decisions taken by the C2 compiler [3]. On > the other hand we all probably "feel" that both of those things > shouldn't be a problem and should be optimized by C2. Another > argument was that Optional doesn't give us anything, but as I > checked, we introduced nearly 80 NullPointerException bugs in two > years [4]. So we might consider Optional as a way of fighting > those things. The final argument that I've seen was about a lack of > higher order functions, which is simply not true since we have > #map, #filter and #flatMap functions. You can do pretty amazing > things with this. > > > > I decided to check the performance when refactoring the REST > interface. I created a PR with Optionals [5], ran performance > tests, removed all Optionals and reran tests. You will be > surprised by the results [6]:
> >
> >                            With Optionals [%]      Without Optionals [%]
> > Test case                Run 1   Run 2    Avg     Run 1   Run 2    Avg
> > Non-TX reads 10 threads
> >   Throughput             32.54   32.87   32.71    31.74   34.04   32.89
> >   Response time         -24.12  -24.63  -24.38   -24.37  -25.69  -25.03
> > Non-TX reads 100 threads
> >   Throughput              6.48  -12.79   -3.16    -7.06   -6.14   -6.60
> >   Response time          -6.15   14.93    4.39     7.88    6.49    7.19
> > Non-TX writes 10 threads
> >   Throughput              9.21    7.60    8.41     4.66    7.15    5.91
> >   Response time          -8.92   -7.11   -8.02    -5.29   -6.93   -6.11
> > Non-TX writes 100 threads
> >   Throughput              2.53    1.65    2.09    -1.16    4.67    1.76
> >   Response time          -2.13   -1.79   -1.96     0.91   -4.67   -1.88
> >
> > I also created JMH + Flight Recorder tests and again, the > results showed no evidence of slow down caused by Optionals [7]. > > > > Now please take those results with a grain of salt since they > tend to drift by a factor of +/-5% (sometimes even more). But it's > very clear the performance results are very similar if not the same. > > > > Having those numbers at hand, do we want to have Optionals in the > Infinispan codebase or not? 
And if not, let's state it very > clearly (and write it into contributing guide), it's because we > don't like them. Not because of performance. > > > > Thanks, > > Sebastian > > > > [1] > http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > > [2] > http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html > > [3] > http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html > > [4] > https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > > [5] https://github.com/infinispan/infinispan/pull/5094 > > [6] > https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > > [7] > https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN?ASKAWIEC > > INFINISPAN DEVELOPER > > Red HatEMEA > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Bela Ban, JGroups lead (http://www.jgroups.org) From rvansa at redhat.com Wed May 24 03:49:38 2017 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 24 May 2017 09:49:38 +0200 Subject: 
[infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> Message-ID: <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> Hi Galder, I think that (3) is simply not possible (from a non-technical perspective) and I don't think we have the manpower to maintain 2 different modules (2). The current version does not seem ready (generic enough) to get into Infinispan, so either (1), or a lot more work towards (4) (which would be my preference). I haven't thought about all the steps for (4), but it seems that UnorderedDistributionInterceptor and LockingInterceptor should get into Infinispan as a flavour of repl/dist cache mode that applies updates in parallel on all owners without any ordering; it's up to the user to guarantee that changes to an entry are commutative. The 2LC code itself shouldn't use the TombstoneCallInterceptor/VersionedCallInterceptor now that there is the functional API; you should move the behavior to functions. Regarding the invalidation mode, I think that a variant that would void any writes to the entry (begin/end invalidation) could be moved to Infinispan, too. I am not even sure if current invalidation in Infinispan is useful - you can't transparently cache access to a repeatable-read isolated DB (where reads block writes), but the blocking as we do in 2LC now is probably too strong if we're working with a DB using just read committed as the isolation level. I was always trying to enforce linearizability; TBH I don't know how to write a test that would test a more relaxed consistency. Btw., I've noticed that you've set the isolation level to READ_COMMITTED in the default configuration - isolation level does not apply at all to non-transactional caches, so please remove that as it would just be noise. Radim On 05/23/2017 03:07 PM, Galder Zamarreño wrote: > Hi all, > > I've just finished integrating Infinispan with a HB 6.x branch Steve had, all tests pass now [1]. 
> > Yeah, we didn't commit on the final location for these changes. > > As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in the Hibernate main repo. 6.x is just a branch that Steve has. > > These are the options available to us: > > 1. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 6.x branch. > > 2. Integrate the 9.x provider as part of a second Infinispan module in the Hibernate 5.x branch. > > 3. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 5.x branch. This is problematic since the provider is not backwards compatible. > > 4. Integrate the 9.x provider in Infinispan and deliver it as part of Infinispan rather than Hibernate. > > I'm not sure which one I prefer the most TBH... 1. is the ideal solution but it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ > > Thoughts? > > [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 > -- > Galder Zamarreño > Infinispan, Red Hat > >> On 16 May 2017, at 17:06, Paul Ferraro wrote: >> >> Thanks Galder. I read through the infinispan-dev thread on the >> subject, but I'm not sure what was concluded regarding the eventual >> home for this code. >> Once the testsuite passes, is the plan to commit to hibernate master? >> If so, I will likely fork these changes into a WF module (and adapt it >> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 >> until Hibernate6 is integrated. >> >> Radim - one thing you mentioned on that infinispan-dev thread puzzled >> me: you said that invalidation mode offers no benefits over >> replication. How is that possible? Can you elaborate? >> >> Paul >> >> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño wrote: >>> I'm on the move, not sure if Paul/Radim saw my replies: >>> >>> galderz, rvansa: Hey guys - is there a plan for Hibernate & >>> ISPN 9? 
>>> pferraro: Galder has been working on that >>> pferraro: though I haven't seen any results but a list of >>> stuff that needs to be changed >>> galderz: which Hibernate branch are you targeting? >>> pferraro: 5.2, but there are minute differences between 5.x >>> in terms of the parts that need love to get Infinispan 9 support >>> *** Mode change: +v vblagoje on #infinispan by ChanServ >>> (ChanServ at services.) >>> rvansa: are you suggesting that 5.0 or 5.1 branches will be >>> adapted to additionally support infinispan 9? how is that >>> possible? >>>> pferraro: i'm working on it as we speak... >>>> pferraro: down to 16 failuresd >>>> pferraro: i started a couple of months ago, but had talks/demos to >>> prepare >>>> pferraro: i've got back to working on it this week >>> ... >>>> pferraro: rvansa >>>> rvansa: minute differences my ass ;p >>>> pferraro: did you see my replies? >>>> i got disconnected while replying... >>> hmm - no - I didn't >>> galderz: ^ >>>> pferraro: so, working on the HB + I9 integration as we speak >>>> pferraro: i started a couple of months back but had talks/demos to >>> prepare and had to put that aside >>>> pferraro: i'm down to 16 failures >>>> pferraro: serious refactoring required of the integration to get it >>> to compile and the tests to pass >>>> pferraro: need to switch to async interceptor stack in 2lc >>> integration and get all the subtle changes right >>>> pferraro: it's a painstaking job basically >>>> pferraro: i'm working on >>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>>> pferraro: i can't remember where i branched off, but it's a branch >>> that steve had since master was focused on 5.x >>>> pferraro: i've no idea when/where we'll integrate this, but one >>> thing is for sure: it's nowhere near backwards compatible >>>> actually, fixed one this morning, so down to 15 failures >>>> pferraro: any suggestions/wishes? >>>> is anyone out there? 
;) >>> Cheers, >>> -- >>> Galder Zamarreño >>> Infinispan, Red Hat >>> -- Radim Vansa JBoss Performance Team From rvansa at redhat.com Wed May 24 04:04:10 2017 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 24 May 2017 10:04:10 +0200 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: Message-ID: <26d94e24-d80d-c9c6-bbcf-c397a13a1e35@redhat.com> I haven't checked Sebastian's refactored code, but does it use Optionals as a *field* type? That's misuse (same as using it as an arg); it's intended solely as a method return type. Radim On 05/23/2017 05:45 PM, Katia Aresti wrote: > Dan, I disagree with point 2 where you say "You now have a field that > could be null, Optional.empty(), or Optional.of(something)" > > This is the point of Optional. You shouldn't have a field that has > these 3 possible values, just two of them = Some or None. If the field > is mutable, it should be initialised to Optional.empty(). In the case > of an API, Optional implicitly says that the return value can be > empty, but when you return a "normal" object, either the user reads > the doc, or they will have bugs or boilerplate code defending against a > possible null value (even if this API will never ever return null) > > :o) > > Cheers > > > > On Tue, May 23, 2017 at 3:58 PM, Dan Berindei > wrote: > > I wouldn't say I'm an extreme naysayer, but I do have 2 issues > with Optional: > > 1. Performance becomes harder to quantify: the allocations may or > may not be eliminated, and a change in one part of the code may > change how allocations are eliminated in a completely different > part of the code. > 2. My personal opinion is it's just ugly... instead of having one > field that could be null or non-null, you now have a field that > could be null, Optional.empty(), or Optional.of(something). > > Cheers > Dan > > > > On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec > > wrote: > > Hey! > > So I think we have no extreme naysayers to Optional. 
So let me > try to sum up what we have achieved so far: > > * In a macro-scale benchmark based on the REST interface, using > Optionals didn't lower the performance. > * +1 for using it in public APIs, especially for those using > functional style. > * Creating lots of Optional instances might add some > pressure on GC, so we need to be careful when using them > in hot code paths. In such cases it is required to run a > micro-scale benchmark to make sure the performance didn't > drop. The microbenchmark should also be followed by a macro > scale benchmark - PerfJobAck. Also, keep an eye on Eden > space in such cases. > > If you agree with me, and there is no hard evidence that > using Optional degrades performance significantly, I would like > to issue a pull request and put those findings into the > contributing guide [1]. > > Thanks, > Sebastian > > [1] > https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing > > > On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño > > wrote: > > I think Sanne's right here, any differences in such a large > scale test are hard to decipher. > > Also, as mentioned in a previous email, my view on its > usage is the same as Sanne's: > > * Definitely in APIs/SPIs. > * Be gentle with it in internals. > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 18 May 2017, at 14:35, Sanne Grinovero > > wrote: > > > > Hi Sebastian, > > > > sorry but I think you've been wasting time, I hope it > was fun :) This is not the right methodology to "settle" > the matter (unless you want Radim's eyes to get bloody..). > > > > Any change in such a complex system will only affect the > performance metrics if you're actually addressing the > dominant bottleneck. 
In some cases it might be CPU, like > if your system is at 90%+ CPU then it's likely that > reviewing the code to use less CPU would be beneficial; > but even that can be counter-productive, for example if > you're having contention caused by optimistic locking and > you fail to address that while making something else > "faster" the performance loss on the optimistic lock might > become asymptotic. > > > > A good reason to avoid excessive usage of Optional (and > *excessive* doesn't mean a couple dozen in a millions > lines of code..) is to not run out of eden space, > especially for all the code running in interpreted mode. > > > > In your case you've been benchmarking a hugely complex > beast, not least over REST! When running the REST Server I > doubt that allocation in eden is your main problem. You > just happened to have a couple Optionals on your path; > sure performance changed but there's no enough data in > this way to figure out what exactly happened: > > - did it change at all or was it just because of a > lucky optimisation? (The JIT will always optimise stuff > differently even when re-running the same code) > > - did the overall picture improve because this code > became much *less* slower? > > > > The real complexity in benchmarking is to accurately > understand why it changed; this should also tell you why > it didn't change more, or less.. > > > > To be fair I actually agree that it's very likely that > C2 can make any performance penalty disappear.. that's > totally possible, although it's unlikely to be faster than > just reading the field (assuming we don't need to do > branching because of null-checks but C2 can optimise that > as well). > > Still this requires the code to be optimised by JIT > first, so it won't prevent us from creating a gazillion of > instances if we abuse its usage irresponsibly. 
Fighting > internal NPEs is a matter of writing better code; I'm not > against some "Optional" being strategically placed but I > believe it's much nicer for most internal code to just > avoid null, use "final", and initialize things aggressively. > > > > Sure use Optional where it makes sense, probably most on > APIs and SPIs, but please don't go overboard with it in > internals. That's all I said in the original debate. > > > > In case you want to benchmark the impact of Optional > make a JMH based microbenchmark - that's interesting to > see what C2 is capable of - but even so that's not going > to tell you much on the impact it would have to patch > thousands of code all around Infinispan. And it will need > some peer review before it can tell you anything at all ;) > > > > It's actually a very challenging topic, as we produce > libraries meant for "anyone to use" and don't get to set > the hardware specification requirements it's hard to > predict if we should optimise the system for this/that > resource consumption. Some people will have plenty of CPU > and have problems with us needing too much memory, some > others will have the opposite.. the real challenge is in > making internals "elastic" to such factors and adaptable > without making it too hard to tune. > > > > Thanks, > > Sanne > > > > > > > > On 18 May 2017 at 12:30, Sebastian Laskawiec > > wrote: > > Hey! > > > > In our past we had a couple of discussions about whether > we should or should not use Optionals [1][2]. The main > argument against it was performance. > > > > On one hand we risk additional object allocation (the > Optional itself) and wrong inlining decisions taken by C2 > compiler [3]. On the other hand we all probably "feel" > that both of those things shouldn't be a problem and > should be optimized by C2. Another argument was the > Optional's doesn't give us anything but as I checked, we > introduced nearly 80 NullPointerException bugs in two > years [4]. 
So we might consider Optional as a way of > fighting those things. The final argument that I've seen > was about a lack of higher order functions, which is simply > not true since we have #map, #filter and #flatMap > functions. You can do pretty amazing things with this. > > > > I decided to check the performance when refactoring the REST > interface. I created a PR with Optionals [5], ran > performance tests, removed all Optionals and reran tests. > You will be surprised by the results [6]:
> >
> >                            With Optionals [%]      Without Optionals [%]
> > Test case                Run 1   Run 2    Avg     Run 1   Run 2    Avg
> > Non-TX reads 10 threads
> >   Throughput             32.54   32.87   32.71    31.74   34.04   32.89
> >   Response time         -24.12  -24.63  -24.38   -24.37  -25.69  -25.03
> > Non-TX reads 100 threads
> >   Throughput              6.48  -12.79   -3.16    -7.06   -6.14   -6.60
> >   Response time          -6.15   14.93    4.39     7.88    6.49    7.19
> > Non-TX writes 10 threads
> >   Throughput              9.21    7.60    8.41     4.66    7.15    5.91
> >   Response time          -8.92   -7.11   -8.02    -5.29   -6.93   -6.11
> > Non-TX writes 100 threads
> >   Throughput              2.53    1.65    2.09    -1.16    4.67    1.76
> >   Response time          -2.13   -1.79   -1.96     0.91   -4.67   -1.88
> >
> > I also created JMH + Flight Recorder tests and again, > the results showed no evidence of slow down caused by > Optionals [7]. > > > > Now please take those results with a grain of salt since > they tend to drift by a factor of +/-5% (sometimes even > more). But it's very clear the performance results are > very similar if not the same. > > > > Having those numbers at hand, do we want to have > Optionals in the Infinispan codebase or not? And if not, let's > state it very clearly (and write it into the contributing > guide), it's because we > don't like them. Not because of > performance. 
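The #map/#filter/#flatMap chaining argued for above can be shown with plain JDK code. The following is only an illustrative sketch (the `find` helper and the map-backed lookup are invented for the example; this is not Infinispan API):

```java
import java.util.Map;
import java.util.Optional;

public class OptionalChaining {

    // Returning Optional instead of null makes the "value may be absent"
    // case explicit in the method signature itself.
    static Optional<String> find(Map<String, String> store, String key) {
        return Optional.ofNullable(store.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> store = Map.of("greeting", "hello");

        // #map and #filter chain without a single null check.
        String present = find(store, "greeting")
                .map(String::toUpperCase)
                .filter(v -> v.length() > 3)
                .orElse("<absent>");

        // Absent keys fall through to the orElse default.
        String missing = find(store, "nope").orElse("<absent>");

        System.out.println(present + " " + missing); // HELLO <absent>
    }
}
```

The same chain written against a nullable return value would need two explicit null checks; whether that trade-off is worth the extra allocation in hot paths is exactly the question debated in this thread.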
> > > > Thanks, > > Sebastian > > > > [1] > http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > > > [2] > http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html > > > [3] > http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html > > > [4] > https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > > > [5] https://github.com/infinispan/infinispan/pull/5094 > > > [6] > https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > > > [7] > https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 > > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > > SEBASTIAN?ASKAWIEC > > INFINISPAN DEVELOPER > > Red HatEMEA > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > 
https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Wed May 24 04:44:34 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 24 May 2017 08:44:34 +0000 Subject: [infinispan-dev] REST Refactoring - breaking changes In-Reply-To: <95f6b3f1-7f25-712e-b5fb-9e196cc93591@redhat.com> References: <95f6b3f1-7f25-712e-b5fb-9e196cc93591@redhat.com> Message-ID: On Tue, May 23, 2017 at 5:06 PM Radim Vansa wrote: > On 05/16/2017 11:05 AM, Sebastian Laskawiec wrote: > > Hey guys! > > > > I'm working on REST Server refactoring and I changed some of the > > previous behavior. Bearing in mind that we are implementing this in a > > minor release, I tried to make those changes really cosmetic: > > > > * RESTEasy as well as the Servlet API have been removed from modules and > > the BOM. If your app relied on them, you'll need to specify them > > separately in your pom. > > * The previous implementation picked application/text as the default > > content type. I replaced it with text/plain with a charset, which is > > more precise and seems to be more widely adopted. > > * Putting an entry without any TTL or Idle Time made it live > > forever (which was BTW aligned with the docs). I switched to > > server-configured defaults in this case. If you want to have an > > entry that lives forever, just specify 0 or -1 there. > > * Requesting an entry with the wrong MIME type (imagine it was stored > > using application/octet-stream and now you're requesting > > text/plain) caused a Bad Request. Now I switched it to Not Acceptable, > > which was designed specifically to cover this type of use case. > > * In compatibility mode the server often tried to "guess" the > > MIME type (the decision was often between text/plain and > > application/octet-stream). I honestly think it was a wrong move > > and made the server side code very hard to read and predict what > > would be the result. Now the server always returns text/plain by > > default. 
If you want to get a byte stream back, just add `Accept: > > application/octet-stream`. > > * The server can be started with port 0. This way you are 100% sure > > that it will start using a unique port without colliding with any > > other service. > > > How can the client know the port number, then? Is the actual port exposed > through JMX? > > > * The REST server hosts an HTML page if queried using GET on the default > > context. I think it was a bug that it didn't work correctly before. > > > Did it return 404? What's on that page? Do we expose keys/values/entries > anywhere in the REST endpoint? > Exactly. You may try it using our Docker image and invoking something like this: curl -v -u user:changeme http://172.17.0.6:8080/rest > > > * The UTF-8 charset is now the default. You may always ask the server to > > return a different encoding using the Accept header. The charset is not > > returned with binary MIME types. > > * If a HEAD request results in an error, a message will be returned > > to the client. Even though this behavior breaks Commons HTTP > > Client (HEAD requests are handled slightly differently and cause > > the client to hang if a payload is returned), I think it's > > beneficial to tell the user what went wrong. It's worth mentioning > > that the Jetty/Netty HTTP clients work correctly. > > * RestServer doesn't implement Lifecycle now. The protocol server > > doesn't support the start() method without any arguments. You always > > need to specify a configuration + an Embedded Cache Manager. > > > > Even though it's a long list, I think all those changes were worth it. > > Please let me know if you don't agree. > > A couple of other questions: > > * do we accept GET with a Range header on keys? What about delta-updating > entries with Content-Range on PUTs? > No, and AFAIK there are no plans to do it (but perhaps Tristan could shed some more light on this). We could use HTTP PATCH for delta updates... > * For PUTs/POSTs, do we return 200/201/204 according to the spec? 
> (modified/created/modified) > No, I decided to leave it as 200 for compatibility reasons. But I agree, we could change this as well. > * Do we have any way to execute a replace (or the other prev-value > returning ops) through REST using single request? For example let DELETE > return the prev entity (it should return 200 & entity or 204 and no > response) > Yes, PUT replaces previous value [1] if such exists (whereas POST would return a conflict). If for some reason you can not replace current value, you will get a preconditions failed error. [1] https://github.com/infinispan/infinispan/pull/5094/files#diff-58f67698080cc0242320614c921559a8R301 > * Do we handle OPTIONS in any way? > No. Do we need it? I haven't seen any real implementation that uses that for discovering REST operations. > > Radim > > > > > Thanks, > > Sebastian > > > > -- > > > > SEBASTIAN?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red HatEMEA > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ?ASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170524/7dc534aa/attachment.html From sanne at infinispan.org Wed May 24 11:18:52 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 24 May 2017 16:18:52 +0100 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> Message-ID: I would suggest option 4# : move the 2LC implementation to Infinispan. I already suggested this in the past, but to remind the main arguments I have: - neither repository is ideal, but having it here vs there is not just moving the problem as the two projects are different, have different timelines and different backwards compatibility policies. - Infinispan already depends on several Hibernate projects - even directly to Hibernate ORM itself via the JPA cachestore and indirectly via Hibernate Search and WildFly - so moving the Infinispan dependency out of the Hibernate repository helps to linearize the build for one consistent stack. For example right now WildFly master contains a combination of Hibernate ORM and Infinispan 2LC, which is not the same combination as tested by running the 2LC testsuite; this happens all the time and brings its own set of issues & delays. - Infinispan changes way more often - and as Radim already suggested in his previous email - there's more benefit in having such advanced code more closely tied to Infinispan so that it can benefit from new capabilities even though these might not be ready to be blessed as long term API. The 2LC SPI in Hibernate on the other hand is stable, and has to stay stable anyway, for other reasons not least integration with other providers, so there's no symmetric benefit in having this code in Hibernate. - Infinispan releases breaking changes with a more aggressive pace. 
It's more useful for Infinispan 9 to be able to support older versions of Hibernate ORM, than the drawback of a new ORM release not yet having a compatible Infinispan release. This last point is the only drawback I can see, and frankly it's both a temporary situation as Infinispan can catch up quickly and a very unlikely situation as Hibernate ORM is unlikely to change these SPIs in e.g. the next major release 6.0. - Infinispan occasionally breaks expectations of the 2LC code, as Galder just had to figure out with a painful upgrade. We can all agree that these changes are necessary, but I strongly believe it's useful to *know* about such breakages ASAP from the testsuite, not half a year later when a major dependency upgrade propagates to other projects. - The Hibernate ORM team would appreciate getting rid of debugging clustering and networking issues when there's the occasional failure, which are stressful as they are out of their area of expertise. I hope that makes sense? Thanks, Sanne On 24 May 2017 at 08:49, Radim Vansa wrote: > Hi Galder, > > I think that (3) is simply not possible (from a non-technical perspective) > and I don't think we have the manpower to maintain 2 different modules > (2). The current version does not seem ready (generic enough) to get > into Infinispan, so either (1), or a lot more work towards (4) (which > would be my preference). > > I haven't thought about all the steps for (4), but it seems that > UnorderedDistributionInterceptor and LockingInterceptor should get into > Infinispan as a flavour of repl/dist cache mode that applies updates in > parallel on all owners without any ordering; it's up to the user to > guarantee that changes to an entry are commutative. > > The 2LC code itself shouldn't use the > TombstoneCallInterceptor/VersionedCallInterceptor now that there is the > functional API, you should move the behavior to functions. 
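The "move the behavior to functions" suggestion quoted above can be illustrated with a loose, stdlib-only analogy (this is not Infinispan's actual FunctionalMap API; the `Versioned` record and `write` helper are hypothetical). The update logic lives in a function applied at the entry rather than in an interceptor, and because the function keeps the higher version regardless of arrival order, concurrent applications on different owners commute:

```java
import java.util.concurrent.ConcurrentHashMap;

public class FunctionalUpdateSketch {
    // Hypothetical versioned value; the merge rule makes writes commutative.
    record Versioned(long version, String value) {}

    static final ConcurrentHashMap<String, Versioned> cache = new ConcurrentHashMap<>();

    // Apply a write as a function of the current entry: keep the higher version.
    static Versioned write(String key, Versioned incoming) {
        return cache.compute(key, (k, current) ->
                (current == null || incoming.version() > current.version()) ? incoming : current);
    }

    public static void main(String[] args) {
        write("user:1", new Versioned(2, "b"));
        write("user:1", new Versioned(1, "a")); // stale write arrives late, ignored
        System.out.println(cache.get("user:1").value()); // prints "b" either way
    }
}
```

Since the outcome is independent of the order in which the two writes are applied, no ordering between owners is required — which is exactly the property Radim asks the user to guarantee.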
> > Regarding the invalidation mode, I think that a variant that would void > any writes to the entry (begin/end invalidation) could be moved to > Infinispan, too. I am not even sure if current invalidation in > Infinispan is useful - you can't transparently cache access to a > repeatable-read isolated DB (where reads block writes), but the blocking > as we do in 2LC now is probably too strong if we're working with a DB > using just read committed as the isolation level. I was always trying to > enforce linearizability, TBH I don't know how to write a test that would > test a more relaxed consistency. > > Btw., I've noticed that you've set isolation level to READ_COMMITTED in > the default configuration - isolation level does not apply at all to > non-transactional caches, so please remove that as it would just be noise. > > Radim > > On 05/23/2017 03:07 PM, Galder Zamarreño wrote: >> Hi all, >> >> I've just finished integrating Infinispan with a HB 6.x branch Steve had, all tests pass now [1]. >> >> Yeah, we didn't commit on the final location for these changes. >> >> As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has. >> >> These are the options available to us: >> >> 1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x branch. >> >> 2. Integrate 9.x provider as part of a second Infinispan module in Hibernate 5.x branch. >> >> 3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x branch. This is problematic since the provider is not backwards compatible. >> >> 4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan rather than Hibernate. >> >> I'm not sure which one I prefer the most TBH... 1. is the ideal solution but it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ >> >> Thoughts? 
>> >> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >>> On 16 May 2017, at 17:06, Paul Ferraro wrote: >>> >>> Thanks Galder. I read through the infinispan-dev thread on the >>> subject, but I'm not sure what was concluded regarding the eventual >>> home for this code. >>> Once the testsuite passes, is the plan to commit to hibernate master? >>> If so, I will likely fork these changes into a WF module (and adapt it >>> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 >>> until Hibernate6 is integrated. >>> >>> Radim - one thing you mentioned on that infinispan-dev thread puzzled >>> me: you said that invalidation mode offers no benefits over >>> replication. How is that possible? Can you elaborate? >>> >>> Paul >>> >>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: >>>> I'm on the move, not sure if Paul/Radim saw my replies: >>>> >>>> galderz, rvansa: Hey guys - is there a plan for Hibernate & >>>> ISPN 9? >>>> pferraro: Galder has been working on that >>>> pferraro: though I haven't seen any results but a list of >>>> stuff that needs to be changed >>>> galderz: which Hibernate branch are you targeting? >>>> pferraro: 5.2, but there are minute differences between 5.x >>>> in terms of the parts that need love to get Infinispan 9 support >>>> *** Mode change: +v vblagoje on #infinispan by ChanServ >>>> (ChanServ at services.) >>>> rvansa: are you suggesting that 5.0 or 5.1 branches will be >>>> adapted to additionally support infinispan 9? how is that >>>> possible? >>>>> pferraro: i'm working on it as we speak... >>>>> pferraro: down to 16 failuresd >>>>> pferraro: i started a couple of months ago, but had talks/demos to >>>> prepare >>>>> pferraro: i've got back to working on it this week >>>> ... >>>>> pferraro: rvansa >>>>> rvansa: minute differences my ass ;p >>>>> pferraro: did you see my replies? >>>>> i got disconnected while replying... 
>>>> hmm - no - I didn't >>>> galderz: ^ >>>>> pferraro: so, working on the HB + I9 integration as we speak >>>>> pferraro: i started a couple of months back but had talks/demos to >>>> prepare and had to put that aside >>>>> pferraro: i'm down to 16 failures >>>>> pferraro: serious refactoring required of the integration to get it >>>> to compile and the tests to pass >>>>> pferraro: need to switch to async interceptor stack in 2lc >>>> integration and get all the subtle changes right >>>>> pferraro: it's a painstaking job basically >>>>> pferraro: i'm working on >>>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>>>> pferraro: i can't remember where i branched off, but it's a branch >>>> that steve had since master was focused on 5.x >>>>> pferraro: i've no idea when/where we'll integrate this, but one >>>> thing is for sure: it's nowhere near backwards compatible >>>>> actually, fixed one this morning, so down to 15 failures >>>>> pferraro: any suggestions/wishes? >>>>> is anyone out there? ;) >>>> Cheers, >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> > > > -- > Radim Vansa > JBoss Performance Team > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From paul.ferraro at redhat.com Wed May 24 11:56:43 2017 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 24 May 2017 11:56:43 -0400 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> Message-ID: Option #4 would be my preference as well. The integration into WF has become increasingly cumbersome as the pace of Infinispan releases (and associated API changes) has increased. 
I would really rather avoid having to create and maintain forks of hibernate-infinispan to support the combination of Hibernate and Infinispan that don't exist in the upstream codebase. On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero wrote: > I would suggest option 4# : move the 2LC implementation to Infinispan. > > I already suggested this in the past, but to remind the main arguments I have: > > - neither repository is ideal, but having it here vs there is not > just moving the problem as the two projects are different, have > different timelines and different backwards compatibility policies. > > - Infinispan already depends on several Hibernate projects - even > directly to Hibernate ORM itself via the JPA cachestore and indirectly > via Hibernate Search and WildFly - so moving the Infinispan dependency > out of the Hibernate repository helps to linearize the build for one > consistent stack. > For example right now WildFly master contains a combination of > Hibernate ORM and Infinispan 2LC, which is not the same combination as > tested by running the 2LC testsuite; this happens all the time and > brings its own set of issues & delays. > > - Infinispan changes way more often - and as Radim already suggested > in his previous email - there's more benefit in having such advanced > code more closely tied to Infinispan so that it can benefit from new > capabilities even though these might not be ready to be blessed as > long term API. The 2LC SPI in Hibernate on the other hand is stable, > and has to stay stable anyway, for other reasons not least integration > with other providers, so there's no symmetric benefit in having this > code in Hibernate. > > - Infinispan releases breaking changes with a more aggressive pace. > It's more useful for Infinispan 9 to be able to support older versions > of Hibernate ORM, than the drawback of a new ORM release not having > yet an Infinispan release compatible. 
This last point is the only > drawback I can see, and franckly it's both a temporary situation as > Infinispan can catch up quickly and a very inlikely situation as > Hibernate ORM is unlikely to change these SPIs in e.g. the next major > release 6.0. > > - Infinispan occasionally breaks expectations of the 2LC code, as > Galder just had to figure out with a painful upgrade. We can all agree > that these changes are necessary, but I strongly believe it's useful > to *know* about such breackages ASAP from the testsuite, not half a > year later when a major dependency upgrade propagates to other > projects. > > - The Hibernate ORM would appreciate getting rid of debugging > clustering and networking issues when there's the occasional failure, > which are stressful as they are out of their area of expertise. > > I hope that makes sense? > > Thanks, > Sanne > > > > On 24 May 2017 at 08:49, Radim Vansa wrote: >> Hi Galder, >> >> I think that (3) is simply not possible (from non-technical perspective) >> and I don't think we have the manpower to maintain 2 different modules >> (2). The current version does not seem ready (generic enough) to get >> into Infinispan, so either (1), or a lot of more work towards (4) (which >> would be my preference). >> >> I haven't thought about all the steps for (4), but it seems that >> UnorderedDistributionInterceptor and LockingInterceptor should get into >> Infinispan as a flavour of repl/dist cache mode that applies update in >> parallel on all owners without any ordering; it's up to the user to >> guarantee that changes to an entry are commutative. >> >> The 2LC code itself shouldn't use the >> TombstoneCallInterceptor/VersionedCallInterceptor now that there is the >> functional API, you should move the behavior to functions. >> >> Regarding the invalidation mode, I think that a variant that would void >> any writes to the entry (begin/end invalidation) could be moved to >> Infinispan, too. 
I am not even sure if current invalidation in >> Infinispan is useful - you can't transparantly cache access to >> repeatable-read isolated DB (where reads block writes), but the blocking >> as we do in 2LC now is probably too strong if we're working with DB >> using just read committed as the isolation level. I was always trying to >> enforce linearizability, TBH I don't know how to write a test that would >> test a more relaxed consistency. >> >> Btw., I've noticed that you've set isolation level to READ_COMMITTED in >> default configuration - isolation level does not apply at all to >> non-transactional caches, so please remove that as it would be just a noise. >> >> Radim >> >> On 05/23/2017 03:07 PM, Galder Zamarre?o wrote: >>> Hi all, >>> >>> I've just finished integrating Infinispan with a HB 6.x branch Steve had, all tests pass now [1]. >>> >>> Yeah, we didn't commit on the final location for these changes. >>> >>> As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has. >>> >>> These are the options availble to us: >>> >>> 1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x branch. >>> >>> 2. Integrate 9.x provider as part of a second Infinispan module in Hibernate 5.x branch. >>> >>> 3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x branch. This is problematic for since the provider is not backwards compatible. >>> >>> 4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan rather than Hibernate. >>> >>> I'm not sure which one I prefer the most TBH... 1. is the ideal solution but doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ >>> >>> Thoughts? 
>>> >>> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>>> On 16 May 2017, at 17:06, Paul Ferraro wrote: >>>> >>>> Thanks Galder. I read through the infinispan-dev thread on the >>>> subject, but I'm not sure what was concluded regarding the eventual >>>> home for this code. >>>> Once the testsuite passes, is the plan to commit to hibernate master? >>>> If so, I will likely fork these changes into a WF module (and adapt it >>>> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 >>>> until Hibernate6 is integrated. >>>> >>>> Radim - one thing you mentioned on that infinispan-dev thread puzzled >>>> me: you said that invalidation mode offers no benefits over >>>> replication. How is that possible? Can you elaborate? >>>> >>>> Paul >>>> >>>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: >>>>> I'm on the move, not sure if Paul/Radim saw my replies: >>>>> >>>>> galderz, rvansa: Hey guys - is there a plan for Hibernate & >>>>> ISPN 9? >>>>> pferraro: Galder has been working on that >>>>> pferraro: though I haven't seen any results but a list of >>>>> stuff that needs to be changed >>>>> galderz: which Hibernate branch are you targeting? >>>>> pferraro: 5.2, but there are minute differences between 5.x >>>>> in terms of the parts that need love to get Infinispan 9 support >>>>> *** Mode change: +v vblagoje on #infinispan by ChanServ >>>>> (ChanServ at services.) >>>>> rvansa: are you suggesting that 5.0 or 5.1 branches will be >>>>> adapted to additionally support infinispan 9? how is that >>>>> possible? >>>>>> pferraro: i'm working on it as we speak... >>>>>> pferraro: down to 16 failuresd >>>>>> pferraro: i started a couple of months ago, but had talks/demos to >>>>> prepare >>>>>> pferraro: i've got back to working on it this week >>>>> ... >>>>>> pferraro: rvansa >>>>>> rvansa: minute differences my ass ;p >>>>>> pferraro: did you see my replies? 
>>>>>> i got disconnected while replying... >>>>> hmm - no - I didn't >>>>> galderz: ^ >>>>>> pferraro: so, working on the HB + I9 integration as we speak >>>>>> pferraro: i started a couple of months back but had talks/demos to >>>>> prepare and had to put that aside >>>>>> pferraro: i'm down to 16 failures >>>>>> pferraro: serious refactoring required of the integration to get it >>>>> to compile and the tests to pass >>>>>> pferraro: need to switch to async interceptor stack in 2lc >>>>> integration and get all the subtle changes right >>>>>> pferraro: it's a painstaking job basically >>>>>> pferraro: i'm working on >>>>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>>>>> pferraro: i can't remember where i branched off, but it's a branch >>>>> that steve had since master was focused on 5.x >>>>>> pferraro: i've no idea when/where we'll integrate this, but one >>>>> thing is for sure: it's nowhere near backwards compatible >>>>>> actually, fixed one this morning, so down to 15 failures >>>>>> pferraro: any suggestions/wishes? >>>>>> is anyone out there? ;) >>>>> Cheers, >>>>> -- >>>>> Galder Zamarre?o >>>>> Infinispan, Red Hat >>>>> >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Wed May 24 12:04:12 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 24 May 2017 18:04:12 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> Message-ID: <848888EB-EA5C-4A34-B579-B86C610167D4@redhat.com> Adding Steve, Scott Marlow just reminded me that you've advocated for Infinispan 2LC provider to be moved to Infinispan source tree [2]. So, you might want to add your thoughts to this thread? 
Cheers, [2] http://transcripts.jboss.org/channel/irc.freenode.org/%23hibernate-dev/2015/%23hibernate-dev.2015-08-06.log.html -- Galder Zamarre?o Infinispan, Red Hat > On 24 May 2017, at 17:56, Paul Ferraro wrote: > > Option #4 would be my preference as well. The integration into WF has > become increasingly cumbersome as the pace of Infinispan releases (and > associated API changes) has increased. I would really rather avoid > having to create and maintain forks of hibernate-infinispan to support > the combination of Hibernate and Infinispan that don't exist in the > upstream codebase. > > On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero wrote: >> I would suggest option 4# : move the 2LC implementation to Infinispan. >> >> I already suggested this in the past, but to remind the main arguments I have: >> >> - neither repository is ideal, but having it here vs there is not >> just moving the problem as the two projects are different, have >> different timelines and different backwards compatibility policies. >> >> - Infinispan already depends on several Hibernate projects - even >> directly to Hibernate ORM itself via the JPA cachestore and indirectly >> via Hibernate Search and WildFly - so moving the Infinispan dependency >> out of the Hibernate repository helps to linearize the build for one >> consistent stack. >> For example right now WildFly master contains a combination of >> Hibernate ORM and Infinispan 2LC, which is not the same combination as >> tested by running the 2LC testsuite; this happens all the time and >> brings its own set of issues & delays. >> >> - Infinispan changes way more often - and as Radim already suggested >> in his previous email - there's more benefit in having such advanced >> code more closely tied to Infinispan so that it can benefit from new >> capabilities even though these might not be ready to be blessed as >> long term API. 
The 2LC SPI in Hibernate on the other hand is stable, >> and has to stay stable anyway, for other reasons not least integration >> with other providers, so there's no symmetric benefit in having this >> code in Hibernate. >> >> - Infinispan releases breaking changes with a more aggressive pace. >> It's more useful for Infinispan 9 to be able to support older versions >> of Hibernate ORM, than the drawback of a new ORM release not having >> yet an Infinispan release compatible. This last point is the only >> drawback I can see, and franckly it's both a temporary situation as >> Infinispan can catch up quickly and a very inlikely situation as >> Hibernate ORM is unlikely to change these SPIs in e.g. the next major >> release 6.0. >> >> - Infinispan occasionally breaks expectations of the 2LC code, as >> Galder just had to figure out with a painful upgrade. We can all agree >> that these changes are necessary, but I strongly believe it's useful >> to *know* about such breackages ASAP from the testsuite, not half a >> year later when a major dependency upgrade propagates to other >> projects. >> >> - The Hibernate ORM would appreciate getting rid of debugging >> clustering and networking issues when there's the occasional failure, >> which are stressful as they are out of their area of expertise. >> >> I hope that makes sense? >> >> Thanks, >> Sanne >> >> >> >> On 24 May 2017 at 08:49, Radim Vansa wrote: >>> Hi Galder, >>> >>> I think that (3) is simply not possible (from non-technical perspective) >>> and I don't think we have the manpower to maintain 2 different modules >>> (2). The current version does not seem ready (generic enough) to get >>> into Infinispan, so either (1), or a lot of more work towards (4) (which >>> would be my preference). 
>>> >>> I haven't thought about all the steps for (4), but it seems that >>> UnorderedDistributionInterceptor and LockingInterceptor should get into >>> Infinispan as a flavour of repl/dist cache mode that applies update in >>> parallel on all owners without any ordering; it's up to the user to >>> guarantee that changes to an entry are commutative. >>> >>> The 2LC code itself shouldn't use the >>> TombstoneCallInterceptor/VersionedCallInterceptor now that there is the >>> functional API, you should move the behavior to functions. >>> >>> Regarding the invalidation mode, I think that a variant that would void >>> any writes to the entry (begin/end invalidation) could be moved to >>> Infinispan, too. I am not even sure if current invalidation in >>> Infinispan is useful - you can't transparantly cache access to >>> repeatable-read isolated DB (where reads block writes), but the blocking >>> as we do in 2LC now is probably too strong if we're working with DB >>> using just read committed as the isolation level. I was always trying to >>> enforce linearizability, TBH I don't know how to write a test that would >>> test a more relaxed consistency. >>> >>> Btw., I've noticed that you've set isolation level to READ_COMMITTED in >>> default configuration - isolation level does not apply at all to >>> non-transactional caches, so please remove that as it would be just a noise. >>> >>> Radim >>> >>> On 05/23/2017 03:07 PM, Galder Zamarre?o wrote: >>>> Hi all, >>>> >>>> I've just finished integrating Infinispan with a HB 6.x branch Steve had, all tests pass now [1]. >>>> >>>> Yeah, we didn't commit on the final location for these changes. >>>> >>>> As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in Hibernate main repo. 6.x is just a branch that Steve has. >>>> >>>> These are the options availble to us: >>>> >>>> 1. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 6.x branch. >>>> >>>> 2. 
Integrate 9.x provider as part of a second Infinispan module in Hibernate 5.x branch. >>>> >>>> 3. Integrate 9.x provider as part of 'hibernate-infinispan' in Hibernate 5.x branch. This is problematic for since the provider is not backwards compatible. >>>> >>>> 4. Integrate 9.x provider in infinispan and deliver it as part of Infinispan rather than Hibernate. >>>> >>>> I'm not sure which one I prefer the most TBH... 1. is the ideal solution but doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ >>>> >>>> Thoughts? >>>> >>>> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 >>>> -- >>>> Galder Zamarre?o >>>> Infinispan, Red Hat >>>> >>>>> On 16 May 2017, at 17:06, Paul Ferraro wrote: >>>>> >>>>> Thanks Galder. I read through the infinispan-dev thread on the >>>>> subject, but I'm not sure what was concluded regarding the eventual >>>>> home for this code. >>>>> Once the testsuite passes, is the plan to commit to hibernate master? >>>>> If so, I will likely fork these changes into a WF module (and adapt it >>>>> for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 >>>>> until Hibernate6 is integrated. >>>>> >>>>> Radim - one thing you mentioned on that infinispan-dev thread puzzled >>>>> me: you said that invalidation mode offers no benefits over >>>>> replication. How is that possible? Can you elaborate? >>>>> >>>>> Paul >>>>> >>>>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarre?o wrote: >>>>>> I'm on the move, not sure if Paul/Radim saw my replies: >>>>>> >>>>>> galderz, rvansa: Hey guys - is there a plan for Hibernate & >>>>>> ISPN 9? >>>>>> pferraro: Galder has been working on that >>>>>> pferraro: though I haven't seen any results but a list of >>>>>> stuff that needs to be changed >>>>>> galderz: which Hibernate branch are you targeting? 
>>>>>> pferraro: 5.2, but there are minute differences between 5.x >>>>>> in terms of the parts that need love to get Infinispan 9 support >>>>>> *** Mode change: +v vblagoje on #infinispan by ChanServ >>>>>> (ChanServ at services.) >>>>>> rvansa: are you suggesting that 5.0 or 5.1 branches will be >>>>>> adapted to additionally support infinispan 9? how is that >>>>>> possible? >>>>>>> pferraro: i'm working on it as we speak... >>>>>>> pferraro: down to 16 failuresd >>>>>>> pferraro: i started a couple of months ago, but had talks/demos to >>>>>> prepare >>>>>>> pferraro: i've got back to working on it this week >>>>>> ... >>>>>>> pferraro: rvansa >>>>>>> rvansa: minute differences my ass ;p >>>>>>> pferraro: did you see my replies? >>>>>>> i got disconnected while replying... >>>>>> hmm - no - I didn't >>>>>> galderz: ^ >>>>>>> pferraro: so, working on the HB + I9 integration as we speak >>>>>>> pferraro: i started a couple of months back but had talks/demos to >>>>>> prepare and had to put that aside >>>>>>> pferraro: i'm down to 16 failures >>>>>>> pferraro: serious refactoring required of the integration to get it >>>>>> to compile and the tests to pass >>>>>>> pferraro: need to switch to async interceptor stack in 2lc >>>>>> integration and get all the subtle changes right >>>>>>> pferraro: it's a painstaking job basically >>>>>>> pferraro: i'm working on >>>>>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 >>>>>>> pferraro: i can't remember where i branched off, but it's a branch >>>>>> that steve had since master was focused on 5.x >>>>>>> pferraro: i've no idea when/where we'll integrate this, but one >>>>>> thing is for sure: it's nowhere near backwards compatible >>>>>>> actually, fixed one this morning, so down to 15 failures >>>>>>> pferraro: any suggestions/wishes? >>>>>>> is anyone out there? 
;) >>>>>> Cheers, >>>>>> -- >>>>>> Galder Zamarreño >>>>>> Infinispan, Red Hat >>>>>> >>> >>> >>> -- >>> Radim Vansa >>> JBoss Performance Team >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Wed May 24 18:14:34 2017 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 25 May 2017 00:14:34 +0200 Subject: [infinispan-dev] REST Refactoring - breaking changes In-Reply-To: References: <95f6b3f1-7f25-712e-b5fb-9e196cc93591@redhat.com> Message-ID: <0e217509-ea9d-2bae-950b-fd438334f89a@redhat.com> On 05/24/2017 10:44 AM, Sebastian Laskawiec wrote: > > > On Tue, May 23, 2017 at 5:06 PM Radim Vansa > wrote: > > On 05/16/2017 11:05 AM, Sebastian Laskawiec wrote: > > Hey guys! > > > > I'm working on REST Server refactoring and I changed some of the > > previous behavior. Bearing in mind that we are implementing this in a > > minor release, I tried to make those changes really cosmetic: > > > > * RESTEasy as well as the Servlet API have been removed from > modules and > > BOM. If your app relied on it, you'll need to specify them > > separately in your pom. > > * The previous implementation picked application/text as a default > > content type. I replaced it with text/plain with charset > which is > > more precise and seems to be more widely adopted. > > * Putting an entry without any TTL nor Idle Time made it live > > forever (which was BTW aligned with the docs). I switched to > > server-configured defaults in this case. If you want to have an > > entry that lives forever, just specify 0 or -1 there. > > * Requesting an entry with the wrong mime type (imagine it was stored > > using application/octet-stream and now you're requesting > > text/plain) caused Bad Request. Now I switched it to Not > Acceptable > > which was specifically designed to cover this type of use case. 
> > * In compatibility mode the server often tried to "guess" the > > mimetype (the decision was often between text/plain and > > application/octet-stream). I honestly think it was a wrong move > > and made the server-side code very hard to read and predict what > > would be the result. Now the server always returns text/plain by > > default. If you want to get a byte stream back, just add > `Accept: > > application/octet-stream`. > > * The server can be started with port 0. This way you are 100% > sure > > that it will start using a unique port without colliding > with any > > other service. > > > How can the client know the port number, then? Is the actual port > exposed > through JMX? > > > * The REST server hosts an HTML page if queried using GET on the default > > context. I think it was a bug that it didn't work correctly > before. > > > Did it return 404? What's on that page? Do we expose > keys/values/entries > anywhere in the REST endpoint? > > > Exactly. You may try it using our Docker image and invoking something > like this: curl -v -u user:changeme http://172.17.0.6:8080/rest > > > > * UTF-8 charset is now the default. You may always ask the > server to > > return a different encoding using the Accept header. The charset > is not > > returned with binary mime types. > > * If a HEAD request results in an error, a message will be > returned > > to the client. Even though this behavior breaks Commons HTTP > > Client (HEAD requests are handled slightly differently and > cause > > the client to hang if a payload is returned), I think it's > > beneficial to tell the user what went wrong. It's worth > mentioning > > that Jetty/Netty HTTP clients work correctly. > > * RestServer doesn't implement Lifecycle now. The protocol server > > doesn't support a start() method without any arguments. You always > > need to specify configuration + Embedded Cache Manager. > > > > Even though it's a long list, I think all those changes were > worth it. 
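The TTL rule in the list above (no header means the server-configured default applies; 0 or -1 means the entry lives forever) can be sketched as a small helper. This is a hypothetical illustration of the described behavior, not the actual server code; `resolveLifespan` and its seconds-based units are assumptions:

```java
public class TtlResolution {
    static final long IMMORTAL = -1;

    // null  -> no TTL header was sent: fall back to the server default.
    // <= 0  -> 0 or -1 was sent explicitly: the entry never expires.
    // else  -> use the requested lifespan as-is.
    static long resolveLifespan(Long requestedSeconds, long serverDefaultSeconds) {
        if (requestedSeconds == null) return serverDefaultSeconds;
        if (requestedSeconds <= 0) return IMMORTAL;
        return requestedSeconds;
    }

    public static void main(String[] args) {
        System.out.println(resolveLifespan(null, 60)); // header absent: 60
        System.out.println(resolveLifespan(0L, 60));   // explicit 0: -1 (immortal)
        System.out.println(resolveLifespan(30L, 60));  // explicit 30: 30
    }
}
```

The point of the change is visible in the first case: before the refactoring, an absent header behaved like the second case (immortal), which silently disabled the configured defaults.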
>     > Please let me know if you don't agree.
>
>     Couple of other questions:
>
>     * do we accept GET with a Range header on keys? What about delta-updating
>     entries with Content-Range on PUTs?
>
> No, and AFAIK there are no plans to do it (but perhaps Tristan could
> shed some more light onto this). We could use HTTP PATCH for delta
> updates...
>
>     * For PUTs/POSTs, do we return 200/201/204 according to the spec?
>     (modified/created/modified)
>
> No, I decided to leave it as 200 for compatibility reasons. But I
> agree, we could change this as well.

Compatibility reasons? You mean compatibility with clients not adhering
to the spec? (Clients should accept any 2xx as success by definition.)

>     * Do we have any way to execute a replace (or the other prev-value
>     returning ops) through REST using a single request? For example let DELETE
>     return the prev entity (it should return 200 & entity, or 204 and no
>     response)
>
> Yes, PUT replaces the previous value [1] if such exists (whereas POST
> would return a conflict). If for some reason you cannot replace the
> current value, you will get a preconditions failed error.

I have misformulated the question. I meant to ask if there is a way to
return the previous value when you've replaced it?

> [1] https://github.com/infinispan/infinispan/pull/5094/files#diff-58f67698080cc0242320614c921559a8R301

Looking into the code, without considering performance at all, I think
that you've become too ecstatic about Optionals. These should be used as
return types for methods, not a) parameters to methods nor b) fields.
This is a misuse of the API, according to the authors of Optionals in
the JDK. Most of the time, you're not using optionals to have a fluent
chain of method invocations, so -100 to that.

>     * Do we handle OPTIONS in any way?
>
> No. Do we need it? I haven't seen any real implementation that uses
> that for discovering REST operations.
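As an aside, the return-type-versus-parameter distinction above can be shown with a short standalone Java sketch (the `find` and `describe` methods below are hypothetical illustrations, not Infinispan code): Optional works naturally as a return type feeding a fluent chain, while accepting Optional as a parameter merely forces every caller to wrap its argument.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalUsage {

    public static final Map<String, String> STORE = new HashMap<>();

    // Good: Optional as a return type -- callers can chain map/filter/orElse.
    public static Optional<String> find(String key) {
        return Optional.ofNullable(STORE.get(key));
    }

    // Misuse: Optional as a parameter -- every caller must wrap its argument,
    // and the method still has to guard against a null Optional reference.
    public static String describe(Optional<String> value) {
        return value.map(String::toUpperCase).orElse("N/A");
    }

    public static void main(String[] args) {
        STORE.put("cache", "dist-sync");

        // Fluent chain on the returned Optional.
        String mode = find("cache")
                .filter(v -> !v.isEmpty())
                .map(String::toUpperCase)
                .orElse("UNKNOWN");
        System.out.println(mode);                       // prints DIST-SYNC

        System.out.println(describe(Optional.empty())); // prints N/A
    }
}
```

A plain `String` parameter with an ordinary null check would serve `describe` equally well, which is exactly the point being made.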
>     Radim

> Thanks,
> Sebastian
> --
> SEBASTIAN ŁASKAWIEC
> INFINISPAN DEVELOPER
> Red Hat EMEA

--
Radim Vansa
JBoss Performance Team

_______________________________________________
infinispan-dev mailing list
infinispan-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

From slaskawi at redhat.com Thu May 25 03:56:05 2017
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Thu, 25 May 2017 07:56:05 +0000
Subject: [infinispan-dev] To Optional or not to Optional?
In-Reply-To: <441f3a85-111d-acef-d68c-794b672f06bd@mailbox.org>
References: <441f3a85-111d-acef-d68c-794b672f06bd@mailbox.org>
Message-ID:

Indeed Bela, you're an extreme naysayer! :)

I'm actually trying to get as many comments and arguments out of this
discussion. I hope we will be able to iron out a general recommendation
on how we want to treat Optionals.

On Tue, May 23, 2017 at 10:14 PM Bela Ban wrote:
> Actually, I'm an extreme naysayer! I actually voiced concerns, so I'm
> wondering where your assumption that there are no naysayers is coming from...
> :-)
>
> On 23/05/17 1:54 PM, Sebastian Laskawiec wrote:
> > Hey!
> >
> > So I think we have no extreme naysayers to Optional. So let me try to
> > sum up what we have achieved so far:
> >
> >   * The macroscale benchmark based on the REST interface showed that
> >     using Optionals didn't lower the performance.
> >   * +1 for using it in public APIs, especially for those using a
> >     functional style.
> >   * Creating lots of Optional instances might add some pressure on the GC,
> >     so we need to be careful when using them in hot code paths. In
> >     such cases it is required to run a microscale benchmark to make
> >     sure the performance didn't drop. The microbenchmark should also
> >     be followed by a macroscale benchmark - PerfJobAck. Also, keep an
> >     eye on Eden space in such cases.
> >
> > If you agree with me, and there is no hard evidence that using
> > Optional degrades performance significantly, I would like to issue a
> > pull request and put those findings into the contributing guide [1].
> >
> > Thanks,
> > Sebastian
> >
> > [1] https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing
> >
> > On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño wrote:
> >
> >     I think Sanne's right here, any differences in such a large-scale
> >     test are hard to decipher.
> >
> >     Also, as mentioned in a previous email, my view on its usage is the
> >     same as Sanne's:
> >
> >     * Definitely in APIs/SPIs.
> >     * Be gentle with it in internals.
> >
> >     Cheers,
> >     --
> >     Galder Zamarreño
> >     Infinispan, Red Hat
> >
> >     > On 18 May 2017, at 14:35, Sanne Grinovero wrote:
> >     >
> >     > Hi Sebastian,
> >     >
> >     > sorry but I think you've been wasting time, I hope it was fun :)
> >     This is not the right methodology to "settle" the matter (unless
> >     you want Radim's eyes to get bloody..).
> >     >
> >     > Any change in such a complex system will only affect the
> >     performance metrics if you're actually addressing the dominant
> >     bottleneck. In some cases it might be CPU, like if your system is
> >     at 90%+ CPU then it's likely that reviewing the code to use less
> >     CPU would be beneficial; but even that can be counter-productive,
> >     for example if you're having contention caused by optimistic
> >     locking and you fail to address that while making something else
> >     "faster", the performance loss on the optimistic lock might become
> >     asymptotic.
> >     >
> >     > A good reason to avoid excessive usage of Optional (and
> >     *excessive* doesn't mean a couple dozen in millions of lines of
> >     code..) is to not run out of eden space, especially for all the
> >     code running in interpreted mode.
> >     >
> >     > In your case you've been benchmarking a hugely complex beast,
> >     not least over REST! When running the REST Server I doubt that
> >     allocation in eden is your main problem. You just happened to have
> >     a couple of Optionals on your path; sure, performance changed, but
> >     there's not enough data here to figure out what exactly happened:
> >     > - did it change at all or was it just because of a lucky
> >     optimisation? (The JIT will always optimise stuff differently even
> >     when re-running the same code)
> >     > - did the overall picture improve because this code became much
> >     *less* slow?
> >     >
> >     > The real complexity in benchmarking is to accurately understand
> >     why it changed; this should also tell you why it didn't change
> >     more, or less..
> >     >
> >     > To be fair I actually agree that it's very likely that C2 can
> >     make any performance penalty disappear.. that's totally possible,
> >     although it's unlikely to be faster than just reading the field
> >     (assuming we don't need to do branching because of null-checks, but
> >     C2 can optimise that as well).
> >     > Still this requires the code to be optimised by the JIT first, so it
> >     won't prevent us from creating a gazillion instances if we
> >     abuse its usage irresponsibly. Fighting internal NPEs is a matter
> >     of writing better code; I'm not against some "Optional" being
> >     strategically placed but I believe it's much nicer for most
> >     internal code to just avoid null, use "final", and initialize
> >     things aggressively.
> > > > > > In case you want to benchmark the impact of Optional make a JMH > > based microbenchmark - that's interesting to see what C2 is > > capable of - but even so that's not going to tell you much on the > > impact it would have to patch thousands of code all around > > Infinispan. And it will need some peer review before it can tell > > you anything at all ;) > > > > > > It's actually a very challenging topic, as we produce libraries > > meant for "anyone to use" and don't get to set the hardware > > specification requirements it's hard to predict if we should > > optimise the system for this/that resource consumption. Some > > people will have plenty of CPU and have problems with us needing > > too much memory, some others will have the opposite.. the real > > challenge is in making internals "elastic" to such factors and > > adaptable without making it too hard to tune. > > > > > > Thanks, > > > Sanne > > > > > > > > > > > > On 18 May 2017 at 12:30, Sebastian Laskawiec > > > wrote: > > > Hey! > > > > > > In our past we had a couple of discussions about whether we > > should or should not use Optionals [1][2]. The main argument > > against it was performance. > > > > > > On one hand we risk additional object allocation (the Optional > > itself) and wrong inlining decisions taken by C2 compiler [3]. On > > the other hand we all probably "feel" that both of those things > > shouldn't be a problem and should be optimized by C2. Another > > argument was the Optional's doesn't give us anything but as I > > checked, we introduced nearly 80 NullPointerException bugs in two > > years [4]. So we might consider Optional as a way of fighting > > those things. The final argument that I've seen was about lack of > > higher order functions which is simply not true since we have > > #map, #filter and #flatmap functions. You can do pretty amazing > > things with this. > > > > > > I decided to check the performance when refactoring REST > > interface. 
> >     > I created a PR with Optionals [5], ran performance
> >     tests, removed all Optionals and reran the tests. You will be
> >     surprised by the results [6]:
> >     >
> >     > Test case                    With Optionals [%]        Without Optionals [%]
> >     >                              Run 1   Run 2   Avg       Run 1   Run 2   Avg
> >     > Non-TX reads, 10 threads
> >     >   Throughput                 32.54   32.87   32.71     31.74   34.04   32.89
> >     >   Response time             -24.12  -24.63  -24.38    -24.37  -25.69  -25.03
> >     > Non-TX reads, 100 threads
> >     >   Throughput                  6.48  -12.79   -3.16     -7.06   -6.14   -6.60
> >     >   Response time              -6.15   14.93    4.39      7.88    6.49    7.19
> >     > Non-TX writes, 10 threads
> >     >   Throughput                  9.21    7.60    8.41      4.66    7.15    5.91
> >     >   Response time              -8.92   -7.11   -8.02     -5.29   -6.93   -6.11
> >     > Non-TX writes, 100 threads
> >     >   Throughput                  2.53    1.65    2.09     -1.16    4.67    1.76
> >     >   Response time              -2.13   -1.79   -1.96      0.91   -4.67   -1.88
> >     >
> >     > I also created JMH + Flight Recorder tests and again, the
> >     results showed no evidence of a slowdown caused by Optionals [7].
> >     >
> >     > Now please take those results with a grain of salt since they
> >     tend to drift by a factor of +/-5% (sometimes even more). But it's
> >     very clear the performance results are very similar, if not the same.
> >     >
> >     > Having those numbers at hand, do we want to have Optionals in the
> >     Infinispan codebase or not? And if not, let's state it very
> >     clearly (and write it into the contributing guide): it's because we
> >     don't like them. Not because of performance.
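To make the #map/#filter/#flatMap point above concrete, here is a minimal standalone sketch (the `User`/`Address` classes are hypothetical illustrations, not Infinispan code) of navigating a possibly-absent chain of values without a single explicit null check:

```java
import java.util.Optional;

public class OptionalChaining {

    public static class Address {
        final String city;
        public Address(String city) { this.city = city; }
        public Optional<String> getCity() { return Optional.ofNullable(city); }
    }

    public static class User {
        final Address address;
        public User(Address address) { this.address = address; }
        public Optional<Address> getAddress() { return Optional.ofNullable(address); }
    }

    // Each step short-circuits to an empty Optional if a value is absent.
    public static String cityOf(User user) {
        return Optional.ofNullable(user)
                .flatMap(User::getAddress)   // Optional<Address>
                .flatMap(Address::getCity)   // Optional<String>
                .filter(c -> !c.isEmpty())   // drop blank values
                .map(String::toUpperCase)    // transform only if present
                .orElse("UNKNOWN");
    }

    public static void main(String[] args) {
        System.out.println(cityOf(new User(new Address("Brno")))); // prints BRNO
        System.out.println(cityOf(new User(null)));                // prints UNKNOWN
    }
}
```

The equivalent null-checking code would need three nested `if` statements; whether the extra allocations matter in a hot path is exactly what the benchmarks above try to answer.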
> >     >
> >     > Thanks,
> >     > Sebastian
> >     >
> >     > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html
> >     > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html
> >     > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html
> >     > [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27
> >     > [5] https://github.com/infinispan/infinispan/pull/5094
> >     > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing
> >     > [7] https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673
> >     > --
> >     > SEBASTIAN ŁASKAWIEC
> >     > INFINISPAN DEVELOPER
> >     > Red Hat EMEA
> >
> > --
> > SEBASTIAN ŁASKAWIEC
> > INFINISPAN DEVELOPER
> > Red Hat EMEA
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170525/e441dc76/attachment-0001.html

From slaskawi at redhat.com Thu May 25 04:00:09 2017
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Thu, 25 May 2017 08:00:09 +0000
Subject: [infinispan-dev] To Optional or not to Optional?
In-Reply-To: <26d94e24-d80d-c9c6-bbcf-c397a13a1e35@redhat.com>
References: <26d94e24-d80d-c9c6-bbcf-c397a13a1e35@redhat.com>
Message-ID:

Adding part of your email from the REST refactoring thread:

> Looking into the code, without considering performance at all, I think
> that you've become too ecstatic about Optionals. These should be used as
> return types for methods, not a) parameters to methods nor b) fields.
> This is a misuse of the API, according to the authors of Optionals in
> the JDK. Most of the time, you're not using optionals to have a fluent
> chain of method invocations, so -100 to that.

I'm sorry I'm not picking up the discussion about the REST refactoring PR
since it has already been merged. Plus, I'm not planning to do any
Optionals refactoring as long as I don't have a clear vision of how we'd
like to approach it.

But I'm actually very happy you touched the use case topic. So far we
were discussing advantages and disadvantages of Optionals and we didn't
say much about potential use cases (Katia, Dan, Galder and Sanne also
touched on this topic a little).

Indeed, Stephen Colebourne [1] mentions that it should be used as a
method return type:

"My only fear is that Optional will be overused. Please focus on using it
as a return type (from methods that perform some useful piece of
functionality). Please don't use it as the field of a Java-Bean."
Brian Goetz also said a few words on Stack Overflow about this [2]:

"For example, you probably should never use it for something that returns
an array of results, or a list of results; instead return an empty array
or list. You should almost never use it as a field of something or a
method parameter. I think routinely using it as a return value for
getters would definitely be over-use."

So if we want to be really dogmatic here, we wouldn't be able to use
Optionals in fields, method parameters, and getters. Note that I'm
blindly putting the recommendations mentioned above into code. As it
turns out, we could then use Optionals only as return types of methods
that are not getters. It is also worth saying that both gentlemen are
worried that Optionals might be overused in libraries.

On the other hand we have Oracle's tutorials, which use Optionals as
fields [3]:

"public class Soundcard {
  private Optional<USB> usb;
  public Optional<USB> getUSB() { ... }
}"

and say nothing about the recommendations mentioned in [1] and [2].
Also, many libraries (like Jackson or Hibernate Validator) support
Optionals as fields [5]. So it must be a somewhat popular use case,
right?

I think my favorite reading about Optional use cases is this [6]. The
author suggests using Optionals as return types at API boundaries but
using nulls inside classes. This has two major advantages:
- It makes the library caller aware that the value might not be there
- The returned Optional object will probably die very soon (a caller
will probably do something with it right away)

An example based on Oracle's tutorial would look like this (following
this recommendation):

"public class Soundcard {
  private USB usb;
  public Optional<USB> getUSB() { return Optional.ofNullable(usb); }
}"

I think it hits exactly Katia's, Sanne's, Dan's and Galder's points.
What do you think?
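The pragmatic approach above can be written out as a complete, runnable sketch (Soundcard/USB are the tutorial's illustrative classes; the `version` field and `main` method are additions for demonstration): the field stays a plain nullable reference, the getter wraps it at the API boundary, and the caller consumes the Optional immediately so it never outlives the call site:

```java
import java.util.Optional;

public class Soundcard {

    // Inside the class: a plain nullable field, no Optional is stored.
    private final USB usb;

    public Soundcard(USB usb) { this.usb = usb; }

    // At the API boundary: the getter advertises possible absence.
    public Optional<USB> getUSB() {
        return Optional.ofNullable(usb);
    }

    public static class USB {
        private final String version;
        public USB(String version) { this.version = version; }
        public String getVersion() { return version; }
    }

    public static void main(String[] args) {
        Soundcard with = new Soundcard(new USB("3.0"));
        Soundcard without = new Soundcard(null);

        // The caller does something with the Optional right away.
        System.out.println(with.getUSB().map(USB::getVersion).orElse("no USB"));    // prints 3.0
        System.out.println(without.getUSB().map(USB::getVersion).orElse("no USB")); // prints no USB
    }
}
```

This keeps allocation confined to the boundary call and makes the absence explicit to callers, while internal code keeps using ordinary fields.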
[1] http://blog.joda.org/2014/11/optional-in-java-se-8.html
[2] https://stackoverflow.com/questions/26327957/should-java-8-getters-return-optional-type/26328555#26328555
[3] http://www.oracle.com/technetwork/articles/java/java8-optional-2175753.html
[4] http://blog.joda.org/2015/08/java-se-8-optional-pragmatic-approach.html
[5] http://dolszewski.com/java/java-8-optional-use-cases/
[6] http://blog.joda.org/2015/08/java-se-8-optional-pragmatic-approach.html

On Wed, May 24, 2017 at 4:56 PM Radim Vansa wrote:
> I haven't checked Sebastian's refactored code, but does it use Optionals
> as a *field* type? That's misuse (same as using it as an arg); it's
> intended solely as a method return type.
>
> Radim
>
> On 05/23/2017 05:45 PM, Katia Aresti wrote:
> > Dan, I disagree with point 2 where you say "You now have a field that
> > could be null, Optional.empty(), or Optional.of(something)"
> >
> > This is the point of Optional. You shouldn't have a field that has
> > these 3 possible values, just two of them = Some or None. If the field
> > is mutable, it should be initialised to Optional.empty(). In the case
> > of an API, Optional implicitly says that the return value can be
> > empty, but when you return a "normal" object, either the user reads
> > the doc, or they will have bugs or boilerplate code defending against
> > the possible null value (even if this API will never ever return null)
> >
> > :o)
> >
> > Cheers
> >
> > On Tue, May 23, 2017 at 3:58 PM, Dan Berindei wrote:
> >
> >     I wouldn't say I'm an extreme naysayer, but I do have 2 issues
> >     with Optional:
> >
> >     1. Performance becomes harder to quantify: the allocations may or
> >     may not be eliminated, and a change in one part of the code may
> >     change how allocations are eliminated in a completely different
> >     part of the code.
> >     2. My personal opinion is it's just ugly...
> >     instead of having one
> >     field that could be null or non-null, you now have a field that
> >     could be null, Optional.empty(), or Optional.of(something).
> >
> >     Cheers
> >     Dan
> >
> >     On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec wrote:
> >
> >         Hey!
> >
> >         So I think we have no extreme naysayers to Optional. So let me
> >         try to sum up what we have achieved so far:
> >
> >           * The macroscale benchmark based on the REST interface showed
> >             that using Optionals didn't lower the performance.
> >           * +1 for using it in public APIs, especially for those using a
> >             functional style.
> >           * Creating lots of Optional instances might add some
> >             pressure on the GC, so we need to be careful when using them
> >             in hot code paths. In such cases it is required to run a
> >             microscale benchmark to make sure the performance didn't
> >             drop. The microbenchmark should also be followed by a
> >             macroscale benchmark - PerfJobAck. Also, keep an eye on Eden
> >             space in such cases.
> >
> >         If you agree with me, and there is no hard evidence that
> >         using Optional degrades performance significantly, I would like
> >         to issue a pull request and put those findings into the
> >         contributing guide [1].
> >
> >         Thanks,
> >         Sebastian
> >
> >         [1] https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing
> >
> >         On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño wrote:
> >
> >             I think Sanne's right here, any differences in such a
> >             large-scale test are hard to decipher.
> >
> >             Also, as mentioned in a previous email, my view on its
> >             usage is the same as Sanne's:
> >
> >             * Definitely in APIs/SPIs.
> >             * Be gentle with it in internals.
> >             Cheers,
> >             --
> >             Galder Zamarreño
> >             Infinispan, Red Hat
> >
> > [...]
> >
> > --
> > SEBASTIAN ŁASKAWIEC
> > INFINISPAN DEVELOPER
> > Red Hat EMEA
>
> --
> Radim Vansa
> JBoss Performance Team
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170525/d09878c9/attachment-0001.html

From slaskawi at redhat.com Thu May 25 04:01:19 2017
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Thu, 25 May 2017 08:01:19 +0000
Subject: [infinispan-dev] REST Refactoring - breaking changes
In-Reply-To: <0e217509-ea9d-2bae-950b-fd438334f89a@redhat.com>
References: <95f6b3f1-7f25-712e-b5fb-9e196cc93591@redhat.com> <0e217509-ea9d-2bae-950b-fd438334f89a@redhat.com>
Message-ID:

On Thu, May 25, 2017 at 7:42 AM Radim Vansa wrote:
> On 05/24/2017 10:44 AM, Sebastian Laskawiec wrote:
> >
> > On Tue, May 23, 2017 at 5:06 PM Radim Vansa wrote:
> >
> >     On 05/16/2017 11:05 AM, Sebastian Laskawiec wrote:
> >     > Hey guys!
> >     >
> >     > I'm working on REST Server refactoring and I changed some of the
> >     > previous behavior. Bearing in mind that we are implementing this in a
> >     > minor release, I tried to make those changes really cosmetic:
> >     >
> >     >   * RestEASY as well as the Servlet API have been removed from modules
> >     >     and the BOM. If your app relied on them, you'll need to specify them
> >     >     separately in your pom.
> >     >   * The previous implementation picked application/text as the default
> >     >     content type. I replaced it with text/plain with charset, which is
> >     >     more precise and seems to be more widely adopted.
> >     >   * Putting an entry without any TTL nor Idle Time made it live
> >     >     forever (which was BTW aligned with the docs). I switched to
> >     >     server-configured defaults in this case. If you want to have an
> >     >     entry that lives forever, just specify 0 or -1 there.
> >     >   * Requesting an entry with the wrong mime type (imagine it was stored
> >     >     using application/octet-stream and now you're requesting
> >     >     text/plain) caused Bad Request. Now I switched it to Not Acceptable,
> >     >     which was designed specifically to cover this type of use case.
> > > > * In compatibility mode the server often tried to "guess" the
> > > >   mimetype (the decision was often between text/plain and
> > > >   application/octet-stream). I honestly think it was a wrong move and
> > > >   made the server-side code very hard to read and predict what the
> > > >   result would be. Now the server always returns text/plain by
> > > >   default. If you want to get a byte stream back, just add
> > > >   `Accept: application/octet-stream`.
> > > > * The server can be started with port 0. This way you are 100% sure
> > > >   that it will start using a unique port without colliding with any
> > > >   other service.
> > >
> > > How can the client know the port number, then? Is the actual port
> > > exposed through JMX?
> > >
> > > > * The REST server hosts an HTML page if queried using GET on the
> > > >   default context. I think it was a bug that it didn't work correctly
> > > >   before.
> > >
> > > Did it return 404? What's on that page? Do we expose
> > > keys/values/entries anywhere in the REST endpoint?
> >
> > Exactly. You may try it using our Docker image and invoking something
> > like this: curl -v -u user:changeme http://172.17.0.6:8080/rest
> >
> > > > * UTF-8 charset is now the default. You may always ask the server to
> > > >   return a different encoding using the Accept header. The charset is
> > > >   not returned with binary mime types.
> > > > * If a HEAD request results in an error, a message will be returned
> > > >   to the client. Even though this behavior breaks Commons HTTP Client
> > > >   (HEAD requests are handled slightly differently and a returned
> > > >   payload causes the client to hang), I think it's beneficial to tell
> > > >   the user what went wrong. It's worth mentioning that Jetty/Netty
> > > >   HTTP clients work correctly.
> > > > * RestServer doesn't implement Lifecycle now. The protocol server
> > > >   doesn't support a start() method without any arguments. You always
> > > >   need to specify a configuration + an Embedded Cache Manager.
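The content-type defaults described in the list above boil down to a small decision rule: text/plain with an explicit charset unless the client explicitly asks for a byte stream. A minimal sketch of that rule (a hypothetical illustration, not the actual server code; `selectContentType` is a made-up name):

```java
import java.util.Optional;

// Hypothetical sketch of the negotiation described above: text/plain with an
// explicit charset is the default, and application/octet-stream is returned
// only when the Accept header asks for it.
public class ContentTypeNegotiation {

    static String selectContentType(Optional<String> acceptHeader) {
        return acceptHeader
                .filter(a -> a.contains("application/octet-stream"))
                .orElse("text/plain; charset=UTF-8");
    }

    public static void main(String[] args) {
        System.out.println(selectContentType(Optional.empty()));
        System.out.println(selectContentType(Optional.of("application/octet-stream")));
    }
}
```

A request with no Accept header (or one naming only text types) falls through to the text/plain default, so the charset is always explicit for textual responses.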
> > > > Even though it's a long list, I think all those changes were worth
> > > > it. Please let me know if you don't agree.
> > >
> > > Couple of other questions:
> > >
> > > * do we accept GET with a Range header on keys? What about
> > >   delta-updating entries with Content-Range on PUTs?
> >
> > No, and AFAIK there are no plans to do it (but perhaps Tristan could
> > shed some more light onto this). We could use HTTP PATCH for delta
> > updates...
> >
> > > * For PUTs/POSTs, do we return 200/201/204 according to the spec?
> > >   (modified/created/modified)
> >
> > No, I decided to leave it as 200 for compatibility reasons. But I
> > agree, we could change this as well.
>
> Compatibility reasons? You mean compatibility with clients not adhering
> to spec? (clients should accept any 2xx as success by definition).

+1, created https://issues.jboss.org/browse/ISPN-7859

> > > * Do we have any way to execute a replace (or the other prev-value
> > >   returning ops) through REST using a single request? For example let
> > >   DELETE return the prev entity (it should return 200 & entity, or 204
> > >   and no response)
> >
> > Yes, PUT replaces the previous value [1] if such exists (whereas POST
> > would return a conflict). If for some reason you cannot replace the
> > current value, you will get a preconditions failed error.
>
> I have misformulated the question. I meant to ask if there is a way to
> return the previous value when you've replaced it?

Unfortunately no. And there wasn't in the previous REST implementation as
far as I know.

> > [1] https://github.com/infinispan/infinispan/pull/5094/files#diff-58f67698080cc0242320614c921559a8R301
>
> Looking into the code, without considering performance at all, I think
> that you've become too ecstatic about Optionals. These should be used as
> return types for methods, not a) parameters to methods nor b) fields.
> This is a misuse of the API, according to the authors of Optionals in
> the JDK.
> Most of the time, you're not using optionals to have a fluent chain of
> method invocations, so -100 to that.

Answered in the "To Optional or not to Optional" thread.

> > > * Do we handle OPTIONS in any way?
> >
> > No. Do we need it? I haven't seen any real implementation that uses
> > that for discovering REST operations.
> > >
> > > Radim
> > > >
> > > > Thanks,
> > > > Sebastian

-- 
SEBASTIAN ŁASKAWIEC
INFINISPAN DEVELOPER
Red Hat EMEA

From vjuranek at redhat.com Thu May 25 08:38:57 2017
From: vjuranek at redhat.com (Vojtech Juranek)
Date: Thu, 25 May 2017 14:38:57 +0200
Subject: [infinispan-dev] In Memory Data Grid Patterns Demos from Devoxx France!
In-Reply-To: <4B5D3FA9-F763-4DB1-8847-A413B40D3E6F@redhat.com>
References: <2000E4DB-7A7F-45E4-8833-7EB3A1C60DF0@redhat.com> <1E62C79A-6E30-47F3-B177-D1A981EE2AEC@redhat.com> <4B5D3FA9-F763-4DB1-8847-A413B40D3E6F@redhat.com>
Message-ID: <18023209.Bv0R3koziO@dhcp-10-40-4-226.brq.redhat.com>

On pondělí 22. května 2017 15:52:12 CEST Galder Zamarreño wrote:
> Another thing, isn't the package.json file missing dependencies?

yes, thanks for spotting it - it should be fixed now. I also initiated the
transfer of the repo under infinispan-demos

Cheers
Vojta
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 473 bytes
Desc: This is a digitally signed message part.
Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170525/5940a8e9/attachment.bin

From steve at hibernate.org Thu May 25 09:31:38 2017
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 25 May 2017 13:31:38 +0000
Subject: [infinispan-dev] IRC chat: HB + I9
In-Reply-To: <848888EB-EA5C-4A34-B579-B86C610167D4@redhat.com>
References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> <848888EB-EA5C-4A34-B579-B86C610167D4@redhat.com>
Message-ID:

A lot to read through here, so I apologize up front if I missed
something...

So to be fair, I am biased as I would really like to not have to deal with
these integrations :) That said, I do really believe that the best option
is to move this code out of the hibernate/hibernate-orm repo. To me that
could mean a separate repo altogether (infinispan/infinispan-hibernate-l2c,
or sim) or into Infinispan proper, if Infinispan already has a Hibernate
dependency as Sanne mentioned somewhere.

As far as Hibernate.. master is in fact 5.2; 6.0 exists just in my fork
atm - we are still discussing the exact event that should trigger moving
that 6.0 branch upstream. The 6.0 timeline is still basically unknown,
especially as far as a Final goes.
On Wed, May 24, 2017, 11:04 AM Galder Zamarreño wrote:

> Adding Steve,
>
> Scott Marlow just reminded me that you've advocated for the Infinispan
> 2LC provider to be moved to the Infinispan source tree [2].
>
> So, you might want to add your thoughts to this thread?
>
> Cheers,
>
> [2] http://transcripts.jboss.org/channel/irc.freenode.org/%23hibernate-dev/2015/%23hibernate-dev.2015-08-06.log.html
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
> > On 24 May 2017, at 17:56, Paul Ferraro wrote:
> >
> > Option #4 would be my preference as well. The integration into WF has
> > become increasingly cumbersome as the pace of Infinispan releases (and
> > associated API changes) has increased. I would really rather avoid
> > having to create and maintain forks of hibernate-infinispan to support
> > combinations of Hibernate and Infinispan that don't exist in the
> > upstream codebase.
> >
> > On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero wrote:
> >> I would suggest option #4: move the 2LC implementation to Infinispan.
> >>
> >> I already suggested this in the past, but to remind the main arguments
> >> I have:
> >>
> >> - neither repository is ideal, but having it here vs there is not
> >> just moving the problem, as the two projects are different, have
> >> different timelines and different backwards compatibility policies.
> >>
> >> - Infinispan already depends on several Hibernate projects - even
> >> directly on Hibernate ORM itself via the JPA cachestore and indirectly
> >> via Hibernate Search and WildFly - so moving the Infinispan dependency
> >> out of the Hibernate repository helps to linearize the build for one
> >> consistent stack.
> >> For example right now WildFly master contains a combination of
> >> Hibernate ORM and Infinispan 2LC which is not the same combination as
> >> tested by running the 2LC testsuite; this happens all the time and
> >> brings its own set of issues & delays.
> >>
> >> - Infinispan changes way more often - and as Radim already suggested
> >> in his previous email - there's more benefit in having such advanced
> >> code more closely tied to Infinispan so that it can benefit from new
> >> capabilities, even though these might not be ready to be blessed as
> >> long-term API. The 2LC SPI in Hibernate on the other hand is stable,
> >> and has to stay stable anyway, for other reasons not least integration
> >> with other providers, so there's no symmetric benefit in having this
> >> code in Hibernate.
> >>
> >> - Infinispan releases breaking changes at a more aggressive pace.
> >> It's more useful for Infinispan 9 to be able to support older versions
> >> of Hibernate ORM than the drawback of a new ORM release not yet having
> >> a compatible Infinispan release. This last point is the only
> >> drawback I can see, and frankly it's both a temporary situation as
> >> Infinispan can catch up quickly, and a very unlikely situation as
> >> Hibernate ORM is unlikely to change these SPIs in e.g. the next major
> >> release 6.0.
> >>
> >> - Infinispan occasionally breaks expectations of the 2LC code, as
> >> Galder just had to figure out with a painful upgrade. We can all agree
> >> that these changes are necessary, but I strongly believe it's useful
> >> to *know* about such breakages ASAP from the testsuite, not half a
> >> year later when a major dependency upgrade propagates to other
> >> projects.
> >>
> >> - The Hibernate ORM team would appreciate getting rid of debugging
> >> clustering and networking issues when there's the occasional failure,
> >> which are stressful as they are out of their area of expertise.
> >>
> >> I hope that makes sense?
> >>
> >> Thanks,
> >> Sanne
> >>
> >> On 24 May 2017 at 08:49, Radim Vansa wrote:
> >>> Hi Galder,
> >>>
> >>> I think that (3) is simply not possible (from a non-technical
> >>> perspective) and I don't think we have the manpower to maintain 2
> >>> different modules (2).
> >>> The current version does not seem ready (generic enough) to get
> >>> into Infinispan, so either (1), or a lot more work towards (4) (which
> >>> would be my preference).
> >>>
> >>> I haven't thought about all the steps for (4), but it seems that
> >>> UnorderedDistributionInterceptor and LockingInterceptor should get into
> >>> Infinispan as a flavour of repl/dist cache mode that applies updates in
> >>> parallel on all owners without any ordering; it's up to the user to
> >>> guarantee that changes to an entry are commutative.
> >>>
> >>> The 2LC code itself shouldn't use the
> >>> TombstoneCallInterceptor/VersionedCallInterceptor now that there is the
> >>> functional API; you should move the behavior to functions.
> >>>
> >>> Regarding the invalidation mode, I think that a variant that would void
> >>> any writes to the entry (begin/end invalidation) could be moved to
> >>> Infinispan, too. I am not even sure if the current invalidation in
> >>> Infinispan is useful - you can't transparently cache access to a
> >>> repeatable-read isolated DB (where reads block writes), but the
> >>> blocking as we do in 2LC now is probably too strong if we're working
> >>> with a DB using just read committed as the isolation level. I was
> >>> always trying to enforce linearizability; TBH I don't know how to write
> >>> a test that would test a more relaxed consistency.
> >>>
> >>> Btw., I've noticed that you've set the isolation level to
> >>> READ_COMMITTED in the default configuration - isolation level does not
> >>> apply at all to non-transactional caches, so please remove that as it
> >>> would be just noise.
> >>>
> >>> Radim
> >>>
> >>> On 05/23/2017 03:07 PM, Galder Zamarreño wrote:
> >>>> Hi all,
> >>>>
> >>>> I've just finished integrating Infinispan with a HB 6.x branch Steve
> >>>> had; all tests pass now [1].
> >>>>
> >>>> Yeah, we didn't commit on the final location for these changes.
> >>>>
> >>>> As far as I know, Hibernate master is not 6.x, but rather 5.2.x.
> >>>> There's no 5.2.x branch in the Hibernate main repo. 6.x is just a
> >>>> branch that Steve has.
> >>>>
> >>>> These are the options available to us:
> >>>>
> >>>> 1. Integrate the 9.x provider as part of 'hibernate-infinispan' in the
> >>>> Hibernate 6.x branch.
> >>>>
> >>>> 2. Integrate the 9.x provider as part of a second Infinispan module in
> >>>> the Hibernate 5.x branch.
> >>>>
> >>>> 3. Integrate the 9.x provider as part of 'hibernate-infinispan' in the
> >>>> Hibernate 5.x branch. This is problematic since the provider is not
> >>>> backwards compatible.
> >>>>
> >>>> 4. Integrate the 9.x provider in Infinispan and deliver it as part of
> >>>> Infinispan rather than Hibernate.
> >>>>
> >>>> I'm not sure which one I prefer the most TBH... 1. is the ideal
> >>>> solution but it doesn't seem there will be a Hibernate 6.x release for
> >>>> a while. 2./3./4. all have their downsides... :\
> >>>>
> >>>> Thoughts?
> >>>>
> >>>> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2
> >>>> --
> >>>> Galder Zamarreño
> >>>> Infinispan, Red Hat
> >>>>
> >>>>> On 16 May 2017, at 17:06, Paul Ferraro wrote:
> >>>>>
> >>>>> Thanks Galder. I read through the infinispan-dev thread on the
> >>>>> subject, but I'm not sure what was concluded regarding the eventual
> >>>>> home for this code.
> >>>>> Once the testsuite passes, is the plan to commit to hibernate master?
> >>>>> If so, I will likely fork these changes into a WF module (and adapt
> >>>>> them for Hibernate 5.1.x) so that WF12 can move to
> >>>>> JGroups4+Infinispan9 until Hibernate6 is integrated.
> >>>>>
> >>>>> Radim - one thing you mentioned on that infinispan-dev thread puzzled
> >>>>> me: you said that invalidation mode offers no benefits over
> >>>>> replication. How is that possible? Can you elaborate?
> >>>>>
> >>>>> Paul
> >>>>>
> >>>>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño wrote:
> >>>>>> I'm on the move, not sure if Paul/Radim saw my replies:
> >>>>>>
> >>>>>> galderz, rvansa: Hey guys - is there a plan for Hibernate &
> >>>>>> ISPN 9?
> >>>>>> pferraro: Galder has been working on that
> >>>>>> pferraro: though I haven't seen any results but a list of
> >>>>>> stuff that needs to be changed
> >>>>>> galderz: which Hibernate branch are you targeting?
> >>>>>> pferraro: 5.2, but there are minute differences between 5.x
> >>>>>> in terms of the parts that need love to get Infinispan 9 support
> >>>>>> *** Mode change: +v vblagoje on #infinispan by ChanServ
> >>>>>> (ChanServ at services.)
> >>>>>> rvansa: are you suggesting that 5.0 or 5.1 branches will be
> >>>>>> adapted to additionally support infinispan 9? how is that
> >>>>>> possible?
> >>>>>>> pferraro: i'm working on it as we speak...
> >>>>>>> pferraro: down to 16 failures
> >>>>>>> pferraro: i started a couple of months ago, but had talks/demos to
> >>>>>>> prepare
> >>>>>>> pferraro: i've got back to working on it this week
> >>>>>> ...
> >>>>>>> pferraro: rvansa
> >>>>>>> rvansa: minute differences my ass ;p
> >>>>>>> pferraro: did you see my replies?
> >>>>>>> i got disconnected while replying...
> >>>>>> hmm - no - I didn't
> >>>>>> galderz: ^
> >>>>>>> pferraro: so, working on the HB + I9 integration as we speak
> >>>>>>> pferraro: i started a couple of months back but had talks/demos to
> >>>>>>> prepare and had to put that aside
> >>>>>>> pferraro: i'm down to 16 failures
> >>>>>>> pferraro: serious refactoring required of the integration to get it
> >>>>>>> to compile and the tests to pass
> >>>>>>> pferraro: need to switch to async interceptor stack in 2lc
> >>>>>>> integration and get all the subtle changes right
> >>>>>>> pferraro: it's a painstaking job basically
> >>>>>>> pferraro: i'm working on
> >>>>>>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2
> >>>>>>> pferraro: i can't remember where i branched off, but it's a branch
> >>>>>>> that steve had since master was focused on 5.x
> >>>>>>> pferraro: i've no idea when/where we'll integrate this, but one
> >>>>>>> thing is for sure: it's nowhere near backwards compatible
> >>>>>>> actually, fixed one this morning, so down to 15 failures
> >>>>>>> pferraro: any suggestions/wishes?
> >>>>>>> is anyone out there? ;)
> >>>>>>
> >>>>>> Cheers,
> >>>>>> --
> >>>>>> Galder Zamarreño
> >>>>>> Infinispan, Red Hat

From emmanuel at hibernate.org Thu May 25 11:08:43 2017
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Thu, 25 May 2017 17:08:43 +0200
Subject: [infinispan-dev] To Optional or not to Optional?
In-Reply-To:
References: <26d94e24-d80d-c9c6-bbcf-c397a13a1e35@redhat.com>
Message-ID: <488A8723-F6C4-4805-86E7-E358CB25562A@hibernate.org>

> On 25 May 2017, at 10:00, Sebastian Laskawiec wrote:
> . As it turns out we can use Optionals anywhere, except methods returning
> some objects which are not getters.

You can't use it on non-getter return types? Why?

> It is also worth saying that both gentlemen are worried that Optionals
> might be overused in the libraries.
>
> On the other hand we have Oracle's tutorials which use Optionals as
> fields [3]:
> "public class Soundcard {
>   private Optional<USB> usb;
>   public Optional<USB> getUSB() { ... }
> }"
> and say nothing about the recommendations mentioned in [1] and [2].

Yes but tutorials are not written by the best possible people. I would not
use them as gospel.

> Also many libraries (like Jackson, Hibernate Validator) support Optionals
> as fields [5]. So it must be a somewhat popular use case, right?

For Hibernate Validator we added the support because we do support
validation of return types. And instead of making an opinionated spec we
let the support for params and fields slip in, as it was more regular for
us anyways. So don't take it as a measure of popularity or endorsement.

> I think my favorite reading about Optional use cases is this [6]. The
> author suggests to use Optionals as return types on API boundaries but to
> use nulls inside classes. This has two major advantages:
> * It makes the library caller aware that the value might not be there
> * The returned Optional object will probably die very soon (a caller will
>   probably do something with it right away)
> An example based on Oracle's tutorial would look like this (following
> this recommendation):
> "public class Soundcard {
>   private USB usb;
>   public Optional<USB> getUSB() { return Optional.ofNullable(usb); }
> }"
>
> I think it hits exactly into Katia's, Sanne's, Dan's and Galder's points.
>
> What do you think?
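The Soundcard pattern quoted above, fleshed out into a compilable sketch (the Soundcard/USB classes are the tutorial's illustrative examples, not Infinispan code): a plain nullable field internally, with Optional appearing only at the API boundary.

```java
import java.util.Optional;

// Illustrative sketch following [6]: keep a nullable field inside the class,
// expose Optional only via the getter on the API boundary.
class USB {
    private final String version;
    USB(String version) { this.version = version; }
    String getVersion() { return version; }
}

class Soundcard {
    private USB usb; // may be null internally

    void setUSB(USB usb) { this.usb = usb; }

    // The caller is told explicitly that the value may be absent, and the
    // returned Optional is expected to be consumed immediately.
    Optional<USB> getUSB() { return Optional.ofNullable(usb); }
}

public class OptionalBoundaryExample {
    public static void main(String[] args) {
        Soundcard card = new Soundcard();
        // Absent value handled through the fluent chain, no null checks.
        System.out.println(card.getUSB().map(USB::getVersion).orElse("no usb"));
        card.setUSB(new USB("3.0"));
        System.out.println(card.getUSB().map(USB::getVersion).orElse("no usb"));
    }
}
```

The caller never sees null, and no Optional is ever stored in a field, which is exactly the split between API boundary and internals being argued for here.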
> [1] http://blog.joda.org/2014/11/optional-in-java-se-8.html
> [2] https://stackoverflow.com/questions/26327957/should-java-8-getters-return-optional-type/26328555#26328555
> [3] http://www.oracle.com/technetwork/articles/java/java8-optional-2175753.html
> [4] http://blog.joda.org/2015/08/java-se-8-optional-pragmatic-approach.html
> [5] http://dolszewski.com/java/java-8-optional-use-cases/
> [6] http://blog.joda.org/2015/08/java-se-8-optional-pragmatic-approach.html
>
>> On Wed, May 24, 2017 at 4:56 PM Radim Vansa wrote:
>> I haven't checked Sebastian's refactored code, but does it use Optionals
>> as a *field* type? That's a misuse (same as using it as an arg); it's
>> intended solely as a method return type.
>>
>> Radim
>>
>> On 05/23/2017 05:45 PM, Katia Aresti wrote:
>> > Dan, I disagree with point 2 where you say "You now have a field that
>> > could be null, Optional.empty(), or Optional.of(something)"
>> >
>> > This is the point of Optional. You shouldn't have a field that has
>> > these 3 possible values, just two of them: Some or None. If the field
>> > is mutable, it should be initialised to Optional.empty(). In the case
>> > of an API, Optional implicitly says that the return value can be
>> > empty, but when you return a "normal" object, either the user reads
>> > the doc, or they will have bugs or boilerplate code defending against
>> > the possible null value (even if this API will never ever return null)
>> >
>> > :o)
>> >
>> > Cheers
>> >
>> > On Tue, May 23, 2017 at 3:58 PM, Dan Berindei wrote:
>> >
>> > I wouldn't say I'm an extreme naysayer, but I do have 2 issues with
>> > Optional:
>> >
>> > 1. Performance becomes harder to quantify: the allocations may or
>> > may not be eliminated, and a change in one part of the code may
>> > change how allocations are eliminated in a completely different
>> > part of the code.
>> > 2. My personal opinion is it's just ugly...
>> > instead of having one
>> > field that could be null or non-null, you now have a field that
>> > could be null, Optional.empty(), or Optional.of(something).
>> >
>> > Cheers
>> > Dan
>> >
>> > On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec wrote:
>> >
>> > Hey!
>> >
>> > So I think we have no extreme naysayers to Optional. So let me try to
>> > sum up what we have achieved so far:
>> >
>> > * In a macro-scale benchmark based on the REST interface, using
>> >   Optionals didn't lower the performance.
>> > * +1 for using it in public APIs, especially for those using a
>> >   functional style.
>> > * Creating lots of Optional instances might add some pressure on GC,
>> >   so we need to be careful when using them in hot code paths. In such
>> >   cases it is required to run a micro-scale benchmark to make sure the
>> >   performance didn't drop. The microbenchmark should also be followed
>> >   by a macro-scale benchmark - PerfJobAck. Also, keep an eye on Eden
>> >   space in such cases.
>> >
>> > If you agree with me, and there is no hard evidence that using
>> > Optional degrades performance significantly, I would like to issue a
>> > pull request and put those findings into the contributing guide [1].
>> >
>> > Thanks,
>> > Sebastian
>> >
>> > [1] https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing
>> >
>> > On Mon, May 22, 2017 at 6:36 PM Galder Zamarreño wrote:
>> >
>> > I think Sanne's right here, any differences in such large-scale tests
>> > are hard to decipher.
>> >
>> > Also, as mentioned in a previous email, my view on its usage is the
>> > same as Sanne's:
>> >
>> > * Definitely in APIs/SPIs.
>> > * Be gentle with it in internals.
>> >
>> > Cheers,
>> > --
>> > Galder Zamarreño
>> > Infinispan, Red Hat
>> >
>> > > On 18 May 2017, at 14:35, Sanne Grinovero wrote:
>> > >
>> > > Hi Sebastian,
>> > >
>> > > sorry but I think you've been wasting time, I hope it was fun :)
>> > > This is not the right methodology to "settle" the matter (unless you
>> > > want Radim's eyes to get bloody..).
>> > >
>> > > Any change in such a complex system will only affect the performance
>> > > metrics if you're actually addressing the dominant bottleneck. In
>> > > some cases it might be CPU: if your system is at 90%+ CPU then it's
>> > > likely that reviewing the code to use less CPU would be beneficial;
>> > > but even that can be counter-productive, for example if you're having
>> > > contention caused by optimistic locking and you fail to address that
>> > > while making something else "faster", the performance loss on the
>> > > optimistic lock might become asymptotic.
>> > > >> > > To be fair I actually agree that it's very likely that >> > C2 can make any performance penalty disappear.. that's >> > totally possible, although it's unlikely to be faster than >> > just reading the field (assuming we don't need to do >> > branching because of null-checks but C2 can optimise that >> > as well). >> > > Still this requires the code to be optimised by JIT >> > first, so it won't prevent us from creating a gazillion of >> > instances if we abuse its usage irresponsibly. Fighting >> > internal NPEs is a matter of writing better code; I'm not >> > against some "Optional" being strategically placed but I >> > believe it's much nicer for most internal code to just >> > avoid null, use "final", and initialize things aggressively. >> > > >> > > Sure use Optional where it makes sense, probably most on >> > APIs and SPIs, but please don't go overboard with it in >> > internals. That's all I said in the original debate. >> > > >> > > In case you want to benchmark the impact of Optional >> > make a JMH based microbenchmark - that's interesting to >> > see what C2 is capable of - but even so that's not going >> > to tell you much on the impact it would have to patch >> > thousands of code all around Infinispan. And it will need >> > some peer review before it can tell you anything at all ;) >> > > >> > > It's actually a very challenging topic, as we produce >> > libraries meant for "anyone to use" and don't get to set >> > the hardware specification requirements it's hard to >> > predict if we should optimise the system for this/that >> > resource consumption. Some people will have plenty of CPU >> > and have problems with us needing too much memory, some >> > others will have the opposite.. the real challenge is in >> > making internals "elastic" to such factors and adaptable >> > without making it too hard to tune. 
>> > > >> > > Thanks, >> > > Sanne >> > > >> > > >> > > >> > > On 18 May 2017 at 12:30, Sebastian Laskawiec >> > > wrote: >> > > Hey! >> > > >> > > In our past we had a couple of discussions about whether >> > we should or should not use Optionals [1][2]. The main >> > argument against it was performance. >> > > >> > > On one hand we risk additional object allocation (the >> > Optional itself) and wrong inlining decisions taken by C2 >> > compiler [3]. On the other hand we all probably "feel" >> > that both of those things shouldn't be a problem and >> > should be optimized by C2. Another argument was the >> > Optional's doesn't give us anything but as I checked, we >> > introduced nearly 80 NullPointerException bugs in two >> > years [4]. So we might consider Optional as a way of >> > fighting those things. The final argument that I've seen >> > was about lack of higher order functions which is simply >> > not true since we have #map, #filter and #flatmap >> > functions. You can do pretty amazing things with this. >> > > >> > > I decided to check the performance when refactoring REST >> > interface. I created a PR with Optionals [5], ran >> > performance tests, removed all Optionals and reran tests. 
>> > > You will be surprised by the results [6]:
>> > >
>> > > Test case                  With Optionals [%]       Without Optionals [%]
>> > >                            Run 1   Run 2   Avg      Run 1   Run 2   Avg
>> > > Non-TX reads 10 threads
>> > >   Throughput               32.54   32.87   32.71    31.74   34.04   32.89
>> > >   Response time           -24.12  -24.63  -24.38   -24.37  -25.69  -25.03
>> > > Non-TX reads 100 threads
>> > >   Throughput                6.48  -12.79   -3.16    -7.06   -6.14   -6.60
>> > >   Response time            -6.15   14.93    4.39     7.88    6.49    7.19
>> > > Non-TX writes 10 threads
>> > >   Throughput                9.21    7.60    8.41     4.66    7.15    5.91
>> > >   Response time            -8.92   -7.11   -8.02    -5.29   -6.93   -6.11
>> > > Non-TX writes 100 threads
>> > >   Throughput                2.53    1.65    2.09    -1.16    4.67    1.76
>> > >   Response time            -2.13   -1.79   -1.96     0.91   -4.67   -1.88
>> > >
>> > > I also created JMH + Flight Recorder tests and again, the results
>> > > showed no evidence of a slowdown caused by Optionals [7].
>> > >
>> > > Now please take those results with a grain of salt since they tend to
>> > > drift by a factor of +/-5% (sometimes even more). But it's very clear
>> > > the performance results are very similar, if not the same.
>> > >
>> > > Having those numbers at hand, do we want to have Optionals in the
>> > > Infinispan codebase or not? And if not, let's state it very clearly
>> > > (and write it into the contributing guide) that it's because we don't
>> > > like them. Not because of performance.
>> > >
>> > > Thanks,
>> > > Sebastian
>> > >
>> > > [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html
>> > > [2] http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html
>> > > [3] http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html
>> > > [4] https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27
>> > > [5] https://github.com/infinispan/infinispan/pull/5094
>> > > [6] https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing
>> > > [7] https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673
>> > > --
>> > > SEBASTIAN ŁASKAWIEC
>> > > INFINISPAN DEVELOPER
>> > > Red Hat EMEA
infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > SEBASTIAN ?ASKAWIEC > INFINISPAN DEVELOPER > Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170525/f55c654a/attachment-0001.html From david.lloyd at redhat.com Thu May 25 11:16:44 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Thu, 25 May 2017 10:16:44 -0500 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: <441f3a85-111d-acef-d68c-794b672f06bd@mailbox.org> Message-ID: <313d2e59-cb84-1ddd-cbff-169b43462106@redhat.com> I'm not an Infinispan developer, but I'll chime in anyway. :) I've never been a fan of Optional. But the theory behind it that made it acceptable in the first place is that it generally gets optimized away. Now this theory is only true if you never hold a reference to it in any persistent manner (it should be very short-lived even as a local variable (after optimizations like dead-code & etc. have run)). 
What should happen is, the actual allocation should get deleted, and the (usually one or two) strictly monomorphic or bi-morphic call(s) should each be flattened into what amounts to (at most) simple if/else statement(s), all of which should be very fast (as fast as a null-check, in theory) and branch-predictable and all that good stuff. (Now null checks have a very slight advantage in that they can be optimistically removed in some cases, and only re-added once the operating system gets a SIGSEGV or equivalent, but that difference is usually going to be pretty small even in fairly tightly optimized code). This seems consistent with the benchmark results. I doubt it has much to do with how hot the code path is (in fact, a hotter code path should mean Optional usages will get more accurately optimized by C2 over time). With an appropriate JDK build, you can browse the compiler output to test this theory out. If you put an Optional into a field, all of this is very likely to be thrown in the garbage. I think there is some escape analysis stuff that might apply but in the most likely case, the heap will simply be polluted with a bunch of useless crap, along with all the consequences thereof. So if (and only if) you're using Optional as a return value, creating it on the fly (particularly in monomorphic methods), and using the result directly via the chainable methods on the Optional class or keeping it around only for one or two usages in a local variable (not referring to it afterwards in any way), it *should* be fine from a performance perspective. Note that HotSpot is pretty good at knowing the difference between when *you* think the value is being referred to and when the value is *really* being referred to (this is what caused the finalize() debacle that resulted in Reference.reachabilityFence() being added to Java 9 - HotSpot was a little *too* good at it). Aesthetically it's a different story. 
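The allocation-safe shape described above — Optional created on the fly as a return value and consumed immediately through its chainable methods, never stored in a field — can be sketched as follows (a minimal illustration; `UserLookup` and its methods are hypothetical examples, not Infinispan code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class UserLookup {
    private final Map<String, String> emailsByUser = new HashMap<>();

    public UserLookup() {
        emailsByUser.put("alice", "alice@example.com");
    }

    // Optional appears only as a return type, created on the fly;
    // once the call inlines, C2 can scalar-replace the allocation.
    public Optional<String> findEmail(String user) {
        return Optional.ofNullable(emailsByUser.get(user));
    }

    // The Optional is consumed immediately via chainable methods and
    // never stored in a field or a long-lived local variable.
    public String emailOrDefault(String user) {
        return findEmail(user).map(String::trim).orElse("unknown");
    }
}
```

If instead the Optional were kept in a field, the allocation could no longer be eliminated and would simply add heap pollution and GC pressure, as noted above.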
There's no magic silver bullet to make null problems go away; Optional is a tradeoff just like everything in engineering and in life. I would never put Optional into one of my APIs as it exists today (or even as it exists in Java 9, where several deficiencies have admittedly been addressed). I would only start using it once it is extremely well-understood and well-established (i.e. maybe in 5-10 years I'll have another look). On 05/25/2017 02:56 AM, Sebastian Laskawiec wrote: > Indeed Bela, you're an extreme naysayer! :) > > I'm actually trying to get as many comments and arguments out of this > discussion. I hope we will be able to iron out a general recommendation > or approach how we want to treat Optionals. > > On Tue, May 23, 2017 at 10:14 PM Bela Ban > wrote: > > Actually, I'm an extreme naysayer! I actually voiced concerns so I'm > wondering where your assumption there are no naysayers is coming > from... :-) > > > On 23/05/17 1:54 PM, Sebastian Laskawiec wrote: > > Hey! > > > > So I think we have no extreme naysayers to Optional. So let me try to > > sum up what we have achieved so far: > > > > * In macroscale benchmark based on REST interface using Optionals > > didn't lower the performance. > > * +1 for using it in public APIs, especially for those using > > functional style. > > * Creating lots of Optional instances might add some pressure > on GC, > > so we need to be careful when using them in hot code paths. In > > such cases it is required to run a micro scale benchmark to make > > sure the performance didn't drop. The microbenchmark should also > > be followed by macro scale benchmark - PerfJobAck. Also, keep an > > eye on Eden space in such cases. > > > > If you agree with me, and there is no hard evidence that using > > Optional degrades performance significantly, I would like to issue a > > pull request and put those findings into contributing guide [1]. 
> > > > Thanks, > > Sebastian > > > > [1] > > > https://github.com/infinispan/infinispan/tree/master/documentation/src/main/asciidoc/contributing > > > > On Mon, May 22, 2017 at 6:36 PM Galder Zamarre?o > > > >> wrote: > > > > I think Sanne's right here, any differences in such large scale > > test are hard to decipher. > > > > Also, as mentioned in a previous email, my view on its usage is > > same as Sanne's: > > > > * Definitely in APIs/SPIs. > > * Be gentle with it internals. > > > > Cheers, > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > > > On 18 May 2017, at 14:35, Sanne Grinovero > > > >> > wrote: > > > > > > Hi Sebastian, > > > > > > sorry but I think you've been wasting time, I hope it was > fun :) > > This is not the right methodology to "settle" the matter (unless > > you want Radim's eyes to get bloody..). > > > > > > Any change in such a complex system will only affect the > > performance metrics if you're actually addressing the dominant > > bottleneck. In some cases it might be CPU, like if your system is > > at 90%+ CPU then it's likely that reviewing the code to use less > > CPU would be beneficial; but even that can be counter-productive, > > for example if you're having contention caused by optimistic > > locking and you fail to address that while making something else > > "faster" the performance loss on the optimistic lock might become > > asymptotic. > > > > > > A good reason to avoid excessive usage of Optional (and > > *excessive* doesn't mean a couple dozen in a millions lines of > > code..) is to not run out of eden space, especially for all the > > code running in interpreted mode. > > > > > > In your case you've been benchmarking a hugely complex beast, > > not least over REST! When running the REST Server I doubt that > > allocation in eden is your main problem. 
You just happened to > have > > a couple Optionals on your path; sure performance changed but > > there's no enough data in this way to figure out what exactly > > happened: > > > - did it change at all or was it just because of a lucky > > optimisation? (The JIT will always optimise stuff differently > even > > when re-running the same code) > > > - did the overall picture improve because this code became > much > > *less* slower? > > > > > > The real complexity in benchmarking is to accurately understand > > why it changed; this should also tell you why it didn't change > > more, or less.. > > > > > > To be fair I actually agree that it's very likely that C2 can > > make any performance penalty disappear.. that's totally possible, > > although it's unlikely to be faster than just reading the field > > (assuming we don't need to do branching because of > null-checks but > > C2 can optimise that as well). > > > Still this requires the code to be optimised by JIT first, > so it > > won't prevent us from creating a gazillion of instances if we > > abuse its usage irresponsibly. Fighting internal NPEs is a matter > > of writing better code; I'm not against some "Optional" being > > strategically placed but I believe it's much nicer for most > > internal code to just avoid null, use "final", and initialize > > things aggressively. > > > > > > Sure use Optional where it makes sense, probably most on APIs > > and SPIs, but please don't go overboard with it in internals. > > That's all I said in the original debate. > > > > > > In case you want to benchmark the impact of Optional make a JMH > > based microbenchmark - that's interesting to see what C2 is > > capable of - but even so that's not going to tell you much on the > > impact it would have to patch thousands of code all around > > Infinispan. 
And it will need some peer review before it can tell > > you anything at all ;) > > > > > > It's actually a very challenging topic, as we produce libraries > > meant for "anyone to use" and don't get to set the hardware > > specification requirements it's hard to predict if we should > > optimise the system for this/that resource consumption. Some > > people will have plenty of CPU and have problems with us needing > > too much memory, some others will have the opposite.. the real > > challenge is in making internals "elastic" to such factors and > > adaptable without making it too hard to tune. > > > > > > Thanks, > > > Sanne > > > > > > > > > > > > On 18 May 2017 at 12:30, Sebastian Laskawiec > > > >> wrote: > > > Hey! > > > > > > In our past we had a couple of discussions about whether we > > should or should not use Optionals [1][2]. The main argument > > against it was performance. > > > > > > On one hand we risk additional object allocation (the Optional > > itself) and wrong inlining decisions taken by C2 compiler [3]. On > > the other hand we all probably "feel" that both of those things > > shouldn't be a problem and should be optimized by C2. Another > > argument was the Optional's doesn't give us anything but as I > > checked, we introduced nearly 80 NullPointerException bugs in two > > years [4]. So we might consider Optional as a way of fighting > > those things. The final argument that I've seen was about lack of > > higher order functions which is simply not true since we have > > #map, #filter and #flatmap functions. You can do pretty amazing > > things with this. > > > > > > I decided to check the performance when refactoring REST > > interface. I created a PR with Optionals [5], ran performance > > tests, removed all Optionals and reran tests. 
You will be > > surprised by the results [6]: > > > > > > Test case > > > With Optionals [%] Without Optionals > > > Run 1 Run 2 Avg Run 1 Run 2 Avg > > > Non-TX reads 10 threads > > > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 > > > Response time -24.12 -24.63 -24.38 -24.37 -25.69 -25.03 > > > Non-TX reads 100 threads > > > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 > > > Response time -6.15 14.93 4.39 7.88 6.49 7.19 > > > Non-TX writes 10 threads > > > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 > > > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 > > > Non-TX writes 100 threads > > > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 > > > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 > > > > > > I also created JMH + Flight Recorder tests and again, the > > results showed no evidence of slow down caused by Optionals [7]. > > > > > > Now please take those results with a grain of salt since they > > tend to drift by a factor of +/-5% (sometimes even more). But > it's > > very clear the performance results are very similar if not > the same. > > > > > > Having those numbers at hand, do we want to have Optionals in > > Infinispan codebase or not? And if not, let's state it very > > clearly (and write it into contributing guide), it's because we > > don't like them. Not because of performance. 
> > > > > > Thanks, > > > Sebastian > > > > > > [1] > > > http://lists.jboss.org/pipermail/infinispan-dev/2017-March/017370.html > > > [2] > > > http://lists.jboss.org/pipermail/infinispan-dev/2016-August/016796.html > > > [3] > > > http://vanillajava.blogspot.ro/2015/01/java-lambdas-and-low-latency.html > > > [4] > > > https://issues.jboss.org/issues/?jql=project%20%3D%20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E%20%22NullPointerException%22%20AND%20created%20%3E%3D%202015-04-27%20AND%20created%20%3C%3D%202017-04-27 > > > [5] https://github.com/infinispan/infinispan/pull/5094 > > > [6] > > > https://docs.google.com/a/redhat.com/spreadsheets/d/1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing > > > [7] > > > https://github.com/infinispan/infinispan/pull/5094#issuecomment-296970673 > > > -- > > > SEBASTIAN ?ASKAWIEC > > > INFINISPAN DEVELOPER > > > Red Hat EMEA > > > > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > > > > SEBASTIAN?ASKAWIEC > > > > INFINISPAN DEVELOPER > > > > Red HatEMEA > > > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Bela Ban, JGroups lead (http://www.jgroups.org) > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > 
https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN?ASKAWIEC > > INFINISPAN DEVELOPER > > Red HatEMEA > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- - DML From sanne at infinispan.org Thu May 25 12:11:08 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 25 May 2017 17:11:08 +0100 Subject: [infinispan-dev] To Optional or not to Optional? In-Reply-To: References: <26d94e24-d80d-c9c6-bbcf-c397a13a1e35@redhat.com> Message-ID: On 25 May 2017 at 09:00, Sebastian Laskawiec wrote: > Adding part of your email from REST refactoring thread: > > Looking into the code, without considering performance at all, I think >> that you've become too ecstatic about Optionals. These should be used as >> return types for methods, not a) parameters to methods nor b) fields. >> This is a misuse of the API, according to the authors of Optionals in >> JDK. Most of the time, you're not using optionals to have fluent chain >> of method invocations, so -100 to that. >> > > I'm sorry I'm not picking up the discussion about REST refactoring PR > since it has been already merged. Plus I'm not planning to do any Optionals > refactoring as long I don't have a clear vision how we'd like to approach > it. > > But I'm actually very happy you touched the use case topic. So far we were > discussing advantages and disadvantages of Optionals and we didn't say much > about potential use cases (Katia, Dan, Galder and Sanne also touched a > little this topic). > > Indeed, Stephen Colebourne [1] mentions that it should be used as method > return types: > "My only fear is that Optional will be overused. Please focus on using it > as a return type (from methods that perform some useful piece of > functionality) Please don't use it as the field of a Java-Bean." 
> > Brian Goetz also said a few words on Stack Overflow about this [2]: > "For example, you probably should never use it for something that returns > an array of results, or a list of results; instead return an empty array or > list. You should almost never use it as a field of something or a method > parameter. > I think routinely using it as a return value for getters would definitely > be over-use." > > So if we want to be really dogmatic here, we wouldn't be able to use > Optionals in fields, method parameters, and getters. Please note that I'm > blindly putting recommendations mentioned above into code. As it turns out > we can use Optionals anywhere, except methods returning some objects which > are not getters. > > It is also worth saying that both gentlemen are worried that Optionals > might be overused in the libraries. > > On the other hand we have Oracle's tutorials which use Optionals as > fields [3]: > "public class Soundcard { > private Optional<USB> usb; > public Optional<USB> getUSB() { ... } > }" > and say no word about recommendations mentioned in [1] and [2]. > > Also many libraries (like Jackson, Hibernate Validator) support Optionals > as fields [5]. So it must be a somewhat popular use case, right? > No :) Speaking for Hibernate I can assure you that allowing *a small minority* of people to use them is not the same as encouraging or endorsing it, and certainly doesn't mean that most Hibernate users have Optional fields. Fact is we DO NOT use Optional much internally, and even supporting this option was a questionable choice that not all of us agreed on. > > I think my favorite reading about Optional use cases is this [6]. So the > author suggests to use Optionals as return types at API boundaries but > use nulls inside classes. 
This has two major advantages: > > - It makes the library caller aware that the value might not be there > - The returned Optional object will probably die very soon (the caller > will probably do something with it right away) > > An example based on Oracle's tutorial would look like this (following this > recommendation): > "public class Soundcard { > private USB usb; > public Optional<USB> getUSB() { return Optional.ofNullable(usb); } > }" > > I think it fits exactly with Katia's, Sanne's, Dan's and Galder's points. > > What do you think? > Since you expressed the desire for a clear directive, let me try to formulate one: We can allow Optional to be used as long as all these conditions are verified: - it's a public API - it's not possibly affecting performance of a hot spot (e.g. I'd not want it on Cache#get ) - it looks good for the use case, e.g. allows fluent APIs and functions to work better - on return types, and not for types having a cardinality > 1 (Collections and Iterables). [Needless to say you shall not break backwards compatibility within a major version, so nothing gets changed now] Additionally I see no problem with having it in internal code which is rarely executed, e.g. configuration parsing and bootstrap, but please don't start on a quest to update the code for the sake of it: if it evolves over time that's fine but reviewing ad-hoc PRs would be a waste of the team's time. 
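The boundary pattern the thread converges on — a plain nullable field internally, with Optional created only at the public API surface — can be sketched like this (following the Soundcard/USB names from the Oracle tutorial cited above; the `version` field and setter are illustrative additions):

```java
import java.util.Optional;

class USB {
    final String version;
    USB(String version) { this.version = version; }
}

public class Soundcard {
    // Internal state stays a plain nullable field: no Optional lives
    // on the heap, and internal code can use cheap null checks.
    private USB usb;

    public void setUSB(USB usb) { this.usb = usb; }

    // Optional appears only at the API boundary, created on the fly,
    // so callers are forced to handle the "no USB" case explicitly.
    public Optional<USB> getUSB() {
        return Optional.ofNullable(usb);
    }
}
```

A caller can then chain on the returned value without defensive null checks, e.g. `soundcard.getUSB().map(u -> u.version).orElse("none")`, and the short-lived Optional remains eligible for the JIT optimizations discussed earlier in the thread.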
Thanks, Sanne > > [1] http://blog.joda.org/2014/11/optional-in-java-se-8.html > [2] https://stackoverflow.com/questions/26327957/should- > java-8-getters-return-optional-type/26328555#26328555 > [3] http://www.oracle.com/technetwork/articles/java/ > java8-optional-2175753.html > [4] http://blog.joda.org/2015/08/java-se-8-optional- > pragmatic-approach.html > [5] http://dolszewski.com/java/java-8-optional-use-cases/ > [6] http://blog.joda.org/2015/08/java-se-8-optional- > pragmatic-approach.html > > On Wed, May 24, 2017 at 4:56 PM Radim Vansa wrote: > >> I haven't checked Sebastian's refactored code, but does it use Optionals >> as a *field* type? That's misuse (same as using it as an arg), it's >> intended solely as method return type. >> >> Radim >> >> On 05/23/2017 05:45 PM, Katia Aresti wrote: >> > Dan, I disagree with point 2 where you say "You now have a field that >> > could be null, Optional.empty(), or Optional.of(something)" >> > >> > This is the point of optional. You shouldn't have a field that has >> > these 3 possible values, just two of them = Some or None. If the field >> > is mutable, it should be initialised to Optional.empty(). In the case >> > of an API, Optional implicitly says that the return value can be >> > empty, but when you return a "normal" object, either the user reads >> > the doc, either will have bugs or boilerplate code defending from the >> > possible null value (even if never ever this API will return null) >> > >> > :o) >> > >> > Cheers >> > >> > >> > >> > On Tue, May 23, 2017 at 3:58 PM, Dan Berindei > > > wrote: >> > >> > I wouldn't say I'm an extreme naysayer, but I do have 2 issues >> > with Optional: >> > >> > 1. Performance becomes harder to quantify: the allocations may or >> > may not be eliminated, and a change in one part of the code may >> > change how allocations are eliminated in a completely different >> > part of the code. >> > 2. My personal opinion is it's just ugly... 
instead of having one >> > field that could be null or non-null, you now have a field that >> > could be null, Optional.empty(), or Optional.of(something). >> > >> > Cheers >> > Dan >> > >> > >> > >> > On Tue, May 23, 2017 at 1:54 PM, Sebastian Laskawiec >> > > wrote: >> > >> > Hey! >> > >> > So I think we have no extreme naysayers to Optional. So let me >> > try to sum up what we have achieved so: >> > >> > * In macroscale benchmark based on REST interface using >> > Optionals didn't lower the performance. >> > * +1 for using it in public APIs, especially for those using >> > functional style. >> > * Creating lots of Optional instances might add some >> > pressure on GC, so we need to be careful when using them >> > in hot code paths. In such cases it is required to run a >> > micro scale benchamark to make sure the performance didn't >> > drop. The microbenchmark should also be followed by macro >> > scale benchamrk - PerfJobAck. Also, keep an eye on Eden >> > space in such cases. >> > >> > If you agree with me, and there are no hard evidence that >> > using Optional degrade performance significantly, I would like >> > to issue a pull request and put those findings into >> > contributing guide [1]. >> > >> > Thanks, >> > Sebastian >> > >> > [1] >> > https://github.com/infinispan/infinispan/tree/ >> master/documentation/src/main/asciidoc/contributing >> > > master/documentation/src/main/asciidoc/contributing> >> > >> > On Mon, May 22, 2017 at 6:36 PM Galder Zamarre?o >> > > wrote: >> > >> > I think Sanne's right here, any differences in such large >> > scale test are hard to decipher. >> > >> > Also, as mentioned in a previous email, my view on its >> > usage is same as Sanne's: >> > >> > * Definitely in APIs/SPIs. >> > * Be gentle with it internals. 
>> > >> > Cheers, >> > -- >> > Galder Zamarre?o >> > Infinispan, Red Hat >> > >> > > On 18 May 2017, at 14:35, Sanne Grinovero >> > > wrote: >> > > >> > > Hi Sebastian, >> > > >> > > sorry but I think you've been wasting time, I hope it >> > was fun :) This is not the right methodology to "settle" >> > the matter (unless you want Radim's eyes to get bloody..). >> > > >> > > Any change in such a complex system will only affect the >> > performance metrics if you're actually addressing the >> > dominant bottleneck. In some cases it might be CPU, like >> > if your system is at 90%+ CPU then it's likely that >> > reviewing the code to use less CPU would be beneficial; >> > but even that can be counter-productive, for example if >> > you're having contention caused by optimistic locking and >> > you fail to address that while making something else >> > "faster" the performance loss on the optimistic lock might >> > become asymptotic. >> > > >> > > A good reason to avoid excessive usage of Optional (and >> > *excessive* doesn't mean a couple dozen in a millions >> > lines of code..) is to not run out of eden space, >> > especially for all the code running in interpreted mode. >> > > >> > > In your case you've been benchmarking a hugely complex >> > beast, not least over REST! When running the REST Server I >> > doubt that allocation in eden is your main problem. You >> > just happened to have a couple Optionals on your path; >> > sure performance changed but there's no enough data in >> > this way to figure out what exactly happened: >> > > - did it change at all or was it just because of a >> > lucky optimisation? (The JIT will always optimise stuff >> > differently even when re-running the same code) >> > > - did the overall picture improve because this code >> > became much *less* slower? >> > > >> > > The real complexity in benchmarking is to accurately >> > understand why it changed; this should also tell you why >> > it didn't change more, or less.. 
>> > > >> > > To be fair I actually agree that it's very likely that >> > C2 can make any performance penalty disappear.. that's >> > totally possible, although it's unlikely to be faster than >> > just reading the field (assuming we don't need to do >> > branching because of null-checks but C2 can optimise that >> > as well). >> > > Still this requires the code to be optimised by JIT >> > first, so it won't prevent us from creating a gazillion of >> > instances if we abuse its usage irresponsibly. Fighting >> > internal NPEs is a matter of writing better code; I'm not >> > against some "Optional" being strategically placed but I >> > believe it's much nicer for most internal code to just >> > avoid null, use "final", and initialize things aggressively. >> > > >> > > Sure use Optional where it makes sense, probably most on >> > APIs and SPIs, but please don't go overboard with it in >> > internals. That's all I said in the original debate. >> > > >> > > In case you want to benchmark the impact of Optional >> > make a JMH based microbenchmark - that's interesting to >> > see what C2 is capable of - but even so that's not going >> > to tell you much on the impact it would have to patch >> > thousands of code all around Infinispan. And it will need >> > some peer review before it can tell you anything at all ;) >> > > >> > > It's actually a very challenging topic, as we produce >> > libraries meant for "anyone to use" and don't get to set >> > the hardware specification requirements it's hard to >> > predict if we should optimise the system for this/that >> > resource consumption. Some people will have plenty of CPU >> > and have problems with us needing too much memory, some >> > others will have the opposite.. the real challenge is in >> > making internals "elastic" to such factors and adaptable >> > without making it too hard to tune. 
>> > > >> > > Thanks, >> > > Sanne >> > > >> > > >> > > >> > > On 18 May 2017 at 12:30, Sebastian Laskawiec >> > > wrote: >> > > Hey! >> > > >> > > In our past we had a couple of discussions about whether >> > we should or should not use Optionals [1][2]. The main >> > argument against it was performance. >> > > >> > > On one hand we risk additional object allocation (the >> > Optional itself) and wrong inlining decisions taken by C2 >> > compiler [3]. On the other hand we all probably "feel" >> > that both of those things shouldn't be a problem and >> > should be optimized by C2. Another argument was the >> > Optional's doesn't give us anything but as I checked, we >> > introduced nearly 80 NullPointerException bugs in two >> > years [4]. So we might consider Optional as a way of >> > fighting those things. The final argument that I've seen >> > was about lack of higher order functions which is simply >> > not true since we have #map, #filter and #flatmap >> > functions. You can do pretty amazing things with this. >> > > >> > > I decided to check the performance when refactoring REST >> > interface. I created a PR with Optionals [5], ran >> > performance tests, removed all Optionals and reran tests. 
>> > You will be surprised by the results [6]: >> > > >> > > Test case >> > > With Optionals [%] Without Optionals >> > > Run 1 Run 2 Avg Run 1 Run 2 Avg >> > > Non-TX reads 10 threads >> > > Throughput 32.54 32.87 32.71 31.74 34.04 32.89 >> > > Response time -24.12 -24.63 -24.38 -24.37 -25.69 >> -25.03 >> > > Non-TX reads 100 threads >> > > Throughput 6.48 -12.79 -3.16 -7.06 -6.14 -6.60 >> > > Response time -6.15 14.93 4.39 7.88 6.49 7.19 >> > > Non-TX writes 10 threads >> > > Throughput 9.21 7.60 8.41 4.66 7.15 5.91 >> > > Response time -8.92 -7.11 -8.02 -5.29 -6.93 -6.11 >> > > Non-TX writes 100 threads >> > > Throughput 2.53 1.65 2.09 -1.16 4.67 1.76 >> > > Response time -2.13 -1.79 -1.96 0.91 -4.67 -1.88 >> > > >> > > I also created JMH + Flight Recorder tests and again, >> > the results showed no evidence of slow down caused by >> > Optionals [7]. >> > > >> > > Now please take those results with a grain of salt since >> > they tend to drift by a factor of +/-5% (sometimes even >> > more). But it's very clear the performance results are >> > very similar if not the same. >> > > >> > > Having those numbers at hand, do we want to have >> > Optionals in Infinispan codebase or not? And if not, let's >> > state it very clearly (and write it into contributing >> > guide), it's because we don't like them. Not because of >> > performance. 
>> > > >> > > Thanks, >> > > Sebastian >> > > >> > > [1] >> > http://lists.jboss.org/pipermail/infinispan-dev/2017- >> March/017370.html >> > > March/017370.html> >> > > [2] >> > http://lists.jboss.org/pipermail/infinispan-dev/2016- >> August/016796.html >> > > August/016796.html> >> > > [3] >> > http://vanillajava.blogspot.ro/2015/01/java-lambdas-and- >> low-latency.html >> > > low-latency.html> >> > > [4] >> > https://issues.jboss.org/issues/?jql=project%20%3D% >> 20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E% >> 20%22NullPointerException%22%20AND%20created%20%3E%3D% >> 202015-04-27%20AND%20created%20%3C%3D%202017-04-27 >> > > 20ISPN%20AND%20issuetype%20%3D%20Bug%20AND%20text%20%7E% >> 20%22NullPointerException%22%20AND%20created%20%3E%3D% >> 202015-04-27%20AND%20created%20%3C%3D%202017-04-27> >> > > [5] https://github.com/infinispan/infinispan/pull/5094 >> > >> > > [6] >> > https://docs.google.com/a/redhat.com/spreadsheets/d/ >> 1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing >> > > 1oep6Was0FfvHdqBCwpCFIqcPfJZ5-5_YYUqlRtUxEkM/edit?usp=sharing> >> > > [7] >> > https://github.com/infinispan/infinispan/pull/ >> 5094#issuecomment-296970673 >> > > 5094#issuecomment-296970673> >> > > -- >> > > SEBASTIAN ?ASKAWIEC >> > > INFINISPAN DEVELOPER >> > > Red Hat EMEA >> > > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > > >> > > _______________________________________________ >> > > infinispan-dev mailing list >> > > infinispan-dev at lists.jboss.org >> > >> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > -- >> > >> > SEBASTIAN?ASKAWIEC >> > >> > 
INFINISPAN DEVELOPER >> > >> > Red HatEMEA >> > >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org > jboss.org> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss Performance Team >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > > SEBASTIAN ?ASKAWIEC > > INFINISPAN DEVELOPER > > Red Hat EMEA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170525/4d92342d/attachment-0001.html From slaskawi at redhat.com Mon May 29 06:57:00 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 29 May 2017 10:57:00 +0000 Subject: [infinispan-dev] Status 2017-05-29 Message-ID: Hey! Unfortunately I won't be able to attend today's status meeting, so here are tasks I was looking at last week: - I released KUBE_PING 9.3 and merged some downstream fixes into it. 
- I released Spring Boot Starters 1.0.0.Final and added some more documentation
- I added Infinispan Spring Boot Starters to Spring Initializr (https://start.spring.io/) - PR: https://github.com/spring-io/initializr/pull/434
  - Unfortunately it was rejected (see the comments in the PR). The main argument against it is that the Spring Boot developers are not in favor of adding another caching library. I'm trying to convince them that it would be beneficial to do so. Please chime into the discussion if you wish.
- I worked on accessing an Infinispan cluster hosted in Kubernetes from the outside world. I have very good results in:
  - https://github.com/slaskawi/external-ip-proxy
  - https://github.com/slaskawi/infinispan/tree/ISPN-7793/map_internal_external_addresses
  - I'm doing benchmarks at the moment and will post some more information soon.
  - I've been doing most of the tests on Google Container Engine and I must say, the user experience is really good.
- I created OpenShift templates for bootstrapping Infinispan: https://github.com/infinispan/infinispan-openshift-templates
- I also participated in lots and lots of discussions around Infinispan and Kubernetes.
This week I plan to:
- Finish exposing a cluster hosted in Kube to the outside world and publish benchmark results.
- Look briefly at Istio (sidecar approach)
- Start looking at ALPN and HTTP/2 support
Thanks, Sebastian -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA
From vrigamon at redhat.com Mon May 29 08:21:56 2017 From: vrigamon at redhat.com (Vittorio Rigamonti) Date: Mon, 29 May 2017 14:21:56 +0200 Subject: [infinispan-dev] Weekly meeting Message-ID:
Hi team, I won't be able to be in today's weekly meeting.
Here's my log for the last week:
HRCPP-379 Add ArchiveArtifacts plugins to Jenkinsfile - worked on the Jenkins pipeline that allows building releases. This took me some time, mainly because I spent a lot of time running the build to test my changes. Then I released the 8.1.1 clients. Announced here: http://blog.infinispan.org/2017/05/hotrod-clients-cc-811final-released.html
I'm also working on these:
HRCPP-376 SASL (PLAIN, MD5) implementation for Windows - rebasing the related PR
HRCPP-377 Expose SASL configuration to .NET - I think this will take some of this week's time, because I need to implement a mechanism to share a .NET callback function with the C++ core. This, I'm sure, will get me a lot of fun with SWIG.
-- Vittorio Rigamonti Senior Software Engineer Red Hat Milan, Italy vrigamon at redhat.com irc: rigazilla
From galder at redhat.com Mon May 29 10:44:23 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 29 May 2017 16:44:23 +0200 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-05-29 Message-ID: <4EF05063-AB55-470A-AA04-317C2EDC02FE@redhat.com>
Hi all, The logs for this week's meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-29-14.02.log.html Cheers, -- Galder Zamarreño Infinispan, Red Hat
From slaskawi at redhat.com Tue May 30 08:43:14 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 30 May 2017 14:43:14 +0200 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes Message-ID:
Hey guys! Over the past few weeks I've been working on accessing an Infinispan cluster deployed inside Kubernetes from the outside world.
The POC diagram looks like the following: [image: pasted1]
As a reminder, the easiest (though not the most effective) way to do it is to expose a load balancer Service (or a Node Port Service) and access it using a client with basic intelligence (so that it doesn't try to update the server list based on topology information). As you might expect, this won't give you much performance, but at least you could access the cluster. Another approach is to use TLS/SNI but again, the performance would be even worse.
During the research I tried to answer this problem and created the "External IP Controller" [1] (and a corresponding Pull Request for mapping internal/external addresses [2]). The main idea is to create a controller deployed inside Kubernetes which will create (and destroy if no longer needed) a load balancer per Infinispan Pod. Additionally, the controller exposes the mapping between internal and external addresses, which allows the client to properly update the server list as well as consistent hash information. A full working example is located here [3].
The biggest question is whether it's worth it. The short answer is yes. Here are some benchmark results of performing 10k puts and 10k puts&gets (please take them with a big grain of salt, I didn't optimize any server settings):
- Benchmark app deployed inside Kubernetes and using internal addresses (baseline):
  - 10k puts: 674.244 ± 16.654
  - 10k puts&gets: 1288.437 ± 136.207
- Benchmarking app deployed in a VM outside of Kubernetes with basic intelligence:
  - 10k puts: 1465.567 ± 176.349
  - 10k puts&gets: 2684.984 ± 114.993
- Benchmarking app deployed in a VM outside of Kubernetes with address mapping and topology aware hashing:
  - 10k puts: 1052.891 ± 31.218
  - 10k puts&gets: 2465.586 ± 85.034
Note that benchmarking Infinispan from a VM might be very misleading since it depends on data center configuration.
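To make the comparison above concrete, here is a plain-Java sketch (not the actual Hot Rod client API; the segment table and all addresses below are invented for illustration) of what "basic" versus "topology aware" client intelligence, plus the internal/external address mapping, amount to:

```java
import java.util.List;
import java.util.Map;

public class OwnerLookup {
    // Hypothetical topology: segment index -> owning server address,
    // standing in for the consistent-hash info a smart client receives.
    static final int NUM_SEGMENTS = 4;
    static final Map<Integer, String> SEGMENT_OWNERS = Map.of(
            0, "10.0.0.1:11222",
            1, "10.0.0.2:11222",
            2, "10.0.0.3:11222",
            3, "10.0.0.1:11222");

    // External IP Controller idea: translate the internal Pod address the
    // topology reports into the per-Pod load balancer address (made up here).
    static final Map<String, String> INTERNAL_TO_EXTERNAL = Map.of(
            "10.0.0.1:11222", "lb-1.example.com:11222",
            "10.0.0.2:11222", "lb-2.example.com:11222",
            "10.0.0.3:11222", "lb-3.example.com:11222");

    // Topology-aware client: hash the key to a segment and contact the
    // owner directly (one network hop).
    static String topologyAwareRoute(String key) {
        int segment = Math.floorMod(key.hashCode(), NUM_SEGMENTS);
        return SEGMENT_OWNERS.get(segment);
    }

    // External client: same lookup, then rewrite to the external address.
    static String externalRoute(String key) {
        return INTERNAL_TO_EXTERNAL.get(topologyAwareRoute(key));
    }

    // Basic-intelligence client: pick any server (round robin here); the
    // receiving node may then forward to the real owner (extra hop).
    static String basicRoute(int requestCount, List<String> servers) {
        return servers.get(requestCount % servers.size());
    }
}
```

With basic intelligence every request may land on a non-owner and cost an extra intra-cluster hop; the topology-aware lookup goes straight to the owner, and the address map lets a client outside the cluster do the same through a per-Pod load balancer.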
Benchmarks above definitely contain some delay between the Google Compute Engine VM and a Kubernetes cluster deployed in Google Container Engine. How big is the delay? Hard to tell. What counts is the difference between a client using basic intelligence and one using topology aware intelligence. And as you can see, it's not that small.
So the bottom line - if you can, deploy your application along with the Infinispan cluster inside Kubernetes. That's the fastest configuration since only iptables are involved. Otherwise use a load balancer per pod with the External IP Controller. If you don't care about performance, just use basic client intelligence and expose everything using a single load balancer.
Thanks, Sebastian
[1] https://github.com/slaskawi/external-ip-proxy
[2] https://github.com/infinispan/infinispan/pull/5164
[3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark
-------------- next part -------------- A non-text attachment was scrubbed... Name: pasted1 Type: image/png Size: 32688 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170530/18e7bbb4/attachment-0001.png
From mudokonman at gmail.com Tue May 30 09:25:16 2017 From: mudokonman at gmail.com (William Burns) Date: Tue, 30 May 2017 13:25:16 +0000 Subject: [infinispan-dev] Weekly IRC Meeting Logs 2017-05-29 In-Reply-To: <4EF05063-AB55-470A-AA04-317C2EDC02FE@redhat.com> References: <4EF05063-AB55-470A-AA04-317C2EDC02FE@redhat.com> Message-ID:
Monday (yesterday) was a holiday for me. Last week I was mostly working on ISPN-7841 (lock streams). Thanks to Radim for the review comments. I hope to have this all updated today. I also started looking into documentation for the new user guide section regarding executing code in the grid. Unfortunately I got side tracked and logged ISPN-7864 and ISPN-7865.
The latter involves enhancing streams to use fewer threads, which should help a lot of people :) I was working on that a bit in the later part of last week. So this week I hope to get to the documentation after the lock stream, unless I get side tracked again, which is a big possibility.
On Mon, May 29, 2017 at 11:18 AM Galder Zamarreño wrote: > Hi all, > > The logs for this week's meeting: > > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-29-14.02.log.html > > Cheers, > -- > Galder Zamarreño > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
From galder at redhat.com Tue May 30 09:36:19 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 30 May 2017 15:36:19 +0200 Subject: [infinispan-dev] IRC chat: HB + I9 In-Reply-To: References: <284D876E-99B2-4CCF-B0C7-49CF4C12A417@redhat.com> <8c834e21-438b-d883-6ce0-fa331ccc14fc@redhat.com> <848888EB-EA5C-4A34-B579-B86C610167D4@redhat.com> Message-ID: <2FEBC79C-CF2B-4470-933D-A0D0C833BAC4@redhat.com>
Thx Steve for your input. Seems like everyone agrees moving to Infinispan might be the best option, so I'll be sending a proposal to the list in the next few days. Cheers, -- Galder Zamarreño Infinispan, Red Hat
> On 25 May 2017, at 15:31, Steve Ebersole wrote: > > A lot to read through here so I apologize up front if I missed something... > > So to be fair I am biased as I would really like to not have to deal with these integrations :) That said, I do really believe that the best option is to move this code out of the hibernate/hibernate-orm repo.
> To me that could mean a separate repo altogether (infinispan/infinispan-hibernate-l2c, or similar) or into Infinispan proper if Infinispan already has a Hibernate dependency as Sanne mentioned somewhere. > > As far as Hibernate goes... master is in fact 5.2; 6.0 exists just in my fork atm - we are still discussing the exact event that should trigger moving that 6.0 branch upstream. The 6.0 timeline is still basically unknown, especially as far as a Final goes. > > > On Wed, May 24, 2017, 11:04 AM Galder Zamarreño wrote: > Adding Steve, > > Scott Marlow just reminded me that you've advocated for the Infinispan 2LC provider to be moved to the Infinispan source tree [2]. > > So, you might want to add your thoughts to this thread? > > Cheers, > > [2] http://transcripts.jboss.org/channel/irc.freenode.org/%23hibernate-dev/2015/%23hibernate-dev.2015-08-06.log.html > -- > Galder Zamarreño > Infinispan, Red Hat > > > On 24 May 2017, at 17:56, Paul Ferraro wrote: > > > > Option #4 would be my preference as well. The integration into WF has become increasingly cumbersome as the pace of Infinispan releases (and associated API changes) has increased. I would really rather avoid having to create and maintain forks of hibernate-infinispan to support combinations of Hibernate and Infinispan that don't exist in the upstream codebase. > > > > On Wed, May 24, 2017 at 11:18 AM, Sanne Grinovero wrote: > >> I would suggest option #4: move the 2LC implementation to Infinispan. > >> > >> I already suggested this in the past, but to remind the main arguments I have: > >> > >> - neither repository is ideal, but having it here vs there is not just moving the problem, as the two projects are different, have different timelines and different backwards compatibility policies.
> >> - Infinispan already depends on several Hibernate projects - even directly on Hibernate ORM itself via the JPA cachestore, and indirectly via Hibernate Search and WildFly - so moving the Infinispan dependency out of the Hibernate repository helps to linearize the build for one consistent stack. For example, right now WildFly master contains a combination of Hibernate ORM and Infinispan 2LC which is not the same combination as tested by running the 2LC testsuite; this happens all the time and brings its own set of issues & delays. > >> > >> - Infinispan changes way more often - and as Radim already suggested in his previous email - there's more benefit in having such advanced code more closely tied to Infinispan so that it can benefit from new capabilities, even though these might not be ready to be blessed as long term API. The 2LC SPI in Hibernate, on the other hand, is stable, and has to stay stable anyway, for other reasons, not least integration with other providers, so there's no symmetric benefit in having this code in Hibernate. > >> > >> - Infinispan releases breaking changes at a more aggressive pace. It's more useful for Infinispan 9 to be able to support older versions of Hibernate ORM than it is a drawback for a new ORM release not yet to have a compatible Infinispan release. This last point is the only drawback I can see, and frankly it's both a temporary situation, as Infinispan can catch up quickly, and a very unlikely situation, as Hibernate ORM is unlikely to change these SPIs in e.g. the next major release 6.0. > >> > >> - Infinispan occasionally breaks expectations of the 2LC code, as Galder just had to figure out with a painful upgrade.
We can all agree > >> that these changes are necessary, but I strongly believe it's useful to *know* about such breakages ASAP from the testsuite, not half a year later when a major dependency upgrade propagates to other projects. > >> > >> - The Hibernate ORM team would appreciate getting rid of debugging clustering and networking issues when there's the occasional failure, which are stressful as they are out of their area of expertise. > >> > >> I hope that makes sense? > >> > >> Thanks, > >> Sanne > >> > >> On 24 May 2017 at 08:49, Radim Vansa wrote: > >>> Hi Galder, > >>> > >>> I think that (3) is simply not possible (from a non-technical perspective) and I don't think we have the manpower to maintain 2 different modules (2). The current version does not seem ready (generic enough) to get into Infinispan, so either (1), or a lot more work towards (4) (which would be my preference). > >>> > >>> I haven't thought about all the steps for (4), but it seems that UnorderedDistributionInterceptor and LockingInterceptor should get into Infinispan as a flavour of repl/dist cache mode that applies updates in parallel on all owners without any ordering; it's up to the user to guarantee that changes to an entry are commutative. > >>> > >>> The 2LC code itself shouldn't use the TombstoneCallInterceptor/VersionedCallInterceptor now that there is the functional API; you should move the behavior to functions. > >>> > >>> Regarding the invalidation mode, I think that a variant that would void any writes to the entry (begin/end invalidation) could be moved to Infinispan, too.
I am not even sure if the current invalidation in > >>> Infinispan is useful - you can't transparently cache access to a repeatable-read isolated DB (where reads block writes), but the blocking as we do in 2LC now is probably too strong if we're working with a DB using just read committed as the isolation level. I was always trying to enforce linearizability; TBH I don't know how to write a test that would test a more relaxed consistency. > >>> > >>> Btw., I've noticed that you've set the isolation level to READ_COMMITTED in the default configuration - isolation level does not apply at all to non-transactional caches, so please remove that as it would be just noise. > >>> > >>> Radim > >>> > >>> On 05/23/2017 03:07 PM, Galder Zamarreño wrote: > >>>> Hi all, > >>>> > >>>> I've just finished integrating Infinispan with a HB 6.x branch Steve had; all tests pass now [1]. > >>>> > >>>> Yeah, we didn't commit on the final location for these changes. > >>>> > >>>> As far as I know, Hibernate master is not 6.x, but rather 5.2.x. There's no 5.2.x branch in the Hibernate main repo. 6.x is just a branch that Steve has. > >>>> > >>>> These are the options available to us: > >>>> > >>>> 1. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 6.x branch. > >>>> > >>>> 2. Integrate the 9.x provider as part of a second Infinispan module in the Hibernate 5.x branch. > >>>> > >>>> 3. Integrate the 9.x provider as part of 'hibernate-infinispan' in the Hibernate 5.x branch. This is problematic since the provider is not backwards compatible. > >>>> > >>>> 4. Integrate the 9.x provider in Infinispan and deliver it as part of Infinispan rather than Hibernate. > >>>> > >>>> I'm not sure which one I prefer the most, TBH... 1. is the ideal solution but it doesn't seem there will be a Hibernate 6.x release for a while. 2./3./4. all have their downsides... :\ > >>>> > >>>> Thoughts?
> >>>> [1] https://github.com/galderz/hibernate-orm/commits/t_i9x_v2 > >>>> -- > >>>> Galder Zamarreño > >>>> Infinispan, Red Hat > >>>> > >>>>> On 16 May 2017, at 17:06, Paul Ferraro wrote: > >>>>> > >>>>> Thanks Galder. I read through the infinispan-dev thread on the subject, but I'm not sure what was concluded regarding the eventual home for this code. > >>>>> Once the testsuite passes, is the plan to commit to hibernate master? If so, I will likely fork these changes into a WF module (and adapt it for Hibernate 5.1.x) so that WF12 can move to JGroups4+Infinispan9 until Hibernate6 is integrated. > >>>>> > >>>>> Radim - one thing you mentioned on that infinispan-dev thread puzzled me: you said that invalidation mode offers no benefits over replication. How is that possible? Can you elaborate? > >>>>> > >>>>> Paul > >>>>> > >>>>> On Tue, May 16, 2017 at 9:03 AM, Galder Zamarreño wrote: > >>>>>> I'm on the move, not sure if Paul/Radim saw my replies: > >>>>>> > >>>>>> galderz, rvansa: Hey guys - is there a plan for Hibernate & ISPN 9? > >>>>>> pferraro: Galder has been working on that > >>>>>> pferraro: though I haven't seen any results but a list of stuff that needs to be changed > >>>>>> galderz: which Hibernate branch are you targeting? > >>>>>> pferraro: 5.2, but there are minute differences between 5.x in terms of the parts that need love to get Infinispan 9 support > >>>>>> *** Mode change: +v vblagoje on #infinispan by ChanServ (ChanServ at services.) > >>>>>> rvansa: are you suggesting that 5.0 or 5.1 branches will be adapted to additionally support infinispan 9? how is that possible? > >>>>>>> pferraro: i'm working on it as we speak... > >>>>>>> pferraro: down to 16 failures > >>>>>>> pferraro: i started a couple of months ago, but had talks/demos to prepare > >>>>>>> pferraro: i've got back to working on it this week > >>>>>> ...
> >>>>>>> pferraro: rvansa > >>>>>>> rvansa: minute differences my ass ;p > >>>>>>> pferraro: did you see my replies? > >>>>>>> i got disconnected while replying... > >>>>>> hmm - no - I didn't > >>>>>> galderz: ^ > >>>>>>> pferraro: so, working on the HB + I9 integration as we speak > >>>>>>> pferraro: i started a couple of months back but had talks/demos to > >>>>>> prepare and had to put that aside > >>>>>>> pferraro: i'm down to 16 failures > >>>>>>> pferraro: serious refactoring required of the integration to get it > >>>>>> to compile and the tests to pass > >>>>>>> pferraro: need to switch to async interceptor stack in 2lc > >>>>>> integration and get all the subtle changes right > >>>>>>> pferraro: it's a painstaking job basically > >>>>>>> pferraro: i'm working on > >>>>>> https://github.com/galderz/hibernate-orm/tree/t_i9x_v2 > >>>>>>> pferraro: i can't remember where i branched off, but it's a branch > >>>>>> that steve had since master was focused on 5.x > >>>>>>> pferraro: i've no idea when/where we'll integrate this, but one > >>>>>> thing is for sure: it's nowhere near backwards compatible > >>>>>>> actually, fixed one this morning, so down to 15 failures > >>>>>>> pferraro: any suggestions/wishes? > >>>>>>> is anyone out there? 
;) > >>>>>> Cheers, > >>>>>> -- > >>>>>> Galder Zamarreño > >>>>>> Infinispan, Red Hat > >>> > >>> -- > >>> Radim Vansa > >>> JBoss Performance Team > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
From sanne at infinispan.org Tue May 30 10:46:24 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 30 May 2017 15:46:24 +0100 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: References: Message-ID:
Hi Sebastian, the "intelligent routing" of Hot Rod being one of - if not the main - reasons to use Hot Rod, I wonder if we shouldn't rather suggest that people stick with HTTP (REST) in such architectures.
Several people have suggested in the past the need to have an HTTP smart load balancer which would be able to route the external REST requests to the right node. Essentially have people use REST over the wider network, up to reaching the Infinispan cluster, where the service endpoint (the load balancer) can convert them to optimised Hot Rod calls, or just leave them in the same format but route them with the same intelligence to the right nodes.
I realise my proposal requires some work on several fronts; at the very least we would need: - feature parity Hot Rod / REST so that people can actually use it - a REST load balancer
But I think the output of such a direction would be far more reusable, as both these points are high on the wish list anyway. Not least, having a "REST load balancer" would allow us to deploy Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client to their liking, without us having to maintain Hot Rod clients and support them on many exotic platforms - we would still have Hot Rod clients but we'd be able to pick a smaller set of strategic platforms (e.g.
Windows doesn't have to be in that list). Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or it could be written in another language; all it takes is to integrate a Hot Rod client - or just the intelligence of it - as an extension into an existing load balancer of our choice.
Allow me a bit more nit-picking on your benchmarks ;) As you pointed out yourself, there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini", ... If you know it's a flawed setup, I'd rather not publish figures, and especially not suggest making decisions based on such results. At this level of design we need to focus on getting the architecture right; it should be self-evident that your proposal of actually using intelligent routing in some way should be better than not using it. Once we have an agreement on a sound architecture, then we'll be able to make the implementation efficient.
Thanks, Sanne
On 30 May 2017 at 13:43, Sebastian Laskawiec wrote: > Hey guys! > > Over the past few weeks I've been working on accessing an Infinispan cluster deployed inside Kubernetes from the outside world. The POC diagram looks like the following: > > [image: pasted1] > > As a reminder, the easiest (though not the most effective) way to do it is to expose a load balancer Service (or a Node Port Service) and access it using a client with basic intelligence (so that it doesn't try to update the server list based on topology information). As you might expect, this won't give you much performance but at least you could access the cluster. Another approach is to use TLS/SNI but again, the performance would be even worse. > > During the research I tried to answer this problem and created the "External IP Controller" [1] (and a corresponding Pull Request for mapping internal/external addresses [2]).
The main idea is to create a controller deployed inside Kubernetes which will create (and destroy if not needed) a load balancer per Infinispan Pod. Additionally the controller exposes the mapping between internal and external addresses which allows the client to properly update the server list as well as consistent hash information. A full working example is located here [3]. > > The biggest question is whether it's worth it? The short answer is yes. Here are some benchmark results of performing 10k puts and 10k puts&gets (please take them with a big grain of salt, I didn't optimize any server settings): > > - Benchmark app deployed inside Kubernetes and using internal addresses (baseline): > - 10k puts: 674.244 ± 16.654 > - 10k puts&gets: 1288.437 ± 136.207 > - Benchmarking app deployed in a VM outside of Kubernetes with basic intelligence: > - 10k puts: 1465.567 ± 176.349 > - 10k puts&gets: 2684.984 ± 114.993 > - Benchmarking app deployed in a VM outside of Kubernetes with address mapping and topology aware hashing: > - 10k puts: 1052.891 ± 31.218 > - 10k puts&gets: 2465.586 ± 85.034 > > Note that benchmarking Infinispan from a VM might be very misleading since it depends on data center configuration. Benchmarks above definitely contain some delay between the Google Compute Engine VM and a Kubernetes cluster deployed in Google Container Engine. How big is the delay? Hard to tell. What counts is the difference between a client using basic intelligence and topology aware intelligence. And as you can see, it's not that small. > > So the bottom line - if you can, deploy your application along with the Infinispan cluster inside Kubernetes. That's the fastest configuration since only iptables are involved. Otherwise use a load balancer per pod with the External IP Controller. If you don't care about performance, just use basic client intelligence and expose everything using a single load balancer.
> > Thanks, > Sebastian
> [1] https://github.com/slaskawi/external-ip-proxy > [2] https://github.com/infinispan/infinispan/pull/5164 > [3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark
> _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev
From emmanuel at hibernate.org Wed May 31 03:15:21 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 31 May 2017 15:15:21 +0800 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: References: Message-ID: <38965B3D-F854-47BC-9F0B-F56BAEB78FD5@hibernate.org>
To Sanne's point, I think HTTP(/2) would be a better longer term path if we think we can make it as efficient as the current HR. But let's evaluate the number of cycles to reach that point. Doing Seb's approach might be a good first step.
Speaking of Sebastian, I have been discussing with Burr and Edson the idea of a *node* sidecar (as opposed to a *pod* sidecar). To your problem, could you use a DaemonSet to enforce one Load Balancer per node, or at least per project, instead of one per pod deployed with Infinispan in it? WDYT, is it possible?
> On 30 May 2017, at 20:43, Sebastian Laskawiec wrote: > > Hey guys! > > Over the past few weeks I've been working on accessing an Infinispan cluster deployed inside Kubernetes from the outside world.
The POC diagram looks like the following: > > > > As a reminder, the easiest (though not the most effective) way to do it is to expose a load balancer Service (or a Node Port Service) and access it using a client with basic intelligence (so that it doesn't try to update the server list based on topology information). As you might expect, this won't give you much performance but at least you could access the cluster. Another approach is to use TLS/SNI but again, the performance would be even worse. > > During the research I tried to answer this problem and created the "External IP Controller" [1] (and a corresponding Pull Request for mapping internal/external addresses [2]). The main idea is to create a controller deployed inside Kubernetes which will create (and destroy if not needed) a load balancer per Infinispan Pod. Additionally the controller exposes the mapping between internal and external addresses which allows the client to properly update the server list as well as consistent hash information. A full working example is located here [3]. > > The biggest question is whether it's worth it? The short answer is yes. Here are some benchmark results of performing 10k puts and 10k puts&gets (please take them with a big grain of salt, I didn't optimize any server settings): > Benchmark app deployed inside Kubernetes and using internal addresses (baseline): > 10k puts: 674.244 ± 16.654 > 10k puts&gets: 1288.437 ± 136.207 > Benchmarking app deployed in a VM outside of Kubernetes with basic intelligence: > 10k puts: 1465.567 ± 176.349 > 10k puts&gets: 2684.984 ± 114.993 > Benchmarking app deployed in a VM outside of Kubernetes with address mapping and topology aware hashing: > 10k puts: 1052.891 ± 31.218 > 10k puts&gets: 2465.586 ± 85.034 > Note that benchmarking Infinispan from a VM might be very misleading since it depends on data center configuration.
Benchmarks above definitely contain some delay between Google Compute Engine VM and a Kubernetes cluster deployed in Google Container Engine. How big is the delay? Hard to tell. What counts is the difference between client using basic intelligence and topology aware intelligence. And as you can see it's not that small. > > So the bottom line - if you can, deploy your application along with Infinispan cluster inside Kubernetes. That's the fastest configuration since only iptables are involved. Otherwise use a load balancer per pod with External IP Controller. If you don't care about performance, just use basic client intelligence and expose everything using single load balancer. > > Thanks, > Sebastian > > [1] https://github.com/slaskawi/external-ip-proxy > [2] https://github.com/infinispan/infinispan/pull/5164 > [3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170531/ca24b92c/attachment.html From rvansa at redhat.com Wed May 31 03:32:50 2017 From: rvansa at redhat.com (Radim Vansa) Date: Wed, 31 May 2017 09:32:50 +0200 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: References: Message-ID: On 05/30/2017 04:46 PM, Sanne Grinovero wrote: > Hi Sebastian, > > the "intelligent routing" of Hot Rod being one of - if not the main - > reason to use Hot Rod, I wonder if we shouldn't rather suggest people > to stick with HTTP (REST) in such architectures. > > Several people have suggested in the past the need to have an HTTP > smart load balancer which would be able to route the external REST > requests to the right node. 
Essentially have people use REST over the > wider network, up to reaching the Infinispan cluster where the service > endpoint (the load balancer) can convert them to optimised Hot Rod > calls, or just leave them in the same format but routing them with the > same intelligence to the right nodes. > > I realise my proposal requires some work on several fronts, at very > least we would need: > - feature parity Hot Rod / REST so that people can actually use it > - a REST load balancer > > But I think the output of such a direction would be far more reusable, > as both these points are high on the wish list anyway. You could already create this architecture: expose the REST interface on a node with capacity factor 0 and this node will convert the REST calls into 'optimized JGroups calls'. You could have multiple such nodes, to eliminate single point of failure. There could be a very short hiccup when you remove/add these 'routers', but since these don't contain any data, it will be very short. Or you could even keep data on these nodes and then some of the operations will be even faster. Problem solved? > > Not least having a "REST load balancer" would allow to deploy > Infinispan as an HTTP cache; just honouring the HTTP caching protocols > and existing standards would allow people to use any client to their > liking, without us having to maintain Hot Rod clients and support it > on many exotic platforms - we would still have Hot Rod clients but > we'd be able to pick a smaller set of strategical platforms (e.g. > Windows doesn't have to be in that list). > > Such a load balancer could be written in Java (recent WildFly versions > are able to do this efficiently) or it could be written in another > language, all it takes is to integrate an Hot Rod client - or just the > intelligence of it- as an extension into an existing load balancer of > our choice. 
> > Allow me a bit more nit-picking on your benchmarks ;) > As you pointed out yourself there are several flaws in your setup: > "didn't tune", "running in a VM", "benchmarked on a mac mini", ...if > you know it's a flawed setup I'd rather not publish figures, > especially not suggest to make decisions based on such results. > At this level of design need to focus on getting the architecture > right; it should be self-speaking that your proposal of actually using > intelligent routing in some way should be better than not using it. > Once we'll have an agreement on a sound architecture, then we'll be > able to make the implementation efficient. > > Thanks, > Sanne > > > > > On 30 May 2017 at 13:43, Sebastian Laskawiec > wrote: > > Hey guys! > > Over past few weeks I've been working on accessing Infinispan > cluster deployed inside Kubernetes from the outside world. The POC > diagram looks like the following: > > pasted1 > > As a reminder, the easiest (though not the most effective) way to > do it is to expose a load balancer Service (or a Node Port > Service) and access it using a client with basic intelligence (so > that it doesn't try to update server list based on topology > information). As you might expect, this won't give you much > performance but at least you could access the cluster. Another > approach is to use TLS/SNI but again, the performance would be > even worse. > > During the research I tried to answer this problem and created > "External IP Controller" [1] (and corresponding Pull Request for > mapping internal/external addresses [2]). The main idea is to > create a controller deployed inside Kubernetes which will create > (and destroy if not needed) a load balancer per Infinispan Pod. > Additionally the controller exposes mapping between internal and > external addresses which allows the client to properly update > server list as well as consistent hash information. A full working > example is located here [3]. 
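The "client with basic intelligence" setup described above corresponds roughly to the following Hot Rod client configuration (a sketch against the 9.x Java client API; the load balancer host name is a placeholder):

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Point the client at the single exposed load balancer and force BASIC
// intelligence, so it never replaces its server list with the internal
// pod addresses it cannot reach from outside the cluster.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("datagrid-lb.example.com").port(11222);
builder.clientIntelligence(ClientIntelligence.BASIC);
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
```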
> > The biggest question is whether it's worth it? The short answer is > yes. Here are some benchmark results of performing 10k puts and > 10k puts&gets (please take them with a big grain of salt, I didn't > optimize any server settings): > > * Benchmark app deployed inside Kubernetes and using internal > addresses (baseline): > o 10k puts: 674.244 ± 16.654 > o 10k puts&gets: 1288.437 ± 136.207 > * Benchmarking app deployed in a VM outside of Kubernetes with > basic intelligence: > o *10k puts: 1465.567 ± 176.349* > o *10k puts&gets: 2684.984 ± 114.993* > * Benchmarking app deployed in a VM outside of Kubernetes with > address mapping and topology aware hashing: > o *10k puts: 1052.891 ± 31.218* > o *10k puts&gets: 2465.586 ± 85.034* > > Note that benchmarking Infinispan from a VM might be very > misleading since it depends on data center configuration. > Benchmarks above definitely contain some delay between the Google > Compute Engine VM and a Kubernetes cluster deployed in Google > Container Engine. How big is the delay? Hard to tell. What counts > is the difference between a client using basic intelligence and > topology aware intelligence. And as you can see it's not that small. > > So the bottom line - if you can, deploy your application along > with the Infinispan cluster inside Kubernetes. That's the fastest > configuration since only iptables are involved. Otherwise use a > load balancer per pod with the External IP Controller. If you don't > care about performance, just use basic client intelligence and > expose everything using a single load balancer.
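The gap between the two client modes above comes down to routing: a topology-aware client hashes each key to a segment and contacts the owning server directly, while a basic client hits whichever node the load balancer picks and pays an extra server-to-server hop. A self-contained illustration of the owner lookup (plain Java for demonstration only; the real Hot Rod client uses MurmurHash3 and the consistent-hash topology sent by the server):

```java
import java.util.Arrays;
import java.util.List;

public class TopologyAwareRouting {

    static final int NUM_SEGMENTS = 256;

    // Map a key to one of NUM_SEGMENTS segments (demo hash only).
    static int segmentOf(String key) {
        return Math.floorMod(key.hashCode(), NUM_SEGMENTS);
    }

    // Map a segment to its primary owner, spreading segments evenly over servers.
    static String ownerOf(String key, List<String> servers) {
        return servers.get(segmentOf(key) % servers.size());
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("pod-0:11222", "pod-1:11222", "pod-2:11222");
        // A topology-aware client sends the operation straight to the owner;
        // a basic client reaches an arbitrary node, which then forwards it.
        System.out.println("route put(some-key) to " + ownerOf("some-key", servers));
    }
}
```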
> > Thanks, > Sebastian > > [1] https://github.com/slaskawi/external-ip-proxy > > [2] https://github.com/infinispan/infinispan/pull/5164 > > [3] > https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Wed May 31 03:38:38 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 31 May 2017 07:38:38 +0000 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: References: Message-ID: Hey Sanne, Comments inlined. Thanks, Sebastian On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero wrote: > Hi Sebastian, > > the "intelligent routing" of Hot Rod being one of - if not the main - > reason to use Hot Rod, I wonder if we shouldn't rather suggest people to > stick with HTTP (REST) in such architectures. > > Several people have suggested in the past the need to have an HTTP smart > load balancer which would be able to route the external REST requests to > the right node. Essentially have people use REST over the wider network, up > to reaching the Infinispan cluster where the service endpoint (the load > balancer) can convert them to optimised Hot Rod calls, or just leave them > in the same format but routing them with the same intelligence to the right > nodes. > > I realise my proposal requires some work on several fronts, at very least > we would need: > - feature parity Hot Rod / REST so that people can actually use it > - a REST load balancer > > But I think the output of such a direction would be far more reusable, as > both these points are high on the wish list anyway. 
> Unfortunately I'm not convinced by this idea. Let me elaborate... It goes without saying that an HTTP payload is simply larger and requires much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second argument is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables) and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2], I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured GCP load balancer vs my proxy performance. The load balancers were twice as fast [4][5]. You may argue whether I could write a better proxy. Probably I could, but the bottom line is that another performance hit is inevitable. The load balancers are really fast and they operate on their own infrastructure (load balancers are something that is provided by the cloud vendor to Kubernetes, not the other way around). So with all that in mind, are we going to get better results compared to my proposal for Hot Rod? I dare to doubt it, even with HTTP/2 support (which comes really soon I hope). The second question is whether this new "REST load balancer" will work better than a standard load balancer using a round robin strategy? Again I dare to doubt it; even if you're faster at routing requests to the proper node, you introduce another layer of latency. Of course the priority of this is up to Tristan but I definitely wouldn't place it high on the todo list. And before even looking at it I would recommend taking a Netty HTTP proxy, putting it in the middle between the real load balancer and the Infinispan app and measuring performance with and without it. Another test could be with 1 and 10 replicas to check the performance penalty of having 100% vs 10% of requests hit the proper node.
[1] https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-and-vmware.html [2] https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html [3] https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/Proxy/Proxy.go [4] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20proxy.txt [5] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20loadbalancer.txt > Not least having a "REST load balancer" would allow to deploy Infinispan > as an HTTP cache; just honouring the HTTP caching protocols and existing > standards would allow people to use any client to their liking, > Could you please give me an example of how this could work? The only way that I know of is to plug a cache into a reverse proxy. NGINX supports a pluggable Redis module, for example [6]. [6] https://www.nginx.com/resources/wiki/modules/redis/ > without us having to maintain Hot Rod clients and support it on many > exotic platforms - we would still have Hot Rod clients but we'd be able to > pick a smaller set of strategical platforms (e.g. Windows doesn't have to > be in that list). > As I mentioned before, I really doubt HTTP will be faster than Hot Rod in *any* scenario. > Such a load balancer could be written in Java (recent WildFly versions are > able to do this efficiently) or it could be written in another language, > all it takes is to integrate an Hot Rod client - or just the intelligence > of it- as an extension into an existing load balancer of our choice. > As I mentioned before, with a custom load balancer you're introducing another layer of latency. It's not a free ride.
> Allow me a bit more nit-picking on your benchmarks ;) > As you pointed out yourself there are several flaws in your setup: "didn't > tune", "running in a VM", "benchmarked on a mac mini", ...if you know it's > a flawed setup I'd rather not publish figures, especially not suggest to > make decisions based on such results. > Why not? Infinispan is a public project and anyone can benchmark it using JMH and taking decisions based on figures is always better than on intuition. Even though there were multiple unknown factors involved in this benchmark (this is why I pointed them out and asked to take the results with a grain of salt), the test conditions for all scenarios were the same. For me this is sufficient to give a general recommendation as I did. BTW, this recommendation fits exactly my expectations (communication inside Kube the fastest, LB per Pod a bit slower and no advanced routing the slowest). Finally, the recommendation is based on a POC which by definition means it doesn't fit all scenarios. You should always measure your system! So unless you can prove the benchmark results are fundamentally wrong and I have drawn wrong conclusions (e.g. a simple client is the fastest solution whereas inside Kubernetes communication is the slowest), please don't use "naaah, that's wrong" argument. It's rude. > At this level of design need to focus on getting the architecture right; > it should be self-speaking that your proposal of actually using intelligent > routing in some way should be better than not using it. > My benchmark confirmed this. But as always I would be happy to discuss some alternatives. But before trying to convince me to "REST Router", please prove that introducing a load balancer (or just a simple async proxy for start) gives similar or better performance than a simple load balancer with round robin strategy. > Once we'll have an agreement on a sound architecture, then we'll be able > to make the implementation efficient. 
> > Thanks, > Sanne > > > > > On 30 May 2017 at 13:43, Sebastian Laskawiec wrote: > >> Hey guys! >> >> Over past few weeks I've been working on accessing Infinispan cluster >> deployed inside Kubernetes from the outside world. The POC diagram looks >> like the following: >> >> [image: pasted1] >> >> As a reminder, the easiest (though not the most effective) way to do it >> is to expose a load balancer Service (or a Node Port Service) and access it >> using a client with basic intelligence (so that it doesn't try to update >> server list based on topology information). As you might expect, this won't >> give you much performance but at least you could access the cluster. >> Another approach is to use TLS/SNI but again, the performance would be even >> worse. >> >> During the research I tried to answer this problem and created "External >> IP Controller" [1] (and corresponding Pull Request for mapping >> internal/external addresses [2]). The main idea is to create a controller >> deployed inside Kubernetes which will create (and destroy if not needed) a >> load balancer per Infinispan Pod. Additionally the controller exposes >> mapping between internal and external addresses which allows the client to >> properly update server list as well as consistent hash information. A full >> working example is located here [3]. >> >> The biggest question is whether it's worth it? The short answer is yes. >> Here are some benchmark results of performing 10k puts and 10k puts&gets >> (please take them with a big grain of salt, I didn't optimize any server >> settings): >> >> - Benchmark app deployed inside Kubernetes and using internal >> addresses (baseline): >> - 10k puts: 674.244 ± 16.654 >> - 10k puts&gets: 1288.437 ± 136.207 >> - Benchmarking app deployed in a VM outside of Kubernetes with basic >> intelligence: >> - *10k puts: 1465.567 ± 176.349* >> - *10k puts&gets: 2684.984 ± 114.993* >> - Benchmarking app deployed in a VM outside of Kubernetes with >> address mapping and topology aware hashing: >> - *10k puts: 1052.891 ± 31.218* >> - *10k puts&gets: 2465.586 ± 85.034* >> >> Note that benchmarking Infinispan from a VM might be very misleading >> since it depends on data center configuration. Benchmarks above definitely >> contain some delay between Google Compute Engine VM and a Kubernetes >> cluster deployed in Google Container Engine. How big is the delay? Hard to >> tell. What counts is the difference between client using basic intelligence >> and topology aware intelligence. And as you can see it's not that small. >> >> So the bottom line - if you can, deploy your application along with >> Infinispan cluster inside Kubernetes. That's the fastest configuration >> since only iptables are involved. Otherwise use a load balancer per pod >> with External IP Controller. If you don't care about performance, just use >> basic client intelligence and expose everything using single load balancer. >> >> Thanks, >> Sebastian >> >> [1] https://github.com/slaskawi/external-ip-proxy >> [2] https://github.com/infinispan/infinispan/pull/5164 >> [3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- SEBASTIAN ŁASKAWIEC INFINISPAN DEVELOPER Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170531/55b0195d/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: pasted1 Type: image/png Size: 32688 bytes Desc: not available Url : http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170531/55b0195d/attachment-0001.png From amanukya at redhat.com Wed May 31 05:48:36 2017 From: amanukya at redhat.com (Anna Manukyan) Date: Wed, 31 May 2017 05:48:36 -0400 (EDT) Subject: [infinispan-dev] HotRod client TCK In-Reply-To: References: Message-ID: <655311575.13648540.1496224116536.JavaMail.zimbra@redhat.com> Hey Galder, thanks a lot for the review. I have updated the document based on your suggestions. Best regards, Anna. ----- Original Message ----- From: "Galder Zamarreño" To: "infinispan -Dev List" Sent: Monday, May 8, 2017 1:32:13 PM Subject: Re: [infinispan-dev] HotRod client TCK Btw, thanks Anna for working on this! I've had a look at the list and I have some questions: * HotRodAsyncReplicationTest: I don't think it should be a client TCK test. There's nothing the client does differently compared to executing against a sync repl cache. If anything, it's a server TCK test since it verifies that a put sent by a HR client gets replicated. The same applies to all the local vs REPL vs DIST tests. * LockingTest: same story, this is a client+server integration test, I don't think it's a client TCK test. If anything, it's a server TCK test. It verifies that if a client sends a put, the entry is locked. * MixedExpiry*Test: it's dependent on the server configuration, not really a client TCK test IMO. I think the only client TCK tests that deal with expiry should only verify that the entry is expirable if the client decides to make it expirable. * ClientListenerRemoveOnStopTest: Not sure this is a client TCK test. Yeah, it verifies that the client removes its listeners on stop, but it's not a Hot Rod protocol TCK test. Going back to what Radim said, how are you going to verify each client does this? What we can verify for all clients easily is that they send the commands to remove the client listeners to the server.
Maybe for these and the below client-specific, logic-related tests, as Martin suggested, we go with the approach of just verifying that tests exist. * Protobuf marshaller tests: client specific and testing client-side marshalling logic. Same reasons above. * Near caching tests: client specific and testing client-side near caching logic. Same issues above. * Topology change tests: I consider these TCK tests cos you could think that if the server sends a new topology, the client's next command should have the ID of this topology in its header. * Failover/Retry tests: client specific and testing client-side retry logic. Same issues above, how do you verify it works across the board for all clients? * Socket timeout tests: again these are client specific... I think in general it'd be a good idea to try to verify somehow most of the TCK via some server-side logic, as Radim hinted, and where that's not possible, revert to just verifying the client has tests to cover certain scenarios. Cheers, -- Galder Zamarreño Infinispan, Red Hat > On 11 Apr 2017, at 14:33, Martin Gencur wrote: > > Hello all, > we have been working on https://issues.jboss.org/browse/ISPN-7120. > > Anna has finished the first step from the JIRA - collecting information > about tests in the Java HotRod client test suite (including server > integration tests) and it is now prepared for wider review. > > She created a spreadsheet [1]. The spreadsheet includes for each Java > test its name, the suggested target package in the TCK, whether to > include it in the TCK or not, and some other notes. The suggested > package also poses a grouping for the tests (e.g. tck.query, tck.near, > tck.xsite, ...) > > Let me add that right now the goal is not to create a true TCK [2]. The > goal is to make sure that all implementations of the HotRod protocol > have sufficient test coverage and possibly the same server side of the > client-server test (including the server version and configuration). > > What are the next steps?
> > * Please review the list (at least a quick look) and see if some of the > tests which are NOT suggested for the TCK should be added or vice versa. > * I suppose the next step would then be to check other implementations > (C#, C++, NodeJS, ..) and identify tests which are missing there (there > will surely be some). > * Gradually implement the missing tests in the other implementations > Note: Here we should ensure that the server is configured in the same > way for all implementations. One way to achieve this (thanks Anna for > suggestion!) is to have a shell/batch scripts for CLI which would be > executed before the tests. This can probably be done for all impls. and > both UNIX/WINDOWS. I also realize that my PR for ISPN [3] becomes > useless because it uses Creaper (Java) and we need a language-neutral > solution for configuring the server. > > Some other notes: > * there are some duplicated tests in hotrod-client and server > integration test suites, in this case it probably makes sense to only > include in the TCK the server integration test > * tests from the hotrod-client module which are supposed to be part of > the TCK should be copied to the server integration test suite one day > (possibly later) > > Please let us know what you think. 
> > Thanks, > Martin > > > [1] > https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0 > [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit > [3] https://github.com/infinispan/infinispan/pull/5012 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ infinispan-dev mailing list infinispan-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/infinispan-dev From remerson at redhat.com Wed May 31 08:27:42 2017 From: remerson at redhat.com (Ryan Emerson) Date: Wed, 31 May 2017 08:27:42 -0400 (EDT) Subject: [infinispan-dev] Infinispan 9.1.0.Alpha1 Released In-Reply-To: <903528815.14419296.1496233616676.JavaMail.zimbra@redhat.com> Message-ID: <1973086041.14419475.1496233662713.JavaMail.zimbra@redhat.com> Dear All, Infinispan 9.1.0.Alpha1 has been released: http://blog.infinispan.org/2017/05/infinispan-910alpha1-released.html Cheers Ryan From sanne at infinispan.org Wed May 31 10:08:02 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 31 May 2017 15:08:02 +0100 Subject: [infinispan-dev] Allocation costs of TypeConverterDelegatingAdvancedCache Message-ID: Hi all, I've been running some benchmarks and for the first time playing with Infinispan 9+, so please bear with me as I might shoot some dumb questions to the list in the following days. The need for TypeConverterDelegatingAdvancedCache to wrap most operations - especially "convertKeys" - is highlighted as one of the high allocators in my Search-centric use case. I'm wondering: A - Could this implementation be improved? B - Could I bypass / disable it? Not sure why it's there.
Thanks, Sanne From galder at redhat.com Wed May 31 10:21:34 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 31 May 2017 16:21:34 +0200 Subject: [infinispan-dev] Proposal for moving Hibernate 2l provider to Infinispan Message-ID: <46A375F3-84FB-46FD-8523-7F3340B4DCA5@redhat.com> Hi all, Given all the previous discussions we've had on this list [1] [2], seems like there's a majority of opinions towards moving Infinispan Hibernate 2LC cache provider to the Infinispan repo. Although we could put it in a completely separate repo, given its importance, I think we should keep it in the main Infinispan repo. With this in mind, I wanted to propose the following: 1. Move the code Hibernate repository and bring it to Infinispan master and 9.0.x branches. We'd need to introduce the module in the 9.0.x branch so that 9.0.x users are not left out. 2. Create a root directory called `hibernate-orm` within Infinispan main repo. Within it, we'd keep 1 or more cache provider modules based on major Hibernate versions. 3. What should be the artifact name? Should it be 'hibernate-infinispan' like it is today? The difference with the existing cache provider would be the groupId. Or some other artifact id? 4. Should the main artifact contain the hibernate major version it belongs to? E.g. assuming we take 'hibernate-infinispan', should it be like that, or should it instead be 'hibernate5-infinispan'? This is where it'd be interesting to hear about our past Lucene directory or Query integration experience. 5. A thing to consider also is whether to maintain same package naming. We're currently using 'org.hibernate.cache.infinispan.*'. From a compatibility sense, it'd help to keep same package since users reference region factory fully qualified class names. We'd also continue to be sole owners of 'org.hibernate.cache.infinispan.*'. However, I dunno whether having 'org.hibernate...' package name within Infinispan repo would create other issues? 6. 
Testing wise, the cache provider is currently tested one test at a time, using JUnit. The testsuite already runs fast enough and I'd prefer not to change anything in this area right now. Is that Ok? Or is there any desire to move it to TestNG? Thoughts? Am I forgetting something? Cheers, [1] http://lists.jboss.org/pipermail/infinispan-dev/2017-February/017173.html [2] http://lists.jboss.org/pipermail/infinispan-dev/2017-May/017546.html -- Galder Zamarreño Infinispan, Red Hat From steve at hibernate.org Wed May 31 11:02:19 2017 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 31 May 2017 15:02:19 +0000 Subject: [infinispan-dev] Proposal for moving Hibernate 2l provider to Infinispan In-Reply-To: <46A375F3-84FB-46FD-8523-7F3340B4DCA5@redhat.com> References: <46A375F3-84FB-46FD-8523-7F3340B4DCA5@redhat.com> Message-ID: Just a heads up - FWIW I doubt my reply goes through to the entire infinispan-dev list. Replies inline... 3. What should be the artifact name? Should it be 'hibernate-infinispan' > like it is today? The difference with the existing cache provider would be > the groupId. Or some other artifact id? > Since you use Maven (IIUC) you could just publish a relocation... 4. Should the main artifact contain the hibernate major version it belongs > to? E.g. assuming we take 'hibernate-infinispan', should it be like that, > or should it instead be 'hibernate5-infinispan'? This is where it'd be > interesting to hear about our past Lucene directory or Query integration > experience. > Probably, but (no promises) one thing I wanted to look at since Hibernate baselines on Java 8, is to maintain the existing SPI using default methods as a bridge. But failing that, I think your suggestion is the best option. > 5. A thing to consider also is whether to maintain same package naming. > We're currently using 'org.hibernate.cache.infinispan.*'.
From a > compatibility sense, it'd help to keep same package since users reference > region factory fully qualified class names. We'd also continue to be sole > owners of 'org.hibernate.cache.infinispan.*'. However, I dunno whether > having 'org.hibernate...' package name within Infinispan repo would create > other issues? > FWIW Hibernate offers "short naming" or "friendly naming" for many configurable settings, cache providers being one. For hibernate-infinispan we register 2: "infinispan" and "infinispan-jndi". You can see this in org.hibernate.cache.infinispan.StrategyRegistrationProviderImpl. That approach will continue to work when you move it. The point being that users do not specify the class name in config, they'd just specify "infinispan", "infinispan-jndi", etc. 6. Testing wise, the cache provider is currently tested one test at the > time, using JUnit. The testsuite already runs fast enough and I'd prefer > not to change anything in this area right now. Is that Ok? Or is there any > desire to move it to TestNG? > Hmmm, that is actually surprising... I thought the hibernate-infinispan provider tests were still disabled as they had routinely caused intermittent failures of the build. I guess this was rectified? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170531/09539bff/attachment.html From galder at redhat.com Wed May 31 12:48:29 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 31 May 2017 18:48:29 +0200 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: References: Message-ID: <1E9DF83C-0BEC-4F7B-B91C-F1B4313F1E9C@redhat.com> Cool down peoples! http://www.quickmeme.com/meme/35ovcy Sebastian, don't think Sanne was being rude, he's just blunt and we need his bluntness :) Sanne, be nice to Sebastian and get him a beer next time around ;) Peace out! 
:) -- Galder Zamarreño Infinispan, Red Hat > On 31 May 2017, at 09:38, Sebastian Laskawiec wrote: > > Hey Sanne, > > Comments inlined. > > Thanks, > Sebastian > > On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero wrote: > Hi Sebastian, > > the "intelligent routing" of Hot Rod being one of - if not the main - reason to use Hot Rod, I wonder if we shouldn't rather suggest people to stick with HTTP (REST) in such architectures. > > Several people have suggested in the past the need to have an HTTP smart load balancer which would be able to route the external REST requests to the right node. Essentially have people use REST over the wider network, up to reaching the Infinispan cluster where the service endpoint (the load balancer) can convert them to optimised Hot Rod calls, or just leave them in the same format but routing them with the same intelligence to the right nodes. > > I realise my proposal requires some work on several fronts, at very least we would need: > - feature parity Hot Rod / REST so that people can actually use it > - a REST load balancer > > But I think the output of such a direction would be far more reusable, as both these points are high on the wish list anyway. > > Unfortunately I'm not convinced into this idea. Let me elaborate... > > It goes without saying that HTTP payload is simply larger and require much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second arguments is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables) and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2], I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured GCP load balancer vs my proxy performance. They were twice as fast [4][5]. You may argue whether I could write a better proxy.
Probably I could, but the bottom line is that another performance hit is inevitable. They are really fast and they operate on their own infrastructure (load balancers is something that is provided by the cloud vendor to Kubernetes, not the other way around). > > So with all that in mind, are we going to get better results comparing to my proposal for Hot Rod? I dare to doubt, even with HTTP/2 support (which comes really soon I hope). The second question is whether this new "REST load balancer" will work better than a standard load balancer using round robin strategy? Again I dare to doubt, even if you you're faster at routing request to proper node, you introduce another layer of latency. > > Of course the priority of this is up to Tristan but I definitely wouldn't place it high on todo list. And before even looking at it I would recommend taking Netty HTTP Proxy, putting it in the middle between real load balancer and Infinispan app and measure performance with and without it. Another test could be with 1 and 10 replicas to check the performance penalty of hitting 100% and 10% requests into proper node. > > [1] https://cloudplatform.googleblog.com/2014/08/containers-vms-kubernetes-and-vmware.html > [2] https://cloudplatform.googleblog.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html > [3] https://github.com/slaskawi/external-ip-proxy/blob/Benchmark_with_proxy/Proxy/Proxy.go > [4] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20proxy.txt > [5] https://github.com/slaskawi/external-ip-proxy/blob/master/benchmark/results%20with%20loadbalancer.txt > > Not least having a "REST load balancer" would allow to deploy Infinispan as an HTTP cache; just honouring the HTTP caching protocols and existing standards would allow people to use any client to their liking, > > Could you please give me an example how this could work? The only way that I know is to plug a cache into reverse proxy. 
NGNIX supports pluggable Redis for example [6]. > > [6] https://www.nginx.com/resources/wiki/modules/redis/ > > without us having to maintain Hot Rod clients and support it on many exotic platforms - we would still have Hot Rod clients but we'd be able to pick a smaller set of strategical platforms (e.g. Windows doesn't have to be in that list). > > As I mentioned before, I really doubts HTTP will be faster then Hot Rod in any scenario. > > Such a load balancer could be written in Java (recent WildFly versions are able to do this efficiently) or it could be written in another language, all it takes is to integrate an Hot Rod client - or just the intelligence of it- as an extension into an existing load balancer of our choice. > > As I mentioned before, with custom load balancer you're introducing another layer of latency. It's not a free ride. > > Allow me a bit more nit-picking on your benchmarks ;) > As you pointed out yourself there are several flaws in your setup: "didn't tune", "running in a VM", "benchmarked on a mac mini", ...if you know it's a flawed setup I'd rather not publish figures, especially not suggest to make decisions based on such results. > > Why not? Infinispan is a public project and anyone can benchmark it using JMH and taking decisions based on figures is always better than on intuition. Even though there were multiple unknown factors involved in this benchmark (this is why I pointed them out and asked to take the results with a grain of salt), the test conditions for all scenarios were the same. For me this is sufficient to give a general recommendation as I did. BTW, this recommendation fits exactly my expectations (communication inside Kube the fastest, LB per Pod a bit slower and no advanced routing the slowest). Finally, the recommendation is based on a POC which by definition means it doesn't fit all scenarios. You should always measure your system! 
> > So unless you can prove the benchmark results are fundamentally wrong and I have drawn wrong conclusions (e.g. a simple client is the fastest solution whereas inside-Kubernetes communication is the slowest), please don't use the "naaah, that's wrong" argument. It's rude. > > At this level of design we need to focus on getting the architecture right; it should be self-speaking that your proposal of actually using intelligent routing in some way should be better than not using it. > > My benchmark confirmed this. But as always I would be happy to discuss some alternatives. But before trying to convince me of the "REST Router", please prove that introducing a load balancer (or just a simple async proxy for a start) gives similar or better performance than a simple load balancer with a round robin strategy. > > Once we have an agreement on a sound architecture, then we'll be able to make the implementation efficient. > > Thanks, > Sanne > > > > > On 30 May 2017 at 13:43, Sebastian Laskawiec wrote: > Hey guys! > > Over the past few weeks I've been working on accessing an Infinispan cluster deployed inside Kubernetes from the outside world. The POC diagram looks like the following: > > > > As a reminder, the easiest (though not the most effective) way to do it is to expose a load balancer Service (or a Node Port Service) and access it using a client with basic intelligence (so that it doesn't try to update the server list based on topology information). As you might expect, this won't give you much performance, but at least you can access the cluster. Another approach is to use TLS/SNI, but again, the performance would be even worse. > > During the research I tried to answer this problem and created the "External IP Controller" [1] (and a corresponding Pull Request for mapping internal/external addresses [2]). The main idea is to create a controller deployed inside Kubernetes which will create (and destroy if no longer needed) a load balancer per Infinispan Pod. 
Additionally the controller exposes a mapping between internal and external addresses, which allows the client to properly update the server list as well as the consistent hash information. A full working example is located here [3]. > > The biggest question is: is it worth it? The short answer is yes. Here are some benchmark results of performing 10k puts and 10k puts&gets (please take them with a big grain of salt; I didn't optimize any server settings): > - Benchmark app deployed inside Kubernetes and using internal addresses (baseline): > - 10k puts: 674.244 ± 16.654 > - 10k puts&gets: 1288.437 ± 136.207 > - Benchmarking app deployed in a VM outside of Kubernetes with basic intelligence: > - 10k puts: 1465.567 ± 176.349 > - 10k puts&gets: 2684.984 ± 114.993 > - Benchmarking app deployed in a VM outside of Kubernetes with address mapping and topology-aware hashing: > - 10k puts: 1052.891 ± 31.218 > - 10k puts&gets: 2465.586 ± 85.034 > Note that benchmarking Infinispan from a VM might be very misleading since it depends on the data center configuration. The benchmarks above definitely contain some delay between the Google Compute Engine VM and a Kubernetes cluster deployed in Google Container Engine. How big is the delay? Hard to tell. What counts is the difference between a client using basic intelligence and one using topology-aware intelligence. And as you can see, it's not that small. > > So the bottom line - if you can, deploy your application along with the Infinispan cluster inside Kubernetes. That's the fastest configuration since only iptables are involved. Otherwise use a load balancer per pod with the External IP Controller. If you don't care about performance, just use basic client intelligence and expose everything using a single load balancer. 
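The address-mapping idea above can be sketched in a few lines of plain Java (names hypothetical; the real client-side logic lives in the pull request [2]): a topology update carries internal Pod addresses, and the client rewrites each one through the controller-provided mapping before refreshing its server list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of what an "External IP Controller"-aware client does:
// topology updates list internal Pod addresses, which must be rewritten to the
// per-Pod load balancer addresses before the client can connect from outside.
public class AddressMapper {
    private final Map<String, String> internalToExternal;

    public AddressMapper(Map<String, String> internalToExternal) {
        this.internalToExternal = internalToExternal;
    }

    /** Translate a topology-provided server list; unknown addresses are kept as-is. */
    public List<String> translate(List<String> topologyServers) {
        List<String> result = new ArrayList<>(topologyServers.size());
        for (String internal : topologyServers) {
            result.add(internalToExternal.getOrDefault(internal, internal));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new HashMap<>();
        mapping.put("10.0.0.1:11222", "104.155.0.1:11222"); // Pod 1 -> its load balancer
        mapping.put("10.0.0.2:11222", "104.155.0.2:11222"); // Pod 2 -> its load balancer
        AddressMapper mapper = new AddressMapper(mapping);
        System.out.println(mapper.translate(Arrays.asList("10.0.0.1:11222", "10.0.0.2:11222")));
        // -> [104.155.0.1:11222, 104.155.0.2:11222]
    }
}
```

The same mapping has to be applied when interpreting consistent-hash segment ownership, which is why the controller exposes it rather than leaving the client to guess.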
> > Thanks, > Sebastian > > [1] https://github.com/slaskawi/external-ip-proxy > [2] https://github.com/infinispan/infinispan/pull/5164 > [3] https://github.com/slaskawi/external-ip-proxy/tree/master/benchmark > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- > SEBASTIAN ŁASKAWIEC > INFINISPAN DEVELOPER > Red Hat EMEA From mudokonman at gmail.com Wed May 31 15:05:45 2017 From: mudokonman at gmail.com (William Burns) Date: Wed, 31 May 2017 19:05:45 +0000 Subject: [infinispan-dev] Allocation costs of TypeConverterDelegatingAdvancedCache In-Reply-To: References: Message-ID: Let me explain why it is there first :) This class was added for two main reasons: as a replacement for compatibility and for supporting equality of byte[] objects. What this class does at the user side is box the given arguments (e.g. byte[] -> WrappedByteArray); the cache then only ever deals with the boxed types and does the unboxing for any values that are returned. There are some exceptions to how the boxing/unboxing works for both cases, such as Streams and Indexing, which have to rebox the data to work properly. But the cost is pretty minimal. While compatibility is always either on or off, unfortunately anyone can pass in a byte[] at any point for a key or value. So we need to have these wrappers there to make sure they work properly. We could add an option to the cache (which people didn't show interest in before) to have a cache that doesn't support byte[] or compatibility. 
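As an aside, the byte[] boxing described above can be sketched with a simplified stand-in (this is not Infinispan's actual WrappedByteArray, just an illustration of the idea): raw byte[] uses identity-based equals/hashCode, so a wrapper delegating to Arrays.equals/Arrays.hashCode is what makes byte[] keys behave correctly in a hash-based container.

```java
import java.util.Arrays;

// Simplified stand-in for the kind of wrapper discussed here: byte[] inherits
// identity equals/hashCode from Object, so two arrays with identical contents
// are NOT equal as map keys. Wrapping them with content-based equality fixes that.
public final class WrappedByteArray {
    private final byte[] bytes;
    private final int hashCode;

    public WrappedByteArray(byte[] bytes) {
        this.bytes = bytes;
        this.hashCode = Arrays.hashCode(bytes); // computed once per boxing, then cached
    }

    public byte[] getBytes() {
        return bytes;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof WrappedByteArray)) return false;
        return Arrays.equals(bytes, ((WrappedByteArray) o).bytes);
    }

    @Override
    public int hashCode() {
        return hashCode;
    }

    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};
        System.out.println(a.equals(b));                                             // false: identity equality
        System.out.println(new WrappedByteArray(a).equals(new WrappedByteArray(b))); // true: content equality
    }
}
```

Caching the hash also means the hash of a large byte[] key is computed once per boxing rather than on every container lookup.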
In this case there would be no need for the wrapper. Compatibility: By using the wrapper around the cache, compatibility becomes quite trivial, since we just need a converter in the wrapper and it does everything else for us. My guess is the new encoding changes will utilize these wrapper classes as well, as they are quite easy to plug in and have things just work. Equality: With the rewrite for eviction we lost the ability to use a custom Equality in the data container. The only option for that is to wrap a byte[] to provide our own equality. Therefore the wrapper does this conversion for us automatically. On Wed, May 31, 2017 at 2:34 PM Sanne Grinovero wrote: > Hi all, > > I've been running some benchmarks and for the first time playing with > Infinispan 9+, so please bear with me as I might shoot some dumb > questions to the list in the following days. > > The need for TypeConverterDelegatingAdvancedCache to wrap most > operations - especially "convertKeys" - is highlighted as one of the > It should be wrapping every operation, pretty much. Unfortunately the methods this hurts the most are putAll, getAll, etc., as they have to not only box every entry but also copy the entries into a new collection, as you saw in "convertKeys". And for getAll it also has to unbox the return value as well. We could reduce allocations in the collection methods by not creating the new collection until we run into one key or value that requires boxing/unboxing. This would still require fully iterating over the collection in the best case. It should perform well in the majority of cases, as I would expect all or almost all entries to either require or not require the boxing. The cases that would be harmed most would be ones with only a sparse number of entries that require boxing. > high allocators in my Search-centric use case. > > I'm wondering: > A - Could this implementation be improved? > Most anything can be improved :) The best way would be to add another knob. B - Could I bypass / disable it? 
Not sure why it's there. > There is no way to bypass it currently. Explained above. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170531/6ce6eabd/attachment.html From sanne at infinispan.org Wed May 31 19:16:49 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 1 Jun 2017 00:16:49 +0100 Subject: [infinispan-dev] Using load balancers for Infinispan in Kubernetes In-Reply-To: <1E9DF83C-0BEC-4F7B-B91C-F1B4313F1E9C@redhat.com> References: <1E9DF83C-0BEC-4F7B-B91C-F1B4313F1E9C@redhat.com> Message-ID: On 31 May 2017 at 17:48, Galder Zamarreño wrote: > Cool down peoples! > > http://www.quickmeme.com/meme/35ovcy > > Sebastian, don't think Sanne was being rude, he's just blunt and we need his bluntness :) > > Sanne, be nice to Sebastian and get him a beer next time around ;) Hey, he started it! His email was formatted with HTML !? ;) But seriously, I didn't mean to be rude or disrespectful; if it comes across like that I'm sorry. FWIW the answers seemed cool to me too. Let me actually clarify that I love Sebastian's attitude of trying the various approaches and coming up with some measurements to help with important design decisions. It's good that we spend some time evaluating the alternatives, and it's equally good that we debate the trade-offs here. As warned in my email I'm "nit-picking" on the benchmark methodology, probably more than usual, because I care! I am highlighting what I believe to be useful advice though: the absolute metrics of such tests need not be taken as the primary (exclusive?) decision factor. Which doesn't mean that performing such tests is not useful; they certainly provide a lot to think about. 
Yet the interpretation of such results must not be generalised, and the interpretation process is more important than the absolute ballpark figures they provide; for example, it's paramount to figure out which factors of the test could theoretically invert the results. Using them to identify a binary faster/slower yes/no to prove/disprove a design decision is a dangerous fallacy .. and I'm not picking on Sebastian specifically, just reminding about it as we've all been guilty of it: confirmation bias, etc.. The best advice I've ever had myself in performance analysis is to not try to figure out which implementation is faster "on my machine", but to understand why it's producing a specific result, and what is preventing it from producing a higher figure. Once you know that, it's very valuable information, as it will tell you either what needs fixing in a benchmark, or what needs to be done to improve the performance of your implementation ;) So that's why I personally don't publish figures often, but hey, I still run such tests too and spend a lot of time analysing them, to eventually share what I figure out in the process... Thanks, Sanne > > Peace out! :) > -- > Galder Zamarreño > Infinispan, Red Hat > >> On 31 May 2017, at 09:38, Sebastian Laskawiec wrote: >> >> Hey Sanne, >> >> Comments inlined. >> >> Thanks, >> Sebastian >> >> On Tue, May 30, 2017 at 5:58 PM Sanne Grinovero wrote: >> Hi Sebastian, >> >> the "intelligent routing" of Hot Rod being one of - if not the main - reasons to use Hot Rod, I wonder if we shouldn't rather suggest people stick with HTTP (REST) in such architectures. >> >> Several people have suggested in the past the need to have an HTTP smart load balancer which would be able to route the external REST requests to the right node. 
Essentially have people use REST over the wider network, up to reaching the Infinispan cluster where the service endpoint (the load balancer) can convert them to optimised Hot Rod calls, or just leave them in the same format but routing them with the same intelligence to the right nodes. >> >> I realise my proposal requires some work on several fronts, at very least we would need: >> - feature parity Hot Rod / REST so that people can actually use it >> - a REST load balancer >> >> But I think the output of such a direction would be far more reusable, as both these points are high on the wish list anyway. >> >> Unfortunately I'm not convinced into this idea. Let me elaborate... >> >> It goes without saying that HTTP payload is simply larger and require much more processing. That alone makes it slower than Hot Rod (I believe Martin could provide you some numbers on that). The second arguments is that switching/routing inside Kubernetes is bloody fast (since it's based on iptables) and some cloud vendors optimize it even further (e.g. Google Andromeda [1][2], I would be surprised if AWS didn't have anything similar). During the work on this prototype I wrote a simple async binary proxy [3] and measured GCP load balancer vs my proxy performance. They were twice as fast [4][5]. You may argue whether I could write a better proxy. >> >> Thanks, >> Sebastian >> -- >> SEBASTIAN ŁASKAWIEC >> INFINISPAN DEVELOPER >> Red Hat EMEA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev