From belaban at mailbox.org Thu Feb 1 04:54:14 2018
From: belaban at mailbox.org (Bela Ban)
Date: Thu, 1 Feb 2018 10:54:14 +0100
Subject: [infinispan-dev] JGroups 4.0.10.Final released
Message-ID: <24584194-96ba-4a2b-d00a-bf8445344d94@mailbox.org>

FYI

Major features/issues fixed:

- INJECT_VIEW: contribution by Andrea Tarocchi to inject arbitrary views into a running cluster, e.g. probe.sh op=INJECT_VIEW.injectView["A=A/B;B=A/B;C=C/D;D=C/D"] injects a split brain view {A,B} into A and B and {C,D} into C and D. Very handy for interactive testing of split brains, thanks Andrea! [https://issues.jboss.org/browse/JGRP-2243]

- Internal thread pool (and its factory) can be set now. Important for Wildfly to prevent classloader leaks [https://issues.jboss.org/browse/JGRP-2244] [https://issues.jboss.org/browse/JGRP-2246]

- RPCs cannot invoke default methods inherited from interfaces [https://issues.jboss.org/browse/JGRP-2247]

- Fix for FILE_PING (and subclasses, such as NATIVE_S3_PING/GOOGLE_PING2, JDBC_PING etc) to remove members from the backend store [https://issues.jboss.org/browse/JGRP-2232]

Cheers,
--
Bela Ban | http://www.jgroups.org

From gustavo at infinispan.org Fri Feb 2 04:28:19 2018
From: gustavo at infinispan.org (Gustavo Fernandes)
Date: Fri, 2 Feb 2018 09:28:19 +0000
Subject: [infinispan-dev] [ANNOUNCE] Infinispan 9.2.0.RC2
Message-ID:

Dear Infinispan community,

Infinispan 9.2.0.CR2 has been released!

Read all about it on our blog:

http://blog.infinispan.org/2018/02/infinispan-920cr2-is-out.html

Cheers,
Gustavo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180202/78990c23/attachment.html

From ttarrant at redhat.com Mon Feb 5 06:02:40 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 5 Feb 2018 12:02:40 +0100
Subject: [infinispan-dev] Hot Rod secured by default
In-Reply-To: <87659D4A-0085-436B-897B-802B8E3DAB3F@redhat.com>
References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> <87659D4A-0085-436B-897B-802B8E3DAB3F@redhat.com>
Message-ID:

Sorry for reviving this thread, but I want to make sure we all agree on the following points.

DEFAULT CONFIGURATIONS
- The endpoints MUST be secure by default (authentication MUST be enabled and required) in all of the supplied default configurations.
- We can ship non-secure configurations, but these need to be clearly marked as such in the configuration filename (e.g. standalone-unsecured.xml).
- Memcached MUST NOT be enabled by default as we do not implement the binary protocol which is the only one that can do authn/encryption
- The default configurations (standalone.xml, domain.xml, cloud.xml) MUST enable only non-plaintext mechs (e.g. digest et al)

SERVER CHANGES
- Warn if a plain text mech is enabled on an unencrypted endpoint

API
- We MUST NOT add a "trust all certs" switch to the client config as that would thwart the whole purpose of encryption.

OPENSHIFT
- In the context of OpenShift, all pods MUST trust the master CA. This means that the CA must be injected into the trusted CAs for the pods AND into the JDK cacerts file. This MUST be done by the OpenShift JDK image automatically.
(Debian does this on startup: [1]) Tristan [1] https://git.mikael.io/mikaelhg/ca-certificates-java/blob/debian/20170531/src/main/java/org/debian/security/UpdateCertificates.java On 9/14/17 5:45 PM, Galder Zamarre?o wrote: > Gustavo's reply was the agreement reached. Secured by default and an easy way to use it unsecured is the best middle ground IMO. > > So, we've done the securing part partially, which needs to be completed by [2] (currently assigned to Tristan). > > More importantly, we also need to complete [3] so that we have ship the unsecured configuration, and then show people how to use that (docus, examples...etc). > > If you want to help, taking ownership of [3] would be best. > > Cheers, > > [2] https://issues.jboss.org/browse/ISPN-7815 > [3] https://issues.jboss.org/browse/ISPN-7818 > >> On 6 Sep 2017, at 11:03, Katia Aresti wrote: >> >> @Emmanuel, sure it't not a big deal, but starting fast and smooth without any trouble helps adoption. Concerning the ticket, there is already one that was acted. I can work on that, even if is assigned to Galder now. >> >> @Gustavo >> Yes, as I read - better - now on the security part, it is said for the console that we need those. My head skipped that paragraph or I read that badly, and I was wondering if it was more something related to "roles" rather than a user. My bad, because I read too fast sometimes and skip things ! Maybe the paragraph of the security in the console should be moved down to the console part, which is small to read now ? When I read there "see the security part bellow" I was like : ok, the security is done !! :) >> >> Thank you for your replies ! >> >> Katia >> >> >> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes wrote: >> Comments inlined >> >> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: >> And then I want to go to the console, requires me to put again the user/password. And it does not work. And I don't see how to disable security. And I don't know what to do. And I'm like : why do I need security at all here ? >> >> >> The console credentials are specified with MGMT_USER/MGMT_PASS env variables, did you try those? It will not work for APP_USER/APP_PASS. >> >> >> I wonder if you want to reconsider the "secured by default" point after my experience. >> >> >> The outcome of the discussion is that the clustered.xml will be secured by default, but you should be able to launch a container without any security by simply passing an alternate xml in the startup, and we'll ship this XML with the server. >> >> >> Gustavo >> >> >> My 2 cents, >> >> Katia >> >> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: >> Hi all, >> >> Tristan and I had chat yesterday and I've distilled the contents of the discussion and the feedback here into a JIRA [1]. The JIRA contains several subtasks to handle these aspects: >> >> 1. Remove auth check in server's CacheDecodeContext. >> 2. Default server configuration should require authentication in all entry points. >> 3. Provide an unauthenticated configuration that users can easily switch to. >> 4. Remove default username+passwords in docker image and instead show an info/warn message when these are not provided. >> 5. Add capability to pass in app user role groups to docker image easily, so that its easy to add authorization on top of the server. 
>> >> Cheers, >> >> [1] https://issues.jboss.org/browse/ISPN-7811 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >>> On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >>> >>> That is caused by not wrapping the calls in PrivilegedActions in all the >>> correct places and is a bug. >>> >>> Tristan >>> >>> On 19/04/2017 11:34, Sebastian Laskawiec wrote: >>>> The proposal look ok to me. >>>> >>>> But I would also like to highlight one thing - it seems you can't access >>>> secured cache properties using CLI. This seems wrong to me (if you can >>>> invoke the cli, in 99,99% of the cases you have access to the machine, >>>> so you can do whatever you want). It also breaks healthchecks in Docker >>>> image. >>>> >>>> I would like to make sure we will address those concerns. >>>> >>>> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >>> > wrote: >>>> >>>> Currently the "protected cache access" security is implemented as >>>> follows: >>>> >>>> - if authorization is enabled || client is on loopback >>>> allow >>>> >>>> The first check also implies that authentication needs to be in place, >>>> as the authorization checks need a valid Subject. >>>> >>>> Unfortunately authorization is very heavy-weight and actually overkill >>>> even for "normal" secure usage. >>>> >>>> My proposal is as follows: >>>> - the "default" configuration files are "secure" by default >>>> - provide clearly marked "unsecured" configuration files, which the user >>>> can use >>>> - drop the "protected cache" check completely >>>> >>>> And definitely NO to a dev switch. >>>> >>>> Tristan >>>> >>>> On 19/04/2017 10:05, Galder Zamarre?o wrote: >>>>> Agree with Wolf. Let's keep it simple by just providing extra >>>> configuration files for dev/unsecure envs. >>>>> >>>>> Cheers, >>>>> -- >>>>> Galder Zamarre?o >>>>> Infinispan, Red Hat >>>>> >>>>>> On 15 Apr 2017, at 12:57, Wolf Fink >>> > wrote: >>>>>> >>>>>> I would think a "switch" can have other impacts as you need to >>>> check it in the code - and might have security leaks here >>>>>> >>>>>> So what is wrong with some configurations which are the default >>>> and secured. >>>>>> and a "*-dev or *-unsecure" configuration to start easy. >>>>>> Also this can be used in production if there is no need for security >>>>>> >>>>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >>>> > wrote: >>>>>> I still think it would be better to create an extra switch to >>>> run infinispan in "development mode". This means no authentication, >>>> no encryption, possibly with JGroups stack tuned for fast discovery >>>> (especially in Kubernetes) and a big warning saying "You are in >>>> development mode, do not use this in production". >>>>>> >>>>>> Just something very easy to get you going. >>>>>> >>>>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >>>> > wrote: >>>>>> >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >>>> > wrote: >>>>>>> >>>>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >>>> > wrote: >>>>>>> Hi all, >>>>>>> >>>>>>> As per some discussions we had yesterday on IRC w/ Tristan, >>>> Gustavo and Sebastian, I've created a docker image snapshot that >>>> reverts the change stop protected caches from requiring security >>>> enabled [1]. >>>>>>> >>>>>>> In other words, I've removed [2]. The reason for temporarily >>>> doing that is because with the change as is, the changes required >>>> for a default server distro require that the entire cache manager's >>>> security is enabled. 
This is in turn creates a lot of problems with >>>> health and running checks used by Kubernetes/OpenShift amongst other >>>> things. >>>>>>> >>>>>>> Judging from our discussions on IRC, the idea is for such >>>> change to be present in 9.0.1, but I'd like to get final >>>> confirmation from Tristan et al. >>>>>>> >>>>>>> >>>>>>> +1 >>>>>>> >>>>>>> Regarding the "security by default" discussion, I think we >>>> should ship configurations cloud.xml, clustered.xml and >>>> standalone.xml with security enabled and disabled variants, and let >>>> users >>>>>>> decide which one to pick based on the use case. >>>>>> >>>>>> I think that's a better idea. >>>>>> >>>>>> We could by default have a secured one, but switching to an >>>> insecure configuration should be doable with minimal effort, e.g. >>>> just switching config file. >>>>>> >>>>>> As highlighted above, any secured configuration should work >>>> out-of-the-box with our docker images, e.g. WRT healthy/running checks. >>>>>> >>>>>> Cheers, >>>>>> >>>>>>> >>>>>>> Gustavo. >>>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >>>> (9.0.1-SNAPSHOT tag for anyone interested) >>>>>>> [2] >>>> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> Infinispan, Red Hat >>>>>>> >>>>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >>> > wrote: >>>>>>>> >>>>>>>> Dear all, >>>>>>>> >>>>>>>> after a mini chat on IRC, I wanted to bring this to >>>> everybody's attention. >>>>>>>> >>>>>>>> We should make the Hot Rod endpoint require authentication in the >>>>>>>> out-of-the-box configuration. >>>>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >>>>>>>> mechanism against the ApplicationRealm and require users to >>>> run the >>>>>>>> add-user script. >>>>>>>> This would achieve two goals: >>>>>>>> - secure out-of-the-box configuration, which is always a good idea >>>>>>>> - access to the "protected" schema and script caches which is >>>> prevented >>>>>>>> when not on loopback on non-authenticated endpoints. 
>>>>>>>> >>>>>>>> Tristan >>>>>>>> -- >>>>>>>> Tristan Tarrant >>>>>>>> Infinispan Lead >>>>>>>> JBoss, a division of Red Hat >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> -- >>>>>> SEBASTIAN ?ASKAWIEC >>>>>> INFINISPAN DEVELOPER >>>>>> Red Hat EMEA >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> -- >>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> -- >>>> >>>> SEBASTIAN?ASKAWIEC >>>> >>>> INFINISPAN DEVELOPER >>>> >>>> Red HatEMEA >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> -- >>> Tristan Tarrant >>> Infinispan Lead >>> JBoss, a division of Red Hat >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead and Data Grid 
Chief Architect JBoss, a division of Red Hat From sanne at infinispan.org Mon Feb 5 06:15:18 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 5 Feb 2018 11:15:18 +0000 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> <87659D4A-0085-436B-897B-802B8E3DAB3F@redhat.com> Message-ID: +1 To improve convenience, would be nice to also have some examples to configure credentials in some automated fashion; e.g. CLI scripts and an example of running them for a Maven integration test? Thanks, Sanne On 5 February 2018 at 11:02, Tristan Tarrant wrote: > Sorry for reviving this thread, but I want to make sure we all agree on > the following points. > > DEFAULT CONFIGURATIONS > - The endpoints MUST be secure by default (authentication MUST be > enabled and required) in all of the supplied default configurations. > - We can ship non-secure configurations, but these need to be clearly > marked as such in the configuration filename (e.g. > standalone-unsecured.xml). > - Memcached MUST NOT be enabled by default as we do not implement the > binary protocol which is the only one that can do authn/encryption > - The default configurations (standalone.xml, domain.xml, cloud.xml) > MUST enable only non-plaintext mechs (e.g. digest et al) > > SERVER CHANGES > - Warn if a plain text mech is enabled on an unencrypted endpoint > > API > - We MUST NOT add a "trust all certs" switch to the client config as > that would thwart the whole purpose of encryption. > > OPENSHIFT > - In the context of OpenShift, all pods MUST trust the master CA. This > means that the CA must be injected into the trusted CAs for the pods AND > into the JDK cacerts file. This MUST be done by the OpenShift JDK image > automatically. (Debian does this on startup: [1]) > > Tristan > > [1] > https://git.mikael.io/mikaelhg/ca-certificates-java/blob/debian/20170531/src/main/java/org/debian/security/UpdateCertificates.java > > On 9/14/17 5:45 PM, Galder Zamarre?o wrote: >> Gustavo's reply was the agreement reached. Secured by default and an easy way to use it unsecured is the best middle ground IMO. >> >> So, we've done the securing part partially, which needs to be completed by [2] (currently assigned to Tristan). >> >> More importantly, we also need to complete [3] so that we have ship the unsecured configuration, and then show people how to use that (docus, examples...etc). >> >> If you want to help, taking ownership of [3] would be best. >> >> Cheers, >> >> [2] https://issues.jboss.org/browse/ISPN-7815 >> [3] https://issues.jboss.org/browse/ISPN-7818 >> >>> On 6 Sep 2017, at 11:03, Katia Aresti wrote: >>> >>> @Emmanuel, sure it't not a big deal, but starting fast and smooth without any trouble helps adoption. Concerning the ticket, there is already one that was acted. I can work on that, even if is assigned to Galder now. >>> >>> @Gustavo >>> Yes, as I read - better - now on the security part, it is said for the console that we need those. My head skipped that paragraph or I read that badly, and I was wondering if it was more something related to "roles" rather than a user. My bad, because I read too fast sometimes and skip things ! Maybe the paragraph of the security in the console should be moved down to the console part, which is small to read now ? When I read there "see the security part bellow" I was like : ok, the security is done !! 
:) >>> >>> Thank you for your replies ! >>> >>> Katia >>> >>> >>> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes wrote: >>> Comments inlined >>> >>> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: >>> And then I want to go to the console, requires me to put again the user/password. And it does not work. And I don't see how to disable security. And I don't know what to do. And I'm like : why do I need security at all here ? >>> >>> >>> The console credentials are specified with MGMT_USER/MGMT_PASS env variables, did you try those? It will not work for APP_USER/APP_PASS. >>> >>> >>> I wonder if you want to reconsider the "secured by default" point after my experience. >>> >>> >>> The outcome of the discussion is that the clustered.xml will be secured by default, but you should be able to launch a container without any security by simply passing an alternate xml in the startup, and we'll ship this XML with the server. >>> >>> >>> Gustavo >>> >>> >>> My 2 cents, >>> >>> Katia >>> >>> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: >>> Hi all, >>> >>> Tristan and I had chat yesterday and I've distilled the contents of the discussion and the feedback here into a JIRA [1]. The JIRA contains several subtasks to handle these aspects: >>> >>> 1. Remove auth check in server's CacheDecodeContext. >>> 2. Default server configuration should require authentication in all entry points. >>> 3. Provide an unauthenticated configuration that users can easily switch to. >>> 4. Remove default username+passwords in docker image and instead show an info/warn message when these are not provided. >>> 5. Add capability to pass in app user role groups to docker image easily, so that its easy to add authorization on top of the server. >>> >>> Cheers, >>> >>> [1] https://issues.jboss.org/browse/ISPN-7811 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>>> On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >>>> >>>> That is caused by not wrapping the calls in PrivilegedActions in all the >>>> correct places and is a bug. >>>> >>>> Tristan >>>> >>>> On 19/04/2017 11:34, Sebastian Laskawiec wrote: >>>>> The proposal look ok to me. >>>>> >>>>> But I would also like to highlight one thing - it seems you can't access >>>>> secured cache properties using CLI. This seems wrong to me (if you can >>>>> invoke the cli, in 99,99% of the cases you have access to the machine, >>>>> so you can do whatever you want). It also breaks healthchecks in Docker >>>>> image. >>>>> >>>>> I would like to make sure we will address those concerns. >>>>> >>>>> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >>>> > wrote: >>>>> >>>>> Currently the "protected cache access" security is implemented as >>>>> follows: >>>>> >>>>> - if authorization is enabled || client is on loopback >>>>> allow >>>>> >>>>> The first check also implies that authentication needs to be in place, >>>>> as the authorization checks need a valid Subject. >>>>> >>>>> Unfortunately authorization is very heavy-weight and actually overkill >>>>> even for "normal" secure usage. >>>>> >>>>> My proposal is as follows: >>>>> - the "default" configuration files are "secure" by default >>>>> - provide clearly marked "unsecured" configuration files, which the user >>>>> can use >>>>> - drop the "protected cache" check completely >>>>> >>>>> And definitely NO to a dev switch. >>>>> >>>>> Tristan >>>>> >>>>> On 19/04/2017 10:05, Galder Zamarre?o wrote: >>>>>> Agree with Wolf. 
Let's keep it simple by just providing extra >>>>> configuration files for dev/unsecure envs. >>>>>> >>>>>> Cheers, >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>>> On 15 Apr 2017, at 12:57, Wolf Fink >>>> > wrote: >>>>>>> >>>>>>> I would think a "switch" can have other impacts as you need to >>>>> check it in the code - and might have security leaks here >>>>>>> >>>>>>> So what is wrong with some configurations which are the default >>>>> and secured. >>>>>>> and a "*-dev or *-unsecure" configuration to start easy. >>>>>>> Also this can be used in production if there is no need for security >>>>>>> >>>>>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >>>>> > wrote: >>>>>>> I still think it would be better to create an extra switch to >>>>> run infinispan in "development mode". This means no authentication, >>>>> no encryption, possibly with JGroups stack tuned for fast discovery >>>>> (especially in Kubernetes) and a big warning saying "You are in >>>>> development mode, do not use this in production". >>>>>>> >>>>>>> Just something very easy to get you going. >>>>>>> >>>>>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >>>>> > wrote: >>>>>>> >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> Infinispan, Red Hat >>>>>>> >>>>>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >>>>> > wrote: >>>>>>>> >>>>>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >>>>> > wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> As per some discussions we had yesterday on IRC w/ Tristan, >>>>> Gustavo and Sebastian, I've created a docker image snapshot that >>>>> reverts the change stop protected caches from requiring security >>>>> enabled [1]. >>>>>>>> >>>>>>>> In other words, I've removed [2]. The reason for temporarily >>>>> doing that is because with the change as is, the changes required >>>>> for a default server distro require that the entire cache manager's >>>>> security is enabled. This is in turn creates a lot of problems with >>>>> health and running checks used by Kubernetes/OpenShift amongst other >>>>> things. >>>>>>>> >>>>>>>> Judging from our discussions on IRC, the idea is for such >>>>> change to be present in 9.0.1, but I'd like to get final >>>>> confirmation from Tristan et al. >>>>>>>> >>>>>>>> >>>>>>>> +1 >>>>>>>> >>>>>>>> Regarding the "security by default" discussion, I think we >>>>> should ship configurations cloud.xml, clustered.xml and >>>>> standalone.xml with security enabled and disabled variants, and let >>>>> users >>>>>>>> decide which one to pick based on the use case. >>>>>>> >>>>>>> I think that's a better idea. >>>>>>> >>>>>>> We could by default have a secured one, but switching to an >>>>> insecure configuration should be doable with minimal effort, e.g. >>>>> just switching config file. >>>>>>> >>>>>>> As highlighted above, any secured configuration should work >>>>> out-of-the-box with our docker images, e.g. WRT healthy/running checks. >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>>> >>>>>>>> Gustavo. 
>>>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >>>>> (9.0.1-SNAPSHOT tag for anyone interested) >>>>>>>> [2] >>>>> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 >>>>>>>> -- >>>>>>>> Galder Zamarre?o >>>>>>>> Infinispan, Red Hat >>>>>>>> >>>>>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >>>> > wrote: >>>>>>>>> >>>>>>>>> Dear all, >>>>>>>>> >>>>>>>>> after a mini chat on IRC, I wanted to bring this to >>>>> everybody's attention. >>>>>>>>> >>>>>>>>> We should make the Hot Rod endpoint require authentication in the >>>>>>>>> out-of-the-box configuration. >>>>>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >>>>>>>>> mechanism against the ApplicationRealm and require users to >>>>> run the >>>>>>>>> add-user script. >>>>>>>>> This would achieve two goals: >>>>>>>>> - secure out-of-the-box configuration, which is always a good idea >>>>>>>>> - access to the "protected" schema and script caches which is >>>>> prevented >>>>>>>>> when not on loopback on non-authenticated endpoints. >>>>>>>>> >>>>>>>>> Tristan >>>>>>>>> -- >>>>>>>>> Tristan Tarrant >>>>>>>>> Infinispan Lead >>>>>>>>> JBoss, a division of Red Hat >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> -- >>>>>>> SEBASTIAN ?ASKAWIEC >>>>>>> INFINISPAN DEVELOPER >>>>>>> Red Hat EMEA >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>> >>>>> -- >>>>> Tristan Tarrant >>>>> Infinispan Lead >>>>> JBoss, a division of Red Hat >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> >>>>> SEBASTIAN?ASKAWIEC >>>>> >>>>> INFINISPAN DEVELOPER >>>>> >>>>> Red HatEMEA >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> -- 
>>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Tristan Tarrant > Infinispan Lead and Data Grid Chief Architect > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Feb 5 10:44:35 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 5 Feb 2018 16:44:35 +0100 Subject: [infinispan-dev] Weekly IRC Meeting logs 2018-02-05 Message-ID: <9d3eafed-afd7-d97e-51fc-ced47e64edd0@redhat.com> Hi all, the weekly meeting logs are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-02-05-15.01.log.html Tristan From sanne at infinispan.org Wed Feb 7 11:59:30 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 7 Feb 2018 16:59:30 +0000 Subject: [infinispan-dev] Hibernate Search integration modules for WildFly have been broken Message-ID: Hi all, I was going to give ISPN-8779 a shot but I'm finding a mess. the root pom contains these twice (and inconsistent!): 5.8.1.Final [...] 5.8.0.Final the BOM cointains a copy of `version.hibernate.search` as well. I don't mind deleting duplicate properties, but we used to have clearly separate properties for different purposes, and this separation is essential. I've mentioned this multiple times when reviewing PRs which would get my attention, but I didn't see these changes - certainly didn't expect you all to forget the special purpose of these modules. It's quite messy now and I'm honestly lost myself at how I could revert it. In particular this module is broken now as it's targeting the wrong slot: - https://github.com/infinispan/infinispan/blob/master/wildfly-modules/src/main/resources/org/infinispan/for-hibernatesearch-wildfly/main/module.xml#L27 Clearly it's not consistent with the comment I've put on the module descriptor. I don't see that module being included in the released modules either, and clearly the integration tests didn't catch it because they have been patched to use the wrong modules too :( Other essential integration tests which I had put in place to make sure they'd get your attention in case someone had such an idea.. have been deleted. Opening ISPN-8780, I would consider this a release blocker. 
Thanks, Sanne See also: - https://github.com/infinispan/infinispan/blob/master/integrationtests/as-lucene-directory/READ.ME From sanne at infinispan.org Wed Feb 7 19:45:35 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 8 Feb 2018 00:45:35 +0000 Subject: [infinispan-dev] PersistentUUIDManagerImpl NPEs being logged when running the testsuite In-Reply-To: References: Message-ID: FWIW, I'm still seeing these exceptions when running the testsuite. Opening a JIRA: - https://issues.jboss.org/browse/ISPN-8782 Maybe you should introduce some mechanism to mark the build failed when there are unexpected exceptions thrown in background? Thanks, Sanne On 30 January 2018 at 16:30, Sanne Grinovero wrote: > Hi all, > > I'm building master [1] and see such NPEs dumped on my terminal quite > often; I guess you all noticed already? I couldn't find a JIRA.. > > 16:24:03,083 FATAL > (transport-thread-StateTransferLinkFailuresTest[null, > tx=false]-NodeN-p63985-t2) [PersistentUUIDManagerImpl] Cannot find > mapping for address StateTransferLinkFailuresTest[null, > tx=false]-NodeN-32100 java.lang.NullPointerException > at org.infinispan.topology.PersistentUUIDManagerImpl.mapAddresses(PersistentUUIDManagerImpl.java:70) > at org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy.onPartitionMerge(PreferAvailabilityStrategy.java:214) > at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:597) > at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$recoverClusterStatus$6(ClusterTopologyManagerImpl.java:519) > at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144) > at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33) > at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > 16:24:03,115 FATAL > (transport-thread-StateTransferLinkFailuresTest[null, > tx=false]-NodeQ-p64193-t5) [PersistentUUIDManagerImpl] Cannot find > mapping for address StateTransferLinkFailuresTest[null, > tx=false]-NodeQ-10499 java.lang.NullPointerException > at org.infinispan.topology.PersistentUUIDManagerImpl.mapAddresses(PersistentUUIDManagerImpl.java:70) > at org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy.onPartitionMerge(PreferAvailabilityStrategy.java:214) > at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:597) > at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$recoverClusterStatus$6(ClusterTopologyManagerImpl.java:519) > at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144) > at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33) > at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > > 1 - cc2744e9f509d917f1ed0ff1a18b28b72595af83 > > Thanks, > Sanne From ttarrant at redhat.com Sun Feb 11 16:52:11 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Sun, 11 Feb 2018 22:52:11 +0100 Subject: [infinispan-dev] 9.2.0 endgame plan Message-ID: <07596dce-6678-3a03-428e-5d6e5448c988@redhat.com> I had originally planned for a release for Wed 14th, but there are a number of things I'd like to see landing before Final and 
looking at the list I recommend doing a CR3. In particular:
- Radim's Hot Rod changes
- Performance regressions as reported by Will
- Ensure that Sanne is happy with the WF modules
- Documentation and quickstarts/simple tutorials for new features
- Quickstarts/simple tutorials work flawlessly

Tristan

P.S. I'll be on PTO Mon/Tue 12/13 February.
--
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat

From rory.odonnell at oracle.com Tue Feb 13 05:18:55 2018
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Tue, 13 Feb 2018 10:18:55 +0000
Subject: [infinispan-dev] JDK 10: First Release Candidate - JDK 10 b43
Message-ID: <73dbb05b-f6c8-aae6-15a1-74a16184bbe9@oracle.com>

Hi Galder,

*JDK 10 build 43 is our first JDK 10 Release Candidate [1]*

* JDK 10 Early Access build 43 is available at: - jdk.java.net/10/

Notable changes since previous email.

*build 43*
* JDK-8194764 - javac incorrectly flags deprecated for removal imports
* JDK-8196678 - avoid printing uninitialized buffer in os::print_memory_info on AIX
* JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c

*Bug fixes reported by Open Source Projects:*
* JDK-8196296 Lucene test crashes C2 compilation

*Security Manager Survey*

If you have written or maintain code that uses the SecurityManager or related APIs such as the AccessController, then we would appreciate if you would complete this survey: https://www.surveymonkey.com/r/RSGMF3K

More info on the survey [2]

Regards, Rory

[1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html
[2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180213/58ca4877/attachment.html

From paul.ferraro at redhat.com Tue Feb 13 08:20:50 2018
From: paul.ferraro at redhat.com (Paul Ferraro)
Date: Tue, 13 Feb 2018 08:20:50 -0500
Subject: [infinispan-dev] ISPN-8798 ByteString places too strict a constraint on cache name length
Message-ID:

Can one of the devs please review this patch?
https://github.com/infinispan/infinispan/pull/5750

The limit of cache name sizes to 127 bytes is too limiting for hibernate/JPA 2nd level cache deployments, which generate cache names using fully qualified class names of entity classes, which are user generated and thus can easily exceed 128 bytes (but are far less likely to exceed 255). This is exacerbated by the JPA integration, which additionally appends the deployment name. We have a long term solution for this, but in the meantime, the above patch is sufficient to pass the TCK.

We'll also need a 9.1.6.Final release ASAP, lest we revert back to Infinispan 8.2.x for WF12, the feature freeze for which is tomorrow (they are considering this upgrade a feature, given the scope of its impact).

Thanks,
Paul

From ttarrant at redhat.com Tue Feb 13 08:33:02 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Tue, 13 Feb 2018 14:33:02 +0100
Subject: [infinispan-dev] ISPN-8798 ByteString places too strict a constraint on cache name length
In-Reply-To:
References:
Message-ID:

We can cut 9.1.6.Final today.

--
Tristan Tarrant
Infinispan Lead & Data Grid Architect
Red Hat

On 13 Feb 2018 14:21, "Paul Ferraro" wrote:
> Can one of the devs please review this patch?
> https://github.com/infinispan/infinispan/pull/5750 > > The limit of cache names sizes to 127 bytes is too limiting for > hibernate/JPA 2nd level cache deployments, which generate cache names > using fully qualified class names of entity classes, which are user > generated thus can easily exceed 128 bytes (but are far less likely to > exceed 255). This is exacerbated by the JPA integration, which > additionally appends the deployment name. We have a long term > solution for this, but in the meantime, the above patch is sufficient > to pass the TCK. > > We'll also need a 9.1.6.Final release ASAP, lest we revert back to > Infinispan 8.2.x for WF12, the feature freeze for which is tomorrow > (they are considering this upgrade a feature, given the scope of its > impact). > > Thanks, > > Paul > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180213/de6acf38/attachment-0001.html From paul.ferraro at redhat.com Tue Feb 13 09:10:10 2018 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Tue, 13 Feb 2018 09:10:10 -0500 Subject: [infinispan-dev] ISPN-8798 ByteString places too strict a constraint on cache name length In-Reply-To: References: Message-ID: Excellent. Thanks a million. On Tue, Feb 13, 2018 at 8:33 AM, Tristan Tarrant wrote: > We can cut 9.1.6.Final today. > > -- > Tristan Tarrant > Infinispan Lead & Data Grid Architect > Red Hat > > On 13 Feb 2018 14:21, "Paul Ferraro" wrote: >> >> Can one of the devs please review this patch? >> https://github.com/infinispan/infinispan/pull/5750 >> >> The limit of cache names sizes to 127 bytes is too limiting for >> hibernate/JPA 2nd level cache deployments, which generate cache names >> using fully qualified class names of entity classes, which are user >> generated thus can easily exceed 128 bytes (but are far less likely to >> exceed 255). This is exacerbated by the JPA integration, which >> additionally appends the deployment name. We have a long term >> solution for this, but in the meantime, the above patch is sufficient >> to pass the TCK. >> >> We'll also need a 9.1.6.Final release ASAP, lest we revert back to >> Infinispan 8.2.x for WF12, the feature freeze for which is tomorrow >> (they are considering this upgrade a feature, given the scope of its >> impact). >> >> Thanks, >> >> Paul >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From tsegismont at gmail.com Wed Feb 14 09:44:51 2018 From: tsegismont at gmail.com (Thomas SEGISMONT) Date: Wed, 14 Feb 2018 15:44:51 +0100 Subject: [infinispan-dev] [ANNOUNCE] Infinispan 9.2.0.RC2 In-Reply-To: References: Message-ID: Hi everyone, Just wanted to let you know that the Vert.x cluster manager test suite works fine with 9.2.0.CR2 ( https://github.com/vert-x3/vertx-infinispan/pull/47). It uses these new features: - multimap caches - clustered locks - strong counters Thank you all for the great work and especially Katia for contributing the clustered locks update in the Vert.x repo. 
Cheers, Thomas 2018-02-02 10:28 GMT+01:00 Gustavo Fernandes : > Dear Infinispan community, > > Infinspan 9.2.0.CR2 has been released! > > Read all about it our blog: > > http://blog.infinispan.org/2018/02/infinispan-920cr2-is-out.html > > Cheers, > Gustavo > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180214/a5662f4f/attachment.html From sanne at infinispan.org Wed Feb 14 18:46:32 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 14 Feb 2018 23:46:32 +0000 Subject: [infinispan-dev] Testsuite: memory usage? Message-ID: Hey all, I'm having OOMs running the tests of infinispan-core. Initially I thought it was related to limits and security as that's the usual suspect, but no it's really just not enough memory :) Found that the root pom.xml sets a property to Xmx1G for surefire; I've been observing the growth of heap usage in JConsole and it's clearly not enough. What surprises me is that - as an occasional tester - I shouldn't be the one to notice such a new requirement first. A leak which only manifests in certain conditions? What do others observe? FWIW, I'm running it with 8G heap now and it's working much better; still a couple of failures but at least they're not OOM related. Thanks, Sanne From dan.berindei at gmail.com Thu Feb 15 01:51:47 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 Feb 2018 06:51:47 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap to 1G because we were trying to run the build on agent VMs with only 4GB of RAM, and the 2GB heap was making the build run out of native memory. I've yet to see an OOME in the core tests, locally or in CI. But I also included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming there's a new leak it should be easy to track down in the heap dump. Cheers Dan On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero wrote: > Hey all, > > I'm having OOMs running the tests of infinispan-core. > > Initially I thought it was related to limits and security as that's > the usual suspect, but no it's really just not enough memory :) > > Found that the root pom.xml sets a property to Xmx1G for > surefire; I've been observing the growth of heap usage in JConsole and > it's clearly not enough. > > What surprises me is that - as an occasional tester - I shouldn't be > the one to notice such a new requirement first. A leak which only > manifests in certain conditions? > > What do others observe? > > FWIW, I'm running it with 8G heap now and it's working much better; > still a couple of failures but at least they're not OOM related. > > Thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/8ba9cadb/attachment.html From sanne at infinispan.org Thu Feb 15 05:31:00 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 15 Feb 2018 10:31:00 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: Thanks Dan. 
Do you happen to have observed the memory trend during a build? After a couple more attempts it passed the build once, so that shows it's possible to pass.. but even though it's a small sample so far that's 1 pass vs 3 OOMs on my machine. Even the one time it successfully completed the tests I see it wasted ~80% of total build time doing GC runs.. it was likely very close to fall over, and definitely not an efficient setting for regular builds. Observing trends on my machine I'd guess a reasonable value to be around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to complete successfully without often failing. The memory issues are worse towards the end of the testsuite, and steadily growing. I won't be able to investigate further as I need to urgently work on modules, but I noticed there are quite some MBeans according to JConsole. I guess it would be good to check if we're not leaking the MBean registration, and therefore leaking (stopped?) CacheManagers from there? Even near the beginning of the tests, when forcing a full GC I see about 400MB being "not free". That's quite a lot for some simple tests, no? Thanks, Sanne On 15 February 2018 at 06:51, Dan Berindei wrote: > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap to 1G > because we were trying to run the build on agent VMs with only 4GB of RAM, > and the 2GB heap was making the build run out of native memory. > > I've yet to see an OOME in the core tests, locally or in CI. But I also > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming there's > a new leak it should be easy to track down in the heap dump. > > Cheers > Dan > > > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero > wrote: >> >> Hey all, >> >> I'm having OOMs running the tests of infinispan-core. >> >> Initially I thought it was related to limits and security as that's >> the usual suspect, but no it's really just not enough memory :) >> >> Found that the root pom.xml sets a property to Xmx1G for >> surefire; I've been observing the growth of heap usage in JConsole and >> it's clearly not enough. >> >> What surprises me is that - as an occasional tester - I shouldn't be >> the one to notice such a new requirement first. A leak which only >> manifests in certain conditions? >> >> What do others observe? >> >> FWIW, I'm running it with 8G heap now and it's working much better; >> still a couple of failures but at least they're not OOM related. >> >> Thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Thu Feb 15 07:25:33 2018 From: galder at redhat.com (Galder =?utf-8?Q?Zamarre=C3=B1o?=) Date: Thu, 15 Feb 2018 13:25:33 +0100 Subject: [infinispan-dev] Best practices for Netty version clashes Message-ID: Hi, I was playing around with GRPC for a talk next month and made a mistake that threw me a little bit and wanted to share it here to see if we can do something about it. My demo uses GRPC and Infinispan embedded cache (9.2.0.CR1), so I added my GRPC dependencies and Infinispan bom dependency [1]. This combo resulted in breaking my GRPC demos. The bom imports Netty 4.1.9.Final while GRPC requires 4.1.17.Final. 
The dependency tree showed GRPC using 4.1.9.Final which led to the failure. This failure does not seem present in 4.1.17.Final.

Should we have an embedded bom where no client libraries are depended upon? This would work for my particular use case...

However, someone might develop a GRPC server (which I *think* still requires netty) and they could then use the Infinispan remote client to bridge over to Infinispan server. For example: this could be a way to move clients over to a new client while other clients use an older protocol.

How should a user solve this clash? I can only see exclusions and depending on the latest Netty version as solutions. Any other solutions though?

Cheers,

[1] https://gist.github.com/galderz/300cc2708eab76b9861985c216b90136
_______________________________________________
infinispan-dev mailing list
infinispan-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

From slaskawi at redhat.com Thu Feb 15 07:52:56 2018
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Thu, 15 Feb 2018 12:52:56 +0000
Subject: [infinispan-dev] Best practices for Netty version clashes
In-Reply-To:
References:
Message-ID:

This is actually how the dependency resolution (strike it out and replace with hell) works.

In this particular example, Netty 4.1.9 is "closer" to the project you're building than Netty 4.1.17 [1]. This happened since Maven just copy-pastes the Dependency Management section from the imported bom. So effectively Netty from the Infinispan BOM got into the Dependency Management section of your project.

Of course, if you hit an integration problem like this, you may declare the Netty version directly in your Dependency Management. This way you will force Maven to use the version you want.

IMO, the end user can do nothing about such errors (and this is really sad). Your particular problem is about Netty but I can easily imagine users who got the same problem with Apache Commons (although the chances are smaller since they are backwards compatible... as opposed to Netty). Maybe someday Jigsaw will solve it... But for now - just don't use the BOM, or declare the Netty version in your Dependency Management section.

Thanks,
Sebastian

[1] https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html

On Thu, Feb 15, 2018 at 1:26 PM Galder Zamarreño wrote:
> Hi,
>
> I was playing around with GRPC for a talk next month and made a mistake
> that threw me a little bit and wanted to share it here to see if we can
> do something about it.
>
> My demo uses GRPC and Infinispan embedded cache (9.2.0.CR1), so I added
> my GRPC dependencies and Infinispan bom dependency [1].
>
> This combo resulted in breaking my GRPC demos.
>
> The bom imports Netty 4.1.9.Final while GRPC requires 4.1.17.Final. The
> dependency tree showed GRPC using 4.1.9.Final which lead to the
> failure. This failure does not seem present in 4.1.17.Final.
>
> Should we have an embedded bom where no client libraries are depended
> upon? This would work for my particular use case...
>
> However, someone might develop a GRPC server (which I *think* it still
> requires netty) and they could then use Infinispan remote client to
> bridge over to Infinispan sever. For example: this could be way to move
> clients over a new client while other clients use an older protocol.
>
> How should a user solve this clash? I can only see exclusions and
> depending on latest Netty version as solution. Any other solutions
> though?
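A minimal sketch of the workaround Sebastian describes (declaring the Netty version in the application's own Dependency Management section, which takes precedence over the entry contributed by the imported BOM) might look like the snippet below. The netty-all artifact is only an assumption for illustration and is not taken from this thread; in practice you would pin whichever Netty artifacts mvn dependency:tree shows in conflict:

    <dependencyManagement>
      <dependencies>
        <!-- Assumed example: pin the Netty version gRPC needs so it overrides
             the 4.1.9.Final entry managed by the imported Infinispan BOM -->
        <dependency>
          <groupId>io.netty</groupId>
          <artifactId>netty-all</artifactId>
          <version>4.1.17.Final</version>
        </dependency>
        <!-- the BOM import stays as it was -->
        <dependency>
          <groupId>org.infinispan</groupId>
          <artifactId>infinispan-bom</artifactId>
          <version>9.2.0.CR1</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>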
> > Cheers, > > [1] https://gist.github.com/galderz/300cc2708eab76b9861985c216b90136 > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/de82cac7/attachment.html From mudokonman at gmail.com Thu Feb 15 08:32:46 2018 From: mudokonman at gmail.com (William Burns) Date: Thu, 15 Feb 2018 13:32:46 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: So I must admit I had noticed a while back that I was having some issues with running the core test suite. Unfortunately at the time CI and everyone else seemed to not have any issues. I just ignored it because at the time I didn't need to run core tests. But now that Sanne pointed this out, by increasing the heap variable in the pom.xml, I was for the first time able to run the test suite completely. It would normally hang for an extremely long time near the 9k-10K test completed point and never finish for me (at least I didn't wait long enough). So it definitely seems there is something leaking in the test suite causing the GC to use a ton of CPU time. - Will On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero wrote: > Thanks Dan. > > Do you happen to have observed the memory trend during a build? > > After a couple more attempts it passed the build once, so that shows > it's possible to pass.. but even though it's a small sample so far > that's 1 pass vs 3 OOMs on my machine. > > Even the one time it successfully completed the tests I see it wasted > ~80% of total build time doing GC runs.. it was likely very close to > fall over, and definitely not an efficient setting for regular builds. > Observing trends on my machine I'd guess a reasonable value to be > around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to > complete successfully without often failing. > > The memory issues are worse towards the end of the testsuite, and > steadily growing. > > I won't be able to investigate further as I need to urgently work on > modules, but I noticed there are quite some MBeans according to > JConsole. I guess it would be good to check if we're not leaking the > MBean registration, and therefore leaking (stopped?) CacheManagers > from there? > > Even near the beginning of the tests, when forcing a full GC I see > about 400MB being "not free". That's quite a lot for some simple > tests, no? > > Thanks, > Sanne > > > On 15 February 2018 at 06:51, Dan Berindei wrote: > > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap to > 1G > > because we were trying to run the build on agent VMs with only 4GB of > RAM, > > and the 2GB heap was making the build run out of native memory. > > > > I've yet to see an OOME in the core tests, locally or in CI. But I also > > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming > there's > > a new leak it should be easy to track down in the heap dump. > > > > Cheers > > Dan > > > > > > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero > > wrote: > >> > >> Hey all, > >> > >> I'm having OOMs running the tests of infinispan-core. 
> >> > >> Initially I thought it was related to limits and security as that's > >> the usual suspect, but no it's really just not enough memory :) > >> > >> Found that the root pom.xml sets a property to Xmx1G for > >> surefire; I've been observing the growth of heap usage in JConsole and > >> it's clearly not enough. > >> > >> What surprises me is that - as an occasional tester - I shouldn't be > >> the one to notice such a new requirement first. A leak which only > >> manifests in certain conditions? > >> > >> What do others observe? > >> > >> FWIW, I'm running it with 8G heap now and it's working much better; > >> still a couple of failures but at least they're not OOM related. > >> > >> Thanks, > >> Sanne > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/b2731226/attachment.html From dan.berindei at gmail.com Thu Feb 15 08:32:54 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 Feb 2018 13:32:54 +0000 Subject: [infinispan-dev] Best practices for Netty version clashes In-Reply-To: References: Message-ID: The application POM could use dependency convergence [1], but we probably can't (and shouldn't) use the plugin in the BOM and enforce it's usage in applications. Dan [1]: http://maven.apache.org/enforcer/enforcer-rules/dependencyConvergence.html On Thu, Feb 15, 2018 at 12:52 PM, Sebastian Laskawiec wrote: > This is actually how the dependency resolution (strike it out and replace > with hell) works. > > In this particular example, Netty 4.1.9 is "closer" to the project you're > building than Netty 4.1.17 [1]. This happened since Maven just copy-past > the Dependency Management section from imported bom. So effectively Netty > from Infinispan BOM got into the Dependency Management section of your > project. > > Of course, if you hit an integration problem like this, you may declare > Netty version directly in your Dependency Management. This way you will > enforce Maven to you what you want. > > IMO, the end user can do nothing about such errors (and this is really > sad). Your particular problem is about Netty but I can easily imagine users > who got the same problem with Apache Commons (although the chances are > smaller since they are backwards compatible... opposed to Netty). Maybe > someday the Jigsaw will solve it... But for now - just don't use BOM or > declare Netty version in your Dependency Management section. > > Thanks, > Sebastian > > [1] https://maven.apache.org/guides/introduction/ > introduction-to-dependency-mechanism.html > > On Thu, Feb 15, 2018 at 1:26 PM Galder Zamarre?o > wrote: > >> Hi, >> >> I was playing around with GRPC for a talk next month and made a mistake >> that threw me a little bit and wanted to share it here to see if we can >> do something about it. >> >> My demo uses GRPC and Infinispan embedded cache (9.2.0.CR1), so I added >> my GRPC dependencies and Infinispan bom dependency [1]. 
>> >> This combo resulted in breaking my GRPC demos. >> >> The bom imports Netty 4.1.9.Final while GRPC requires 4.1.17.Final. The >> dependency tree showed GRPC using 4.1.9.Final which lead to the >> failure. This failure does not seem present in 4.1.17.Final. >> >> Should we have an embedded bom where no client libraries are depended >> upon? This would work for my particular use case... >> >> However, someone might develop a GRPC server (which I *think* it still >> requires netty) and they could then use Infinispan remote client to >> bridge over to Infinispan sever. For example: this could be way to move >> clients over a new client while other clients use an older protocol. >> >> How should a user solve this clash? I can only see exclusions and >> depending on latest Netty version as solution. Any other solutions >> though? >> >> Cheers, >> >> [1] https://gist.github.com/galderz/300cc2708eab76b9861985c216b90136 >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/e494bd72/attachment-0001.html From dan.berindei at gmail.com Thu Feb 15 08:39:25 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 Feb 2018 13:39:25 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: And here I was thinking that by adding -XX:+HeapDumpOnOutOfMemoryError anyone would be able to look into OOMEs and I wouldn't have to reproduce the failures myself :) Dan On Thu, Feb 15, 2018 at 1:32 PM, William Burns wrote: > So I must admit I had noticed a while back that I was having some issues > with running the core test suite. Unfortunately at the time CI and everyone > else seemed to not have any issues. I just ignored it because at the time I > didn't need to run core tests. But now that Sanne pointed this out, by > increasing the heap variable in the pom.xml, I was for the first time able > to run the test suite completely. It would normally hang for an extremely > long time near the 9k-10K test completed point and never finish for me (at > least I didn't wait long enough). > > So it definitely seems there is something leaking in the test suite > causing the GC to use a ton of CPU time. > > - Will > > On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero > wrote: > >> Thanks Dan. >> >> Do you happen to have observed the memory trend during a build? >> >> After a couple more attempts it passed the build once, so that shows >> it's possible to pass.. but even though it's a small sample so far >> that's 1 pass vs 3 OOMs on my machine. >> >> Even the one time it successfully completed the tests I see it wasted >> ~80% of total build time doing GC runs.. it was likely very close to >> fall over, and definitely not an efficient setting for regular builds. >> Observing trends on my machine I'd guess a reasonable value to be >> around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to >> complete successfully without often failing. >> >> The memory issues are worse towards the end of the testsuite, and >> steadily growing. 
>> >> I won't be able to investigate further as I need to urgently work on >> modules, but I noticed there are quite some MBeans according to >> JConsole. I guess it would be good to check if we're not leaking the >> MBean registration, and therefore leaking (stopped?) CacheManagers >> from there? >> >> Even near the beginning of the tests, when forcing a full GC I see >> about 400MB being "not free". That's quite a lot for some simple >> tests, no? >> >> Thanks, >> Sanne >> >> >> On 15 February 2018 at 06:51, Dan Berindei >> wrote: >> > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap to >> 1G >> > because we were trying to run the build on agent VMs with only 4GB of >> RAM, >> > and the 2GB heap was making the build run out of native memory. >> > >> > I've yet to see an OOME in the core tests, locally or in CI. But I also >> > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming >> there's >> > a new leak it should be easy to track down in the heap dump. >> > >> > Cheers >> > Dan >> > >> > >> > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero > > >> > wrote: >> >> >> >> Hey all, >> >> >> >> I'm having OOMs running the tests of infinispan-core. >> >> >> >> Initially I thought it was related to limits and security as that's >> >> the usual suspect, but no it's really just not enough memory :) >> >> >> >> Found that the root pom.xml sets a property to Xmx1G for >> >> surefire; I've been observing the growth of heap usage in JConsole and >> >> it's clearly not enough. >> >> >> >> What surprises me is that - as an occasional tester - I shouldn't be >> >> the one to notice such a new requirement first. A leak which only >> >> manifests in certain conditions? >> >> >> >> What do others observe? >> >> >> >> FWIW, I'm running it with 8G heap now and it's working much better; >> >> still a couple of failures but at least they're not OOM related. >> >> >> >> Thanks, >> >> Sanne >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/e1fc7b60/attachment.html From dan.berindei at gmail.com Thu Feb 15 08:59:41 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 Feb 2018 13:59:41 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: Hmmm, I didn't notice that I was running with -XX:+UseG1GC, so perhaps our test suite is a pathological case for the default collector? [INFO] Total time: 12:45 min GC Time: 52.593s Class Loader Time: 1m 26.007s Compile Time: 10m 10.216s I'll try without -XX:+UseG1GC later. 
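For anyone who wants to experiment locally without editing the pom: Maven lets command-line properties override the ones defined in the pom, so - assuming the surefire argLine still picks up the forkJvmArgs property mentioned above (worth checking in the root pom first) - something along these lines should be enough to try a bigger heap, a different collector and keep the heap dump on OOME; the module name and the exact flags are just an example:

    mvn -pl core test -DforkJvmArgs="-Xmx4G -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError"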
Cheers Dan On Thu, Feb 15, 2018 at 1:39 PM, Dan Berindei wrote: > And here I was thinking that by adding -XX:+HeapDumpOnOutOfMemoryError > anyone would be able to look into OOMEs and I wouldn't have to reproduce > the failures myself :) > > Dan > > > On Thu, Feb 15, 2018 at 1:32 PM, William Burns > wrote: > >> So I must admit I had noticed a while back that I was having some issues >> with running the core test suite. Unfortunately at the time CI and everyone >> else seemed to not have any issues. I just ignored it because at the time I >> didn't need to run core tests. But now that Sanne pointed this out, by >> increasing the heap variable in the pom.xml, I was for the first time able >> to run the test suite completely. It would normally hang for an extremely >> long time near the 9k-10K test completed point and never finish for me (at >> least I didn't wait long enough). >> >> So it definitely seems there is something leaking in the test suite >> causing the GC to use a ton of CPU time. >> >> - Will >> >> On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero >> wrote: >> >>> Thanks Dan. >>> >>> Do you happen to have observed the memory trend during a build? >>> >>> After a couple more attempts it passed the build once, so that shows >>> it's possible to pass.. but even though it's a small sample so far >>> that's 1 pass vs 3 OOMs on my machine. >>> >>> Even the one time it successfully completed the tests I see it wasted >>> ~80% of total build time doing GC runs.. it was likely very close to >>> fall over, and definitely not an efficient setting for regular builds. >>> Observing trends on my machine I'd guess a reasonable value to be >>> around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to >>> complete successfully without often failing. >>> >>> The memory issues are worse towards the end of the testsuite, and >>> steadily growing. >>> >>> I won't be able to investigate further as I need to urgently work on >>> modules, but I noticed there are quite some MBeans according to >>> JConsole. I guess it would be good to check if we're not leaking the >>> MBean registration, and therefore leaking (stopped?) CacheManagers >>> from there? >>> >>> Even near the beginning of the tests, when forcing a full GC I see >>> about 400MB being "not free". That's quite a lot for some simple >>> tests, no? >>> >>> Thanks, >>> Sanne >>> >>> >>> On 15 February 2018 at 06:51, Dan Berindei >>> wrote: >>> > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap >>> to 1G >>> > because we were trying to run the build on agent VMs with only 4GB of >>> RAM, >>> > and the 2GB heap was making the build run out of native memory. >>> > >>> > I've yet to see an OOME in the core tests, locally or in CI. But I also >>> > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming >>> there's >>> > a new leak it should be easy to track down in the heap dump. >>> > >>> > Cheers >>> > Dan >>> > >>> > >>> > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero < >>> sanne at infinispan.org> >>> > wrote: >>> >> >>> >> Hey all, >>> >> >>> >> I'm having OOMs running the tests of infinispan-core. >>> >> >>> >> Initially I thought it was related to limits and security as that's >>> >> the usual suspect, but no it's really just not enough memory :) >>> >> >>> >> Found that the root pom.xml sets a property to Xmx1G for >>> >> surefire; I've been observing the growth of heap usage in JConsole and >>> >> it's clearly not enough. 
>>> >> >>> >> What surprises me is that - as an occasional tester - I shouldn't be >>> >> the one to notice such a new requirement first. A leak which only >>> >> manifests in certain conditions? >>> >> >>> >> What do others observe? >>> >> >>> >> FWIW, I'm running it with 8G heap now and it's working much better; >>> >> still a couple of failures but at least they're not OOM related. >>> >> >>> >> Thanks, >>> >> Sanne >>> >> _______________________________________________ >>> >> infinispan-dev mailing list >>> >> infinispan-dev at lists.jboss.org >>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180215/469b7cdb/attachment-0001.html From dan.berindei at gmail.com Fri Feb 16 03:05:07 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 16 Feb 2018 08:05:07 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: Yeah, I got a much slower run with the default collector (parallel): [INFO] Total time: 17:45 min GC Time: 2m 43s Compile time: 18m 20s I'm not sure if it's really the GC affecting the compile time or there's another factor hiding there. But I did get a heap dump and I'm analyzing it now. Cheers Dan On Thu, Feb 15, 2018 at 1:59 PM, Dan Berindei wrote: > Hmmm, I didn't notice that I was running with -XX:+UseG1GC, so perhaps our > test suite is a pathological case for the default collector? > > [INFO] Total time: 12:45 min > GC Time: 52.593s > Class Loader Time: 1m 26.007s > Compile Time: 10m 10.216s > > I'll try without -XX:+UseG1GC later. > > Cheers > Dan > > > On Thu, Feb 15, 2018 at 1:39 PM, Dan Berindei > wrote: > >> And here I was thinking that by adding -XX:+HeapDumpOnOutOfMemoryError >> anyone would be able to look into OOMEs and I wouldn't have to reproduce >> the failures myself :) >> >> Dan >> >> >> On Thu, Feb 15, 2018 at 1:32 PM, William Burns >> wrote: >> >>> So I must admit I had noticed a while back that I was having some issues >>> with running the core test suite. Unfortunately at the time CI and everyone >>> else seemed to not have any issues. I just ignored it because at the time I >>> didn't need to run core tests. But now that Sanne pointed this out, by >>> increasing the heap variable in the pom.xml, I was for the first time able >>> to run the test suite completely. It would normally hang for an extremely >>> long time near the 9k-10K test completed point and never finish for me (at >>> least I didn't wait long enough). >>> >>> So it definitely seems there is something leaking in the test suite >>> causing the GC to use a ton of CPU time. >>> >>> - Will >>> >>> On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero >>> wrote: >>> >>>> Thanks Dan. >>>> >>>> Do you happen to have observed the memory trend during a build? 
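(One low-tech way to watch that trend while the suite runs, for whoever wants to reproduce this - standard JDK tools, nothing project-specific:

    jps -lm                            # find the forked surefire JVM
    jstat -gcutil <surefire-pid> 5000  # heap occupancy and GC time, sampled every 5 seconds

Attaching jconsole or VisualVM to the same pid shows the same data graphically.)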
>>>> >>>> After a couple more attempts it passed the build once, so that shows >>>> it's possible to pass.. but even though it's a small sample so far >>>> that's 1 pass vs 3 OOMs on my machine. >>>> >>>> Even the one time it successfully completed the tests I see it wasted >>>> ~80% of total build time doing GC runs.. it was likely very close to >>>> fall over, and definitely not an efficient setting for regular builds. >>>> Observing trends on my machine I'd guess a reasonable value to be >>>> around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to >>>> complete successfully without often failing. >>>> >>>> The memory issues are worse towards the end of the testsuite, and >>>> steadily growing. >>>> >>>> I won't be able to investigate further as I need to urgently work on >>>> modules, but I noticed there are quite some MBeans according to >>>> JConsole. I guess it would be good to check if we're not leaking the >>>> MBean registration, and therefore leaking (stopped?) CacheManagers >>>> from there? >>>> >>>> Even near the beginning of the tests, when forcing a full GC I see >>>> about 400MB being "not free". That's quite a lot for some simple >>>> tests, no? >>>> >>>> Thanks, >>>> Sanne >>>> >>>> >>>> On 15 February 2018 at 06:51, Dan Berindei >>>> wrote: >>>> > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap >>>> to 1G >>>> > because we were trying to run the build on agent VMs with only 4GB of >>>> RAM, >>>> > and the 2GB heap was making the build run out of native memory. >>>> > >>>> > I've yet to see an OOME in the core tests, locally or in CI. But I >>>> also >>>> > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming >>>> there's >>>> > a new leak it should be easy to track down in the heap dump. >>>> > >>>> > Cheers >>>> > Dan >>>> > >>>> > >>>> > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero < >>>> sanne at infinispan.org> >>>> > wrote: >>>> >> >>>> >> Hey all, >>>> >> >>>> >> I'm having OOMs running the tests of infinispan-core. >>>> >> >>>> >> Initially I thought it was related to limits and security as that's >>>> >> the usual suspect, but no it's really just not enough memory :) >>>> >> >>>> >> Found that the root pom.xml sets a property to Xmx1G >>>> for >>>> >> surefire; I've been observing the growth of heap usage in JConsole >>>> and >>>> >> it's clearly not enough. >>>> >> >>>> >> What surprises me is that - as an occasional tester - I shouldn't be >>>> >> the one to notice such a new requirement first. A leak which only >>>> >> manifests in certain conditions? >>>> >> >>>> >> What do others observe? >>>> >> >>>> >> FWIW, I'm running it with 8G heap now and it's working much better; >>>> >> still a couple of failures but at least they're not OOM related. 
>>>> >>
>>>> >> Thanks,
>>>> >> Sanne
>>>> >> _______________________________________________
>>>> >> infinispan-dev mailing list
>>>> >> infinispan-dev at lists.jboss.org
>>>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>> >
>>>> >
>>>> >
>>>> > _______________________________________________
>>>> > infinispan-dev mailing list
>>>> > infinispan-dev at lists.jboss.org
>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>
>>> _______________________________________________
>>> infinispan-dev mailing list
>>> infinispan-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180216/ad813b56/attachment.html
From sanne at infinispan.org  Sun Feb 18 19:17:19 2018
From: sanne at infinispan.org (Sanne Grinovero)
Date: Mon, 19 Feb 2018 00:17:19 +0000
Subject: [infinispan-dev] Hibernate Search integration modules for WildFly have been broken
In-Reply-To: 
References: 
Message-ID: 

Hi all,

I spent Friday and a good deal of the weekend exploring some options; finally I have a proposal.
- https://github.com/infinispan/infinispan/pull/5766

But let me give some background:

I was originally planning to upgrade Infinispan to Hibernate Search 5.9.0.Final only in Infinispan 9.3, as while the code itself didn't change much, I was expecting quite some work in the area of the build of WildFly modules.

However, since the module structure in the current master is lacking [ISPN-8780], I'm proposing to do this upgrade already, so as not to have to refactor the build of modules now and then again shortly.

As a reminder, we want Infinispan to be able to provide a "Lucene Directory Provider" for 3 versions of Hibernate Search:
A) the version which is included in the latest stable WildFly release
B) the version which Infinispan Query is using
C) the latest stable version

It is of course possible - even likely - that some of these versions happen to be the same for a particular release train; that's nice, however please remember that while they might be *coincidentally* the same, that's no good reason to remove the capability of the build system to support these different versions when they happen to diverge. That's why the properties which mark each of these versions were strictly separate - and, more crucially, were not "redundant".

In the specific case, I was looking for the properties I needed to change to make sure we could support the latest Hibernate Search release, but they were gone; my guess is this was caused by the fact that recently version[B] and version[C] happened to match, which caused the confusion.

At this stage I don't think it's worth it to try to find and identify all the changes which should be reverted, as with the upcoming upgrade to Hibernate Search 5.9.0.Final several changes in how we build modules would be needed, so I'll send a PR to upgrade to HS 5.9.0.Final already.

Risks and implications?

HS 5.9 is mostly the same as 5.8; the main difference for end users is the integration with a different version of Hibernate ORM, now supporting JPA 2.2 - but this isn't relevant for Infinispan, so I consider the upgrade overall a low risk.
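Before moving on to the packaging changes, to make the A/B/C point above concrete: the parent pom needs to keep one property per targeted Hibernate Search version, even while the values happen to coincide - a purely illustrative sketch, with made-up property names and versions (only version.hibernate.search is an actual property name in the build):

    <!-- B: the version Infinispan Query compiles and runs against -->
    <version.hibernate.search>5.9.0.Final</version.hibernate.search>
    <!-- A: the version shipped in the targeted WildFly release (illustrative value) -->
    <version.hibernate.search.wildfly>5.8.1.Final</version.hibernate.search.wildfly>
    <!-- C: the latest stable release we also want to build Directory modules for -->
    <version.hibernate.search.latest>5.9.0.Final</version.hibernate.search.latest>

Collapsing these into a single property is exactly what leaves the build unable to cope once the three diverge again.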
The other relevant difference is that we no longer publish a zip of the modules for WildFly - we publish instead a set of rather fine grained feature packs. This is good news for Infinispan as we can better cherry pick which components you actually want, but it means the build needs to be adapted now to deal with these feature packs. In my PR for ISPN-8779 I'll use the Provisioning plugin for Maven to materialize the modules that Infinispan needs, so that for now they can be re-packaged into the zip files - so to not have any impact on Infinispan users. I hope in a second stage you'll all see the benefits of distributing feature packs so we'll be able to simplify some things. Feature packs normally have a notion of dependency to other feature packs, but to keep the adaptation into "old style fat zip" I'll reconfigure the provisioning task to disable transitive dependencies; the drawback is that I'll have to declare the version of each feature pack we need explicitly in the parent pom: we'll be able to avoid the need to match the dependant versions when Infinispan also will produce feature packs rather than the fat zip. N.B. this PR #5766 doesn't address the fact that the build doesn't differentiate between the above cases B and C, but since B and C would just happen to be the same version again in Infinispan 9.2 the issue is no longer urgent, so ISPN-8780 could be postponed to 9.3+ and it might be much easier after migrating some Infinispan modules to feature packs as well. Thanks, Sanne On 7 February 2018 at 16:59, Sanne Grinovero wrote: > Hi all, > > I was going to give ISPN-8779 a shot but I'm finding a mess. > > the root pom contains these twice (and inconsistent!): > > 5.8.1.Final > [...] > 5.8.0.Final > > the BOM cointains a copy of `version.hibernate.search` as well. > > I don't mind deleting duplicate properties, but we used to have > clearly separate properties for different purposes, and this > separation is essential. > > I've mentioned this multiple times when reviewing PRs which would get > my attention, but I didn't see these changes - certainly didn't expect > you all to forget the special purpose of these modules. > It's quite messy now and I'm honestly lost myself at how I could revert it. > > In particular this module is broken now as it's targeting the wrong slot: > - https://github.com/infinispan/infinispan/blob/master/wildfly-modules/src/main/resources/org/infinispan/for-hibernatesearch-wildfly/main/module.xml#L27 > > Clearly it's not consistent with the comment I've put on the module descriptor. > I don't see that module being included in the released modules either, > and clearly the integration tests didn't catch it because they have > been patched to use the wrong modules too :( > > Other essential integration tests which I had put in place to make > sure they'd get your attention in case someone had such an idea.. have > been deleted. > > Opening ISPN-8780, I would consider this a release blocker. > > Thanks, > Sanne > > See also: > - https://github.com/infinispan/infinispan/blob/master/integrationtests/as-lucene-directory/READ.ME From remerson at redhat.com Mon Feb 19 05:29:54 2018 From: remerson at redhat.com (Ryan Emerson) Date: Mon, 19 Feb 2018 05:29:54 -0500 (EST) Subject: [infinispan-dev] Hibernate Search integration modules for WildFly have been broken In-Reply-To: References: Message-ID: <156352272.3326182.1519036194938.JavaMail.zimbra@redhat.com> Hi Sanne, Thanks for the input and the proposed solution. 
I am not familiar with Hibernate search and our integrations, so I will leave others to comment on the version upgrade. I just wanted to let you know that the transition to feature-packs is on the roadmap for 9.3 as part of our ongoing server refactoring efforts. Furthermore, I believe the breaking of the wildfly-modules was caused by my work on the bom and parent pom refactoring, so apologies for the inconvenience, I owe you a beer! Cheers Ryan ----- Original Message ----- > Hi all, > > spent friday and a good deal of the weekend exploring some options; > finally I have a proposal. > - https://github.com/infinispan/infinispan/pull/5766 > > But let me give some background: > > I was originally planning to upgrade Infinispan to Hibernate Search > 5.9.0.Final only in Infinispan 9.3, as while the code itself didn't > change much, I was expecting quite some work in the area of the build > of WildFly modules. > > However since the module structure in the current master is lacking > [ISPN-8780] I'm proposing now to do this upgrade already, so to not > have to refactor the build of modules now and then again shortly. > > As a reminder, we want Infinispan to be able to provide a "Lucene > Directory Provider" for 3 versions of Hibernate Search: > A) the version which is included in latest WildFly stable release > B) the version which Infinispan Query is using > C) the latest stable version > > It is of course possible - even likely - that some of these versions > happen to be the same for a particular release train; that's nice > however please remember that while they might be *coincidentally* the > same, that's no good reason to remove the capability of the build > system to support these different versions when they happen to > diverge. That's why the properties which mark each of these versions > was strictly separate - and more crucially was not "redundant". > > In the specific case, I was looking for the properties I needed to > change to make sure we could support latest Hibernate Search release > but they were gone, my guess is this was caused by the fact that > recently version[B] and version[C] happened to match so this confused. > > At this stage I don't think it's worth it to try find and identify all > changes which should be reverted, as with the upcoming upgrade to > Hibernate Search 5.9.0.Final several changes in how we build modules > would be needed, so I'll send a PR to upgrade to HS 5.9.0.Final > already. > > Risks and implications? > > HS 5.9 is mostly the same as 5.8, the main difference for end users is > the integration with a different version of Hibernate ORM now > supporting JPA 2.2 - but this isn't relevant for Infinispan so I > consider the upgrade overall a low risk. > The other relevant difference is that we no longer publish a zip of > the modules for WildFly - we publish instead a set of rather fine > grained feature packs. This is good news for Infinispan as we can > better cherry pick which components you actually want, but it means > the build needs to be adapted now to deal with these feature packs. > > In my PR for ISPN-8779 I'll use the Provisioning plugin for Maven to > materialize the modules that Infinispan needs, so that for now they > can be re-packaged into the zip files - so to not have any impact on > Infinispan users. I hope in a second stage you'll all see the benefits > of distributing feature packs so we'll be able to simplify some > things. 
> > Feature packs normally have a notion of dependency to other feature > packs, but to keep the adaptation into "old style fat zip" I'll > reconfigure the provisioning task to disable transitive dependencies; > the drawback is that I'll have to declare the version of each feature > pack we need explicitly in the parent pom: we'll be able to avoid the > need to match the dependant versions when Infinispan also will produce > feature packs rather than the fat zip. > > N.B. this PR #5766 doesn't address the fact that the build doesn't > differentiate between the above cases B and C, but since B and C would > just happen to be the same version again in Infinispan 9.2 the issue > is no longer urgent, so ISPN-8780 could be postponed to 9.3+ and it > might be much easier after migrating some Infinispan modules to > feature packs as well. > > Thanks, > Sanne > > > On 7 February 2018 at 16:59, Sanne Grinovero wrote: > > Hi all, > > > > I was going to give ISPN-8779 a shot but I'm finding a mess. > > > > the root pom contains these twice (and inconsistent!): > > > > 5.8.1.Final > > [...] > > 5.8.0.Final > > > > the BOM cointains a copy of `version.hibernate.search` as well. > > > > I don't mind deleting duplicate properties, but we used to have > > clearly separate properties for different purposes, and this > > separation is essential. > > > > I've mentioned this multiple times when reviewing PRs which would get > > my attention, but I didn't see these changes - certainly didn't expect > > you all to forget the special purpose of these modules. > > It's quite messy now and I'm honestly lost myself at how I could revert it. > > > > In particular this module is broken now as it's targeting the wrong slot: > > - > > https://github.com/infinispan/infinispan/blob/master/wildfly-modules/src/main/resources/org/infinispan/for-hibernatesearch-wildfly/main/module.xml#L27 > > > > Clearly it's not consistent with the comment I've put on the module > > descriptor. > > I don't see that module being included in the released modules either, > > and clearly the integration tests didn't catch it because they have > > been patched to use the wrong modules too :( > > > > Other essential integration tests which I had put in place to make > > sure they'd get your attention in case someone had such an idea.. have > > been deleted. > > > > Opening ISPN-8780, I would consider this a release blocker. > > > > Thanks, > > Sanne > > > > See also: > > - > > https://github.com/infinispan/infinispan/blob/master/integrationtests/as-lucene-directory/READ.ME > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Mon Feb 19 06:18:51 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 Feb 2018 11:18:51 +0000 Subject: [infinispan-dev] Hibernate Search integration modules for WildFly have been broken In-Reply-To: <156352272.3326182.1519036194938.JavaMail.zimbra@redhat.com> References: <156352272.3326182.1519036194938.JavaMail.zimbra@redhat.com> Message-ID: Hi Ryan, no problem at all. I can see how it's confusing, and open to suggestions to make this clearer. 
Also I don't blame who's working to make progress but I'll blame the reviewers, they should know by know about these modules :P Regarding the Search upgrade, that's up to Adrian, Gustavo and Tristan, but as I mentioned the version upgrade was classified as a minor mostly because of the ORM integration changes, which won't affect Infinispan so not very disruptive at all - except in how those modules have to be built of course. Thanks, Sanne On 19 February 2018 at 10:29, Ryan Emerson wrote: > Hi Sanne, > > Thanks for the input and the proposed solution. I am not familiar > with Hibernate search and our integrations, so I will leave others to > comment on the version upgrade. I just wanted to let you know that the > transition to feature-packs is on the roadmap for 9.3 as part of our > ongoing server refactoring efforts. Furthermore, I believe the breaking > of the wildfly-modules was caused by my work on the bom and parent pom > refactoring, so apologies for the inconvenience, I owe you a beer! > > Cheers > Ryan > > ----- Original Message ----- >> Hi all, >> >> spent friday and a good deal of the weekend exploring some options; >> finally I have a proposal. >> - https://github.com/infinispan/infinispan/pull/5766 >> >> But let me give some background: >> >> I was originally planning to upgrade Infinispan to Hibernate Search >> 5.9.0.Final only in Infinispan 9.3, as while the code itself didn't >> change much, I was expecting quite some work in the area of the build >> of WildFly modules. >> >> However since the module structure in the current master is lacking >> [ISPN-8780] I'm proposing now to do this upgrade already, so to not >> have to refactor the build of modules now and then again shortly. >> >> As a reminder, we want Infinispan to be able to provide a "Lucene >> Directory Provider" for 3 versions of Hibernate Search: >> A) the version which is included in latest WildFly stable release >> B) the version which Infinispan Query is using >> C) the latest stable version >> >> It is of course possible - even likely - that some of these versions >> happen to be the same for a particular release train; that's nice >> however please remember that while they might be *coincidentally* the >> same, that's no good reason to remove the capability of the build >> system to support these different versions when they happen to >> diverge. That's why the properties which mark each of these versions >> was strictly separate - and more crucially was not "redundant". >> >> In the specific case, I was looking for the properties I needed to >> change to make sure we could support latest Hibernate Search release >> but they were gone, my guess is this was caused by the fact that >> recently version[B] and version[C] happened to match so this confused. >> >> At this stage I don't think it's worth it to try find and identify all >> changes which should be reverted, as with the upcoming upgrade to >> Hibernate Search 5.9.0.Final several changes in how we build modules >> would be needed, so I'll send a PR to upgrade to HS 5.9.0.Final >> already. >> >> Risks and implications? >> >> HS 5.9 is mostly the same as 5.8, the main difference for end users is >> the integration with a different version of Hibernate ORM now >> supporting JPA 2.2 - but this isn't relevant for Infinispan so I >> consider the upgrade overall a low risk. >> The other relevant difference is that we no longer publish a zip of >> the modules for WildFly - we publish instead a set of rather fine >> grained feature packs. 
This is good news for Infinispan as we can >> better cherry pick which components you actually want, but it means >> the build needs to be adapted now to deal with these feature packs. >> >> In my PR for ISPN-8779 I'll use the Provisioning plugin for Maven to >> materialize the modules that Infinispan needs, so that for now they >> can be re-packaged into the zip files - so to not have any impact on >> Infinispan users. I hope in a second stage you'll all see the benefits >> of distributing feature packs so we'll be able to simplify some >> things. >> >> Feature packs normally have a notion of dependency to other feature >> packs, but to keep the adaptation into "old style fat zip" I'll >> reconfigure the provisioning task to disable transitive dependencies; >> the drawback is that I'll have to declare the version of each feature >> pack we need explicitly in the parent pom: we'll be able to avoid the >> need to match the dependant versions when Infinispan also will produce >> feature packs rather than the fat zip. >> >> N.B. this PR #5766 doesn't address the fact that the build doesn't >> differentiate between the above cases B and C, but since B and C would >> just happen to be the same version again in Infinispan 9.2 the issue >> is no longer urgent, so ISPN-8780 could be postponed to 9.3+ and it >> might be much easier after migrating some Infinispan modules to >> feature packs as well. >> >> Thanks, >> Sanne >> >> >> On 7 February 2018 at 16:59, Sanne Grinovero wrote: >> > Hi all, >> > >> > I was going to give ISPN-8779 a shot but I'm finding a mess. >> > >> > the root pom contains these twice (and inconsistent!): >> > >> > 5.8.1.Final >> > [...] >> > 5.8.0.Final >> > >> > the BOM cointains a copy of `version.hibernate.search` as well. >> > >> > I don't mind deleting duplicate properties, but we used to have >> > clearly separate properties for different purposes, and this >> > separation is essential. >> > >> > I've mentioned this multiple times when reviewing PRs which would get >> > my attention, but I didn't see these changes - certainly didn't expect >> > you all to forget the special purpose of these modules. >> > It's quite messy now and I'm honestly lost myself at how I could revert it. >> > >> > In particular this module is broken now as it's targeting the wrong slot: >> > - >> > https://github.com/infinispan/infinispan/blob/master/wildfly-modules/src/main/resources/org/infinispan/for-hibernatesearch-wildfly/main/module.xml#L27 >> > >> > Clearly it's not consistent with the comment I've put on the module >> > descriptor. >> > I don't see that module being included in the released modules either, >> > and clearly the integration tests didn't catch it because they have >> > been patched to use the wrong modules too :( >> > >> > Other essential integration tests which I had put in place to make >> > sure they'd get your attention in case someone had such an idea.. have >> > been deleted. >> > >> > Opening ISPN-8780, I would consider this a release blocker. 
>> > >> > Thanks, >> > Sanne >> > >> > See also: >> > - >> > https://github.com/infinispan/infinispan/blob/master/integrationtests/as-lucene-directory/READ.ME >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon Feb 19 06:57:38 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 19 Feb 2018 11:57:38 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: Ok, so the biggest problem is that TestNG keeps test instances around until the end of the test suite, and many of our tests are quite heavyweight because they keep references to caches/managers even after they finish. I've opened a PR to set those fields to null, fix some smaller leaks, and use -XX:+UseG1GC -XX:-TieredCompilation, and I'm getting ~ 11 mins on my laptop. https://github.com/infinispan/infinispan/pull/5768 It's still a lot, especially knowing that not long ago it would take half of that, but making it shorter would probably involve looking deeper into the (many) tests that we've added in the last year or so. Cheers Dan On Fri, Feb 16, 2018 at 8:05 AM, Dan Berindei wrote: > Yeah, I got a much slower run with the default collector (parallel): > > [INFO] Total time: 17:45 min > GC Time: 2m 43s > Compile time: 18m 20s > > I'm not sure if it's really the GC affecting the compile time or there's > another factor hiding there. But I did get a heap dump and I'm analyzing it > now. > > Cheers > Dan > > > On Thu, Feb 15, 2018 at 1:59 PM, Dan Berindei > wrote: > >> Hmmm, I didn't notice that I was running with -XX:+UseG1GC, so perhaps >> our test suite is a pathological case for the default collector? >> >> [INFO] Total time: 12:45 min >> GC Time: 52.593s >> Class Loader Time: 1m 26.007s >> Compile Time: 10m 10.216s >> >> I'll try without -XX:+UseG1GC later. >> >> Cheers >> Dan >> >> >> On Thu, Feb 15, 2018 at 1:39 PM, Dan Berindei >> wrote: >> >>> And here I was thinking that by adding -XX:+HeapDumpOnOutOfMemoryError >>> anyone would be able to look into OOMEs and I wouldn't have to reproduce >>> the failures myself :) >>> >>> Dan >>> >>> >>> On Thu, Feb 15, 2018 at 1:32 PM, William Burns >>> wrote: >>> >>>> So I must admit I had noticed a while back that I was having some >>>> issues with running the core test suite. Unfortunately at the time CI and >>>> everyone else seemed to not have any issues. I just ignored it because at >>>> the time I didn't need to run core tests. But now that Sanne pointed this >>>> out, by increasing the heap variable in the pom.xml, I was for the first >>>> time able to run the test suite completely. It would normally hang for an >>>> extremely long time near the 9k-10K test completed point and never finish >>>> for me (at least I didn't wait long enough). >>>> >>>> So it definitely seems there is something leaking in the test suite >>>> causing the GC to use a ton of CPU time. >>>> >>>> - Will >>>> >>>> On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero >>>> wrote: >>>> >>>>> Thanks Dan. >>>>> >>>>> Do you happen to have observed the memory trend during a build? >>>>> >>>>> After a couple more attempts it passed the build once, so that shows >>>>> it's possible to pass.. 
but even though it's a small sample so far >>>>> that's 1 pass vs 3 OOMs on my machine. >>>>> >>>>> Even the one time it successfully completed the tests I see it wasted >>>>> ~80% of total build time doing GC runs.. it was likely very close to >>>>> fall over, and definitely not an efficient setting for regular builds. >>>>> Observing trends on my machine I'd guess a reasonable value to be >>>>> around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to >>>>> complete successfully without often failing. >>>>> >>>>> The memory issues are worse towards the end of the testsuite, and >>>>> steadily growing. >>>>> >>>>> I won't be able to investigate further as I need to urgently work on >>>>> modules, but I noticed there are quite some MBeans according to >>>>> JConsole. I guess it would be good to check if we're not leaking the >>>>> MBean registration, and therefore leaking (stopped?) CacheManagers >>>>> from there? >>>>> >>>>> Even near the beginning of the tests, when forcing a full GC I see >>>>> about 400MB being "not free". That's quite a lot for some simple >>>>> tests, no? >>>>> >>>>> Thanks, >>>>> Sanne >>>>> >>>>> >>>>> On 15 February 2018 at 06:51, Dan Berindei >>>>> wrote: >>>>> > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap >>>>> to 1G >>>>> > because we were trying to run the build on agent VMs with only 4GB >>>>> of RAM, >>>>> > and the 2GB heap was making the build run out of native memory. >>>>> > >>>>> > I've yet to see an OOME in the core tests, locally or in CI. But I >>>>> also >>>>> > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so >>>>> assuming there's >>>>> > a new leak it should be easy to track down in the heap dump. >>>>> > >>>>> > Cheers >>>>> > Dan >>>>> > >>>>> > >>>>> > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero < >>>>> sanne at infinispan.org> >>>>> > wrote: >>>>> >> >>>>> >> Hey all, >>>>> >> >>>>> >> I'm having OOMs running the tests of infinispan-core. >>>>> >> >>>>> >> Initially I thought it was related to limits and security as that's >>>>> >> the usual suspect, but no it's really just not enough memory :) >>>>> >> >>>>> >> Found that the root pom.xml sets a property to Xmx1G >>>>> for >>>>> >> surefire; I've been observing the growth of heap usage in JConsole >>>>> and >>>>> >> it's clearly not enough. >>>>> >> >>>>> >> What surprises me is that - as an occasional tester - I shouldn't be >>>>> >> the one to notice such a new requirement first. A leak which only >>>>> >> manifests in certain conditions? >>>>> >> >>>>> >> What do others observe? >>>>> >> >>>>> >> FWIW, I'm running it with 8G heap now and it's working much better; >>>>> >> still a couple of failures but at least they're not OOM related. 
>>>>> >> >>>>> >> Thanks, >>>>> >> Sanne >>>>> >> _______________________________________________ >>>>> >> infinispan-dev mailing list >>>>> >> infinispan-dev at lists.jboss.org >>>>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> > >>>>> > >>>>> > >>>>> > _______________________________________________ >>>>> > infinispan-dev mailing list >>>>> > infinispan-dev at lists.jboss.org >>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180219/2b76dc70/attachment-0001.html From sanne at infinispan.org Mon Feb 19 07:45:02 2018 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 Feb 2018 12:45:02 +0000 Subject: [infinispan-dev] Testsuite: memory usage? In-Reply-To: References: Message-ID: Thanks Dan, that solved the main issue, I no longer have OOMs on the core module. I'll merge your PR as soon as I completed the full build. Interesting idea to disable TieredCompilation, I'll try that on other projects too. If someone is up for some additional love as follow ups: - raising the heap from 1G to ~1300M does give it quite some more breathing space, I believe it should still work on a 2GB testing machine. - I still see quite some MBeans in the JConsole at the end of the build, something is leaking these and they do keep references to CacheManagers. - still seeing an unreasonable amount of threads as well, varying from ~200 to ~2000. Possibly related to the previous point? Cheers, Sanne On 19 February 2018 at 11:57, Dan Berindei wrote: > Ok, so the biggest problem is that TestNG keeps test instances around until > the end of the test suite, and many of our tests are quite heavyweight > because they keep references to caches/managers even after they finish. I've > opened a PR to set those fields to null, fix some smaller leaks, and use > -XX:+UseG1GC -XX:-TieredCompilation, and I'm getting ~ 11 mins on my laptop. > > https://github.com/infinispan/infinispan/pull/5768 > > It's still a lot, especially knowing that not long ago it would take half of > that, but making it shorter would probably involve looking deeper into the > (many) tests that we've added in the last year or so. > > Cheers > Dan > > > On Fri, Feb 16, 2018 at 8:05 AM, Dan Berindei > wrote: >> >> Yeah, I got a much slower run with the default collector (parallel): >> >> [INFO] Total time: 17:45 min >> GC Time: 2m 43s >> Compile time: 18m 20s >> >> I'm not sure if it's really the GC affecting the compile time or there's >> another factor hiding there. But I did get a heap dump and I'm analyzing it >> now. >> >> Cheers >> Dan >> >> >> On Thu, Feb 15, 2018 at 1:59 PM, Dan Berindei >> wrote: >>> >>> Hmmm, I didn't notice that I was running with -XX:+UseG1GC, so perhaps >>> our test suite is a pathological case for the default collector? >>> >>> [INFO] Total time: 12:45 min >>> GC Time: 52.593s >>> Class Loader Time: 1m 26.007s >>> Compile Time: 10m 10.216s >>> >>> I'll try without -XX:+UseG1GC later. 
>>> >>> Cheers >>> Dan >>> >>> >>> On Thu, Feb 15, 2018 at 1:39 PM, Dan Berindei >>> wrote: >>>> >>>> And here I was thinking that by adding -XX:+HeapDumpOnOutOfMemoryError >>>> anyone would be able to look into OOMEs and I wouldn't have to reproduce the >>>> failures myself :) >>>> >>>> Dan >>>> >>>> >>>> On Thu, Feb 15, 2018 at 1:32 PM, William Burns >>>> wrote: >>>>> >>>>> So I must admit I had noticed a while back that I was having some >>>>> issues with running the core test suite. Unfortunately at the time CI and >>>>> everyone else seemed to not have any issues. I just ignored it because at >>>>> the time I didn't need to run core tests. But now that Sanne pointed this >>>>> out, by increasing the heap variable in the pom.xml, I was for the first >>>>> time able to run the test suite completely. It would normally hang for an >>>>> extremely long time near the 9k-10K test completed point and never finish >>>>> for me (at least I didn't wait long enough). >>>>> >>>>> So it definitely seems there is something leaking in the test suite >>>>> causing the GC to use a ton of CPU time. >>>>> >>>>> - Will >>>>> >>>>> On Thu, Feb 15, 2018 at 5:40 AM Sanne Grinovero >>>>> wrote: >>>>>> >>>>>> Thanks Dan. >>>>>> >>>>>> Do you happen to have observed the memory trend during a build? >>>>>> >>>>>> After a couple more attempts it passed the build once, so that shows >>>>>> it's possible to pass.. but even though it's a small sample so far >>>>>> that's 1 pass vs 3 OOMs on my machine. >>>>>> >>>>>> Even the one time it successfully completed the tests I see it wasted >>>>>> ~80% of total build time doing GC runs.. it was likely very close to >>>>>> fall over, and definitely not an efficient setting for regular builds. >>>>>> Observing trends on my machine I'd guess a reasonable value to be >>>>>> around 5GB to keep builds fast, or a minimum of 1.3 GB to be able to >>>>>> complete successfully without often failing. >>>>>> >>>>>> The memory issues are worse towards the end of the testsuite, and >>>>>> steadily growing. >>>>>> >>>>>> I won't be able to investigate further as I need to urgently work on >>>>>> modules, but I noticed there are quite some MBeans according to >>>>>> JConsole. I guess it would be good to check if we're not leaking the >>>>>> MBean registration, and therefore leaking (stopped?) CacheManagers >>>>>> from there? >>>>>> >>>>>> Even near the beginning of the tests, when forcing a full GC I see >>>>>> about 400MB being "not free". That's quite a lot for some simple >>>>>> tests, no? >>>>>> >>>>>> Thanks, >>>>>> Sanne >>>>>> >>>>>> >>>>>> On 15 February 2018 at 06:51, Dan Berindei >>>>>> wrote: >>>>>> > forkJvmArgs used to be "-Xmx2G" before ISPN-8478. I reduced the heap >>>>>> > to 1G >>>>>> > because we were trying to run the build on agent VMs with only 4GB >>>>>> > of RAM, >>>>>> > and the 2GB heap was making the build run out of native memory. >>>>>> > >>>>>> > I've yet to see an OOME in the core tests, locally or in CI. But I >>>>>> > also >>>>>> > included -XX:+HeapDumpOnOutOfMemoryError in forkJvmArgs, so assuming >>>>>> > there's >>>>>> > a new leak it should be easy to track down in the heap dump. >>>>>> > >>>>>> > Cheers >>>>>> > Dan >>>>>> > >>>>>> > >>>>>> > On Wed, Feb 14, 2018 at 11:46 PM, Sanne Grinovero >>>>>> > >>>>>> > wrote: >>>>>> >> >>>>>> >> Hey all, >>>>>> >> >>>>>> >> I'm having OOMs running the tests of infinispan-core. 
>>>>>> >> >>>>>> >> Initially I thought it was related to limits and security as that's >>>>>> >> the usual suspect, but no it's really just not enough memory :) >>>>>> >> >>>>>> >> Found that the root pom.xml sets a property to Xmx1G >>>>>> >> for >>>>>> >> surefire; I've been observing the growth of heap usage in JConsole >>>>>> >> and >>>>>> >> it's clearly not enough. >>>>>> >> >>>>>> >> What surprises me is that - as an occasional tester - I shouldn't >>>>>> >> be >>>>>> >> the one to notice such a new requirement first. A leak which only >>>>>> >> manifests in certain conditions? >>>>>> >> >>>>>> >> What do others observe? >>>>>> >> >>>>>> >> FWIW, I'm running it with 8G heap now and it's working much better; >>>>>> >> still a couple of failures but at least they're not OOM related. >>>>>> >> >>>>>> >> Thanks, >>>>>> >> Sanne >>>>>> >> _______________________________________________ >>>>>> >> infinispan-dev mailing list >>>>>> >> infinispan-dev at lists.jboss.org >>>>>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> > >>>>>> > >>>>>> > >>>>>> > _______________________________________________ >>>>>> > infinispan-dev mailing list >>>>>> > infinispan-dev at lists.jboss.org >>>>>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>> >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Mon Feb 19 10:44:05 2018 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 19 Feb 2018 15:44:05 +0000 Subject: [infinispan-dev] Weekly IRC meeting logs 2018-02-19 Message-ID: Here are the logs of this week's IRC meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-02-19-15.02.log.html I forgot to send out the logs of last week's meeting, so I'm including them here as well: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-02-12-15.05.log.html Cheers Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20180219/7d5dc99e/attachment.html From ttarrant at redhat.com Thu Feb 22 02:33:30 2018 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 22 Feb 2018 08:33:30 +0100 Subject: [infinispan-dev] Infinispan 9.2.0.CR3 Message-ID: <3a62bbad-a50f-8f48-1d59-43e53b59afdf@redhat.com> Dear all, we have released Infinispan 9.2.0.CR3. Read all about it here: http://blog.infinispan.org/2018/02/infinispan-920cr3.html Enjoy ! Tristan -- Tristan Tarrant Infinispan Lead and Data Grid Architect JBoss, a division of Red Hat From rvansa at redhat.com Thu Feb 22 04:09:53 2018 From: rvansa at redhat.com (Radim Vansa) Date: Thu, 22 Feb 2018 10:09:53 +0100 Subject: [infinispan-dev] Ordering of includeCurrentState events Message-ID: Currently remote events caused by includeCurrentState=true are not guaranteed to be delivered before the operation completes; these are only queued on the server to be sent but not actually sent over wire. Do we want any such guarantee? 
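For context, the scenario is a Hot Rod client listener registered roughly as below - a minimal sketch with made-up names and String keys; the annotations and the includeCurrentState flag are the actual client API:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
    import org.infinispan.client.hotrod.annotation.ClientListener;
    import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;

    @ClientListener(includeCurrentState = true)
    public class StateAwareListener {

       @ClientCacheEntryCreated
       public void onCreated(ClientCacheEntryCreatedEvent<String> event) {
          // existing entries are replayed here as 'created' events,
          // in addition to the events for subsequent modifications
          System.out.println("created: " + event.getKey());
       }

       public static void register(RemoteCache<String, String> cache) {
          // the question above is whether the replayed events are guaranteed
          // to have been delivered by the time this call returns
          cache.addClientListener(new StateAwareListener());
       }
    }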
Given all the non-reliability around listener failover I don't think this
is needed, but I'd rather check with the crowd.

Radim

-- 
Radim Vansa
JBoss Performance Team

From anistor at redhat.com  Thu Feb 22 08:16:59 2018
From: anistor at redhat.com (Adrian Nistor)
Date: Thu, 22 Feb 2018 15:16:59 +0200
Subject: [infinispan-dev] Ordering of includeCurrentState events
In-Reply-To: 
References: 
Message-ID: 

Hi Radim,

From the continuous query point of view it does not matter if
'existing-state-events' are queued for a while, as long as they are
delivered _before_ the 'online' events. For CQ we do not care to make
them distinguishable, but we do want this order!

Other use cases might have different needs (probably more relaxed), but
this is the minimum for CQ.

Adrian

On 02/22/2018 11:09 AM, Radim Vansa wrote:
> Currently remote events caused by includeCurrentState=true are not
> guaranteed to be delivered before the operation completes; these are
> only queued on the server to be sent but not actually sent over wire.
>
> Do we want any such guarantee? Do we want to make events from the
> current state somehow distinguishable from the 'online' ones?
>
> Given all the non-reliability around listener failover I don't think
> this is needed, but I'd rather check with the crowd.
>
> Radim
>

From ttarrant at redhat.com  Mon Feb 26 11:06:34 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Mon, 26 Feb 2018 17:06:34 +0100
Subject: [infinispan-dev] Weekly IRC Meeting Logs 2018-02-26
Message-ID: <86bd12f3-66fb-32f0-a3b8-bdcf399599ae@redhat.com>

Hi all,

the weekly meeting logs are available:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-02-26-15.02.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat

From galder at redhat.com  Tue Feb 27 11:06:10 2018
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 27 Feb 2018 17:06:10 +0100
Subject: [infinispan-dev] Hot Rod secured by default
In-Reply-To: (Tristan Tarrant's message of "Mon, 5 Feb 2018 12:02:40 +0100")
References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com>
 <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com>
 <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com>
 <87659D4A-0085-436B-897B-802B8E3DAB3F@redhat.com>
Message-ID: 

Tristan Tarrant writes:

> Sorry for reviving this thread, but I want to make sure we all agree on
> the following points.
>
> DEFAULT CONFIGURATIONS
> - The endpoints MUST be secure by default (authentication MUST be
> enabled and required) in all of the supplied default configurations.
> - We can ship non-secure configurations, but these need to be clearly
> marked as such in the configuration filename (e.g.
> standalone-unsecured.xml).
> - Memcached MUST NOT be enabled by default as we do not implement the
> binary protocol which is the only one that can do authn/encryption
> - The default configurations (standalone.xml, domain.xml, cloud.xml)
> MUST enable only non-plaintext mechs (e.g. digest et al)

+1

>
> SERVER CHANGES
> - Warn if a plain text mech is enabled on an unencrypted endpoint
>
> API
> - We MUST NOT add a "trust all certs" switch to the client config as
> that would thwart the whole purpose of encryption.
>
> OPENSHIFT
> - In the context of OpenShift, all pods MUST trust the master CA. This
> means that the CA must be injected into the trusted CAs for the pods AND
> into the JDK cacerts file.
This MUST be done by the OpenShift JDK image > automatically. (Debian does this on startup: [1]) > > Tristan > > [1] > https://git.mikael.io/mikaelhg/ca-certificates-java/blob/debian/20170531/src/main/java/org/debian/security/UpdateCertificates.java > > On 9/14/17 5:45 PM, Galder Zamarre?o wrote: >> Gustavo's reply was the agreement reached. Secured by default and an >> easy way to use it unsecured is the best middle ground IMO. >> >> So, we've done the securing part partially, which needs to be >> completed by [2] (currently assigned to Tristan). >> >> More importantly, we also need to complete [3] so that we have ship >> the unsecured configuration, and then show people how to use that >> (docus, examples...etc). >> >> If you want to help, taking ownership of [3] would be best. >> >> Cheers, >> >> [2] https://issues.jboss.org/browse/ISPN-7815 >> [3] https://issues.jboss.org/browse/ISPN-7818 >> >>> On 6 Sep 2017, at 11:03, Katia Aresti wrote: >>> >>> @Emmanuel, sure it't not a big deal, but starting fast and smooth >>> without any trouble helps adoption. Concerning the ticket, there is >>> already one that was acted. I can work on that, even if is assigned >>> to Galder now. >>> >>> @Gustavo >>> Yes, as I read - better - now on the security part, it is said for >>> the console that we need those. My head skipped that paragraph or I >>> read that badly, and I was wondering if it was more something >>> related to "roles" rather than a user. My bad, because I read too >>> fast sometimes and skip things ! Maybe the paragraph of the >>> security in the console should be moved down to the console part, >>> which is small to read now ? When I read there "see the security >>> part bellow" I was like : ok, the security is done !! :) >>> >>> Thank you for your replies ! >>> >>> Katia >>> >>> >>> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes wrote: >>> Comments inlined >>> >>> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: >>> And then I want to go to the console, requires me to put again the >>> user/password. And it does not work. And I don't see how to disable >>> security. And I don't know what to do. And I'm like : why do I need >>> security at all here ? >>> >>> >>> The console credentials are specified with MGMT_USER/MGMT_PASS env >>> variables, did you try those? It will not work for >>> APP_USER/APP_PASS. >>> >>> >>> I wonder if you want to reconsider the "secured by default" point >>> after my experience. >>> >>> >>> The outcome of the discussion is that the clustered.xml will be >>> secured by default, but you should be able to launch a container >>> without any security by simply passing an alternate xml in the >>> startup, and we'll ship this XML with the server. >>> >>> >>> Gustavo >>> >>> >>> My 2 cents, >>> >>> Katia >>> >>> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: >>> Hi all, >>> >>> Tristan and I had chat yesterday and I've distilled the contents of >>> the discussion and the feedback here into a JIRA [1]. The JIRA >>> contains several subtasks to handle these aspects: >>> >>> 1. Remove auth check in server's CacheDecodeContext. >>> 2. Default server configuration should require authentication in all entry points. >>> 3. Provide an unauthenticated configuration that users can easily switch to. >>> 4. Remove default username+passwords in docker image and instead >>> show an info/warn message when these are not provided. >>> 5. 
Add capability to pass in app user role groups to docker image >>> easily, so that its easy to add authorization on top of the server. >>> >>> Cheers, >>> >>> [1] https://issues.jboss.org/browse/ISPN-7811 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>>> On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >>>> >>>> That is caused by not wrapping the calls in PrivilegedActions in all the >>>> correct places and is a bug. >>>> >>>> Tristan >>>> >>>> On 19/04/2017 11:34, Sebastian Laskawiec wrote: >>>>> The proposal look ok to me. >>>>> >>>>> But I would also like to highlight one thing - it seems you can't access >>>>> secured cache properties using CLI. This seems wrong to me (if you can >>>>> invoke the cli, in 99,99% of the cases you have access to the machine, >>>>> so you can do whatever you want). It also breaks healthchecks in Docker >>>>> image. >>>>> >>>>> I would like to make sure we will address those concerns. >>>>> >>>>> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >>>> > wrote: >>>>> >>>>> Currently the "protected cache access" security is implemented as >>>>> follows: >>>>> >>>>> - if authorization is enabled || client is on loopback >>>>> allow >>>>> >>>>> The first check also implies that authentication needs to be in place, >>>>> as the authorization checks need a valid Subject. >>>>> >>>>> Unfortunately authorization is very heavy-weight and actually overkill >>>>> even for "normal" secure usage. >>>>> >>>>> My proposal is as follows: >>>>> - the "default" configuration files are "secure" by default >>>>> - provide clearly marked "unsecured" configuration files, which the user >>>>> can use >>>>> - drop the "protected cache" check completely >>>>> >>>>> And definitely NO to a dev switch. >>>>> >>>>> Tristan >>>>> >>>>> On 19/04/2017 10:05, Galder Zamarre?o wrote: >>>>>> Agree with Wolf. Let's keep it simple by just providing extra >>>>> configuration files for dev/unsecure envs. >>>>>> >>>>>> Cheers, >>>>>> -- >>>>>> Galder Zamarre?o >>>>>> Infinispan, Red Hat >>>>>> >>>>>>> On 15 Apr 2017, at 12:57, Wolf Fink >>>> > wrote: >>>>>>> >>>>>>> I would think a "switch" can have other impacts as you need to >>>>> check it in the code - and might have security leaks here >>>>>>> >>>>>>> So what is wrong with some configurations which are the default >>>>> and secured. >>>>>>> and a "*-dev or *-unsecure" configuration to start easy. >>>>>>> Also this can be used in production if there is no need for security >>>>>>> >>>>>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >>>>> > wrote: >>>>>>> I still think it would be better to create an extra switch to >>>>> run infinispan in "development mode". This means no authentication, >>>>> no encryption, possibly with JGroups stack tuned for fast discovery >>>>> (especially in Kubernetes) and a big warning saying "You are in >>>>> development mode, do not use this in production". >>>>>>> >>>>>>> Just something very easy to get you going. >>>>>>> >>>>>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >>>>> > wrote: >>>>>>> >>>>>>> -- >>>>>>> Galder Zamarre?o >>>>>>> Infinispan, Red Hat >>>>>>> >>>>>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >>>>> > wrote: >>>>>>>> >>>>>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >>>>> > wrote: >>>>>>>> Hi all, >>>>>>>> >>>>>>>> As per some discussions we had yesterday on IRC w/ Tristan, >>>>> Gustavo and Sebastian, I've created a docker image snapshot that >>>>> reverts the change stop protected caches from requiring security >>>>> enabled [1]. 
>>>>>>>> >>>>>>>> In other words, I've removed [2]. The reason for temporarily >>>>> doing that is because with the change as is, the changes required >>>>> for a default server distro require that the entire cache manager's >>>>> security is enabled. This is in turn creates a lot of problems with >>>>> health and running checks used by Kubernetes/OpenShift amongst other >>>>> things. >>>>>>>> >>>>>>>> Judging from our discussions on IRC, the idea is for such >>>>> change to be present in 9.0.1, but I'd like to get final >>>>> confirmation from Tristan et al. >>>>>>>> >>>>>>>> >>>>>>>> +1 >>>>>>>> >>>>>>>> Regarding the "security by default" discussion, I think we >>>>> should ship configurations cloud.xml, clustered.xml and >>>>> standalone.xml with security enabled and disabled variants, and let >>>>> users >>>>>>>> decide which one to pick based on the use case. >>>>>>> >>>>>>> I think that's a better idea. >>>>>>> >>>>>>> We could by default have a secured one, but switching to an >>>>> insecure configuration should be doable with minimal effort, e.g. >>>>> just switching config file. >>>>>>> >>>>>>> As highlighted above, any secured configuration should work >>>>> out-of-the-box with our docker images, e.g. WRT healthy/running checks. >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>>> >>>>>>>> Gustavo. >>>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >>>>> (9.0.1-SNAPSHOT tag for anyone interested) >>>>>>>> [2] >>>>> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 >>>>>>>> -- >>>>>>>> Galder Zamarre?o >>>>>>>> Infinispan, Red Hat >>>>>>>> >>>>>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >>>> > wrote: >>>>>>>>> >>>>>>>>> Dear all, >>>>>>>>> >>>>>>>>> after a mini chat on IRC, I wanted to bring this to >>>>> everybody's attention. >>>>>>>>> >>>>>>>>> We should make the Hot Rod endpoint require authentication in the >>>>>>>>> out-of-the-box configuration. >>>>>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >>>>>>>>> mechanism against the ApplicationRealm and require users to >>>>> run the >>>>>>>>> add-user script. >>>>>>>>> This would achieve two goals: >>>>>>>>> - secure out-of-the-box configuration, which is always a good idea >>>>>>>>> - access to the "protected" schema and script caches which is >>>>> prevented >>>>>>>>> when not on loopback on non-authenticated endpoints. 
>>>>>>>>> >>>>>>>>> Tristan >>>>>>>>> -- >>>>>>>>> Tristan Tarrant >>>>>>>>> Infinispan Lead >>>>>>>>> JBoss, a division of Red Hat >>>>>>>>> _______________________________________________ >>>>>>>>> infinispan-dev mailing list >>>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> infinispan-dev mailing list >>>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> -- >>>>>>> SEBASTIAN ?ASKAWIEC >>>>>>> INFINISPAN DEVELOPER >>>>>>> Red Hat EMEA >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>>> infinispan-dev mailing list >>>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> infinispan-dev mailing list >>>>>> infinispan-dev at lists.jboss.org >>>>> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>>> >>>>> >>>>> -- >>>>> Tristan Tarrant >>>>> Infinispan Lead >>>>> JBoss, a division of Red Hat >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>>> -- >>>>> >>>>> SEBASTIAN?ASKAWIEC >>>>> >>>>> INFINISPAN DEVELOPER >>>>> >>>>> Red HatEMEA >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> -- >>>> Tristan Tarrant >>>> Infinispan Lead >>>> JBoss, a division of Red Hat >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at 
lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>

From galder at redhat.com  Tue Feb 27 11:13:51 2018
From: galder at redhat.com (Galder Zamarreño)
Date: Tue, 27 Feb 2018 17:13:51 +0100
Subject: [infinispan-dev] spare cycles
In-Reply-To: <20a3e345-fe25-fe9a-4a6a-acf7f148df45@infinispan.org> (Ion Savin's message of "Wed, 31 Jan 2018 10:10:07 +0200")
References: <20a3e345-fe25-fe9a-4a6a-acf7f148df45@infinispan.org>
Message-ID: 

Hi Ion,

Great to hear that you have some time to contribute. Any particular
interests? The hackathon ideas are a good place to start:

https://issues.jboss.org/browse/ISPN-2234?filter=12322175

Having a HotRod URL format would be a good one :)

Cheers

Ion Savin writes:

> Hi all,
>
> I have some spare cycles over the course of the year which I'm going to
> use to contribute to open source projects.
>
> If you can think of anything specific that you could use some help with
> please let me know.
>
> Thanks,
> Ion Savin
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

From ttarrant at redhat.com  Wed Feb 28 18:25:49 2018
From: ttarrant at redhat.com (Tristan Tarrant)
Date: Thu, 1 Mar 2018 00:25:49 +0100
Subject: [infinispan-dev] Infinispan 9.2.0.Final
Message-ID: 

We have finally released Infinispan 9.2.0.Final. Come and read all
about it:

http://blog.infinispan.org/2018/02/infinispan-920final.html

Thanks to the whole core team and community for the contributions. You
are awesome !

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat