From galder at redhat.com Fri Sep 1 07:11:21 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 1 Sep 2017 13:11:21 +0200 Subject: [infinispan-dev] Smoke test suite for hibernate-cache In-Reply-To: <11383c31-d845-410f-7210-bc0e2ea0ae7a@redhat.com> References: <11383c31-d845-410f-7210-bc0e2ea0ae7a@redhat.com> Message-ID: <80BB8D4F-3285-499D-8CB7-DD4B6A81CFA3@redhat.com> Hey Martin, Thanks for working on this. I'd suggest these: - NaturalIdInvalidationTest - EntityRegionAccessStrategyTest - CollectionRegionAccessStrategyTest - QueryRegionImplTest - TimestampsRegionImplTest They cover most of the functionality offer, but they run in more than just a few seconds... Each of those normally cycle through different configuration options, both at Hibernate and Infinispan level, so that's why they take more than just a few seconds. Try those and see what you think. Cheers, > On 18 Aug 2017, at 14:26, Martin Gencur wrote: > > Hi all, > I'm currently in the process of refreshing the "smoke" test suite for > Infinispan. > There's a relatively new module called hibernate-cache. Could someone > suggest tests that should be part of the smoke test suite? > Ideally just a few tens of test cases (maybe a few hundreds at most but > the test suite execution should finish in a few seconds). > > A list of test classes as a reply to this email would be ideal:) > > Thanks, > Martin > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From mgencur at redhat.com Mon Sep 4 04:40:23 2017 From: mgencur at redhat.com (Martin Gencur) Date: Mon, 4 Sep 2017 10:40:23 +0200 Subject: [infinispan-dev] Smoke test suite for hibernate-cache In-Reply-To: <80BB8D4F-3285-499D-8CB7-DD4B6A81CFA3@redhat.com> References: <11383c31-d845-410f-7210-bc0e2ea0ae7a@redhat.com> <80BB8D4F-3285-499D-8CB7-DD4B6A81CFA3@redhat.com> Message-ID: <292ee8dc-5cfe-2d74-78b7-d129efeddf2f@redhat.com> Thanks, Galder. I've created this PR with your suggested tests: https://github.com/infinispan/infinispan/pull/5404 The smoke test suite now runs 349 tests (in one minute) compared to 1036 when the whole test suite is run. This is good enough for now, IMO. Martin On 1.9.2017 13:11, Galder Zamarre?o wrote: > Hey Martin, > > Thanks for working on this. I'd suggest these: > > - NaturalIdInvalidationTest > - EntityRegionAccessStrategyTest > - CollectionRegionAccessStrategyTest > - QueryRegionImplTest > - TimestampsRegionImplTest > > They cover most of the functionality offer, but they run in more than just a few seconds... Each of those normally cycle through different configuration options, both at Hibernate and Infinispan level, so that's why they take more than just a few seconds. > > Try those and see what you think. > > Cheers, > >> On 18 Aug 2017, at 14:26, Martin Gencur wrote: >> >> Hi all, >> I'm currently in the process of refreshing the "smoke" test suite for >> Infinispan. >> There's a relatively new module called hibernate-cache. Could someone >> suggest tests that should be part of the smoke test suite? >> Ideally just a few tens of test cases (maybe a few hundreds at most but >> the test suite execution should finish in a few seconds). 
>> >> A list of test classes as a reply to this email would be ideal:) >> >> Thanks, >> Martin >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From karesti at redhat.com Tue Sep 5 12:03:00 2017 From: karesti at redhat.com (Katia Aresti) Date: Tue, 5 Sep 2017 18:03:00 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: Hi all, I came up to this thread now, at the time I did not understand what we discussed here. Let me tell you my experience of today. I want to create a vert.x application, super simple, that connects to infinispan-server. Everything in open-shift. I take an vert-x example as base, super simple one, that deploys directly a super simple web app with a simple command line. Works fine, I see the "hello world" webapp in my laptop on the openshift and my navigator. I want this time to access to infinispan-server. I go into the interface, and I say "hey, grab the jboss/infinispan-server" image from docker registry. I try to go to the REST console, and oh, I need to authenticate. I go back, and then I read that I need to put 2 env variables. So there it is, now it works. And then I want to go to the console, requires me to put again the user/password. And it does not work. And I don't see how to disable security. And I don't know what to do. And I'm like : why do I need security at all here ? I'm a lambda dev, in a lambda project, considering using infinispan-server for the first time, and at this point, security of the cache I don't care and I don't know why I have to deal with it. I know you acted a ticket and decided to have security on by default. IMHO lambda dev creating the hello world with the docker image provided by jboss... does not care about it. At all. He or she just wants to see everything working as soon as possible. Once they decide to use infinispan, if the cache needs to be secured, which seems to be recommended, they will go into that point (devs and ops) and change the code and the configuration to add the security level they need for their project. I wonder if you want to reconsider the "secured by default" point after my experience. My 2 cents, Katia On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: > Hi all, > > Tristan and I had chat yesterday and I've distilled the contents of the > discussion and the feedback here into a JIRA [1]. The JIRA contains several > subtasks to handle these aspects: > > 1. Remove auth check in server's CacheDecodeContext. > 2. Default server configuration should require authentication in all entry > points. > 3. Provide an unauthenticated configuration that users can easily switch > to. > 4. Remove default username+passwords in docker image and instead show an > info/warn message when these are not provided. > 5. Add capability to pass in app user role groups to docker image easily, > so that its easy to add authorization on top of the server. 
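As a sketch of what items 2, 4 and 5 above mean in practice when starting
the jboss/infinispan-server Docker image (the APP_USER/APP_PASS and
MGMT_USER/MGMT_PASS variables are the ones referred to later in this
thread; a dedicated variable for app user role groups is what item 5
proposes and is assumed not to exist yet):

    # endpoint (Hot Rod/REST) credentials and management console credentials
    # are supplied via environment variables instead of baked-in defaults
    docker run -it \
      -e APP_USER=developer -e APP_PASS=changeme \
      -e MGMT_USER=admin -e MGMT_PASS=changeme \
      jboss/infinispan-server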
> > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-7811 > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: > > > > That is caused by not wrapping the calls in PrivilegedActions in all the > > correct places and is a bug. > > > > Tristan > > > > On 19/04/2017 11:34, Sebastian Laskawiec wrote: > >> The proposal look ok to me. > >> > >> But I would also like to highlight one thing - it seems you can't access > >> secured cache properties using CLI. This seems wrong to me (if you can > >> invoke the cli, in 99,99% of the cases you have access to the machine, > >> so you can do whatever you want). It also breaks healthchecks in Docker > >> image. > >> > >> I would like to make sure we will address those concerns. > >> > >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >> > wrote: > >> > >> Currently the "protected cache access" security is implemented as > >> follows: > >> > >> - if authorization is enabled || client is on loopback > >> allow > >> > >> The first check also implies that authentication needs to be in > place, > >> as the authorization checks need a valid Subject. > >> > >> Unfortunately authorization is very heavy-weight and actually > overkill > >> even for "normal" secure usage. > >> > >> My proposal is as follows: > >> - the "default" configuration files are "secure" by default > >> - provide clearly marked "unsecured" configuration files, which the > user > >> can use > >> - drop the "protected cache" check completely > >> > >> And definitely NO to a dev switch. > >> > >> Tristan > >> > >> On 19/04/2017 10:05, Galder Zamarre?o wrote: > >>> Agree with Wolf. Let's keep it simple by just providing extra > >> configuration files for dev/unsecure envs. > >>> > >>> Cheers, > >>> -- > >>> Galder Zamarre?o > >>> Infinispan, Red Hat > >>> > >>>> On 15 Apr 2017, at 12:57, Wolf Fink >> > wrote: > >>>> > >>>> I would think a "switch" can have other impacts as you need to > >> check it in the code - and might have security leaks here > >>>> > >>>> So what is wrong with some configurations which are the default > >> and secured. > >>>> and a "*-dev or *-unsecure" configuration to start easy. > >>>> Also this can be used in production if there is no need for security > >>>> > >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec > >> > wrote: > >>>> I still think it would be better to create an extra switch to > >> run infinispan in "development mode". This means no authentication, > >> no encryption, possibly with JGroups stack tuned for fast discovery > >> (especially in Kubernetes) and a big warning saying "You are in > >> development mode, do not use this in production". > >>>> > >>>> Just something very easy to get you going. > >>>> > >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o > >> > wrote: > >>>> > >>>> -- > >>>> Galder Zamarre?o > >>>> Infinispan, Red Hat > >>>> > >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes > >> > wrote: > >>>>> > >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o > >> > wrote: > >>>>> Hi all, > >>>>> > >>>>> As per some discussions we had yesterday on IRC w/ Tristan, > >> Gustavo and Sebastian, I've created a docker image snapshot that > >> reverts the change stop protected caches from requiring security > >> enabled [1]. > >>>>> > >>>>> In other words, I've removed [2]. The reason for temporarily > >> doing that is because with the change as is, the changes required > >> for a default server distro require that the entire cache manager's > >> security is enabled. 
This is in turn creates a lot of problems with > >> health and running checks used by Kubernetes/OpenShift amongst other > >> things. > >>>>> > >>>>> Judging from our discussions on IRC, the idea is for such > >> change to be present in 9.0.1, but I'd like to get final > >> confirmation from Tristan et al. > >>>>> > >>>>> > >>>>> +1 > >>>>> > >>>>> Regarding the "security by default" discussion, I think we > >> should ship configurations cloud.xml, clustered.xml and > >> standalone.xml with security enabled and disabled variants, and let > >> users > >>>>> decide which one to pick based on the use case. > >>>> > >>>> I think that's a better idea. > >>>> > >>>> We could by default have a secured one, but switching to an > >> insecure configuration should be doable with minimal effort, e.g. > >> just switching config file. > >>>> > >>>> As highlighted above, any secured configuration should work > >> out-of-the-box with our docker images, e.g. WRT healthy/running > checks. > >>>> > >>>> Cheers, > >>>> > >>>>> > >>>>> Gustavo. > >>>>> > >>>>> > >>>>> Cheers, > >>>>> > >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ > >> (9.0.1-SNAPSHOT tag for anyone interested) > >>>>> [2] > >> https://github.com/infinispan/infinispan/blob/master/server/ > hotrod/src/main/java/org/infinispan/server/hotrod/ > CacheDecodeContext.java#L114-L118 > >>>>> -- > >>>>> Galder Zamarre?o > >>>>> Infinispan, Red Hat > >>>>> > >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >> > wrote: > >>>>>> > >>>>>> Dear all, > >>>>>> > >>>>>> after a mini chat on IRC, I wanted to bring this to > >> everybody's attention. > >>>>>> > >>>>>> We should make the Hot Rod endpoint require authentication in the > >>>>>> out-of-the-box configuration. > >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL > >>>>>> mechanism against the ApplicationRealm and require users to > >> run the > >>>>>> add-user script. > >>>>>> This would achieve two goals: > >>>>>> - secure out-of-the-box configuration, which is always a good idea > >>>>>> - access to the "protected" schema and script caches which is > >> prevented > >>>>>> when not on loopback on non-authenticated endpoints. 
> >>>>>> > >>>>>> Tristan > >>>>>> -- > >>>>>> Tristan Tarrant > >>>>>> Infinispan Lead > >>>>>> JBoss, a division of Red Hat > >>>>>> _______________________________________________ > >>>>>> infinispan-dev mailing list > >>>>>> infinispan-dev at lists.jboss.org > >> > >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >> > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >> > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> -- > >>>> SEBASTIAN ?ASKAWIEC > >>>> INFINISPAN DEVELOPER > >>>> Red Hat EMEA > >>>> > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >> > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >> > >> -- > >> Tristan Tarrant > >> Infinispan Lead > >> JBoss, a division of Red Hat > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org jboss.org> > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> -- > >> > >> SEBASTIAN?ASKAWIEC > >> > >> INFINISPAN DEVELOPER > >> > >> Red HatEMEA > >> > >> > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170905/e3748751/attachment-0001.html From emmanuel at hibernate.org Wed Sep 6 01:54:37 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 6 Sep 2017 07:54:37 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: <9A6EFFCD-7473-4FD4-A758-ED57D36A1F63@hibernate.org> When you deploy a RDBMS on OpenShift, it is secured by default I think and it's not a big deal. It is possible to read back secrets in OpenShift in case you forgot them. 
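As a sketch, assuming the credentials were stored in a Secret named
infinispan-credentials (a hypothetical name used only for illustration),
reading one back could look like:

    # Secret values are base64-encoded, so decode the field you need
    oc get secret infinispan-credentials -o jsonpath='{.data.password}' | base64 --decode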
Maybe there is a way to either document that better, maybe having the same password between console and cache by default is easier ? We live in a world where we need to take security from day one. Remember OpenShift means you can start stuff super easily, some in the cloud, you don't want it open by default. But your input is super valuable, open a j'irai with that experience and let's try and make it easier. > On 5 Sep 2017, at 18:03, Katia Aresti wrote: > > Hi all, > > I came up to this thread now, at the time I did not understand what we discussed here. > > Let me tell you my experience of today. > > I want to create a vert.x application, super simple, that connects to infinispan-server. Everything in open-shift. > I take an vert-x example as base, super simple one, that deploys directly a super simple web app with a simple command line. Works fine, I see the "hello world" webapp in my laptop on the openshift and my navigator. > > I want this time to access to infinispan-server. I go into the interface, and I say "hey, grab the jboss/infinispan-server" image from docker registry. I try to go to the REST console, and oh, I need to authenticate. I go back, and then I read that I need to put 2 env variables. So there it is, now it works. > And then I want to go to the console, requires me to put again the user/password. And it does not work. And I don't see how to disable security. And I don't know what to do. And I'm like : why do I need security at all here ? > I'm a lambda dev, in a lambda project, considering using infinispan-server for the first time, and at this point, security of the cache I don't care and I don't know why I have to deal with it. > > I know you acted a ticket and decided to have security on by default. IMHO lambda dev creating the hello world with the docker image provided by jboss... does not care about it. At all. He or she just wants to see everything working as soon as possible. Once they decide to use infinispan, if the cache needs to be secured, which seems to be recommended, they will go into that point (devs and ops) and change the code and the configuration to add the security level they need for their project. > > I wonder if you want to reconsider the "secured by default" point after my experience. > > My 2 cents, > > Katia > >> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: >> Hi all, >> >> Tristan and I had chat yesterday and I've distilled the contents of the discussion and the feedback here into a JIRA [1]. The JIRA contains several subtasks to handle these aspects: >> >> 1. Remove auth check in server's CacheDecodeContext. >> 2. Default server configuration should require authentication in all entry points. >> 3. Provide an unauthenticated configuration that users can easily switch to. >> 4. Remove default username+passwords in docker image and instead show an info/warn message when these are not provided. >> 5. Add capability to pass in app user role groups to docker image easily, so that its easy to add authorization on top of the server. >> >> Cheers, >> >> [1] https://issues.jboss.org/browse/ISPN-7811 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >> > >> > That is caused by not wrapping the calls in PrivilegedActions in all the >> > correct places and is a bug. >> > >> > Tristan >> > >> > On 19/04/2017 11:34, Sebastian Laskawiec wrote: >> >> The proposal look ok to me. 
>> >> >> >> But I would also like to highlight one thing - it seems you can't access >> >> secured cache properties using CLI. This seems wrong to me (if you can >> >> invoke the cli, in 99,99% of the cases you have access to the machine, >> >> so you can do whatever you want). It also breaks healthchecks in Docker >> >> image. >> >> >> >> I would like to make sure we will address those concerns. >> >> >> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant > >> > wrote: >> >> >> >> Currently the "protected cache access" security is implemented as >> >> follows: >> >> >> >> - if authorization is enabled || client is on loopback >> >> allow >> >> >> >> The first check also implies that authentication needs to be in place, >> >> as the authorization checks need a valid Subject. >> >> >> >> Unfortunately authorization is very heavy-weight and actually overkill >> >> even for "normal" secure usage. >> >> >> >> My proposal is as follows: >> >> - the "default" configuration files are "secure" by default >> >> - provide clearly marked "unsecured" configuration files, which the user >> >> can use >> >> - drop the "protected cache" check completely >> >> >> >> And definitely NO to a dev switch. >> >> >> >> Tristan >> >> >> >> On 19/04/2017 10:05, Galder Zamarre?o wrote: >> >>> Agree with Wolf. Let's keep it simple by just providing extra >> >> configuration files for dev/unsecure envs. >> >>> >> >>> Cheers, >> >>> -- >> >>> Galder Zamarre?o >> >>> Infinispan, Red Hat >> >>> >> >>>> On 15 Apr 2017, at 12:57, Wolf Fink > >> > wrote: >> >>>> >> >>>> I would think a "switch" can have other impacts as you need to >> >> check it in the code - and might have security leaks here >> >>>> >> >>>> So what is wrong with some configurations which are the default >> >> and secured. >> >>>> and a "*-dev or *-unsecure" configuration to start easy. >> >>>> Also this can be used in production if there is no need for security >> >>>> >> >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >> >> > wrote: >> >>>> I still think it would be better to create an extra switch to >> >> run infinispan in "development mode". This means no authentication, >> >> no encryption, possibly with JGroups stack tuned for fast discovery >> >> (especially in Kubernetes) and a big warning saying "You are in >> >> development mode, do not use this in production". >> >>>> >> >>>> Just something very easy to get you going. >> >>>> >> >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >> >> > wrote: >> >>>> >> >>>> -- >> >>>> Galder Zamarre?o >> >>>> Infinispan, Red Hat >> >>>> >> >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >> >> > wrote: >> >>>>> >> >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >> >> > wrote: >> >>>>> Hi all, >> >>>>> >> >>>>> As per some discussions we had yesterday on IRC w/ Tristan, >> >> Gustavo and Sebastian, I've created a docker image snapshot that >> >> reverts the change stop protected caches from requiring security >> >> enabled [1]. >> >>>>> >> >>>>> In other words, I've removed [2]. The reason for temporarily >> >> doing that is because with the change as is, the changes required >> >> for a default server distro require that the entire cache manager's >> >> security is enabled. This is in turn creates a lot of problems with >> >> health and running checks used by Kubernetes/OpenShift amongst other >> >> things. >> >>>>> >> >>>>> Judging from our discussions on IRC, the idea is for such >> >> change to be present in 9.0.1, but I'd like to get final >> >> confirmation from Tristan et al. 
>> >>>>> >> >>>>> >> >>>>> +1 >> >>>>> >> >>>>> Regarding the "security by default" discussion, I think we >> >> should ship configurations cloud.xml, clustered.xml and >> >> standalone.xml with security enabled and disabled variants, and let >> >> users >> >>>>> decide which one to pick based on the use case. >> >>>> >> >>>> I think that's a better idea. >> >>>> >> >>>> We could by default have a secured one, but switching to an >> >> insecure configuration should be doable with minimal effort, e.g. >> >> just switching config file. >> >>>> >> >>>> As highlighted above, any secured configuration should work >> >> out-of-the-box with our docker images, e.g. WRT healthy/running checks. >> >>>> >> >>>> Cheers, >> >>>> >> >>>>> >> >>>>> Gustavo. >> >>>>> >> >>>>> >> >>>>> Cheers, >> >>>>> >> >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >> >> (9.0.1-SNAPSHOT tag for anyone interested) >> >>>>> [2] >> >> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 >> >>>>> -- >> >>>>> Galder Zamarre?o >> >>>>> Infinispan, Red Hat >> >>>>> >> >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant > >> > wrote: >> >>>>>> >> >>>>>> Dear all, >> >>>>>> >> >>>>>> after a mini chat on IRC, I wanted to bring this to >> >> everybody's attention. >> >>>>>> >> >>>>>> We should make the Hot Rod endpoint require authentication in the >> >>>>>> out-of-the-box configuration. >> >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >> >>>>>> mechanism against the ApplicationRealm and require users to >> >> run the >> >>>>>> add-user script. >> >>>>>> This would achieve two goals: >> >>>>>> - secure out-of-the-box configuration, which is always a good idea >> >>>>>> - access to the "protected" schema and script caches which is >> >> prevented >> >>>>>> when not on loopback on non-authenticated endpoints. 
>> >>>>>> >> >>>>>> Tristan >> >>>>>> -- >> >>>>>> Tristan Tarrant >> >>>>>> Infinispan Lead >> >>>>>> JBoss, a division of Red Hat >> >>>>>> _______________________________________________ >> >>>>>> infinispan-dev mailing list >> >>>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> -- >> >>>> SEBASTIAN ?ASKAWIEC >> >>>> INFINISPAN DEVELOPER >> >>>> Red Hat EMEA >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >> >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >> >> >> -- >> >> Tristan Tarrant >> >> Infinispan Lead >> >> JBoss, a division of Red Hat >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> -- >> >> >> >> SEBASTIAN?ASKAWIEC >> >> >> >> INFINISPAN DEVELOPER >> >> >> >> Red HatEMEA >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > >> > -- >> > Tristan Tarrant >> > Infinispan Lead >> > JBoss, a division of Red Hat >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170906/f70806c2/attachment-0001.html From wfink at redhat.com Wed Sep 6 03:48:16 2017 From: wfink at redhat.com (Wolf Fink) Date: Wed, 6 Sep 2017 09:48:16 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: <9A6EFFCD-7473-4FD4-A758-ED57D36A1F63@hibernate.org> References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> <9A6EFFCD-7473-4FD4-A758-ED57D36A1F63@hibernate.org> Message-ID: We have that discussion several times and I remember it years ago. What I've seen in onld EAP4 times is that user start a JBoss instance (which run with unsecure console by default) and secured everything they use, but forgot the console which they don't use. Finally that ends in an open door to manipulate the server or shut it down, you were able to search for the JBoss console start page and you find a lot of server in production mode. Because of this we should keep connections secured by default and find a balance to keep it easy to remove it. An extra example marked as unsecured and a documentation what the default is and how to make it unsecure or more secure would be good. Also default passwords are not good enough as everybody knows it and this is no security in my opinion (i.e. admin/admin ;) ) So a simple section in the "getting started" doc should make it clear and the user know that the security has been removed and the connections are public visible. my 2ct Wolf On Wed, Sep 6, 2017 at 7:54 AM, Emmanuel Bernard wrote: > When you deploy a RDBMS on OpenShift, it is secured by default I think and > it's not a big deal. It is possible to read back secrets in OpenShift in > case you forgot them. > Maybe there is a way to either document that better, maybe having the same > password between console and cache by default is easier ? > We live in a world where we need to take security from day one. Remember > OpenShift means you can start stuff super easily, some in the cloud, you > don't want it open by default. > But your input is super valuable, open a j'irai with that experience and > let's try and make it easier. > > On 5 Sep 2017, at 18:03, Katia Aresti wrote: > > Hi all, > > I came up to this thread now, at the time I did not understand what we > discussed here. > > Let me tell you my experience of today. > > I want to create a vert.x application, super simple, that connects to > infinispan-server. Everything in open-shift. > I take an vert-x example as base, super simple one, that deploys directly > a super simple web app with a simple command line. Works fine, I see the > "hello world" webapp in my laptop on the openshift and my navigator. > > I want this time to access to infinispan-server. I go into the interface, > and I say "hey, grab the jboss/infinispan-server" image from docker > registry. I try to go to the REST console, and oh, I need to authenticate. > I go back, and then I read that I need to put 2 env variables. So there it > is, now it works. > And then I want to go to the console, requires me to put again the > user/password. And it does not work. And I don't see how to disable > security. And I don't know what to do. And I'm like : why do I need > security at all here ? > I'm a lambda dev, in a lambda project, considering using infinispan-server > for the first time, and at this point, security of the cache I don't care > and I don't know why I have to deal with it. 
> > I know you acted a ticket and decided to have security on by default. IMHO > lambda dev creating the hello world with the docker image provided by > jboss... does not care about it. At all. He or she just wants to see > everything working as soon as possible. Once they decide to use infinispan, > if the cache needs to be secured, which seems to be recommended, they will > go into that point (devs and ops) and change the code and the configuration > to add the security level they need for their project. > > I wonder if you want to reconsider the "secured by default" point after my > experience. > > My 2 cents, > > Katia > > On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o > wrote: > >> Hi all, >> >> Tristan and I had chat yesterday and I've distilled the contents of the >> discussion and the feedback here into a JIRA [1]. The JIRA contains several >> subtasks to handle these aspects: >> >> 1. Remove auth check in server's CacheDecodeContext. >> 2. Default server configuration should require authentication in all >> entry points. >> 3. Provide an unauthenticated configuration that users can easily switch >> to. >> 4. Remove default username+passwords in docker image and instead show an >> info/warn message when these are not provided. >> 5. Add capability to pass in app user role groups to docker image easily, >> so that its easy to add authorization on top of the server. >> >> Cheers, >> >> [1] https://issues.jboss.org/browse/ISPN-7811 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >> > >> > That is caused by not wrapping the calls in PrivilegedActions in all the >> > correct places and is a bug. >> > >> > Tristan >> > >> > On 19/04/2017 11:34, Sebastian Laskawiec wrote: >> >> The proposal look ok to me. >> >> >> >> But I would also like to highlight one thing - it seems you can't >> access >> >> secured cache properties using CLI. This seems wrong to me (if you can >> >> invoke the cli, in 99,99% of the cases you have access to the machine, >> >> so you can do whatever you want). It also breaks healthchecks in Docker >> >> image. >> >> >> >> I would like to make sure we will address those concerns. >> >> >> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant > >> > wrote: >> >> >> >> Currently the "protected cache access" security is implemented as >> >> follows: >> >> >> >> - if authorization is enabled || client is on loopback >> >> allow >> >> >> >> The first check also implies that authentication needs to be in >> place, >> >> as the authorization checks need a valid Subject. >> >> >> >> Unfortunately authorization is very heavy-weight and actually >> overkill >> >> even for "normal" secure usage. >> >> >> >> My proposal is as follows: >> >> - the "default" configuration files are "secure" by default >> >> - provide clearly marked "unsecured" configuration files, which the >> user >> >> can use >> >> - drop the "protected cache" check completely >> >> >> >> And definitely NO to a dev switch. >> >> >> >> Tristan >> >> >> >> On 19/04/2017 10:05, Galder Zamarre?o wrote: >> >>> Agree with Wolf. Let's keep it simple by just providing extra >> >> configuration files for dev/unsecure envs. 
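As a sketch of what "just switching config file" could look like with the
server start script, assuming an unsecured variant were shipped next to the
default clustered.xml (the clustered-unsecured.xml name below is
hypothetical):

    # secured default
    bin/standalone.sh -c clustered.xml

    # clearly marked unsecured variant for development
    bin/standalone.sh -c clustered-unsecured.xml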
>> >>> >> >>> Cheers, >> >>> -- >> >>> Galder Zamarre?o >> >>> Infinispan, Red Hat >> >>> >> >>>> On 15 Apr 2017, at 12:57, Wolf Fink > >> > wrote: >> >>>> >> >>>> I would think a "switch" can have other impacts as you need to >> >> check it in the code - and might have security leaks here >> >>>> >> >>>> So what is wrong with some configurations which are the default >> >> and secured. >> >>>> and a "*-dev or *-unsecure" configuration to start easy. >> >>>> Also this can be used in production if there is no need for security >> >>>> >> >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >> >> > wrote: >> >>>> I still think it would be better to create an extra switch to >> >> run infinispan in "development mode". This means no authentication, >> >> no encryption, possibly with JGroups stack tuned for fast discovery >> >> (especially in Kubernetes) and a big warning saying "You are in >> >> development mode, do not use this in production". >> >>>> >> >>>> Just something very easy to get you going. >> >>>> >> >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >> >> > wrote: >> >>>> >> >>>> -- >> >>>> Galder Zamarre?o >> >>>> Infinispan, Red Hat >> >>>> >> >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >> >> > wrote: >> >>>>> >> >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >> >> > wrote: >> >>>>> Hi all, >> >>>>> >> >>>>> As per some discussions we had yesterday on IRC w/ Tristan, >> >> Gustavo and Sebastian, I've created a docker image snapshot that >> >> reverts the change stop protected caches from requiring security >> >> enabled [1]. >> >>>>> >> >>>>> In other words, I've removed [2]. The reason for temporarily >> >> doing that is because with the change as is, the changes required >> >> for a default server distro require that the entire cache manager's >> >> security is enabled. This is in turn creates a lot of problems with >> >> health and running checks used by Kubernetes/OpenShift amongst other >> >> things. >> >>>>> >> >>>>> Judging from our discussions on IRC, the idea is for such >> >> change to be present in 9.0.1, but I'd like to get final >> >> confirmation from Tristan et al. >> >>>>> >> >>>>> >> >>>>> +1 >> >>>>> >> >>>>> Regarding the "security by default" discussion, I think we >> >> should ship configurations cloud.xml, clustered.xml and >> >> standalone.xml with security enabled and disabled variants, and let >> >> users >> >>>>> decide which one to pick based on the use case. >> >>>> >> >>>> I think that's a better idea. >> >>>> >> >>>> We could by default have a secured one, but switching to an >> >> insecure configuration should be doable with minimal effort, e.g. >> >> just switching config file. >> >>>> >> >>>> As highlighted above, any secured configuration should work >> >> out-of-the-box with our docker images, e.g. WRT healthy/running >> checks. >> >>>> >> >>>> Cheers, >> >>>> >> >>>>> >> >>>>> Gustavo. >> >>>>> >> >>>>> >> >>>>> Cheers, >> >>>>> >> >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >> >> (9.0.1-SNAPSHOT tag for anyone interested) >> >>>>> [2] >> >> https://github.com/infinispan/infinispan/blob/master/server/ >> hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecod >> eContext.java#L114-L118 >> >>>>> -- >> >>>>> Galder Zamarre?o >> >>>>> Infinispan, Red Hat >> >>>>> >> >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant > >> > wrote: >> >>>>>> >> >>>>>> Dear all, >> >>>>>> >> >>>>>> after a mini chat on IRC, I wanted to bring this to >> >> everybody's attention. 
>> >>>>>> >> >>>>>> We should make the Hot Rod endpoint require authentication in the >> >>>>>> out-of-the-box configuration. >> >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >> >>>>>> mechanism against the ApplicationRealm and require users to >> >> run the >> >>>>>> add-user script. >> >>>>>> This would achieve two goals: >> >>>>>> - secure out-of-the-box configuration, which is always a good idea >> >>>>>> - access to the "protected" schema and script caches which is >> >> prevented >> >>>>>> when not on loopback on non-authenticated endpoints. >> >>>>>> >> >>>>>> Tristan >> >>>>>> -- >> >>>>>> Tristan Tarrant >> >>>>>> Infinispan Lead >> >>>>>> JBoss, a division of Red Hat >> >>>>>> _______________________________________________ >> >>>>>> infinispan-dev mailing list >> >>>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> -- >> >>>> SEBASTIAN ?ASKAWIEC >> >>>> INFINISPAN DEVELOPER >> >>>> Red Hat EMEA >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >> >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >> >> >> -- >> >> Tristan Tarrant >> >> Infinispan Lead >> >> JBoss, a division of Red Hat >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org > boss.org> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> -- >> >> >> >> SEBASTIAN?ASKAWIEC >> >> >> >> INFINISPAN DEVELOPER >> >> >> >> Red HatEMEA >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > >> > -- >> > Tristan Tarrant >> > Infinispan Lead >> > JBoss, a division of Red Hat >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > 
https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170906/8db8390f/attachment-0001.html From gustavo at infinispan.org Wed Sep 6 04:52:28 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Wed, 6 Sep 2017 09:52:28 +0100 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: Comments inlined On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: > > And then I want to go to the console, requires me to put again the > user/password. And it does not work. And I don't see how to disable > security. And I don't know what to do. And I'm like : why do I need > security at all here ? > The console credentials are specified with MGMT_USER/MGMT_PASS env variables, did you try those? It will not work for APP_USER/APP_PASS. > I wonder if you want to reconsider the "secured by default" point after my > experience. > The outcome of the discussion is that the clustered.xml will be secured by default, but you should be able to launch a container without any security by simply passing an alternate xml in the startup, and we'll ship this XML with the server. Gustavo > > My 2 cents, > > Katia > > On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o > wrote: > >> Hi all, >> >> Tristan and I had chat yesterday and I've distilled the contents of the >> discussion and the feedback here into a JIRA [1]. The JIRA contains several >> subtasks to handle these aspects: >> >> 1. Remove auth check in server's CacheDecodeContext. >> 2. Default server configuration should require authentication in all >> entry points. >> 3. Provide an unauthenticated configuration that users can easily switch >> to. >> 4. Remove default username+passwords in docker image and instead show an >> info/warn message when these are not provided. >> 5. Add capability to pass in app user role groups to docker image easily, >> so that its easy to add authorization on top of the server. >> >> Cheers, >> >> [1] https://issues.jboss.org/browse/ISPN-7811 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >> > >> > That is caused by not wrapping the calls in PrivilegedActions in all the >> > correct places and is a bug. >> > >> > Tristan >> > >> > On 19/04/2017 11:34, Sebastian Laskawiec wrote: >> >> The proposal look ok to me. >> >> >> >> But I would also like to highlight one thing - it seems you can't >> access >> >> secured cache properties using CLI. This seems wrong to me (if you can >> >> invoke the cli, in 99,99% of the cases you have access to the machine, >> >> so you can do whatever you want). It also breaks healthchecks in Docker >> >> image. >> >> >> >> I would like to make sure we will address those concerns. 
>> >> >> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant > >> > wrote: >> >> >> >> Currently the "protected cache access" security is implemented as >> >> follows: >> >> >> >> - if authorization is enabled || client is on loopback >> >> allow >> >> >> >> The first check also implies that authentication needs to be in >> place, >> >> as the authorization checks need a valid Subject. >> >> >> >> Unfortunately authorization is very heavy-weight and actually >> overkill >> >> even for "normal" secure usage. >> >> >> >> My proposal is as follows: >> >> - the "default" configuration files are "secure" by default >> >> - provide clearly marked "unsecured" configuration files, which the >> user >> >> can use >> >> - drop the "protected cache" check completely >> >> >> >> And definitely NO to a dev switch. >> >> >> >> Tristan >> >> >> >> On 19/04/2017 10:05, Galder Zamarre?o wrote: >> >>> Agree with Wolf. Let's keep it simple by just providing extra >> >> configuration files for dev/unsecure envs. >> >>> >> >>> Cheers, >> >>> -- >> >>> Galder Zamarre?o >> >>> Infinispan, Red Hat >> >>> >> >>>> On 15 Apr 2017, at 12:57, Wolf Fink > >> > wrote: >> >>>> >> >>>> I would think a "switch" can have other impacts as you need to >> >> check it in the code - and might have security leaks here >> >>>> >> >>>> So what is wrong with some configurations which are the default >> >> and secured. >> >>>> and a "*-dev or *-unsecure" configuration to start easy. >> >>>> Also this can be used in production if there is no need for security >> >>>> >> >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >> >> > wrote: >> >>>> I still think it would be better to create an extra switch to >> >> run infinispan in "development mode". This means no authentication, >> >> no encryption, possibly with JGroups stack tuned for fast discovery >> >> (especially in Kubernetes) and a big warning saying "You are in >> >> development mode, do not use this in production". >> >>>> >> >>>> Just something very easy to get you going. >> >>>> >> >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >> >> > wrote: >> >>>> >> >>>> -- >> >>>> Galder Zamarre?o >> >>>> Infinispan, Red Hat >> >>>> >> >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >> >> > wrote: >> >>>>> >> >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >> >> > wrote: >> >>>>> Hi all, >> >>>>> >> >>>>> As per some discussions we had yesterday on IRC w/ Tristan, >> >> Gustavo and Sebastian, I've created a docker image snapshot that >> >> reverts the change stop protected caches from requiring security >> >> enabled [1]. >> >>>>> >> >>>>> In other words, I've removed [2]. The reason for temporarily >> >> doing that is because with the change as is, the changes required >> >> for a default server distro require that the entire cache manager's >> >> security is enabled. This is in turn creates a lot of problems with >> >> health and running checks used by Kubernetes/OpenShift amongst other >> >> things. >> >>>>> >> >>>>> Judging from our discussions on IRC, the idea is for such >> >> change to be present in 9.0.1, but I'd like to get final >> >> confirmation from Tristan et al. >> >>>>> >> >>>>> >> >>>>> +1 >> >>>>> >> >>>>> Regarding the "security by default" discussion, I think we >> >> should ship configurations cloud.xml, clustered.xml and >> >> standalone.xml with security enabled and disabled variants, and let >> >> users >> >>>>> decide which one to pick based on the use case. >> >>>> >> >>>> I think that's a better idea. 
>> >>>> >> >>>> We could by default have a secured one, but switching to an >> >> insecure configuration should be doable with minimal effort, e.g. >> >> just switching config file. >> >>>> >> >>>> As highlighted above, any secured configuration should work >> >> out-of-the-box with our docker images, e.g. WRT healthy/running >> checks. >> >>>> >> >>>> Cheers, >> >>>> >> >>>>> >> >>>>> Gustavo. >> >>>>> >> >>>>> >> >>>>> Cheers, >> >>>>> >> >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >> >> (9.0.1-SNAPSHOT tag for anyone interested) >> >>>>> [2] >> >> https://github.com/infinispan/infinispan/blob/master/server/ >> hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecod >> eContext.java#L114-L118 >> >>>>> -- >> >>>>> Galder Zamarre?o >> >>>>> Infinispan, Red Hat >> >>>>> >> >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant > >> > wrote: >> >>>>>> >> >>>>>> Dear all, >> >>>>>> >> >>>>>> after a mini chat on IRC, I wanted to bring this to >> >> everybody's attention. >> >>>>>> >> >>>>>> We should make the Hot Rod endpoint require authentication in the >> >>>>>> out-of-the-box configuration. >> >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >> >>>>>> mechanism against the ApplicationRealm and require users to >> >> run the >> >>>>>> add-user script. >> >>>>>> This would achieve two goals: >> >>>>>> - secure out-of-the-box configuration, which is always a good idea >> >>>>>> - access to the "protected" schema and script caches which is >> >> prevented >> >>>>>> when not on loopback on non-authenticated endpoints. >> >>>>>> >> >>>>>> Tristan >> >>>>>> -- >> >>>>>> Tristan Tarrant >> >>>>>> Infinispan Lead >> >>>>>> JBoss, a division of Red Hat >> >>>>>> _______________________________________________ >> >>>>>> infinispan-dev mailing list >> >>>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>>> >> >>>>> _______________________________________________ >> >>>>> infinispan-dev mailing list >> >>>>> infinispan-dev at lists.jboss.org >> >> >> >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> -- >> >>>> SEBASTIAN ?ASKAWIEC >> >>>> INFINISPAN DEVELOPER >> >>>> Red Hat EMEA >> >>>> >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>>> >> >>>> _______________________________________________ >> >>>> infinispan-dev mailing list >> >>>> infinispan-dev at lists.jboss.org >> >> >> >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >> >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >>> >> >> >> >> -- >> >> Tristan Tarrant >> >> Infinispan Lead >> >> JBoss, a division of Red Hat >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at 
lists.jboss.org > boss.org> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> -- >> >> >> >> SEBASTIAN?ASKAWIEC >> >> >> >> INFINISPAN DEVELOPER >> >> >> >> Red HatEMEA >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> > >> > -- >> > Tristan Tarrant >> > Infinispan Lead >> > JBoss, a division of Red Hat >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170906/abcf8f8b/attachment-0001.html From karesti at redhat.com Wed Sep 6 05:03:55 2017 From: karesti at redhat.com (Katia Aresti) Date: Wed, 6 Sep 2017 11:03:55 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: @Emmanuel, sure it't not a big deal, but starting fast and smooth without any trouble helps adoption. Concerning the ticket, there is already one that was acted. I can work on that, even if is assigned to Galder now. @Gustavo Yes, as I read - better - now on the security part, it is said for the console that we need those. My head skipped that paragraph or I read that badly, and I was wondering if it was more something related to "roles" rather than a user. My bad, because I read too fast sometimes and skip things ! Maybe the paragraph of the security in the console should be moved down to the console part, which is small to read now ? When I read there "see the security part bellow" I was like : ok, the security is done !! :) Thank you for your replies ! Katia On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes wrote: > Comments inlined > > On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: >> >> And then I want to go to the console, requires me to put again the >> user/password. And it does not work. And I don't see how to disable >> security. And I don't know what to do. And I'm like : why do I need >> security at all here ? >> > > > The console credentials are specified with MGMT_USER/MGMT_PASS env > variables, did you try those? It will not work for APP_USER/APP_PASS. > > > >> I wonder if you want to reconsider the "secured by default" point after >> my experience. >> > > > The outcome of the discussion is that the clustered.xml will be secured by > default, but you should be able to launch a container without any security > by simply passing an alternate xml in the startup, and we'll ship this XML > with the server. > > > Gustavo > > >> >> My 2 cents, >> >> Katia >> >> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o >> wrote: >> >>> Hi all, >>> >>> Tristan and I had chat yesterday and I've distilled the contents of the >>> discussion and the feedback here into a JIRA [1]. 
The JIRA contains several >>> subtasks to handle these aspects: >>> >>> 1. Remove auth check in server's CacheDecodeContext. >>> 2. Default server configuration should require authentication in all >>> entry points. >>> 3. Provide an unauthenticated configuration that users can easily switch >>> to. >>> 4. Remove default username+passwords in docker image and instead show an >>> info/warn message when these are not provided. >>> 5. Add capability to pass in app user role groups to docker image >>> easily, so that its easy to add authorization on top of the server. >>> >>> Cheers, >>> >>> [1] https://issues.jboss.org/browse/ISPN-7811 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: >>> > >>> > That is caused by not wrapping the calls in PrivilegedActions in all >>> the >>> > correct places and is a bug. >>> > >>> > Tristan >>> > >>> > On 19/04/2017 11:34, Sebastian Laskawiec wrote: >>> >> The proposal look ok to me. >>> >> >>> >> But I would also like to highlight one thing - it seems you can't >>> access >>> >> secured cache properties using CLI. This seems wrong to me (if you can >>> >> invoke the cli, in 99,99% of the cases you have access to the machine, >>> >> so you can do whatever you want). It also breaks healthchecks in >>> Docker >>> >> image. >>> >> >>> >> I would like to make sure we will address those concerns. >>> >> >>> >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >> >> > wrote: >>> >> >>> >> Currently the "protected cache access" security is implemented as >>> >> follows: >>> >> >>> >> - if authorization is enabled || client is on loopback >>> >> allow >>> >> >>> >> The first check also implies that authentication needs to be in >>> place, >>> >> as the authorization checks need a valid Subject. >>> >> >>> >> Unfortunately authorization is very heavy-weight and actually >>> overkill >>> >> even for "normal" secure usage. >>> >> >>> >> My proposal is as follows: >>> >> - the "default" configuration files are "secure" by default >>> >> - provide clearly marked "unsecured" configuration files, which >>> the user >>> >> can use >>> >> - drop the "protected cache" check completely >>> >> >>> >> And definitely NO to a dev switch. >>> >> >>> >> Tristan >>> >> >>> >> On 19/04/2017 10:05, Galder Zamarre?o wrote: >>> >>> Agree with Wolf. Let's keep it simple by just providing extra >>> >> configuration files for dev/unsecure envs. >>> >>> >>> >>> Cheers, >>> >>> -- >>> >>> Galder Zamarre?o >>> >>> Infinispan, Red Hat >>> >>> >>> >>>> On 15 Apr 2017, at 12:57, Wolf Fink >> >> > wrote: >>> >>>> >>> >>>> I would think a "switch" can have other impacts as you need to >>> >> check it in the code - and might have security leaks here >>> >>>> >>> >>>> So what is wrong with some configurations which are the default >>> >> and secured. >>> >>>> and a "*-dev or *-unsecure" configuration to start easy. >>> >>>> Also this can be used in production if there is no need for security >>> >>>> >>> >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec >>> >> > wrote: >>> >>>> I still think it would be better to create an extra switch to >>> >> run infinispan in "development mode". This means no authentication, >>> >> no encryption, possibly with JGroups stack tuned for fast discovery >>> >> (especially in Kubernetes) and a big warning saying "You are in >>> >> development mode, do not use this in production". >>> >>>> >>> >>>> Just something very easy to get you going. 
>>> >>>> >>> >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o >>> >> > wrote: >>> >>>> >>> >>>> -- >>> >>>> Galder Zamarre?o >>> >>>> Infinispan, Red Hat >>> >>>> >>> >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes >>> >> > wrote: >>> >>>>> >>> >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o >>> >> > wrote: >>> >>>>> Hi all, >>> >>>>> >>> >>>>> As per some discussions we had yesterday on IRC w/ Tristan, >>> >> Gustavo and Sebastian, I've created a docker image snapshot that >>> >> reverts the change stop protected caches from requiring security >>> >> enabled [1]. >>> >>>>> >>> >>>>> In other words, I've removed [2]. The reason for temporarily >>> >> doing that is because with the change as is, the changes required >>> >> for a default server distro require that the entire cache manager's >>> >> security is enabled. This is in turn creates a lot of problems with >>> >> health and running checks used by Kubernetes/OpenShift amongst >>> other >>> >> things. >>> >>>>> >>> >>>>> Judging from our discussions on IRC, the idea is for such >>> >> change to be present in 9.0.1, but I'd like to get final >>> >> confirmation from Tristan et al. >>> >>>>> >>> >>>>> >>> >>>>> +1 >>> >>>>> >>> >>>>> Regarding the "security by default" discussion, I think we >>> >> should ship configurations cloud.xml, clustered.xml and >>> >> standalone.xml with security enabled and disabled variants, and let >>> >> users >>> >>>>> decide which one to pick based on the use case. >>> >>>> >>> >>>> I think that's a better idea. >>> >>>> >>> >>>> We could by default have a secured one, but switching to an >>> >> insecure configuration should be doable with minimal effort, e.g. >>> >> just switching config file. >>> >>>> >>> >>>> As highlighted above, any secured configuration should work >>> >> out-of-the-box with our docker images, e.g. WRT healthy/running >>> checks. >>> >>>> >>> >>>> Cheers, >>> >>>> >>> >>>>> >>> >>>>> Gustavo. >>> >>>>> >>> >>>>> >>> >>>>> Cheers, >>> >>>>> >>> >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ >>> >> (9.0.1-SNAPSHOT tag for anyone interested) >>> >>>>> [2] >>> >> https://github.com/infinispan/infinispan/blob/master/server/ >>> hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecod >>> eContext.java#L114-L118 >>> >>>>> -- >>> >>>>> Galder Zamarre?o >>> >>>>> Infinispan, Red Hat >>> >>>>> >>> >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >> >> > wrote: >>> >>>>>> >>> >>>>>> Dear all, >>> >>>>>> >>> >>>>>> after a mini chat on IRC, I wanted to bring this to >>> >> everybody's attention. >>> >>>>>> >>> >>>>>> We should make the Hot Rod endpoint require authentication in the >>> >>>>>> out-of-the-box configuration. >>> >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL >>> >>>>>> mechanism against the ApplicationRealm and require users to >>> >> run the >>> >>>>>> add-user script. >>> >>>>>> This would achieve two goals: >>> >>>>>> - secure out-of-the-box configuration, which is always a good idea >>> >>>>>> - access to the "protected" schema and script caches which is >>> >> prevented >>> >>>>>> when not on loopback on non-authenticated endpoints. 
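For client authors following this thread: once an endpoint is secured along those lines, a Hot Rod client has to authenticate with a SASL mechanism. A rough sketch is below -- the host, port, credentials, realm and server name are placeholders, and the exact mechanism depends on the configuration that finally ships:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class SecuredHotRodClient {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
                .host("127.0.0.1")               // placeholder address of the secured server
                .port(11222)
             .security().authentication()
                .enable()
                .saslMechanism("DIGEST-MD5")      // or PLAIN, as discussed above
                .serverName("hotrod")             // must match the server-name configured on the endpoint
                .realm("ApplicationRealm")
                .username("appuser")              // a user created with the add-user script
                .password("changeme".toCharArray());
      RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());
      try {
         RemoteCache<String, String> cache = remoteCacheManager.getCache();
         cache.put("hello", "world");
         System.out.println(cache.get("hello"));
      } finally {
         remoteCacheManager.stop();
      }
   }
}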
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170906/91f038ab/attachment-0001.html From galder at redhat.com Wed Sep 6 10:51:12 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 6 Sep 2017 16:51:12 +0200 Subject: [infinispan-dev] Join us online on 7th September for DevNation talk on Infinispan! Message-ID: Hi all, I will be doing an live tech talk for DevNation tomorrow, 7th September at 12:00pm. More details here: http://blog.infinispan.org/2017/09/join-us-online-on-7th-september-for.html Cheers, -- Galder Zamarre?o Infinispan, Red Hat From rory.odonnell at oracle.com Thu Sep 7 05:30:57 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Thu, 7 Sep 2017 10:30:57 +0100 Subject: [infinispan-dev] Moving Java Forward Faster Message-ID: <01cafe16-aa57-3e20-b897-bbb45b5ba655@oracle.com> Hi Galder, Oracle is proposing a rapid release model for Java SE going-forward. The high points are highlighted below, details of the changes can be found on Mark Reinhold?s blog [1] , OpenJDK discussion email list [2]. Under the proposed release model, after JDK 9, we will adopt a strict, time-based model with a new major release every six months, update releases every quarter, and a long-term support release every three years. The new JDK Project will run a bit differently than the past "JDK $N" Projects: - The main development line will always be open but fixes, enhancements, and features will be merged only when they're nearly finished. The main line will be Feature Complete [3] at all times. - We'll continue to use the JEP Process [4] for new features and other significant changes. The bar to target a JEP to a specific release will, however, be higher since the work must be Feature Complete in order to go in. Owners of large or risky features will be strongly encouraged to split such features up into smaller and safer parts, to integrate earlier in the release cycle, and to publish separate lines of early-access builds prior to integration. The JDK Updates Project will run in much the same way as the past "JDK $N" Updates Projects, though update releases will be strictly limited to fixes of security issues, regressions, and bugs in newer features. Related to this proposal, we intend to make a few changes in what we do: - Starting with JDK 9 we'll ship OpenJDK builds under the GPL [5], to make it easier for developers to deploy Java applications to cloud environments. We'll initially publish OpenJDK builds for Linux/x64, followed later by builds for macOS/x64 and Windows/x64. - We'll continue to ship proprietary "Oracle JDK" builds, which include "commercial features" [6] such as Java Flight Recorder and Mission Control [7], under a click-through binary-code license [8]. Oracle will continue to offer paid support for these builds. - After JDK 9 we'll open-source the commercial features in order to make the OpenJDK builds more attractive to developers and to reduce the differences between those builds and the Oracle JDK. This will take some time, but the ultimate goal is to make OpenJDK and Oracle JDK builds completely interchangeable. - Finally, for the long term we'll work with other OpenJDK contributors to establish an open build-and-test infrastructure. This will make it easier to publish early-access builds for features in development, and eventually make it possible for the OpenJDK Community itself to publish authoritative builds of the JDK. 
Questions , comments, feedback to OpenJDK discuss mailing list [2] Rgds,Rory [1]https://mreinhold.org/blog/forward-faster [2]http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html [3]http://openjdk.java.net/projects/jdk8/milestones#Feature_Complete [4]http://openjdk.java.net/jeps/0 [5]http://openjdk.java.net/legal/gplv2+ce.html [6]http://www.oracle.com/technetwork/java/javase/terms/products/index.html [7]http://www.oracle.com/technetwork/java/javaseproducts/mission-control/index.html [8]http://www.oracle.com/technetwork/java/javase/terms/license/index.html From slaskawi at redhat.com Fri Sep 8 03:04:50 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 08 Sep 2017 07:04:50 +0000 Subject: [infinispan-dev] How about moving Infinispan forums to StackOverflow? Message-ID: Hey guys, I'm pretty sure you have seen: https://developer.jboss.org/thread/275956 How about moving Infinispan questions too? Thanks, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170908/e45a4fbc/attachment.html From rvansa at redhat.com Fri Sep 8 03:17:36 2017 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 8 Sep 2017 09:17:36 +0200 Subject: [infinispan-dev] How about moving Infinispan forums to StackOverflow? In-Reply-To: References: Message-ID: While I regularly respond to SO, It seems to me that in many cases on the forum we have longer threads than Q -> A. While you can simulate this on SO using comments/edits, it's not suitable for everything. I monitor both places through mail client, so it doesn't pose much overhead but occasional crosslink of duplicate questions. R. On 09/08/2017 09:04 AM, Sebastian Laskawiec wrote: > Hey guys, > > I'm pretty sure you have seen: https://developer.jboss.org/thread/275956 > > How about moving Infinispan questions too? > > Thanks, > Sebastian > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From ttarrant at redhat.com Fri Sep 8 03:51:34 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 8 Sep 2017 09:51:34 +0200 Subject: [infinispan-dev] How about moving Infinispan forums to StackOverflow? In-Reply-To: References: Message-ID: Yes, I think it would be a good idea. I've seen a number of users post in both places, but SO is definitely more discoverable by the wider community and has a lower barrier to entry. Tristan On 9/8/17 9:04 AM, Sebastian Laskawiec wrote: > Hey guys, > > I'm pretty sure you have seen: https://developer.jboss.org/thread/275956 > > How about moving Infinispan questions too? 
> > Thanks, > Sebastian > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Tue Sep 12 04:37:37 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 12 Sep 2017 10:37:37 +0200 Subject: [infinispan-dev] Weekly Infninispan IRC Meeting Logs 2017-09-11 Message-ID: <0c3365b4-2bc7-5eef-5837-751c35ed92d6@redhat.com> Dear all, the logs for yesterday's IRC meeting are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-09-11-14.00.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Thu Sep 14 05:43:18 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 14 Sep 2017 11:43:18 +0200 Subject: [infinispan-dev] DevNation Live talk - Big Data In Action w/ Infinispan Message-ID: Hi, Last week I gave a 30m talk for DevNation Live on Big Data In Action w/ Infinispan. The video can be found here: https://www.youtube.com/watch?v=ZUZeAfdmeX0 Slides: https://speakerdeck.com/galderz/big-data-in-action-with-infinispan-2 Cheers, -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Thu Sep 14 11:45:47 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 14 Sep 2017 17:45:47 +0200 Subject: [infinispan-dev] Hot Rod secured by default In-Reply-To: References: <78C8F389-2EBA-4E2F-8EFE-CDAAAD65F55D@redhat.com> <20264f25-5a92-f6b9-e8f9-a91d822b5c8f@redhat.com> <913ec0fd-46dd-57e5-0eee-cb190066ed2e@redhat.com> Message-ID: <87659D4A-0085-436B-897B-802B8E3DAB3F@redhat.com> Gustavo's reply was the agreement reached. Secured by default and an easy way to use it unsecured is the best middle ground IMO. So, we've done the securing part partially, which needs to be completed by [2] (currently assigned to Tristan). More importantly, we also need to complete [3] so that we have ship the unsecured configuration, and then show people how to use that (docus, examples...etc). If you want to help, taking ownership of [3] would be best. Cheers, [2] https://issues.jboss.org/browse/ISPN-7815 [3] https://issues.jboss.org/browse/ISPN-7818 > On 6 Sep 2017, at 11:03, Katia Aresti wrote: > > @Emmanuel, sure it't not a big deal, but starting fast and smooth without any trouble helps adoption. Concerning the ticket, there is already one that was acted. I can work on that, even if is assigned to Galder now. > > @Gustavo > Yes, as I read - better - now on the security part, it is said for the console that we need those. My head skipped that paragraph or I read that badly, and I was wondering if it was more something related to "roles" rather than a user. My bad, because I read too fast sometimes and skip things ! Maybe the paragraph of the security in the console should be moved down to the console part, which is small to read now ? When I read there "see the security part bellow" I was like : ok, the security is done !! :) > > Thank you for your replies ! > > Katia > > > On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes wrote: > Comments inlined > > On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti wrote: > And then I want to go to the console, requires me to put again the user/password. And it does not work. And I don't see how to disable security. And I don't know what to do. And I'm like : why do I need security at all here ? 
> > > The console credentials are specified with MGMT_USER/MGMT_PASS env variables, did you try those? It will not work for APP_USER/APP_PASS. > > > I wonder if you want to reconsider the "secured by default" point after my experience. > > > The outcome of the discussion is that the clustered.xml will be secured by default, but you should be able to launch a container without any security by simply passing an alternate xml in the startup, and we'll ship this XML with the server. > > > Gustavo > > > My 2 cents, > > Katia > > On Tue, May 9, 2017 at 2:24 PM, Galder Zamarre?o wrote: > Hi all, > > Tristan and I had chat yesterday and I've distilled the contents of the discussion and the feedback here into a JIRA [1]. The JIRA contains several subtasks to handle these aspects: > > 1. Remove auth check in server's CacheDecodeContext. > 2. Default server configuration should require authentication in all entry points. > 3. Provide an unauthenticated configuration that users can easily switch to. > 4. Remove default username+passwords in docker image and instead show an info/warn message when these are not provided. > 5. Add capability to pass in app user role groups to docker image easily, so that its easy to add authorization on top of the server. > > Cheers, > > [1] https://issues.jboss.org/browse/ISPN-7811 > -- > Galder Zamarre?o > Infinispan, Red Hat > > > On 19 Apr 2017, at 12:04, Tristan Tarrant wrote: > > > > That is caused by not wrapping the calls in PrivilegedActions in all the > > correct places and is a bug. > > > > Tristan > > > > On 19/04/2017 11:34, Sebastian Laskawiec wrote: > >> The proposal look ok to me. > >> > >> But I would also like to highlight one thing - it seems you can't access > >> secured cache properties using CLI. This seems wrong to me (if you can > >> invoke the cli, in 99,99% of the cases you have access to the machine, > >> so you can do whatever you want). It also breaks healthchecks in Docker > >> image. > >> > >> I would like to make sure we will address those concerns. > >> > >> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant >> > wrote: > >> > >> Currently the "protected cache access" security is implemented as > >> follows: > >> > >> - if authorization is enabled || client is on loopback > >> allow > >> > >> The first check also implies that authentication needs to be in place, > >> as the authorization checks need a valid Subject. > >> > >> Unfortunately authorization is very heavy-weight and actually overkill > >> even for "normal" secure usage. > >> > >> My proposal is as follows: > >> - the "default" configuration files are "secure" by default > >> - provide clearly marked "unsecured" configuration files, which the user > >> can use > >> - drop the "protected cache" check completely > >> > >> And definitely NO to a dev switch. > >> > >> Tristan > >> > >> On 19/04/2017 10:05, Galder Zamarre?o wrote: > >>> Agree with Wolf. Let's keep it simple by just providing extra > >> configuration files for dev/unsecure envs. > >>> > >>> Cheers, > >>> -- > >>> Galder Zamarre?o > >>> Infinispan, Red Hat > >>> > >>>> On 15 Apr 2017, at 12:57, Wolf Fink >> > wrote: > >>>> > >>>> I would think a "switch" can have other impacts as you need to > >> check it in the code - and might have security leaks here > >>>> > >>>> So what is wrong with some configurations which are the default > >> and secured. > >>>> and a "*-dev or *-unsecure" configuration to start easy. 
> >>>> Also this can be used in production if there is no need for security > >>>> > >>>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec > >> > wrote: > >>>> I still think it would be better to create an extra switch to > >> run infinispan in "development mode". This means no authentication, > >> no encryption, possibly with JGroups stack tuned for fast discovery > >> (especially in Kubernetes) and a big warning saying "You are in > >> development mode, do not use this in production". > >>>> > >>>> Just something very easy to get you going. > >>>> > >>>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarre?o > >> > wrote: > >>>> > >>>> -- > >>>> Galder Zamarre?o > >>>> Infinispan, Red Hat > >>>> > >>>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes > >> > wrote: > >>>>> > >>>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarre?o > >> > wrote: > >>>>> Hi all, > >>>>> > >>>>> As per some discussions we had yesterday on IRC w/ Tristan, > >> Gustavo and Sebastian, I've created a docker image snapshot that > >> reverts the change stop protected caches from requiring security > >> enabled [1]. > >>>>> > >>>>> In other words, I've removed [2]. The reason for temporarily > >> doing that is because with the change as is, the changes required > >> for a default server distro require that the entire cache manager's > >> security is enabled. This is in turn creates a lot of problems with > >> health and running checks used by Kubernetes/OpenShift amongst other > >> things. > >>>>> > >>>>> Judging from our discussions on IRC, the idea is for such > >> change to be present in 9.0.1, but I'd like to get final > >> confirmation from Tristan et al. > >>>>> > >>>>> > >>>>> +1 > >>>>> > >>>>> Regarding the "security by default" discussion, I think we > >> should ship configurations cloud.xml, clustered.xml and > >> standalone.xml with security enabled and disabled variants, and let > >> users > >>>>> decide which one to pick based on the use case. > >>>> > >>>> I think that's a better idea. > >>>> > >>>> We could by default have a secured one, but switching to an > >> insecure configuration should be doable with minimal effort, e.g. > >> just switching config file. > >>>> > >>>> As highlighted above, any secured configuration should work > >> out-of-the-box with our docker images, e.g. WRT healthy/running checks. > >>>> > >>>> Cheers, > >>>> > >>>>> > >>>>> Gustavo. > >>>>> > >>>>> > >>>>> Cheers, > >>>>> > >>>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ > >> (9.0.1-SNAPSHOT tag for anyone interested) > >>>>> [2] > >> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118 > >>>>> -- > >>>>> Galder Zamarre?o > >>>>> Infinispan, Red Hat > >>>>> > >>>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant >> > wrote: > >>>>>> > >>>>>> Dear all, > >>>>>> > >>>>>> after a mini chat on IRC, I wanted to bring this to > >> everybody's attention. > >>>>>> > >>>>>> We should make the Hot Rod endpoint require authentication in the > >>>>>> out-of-the-box configuration. > >>>>>> The proposal is to enable the PLAIN (or, preferably, DIGEST) SASL > >>>>>> mechanism against the ApplicationRealm and require users to > >> run the > >>>>>> add-user script. > >>>>>> This would achieve two goals: > >>>>>> - secure out-of-the-box configuration, which is always a good idea > >>>>>> - access to the "protected" schema and script caches which is > >> prevented > >>>>>> when not on loopback on non-authenticated endpoints. 
> >>>>>> > >>>>>> Tristan > >>>>>> -- > >>>>>> Tristan Tarrant > >>>>>> Infinispan Lead > >>>>>> JBoss, a division of Red Hat > >>>>>> _______________________________________________ > >>>>>> infinispan-dev mailing list > >>>>>> infinispan-dev at lists.jboss.org > >> > >>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >> > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >> > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> -- > >>>> SEBASTIAN ?ASKAWIEC > >>>> INFINISPAN DEVELOPER > >>>> Red Hat EMEA > >>>> > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >> > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >> > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >> > >> -- > >> Tristan Tarrant > >> Infinispan Lead > >> JBoss, a division of Red Hat > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> -- > >> > >> SEBASTIAN?ASKAWIEC > >> > >> INFINISPAN DEVELOPER > >> > >> Red HatEMEA > >> > >> > >> > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > > > -- > > Tristan Tarrant > > Infinispan Lead > > JBoss, a division of Red Hat > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Thu Sep 14 12:03:49 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 14 Sep 2017 18:03:49 +0200 Subject: [infinispan-dev] How about moving Infinispan forums to StackOverflow? 
In-Reply-To: References: Message-ID: Sounds like a good idea, and I've considered it for previous projects I worked on, but I remember having some downsides. I'd suggest checking with Mark Newton (at Red Hat). Cheers, > On 8 Sep 2017, at 09:51, Tristan Tarrant wrote: > > Yes, I think it would be a good idea. I've seen a number of users post > in both places, but SO is definitely more discoverable by the wider > community and has a lower barrier to entry. > > Tristan > > On 9/8/17 9:04 AM, Sebastian Laskawiec wrote: >> Hey guys, >> >> I'm pretty sure you have seen: https://developer.jboss.org/thread/275956 >> >> How about moving Infinispan questions too? >> >> Thanks, >> Sebastian >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Tue Sep 19 03:42:00 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 19 Sep 2017 09:42:00 +0200 Subject: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo? Message-ID: Hi, I was looking at the Infinispan OpenShift template repo [1], and I started questioning why this repo contains Infinispan configurations for the cloud [2]. Shouldn't these be part of the Infinispan Server distribution? Otherwise this repo is going to somehow versioned depending on the Infinispan version... Which lead me to think, should repo [1] exist at all? Why aren't all its contents part of infinispan/infinispan? The only reason that I could think for keeping a different repo is maybe if you want to version it according to different OpenShift versions, but that could easily be achieved in infinispan/infinispan with different folders. Cheers, [1] https://github.com/infinispan/infinispan-openshift-templates [2] https://github.com/infinispan/infinispan-openshift-templates/blob/master/configurations/cloud-ephemeral.xml -- Galder Zamarre?o Infinispan, Red Hat From ttarrant at redhat.com Tue Sep 19 04:33:11 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 19 Sep 2017 10:33:11 +0200 Subject: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo? In-Reply-To: References: Message-ID: <3d93675b-ade9-c7f7-54bd-deb50fe0667b@redhat.com> On 9/19/17 9:42 AM, Galder Zamarre?o wrote: > Hi, > > I was looking at the Infinispan OpenShift template repo [1], and I started questioning why this repo contains Infinispan configurations for the cloud [2]. Shouldn't these be part of the Infinispan Server distribution? Otherwise this repo is going to somehow versioned depending on the Infinispan version... > > Which lead me to think, should repo [1] exist at all? Why aren't all its contents part of infinispan/infinispan? The only reason that I could think for keeping a different repo is maybe if you want to version it according to different OpenShift versions, but that could easily be achieved in infinispan/infinispan with different folders. It was created separately because its release cycle can be much faster. Once things settle we can bring it in. 
Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Tue Sep 19 05:30:36 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 19 Sep 2017 09:30:36 +0000 Subject: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo? In-Reply-To: <3d93675b-ade9-c7f7-54bd-deb50fe0667b@redhat.com> References: <3d93675b-ade9-c7f7-54bd-deb50fe0667b@redhat.com> Message-ID: Hey Galder, That sounds like an interesting idea but let me give some more context and propose other options... So during the first iteration I wanted to create templates inside OpenShift Template Library [1]. However it turned out that this repo works in a very specific way - it pulls templates from other repositories and puts them in one, single place. According to my knowledge there are plans to use it OpenShift Online (I can tell you more offline). This is why I came up with a separate repository only for templates and image streams. When adding more and more features to the templates, my goal was to externalize configuration into a ConfigMap. This makes it very convenient for editing in OpenShift UI. The main problem is how to put it there? The easiest way was to hardcode it inside a template (and I decided to go that way). But a much more robust approach would be to spin up a small container (maybe an Init Container??) that would pull proper version of Infinispan and use Kubernetes REST API to create that ConfigMap on the fly. I'm not sure if putting templates into Infinispan repository would solve our problems. Although granted, we would have an easy access to configuration but still providing custom Docker image [2] (possibly with custom configuration) is something I expect to happen frequently. Also I'm not a big fan of putting many bits in a single repository. So having said that, I believe the proper way is to implement a small container (maybe an Init Container or just a script inside the same Docker image) responsible for unpacking desired Infinispan package and creating ConfigMap directly in Kubernetes. WDYT? Thanks, Sebastian [1] https://github.com/openshift/library [2] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L376 On Tue, Sep 19, 2017 at 10:34 AM Tristan Tarrant wrote: > On 9/19/17 9:42 AM, Galder Zamarre?o wrote: > > Hi, > > > > I was looking at the Infinispan OpenShift template repo [1], and I > started questioning why this repo contains Infinispan configurations for > the cloud [2]. Shouldn't these be part of the Infinispan Server > distribution? Otherwise this repo is going to somehow versioned depending > on the Infinispan version... > > > > Which lead me to think, should repo [1] exist at all? Why aren't all its > contents part of infinispan/infinispan? The only reason that I could think > for keeping a different repo is maybe if you want to version it according > to different OpenShift versions, but that could easily be achieved in > infinispan/infinispan with different folders. > > It was created separately because its release cycle can be much faster. > Once things settle we can bring it in. > > Tristan > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170919/540dc3d9/attachment.html From galder at redhat.com Tue Sep 19 10:03:41 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Tue, 19 Sep 2017 16:03:41 +0200 Subject: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo? In-Reply-To: References: <3d93675b-ade9-c7f7-54bd-deb50fe0667b@redhat.com> Message-ID: <347E9316-A57E-49A4-AA14-C40202484C62@redhat.com> That sounds like a good idea. My main worry with how things are right now is that the config will get outdated and you need to keep in check not only with version changes, but any default behaviour changes we make. I'm happy for it to be a temporary solution for now. Cheers, > On 19 Sep 2017, at 11:30, Sebastian Laskawiec wrote: > > Hey Galder, > > That sounds like an interesting idea but let me give some more context and propose other options... > > So during the first iteration I wanted to create templates inside OpenShift Template Library [1]. However it turned out that this repo works in a very specific way - it pulls templates from other repositories and puts them in one, single place. According to my knowledge there are plans to use it OpenShift Online (I can tell you more offline). > > This is why I came up with a separate repository only for templates and image streams. When adding more and more features to the templates, my goal was to externalize configuration into a ConfigMap. This makes it very convenient for editing in OpenShift UI. The main problem is how to put it there? The easiest way was to hardcode it inside a template (and I decided to go that way). But a much more robust approach would be to spin up a small container (maybe an Init Container??) that would pull proper version of Infinispan and use Kubernetes REST API to create that ConfigMap on the fly. > > I'm not sure if putting templates into Infinispan repository would solve our problems. Although granted, we would have an easy access to configuration but still providing custom Docker image [2] (possibly with custom configuration) is something I expect to happen frequently. Also I'm not a big fan of putting many bits in a single repository. > > So having said that, I believe the proper way is to implement a small container (maybe an Init Container or just a script inside the same Docker image) responsible for unpacking desired Infinispan package and creating ConfigMap directly in Kubernetes. > > WDYT? > > Thanks, > Sebastian > > [1] https://github.com/openshift/library > [2] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L376 > > On Tue, Sep 19, 2017 at 10:34 AM Tristan Tarrant wrote: > On 9/19/17 9:42 AM, Galder Zamarre?o wrote: > > Hi, > > > > I was looking at the Infinispan OpenShift template repo [1], and I started questioning why this repo contains Infinispan configurations for the cloud [2]. Shouldn't these be part of the Infinispan Server distribution? Otherwise this repo is going to somehow versioned depending on the Infinispan version... > > > > Which lead me to think, should repo [1] exist at all? Why aren't all its contents part of infinispan/infinispan? The only reason that I could think for keeping a different repo is maybe if you want to version it according to different OpenShift versions, but that could easily be achieved in infinispan/infinispan with different folders. > > It was created separately because its release cycle can be much faster. 
> Once things settle we can bring it in. > > Tristan > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From ttarrant at redhat.com Wed Sep 20 11:03:26 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 20 Sep 2017 17:03:26 +0200 Subject: [infinispan-dev] Infinispan 9.1.1.Final is out Message-ID: <0b29e771-671c-499e-8726-08cae35574f2@redhat.com> We have just released Infnispan 9.1.1.Final. Read about it here: http://blog.infinispan.org/2017/09/infinispan-911final-is-out.html Enjoy ! Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Wed Sep 20 11:15:05 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 20 Sep 2017 17:15:05 +0200 Subject: [infinispan-dev] Infinispan 9.2 schedule Message-ID: <9f298f3b-f890-1884-4928-5f3d5c63d843@redhat.com> With the release of 9.1.1.Final, and the delay it introduced, I have updated the 9.2.x schedule and roadmap. These are the expected release dates: 9.2.0.Alpha1 Oct 4th 9.2.0.Alpha2 Oct 18th 9.2.0.Beta1 Nov 1st 9.2.0.Beta2 Nov 15th (feature freeze) 9.2.0.CR1 Nov 29th (component upgrade freeze) 9.2.0.Final Dec 13th Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Wed Sep 20 15:17:23 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 20 Sep 2017 21:17:23 +0200 Subject: [infinispan-dev] Code examples in multiple languages Message-ID: One thing that I wish we had is the ability, when possible, to give code examples for our API in all of our implementations (embedded, hotrod java, c++, c#, node.js and REST). Currently each one handles documentation differently and we are not very consistent with structure, content and examples. I've been looking at Slate [1] which uses Markdown and is quite nice, but has the big disadvantage that it would create something which is separate from our current documentation... An alternative approach would be to implement an asciidoctor plugin which provides some kind of tabbed code block. Any other ideas ? Tristan [1] https://lord.github.io/slate/ -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From rory.odonnell at oracle.com Thu Sep 21 16:41:19 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Thu, 21 Sep 2017 21:41:19 +0100 Subject: [infinispan-dev] Release Announcement: General Availability of JDK 9 Message-ID: Hi Galder, Three items to share with you today * *JDK 9 General Availability * o GPL'd binaries from Oracle are available here: + http://jdk.java.net/9 o See Mark Reinhold's email for more details on the Release [1] + delivery of Project Jigsaw [2] * Are you JDK 9 Ready ? o The Quality Outreach wiki has been updated to include a JDK 9 Ready column. o If you would like us to identify your project as JDK 9 ready , please let me know and I will add it to the wiki. * Quality Outreach Report for September 2017**is available o many thanks for your continued support and welcome to the new projects! 
Rgds,Rory [1] http://mail.openjdk.java.net/pipermail/announce/2017-September/000230.html [2] https://mreinhold.org/blog/jigsaw-complete -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170921/4be52456/attachment.html From galder at redhat.com Fri Sep 22 08:32:51 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 22 Sep 2017 14:32:51 +0200 Subject: [infinispan-dev] Adjusting memory settings in template Message-ID: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Hi Sebastian, How do you change memory settings for Infinispan started via service catalog? The memory settings seem defined in [1], but this is not one of the parameters supported. I guess we want this as parameter? Cheers, [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 -- Galder Zamarre?o Infinispan, Red Hat From slaskawi at redhat.com Fri Sep 22 08:49:42 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 22 Sep 2017 12:49:42 +0000 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: It's very tricky... Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more). Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). Thanks, Sebastian [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o wrote: > Hi Sebastian, > > How do you change memory settings for Infinispan started via service > catalog? > > The memory settings seem defined in [1], but this is not one of the > parameters supported. > > I guess we want this as parameter? > > Cheers, > > [1] > https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > -- > Galder Zamarre?o > Infinispan, Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170922/34c895c4/attachment.html From sanne at infinispan.org Fri Sep 22 10:38:50 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 22 Sep 2017 15:38:50 +0100 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: On 22 September 2017 at 13:49, Sebastian Laskawiec wrote: > It's very tricky... > > Memory is adjusted automatically to the container size [1] (of course you > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, > that you can squeeze Infinispan much, much more). 
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > bustable memory category so if there is additional memory in the node, we'll > get it. But if not, we won't go below 512 MB (and 500 mCPU). I hope that's a temporary choice of the work in process? Doesn't sound acceptable to address real world requirements.. Infinispan expects users to estimate how much memory they will need - which is hard enough - and then we should at least be able to start a cluster to address the specified need. Being able to rely on 512MB only per node would require lots of nodes even for small data sets, leading to extreme resource waste as each node would consume some non negligible portion of memory just to run the thing. Thanks, Sanne > > Thanks, > Sebastian > > [1] > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > [2] > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > [4] > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o wrote: >> >> Hi Sebastian, >> >> How do you change memory settings for Infinispan started via service >> catalog? >> >> The memory settings seem defined in [1], but this is not one of the >> parameters supported. >> >> I guess we want this as parameter? >> >> Cheers, >> >> [1] >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From slaskawi at redhat.com Fri Sep 22 11:58:13 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 22 Sep 2017 15:58:13 +0000 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero wrote: > On 22 September 2017 at 13:49, Sebastian Laskawiec > wrote: > > It's very tricky... > > > > Memory is adjusted automatically to the container size [1] (of course you > > may override it by supplying Xmx or "-n" as parameters [2]). The safe > limit > > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, > > that you can squeeze Infinispan much, much more). > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > > bustable memory category so if there is additional memory in the node, > we'll > > get it. But if not, we won't go below 512 MB (and 500 mCPU). > > I hope that's a temporary choice of the work in process? > > Doesn't sound acceptable to address real world requirements.. > Infinispan expects users to estimate how much memory they will need - > which is hard enough - and then we should at least be able to start a > cluster to address the specified need. Being able to rely on 512MB > only per node would require lots of nodes even for small data sets, > leading to extreme resource waste as each node would consume some non > negligible portion of memory just to run the thing. > hmmm yeah - its finished. I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory? 
If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk... So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS). > Thanks, > Sanne > > > > > Thanks, > > Sebastian > > > > [1] > > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > [2] > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > [4] > > > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: > >> > >> Hi Sebastian, > >> > >> How do you change memory settings for Infinispan started via service > >> catalog? > >> > >> The memory settings seem defined in [1], but this is not one of the > >> parameters supported. > >> > >> I guess we want this as parameter? > >> > >> Cheers, > >> > >> [1] > >> > https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170922/0ca2228b/attachment-0001.html From galder at redhat.com Mon Sep 25 05:54:39 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 11:54:39 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise? I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes? To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve. Cheers, > On 22 Sep 2017, at 14:49, Sebastian Laskawiec wrote: > > It's very tricky... 
> > Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more). > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). > > Thanks, > Sebastian > > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o wrote: > Hi Sebastian, > > How do you change memory settings for Infinispan started via service catalog? > > The memory settings seem defined in [1], but this is not one of the parameters supported. > > I guess we want this as parameter? > > Cheers, > > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > -- > Galder Zamarre?o > Infinispan, Red Hat > -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Mon Sep 25 05:57:29 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 11:57:29 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: > On 22 Sep 2017, at 17:58, Sebastian Laskawiec wrote: > > > > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero wrote: > On 22 September 2017 at 13:49, Sebastian Laskawiec wrote: > > It's very tricky... > > > > Memory is adjusted automatically to the container size [1] (of course you > > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit > > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, > > that you can squeeze Infinispan much, much more). > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > > bustable memory category so if there is additional memory in the node, we'll > > get it. But if not, we won't go below 512 MB (and 500 mCPU). > > I hope that's a temporary choice of the work in process? > > Doesn't sound acceptable to address real world requirements.. > Infinispan expects users to estimate how much memory they will need - > which is hard enough - and then we should at least be able to start a > cluster to address the specified need. Being able to rely on 512MB > only per node would require lots of nodes even for small data sets, > leading to extreme resource waste as each node would consume some non > negligible portion of memory just to run the thing. > > hmmm yeah - its finished. > > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory? > > If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). 
That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk... > > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS). Try put it this way: I've just started an Infinispan ephermeral instance and trying to load some data and it's running out of memory. What knobs/settings does the template offer to make sure I have a big enough Infinispan instance(s) to handle my data? (Don't reply with: make your data smaller) Cheers, > > > Thanks, > Sanne > > > > > Thanks, > > Sebastian > > > > [1] > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > [2] > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > [4] > > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o wrote: > >> > >> Hi Sebastian, > >> > >> How do you change memory settings for Infinispan started via service > >> catalog? > >> > >> The memory settings seem defined in [1], but this is not one of the > >> parameters supported. > >> > >> I guess we want this as parameter? > >> > >> Cheers, > >> > >> [1] > >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From slaskawi at redhat.com Mon Sep 25 06:30:07 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 25 Sep 2017 10:30:07 +0000 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> Message-ID: On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarre?o wrote: > I don't understand your reply here... are you talking about Infinispan > instances deployed on OpenShift Online? Or on premise? > TBH - I think there is no difference, so I'm thinking about both. > I can understand having some limits for OpenShift Online, but these > templates should also be applicable on premise, in which case I should be > able to easily define how much memory I want for the data grid, and the > rest of the parameters would be worked out by OpenShift/Kubernetes? > I have written a couple of emails about this on internal mailing list. Let me just point of some bits here: - We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. 
As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly. - in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios. - As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB. - You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests). - For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details). And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff. > > To demand on premise users to go and change their template just to adjust > the memory settings seems to me goes against all the usability improvements > we're trying to achieve. > At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. > > Cheers, > > > On 22 Sep 2017, at 14:49, Sebastian Laskawiec > wrote: > > > > It's very tricky... > > > > Memory is adjusted automatically to the container size [1] (of course > you may override it by supplying Xmx or "-n" as parameters [2]). The safe > limit is roughly Xmx=Xms=50% of container capacity (unless you do the > off-heap, that you can squeeze Infinispan much, much more). > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > bustable memory category so if there is additional memory in the node, > we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). > > > > Thanks, > > Sebastian > > > > [1] > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > [2] > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > [4] > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: > > Hi Sebastian, > > > > How do you change memory settings for Infinispan started via service > catalog? > > > > The memory settings seem defined in [1], but this is not one of the > parameters supported. > > > > I guess we want this as parameter? > > > > Cheers, > > > > [1] > https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > > -- > Galder Zamarre?o > Infinispan, Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170925/e8f2a3ee/attachment.html From slaskawi at redhat.com Mon Sep 25 06:37:59 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 25 Sep 2017 10:37:59 +0000 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarre?o wrote: > > > > On 22 Sep 2017, at 17:58, Sebastian Laskawiec > wrote: > > > > > > > > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero > wrote: > > On 22 September 2017 at 13:49, Sebastian Laskawiec > wrote: > > > It's very tricky... > > > > > > Memory is adjusted automatically to the container size [1] (of course > you > > > may override it by supplying Xmx or "-n" as parameters [2]). The safe > limit > > > is roughly Xmx=Xms=50% of container capacity (unless you do the > off-heap, > > > that you can squeeze Infinispan much, much more). > > > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > > > bustable memory category so if there is additional memory in the node, > we'll > > > get it. But if not, we won't go below 512 MB (and 500 mCPU). > > > > I hope that's a temporary choice of the work in process? > > > > Doesn't sound acceptable to address real world requirements.. > > Infinispan expects users to estimate how much memory they will need - > > which is hard enough - and then we should at least be able to start a > > cluster to address the specified need. Being able to rely on 512MB > > only per node would require lots of nodes even for small data sets, > > leading to extreme resource waste as each node would consume some non > > negligible portion of memory just to run the thing. > > > > hmmm yeah - its finished. > > > > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? > Or setting 50% of container memory? > > > > If the former and you set nothing, you will get the worse QoS and > Kubernetes will shut your container in first order whenever it gets out of > resources (I really recommend reading [4] and watching [3]). If the latter, > yeah I guess we can tune it a little with off-heap but, as my the latest > tests showed, if you enable RocksDB Cache Store, allocating even 50% is too > much (the container got killed by OOM Killer). That's probably the reason > why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So > even setting it to 50% means that we take the risk... > > > > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if > you're really know what you're doing, you should set Xmx yourself (this > will turn off setting Xmx automatically by the bootstrap script) and > possibly set limits (and adjust requests) in your Deployment Configuration > (if you set both requests and limits you will have the best QoS). > > Try put it this way: > > I've just started an Infinispan ephermeral instance and trying to load > some data and it's running out of memory. What knobs/settings does the > template offer to make sure I have a big enough Infinispan instance(s) to > handle my data? > Unfortunately calculating the number of instances based on input (e.g. "I want to have 10 GB of space for my data, please calculate how many 1 GB instances I need to create and adjust my app") is something that can not be done with templates. Templates are pretty simple and they do not support any calculations. You will probably need an Ansible Service Broker or Service Broker SDK to do it. 
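(To make the "templates are pretty simple" point concrete: oc process only substitutes the parameters you hand it and prints the resulting objects, so the best you can do is pass in numbers you have already worked out yourself. Using the parameter names the infinispan-ephemeral template exposes today, that would be something like:

  oc process infinispan-ephemeral \
    -p APPLICATION_NAME=datagrid \
    -p APPLICATION_USER=developer \
    -p APPLICATION_PASSWORD=developer \
    -p NUMBER_OF_INSTANCES=10 \
    | oc create -f -

There is no sizing logic anywhere in that pipeline.)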
So assuming you did the math on paper and you need 10 replicas, 1 GB each - just type oc edit dc/ and modify number of replicas and increase memory request. That should do the trick. Alternatively you can edit the ConfigMap and turn eviction on (but it really depends on your use case). BTW, the number of replicas is a parameter in template [1]. I can also expose memory request if you want me to (in that case just shoot me a ticket: https://github.com/infinispan/infinispan-openshift-templates/issues). And let me say it one more time - I'm open for suggestions (and pull requests) if you think this is not the way it should be done. [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L382 > > (Don't reply with: make your data smaller) > > Cheers, > > > > > > > Thanks, > > Sanne > > > > > > > > Thanks, > > > Sebastian > > > > > > [1] > > > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > > [2] > > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > > [4] > > > > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: > > >> > > >> Hi Sebastian, > > >> > > >> How do you change memory settings for Infinispan started via service > > >> catalog? > > >> > > >> The memory settings seem defined in [1], but this is not one of the > > >> parameters supported. > > >> > > >> I guess we want this as parameter? > > >> > > >> Cheers, > > >> > > >> [1] > > >> > https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > > >> -- > > >> Galder Zamarre?o > > >> Infinispan, Red Hat > > >> > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170925/90c322fe/attachment-0001.html

From galder at redhat.com Mon Sep 25 06:54:04 2017
From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=)
Date: Mon, 25 Sep 2017 12:54:04 +0200
Subject: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances
Message-ID: <472A4551-3E35-48F8-8E7F-B1DDE0C9B228@redhat.com>

Hey Sebastian,

I've started 2 instances of Infinispan ephemeral [1] and they don't seem to cluster together with the pods showing this message:

10:51:12,014 WARN [org.jgroups.protocols.kubernetes.KUBE_PING] (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes Client[masterUrl=https://172.30.0.1:443/api/v1, headers={Authorization=#MASKED:862#}, connectTimeout=5000, readTimeout=30000, operationAttempts=3, operationSleep=1000, streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider@51522f72] for cluster [cluster], namespace [openshift], labels [application=datagrid]; encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute [OpenStream] failed. Last failure was [java.io.IOException: Server returned HTTP response code: 403 for URL: https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]]

These are the options I'm giving to the template:

oc process infinispan-ephemeral -p \
  NUMBER_OF_INSTANCES=2 \
  APPLICATION_NAME=datagrid \
  APPLICATION_USER=developer \
  APPLICATION_PASSWORD=developer

I'd expect this to work out of the box, or do you need to pass in a management usr/pwd for it to work?

Cheers,

[1] https://github.com/infinispan/infinispan-openshift-templates
--
Galder Zamarreño
Infinispan, Red Hat

From slaskawi at redhat.com Mon Sep 25 07:11:26 2017
From: slaskawi at redhat.com (Sebastian Laskawiec)
Date: Mon, 25 Sep 2017 11:11:26 +0000
Subject: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances
In-Reply-To: <472A4551-3E35-48F8-8E7F-B1DDE0C9B228@redhat.com>
References: <472A4551-3E35-48F8-8E7F-B1DDE0C9B228@redhat.com>
Message-ID:

Seems like you didn't fill the namespace parameter while creating an app: https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L336

I already tried to eliminate this parameter (because it seems redundant) but currently there is no way to do it [1]. It's required for Role Binding which enables the Pod to query Kubernetes API and ask about Pods [2].
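In other words, KUBE_PING has to look up pods in the project they actually run in, and as far as I understand the NAMESPACE template parameter has to match that project. Assuming you deploy into the project you are currently switched to, adding the parameter to the oc process call you already use should be enough, roughly:

  oc process infinispan-ephemeral -p \
    NUMBER_OF_INSTANCES=2 \
    APPLICATION_NAME=datagrid \
    APPLICATION_USER=developer \
    APPLICATION_PASSWORD=developer \
    NAMESPACE=$(oc project -q)

(oc project -q just prints the name of the current project.)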
You may also try to use the third way: oc policy add-role-to-user view system:serviceaccount:<>:<> [1] https://github.com/infinispan/infinispan-openshift-templates/pull/9#discussion_r131409849 [2] https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html On Mon, Sep 25, 2017 at 12:54 PM Galder Zamarre?o wrote: > Hey Sebastian, > > I've started 2 instances of Infinispan ephemeral [1] and they don't seem > to cluster together with the pods showing this message: > > 10:51:12,014 WARN [org.jgroups.protocols.kubernetes.KUBE_PING] > (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes > Client[masterUrl=https://172.30.0.1:443/api/v1, > headers={Authorization=#MASKED:862#}, connectTimeout=5000, > readTimeout=30000, operationAttempts=3, operationSleep=1000, > streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider at 51522f72] > for cluster [cluster], namespace [openshift], labels > [application=datagrid]; encountered [java.lang.Exception: 3 attempt(s) with > a 1000ms sleep to execute [OpenStream] failed. Last failure was > [java.io.IOException: Server returned HTTP response code: 403 for URL: > https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid > ]] > > These are the options I'm giving to the template: > > oc process infinispan-ephemeral -p \ > NUMBER_OF_INSTANCES=2 \ > APPLICATION_NAME=datagrid \ > APPLICATION_USER=developer \ > APPLICATION_PASSWORD=developer > > I'd expect this to work out of the box, or do you need to pass in a > management usr/pwd for it to work? > > Cheers, > > [1] https://github.com/infinispan/infinispan-openshift-templates > -- > Galder Zamarre?o > Infinispan, Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170925/8334a42f/attachment.html From galder at redhat.com Mon Sep 25 07:15:48 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 13:15:48 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: > On 25 Sep 2017, at 12:37, Sebastian Laskawiec wrote: > > > > On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarre?o wrote: > > > > On 22 Sep 2017, at 17:58, Sebastian Laskawiec wrote: > > > > > > > > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero wrote: > > On 22 September 2017 at 13:49, Sebastian Laskawiec wrote: > > > It's very tricky... > > > > > > Memory is adjusted automatically to the container size [1] (of course you > > > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit > > > is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, > > > that you can squeeze Infinispan much, much more). > > > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in > > > bustable memory category so if there is additional memory in the node, we'll > > > get it. But if not, we won't go below 512 MB (and 500 mCPU). > > > > I hope that's a temporary choice of the work in process? > > > > Doesn't sound acceptable to address real world requirements.. > > Infinispan expects users to estimate how much memory they will need - > > which is hard enough - and then we should at least be able to start a > > cluster to address the specified need. 
Being able to rely on 512MB > > only per node would require lots of nodes even for small data sets, > > leading to extreme resource waste as each node would consume some non > > negligible portion of memory just to run the thing. > > > > hmmm yeah - its finished. > > > > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? Or setting 50% of container memory? > > > > If the former and you set nothing, you will get the worse QoS and Kubernetes will shut your container in first order whenever it gets out of resources (I really recommend reading [4] and watching [3]). If the latter, yeah I guess we can tune it a little with off-heap but, as my the latest tests showed, if you enable RocksDB Cache Store, allocating even 50% is too much (the container got killed by OOM Killer). That's probably the reason why setting MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting it to 50% means that we take the risk... > > > > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if you're really know what you're doing, you should set Xmx yourself (this will turn off setting Xmx automatically by the bootstrap script) and possibly set limits (and adjust requests) in your Deployment Configuration (if you set both requests and limits you will have the best QoS). > > Try put it this way: > > I've just started an Infinispan ephermeral instance and trying to load some data and it's running out of memory. What knobs/settings does the template offer to make sure I have a big enough Infinispan instance(s) to handle my data? > > Unfortunately calculating the number of instances based on input (e.g. "I want to have 10 GB of space for my data, please calculate how many 1 GB instances I need to create and adjust my app") is something that can not be done with templates. Templates are pretty simple and they do not support any calculations. You will probably need an Ansible Service Broker or Service Broker SDK to do it. > > So assuming you did the math on paper and you need 10 replicas, 1 GB each - just type oc edit dc/ and modify number of replicas and increase memory request. That should do the trick. Alternatively you can edit the ConfigMap and turn eviction on (but it really depends on your use case). > > BTW, the number of replicas is a parameter in template [1]. I can also expose memory request if you want me to (in that case just shoot me a ticket: https://github.com/infinispan/infinispan-openshift-templates/issues). And let me say it one more time - I'm open for suggestions (and pull requests) if you think this is not the way it should be done. I don't know how the overarching OpenShift caching, or shared memory services will be exposed, as an OpenShift user that was to store data in Infinispan, I should be able to provide how much (total) data I will put on it, and optionally how many backups I want for the data, and OpenShift should maybe provide with some options on how to do this: User: I want 2gb of data OpenShift: Assuming default of 1 backup (2 copies of data), I can offer you (assuming at least 25% overhead): a) 2 nodes of 2b b) 4 nodes of 1gb c) 8 nodes of 512mb And user decides... Assuming those higher level OpenShift services consume the Infinispan OpenShift templates, and you try to implement a situation like above, where the user specifies total amount of data, and you decide what options to offer them..., then the template would need to expose number of instances (done already) and memory for each of those instance (not there yet). 
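(For the record, the back-of-the-envelope rule I'm assuming in that exchange: total heap = data size x number of copies x ~1.25 for overhead, so 2gb of data with 1 backup is about 2 x 2 x 1.25 = 5gb of heap across the cluster. And with the docker image's "Xmx = 50% of the container" rule, each pod needs a container roughly twice its share of that heap, e.g. 4 pods -> ~1.25gb heap -> ~2.5gb of container memory each.)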
Still, I'll try to see if I can get my use case working with only 512mb per node, and use the number of instances as a way to add more memory. However, I feel that only exposing number of instances is not enough... Btw, this is something that needs to be agreed on and should be part of our Infinispan OpenShift integration specification/plan. Cheers, > > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L382 > > > (Don't reply with: make your data smaller) > > Cheers, > > > > > > > Thanks, > > Sanne > > > > > > > > Thanks, > > > Sebastian > > > > > > [1] > > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > > [2] > > > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > > [4] > > > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o wrote: > > >> > > >> Hi Sebastian, > > >> > > >> How do you change memory settings for Infinispan started via service > > >> catalog? > > >> > > >> The memory settings seem defined in [1], but this is not one of the > > >> parameters supported. > > >> > > >> I guess we want this as parameter? > > >> > > >> Cheers, > > >> > > >> [1] > > >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > > >> -- > > >> Galder Zamarre?o > > >> Infinispan, Red Hat > > >> > > > > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Mon Sep 25 07:18:56 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 13:18:56 +0200 Subject: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances In-Reply-To: References: <472A4551-3E35-48F8-8E7F-B1DDE0C9B228@redhat.com> Message-ID: <284BE256-C4C0-4644-B2D9-859EE1C3D4ED@redhat.com> Hmmm, is there a way to say that if you don't pass in namespace, you take the application name as namespace? > On 25 Sep 2017, at 13:11, Sebastian Laskawiec wrote: > > Seems like you didn't fill the namespace parameter while creating an app: https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L336 > > I already tried to eliminate this parameter (because it seems redundant) but currently there is no way to do it [1]. It s required for Role Binding which enables the Pod to query Kubernetes API and ask about Pods [2]. 
> > You may also try to use the third way: > oc policy add-role-to-user view system:serviceaccount:<>:<> > > [1] https://github.com/infinispan/infinispan-openshift-templates/pull/9#discussion_r131409849 > [2] https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html > > On Mon, Sep 25, 2017 at 12:54 PM Galder Zamarre?o wrote: > Hey Sebastian, > > I've started 2 instances of Infinispan ephemeral [1] and they don't seem to cluster together with the pods showing this message: > > 10:51:12,014 WARN [org.jgroups.protocols.kubernetes.KUBE_PING] (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes Client[masterUrl=https://172.30.0.1:443/api/v1, headers={Authorization=#MASKED:862#}, connectTimeout=5000, readTimeout=30000, operationAttempts=3, operationSleep=1000, streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider at 51522f72] for cluster [cluster], namespace [openshift], labels [application=datagrid]; encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute [OpenStream] failed. Last failure was [java.io.IOException: Server returned HTTP response code: 403 for URL: https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]] > > These are the options I'm giving to the template: > > oc process infinispan-ephemeral -p \ > NUMBER_OF_INSTANCES=2 \ > APPLICATION_NAME=datagrid \ > APPLICATION_USER=developer \ > APPLICATION_PASSWORD=developer > > I'd expect this to work out of the box, or do you need to pass in a management usr/pwd for it to work? > > Cheers, > > [1] https://github.com/infinispan/infinispan-openshift-templates > -- > Galder Zamarre?o > Infinispan, Red Hat > -- Galder Zamarre?o Infinispan, Red Hat From galder at redhat.com Mon Sep 25 07:26:00 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 13:26:00 +0200 Subject: [infinispan-dev] Unable to cluster Infinispan ephemeral template instances In-Reply-To: <284BE256-C4C0-4644-B2D9-859EE1C3D4ED@redhat.com> References: <472A4551-3E35-48F8-8E7F-B1DDE0C9B228@redhat.com> <284BE256-C4C0-4644-B2D9-859EE1C3D4ED@redhat.com> Message-ID: <392EDDA4-8DEF-42CE-93B8-9F89F1A16726@redhat.com> Sebastian, are you sure the namespace is the problem? The template seems to define a default value for namepsace [2]. Anyway, I've tried to pass a NAMESPACE value and I still the same WARN messages and no cluster formed. Cheers, [2] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L338 > On 25 Sep 2017, at 13:18, Galder Zamarre?o wrote: > > Hmmm, is there a way to say that if you don't pass in namespace, you take the application name as namespace? > >> On 25 Sep 2017, at 13:11, Sebastian Laskawiec wrote: >> >> Seems like you didn't fill the namespace parameter while creating an app: https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L336 >> >> I already tried to eliminate this parameter (because it seems redundant) but currently there is no way to do it [1]. It s required for Role Binding which enables the Pod to query Kubernetes API and ask about Pods [2]. 
>> >> You may also try to use the third way: >> oc policy add-role-to-user view system:serviceaccount:<>:<> >> >> [1] https://github.com/infinispan/infinispan-openshift-templates/pull/9#discussion_r131409849 >> [2] https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html >> >> On Mon, Sep 25, 2017 at 12:54 PM Galder Zamarre?o wrote: >> Hey Sebastian, >> >> I've started 2 instances of Infinispan ephemeral [1] and they don't seem to cluster together with the pods showing this message: >> >> 10:51:12,014 WARN [org.jgroups.protocols.kubernetes.KUBE_PING] (jgroups-4,datagrid-1-187kx) failed getting JSON response from Kubernetes Client[masterUrl=https://172.30.0.1:443/api/v1, headers={Authorization=#MASKED:862#}, connectTimeout=5000, readTimeout=30000, operationAttempts=3, operationSleep=1000, streamProvider=org.jgroups.protocols.kubernetes.stream.InsecureStreamProvider at 51522f72] for cluster [cluster], namespace [openshift], labels [application=datagrid]; encountered [java.lang.Exception: 3 attempt(s) with a 1000ms sleep to execute [OpenStream] failed. Last failure was [java.io.IOException: Server returned HTTP response code: 403 for URL: https://172.30.0.1:443/api/v1/namespaces/openshift/pods?labelSelector=application%3Ddatagrid]] >> >> These are the options I'm giving to the template: >> >> oc process infinispan-ephemeral -p \ >> NUMBER_OF_INSTANCES=2 \ >> APPLICATION_NAME=datagrid \ >> APPLICATION_USER=developer \ >> APPLICATION_PASSWORD=developer >> >> I'd expect this to work out of the box, or do you need to pass in a management usr/pwd for it to work? >> >> Cheers, >> >> [1] https://github.com/infinispan/infinispan-openshift-templates >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> > > -- > Galder Zamarre?o > Infinispan, Red Hat > -- Galder Zamarre?o Infinispan, Red Hat From sanne at infinispan.org Mon Sep 25 07:56:06 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 25 Sep 2017 12:56:06 +0100 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> Message-ID: On 22 September 2017 at 16:58, Sebastian Laskawiec wrote: > > > On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero > wrote: >> >> On 22 September 2017 at 13:49, Sebastian Laskawiec >> wrote: >> > It's very tricky... >> > >> > Memory is adjusted automatically to the container size [1] (of course >> > you >> > may override it by supplying Xmx or "-n" as parameters [2]). The safe >> > limit >> > is roughly Xmx=Xms=50% of container capacity (unless you do the >> > off-heap, >> > that you can squeeze Infinispan much, much more). >> > >> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in >> > bustable memory category so if there is additional memory in the node, >> > we'll >> > get it. But if not, we won't go below 512 MB (and 500 mCPU). >> >> I hope that's a temporary choice of the work in process? >> >> Doesn't sound acceptable to address real world requirements.. >> Infinispan expects users to estimate how much memory they will need - >> which is hard enough - and then we should at least be able to start a >> cluster to address the specified need. Being able to rely on 512MB >> only per node would require lots of nodes even for small data sets, >> leading to extreme resource waste as each node would consume some non >> negligible portion of memory just to run the thing. > > > hmmm yeah - its finished. > > I'm not exactly sure where the problem is. Is it 512 MB RAM/500 mCPUs? 
Or > setting 50% of container memory? If the orchestrator "might" give us more than 512MB but this is not guaranteed, we can't rely on it and we'll have to assume we have 512M only. I see no use in getting some heap size which was not explicitly set; if there's extra available memory that's not too bad to use as native memory (e.g. buffering RocksDB IO operations) so you might as well not assign it to the JVM - since we can't rely on it we won't make effective use of it. Secondarily, yes we should make sure it's easy enough to request nodes with more than 512MB each as Infinispan gets way more useful with larger heaps. The ROI on 512MB would make me want to use a different technology! > > If the former and you set nothing, you will get the worse QoS and Kubernetes > will shut your container in first order whenever it gets out of resources (I > really recommend reading [4] and watching [3]). If the latter, yeah I guess > we can tune it a little with off-heap but, as my the latest tests showed, if > you enable RocksDB Cache Store, allocating even 50% is too much (the > container got killed by OOM Killer). That's probably the reason why setting > MaxRAM JVM parameters sets Xmx to 25% (!!!) of MaxRAM value. So even setting > it to 50% means that we take the risk... > > So TBH, I see no silver bullet here and I'm open for suggestions. IMO if > you're really know what you're doing, you should set Xmx yourself (this will > turn off setting Xmx automatically by the bootstrap script) and possibly set > limits (and adjust requests) in your Deployment Configuration (if you set > both requests and limits you will have the best QoS). +1 Let's recommend this approach, and discourage the automated sizing at least until we can implement some of the things Galder is also suggesting. I'd just remove that option as it's going to cause more trouble than what it's worth it. You are the OpenShift expert and I have no idea how this could be done :) I'm just highlighting that Infinispan can't deal with having some variable heap size, having this would makes right-size tuning extremely more complex to users - heck I wouldn't know how to do it myself. +1 to Galder's suggestions; I particularly like the idea to create various templates specifically tuned for some fixed heap values; for example we could create one for each of the common machine types on popular cloud providers. Not suggesting to have a template for each of them but we could pick some reasonable configurations so that then we can help matching the template to the physical machine. I guess this doesn't translate directly to OpenShift resource limits but that's something you could figure out? After all an OS container has to run on some cloud so it would still help people to have a template "suited" for each popular, actually existing machine type. Incidentally this approach would also produce helpful configuration templates for people running on clouds directly. 
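On the practical side, recommending the manual route could be as simple as documenting one command. If I'm reading the oc client right, pinning requests = limits on an existing DeploymentConfig (which also puts the pods in the Guaranteed QoS class) should be roughly:

  oc set resources dc/datagrid --requests=memory=2Gi,cpu=1 --limits=memory=2Gi,cpu=1

(assuming the dc is called "datagrid"), and then people set Xmx explicitly to match, rather than relying on the automatic guess.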
Thanks, Sanne > >> >> Thanks, >> Sanne >> >> > >> > Thanks, >> > Sebastian >> > >> > [1] >> > >> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory >> > [2] >> > >> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 >> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 >> > [4] >> > >> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html >> > >> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o >> > wrote: >> >> >> >> Hi Sebastian, >> >> >> >> How do you change memory settings for Infinispan started via service >> >> catalog? >> >> >> >> The memory settings seem defined in [1], but this is not one of the >> >> parameters supported. >> >> >> >> I guess we want this as parameter? >> >> >> >> Cheers, >> >> >> >> [1] >> >> >> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 >> >> -- >> >> Galder Zamarre?o >> >> Infinispan, Red Hat >> >> >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From ttarrant at redhat.com Mon Sep 25 11:01:46 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 25 Sep 2017 17:01:46 +0200 Subject: [infinispan-dev] Weekly Infinispan IRC Meeting Logs 2017-09-25 Message-ID: <9aa342c0-70ef-a455-d739-0320deea59e6@redhat.com> Howdy, the weekly infinispan meeting happened on IRC like every Monday, and the logs are here for your perusal: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-09-25-14.06.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Mon Sep 25 11:44:51 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Mon, 25 Sep 2017 17:44:51 +0200 Subject: [infinispan-dev] Code examples in multiple languages In-Reply-To: References: Message-ID: <3FA5FB9D-491A-44F7-8941-F27C1CFD6711@redhat.com> I asked Dan Allen et al on twitter [2]. Spring has developed a similar plugin [3] and it appears to be included in [4]. Cheers, [2] https://twitter.com/galderz/status/910848538720038913 [3] https://docs.spring.io/spring-restdocs/docs/current/reference/html5/#getting-started-build-configuration [4] https://github.com/spring-io/spring-asciidoctor-extensions > On 20 Sep 2017, at 21:17, Tristan Tarrant wrote: > > One thing that I wish we had is the ability, when possible, to give code > examples for our API in all of our implementations (embedded, hotrod > java, c++, c#, node.js and REST). > > Currently each one handles documentation differently and we are not very > consistent with structure, content and examples. > > I've been looking at Slate [1] which uses Markdown and is quite nice, > but has the big disadvantage that it would create something which is > separate from our current documentation... > > An alternative approach would be to implement an asciidoctor plugin > which provides some kind of tabbed code block. > > Any other ideas ? 
> > > Tristan > > [1] https://lord.github.io/slate/ > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o Infinispan, Red Hat From slaskawi at redhat.com Tue Sep 26 22:56:37 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 27 Sep 2017 02:56:37 +0000 Subject: [infinispan-dev] Jenkins - HTTPS only Message-ID: Hey, During the upgrade of SSL certificate there was a recommendation to disable HTTP. It actually makes sense, so from now on, please use HTTPS only. Thanks, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170927/30a2368e/attachment.html From slaskawi at redhat.com Wed Sep 27 09:42:06 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Wed, 27 Sep 2017 13:42:06 +0000 Subject: [infinispan-dev] Jenkins - HTTPS only In-Reply-To: References: Message-ID: Restored previous settings... sorry for killing slaves ;) On Wed, Sep 27, 2017 at 4:56 AM Sebastian Laskawiec wrote: > Hey, > > During the upgrade of SSL certificate there was a recommendation to > disable HTTP. It actually makes sense, so from now on, please use HTTPS > only. > > Thanks, > Sebastian > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170927/e02478c4/attachment.html From emmanuel at hibernate.org Thu Sep 28 02:37:02 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 28 Sep 2017 08:37:02 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> Message-ID: Sebastian, What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. And in these deployment, Infinispan should target the multi GB, not 512 MB. Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more. > On 25 Sep 2017, at 12:30, Sebastian Laskawiec wrote: > > > > On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarre?o > wrote: > I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise? > > TBH - I think there is no difference, so I'm thinking about both. > > I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes? > > I have written a couple of emails about this on internal mailing list. Let me just point of some bits here: > We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly. > in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios. 
> As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB. > You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests). > For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details). > And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff. > > To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve. > > At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. > > > Cheers, > > > On 22 Sep 2017, at 14:49, Sebastian Laskawiec > wrote: > > > > It's very tricky... > > > > Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more). > > > > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). > > > > Thanks, > > Sebastian > > > > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory > > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 > > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 > > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html > > > > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: > > Hi Sebastian, > > > > How do you change memory settings for Infinispan started via service catalog? > > > > The memory settings seem defined in [1], but this is not one of the parameters supported. > > > > I guess we want this as parameter? > > > > Cheers, > > > > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > > -- > Galder Zamarre?o > Infinispan, Red Hat > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170928/7f30578b/attachment-0001.html From slaskawi at redhat.com Thu Sep 28 06:00:23 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 28 Sep 2017 10:00:23 +0000 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> Message-ID: So how about exposing two parameters - Xms/Xmx and Total amount of memory for Pod (Request = Limit in that case). Would it work for you? On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard wrote: > Sebastian, > > What Galder, Sanne and others are saying is that in OpenShift on prem, > there is no or at least a higher limit in the minimal container memory you > can ask. And in these deployment, Infinispan should target the multi GB, > not 512 MB. > > Of course, *if* you ask for a guaranteed 512MB, then it would be silly to > try and consume more. > > On 25 Sep 2017, at 12:30, Sebastian Laskawiec wrote: > > > > On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarre?o > wrote: > >> I don't understand your reply here... are you talking about Infinispan >> instances deployed on OpenShift Online? Or on premise? >> > > TBH - I think there is no difference, so I'm thinking about both. > > >> I can understand having some limits for OpenShift Online, but these >> templates should also be applicable on premise, in which case I should be >> able to easily define how much memory I want for the data grid, and the >> rest of the parameters would be worked out by OpenShift/Kubernetes? >> > > I have written a couple of emails about this on internal mailing list. Let > me just point of some bits here: > > - We need to set either Xmx or MaxRAM to tell the JVM how much memory > it can allocate. As you probably know JDK8 is not CGroups aware by default > (there are some experimental options but they set MaxRAM parameter equal to > CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess > allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it > explicitly. > - in our Docker image we set Xmx = 50% of CGroups limit. This is > better than settings above but there is some risk in certain scenarios. > - As I mentioned in my previous email, in the templates we are setting > Requests (not Limits!!!). So you will probably get more memory than > specified in the template but it depends on the node you're running on. The > key point is that you won't get less than those 512 MB. > - You can always edit your DeploymentConfig (after creating your > application from template) and adjust Limits (or even requests). > - For simple scenarios and bigger containers (like 4 GB) we can go > more than 50% (see internal mailing list for details). > > And as I said before - if you guys think we should do it differently, I'm > open for suggestions. I think it's quite standard way of configuring this > sort of stuff. > >> >> To demand on premise users to go and change their template just to adjust >> the memory settings seems to me goes against all the usability improvements >> we're trying to achieve. >> > > At some point you need to define how much memory you will need. Whether > it's in the template, your DeploymentConfiguration (created from template > using oc process), Quota - it doesn't matter. You must write it somewhere - > don't you? With current approach, the best way to do it is in Deployment > Configuration Requests. 
This sets CGroups limit, and based on that, > Infinispan bootstrap scripts will calculate Xmx. > > >> >> Cheers, >> >> > On 22 Sep 2017, at 14:49, Sebastian Laskawiec >> wrote: >> > >> > It's very tricky... >> > >> > Memory is adjusted automatically to the container size [1] (of course >> you may override it by supplying Xmx or "-n" as parameters [2]). The safe >> limit is roughly Xmx=Xms=50% of container capacity (unless you do the >> off-heap, that you can squeeze Infinispan much, much more). >> > >> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in >> bustable memory category so if there is additional memory in the node, >> we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). >> > >> > Thanks, >> > Sebastian >> > >> > [1] >> https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory >> > [2] >> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 >> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 >> > [4] >> https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html >> > >> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o >> wrote: >> > Hi Sebastian, >> > >> > How do you change memory settings for Infinispan started via service >> catalog? >> > >> > The memory settings seem defined in [1], but this is not one of the >> parameters supported. >> > >> > I guess we want this as parameter? >> > >> > Cheers, >> > >> > [1] >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 >> > -- >> > Galder Zamarre?o >> > Infinispan, Red Hat >> > >> >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170928/8b60735d/attachment.html From emmanuel at hibernate.org Thu Sep 28 09:13:47 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 28 Sep 2017 15:13:47 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> Message-ID: <9312470B-4CE9-428E-8147-6343153FCE3F@hibernate.org> I am personally content if you provide the total amount of memory for Pod and you as OSB designer decide of the -Xms/Xmx for the services. Unlike what Sanne said I think, Amazon and the like they don?t give you x GB of cache. They give you an instance of Redis or Memcached within a VM that has x amount of GB allocated. What you can stuck in is left as an exercise for the reader. Not ideal but I think they went for the practical in this case. For the pain JDG, then more options is fine. > On 28 Sep 2017, at 12:00, Sebastian Laskawiec wrote: > > So how about exposing two parameters - Xms/Xmx and Total amount of memory for Pod (Request = Limit in that case). Would it work for you? > > On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard > wrote: > Sebastian, > > What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. 
And in these deployment, Infinispan should target the multi GB, not 512 MB. > > Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more. > > >> On 25 Sep 2017, at 12:30, Sebastian Laskawiec > wrote: >> > >> >> >> On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarre?o > wrote: >> I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise? >> >> TBH - I think there is no difference, so I'm thinking about both. >> >> I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes? >> >> I have written a couple of emails about this on internal mailing list. Let me just point of some bits here: >> We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly. >> in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios. >> As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB. >> You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests). >> For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details). >> And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff. >> >> To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve. >> >> At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. >> >> >> Cheers, >> >> > On 22 Sep 2017, at 14:49, Sebastian Laskawiec > wrote: >> > >> > It's very tricky... >> > >> > Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more). >> > >> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). 
>> > >> > Thanks, >> > Sebastian >> > >> > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory >> > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 >> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 >> > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html >> > >> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: >> > Hi Sebastian, >> > >> > How do you change memory settings for Infinispan started via service catalog? >> > >> > The memory settings seem defined in [1], but this is not one of the parameters supported. >> > >> > I guess we want this as parameter? >> > >> > Cheers, >> > >> > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 >> > -- >> > Galder Zamarre?o >> > Infinispan, Red Hat >> > >> >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> > >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170928/95b4db3b/attachment-0001.html From emmanuel at hibernate.org Thu Sep 28 11:09:52 2017 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 28 Sep 2017 17:09:52 +0200 Subject: [infinispan-dev] Adjusting memory settings in template In-Reply-To: <9312470B-4CE9-428E-8147-6343153FCE3F@hibernate.org> References: <01F28A4B-7E72-4177-9B73-B3EA7714E972@redhat.com> <03D1CAB5-4EC7-446F-A5DF-BE674DB00B1D@redhat.com> <9312470B-4CE9-428E-8147-6343153FCE3F@hibernate.org> Message-ID: <9EA17701-BA9B-4468-B884-3832C9F5A2E9@hibernate.org> Just to clarify, What the user should be able to set is memory request according to the definition here https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/ and we chose a memory limit with reasonable margin (20%?) but aim at never going over memory request And to achieve that, we will build estimates based on the test work Sebastian has been doing around Xmx / memory request ratio. Each usage type will have to have revised estimates. These will be ?hardcoded? for a given memory request size. At least for a given usage and version of Infinispan. I like Sanne?s idea of a calculator where you input your data size needs and it offers pod number / pod size options. But we will have to offer that in the doc or something. Not as part of the service catalog UI in its current incarnation. Emmanuel > On 28 Sep 2017, at 15:13, Emmanuel Bernard wrote: > > I am personally content if you provide the total amount of memory for Pod and you as OSB designer decide of the -Xms/Xmx for the services. Unlike what Sanne said I think, Amazon and the like they don?t give you x GB of cache. They give you an instance of Redis or Memcached within a VM that has x amount of GB allocated. What you can stuck in is left as an exercise for the reader. > > Not ideal but I think they went for the practical in this case. > > For the pain JDG, then more options is fine. 
> >> On 28 Sep 2017, at 12:00, Sebastian Laskawiec > wrote: >> >> So how about exposing two parameters - Xms/Xmx and Total amount of memory for Pod (Request = Limit in that case). Would it work for you? >> >> On Thu, Sep 28, 2017 at 8:38 AM Emmanuel Bernard > wrote: >> Sebastian, >> >> What Galder, Sanne and others are saying is that in OpenShift on prem, there is no or at least a higher limit in the minimal container memory you can ask. And in these deployment, Infinispan should target the multi GB, not 512 MB. >> >> Of course, *if* you ask for a guaranteed 512MB, then it would be silly to try and consume more. >> >> >>> On 25 Sep 2017, at 12:30, Sebastian Laskawiec > wrote: >>> >> >>> >>> >>> On Mon, Sep 25, 2017 at 11:54 AM Galder Zamarre?o > wrote: >>> I don't understand your reply here... are you talking about Infinispan instances deployed on OpenShift Online? Or on premise? >>> >>> TBH - I think there is no difference, so I'm thinking about both. >>> >>> I can understand having some limits for OpenShift Online, but these templates should also be applicable on premise, in which case I should be able to easily define how much memory I want for the data grid, and the rest of the parameters would be worked out by OpenShift/Kubernetes? >>> >>> I have written a couple of emails about this on internal mailing list. Let me just point of some bits here: >>> We need to set either Xmx or MaxRAM to tell the JVM how much memory it can allocate. As you probably know JDK8 is not CGroups aware by default (there are some experimental options but they set MaxRAM parameter equal to CGroups limit; this translates to Xmx=MaxRAM(CGroups limit) / 4. I guess allocating Xmx=(CGroups limit)/4 is too high for us, so we need to set it explicitly. >>> in our Docker image we set Xmx = 50% of CGroups limit. This is better than settings above but there is some risk in certain scenarios. >>> As I mentioned in my previous email, in the templates we are setting Requests (not Limits!!!). So you will probably get more memory than specified in the template but it depends on the node you're running on. The key point is that you won't get less than those 512 MB. >>> You can always edit your DeploymentConfig (after creating your application from template) and adjust Limits (or even requests). >>> For simple scenarios and bigger containers (like 4 GB) we can go more than 50% (see internal mailing list for details). >>> And as I said before - if you guys think we should do it differently, I'm open for suggestions. I think it's quite standard way of configuring this sort of stuff. >>> >>> To demand on premise users to go and change their template just to adjust the memory settings seems to me goes against all the usability improvements we're trying to achieve. >>> >>> At some point you need to define how much memory you will need. Whether it's in the template, your DeploymentConfiguration (created from template using oc process), Quota - it doesn't matter. You must write it somewhere - don't you? With current approach, the best way to do it is in Deployment Configuration Requests. This sets CGroups limit, and based on that, Infinispan bootstrap scripts will calculate Xmx. >>> >>> >>> Cheers, >>> >>> > On 22 Sep 2017, at 14:49, Sebastian Laskawiec > wrote: >>> > >>> > It's very tricky... >>> > >>> > Memory is adjusted automatically to the container size [1] (of course you may override it by supplying Xmx or "-n" as parameters [2]). 
The safe limit is roughly Xmx=Xms=50% of container capacity (unless you do the off-heap, that you can squeeze Infinispan much, much more). >>> > >>> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in bustable memory category so if there is additional memory in the node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU). >>> > >>> > Thanks, >>> > Sebastian >>> > >>> > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory >>> > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308 >>> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4 >>> > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html >>> > >>> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarre?o > wrote: >>> > Hi Sebastian, >>> > >>> > How do you change memory settings for Infinispan started via service catalog? >>> > >>> > The memory settings seem defined in [1], but this is not one of the parameters supported. >>> > >>> > I guess we want this as parameter? >>> > >>> > Cheers, >>> > >>> > [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308 >>> > -- >>> > Galder Zamarre?o >>> > Infinispan, Red Hat >>> > >>> >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20170928/83544517/attachment.html