From vblagoje at redhat.com Thu Nov 2 08:33:12 2017 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Thu, 2 Nov 2017 08:33:12 -0400 Subject: [infinispan-dev] Counters and their configurations in Infinispan server (DMR) Message-ID: <33981678-3153-2739-e728-020a19bb255b@redhat.com> Hey guys, How do you anticipate users are going to deal with counters? Are they going to be creating a lot of them in their applications, say dozens, hundreds, thousands? I am asking because I have a dilemma about their representation in DMR and therefore in the admin console and potentially wider. The dilemma is related to splitting the concepts and the mapping between counter configuration and counter instances. On one end of the possible spectrum of use, if users are going to have many counters that have the same configuration then it makes sense to delineate the DMR concept of the counter configuration and its counter instance, just like we do for caches and cache configuration templates. We deal with cache configurations as templates; one could create hundreds of caches from the same template. We could do the same with counters. On the other end, if users are going to create very few counters then it likely does not make much sense to separate counter configurations from their instances; they would have a one-to-one mapping. For each new counter, users would just enter a counter configuration and launch an instance of a corresponding counter. The first approach saves resources and makes large counter instantiations easier, while the second approach is easier to understand conceptually but is inefficient if we are going to have many counter instances. Thoughts? 
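For concreteness, the trade-off between the two models can be sketched as a toy registry (plain Java, not Infinispan or DMR code; all class and method names below are illustrative, invented for this sketch):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model contrasting the two options discussed above:
// a shared, named configuration acting as a template for many
// counter instances, versus one configuration per counter.
public class CounterRegistrySketch {

    // Hypothetical configuration: just an initial value here.
    record CounterConfig(long initialValue) {}

    private final Map<String, CounterConfig> templates = new HashMap<>();
    private final Map<String, Long> counters = new HashMap<>();

    // Option 1: register a configuration once, then instantiate
    // any number of counters from it by configuration name.
    void defineTemplate(String configName, CounterConfig config) {
        templates.put(configName, config);
    }

    void createFromTemplate(String counterName, String configName) {
        counters.put(counterName, templates.get(configName).initialValue());
    }

    // Option 2: one-to-one mapping, configuration supplied inline.
    void createStandalone(String counterName, CounterConfig config) {
        counters.put(counterName, config.initialValue());
    }

    long value(String counterName) {
        return counters.get(counterName);
    }

    public static void main(String[] args) {
        CounterRegistrySketch registry = new CounterRegistrySketch();
        registry.defineTemplate("default", new CounterConfig(0));
        // Hundreds of counters could share the single "default" template...
        registry.createFromTemplate("oranges", "default");
        registry.createFromTemplate("apples", "default");
        // ...while a one-off counter carries its own configuration.
        registry.createStandalone("one-off", new CounterConfig(42));
        System.out.println(registry.value("oranges"));
        System.out.println(registry.value("one-off"));
    }
}
```

In the first model the DMR tree holds one node per configuration plus one per instance; in the second every counter node embeds its own configuration.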
Vladimir From pedro at infinispan.org Thu Nov 2 11:07:52 2017 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 2 Nov 2017 15:07:52 +0000 Subject: [infinispan-dev] Counters and their configurations in Infinispan server (DMR) In-Reply-To: <33981678-3153-2739-e728-020a19bb255b@redhat.com> References: <33981678-3153-2739-e728-020a19bb255b@redhat.com> Message-ID: Hi, IMO, I would separate the concept of counter and configuration. Even if a user doesn't create many counters, I think most of them will share the same configuration. As a bad example, if you want to count oranges and apples, you're going to use the same configuration... probably :) In addition, it is symmetric to the cache DMR tree. This would reduce the learning curve if the user is already used to the CLI (i.e. create caches). Cheers, Pedro On 02-11-2017 12:33, Vladimir Blagojevic wrote: > Hey guys, > > How do you anticipate users are going to deal with counters? Are they > going to be creating a lot of them in their applications, say dozens, > hundreds, thousands? > > I am asking because I have a dilemma about their representation in DMR > and therefore in the admin console and potentially wider. The dilemma is > related to splitting the concepts and the mapping between counter > configuration and counter instances. On one end of the possible spectrum > of use, if users are going to have many counters that have the same > configuration then it makes sense to delineate the DMR concept of the > counter configuration and its counter instance, just like we do for > caches and cache configuration templates. We deal with cache > configurations as templates; one could create hundreds of caches from > the same template. We could do the same with counters. On the other end, > if users are going to create very few counters then it likely does not > make much sense to separate counter configurations from their instances; > they would have a one-to-one mapping. 
For each new counter, users would > just enter a counter configuration and launch an instance of a > corresponding counter. > > The first approach saves resources and makes large counter > instantiations easier, while the second approach is easier to understand > conceptually but is inefficient if we are going to have many counter > instances. > > Thoughts? > > Vladimir > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From galder at redhat.com Thu Nov 2 09:18:13 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 2 Nov 2017 14:18:13 +0100 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore Message-ID: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Hi all, I'm currently going through the JCache 1.1 proposed changes, and one that made me think is [1]. In particular: > Caches do not use forward slashes (/) or colons (:) as part of their names. Additionally it is > recommended that cache names starting with java. or javax. should not be used. I'm wondering whether in the future we should move away from the triple underscore trick we use for internal cache names, and instead just prepend them with `org.infinispan`, which is our group id. I think it'd be cleaner. Thoughts? [1] https://github.com/jsr107/jsr107spec/issues/350 -- Galder Zamarreño Infinispan, Red Hat From anistor at redhat.com Thu Nov 2 18:20:06 2017 From: anistor at redhat.com (Adrian Nistor) Date: Fri, 3 Nov 2017 00:20:06 +0200 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: I like this proposal. 
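The proposed convention is easy to check and to migrate to mechanically. A hypothetical helper (illustrative only, not actual Infinispan code; the cache name used in the example is made up) might look like:

```java
// Hypothetical helper contrasting the current triple-underscore
// convention with the proposed reverse-domain prefix for internal
// cache names. Names and methods here are illustrative only.
public class InternalCacheNames {

    static final String LEGACY_PREFIX = "___";
    static final String PROPOSED_PREFIX = "org.infinispan.";

    static boolean isInternal(String cacheName) {
        return cacheName.startsWith(LEGACY_PREFIX)
                || cacheName.startsWith(PROPOSED_PREFIX);
    }

    // Rewrite a legacy internal name to the proposed convention;
    // user-defined names pass through unchanged.
    static String migrate(String cacheName) {
        return cacheName.startsWith(LEGACY_PREFIX)
                ? PROPOSED_PREFIX + cacheName.substring(LEGACY_PREFIX.length())
                : cacheName;
    }

    public static void main(String[] args) {
        System.out.println(migrate("___script_cache"));
        System.out.println(isInternal("org.infinispan.script_cache"));
        // Extra bytes per ASCII name compared to the triple underscore:
        System.out.println(PROPOSED_PREFIX.length() - LEGACY_PREFIX.length());
    }
}
```

The last line makes the marshalling cost of the longer prefix concrete: roughly a dozen extra bytes per name, which is why a short internal ID for well-known caches might still be worth considering.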
On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: > Hi all, > > I'm currently going through the JCache 1.1 proposed changes, and one that made me think is [1]. In particular: > >> Caches do not use forward slashes (/) or colons (:) as part of their names. Additionally it is >> recommended that cache names starting with java. or javax.should not be used. > I'm wondering whether in the future we should move away from the triple underscore trick we use for internal cache names, and instead just prepend them with `org.infinispan`, which is our group id. I think it'd be cleaner. > > Thoughts? > > [1] https://github.com/jsr107/jsr107spec/issues/350 > -- > Galder Zamarre?o > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Thu Nov 2 19:36:24 2017 From: mudokonman at gmail.com (William Burns) Date: Thu, 02 Nov 2017 23:36:24 +0000 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: +1 On Thu, Nov 2, 2017, 7:35 PM Adrian Nistor wrote: > I like this proposal. > > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: > > Hi all, > > > > I'm currently going through the JCache 1.1 proposed changes, and one > that made me think is [1]. In particular: > > > >> Caches do not use forward slashes (/) or colons (:) as part of their > names. Additionally it is > >> recommended that cache names starting with java. or javax.should not be > used. > > I'm wondering whether in the future we should move away from the triple > underscore trick we use for internal cache names, and instead just prepend > them with `org.infinispan`, which is our group id. I think it'd be cleaner. > > > > Thoughts? 
> > > > [1] https://github.com/jsr107/jsr107spec/issues/350 > > -- > > Galder Zamarre?o > > Infinispan, Red Hat > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171102/284df780/attachment.html From sanne at infinispan.org Thu Nov 2 19:42:04 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 2 Nov 2017 23:42:04 +0000 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: On 2 November 2017 at 22:20, Adrian Nistor wrote: > I like this proposal. +1 > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: >> Hi all, >> >> I'm currently going through the JCache 1.1 proposed changes, and one that made me think is [1]. In particular: >> >>> Caches do not use forward slashes (/) or colons (:) as part of their names. Additionally it is >>> recommended that cache names starting with java. or javax.should not be used. >> I'm wondering whether in the future we should move away from the triple underscore trick we use for internal cache names, and instead just prepend them with `org.infinispan`, which is our group id. I think it'd be cleaner. >> >> Thoughts? 
>> >> [1] https://github.com/jsr107/jsr107spec/issues/350 >> -- >> Galder Zamarre?o >> Infinispan, Red Hat >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Fri Nov 3 03:39:37 2017 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 3 Nov 2017 08:39:37 +0100 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: From systematic POV, +1. For marshalling it would bring another 11 bytes, which is not ideal, so we might consider encoding that differently. Not sure how error-prone would some naming that has non-trivial transformation be. R. On 11/03/2017 12:42 AM, Sanne Grinovero wrote: > On 2 November 2017 at 22:20, Adrian Nistor wrote: >> I like this proposal. > +1 > >> On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: >>> Hi all, >>> >>> I'm currently going through the JCache 1.1 proposed changes, and one that made me think is [1]. In particular: >>> >>>> Caches do not use forward slashes (/) or colons (:) as part of their names. Additionally it is >>>> recommended that cache names starting with java. or javax.should not be used. >>> I'm wondering whether in the future we should move away from the triple underscore trick we use for internal cache names, and instead just prepend them with `org.infinispan`, which is our group id. I think it'd be cleaner. >>> >>> Thoughts? 
>>> >>> [1] https://github.com/jsr107/jsr107spec/issues/350 >>> -- >>> Galder Zamarre?o >>> Infinispan, Red Hat >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From slaskawi at redhat.com Fri Nov 3 04:05:26 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Fri, 03 Nov 2017 08:05:26 +0000 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: I'm pretty sure it's a silly question, but I need to ask it :) Why can't we store all our internal information in a single, replicated cache (of a type ). PURPOSE could be an enum or a string identifying whether it's scripting cache, transaction cache or anything else. The value (Map) would store whatever you need. On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero wrote: > On 2 November 2017 at 22:20, Adrian Nistor wrote: > > I like this proposal. > > +1 > > > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: > >> Hi all, > >> > >> I'm currently going through the JCache 1.1 proposed changes, and one > that made me think is [1]. In particular: > >> > >>> Caches do not use forward slashes (/) or colons (:) as part of their > names. Additionally it is > >>> recommended that cache names starting with java. or javax.should not > be used. 
> >> I'm wondering whether in the future we should move away from the triple > underscore trick we use for internal cache names, and instead just prepend > them with `org.infinispan`, which is our group id. I think it'd be cleaner. > >> > >> Thoughts? > >> > >> [1] https://github.com/jsr107/jsr107spec/issues/350 > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171103/a1154db7/attachment.html From rvansa at redhat.com Fri Nov 3 04:42:36 2017 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 3 Nov 2017 09:42:36 +0100 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> Message-ID: <82703f60-3b6d-0f6e-26a1-263ec52fe442@redhat.com> Because you would have to duplicate entire Map on each update, unless you used not-100%-so-far functional commands. We've used the ScopedKey that would make this Cache, Object>. This approach was abandoned with ISPN-5932 [1], Adrian and Tristan can elaborate why. 
Radim [1] https://issues.jboss.org/browse/ISPN-5932 On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote: > I'm pretty sure it's a silly question, but I need to ask it :) > > Why can't we store all our internal information in a single, > replicated cache (of a type ). PURPOSE > could be an enum or a string identifying whether it's scripting cache, > transaction cache or anything else. The value (Map) > would store whatever you need. > > On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero > wrote: > > On 2 November 2017 at 22:20, Adrian Nistor > wrote: > > I like this proposal. > > +1 > > > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: > >> Hi all, > >> > >> I'm currently going through the JCache 1.1 proposed changes, > and one that made me think is [1]. In particular: > >> > >>> Caches do not use forward slashes (/) or colons (:) as part of > their names. Additionally it is > >>> recommended that cache names starting with java. or > javax.should not be used. > >> I'm wondering whether in the future we should move away from > the triple underscore trick we use for internal cache names, and > instead just prepend them with `org.infinispan`, which is our > group id. I think it'd be cleaner. > >> > >> Thoughts? 
> >> > >> [1] https://github.com/jsr107/jsr107spec/issues/350 > >> -- > >> Galder Zamarre?o > >> Infinispan, Red Hat > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss Performance Team From rory.odonnell at oracle.com Fri Nov 3 06:13:27 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Fri, 3 Nov 2017 10:13:27 +0000 Subject: [infinispan-dev] JDK 10 b29 Early Access is available on jdk.java.net Message-ID: <4f5f047b-cb32-9dc7-6bea-e4077817f8ae@oracle.com> Hi Galder, JDK 10 Early Access? build 29 is available at : - jdk.java.net/10/ JDK 10 Early Access Release Notes are available [1] JDK 10 Schedule, Status & Features are available [2] Notes * OpenJDK EA binaries will be available at a later date. * Oracle has proposed: Newer version-string scheme for the Java SE Platform and the JDK o Please see Mark Reinhold's proposal [3] , feedback via the mailing list to Mark please. Feedback - If you have suggestions or encounter bugs, please submit them using the usual Java SE bug-reporting channel. Be sure to include complete version information from the output of the |java --version| command. 
Regards, Rory [1] http://jdk.java.net/10/release-notes [2] http://openjdk.java.net/projects/jdk/10/ [3] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November/000089.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171103/750b6e6f/attachment-0001.html From galder at redhat.com Fri Nov 3 06:02:12 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 3 Nov 2017 11:02:12 +0100 Subject: [infinispan-dev] Counters and their configurations in Infinispan server (DMR) In-Reply-To: References: <33981678-3153-2739-e728-020a19bb255b@redhat.com> Message-ID: <4D39A03B-F258-4A03-B017-E785DCAD5983@redhat.com> An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171103/ddd6dedf/attachment.html From vblagoje at redhat.com Fri Nov 3 08:58:24 2017 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Fri, 3 Nov 2017 08:58:24 -0400 Subject: [infinispan-dev] Counters and their configurations in Infinispan server (DMR) In-Reply-To: <4D39A03B-F258-4A03-B017-E785DCAD5983@redhat.com> References: <33981678-3153-2739-e728-020a19bb255b@redhat.com> <4D39A03B-F258-4A03-B017-E785DCAD5983@redhat.com> Message-ID: Thanks Galder and Pedro. I'll implement them as you suggested! Cheers On 2017-11-03 6:02 AM, Galder Zamarre?o wrote: > At first glance, I'd agree with Pedro. > >> On 2 Nov 2017, at 16:07, Pedro Ruivo wrote: >> >> Hi, >> >> IMO, I would separate the concept of counter and configuration. >> >> Even if an user doesn't create many counters, I think most of them will >> share the same configuration. As a bad example, if you want to counter >> oranges and apples, you're going to use the same configuration... >> probably :) >> >> In addition, it is symmetric to the cache DMR tree. 
This would reduce >> the learning curve if the user is already used to cli (i.e create >> caches). >> >> Cheers, >> Pedro >> >> >> >> On 02-11-2017 12:33, Vladimir Blagojevic wrote: >>> Hey guys, >>> >>> How do you anticipate users are going to deal with counters? Are they >>> going to be creating a lot of them in their applications, say dozens, >>> hundreds, thousands? >>> >>> I am asking because I have a dilemma about their representation in DMR >>> and therefore in the admin console and potentially wider. The dilemma is >>> related to splitting the concepts and the mapping between counter >>> configuration and counter instances. On one end of the possible spectrum >>> use, if users are going to have many counters that have the same >>> configuration then it makes sense to delineate the DMR concept of the >>> counter configuration and its counter instance just like we do for >>> caches and cache configuration templates. We deal with cache >>> configurations as templates; one could create hundreds of caches from >>> the same template. Similarly, we can do with counters. On the other end >>> if users are going to create very few counters then it likely does not >>> make much sense to separate counter configurations from its instance, >>> they would have one to one mapping. For each new counter, users would >>> just enter counter configuration and launch an instance of a >>> corresponding counter. >>> >>> The first approach saves resources and makes large counter >>> instantiations easier while the second approach is easier to understand >>> conceptually but is inefficient if we are going to have many counter >>> instance. >>> >>> Thoughts? 
>>> >>> Vladimir >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -- > Galder Zamarre?o > Infinispan, Red Hat > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171103/dbd413d3/attachment.html From ttarrant at redhat.com Mon Nov 6 04:00:26 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 6 Nov 2017 10:00:26 +0100 Subject: [infinispan-dev] Beta1 this week Message-ID: <61958b75-69d1-6e91-b19e-1d63e487265a@redhat.com> Hey all, we will be releasing Beta1 this week, so please dedicate most of your time to reviewing and merging PRs (and implementing requested changes to your own PRs). Adrian is release wrangler. Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From anistor at redhat.com Mon Nov 6 04:46:45 2017 From: anistor at redhat.com (Adrian Nistor) Date: Mon, 6 Nov 2017 11:46:45 +0200 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: <82703f60-3b6d-0f6e-26a1-263ec52fe442@redhat.com> References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> <82703f60-3b6d-0f6e-26a1-263ec52fe442@redhat.com> Message-ID: <65a5f299-8fd0-a187-8951-e10698c2c280@redhat.com> Different internal caches have different needs regarding consistency, tx, persistence, etc... 
The first incarnation of ClusterRegistry was using a single cache and was implemented exactly as you suggested, but had major shortcomings satisfying the needs of several unrelated users, so we decided to split. On 11/03/2017 10:42 AM, Radim Vansa wrote: > Because you would have to duplicate entire Map on each update, unless > you used not-100%-so-far functional commands. We've used the ScopedKey > that would make this Cache, Object>. This > approach was abandoned with ISPN-5932 [1], Adrian and Tristan can > elaborate why. > > Radim > > [1] https://issues.jboss.org/browse/ISPN-5932 > > On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote: >> I'm pretty sure it's a silly question, but I need to ask it :) >> >> Why can't we store all our internal information in a single, >> replicated cache (of a type ). PURPOSE >> could be an enum or a string identifying whether it's scripting cache, >> transaction cache or anything else. The value (Map) >> would store whatever you need. >> >> On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero > > wrote: >> >> On 2 November 2017 at 22:20, Adrian Nistor > > wrote: >> > I like this proposal. >> >> +1 >> >> > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: >> >> Hi all, >> >> >> >> I'm currently going through the JCache 1.1 proposed changes, >> and one that made me think is [1]. In particular: >> >> >> >>> Caches do not use forward slashes (/) or colons (:) as part of >> their names. Additionally it is >> >>> recommended that cache names starting with java. or >> javax.should not be used. >> >> I'm wondering whether in the future we should move away from >> the triple underscore trick we use for internal cache names, and >> instead just prepend them with `org.infinispan`, which is our >> group id. I think it'd be cleaner. >> >> >> >> Thoughts? 
>> >> >> >> [1] https://github.com/jsr107/jsr107spec/issues/350 >> >> -- >> >> Galder Zamarre?o >> >> Infinispan, Red Hat >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > >> > >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Mon Nov 6 06:31:03 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 6 Nov 2017 12:31:03 +0100 Subject: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore In-Reply-To: <65a5f299-8fd0-a187-8951-e10698c2c280@redhat.com> References: <79C9D2B6-C425-4617-85F1-52B2253C20EB@redhat.com> <82703f60-3b6d-0f6e-26a1-263ec52fe442@redhat.com> <65a5f299-8fd0-a187-8951-e10698c2c280@redhat.com> Message-ID: <40590687-6bdd-fd3c-29a9-bb047c82323e@redhat.com> To add to Adrian's history lesson: ClusterRegistry (a single, replicated, non-persistent, scoped cache) was replaced with the InternalCacheRegistry which provides a common way for subsystems to register internal caches with the "traits" they want but configured to take into account some global settings. This means setting up proper security roles, persistent paths, etc. We do however have a proliferation of caches and in my ISPN-7776 PR I've reintroduced a scoped config/state cache which can be shared by interested parties. 
I do like the org.infinispan prefix for internal caches (and I've amended my PR to use that). I'm not that concerned about the additional payload, since most of the internal caches we have at the moment change infrequently (schema, script, topology, etc), but we should probably come up with a proper way to identify caches with a common short ID. Tristan On 11/6/17 10:46 AM, Adrian Nistor wrote: > Different internal caches have different needs regarding consistency, > tx, persistence, etc... > The first incarnation of ClusterRegistry was using a single cache and > was implemented exactly as you suggested, but had major shortcomings > satisfying the needs of several unrelated users, so we decided to split. > > On 11/03/2017 10:42 AM, Radim Vansa wrote: >> Because you would have to duplicate entire Map on each update, unless >> you used not-100%-so-far functional commands. We've used the ScopedKey >> that would make this Cache, Object>. This >> approach was abandoned with ISPN-5932 [1], Adrian and Tristan can >> elaborate why. >> >> Radim >> >> [1] https://issues.jboss.org/browse/ISPN-5932 >> >> On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote: >>> I'm pretty sure it's a silly question, but I need to ask it :) >>> >>> Why can't we store all our internal information in a single, >>> replicated cache (of a type ). PURPOSE >>> could be an enum or a string identifying whether it's scripting cache, >>> transaction cache or anything else. The value (Map) >>> would store whatever you need. >>> >>> On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero >> > wrote: >>> >>> On 2 November 2017 at 22:20, Adrian Nistor >> > wrote: >>> > I like this proposal. >>> >>> +1 >>> >>> > On 11/02/2017 03:18 PM, Galder Zamarre?o wrote: >>> >> Hi all, >>> >> >>> >> I'm currently going through the JCache 1.1 proposed changes, >>> and one that made me think is [1]. In particular: >>> >> >>> >>> Caches do not use forward slashes (/) or colons (:) as part of >>> their names. 
Additionally it is >>> >>> recommended that cache names starting with java. or >>> javax.should not be used. >>> >> I'm wondering whether in the future we should move away from >>> the triple underscore trick we use for internal cache names, and >>> instead just prepend them with `org.infinispan`, which is our >>> group id. I think it'd be cleaner. >>> >> >>> >> Thoughts? >>> >> >>> >> [1] https://github.com/jsr107/jsr107spec/issues/350 >>> >> -- >>> >> Galder Zamarre?o >>> >> Infinispan, Red Hat >>> >> >>> >> >>> >> _______________________________________________ >>> >> infinispan-dev mailing list >>> >> infinispan-dev at lists.jboss.org >>> >>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> > >>> > >>> > _______________________________________________ >>> > infinispan-dev mailing list >>> > infinispan-dev at lists.jboss.org >>> >>> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From dan.berindei at gmail.com Mon Nov 6 11:07:37 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 6 Nov 2017 16:07:37 +0000 Subject: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06 Message-ID: Hi everyone JBott wasn't available, so the meeting logs are available here: https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e Cheers Dan From sanne at infinispan.org Mon Nov 6 13:11:48 2017 From: sanne at infinispan.org 
(Sanne Grinovero) Date: Mon, 6 Nov 2017 18:11:48 +0000 Subject: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06 In-Reply-To: References: Message-ID: On 6 November 2017 at 16:07, Dan Berindei wrote: > Hi everyone > > JBott wasn't available, so the meeting logs are available here: > > https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e Not a particularly critical detail, but it's quite hard to follow who said what in this log format ;) Thanks, Sanne > > Cheers > Dan > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From karesti at redhat.com Mon Nov 6 14:23:16 2017 From: karesti at redhat.com (Katia Aresti) Date: Mon, 06 Nov 2017 19:23:16 +0000 Subject: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06 In-Reply-To: References: Message-ID: Totally agree with you Sanne, we need slack so this won't happen again! Le lun. 6 nov. 2017 à 20:10, Sanne Grinovero a écrit : > On 6 November 2017 at 16:07, Dan Berindei wrote: > > > Hi everyone > > > > > > JBott wasn't available, so the meeting logs are available here: > > > > > > https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e > > > > Not a particularly critical detail, but it's quite hard to follow who > > said what in this log format ;) > > > > Thanks, > > Sanne > > > > > > > > Cheers > > > Dan > > > _______________________________________________ > > > infinispan-dev mailing list > > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171106/04c31aed/attachment.html From ttarrant at redhat.com Mon Nov 6 14:41:37 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 6 Nov 2017 20:41:37 +0100 Subject: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06 In-Reply-To: References: Message-ID: If jbott is down there is not much we can do about it. Tristan On 11/6/17 7:11 PM, Sanne Grinovero wrote: > On 6 November 2017 at 16:07, Dan Berindei wrote: >> Hi everyone >> >> JBott wasn't available, so the meeting logs are available here: >> >> https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e > > Not a particularly critical detail, but it's quite hard to follow who > said what in this log format ;) > > Thanks, > Sanne > >> >> Cheers >> Dan >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From sanne at infinispan.org Mon Nov 6 18:16:07 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 6 Nov 2017 23:16:07 +0000 Subject: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06 In-Reply-To: References: Message-ID: On 6 November 2017 at 19:23, Katia Aresti wrote: > Totally agree with you Sanne, we need slack so this won't happen again! touché! > > Le lun. 6 nov. 2017 à
20:10, Sanne Grinovero a écrit > : >> >> On 6 November 2017 at 16:07, Dan Berindei wrote: >> >> > Hi everyone >> >> > >> >> > JBott wasn't available, so the meeting logs are available here: >> >> > >> >> > https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e >> >> >> >> Not a particularly critical detail, but it's quite hard to follow who >> >> said what in this log format ;) >> >> >> >> Thanks, >> >> Sanne >> >> >> >> > >> >> > Cheers >> >> > Dan >> >> > _______________________________________________ >> >> > infinispan-dev mailing list >> >> > infinispan-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From vblagoje at redhat.com Tue Nov 7 07:27:03 2017 From: vblagoje at redhat.com (Vladimir Blagojevic) Date: Tue, 7 Nov 2017 07:27:03 -0500 Subject: [infinispan-dev] Counters and their configurations in Infinispan server (DMR) In-Reply-To: References: <33981678-3153-2739-e728-020a19bb255b@redhat.com> <4D39A03B-F258-4A03-B017-E785DCAD5983@redhat.com> Message-ID: Hi everyone, Here is most of the implementation with example use outlined in https://github.com/infinispan/infinispan/pull/5570 Would you please review it as I am not an expert in DMR and I need one now :-) Regards, Vladimir On Fri, Nov 3, 2017 at 8:58 AM, Vladimir Blagojevic wrote: > Thanks Galder and Pedro. I'll implement them as you suggested! > Cheers > On 2017-11-03 6:02 AM, Galder Zamarreño wrote: > > At first glance, I'd agree with Pedro. > > On 2 Nov 2017, at 16:07, Pedro Ruivo > wrote: > > Hi, > > IMO, I would separate the concept of counter and configuration.
> > Even if a user doesn't create many counters, I think most of them will > share the same configuration. As a bad example, if you want to count > oranges and apples, you're going to use the same configuration... > probably :) > > In addition, it is symmetric to the cache DMR tree. This would reduce > the learning curve if the user is already used to the CLI (i.e. creating caches). > > Cheers, > Pedro > > > > On 02-11-2017 12:33, Vladimir Blagojevic wrote: > > Hey guys, > > How do you anticipate users are going to deal with counters? Are they > going to be creating a lot of them in their applications, say dozens, > hundreds, thousands? > > I am asking because I have a dilemma about their representation in DMR > and therefore in the admin console and potentially wider. The dilemma is > related to splitting the concepts and the mapping between counter > configuration and counter instances. At one end of the spectrum, > if users are going to have many counters that have the same > configuration then it makes sense to delineate the DMR concept of the > counter configuration and its counter instance just like we do for > caches and cache configuration templates. We deal with cache > configurations as templates; one could create hundreds of caches from > the same template. We could do the same with counters. At the other end, > if users are going to create very few counters then it likely does not > make much sense to separate counter configurations from their instances; > they would have a one-to-one mapping. For each new counter, users would > just enter a counter configuration and launch an instance of a > corresponding counter. > > The first approach saves resources and makes large counter > instantiations easier while the second approach is easier to understand > conceptually but is inefficient if we are going to have many counter > instances. > > Thoughts?
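The template-versus-instance trade-off described above can be sketched in a few lines. This is a hypothetical illustration only (plain Python, not the actual Infinispan or DMR API): option A keeps named configuration templates that many counter instances share, while option B embeds a configuration object in every counter.

```python
# Hypothetical sketch of the two modelling options discussed in this thread.
# Option A: named configuration templates shared by many counter instances.
# Option B: one embedded configuration per counter (one-to-one mapping).

class CounterConfig:
    def __init__(self, initial_value=0, storage="VOLATILE"):
        self.initial_value = initial_value
        self.storage = storage

# Option A: a registry of templates; each counter just references a template.
templates = {"default": CounterConfig()}
counters_a = {name: templates["default"] for name in ("oranges", "apples")}

# Option B: every counter carries its own configuration object.
counters_b = {name: CounterConfig() for name in ("oranges", "apples")}

# With templates, a thousand counters share one config object;
# with embedded configs, each counter pays for its own copy.
assert counters_a["oranges"] is counters_a["apples"]
assert counters_b["oranges"] is not counters_b["apples"]
```

The saving in option A is exactly the point made above: hundreds of counters created from one template cost one configuration, at the price of one extra concept to learn.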
> > Vladimir > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarreño > Infinispan, Red Hat > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171107/e889718c/attachment-0001.html From slaskawi at redhat.com Tue Nov 7 10:14:07 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Tue, 07 Nov 2017 15:14:07 +0000 Subject: [infinispan-dev] The future of Infinispan Docker image Message-ID: Hey! Together with Ryan we are thinking about the future of the Infinispan Docker image [1]. Currently we use a single Dockerfile and a bootstrap script which is responsible for setting up memory limits and creating/generating (if necessary) credentials. Our build pipeline uses Docker Hub integration hooks, so whenever we push a new commit (or a tag) our images are rebuilt. This is a very simple and powerful setup. However, we are thinking about bringing product and project images closer together and possibly reusing some bits (a common example might be Jolokia - those bits could be easily reused without touching the core server distribution). This, however, requires converting our image to a framework called Concreate [2].
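For readers who have not seen Concreate before, an image is described by a single YAML descriptor. The sketch below is illustrative only, loosely modeled on the JDG example linked in [3]; treat the field names and values as assumptions rather than a working descriptor:

```yaml
# Illustrative Concreate image descriptor (assumed field names, based on [3]).
schema_version: 1
name: "jboss/infinispan-server"   # hypothetical image name
version: "9.1.0"
from: "centos:7"
description: "Infinispan Server"
ports:
  - value: 11222                  # Hot Rod endpoint
  - value: 8080                   # REST endpoint
modules:
  repositories:
    # Modules can live in external git repositories and be reused across images.
    - git:
        url: https://github.com/jboss-openshift/cct_module.git
        ref: master
  install:
    - name: dynamic-resources     # e.g. shared memory-limit handling
run:
  user: 185
  cmd: ["/opt/jboss/infinispan-server/bin/standalone.sh"]
```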
Concreate divides setup scripts into modules which are later assembled into a single Dockerfile and built. Modules can also be pulled from other public git repositories, and I consider this the most powerful option. It is also worth mentioning that Concreate is based on a YAML file - here's an example of the JDG image [3]. As you can see, this can be quite a change so I would like to reach out for some opinions. The biggest issue I can see is that we will lose our Docker Hub build pipeline and we will need to build and push images on our CI (which already does this locally for Online Services). WDYT? Thanks, Sebastian [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server [2] http://concreate.readthedocs.io/en/latest/ [3] https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171107/1eb51a0d/attachment.html From gustavo at infinispan.org Tue Nov 7 13:43:18 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Tue, 7 Nov 2017 18:43:18 +0000 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: References: Message-ID: IMHO we should ship things like scripts, external modules, drivers, etc. with the server itself, leaving the least amount of logic in the Docker image. What you are proposing is the opposite: introducing a templating engine that adds a level of indirection to the Docker image (the Dockerfile is generated), plus it grabs jars, modules, scripts, XMLs, etc. from potentially external sources and applies several patches to the server at Docker image creation time. WRT the Docker Hub, I think it could be used with Concreate by using hooks; I did a quick experiment with a Docker Hub automated build that dynamically generates a Dockerfile in [1], but I guess the biggest question is whether the added overall complexity is worth it.
I'm leaning towards a -1, but would like to hear more opinions :) [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ Thanks, Gustavo On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec wrote: > Hey! > > Together with Ryan we are thinking about the future of Infinispan Docker > image [1]. > > Currently we use a single Dockerfile and a bootstrap script which is > responsible for setting up memory limits and creating/generating (if > necessary) credentials. Our build pipeline uses Docker HUB integration > hooks, so whenever we push a new commit (or a tag) our images are being > rebuilt. This is very simple to understand and very powerful setup. > > However we are thinking about bringing product and project images closer > together and possibly reusing some bits (a common example might be Jolokia > - those bits could be easily reused without touching core server > distribution). This however requires converting our image to a framework > called Concreate [2]. Concreate divides setup scripts into modules which > are later on assembled into a single Dockerfile and built. Modules can also > be pulled from other public git repository and I consider this as the most > powerful option. It is also worth to mention, that Concreate is based on > YAML file - here's an example of JDG image [3]. > > As you can see, this can be quite a change so I would like to reach out > for some opinions. The biggest issue I can see is that we will lose our > Docker HUB build pipeline and we will need to build and push images on our > CI (which already does this locally for Online Services). > > WDYT? 
> > Thanks, > Sebastian > > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server > [2] http://concreate.readthedocs.io/en/latest/ > [3] https://github.com/jboss-container-images/jboss- > datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171107/e31cf4f9/attachment.html From slaskawi at redhat.com Thu Nov 9 06:33:40 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Thu, 09 Nov 2017 11:33:40 +0000 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: References: Message-ID: That's a very good point Gustavo. Let me try to iterate on the pros and cons of each approach: - Putting all bits into the distribution: - Pros: - Unified approach for both project and product - Supporting all platforms with a single distribution - Cons: - Long turnaround from the community to the product-based bits (like Online Services) - Some work has already been done in the Concreate-based approach (like Jolokia) and battle-tested (e.g. with EAP).
It allows to externalize modules into separate repositories which promotes code reuse (e.g. we could easily use Jolokia integration implemented for EAP and at the same time provide our own custom configuration for it). Of course most of the bits assume that underlying OS is RHEL which is not true for the community (community images use CentOS) so there might be some mismatch there but it's definitely something to start with. The final argument that made me change my mind was turnaround loop. Going through all those releases is quite time-consuming and sometimes we just need to update micro version to fix something. A nice example of this is KUBE_PING which had a memory leak - with concreate-based approach we could fix it in one day; but as long as it is in distribution, we need to wait whole release cycle. Thanks, Sebastian On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes wrote: > IMHO we should ship things like scripts, external modules, drivers, etc > with the server itself, leaving the least amount of logic in the Docker > image. > > What you are proposing is the opposite: introducing a templating engine > that adds a level of indirection to the Docker image (the Dockerfile is > generated) plus > it grabs jars, modules, scripts, xmls, etc from potentially external > sources and does several patches to the server at Docker image creation > time. > > WRT the docker hub, I think it could be used with Concreate by using > hooks, I did a quick experiment of a Dockerhub automated build having a > dynamically generating a Dockerfile in [1], but I guess > the biggest question is if the added overall complexity is worth it. I'm > leaning towards a -1, but would like to hear more opinions :) > > [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ > > Thanks, > Gustavo > > On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec > wrote: > >> Hey! >> >> Together with Ryan we are thinking about the future of Infinispan Docker >> image [1]. 
>> >> Currently we use a single Dockerfile and a bootstrap script which is >> responsible for setting up memory limits and creating/generating (if >> necessary) credentials. Our build pipeline uses Docker HUB integration >> hooks, so whenever we push a new commit (or a tag) our images are being >> rebuilt. This is very simple to understand and very powerful setup. >> >> However we are thinking about bringing product and project images closer >> together and possibly reusing some bits (a common example might be Jolokia >> - those bits could be easily reused without touching core server >> distribution). This however requires converting our image to a framework >> called Concreate [2]. Concreate divides setup scripts into modules which >> are later on assembled into a single Dockerfile and built. Modules can also >> be pulled from other public git repository and I consider this as the most >> powerful option. It is also worth to mention, that Concreate is based on >> YAML file - here's an example of JDG image [3]. >> >> As you can see, this can be quite a change so I would like to reach out >> for some opinions. The biggest issue I can see is that we will lose our >> Docker HUB build pipeline and we will need to build and push images on our >> CI (which already does this locally for Online Services). >> >> WDYT? 
>> >> Thanks, >> Sebastian >> >> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server >> [2] http://concreate.readthedocs.io/en/latest/ >> [3] >> https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171109/cce65fa2/attachment-0001.html From galder at redhat.com Thu Nov 9 15:20:41 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Thu, 9 Nov 2017 21:20:41 +0100 Subject: [infinispan-dev] Fwd: [infinispan/infinispan] ISPN-8113 Querying via Rest endpoint (#5557) References: Message-ID: This is HUGE!! Kudos to Gustavo for the hard work you've done to get this in!! > Begin forwarded message: > > From: Adrian Nistor > Subject: Re: [infinispan/infinispan] ISPN-8113 Querying via Rest endpoint (#5557) > Date: 7 November 2017 at 09:57:08 CET > To: infinispan/infinispan > Cc: Subscribed > Reply-To: infinispan/infinispan > > Integrated. Thanks @gustavonalle ! > > You are receiving this because you are subscribed to this thread. > Reply to this email directly, view it on GitHub, or mute the thread. > -- Galder Zamarreño Infinispan, Red Hat -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171109/b7a22d7c/attachment.html From gustavo at infinispan.org Fri Nov 10 12:31:11 2017 From: gustavo at infinispan.org (Gustavo Fernandes) Date: Fri, 10 Nov 2017 17:31:11 +0000 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: References: Message-ID: IMHO the cons are much more significant than the pros, here's a few more: - Increases the barrier to users/contributors, forcing them to learn a new tool if they need to customize the image; - Prevents usage of new/existing features in the Dockerfile, such as [1], at least until the generator supports it; - Makes the integration with Dockerhub harder. Furthermore, integrating Jolokia and DB drivers are trivial tasks; they hardly justify migrating the image completely just to be able to re-use some external scripts to patch the server at Docker build time. With relation to the release cycle, well, this is another discussion. As far as Infinispan is concerned, it takes roughly 1h to release both the project and the docker image :) So my vote is -1 [1] https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds Thanks, Gustavo On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec wrote: > That's a very good point Gustavo. > > Let me try to iterate on pros and cons of each approach: > > - Putting all bits into distribution: > - Pros: > - Unified approach for both project and product > - Supporting all platforms with a single distribution > - Cons: > - Long turnaround from community to the product based bits (like > Online Services) > - Some work has already been done in Concreate-based approach > (like Jolokia) and battle-tested (e.g. with EAP).
> - Putting all additional bits into integration layers > (Concreate-based approach): > - Pros: > - Short turnaround, in most of the cases we need to patch the > integration bits only > - Some integration bits has already been implemented for us > (Joloka, DB drivers etc) > - Cons: > - Some integrations bits needs to be reimplemented, e.g. > KUBE_PING > - Each integration layer needs to have its own code (e.g. > community Docker image, xPaaS images, Online Services) > > I must admit that in the past I was a pretty big fan of putting all bits > into community distribution and driving it forward from there. But this > actually changed once Concreate tool appeared. It allows to externalize > modules into separate repositories which promotes code reuse (e.g. we could > easily use Jolokia integration implemented for EAP and at the same time > provide our own custom configuration for it). Of course most of the bits > assume that underlying OS is RHEL which is not true for the community > (community images use CentOS) so there might be some mismatch there but > it's definitely something to start with. The final argument that made me > change my mind was turnaround loop. Going through all those releases is > quite time-consuming and sometimes we just need to update micro version to > fix something. A nice example of this is KUBE_PING which had a memory leak > - with concreate-based approach we could fix it in one day; but as long as > it is in distribution, we need to wait whole release cycle. > > Thanks, > Sebastian > > On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes > wrote: > >> IMHO we should ship things like scripts, external modules, drivers, etc >> with the server itself, leaving the least amount of logic in the Docker >> image. 
>> >> What you are proposing is the opposite: introducing a templating engine >> that adds a level of indirection to the Docker image (the Dockerfile is >> generated) plus >> it grabs jars, modules, scripts, xmls, etc from potentially external >> sources and does several patches to the server at Docker image creation >> time. >> >> WRT the docker hub, I think it could be used with Concreate by using >> hooks, I did a quick experiment of a Dockerhub automated build having a >> dynamically generating a Dockerfile in [1], but I guess >> the biggest question is if the added overall complexity is worth it. I'm >> leaning towards a -1, but would like to hear more opinions :) >> >> [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ >> >> Thanks, >> Gustavo >> >> On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec >> wrote: >> >>> Hey! >>> >>> Together with Ryan we are thinking about the future of Infinispan Docker >>> image [1]. >>> >>> Currently we use a single Dockerfile and a bootstrap script which is >>> responsible for setting up memory limits and creating/generating (if >>> necessary) credentials. Our build pipeline uses Docker HUB integration >>> hooks, so whenever we push a new commit (or a tag) our images are being >>> rebuilt. This is very simple to understand and very powerful setup. >>> >>> However we are thinking about bringing product and project images closer >>> together and possibly reusing some bits (a common example might be Jolokia >>> - those bits could be easily reused without touching core server >>> distribution). This however requires converting our image to a framework >>> called Concreate [2]. Concreate divides setup scripts into modules which >>> are later on assembled into a single Dockerfile and built. Modules can also >>> be pulled from other public git repository and I consider this as the most >>> powerful option. It is also worth to mention, that Concreate is based on >>> YAML file - here's an example of JDG image [3]. 
>>> >>> As you can see, this can be quite a change so I would like to reach out >>> for some opinions. The biggest issue I can see is that we will lose our >>> Docker HUB build pipeline and we will need to build and push images on our >>> CI (which already does this locally for Online Services). >>> >>> WDYT? >>> >>> Thanks, >>> Sebastian >>> >>> [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server >>> [2] http://concreate.readthedocs.io/en/latest/ >>> [3] https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171110/39cb266b/attachment-0001.html From sanne at infinispan.org Fri Nov 10 12:40:14 2017 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 10 Nov 2017 18:40:14 +0100 Subject: [infinispan-dev] Fwd: [infinispan/infinispan] ISPN-8113 Querying via Rest endpoint (#5557) In-Reply-To: References: Message-ID: +1 very interesting! On 9 November 2017 at 21:20, Galder Zamarreño wrote: > This is HUGE!! Kudos to Gustavo for the hard work you've done to get this > in!!
> > Begin forwarded message: > > *From: *Adrian Nistor > *Subject: **Re: [infinispan/infinispan] ISPN-8113 Querying via Rest > endpoint (#5557)* > *Date: *7 November 2017 at 09:57:08 CET > *To: *infinispan/infinispan > *Cc: *Subscribed > *Reply-To: *infinispan/infinispan > > Integrated. Thanks @gustavonalle ! > > You are receiving this because you are subscribed to this thread. > Reply to this email directly, view it on GitHub, > or mute the thread. > > > -- > Galder Zamarreño > Infinispan, Red Hat > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171110/c596d519/attachment.html From ttarrant at redhat.com Mon Nov 13 11:32:00 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 13 Nov 2017 17:32:00 +0100 Subject: [infinispan-dev] Infinispan IRC meeting logs 2017-11-13 Message-ID: <4f525c44-fd52-8386-0040-9b9efd70b25e@redhat.com> Here are the logs for this week's IRC meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-11-13-15.02.log.html Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From anistor at redhat.com Tue Nov 14 09:53:50 2017 From: anistor at redhat.com (Adrian Nistor) Date: Tue, 14 Nov 2017 16:53:50 +0200 Subject: [infinispan-dev] Infinispan 9.2.0.Beta1 and 9.1.3.Final have been released Message-ID: Hello everyone, Our first beta release of the Infinispan 9.2 stream is available, as well as a new release of our stable branch (9.1).
I welcome you to read all about it on our team blog at https://goo.gl/asvaS3 Cheers, Adrian From galder at redhat.com Wed Nov 15 05:13:53 2017 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Wed, 15 Nov 2017 11:13:53 +0100 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: References: Message-ID: I lean towards Gustavo's arguments, so -1 from me. > On 10 Nov 2017, at 18:31, Gustavo Fernandes wrote: > > IMHO the cons are much more significant than the pros, here's a few more: > > - Increase the barrier to users/contributors, forcing them to learn a new tool if they need to customize the image; > - Prevents usage of new/existent features in the Dockerfile, such as [1], at least until the generator supports it; > - Makes the integration with Dockerhub harder. > > Furthermore, integrating Jolokia and DB drivers are trivial tasks, it hardly justifies migrating the image completely just to be able to re-use some external scripts to patch the server at Docker build time. > > With relation to the release cycle, well, this is another discussion. As far as Infinispan is concerned, it takes roughly 1h to release both the project and the docker image :) > > So my vote is -1 > > [1] https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds > > Thanks, > Gustavo > > On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec wrote: > That's a very good point Gustavo. > > Let me try to iterate on pros and cons of each approach: > - Putting all bits into distribution: > - Pros: > - Unified approach for both project and product > - Supporting all platforms with a single distribution > - Cons: > - Long turnaround from community to the product based bits (like Online Services) > - Some work has already been done in Concreate-based approach (like Jolokia) and battle-tested (e.g. with EAP). > - Putting all additional bits into integration layers (Concreate-based approach): > - Pros: > -
Short turnaround, in most of the cases we need to patch the integration bits only > - Some integration bits has already been implemented for us (Joloka, DB drivers etc) > - Cons: > - Some integrations bits needs to be reimplemented, e.g. KUBE_PING > - Each integration layer needs to have its own code (e.g. community Docker image, xPaaS images, Online Services) > I must admit that in the past I was a pretty big fan of putting all bits into community distribution and driving it forward from there. But this actually changed once Concreate tool appeared. It allows to externalize modules into separate repositories which promotes code reuse (e.g. we could easily use Jolokia integration implemented for EAP and at the same time provide our own custom configuration for it). Of course most of the bits assume that underlying OS is RHEL which is not true for the community (community images use CentOS) so there might be some mismatch there but it's definitely something to start with. The final argument that made me change my mind was turnaround loop. Going through all those releases is quite time-consuming and sometimes we just need to update micro version to fix something. A nice example of this is KUBE_PING which had a memory leak - with concreate-based approach we could fix it in one day; but as long as it is in distribution, we need to wait whole release cycle. > > Thanks, > Sebastian > > On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes wrote: > IMHO we should ship things like scripts, external modules, drivers, etc with the server itself, leaving the least amount of logic in the Docker image. > > What you are proposing is the opposite: introducing a templating engine that adds a level of indirection to the Docker image (the Dockerfile is generated) plus > it grabs jars, modules, scripts, xmls, etc from potentially external sources and does several patches to the server at Docker image creation time.
> > WRT the docker hub, I think it could be used with Concreate by using hooks, I did a quick experiment of a Dockerhub automated build having a dynamically generating a Dockerfile in [1], but I guess > the biggest question is if the added overall complexity is worth it. I'm leaning towards a -1, but would like to hear more opinions :) > > [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ > > Thanks, > Gustavo > > On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec wrote: > Hey! > > Together with Ryan we are thinking about the future of Infinispan Docker image [1]. > > Currently we use a single Dockerfile and a bootstrap script which is responsible for setting up memory limits and creating/generating (if necessary) credentials. Our build pipeline uses Docker HUB integration hooks, so whenever we push a new commit (or a tag) our images are being rebuilt. This is very simple to understand and very powerful setup. > > However we are thinking about bringing product and project images closer together and possibly reusing some bits (a common example might be Jolokia - those bits could be easily reused without touching core server distribution). This however requires converting our image to a framework called Concreate [2]. Concreate divides setup scripts into modules which are later on assembled into a single Dockerfile and built. Modules can also be pulled from other public git repository and I consider this as the most powerful option. It is also worth to mention, that Concreate is based on YAML file - here's an example of JDG image [3]. > > As you can see, this can be quite a change so I would like to reach out for some opinions. The biggest issue I can see is that we will lose our Docker HUB build pipeline and we will need to build and push images on our CI (which already does this locally for Online Services). > > WDYT? 
> > Thanks, > Sebastian > > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server > [2] http://concreate.readthedocs.io/en/latest/ > [3] https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño Infinispan, Red Hat From ttarrant at redhat.com Mon Nov 20 04:19:48 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Mon, 20 Nov 2017 10:19:48 +0100 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: References: Message-ID: <39bcb244-38c2-7cd6-fe30-36ff661be3f8@redhat.com> I tend to agree with Gustavo. The docker image should be as straightforward as possible. All the fancy build tools and layerings just create multiple levels of indirection. It also makes things more brittle. So -1 from me. Tristan On 11/10/17 6:31 PM, Gustavo Fernandes wrote: > IMHO the cons are much more significant than the pros, here's a few more: > > - Increase the barrier to users/contributors, forcing them to learn a > new tool if they need to customize the image; > - Prevents usage of new/existent features in the Dockerfile, such as > [1], at least until the generator supports it; > - Makes the integration with Dockerhub harder. 
> > Furthermore, integrating Jolokia and DB drivers are trivial tasks, it > hardly justifies migrating the image completely just to be able to > re-use some external scripts to patch the server at Docker build time. > > With relation to the release cycle, well, this is another discussion. As > far as Infinispan is concerned, it takes roughly 1h to release both the > project and the docker image :) > > So my vote is -1 > > [1] > https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds > > > Thanks, > Gustavo > > On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec > > wrote: > > That's a very good point Gustavo. > > Let me try to iterate on pros and cons of each approach: > > * Putting all bits into distribution: > o Pros: > + Unified approach for both project and product > + Supporting all platforms with a single distribution > o Cons: > + Long turnaround from community to the product based bits > (like Online Services) > + Some work has already been done in Concreate-based > approach (like Jolokia) and battle-tested (e.g. with EAP). > * Putting all additional bits into integration layers > (Concreate-based approach): > o Pros: > + Short turnaround, in most of the cases we need to patch > the integration bits only > + Some integration bits has already been implemented for > us (Joloka, DB drivers etc) > o Cons: > + Some integrations bits needs to be reimplemented, e.g. > KUBE_PING > + Each integration layer needs to have its own code (e.g. > community Docker image, xPaaS images, Online Services) > > I must admit that in the past I was a pretty big fan of putting all > bits into community distribution and driving it forward from there. > But this actually changed once Concreate tool appeared. It allows to > externalize modules into separate repositories which promotes code > reuse (e.g. we could easily use Jolokia integration implemented for > EAP and at the same time provide our own custom configuration for > it). 
Of course most of the bits assume that underlying OS is RHEL > which is not true for the community (community images use CentOS) so > there might be some mismatch there but it's definitely something to > start with. The final argument that made me change my mind was > turnaround loop. Going through all those releases is quite > time-consuming and sometimes we just need to update micro version to > fix something. A nice example of this is KUBE_PING which had a > memory leak - with concreate-based approach we could fix it in one > day; but as long as it is in distribution, we need to wait whole > release cycle. > > Thanks, > Sebastian > > On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes > > wrote: > > IMHO we should ship things like scripts, external modules, > drivers, etc with the server itself, leaving the least amount of > logic in the Docker image. > > What you are proposing is the opposite: introducing a templating > engine that adds a level of indirection to the Docker image (the > Dockerfile is generated) plus > it grabs jars, modules, scripts, xmls, etc from potentially > external sources and does several patches to the server at > Docker image creation time. > > WRT the docker hub, I think it could be used with Concreate by > using hooks, I did a quick experiment of a Dockerhub automated > build having a dynamically generating a Dockerfile in [1], but I > guess > the biggest question is if the added overall complexity is worth > it. I'm leaning towards a -1, but would like to hear more > opinions :) > > [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ > > > Thanks, > Gustavo > > On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec > > wrote: > > Hey! > > Together with Ryan we are thinking about the future of > Infinispan Docker image [1]. > > Currently we use a single Dockerfile and a bootstrap script > which is responsible for setting up memory limits and > creating/generating (if necessary) credentials. 
Our build > pipeline uses Docker HUB integration hooks, so whenever we > push a new commit (or a tag) our images are being rebuilt. > This is very simple to understand and very powerful setup. > > However we are thinking about bringing product and project > images closer together and possibly reusing some bits (a > common example might be Jolokia - those bits could be easily > reused without touching core server distribution). This > however requires converting our image to a framework called > Concreate [2]. Concreate divides setup scripts into modules > which are later on assembled into a single Dockerfile and > built. Modules can also be pulled from other public git > repository and I consider this as the most powerful option. > It is also worth to mention, that Concreate is based on YAML > file - here's an example of JDG image [3]. > > As you can see, this can be quite a change so I would like > to reach out for some opinions. The biggest issue I can see > is that we will lose our Docker HUB build pipeline and we > will need to build and push images on our CI (which already > does this locally for Online Services). > > WDYT? 
> > Thanks, > Sebastian > > [1] > https://github.com/jboss-dockerfiles/infinispan/tree/master/server > > [2] http://concreate.readthedocs.io/en/latest/ > > [3] > https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From slaskawi at redhat.com Mon Nov 20 08:39:11 2017 From: slaskawi at redhat.com (Sebastian Laskawiec) Date: Mon, 20 Nov 2017 13:39:11 +0000 Subject: [infinispan-dev] The future of Infinispan Docker image In-Reply-To: <39bcb244-38c2-7cd6-fe30-36ff661be3f8@redhat.com> References: <39bcb244-38c2-7cd6-fe30-36ff661be3f8@redhat.com> Message-ID: Agreed then. We'll stick with the plain Dockerfile. Thanks everyone for the good discussion and for putting good arguments on the table. On Mon, Nov 20, 2017 at 10:28 AM Tristan Tarrant wrote: > I tend to agree with Gustavo. > The docker image should be as straightforward as possible. All the fancy > build tools and layerings just create multiple levels of indirection. It > also makes things more brittle. > > So -1 from me. 
> > Tristan > > On 11/10/17 6:31 PM, Gustavo Fernandes wrote: > > IMHO the cons are much more significant than the pros, here's a few more: > > > > - Increase the barrier to users/contributors, forcing them to learn a > > new tool if they need to customize the image; > > - Prevents usage of new/existent features in the Dockerfile, such as > > [1], at least until the generator supports it; > > - Makes the integration with Dockerhub harder. > > > > Furthermore, integrating Jolokia and DB drivers are trivial tasks, it > > hardly justifies migrating the image completely just to be able to > > re-use some external scripts to patch the server at Docker build time. > > > > With relation to the release cycle, well, this is another discussion. As > > far as Infinispan is concerned, it takes roughly 1h to release both the > > project and the docker image :) > > > > So my vote is -1 > > > > [1] > > > https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds > > < > https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds > > > > > > Thanks, > > Gustavo > > > > On Thu, Nov 9, 2017 at 11:33 AM, Sebastian Laskawiec > > > wrote: > > > > That's a very good point Gustavo. > > > > Let me try to iterate on pros and cons of each approach: > > > > * Putting all bits into distribution: > > o Pros: > > + Unified approach for both project and product > > + Supporting all platforms with a single distribution > > o Cons: > > + Long turnaround from community to the product based bits > > (like Online Services) > > + Some work has already been done in Concreate-based > > approach (like Jolokia) and battle-tested (e.g. with > EAP). 
> > * Putting all additional bits into integration layers > > (Concreate-based approach): > > o Pros: > > + Short turnaround, in most of the cases we need to patch > > the integration bits only > > + Some integration bits has already been implemented for > > us (Joloka, DB drivers etc) > > o Cons: > > + Some integrations bits needs to be reimplemented, e.g. > > KUBE_PING > > + Each integration layer needs to have its own code (e.g. > > community Docker image, xPaaS images, Online Services) > > > > I must admit that in the past I was a pretty big fan of putting all > > bits into community distribution and driving it forward from there. > > But this actually changed once Concreate tool appeared. It allows to > > externalize modules into separate repositories which promotes code > > reuse (e.g. we could easily use Jolokia integration implemented for > > EAP and at the same time provide our own custom configuration for > > it). Of course most of the bits assume that underlying OS is RHEL > > which is not true for the community (community images use CentOS) so > > there might be some mismatch there but it's definitely something to > > start with. The final argument that made me change my mind was > > turnaround loop. Going through all those releases is quite > > time-consuming and sometimes we just need to update micro version to > > fix something. A nice example of this is KUBE_PING which had a > > memory leak - with concreate-based approach we could fix it in one > > day; but as long as it is in distribution, we need to wait whole > > release cycle. > > > > Thanks, > > Sebastian > > > > On Tue, Nov 7, 2017 at 8:07 PM Gustavo Fernandes > > > wrote: > > > > IMHO we should ship things like scripts, external modules, > > drivers, etc with the server itself, leaving the least amount of > > logic in the Docker image. 
> > > > What you are proposing is the opposite: introducing a templating > > engine that adds a level of indirection to the Docker image (the > > Dockerfile is generated) plus > > it grabs jars, modules, scripts, xmls, etc from potentially > > external sources and does several patches to the server at > > Docker image creation time. > > > > WRT the docker hub, I think it could be used with Concreate by > > using hooks, I did a quick experiment of a Dockerhub automated > > build having a dynamically generating a Dockerfile in [1], but I > > guess > > the biggest question is if the added overall complexity is worth > > it. I'm leaning towards a -1, but would like to hear more > > opinions :) > > > > [1] https://hub.docker.com/r/gustavonalle/dockerhub-test/ > > > > > > Thanks, > > Gustavo > > > > On Tue, Nov 7, 2017 at 3:14 PM, Sebastian Laskawiec > > > wrote: > > > > Hey! > > > > Together with Ryan we are thinking about the future of > > Infinispan Docker image [1]. > > > > Currently we use a single Dockerfile and a bootstrap script > > which is responsible for setting up memory limits and > > creating/generating (if necessary) credentials. Our build > > pipeline uses Docker HUB integration hooks, so whenever we > > push a new commit (or a tag) our images are being rebuilt. > > This is very simple to understand and very powerful setup. > > > > However we are thinking about bringing product and project > > images closer together and possibly reusing some bits (a > > common example might be Jolokia - those bits could be easily > > reused without touching core server distribution). This > > however requires converting our image to a framework called > > Concreate [2]. Concreate divides setup scripts into modules > > which are later on assembled into a single Dockerfile and > > built. Modules can also be pulled from other public git > > repository and I consider this as the most powerful option. 
> > It is also worth to mention, that Concreate is based on YAML > > file - here's an example of JDG image [3]. > > > > As you can see, this can be quite a change so I would like > > to reach out for some opinions. The biggest issue I can see > > is that we will lose our Docker HUB build pipeline and we > > will need to build and push images on our CI (which already > > does this locally for Online Services). > > > > WDYT? > > > > Thanks, > > Sebastian > > > > [1] > > > https://github.com/jboss-dockerfiles/infinispan/tree/master/server > > < > https://github.com/jboss-dockerfiles/infinispan/tree/master/server> > > [2] http://concreate.readthedocs.io/en/latest/ > > > > [3] > > > https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml > > < > https://github.com/jboss-container-images/jboss-datagrid-7-openshift-image/blob/datagrid71-dev/image.yaml > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org infinispan-dev at lists.jboss.org> > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Tristan Tarrant > Infinispan Lead > JBoss, a division of Red Hat > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next 
part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20171120/575a6e87/attachment.html From dan.berindei at gmail.com Mon Nov 20 10:59:50 2017 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 20 Nov 2017 15:59:50 +0000 Subject: [infinispan-dev] IRC meeting logs 2017-11-20 Message-ID: Hi all Here are the logs of today's IRC meeting: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-11-20-15.09.log.html Cheers Dan From ttarrant at redhat.com Tue Nov 21 02:18:05 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 21 Nov 2017 08:18:05 +0100 Subject: [infinispan-dev] Weekly Infinispan Meeting IRC Logs 2017-11-20 Message-ID: <30474b5e-5735-d4ce-8310-264100fee3af@redhat.com> Dear all, yesterday's meeting logs are here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-11-20-15.09.log.html Since I wasn't present, here is my update: - ISPN-8543 Pluggable global configuration persistence I've extracted the persistence logic from the global configuration stuff introduced in ISPN-7776 so that in the future we can add alternate providers, such as ones for server standalone and domain mode, kubernetes configmap, etc. - ISPN-8529 Implement cache admin ops over the REST protocol I wrote a quick implementation of this using WebDAV protocol ops, but at Gustavo's suggestion I'm reworking it to make it more RESTful. I should have it done this week. I've also spent a little time looking at some test failures and trying to iron out the two component upgrade PRs. 
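To show what the rework from WebDAV-style ops to a more RESTful shape means in practice, here is a hypothetical sketch in curl. The endpoint paths, payload, and method choices below are assumptions for illustration, not the actual Infinispan REST API:

```shell
SERVER="http://localhost:8080"

# WebDAV-style: a cache is created as if it were a DAV collection
# (hypothetical endpoint, for illustration only).
create_cache_webdav() {
  curl -X MKCOL "$SERVER/rest/mycache"
}

# RESTful style: a cache is a resource identified by its URL, created by
# sending its configuration as the request body
# (hypothetical endpoint, for illustration only).
create_cache_restful() {
  curl -X POST -H "Content-Type: application/xml" \
    --data '<distributed-cache name="mycache"/>' \
    "$SERVER/rest/caches/mycache"
}
```

The RESTful variant keeps the cache addressable as a plain resource, so the same URL can later serve GET (inspect the configuration) and DELETE (remove the cache) without protocol extensions.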
Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From ttarrant at redhat.com Tue Nov 21 02:22:12 2017 From: ttarrant at redhat.com (Tristan Tarrant) Date: Tue, 21 Nov 2017 08:22:12 +0100 Subject: [infinispan-dev] 9.2.0 schedule Message-ID: <3443760a-5a32-2a31-ec4d-de8e3df2de20@redhat.com> Dear all, I've made some changes to the 9.2.0 schedule to adapt it to the delays introduced during this cycle. Because of the Christmas holidays, I've moved Final over to January. 2017-11-29 9.2.0.Beta2 2017-12-13 9.2.0.CR1 2018-01-10 9.2.0.Final Hack on! Tristan -- Tristan Tarrant Infinispan Lead JBoss, a division of Red Hat From galder at redhat.com Wed Nov 22 07:27:21 2017 From: galder at redhat.com (Galder Zamarreño) Date: Wed, 22 Nov 2017 13:27:21 +0100 Subject: [infinispan-dev] infinispan-bom needs some love Message-ID: <486C6D24-F7C1-4487-8CFC-1FCA36ADCA01@redhat.com> Hi all, Re: https://issues.jboss.org/browse/ISPN-8552 Re: https://issues.jboss.org/browse/ISPN-8408 Just fell off my chair with ^ Did I somehow miss a discussion on ISPN-8408? Anything that changes infinispan-bom needs to be discussed in this list :| Can someone elaborate what problem ISPN-8408 is trying to fix in infinispan-bom exactly? I have personally not heard anyone complaining about it. From my POV, the easiest way to consume Infinispan is leaving the infinispan-bom as it was. So, the vert.x way. If we want a different "bom" that doesn't contain Infinispan modules, maybe we can add it separately and not break existing examples/apps... 
but it really needs to solve a problem :| Cheers, -- Galder Zamarreño Infinispan, Red Hat From rory.odonnell at oracle.com Tue Nov 28 10:35:11 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 28 Nov 2017 15:35:11 +0000 Subject: [infinispan-dev] JDK 10 Early Access b33 and JDK 8u162 Early Access b03 are available on jdk.java.net Message-ID: <2ae90723-9766-0838-3eb5-f775a5a0f16f@oracle.com> Hi Galder, *JDK 10 Early Access build 33 is available at: jdk.java.net/10/* Notable changes since the previous email: JDK-8180019 - *javadoc treats failure to access a URL as an error, not a warning.* If javadoc cannot access the contents of a URL provided with the -link or -linkoffline options, the tool will now report an error. Previously, the tool continued with a warning, producing incorrect documentation output. JDK-8175094 - *The java.security.acl APIs are deprecated, for removal* The deprecated java.security.acl APIs are now marked with forRemoval=true and are subject to removal in a future version of Java SE. JDK-8175091 - *The java.security.{Certificate,Identity,IdentityScope,Signer} APIs are deprecated, for removal* The deprecated java.security.{Certificate, Identity, IdentityScope, Signer} classes are now marked with forRemoval=true and are subject to removal in a future version of Java SE. JDK 10 Schedule, Status & Features are available [1] Notes * OpenJDK EA binaries will be available at a later date. * Oracle has proposed a newer version-string scheme for the Java SE Platform and the JDK o Please see Mark Reinhold's proposal [2] *JDK 8u162 Early Access build 03 is available at: http://jdk.java.net/8/* *Feedback* - If you have suggestions or encounter bugs, please submit them using the usual Java SE bug-reporting channel. Be sure to include complete version information from the output of the |java --version| command. 
Regards, Rory [1] http://openjdk.java.net/projects/jdk/10/ [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2017-November/000089.html -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland