From sanne at hibernate.org Tue Jan 2 06:00:49 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 2 Jan 2018 11:00:49 +0000 Subject: [hibernate-dev] Releasing the JavaDoc jars as well In-Reply-To: References: Message-ID: On 24 December 2017 at 14:23, Steve Ebersole wrote: > Sure, but the question remains :P It just adds another one: What I meant to suggest is: if we agree that we're not going to bother with publishing javadocs for "internal", we're effectively getting rid of one of the 3 "groups"; one to go.. Personally I believe the SPI/API package differentiation is enough of a gray area to not need physical separation of javadoc archives, so I'd just make a single javadoc output for all SPI/API. > Should internal packages be generated into the javadocs (individual and/or > aggregated)? > Should the individual javadocs (only intended for publishing to Central) > group the packages into api/spi(/internal) the way we do for the aggregated > javadocs? > > Personally I think filtering out internal packages is a great idea. > > Regarding grouping packages, I think it's not worth the effort for the > individual ones - just have an overview for these that just notes this > distinction. +1 to keep it simple, for both us and users: I don't think people will want to learn about the techniques we use to keep our projects organized as a prerequisite to finding the javadocs they are after. Thanks, Sanne > > On Sat, Dec 23, 2017 at 6:53 AM Sanne Grinovero wrote: >> >> On 22 December 2017 at 18:16, Steve Ebersole wrote: >> > I wanted to get everyone's opinion about the api/spi/internal package >> > grouping we do in the aggregated Javadoc in regards to the per-module >> > javadocs. Adding this logic adds significant overhead to the process of >> > building the Javadoc, to the point where I am considering not performing >> > that grouping there. >> > >> > Thoughts? >> >> For Hibernate Search we recently decided to not produce javadocs at >> all for "internal"; everything else is just documented as a single >> group. >> >> That cuts on the "need to know" complexity of end users. Advanced >> users who could have benefitted from knowing more about the internals >> will likely have sources. >> >> > >> > On Tue, Dec 12, 2017 at 11:37 AM Vlad Mihalcea >> > wrote: >> >> >> >> I tested it locally, and when publishing the jars to Maven local, the >> >> JavaDoc is now included. >> >> >> >> Don't know if there's anything to be done about it. >> >> >> >> Vlad >> >> >> >> On Mon, Dec 11, 2017 at 9:32 PM, Sanne Grinovero >> >> wrote: >> >> >> >> > +1 to merge it (if it works - which I didn't check) >> >> > >> >> > Some history can easily be found: >> >> > - >> >> > >> >> > http://lists.jboss.org/pipermail/hibernate-dev/2017-January/015758.html >> >> > >> >> > Thanks, >> >> > Sanne >> >> > >> >> > >> >> > On 11 December 2017 at 15:24, Vlad Mihalcea >> >> > wrote: >> >> > > Hi, >> >> > > >> >> > > I've noticed this Pull Request which is valid and worth >> >> > > integrating: >> >> > > >> >> > > https://github.com/hibernate/hibernate-orm/pull/2078 >> >> > > >> >> > > Before I merge it, I wanted to make sure whether this change was >> >> > accidental >> >> > > or intentional. >> >> > > >> >> > > Was there any reason not to ship the JavaDoc jars along with the >> >> > > release >> >> > > artifacts and the sources jars as well?
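In concrete terms, the internal-package filtering discussed above maps onto the standard javadoc tool's -exclude option. A minimal sketch using the javax.tools API; the source path and exclude list here are hypothetical, not the actual build configuration:

    import javax.tools.DocumentationTool;
    import javax.tools.ToolProvider;

    public class JavadocWithoutInternals {
        public static void main(String[] args) {
            // The documentation tool bundled with the JDK (javadoc).
            DocumentationTool javadoc = ToolProvider.getSystemDocumentationTool();
            // Document all org.hibernate subpackages except the "internal"
            // ones; run() returns 0 on success, non-zero on error.
            int result = javadoc.run(null, System.out, System.err,
                    "-sourcepath", "hibernate-core/src/main/java", // hypothetical path
                    "-subpackages", "org.hibernate",
                    "-exclude", "org.hibernate.internal:org.hibernate.engine.internal",
                    "-d", "target/javadoc");
            System.exit(result);
        }
    }

The api/spi grouping debated above would be the separate -group option layered on top of such a run, which is the piece Steve describes as adding overhead to the build.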
>> >> > > >> >> > > Thanks, >> >> > > Vlad >> >> > > _______________________________________________ >> >> > > hibernate-dev mailing list >> >> > > hibernate-dev at lists.jboss.org >> >> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >> > >> >> _______________________________________________ >> >> hibernate-dev mailing list >> >> hibernate-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Tue Jan 2 14:15:14 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 19:15:14 +0000 Subject: [hibernate-dev] ORM & Java 9 - strange javadoc failure In-Reply-To: References: Message-ID: Sanne, have you had a chance to look at this? If not, I may have to just disable Java 9 from Travis On Wed, Dec 27, 2017 at 8:37 PM Steve Ebersole wrote: > I worked on getting Travis CI set up on ORM for reasons discussed here > previously. But I am running into a really strange error when I enabled > Java 9: > > javadoc: error - An exception occurred while building a component: > ClassSerializedForm > (com.sun.tools.javac.code.Symbol$CompletionFailure: class file for > org.hibernate.engine.Mapping not found) > Please file a bug against the javadoc tool via the Java bug reporting page > (http://bugreport.java.com) after checking the Bug Database ( > http://bugs.java.com) > for duplicates. Include error messages and the following diagnostic in > your report. Thank you. > com.sun.tools.javac.code.Symbol$CompletionFailure: class file for > org.hibernate.engine.Mapping not found > > It seems like javadoc is complaining because it sees a reference to a > class (org.hibernate.engine.Mapping) that it cannot find. It is true that > there is no class named org.hibernate.engine.Mapping, the real name is > org.hibernate.engine.spi.Mapping - but what is strange is that I search the > entire ORM project and found zero references to the String > org.hibernate.engine.Mapping. > > I just kicked off a run of the ORM / Java 9 Jenkins job to see if it has > the same failure. > > Anyone have any ideas? > > From smarlow at redhat.com Tue Jan 2 14:37:35 2018 From: smarlow at redhat.com (Scott Marlow) Date: Tue, 2 Jan 2018 14:37:35 -0500 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero wrote: > Any dependency injection framework will have some capability to define > the graph of dependencies across components, and such graph could be > very complex, with details only known to the framework. > > I don't think we can solve the integration by having "before all > others" / "after all others" phases as that's too coarse grained to > define a full graph; we need to find a way to have the DI framework > take in consideration our additional components both in terms of DI > consumers and providers - then let the framework wire up things in the > order it prefers. This is also to allow the DI engine to print > appropriate warnings for un-resolvable situations with its native > error handling, which would resolve in more familiar error messages. > > If that's not doable *or a priority* then all we can do is try to make > it clear enough that there will be limitations and hopefully describe > these clearly. Some of such limitations might be puzzling as you > describe. 
> > > > On 20 December 2017 at 12:50, Yoann Rodiere wrote: > > Hello all, > > > > TL;DR: Application-scoped beans cannot be used as part of the @PreDestroy > > method of ORM-instantiated CDI beans, and it's a bit odd because they can > > be used as part of the @PostConstruct method. > > > > I've been testing the CDI integration in Hibernate ORM for the past few > > days, trying to integrate it into Search. I think I've discovered something > > odd: when CDI-managed beans are destroyed, they cannot access other > > Application-scoped CDI beans anymore. Not sure whether this is a problem or > > not, so maybe we should discuss it a bit before going forward with the > > current behavior. > > > > Short reminder: scopes define when CDI beans are created and destroyed. > > @ApplicationScoped is pretty self-explanatory: created when the application > > starts and destroyed when it stops. Some other scopes are a bit more > > convoluted: @Singleton basically means created *before* the application > > starts and destroyed *after* the application stops (and also means "this > > bean shall not be proxied"), @Dependent means created when an instance is > > requested and destroyed when the instance is released, etc. > > > > The thing is, Hibernate ORM is typically started very early and shut down > > very late in the CDI lifecycle - at least within WildFly. So when Hibernate > > starts, CDI Application-scoped beans haven't been instantiated yet, and it > > turns out that when Hibernate ORM shuts down, CDI has already destroyed > > Application-scoped beans. > > > > Regarding startup, Steve and Scott solved the problem by delaying bean > > instantiation to some point in the future when the Application scope is > > active (and thus Application-scoped beans are available). This makes it > > possible to use Application-scoped beans within ORM-instantiated beans as > > soon as the latter are constructed (i.e. within their @PostConstruct > > methods). > > However, when Hibernate ORM shuts down, the Application scope has already > > been terminated. So when ORM destroys the beans it instantiated, those > > ORM-instantiated beans cannot call a method on referenced > > Application-scoped beans (CDI proxies will throw an exception). > > > > All in all, the only type of beans we can currently use in a @PreDestroy > > method of an ORM-instantiated bean is @Dependent beans. @Singleton beans > > will work, but only because they are not proxied and thus you can cheat and > > use them even after they have been destroyed... which I definitely wouldn't > > recommend. > > > > I see two ways to handle the issue: > > > > 1. We don't change anything, and simply document somewhere that beans > > instantiated as part of the CDI integration are instantiated within the > > Application scope, but are destroyed outside of it. And we suggest that any > > bean used in a @PreDestroy method in an ORM-instantiated bean (directly or > > not) must have either a @Dependent scope, or a @Singleton scope and no > > @PreDestroy method. > > 2. We implement an "early shut-down" somehow, which would bring forward > > bean destruction to some time when the Application scope is still active. > org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we could look at introducing a beanManagerDestroyed notification, if that is useful and we can find a way to implement it (javax.enterprise.inject.spi.BeforeShutdown [1] is not early enough to meet your requirements).
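For reference, the ExtendedBeanManager contract mentioned here is a push-style callback through which the container hands ORM the BeanManager once it is actually usable. A simplified sketch of that listener contract, with the destroy notification under discussion added as a second, purely hypothetical callback (it is not existing API):

    import javax.enterprise.inject.spi.BeanManager;

    public interface ExtendedBeanManager {
        void registerLifecycleListener(LifecycleListener lifecycleListener);

        interface LifecycleListener {
            // Existing callback: fired once the Application scope is active,
            // which is how ORM delays bean instantiation until CDI is ready.
            void beanManagerInitialized(BeanManager beanManager);

            // Hypothetical addition: fired by the container before the
            // Application scope is torn down, so ORM could release its
            // CDI-instantiated beans while their dependencies still resolve.
            void beanManagerDestroyed(BeanManager beanManager);
        }
    }

A container like WildFly would then invoke beanManagerDestroyed from its own shutdown sequence, at a point earlier than the BeforeShutdown extension event referenced above.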
Scott [1] https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html > > > > #1 may be enough for now, even though the behavior feels a bit odd, and > > forces users to resort to less-than-ideal practices (using a @Singleton > > bean after it has been destroyed). > > > > #2 would require changes in WildFly and may be a bit complex. In > > particular, if we aren't careful, Application-scoped beans may not be > able > > to use Hibernate ORM from within their @PreDestroy methods... Which is > > probably not a good idea. So we would have to find a solution together > with > > the WildFly team. Also to be considered: Hibernate Search would have to > be > > shut down just before the "early shut-down" of Hibernate ORM occurs, > > because Hibernate Search cannot function at all without the beans it > > retrieves from the CDI context. > > > > Thoughts? > > > > > > Yoann Rodi?re > > Hibernate NoORM Team > > yoann at hibernate.org > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Tue Jan 2 14:42:40 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 19:42:40 +0000 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: Scott, how would we register a listener for this event? The problem we have had with most CDI "listeners" so far is that they are non-contextual, meaning there has been no way to link that back to a specific SessionFactory.. If I can register this listener with a reference back to the Sessionfactory, this should actually be fine. On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: > On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero > wrote: > > > Any dependency injection framework will have some capability to define > > the graph of dependencies across components, and such graph could be > > very complex, with details only known to the framework. > > > > I don't think we can solve the integration by having "before all > > others" / "after all others" phases as that's too coarse grained to > > define a full graph; we need to find a way to have the DI framework > > take in consideration our additional components both in terms of DI > > consumers and providers - then let the framework wire up things in the > > order it prefers. This is also to allow the DI engine to print > > appropriate warnings for un-resolvable situations with its native > > error handling, which would resolve in more familiar error messages. > > > > If that's not doable *or a priority* then all we can do is try to make > > it clear enough that there will be limitations and hopefully describe > > these clearly. Some of such limitations might be puzzling as you > > describe. > > > > > > > > On 20 December 2017 at 12:50, Yoann Rodiere wrote: > > > Hello all, > > > > > > TL;DR: Application-scoped beans cannot be used as part of the > @PreDestroy > > > method of ORM-instantiated CDI beans, and it's a bit odd because they > can > > > be used as part of the @PostConstruct method. > > > > > > I've been testing the CDI integration in Hibernate ORM for the past few > > > days, trying to integrate it into Search. I think I've discovered > > something > > > odd: when CDI-managed beans are destroyed, they cannot access other > > > Application-scoped CDI beans anymore. 
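A minimal illustration of that failure mode, with hypothetical bean names; the injected reference is a normal-scoped CDI proxy, so the call succeeds in @PostConstruct but fails once the Application scope has ended:

    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    @ApplicationScoped
    class CacheService {
        void register(String key) { /* ... */ }
        void release(String key) { /* ... */ }
    }

    // Stands in for a bean instantiated by ORM through its CDI integration,
    // e.g. an AttributeConverter or an entity listener.
    class OrmInstantiatedBean {
        @Inject
        CacheService cacheService; // contextual reference, i.e. a proxy

        @PostConstruct
        void init() {
            cacheService.register("orm"); // OK: Application scope is active
        }

        @PreDestroy
        void destroy() {
            // ORM shuts down after the Application scope has been terminated,
            // so this call ends in a ContextNotActiveException from the proxy.
            cacheService.release("orm");
        }
    }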
Not sure whether this is a > problem > > or > > > not, so maybe we should discuss it a bit before going forward with the > > > current behavior. > > > > > > Short reminder: scopes define when CDI beans are created and destroyed. > > > @ApplicationScoped is pretty self-explanatory: created when the > > application > > > starts and destroyed when it stops. Some other scopes are a bit more > > > convoluted: @Singleton basically means created *before* the application > > > starts and destroyed *after* the application stops (and also means > "this > > > bean shall not be proxied"), @Dependent means created when an instance > is > > > requested and destroyed when the instance is released, etc. > > > > > > The thing is, Hibernate ORM is typically started very early and shut > down > > > very late in the CDI lifecycle - at least within WildFly. So when > > Hibernate > > > starts, CDI Application-scoped beans haven't been instantiated yet, and > > it > > > turns out that when Hibernate ORM shuts down, CDI has already destroyed > > > Application-scoped beans. > > > > > > Regarding startup, Steve and Scott solved the problem by delaying bean > > > instantiation to some point in the future when the Application scope is > > > active (and thus Application-scoped beans are available). This makes it > > > possible to use Application-scoped beans within ORM-instantiated beans > as > > > soon as the latter are constructed (i.e. within their @PostConstruct > > > methods). > > > However, when Hibernate ORM shuts down, the Application scope has > already > > > been terminated. So when ORM destroys the beans it instantiated, those > > > ORM-instantiated beans cannot call a method on referenced > > > Application-scoped beans (CDI proxies will throw an exception). > > > > > > All in all, the only type of beans we can currently use in a > @PreDestroy > > > method of an ORM-instantiated bean is @Dependent beans. @Singleton > beans > > > will work, but only because they are not proxied and thus you can cheat > > and > > > use them even after they have been destroyed... which I definitely > > wouldn't > > > recommend. > > > > > > I see two ways to handle the issue: > > > > > > 1. We don't change anything, and simply document somewhere that > beans > > > instantiated as part of the CDI integration are instantiated within > > the > > > Application scope, but are destroyed outside of it. And we suggest > > that any > > > bean used in @PostDestroy method in an ORM-instantiated bean > > (directly or > > > not) must have either a @Dependent scope, or a @Singleton scope and > no > > > @PostDestroy method. > > > 2. We implement an "early shut-down" somehow, which would bring > > forward > > > bean destruction to some time when the Application scope is still > > active. > > > > org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we could > look at introducing a beanManagerDestroyed notification, if that is useful > and we can find a way to implement it (javax.enterprise.spi.BeforeShutdown > [1] is not early enough to meet your requirements). > > Scott > > [1] > > https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html > > > > > > > > #1 may be enough for now, even though the behavior feels a bit odd, and > > > forces users to resort to less-than-ideal practices (using a @Singleton > > > bean after it has been destroyed). > > > > > > #2 would require changes in WildFly and may be a bit complex. 
In > > > particular, if we aren't careful, Application-scoped beans may not be > > able > > > to use Hibernate ORM from within their @PreDestroy methods... Which is > > > probably not a good idea. So we would have to find a solution together > > with > > > the WildFly team. Also to be considered: Hibernate Search would have to > > be > > > shut down just before the "early shut-down" of Hibernate ORM occurs, > > > because Hibernate Search cannot function at all without the beans it > > > retrieves from the CDI context. > > > > > > Thoughts? > > > > > > > > > Yoann Rodi?re > > > Hibernate NoORM Team > > > yoann at hibernate.org > > > _______________________________________________ > > > hibernate-dev mailing list > > > hibernate-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From chris at hibernate.org Tue Jan 2 14:43:32 2018 From: chris at hibernate.org (Chris Cranford) Date: Tue, 2 Jan 2018 14:43:32 -0500 Subject: [hibernate-dev] Realising the JavaDoc jars as well In-Reply-To: References: Message-ID: <64d228b0-011e-dd94-1d55-78a4efc37120@hibernate.org> I agree with Andrea. On 12/29/2017 09:14 AM, andrea boriero wrote: > +1 for filtering out internal packages. > > not a strong opinion on grouping > > On 24 December 2017 at 14:23, Steve Ebersole wrote: > >> Sure, but the question remains :P It just adds another one: >> >> >> 1. Should internal packages be generated into the javadocs (individual >> and/or aggregated)? >> 2. Should the individual javadocs (only intended for publishing to >> Central) group the packages into api/spi(/internal) the way we do for >> the >> aggregated javadocs? >> >> Personally I think filtering out internal packages is a great idea. >> >> Regarding grouping packages, I think its not worth the effort for the >> individual ones - just have an overview for these that just notes this >> distinction. >> >> On Sat, Dec 23, 2017 at 6:53 AM Sanne Grinovero >> wrote: >> >>> On 22 December 2017 at 18:16, Steve Ebersole >> wrote: >>>> I wanted to get everyone's opinion about the api/spi/internal package >>>> grouping we do in the aggregated Javadoc in regards to the per-module >>>> javadocs. Adding this logic adds significant overhead to the process >> of >>>> building the Javadoc, to the point where I am considering not >> performing >>>> that grouping there. >>>> >>>> Thoughts? >>> For Hibernate Search we recently decided to not produce javadocs at >>> all for "internal"; everything else is just documented as a single >>> group. >>> >>> That cuts on the "need to know" complexity of end users. Advanced >>> users who could have benefitted from knowing more about the internals >>> will likely have sources. >>> >>>> On Tue, Dec 12, 2017 at 11:37 AM Vlad Mihalcea < >> mihalcea.vlad at gmail.com> >>>> wrote: >>>>> I tested it locally, and when publishing the jars to Maven local, the >>>>> JavaDoc is now included. >>>>> >>>>> Don't know if there's anything to be done about it. 
>>>>> >>>>> Vlad >>>>> >>>>> On Mon, Dec 11, 2017 at 9:32 PM, Sanne Grinovero >>>> wrote: >>>>> >>>>>> +1 to merge it (if it works - which I didn't check) >>>>>> >>>>>> Some history can easily be found: >>>>>> - >>>>>> >>> http://lists.jboss.org/pipermail/hibernate-dev/2017-January/015758.html >>>>>> Thanks, >>>>>> Sanne >>>>>> >>>>>> >>>>>> On 11 December 2017 at 15:24, Vlad Mihalcea < >> mihalcea.vlad at gmail.com> >>>>>> wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I've noticed this Pull Request which is valid and worth >> integrating: >>>>>>> https://github.com/hibernate/hibernate-orm/pull/2078 >>>>>>> >>>>>>> Before I merge it, I wanted to make sure whether this change was >>>>>> accidental >>>>>>> or intentional. >>>>>>> >>>>>>> Was there any reason not to ship the JavaDoc jars along with the >>>>>>> release >>>>>>> artifacts and the sources jars as well? >>>>>>> >>>>>>> Thanks, >>>>>>> Vlad >>>>>>> _______________________________________________ >>>>>>> hibernate-dev mailing list >>>>>>> hibernate-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Tue Jan 2 14:49:42 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 19:49:42 +0000 Subject: [hibernate-dev] Realising the JavaDoc jars as well In-Reply-To: <64d228b0-011e-dd94-1d55-78a4efc37120@hibernate.org> References: <64d228b0-011e-dd94-1d55-78a4efc37120@hibernate.org> Message-ID: This is already what I have done over a week ago ;) On Tue, Jan 2, 2018 at 1:43 PM Chris Cranford wrote: > I agree with Andrea. > > > On 12/29/2017 09:14 AM, andrea boriero wrote: > > +1 for filtering out internal packages. > > not a strong opinion on grouping > > On 24 December 2017 at 14:23, Steve Ebersole wrote: > > > Sure, but the question remains :P It just adds another one: > > > 1. Should internal packages be generated into the javadocs (individual > and/or aggregated)? > 2. Should the individual javadocs (only intended for publishing to > Central) group the packages into api/spi(/internal) the way we do for > the > aggregated javadocs? > > Personally I think filtering out internal packages is a great idea. > > Regarding grouping packages, I think its not worth the effort for the > individual ones - just have an overview for these that just notes this > distinction. > > On Sat, Dec 23, 2017 at 6:53 AM Sanne Grinovero > wrote: > > > On 22 December 2017 at 18:16, Steve Ebersole > > wrote: > > I wanted to get everyone's opinion about the api/spi/internal package > grouping we do in the aggregated Javadoc in regards to the per-module > javadocs. Adding this logic adds significant overhead to the process > > of > > building the Javadoc, to the point where I am considering not > > performing > > that grouping there. > > Thoughts? > > > For Hibernate Search we recently decided to not produce javadocs at > all for "internal"; everything else is just documented as a single > group. > > That cuts on the "need to know" complexity of end users. 
Advanced > users who could have benefitted from knowing more about the internals > will likely have sources. > > > > On Tue, Dec 12, 2017 at 11:37 AM Vlad Mihalcea < > > mihalcea.vlad at gmail.com> > > wrote: > > > I tested it locally, and when publishing the jars to Maven local, the > JavaDoc is now included. > > Don't know if there's anything to be done about it. > > Vlad > > On Mon, Dec 11, 2017 at 9:32 PM, Sanne Grinovero > wrote: > > > +1 to merge it (if it works - which I didn't check) > > Some history can easily be found: > - > > > http://lists.jboss.org/pipermail/hibernate-dev/2017-January/015758.html > > > Thanks, > Sanne > > > On 11 December 2017 at 15:24, Vlad Mihalcea < > > mihalcea.vlad at gmail.com> > > wrote: > > Hi, > > I've noticed this Pull Request which is valid and worth > > integrating: > > https://github.com/hibernate/hibernate-orm/pull/2078 > > Before I merge it, I wanted to make sure whether this change was > > accidental > > or intentional. > > Was there any reason not to ship the JavaDoc jars along with the > release > artifacts and the sources jars as well? > > Thanks, > Vlad > _______________________________________________ > hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev > > _______________________________________________ > hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev > > _______________________________________________ > hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev > > _______________________________________________ > hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > From steve at hibernate.org Tue Jan 2 14:54:42 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 19:54:42 +0000 Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers Message-ID: The legacy ORM jobs (5.1-based ones at least) are getting triggered when they should not be. Generally they all show that the run is triggered by a "SCM change", but it does not show any changes. The underlying problem (although I am at a loss as to why) is that there have indeed been SCM changes pushed to Github, but against completely different branches. As far as I can tell these jobs' Github settings are correct. Any ideas what is going on? This would not be such a big deal if the CI environment did not throttle all waiting jobs down to one active job. So the jobs I am actually interested in are forced to wait (sometimes over an hour) for these jobs that should not even be running. From gunnar at hibernate.org Tue Jan 2 16:12:09 2018 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 2 Jan 2018 22:12:09 +0100 Subject: [hibernate-dev] Using Hibernate ORM as automatic JPMS modules In-Reply-To: References: Message-ID: 2017-12-28 18:07 GMT+01:00 Steve Ebersole : > Gunnar, back to the original discussion... > > I asked you about this specifically in Paris and you responded "no" - but > reading info I have found online seems to indicate that it is indeed > perfectly valid to build with Java 9 and include a module-info.class into > the jar and be able to load that into Java 8 (module-info is simply > ignored). Are the resources I have read just wrong? > I think it's feasible to do this (assuming of course that the class files other than module-info.class are compiled to Java 8 byte code level).
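Concretely, such a JAR would carry a descriptor along the following lines, with module-info.java compiled by Java 9 while all other classes stay at the Java 8 byte code level. This is a hypothetical sketch loosely modeled on the exploration branch, not a proposed descriptor for ORM:

    // module-info.java
    // Declared "open" so reflective access and resource lookups keep working
    // (see the XSD discussion that follows); per-package "opens" clauses
    // would be the stricter alternative.
    open module org.hibernate.orm.core {
        requires java.sql;
        requires java.naming;
        requires java.persistence; // the JPA API's stable module name

        exports org.hibernate;
        exports org.hibernate.cfg;
        // ... further api/spi packages; "internal" packages stay unexported
    }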
The module-info.class file should be ignored, although there are some exceptions, e.g. Jandex used to stumble upon the module-info.class file until recently (as it's doing its own scanning of JARs). This one has been fixed, but other libraries doing their own scanning may have similar problems. What's definitely not recommended is to publish a modularized JAR with dependencies on non-stable module names (i.e. dependencies on any JARs which neither have an Automatic-Module-Name header nor a proper module-info.class descriptor, see http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html). Besides that I'd expect a few more changes to be required in order to make ORM a modularized JAR. One specific issue is related to retrieval of XSDs (or more generally any resources loaded from ORM's own module). Currently this is done via ClassLoader#getResource() (see PersistenceXmlParser.resolveLocalSchema() and ClassLoaderServiceImpl), which will return null for resources in named modules unless that module is opened (see https://docs.oracle.com/javase/9/docs/api/java/lang/ClassLoader.html#getResource-java.lang.String-), which of course isn't desirable for the ORM module. The recommended way to load resources from your own module is to use Class#getResource() (or an equivalent on Module, which would tie this code to Java 9, though) as per Mark Reinhold (see https://stackoverflow.com/questions/45166757/loading-classes-and-resources-in-java-9/45173837 ). I've pushed an update to the "orm-modularized" branch in my exploration repo ( https://github.com/gunnarmorling/hibernate-orm-on-java9-modules/tree/orm-modularized), which makes ORM a named module and runs the same test as before. By making it an open module, the issue described above is circumvented, but I wouldn't be surprised if there were more issues of that kind when doing some more advanced tests for other ORM functionality. > > > On Thu, Dec 28, 2017 at 9:06 AM Steve Ebersole > wrote: > > > After tweaking this, here is what I have... > > > > Manifest-Version: 1.0 > > Created-By: 1.8.0_121 (Oracle Corporation) > > Main-Class: org.hibernate.Version > > > > Specification-Title: hibernate-core > > Specification-Version: 5.3 > > Specification-Vendor: Hibernate.org > > > > Implementation-Title: hibernate-core > > Implementation-Version: 5.3.0.SNAPSHOT > > Implementation-Vendor-Id: org.hibernate > > Implementation-Vendor: Hibernate.org > > Implementation-Url: http://hibernate.org > > > > Automatic-Module-Name: org.hibernate.orm.core > > > > Bundle-ManifestVersion: 2 > > Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.8))" > > Tool: Bnd-3.4.0.201707252008 > > > > Bundle-SymbolicName: org.hibernate.orm.core > > Bundle-Version: 5.3.0.SNAPSHOT > > Bundle-Name: hibernate-core > > Bundle-Description: A module of the Hibernate O/RM project > > Bundle-Vendor: Hibernate.org > > Bundle-DocURL: http://www.hibernate.org/orm/5.3 > > Bnd-LastModified: 1513615321000 > > > > Import-Package: ... > > Export-Package: ... > > > > > > Which looks great to me... > > > > On Wed, Dec 27, 2017 at 3:39 PM Steve Ebersole > > wrote: > > > >> I had intended this for 5.3 which hasn't even gone Beta yet (we won't have > >> an Alpha). > >> > >> On Wed, Dec 27, 2017 at 3:38 PM Brett Meyer > wrote: > >>> +1 from me on making them consistent. In practice, Bundle-SymbolicName > >>> isn't used for much, other than a guaranteed unique identifier.
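Coming back to the XSD-loading point above: in a named module the safer pattern is the Class-relative lookup, sketched below with a hypothetical resource path:

    import java.io.InputStream;

    class LocalSchemaLoader {
        InputStream openXsd() {
            // Resolves against this class's own module, so it keeps working
            // for a named module without the package having to be opened.
            return LocalSchemaLoader.class.getResourceAsStream(
                    "/org/hibernate/xsd/mapping/example-mapping.xsd"); // hypothetical

            // The ClassLoader-based variant used today returns null once the
            // resource lives in a named module that is not open:
            // LocalSchemaLoader.class.getClassLoader()
            //         .getResourceAsStream("org/hibernate/xsd/mapping/example-mapping.xsd");
        }
    }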
One of > >>> the Karaf guys pointed out that Bundle-SymbolicName is used to link a > >>> fragment bundle to its host bundle, but we've been able to avoid > >>> fragments like the plague on purpose. > >>> > >>> In practice, most users should be pulling in and interacting with our > >>> bundles purely through Maven artifacts or our features.xml, so a change > >>> would largely be unnoticed. > >>> > >>> We still might consider holding off doing that until at least a minor > >>> version change, since there is a potential issue for any tooling that > >>> might be relying on that (logging/auditing, etc.)... > >>> > >>> > >>> On 12/23/17 11:38 PM, Steve Ebersole wrote: > >>> > Another thing I was noticing was an annoying minor difference between > >>> the > >>> > OSGi bundle name and the Java 9 module name: > >>> > > >>> > Automatic-Module-Name: org.hibernate.orm.core > >>> > Bundle-SymbolicName: org.hibernate.core > >>> > > >>> > Does it make sense to adjust the OSGi bundle name to follow the > module > >>> > naming? > >>> > > >>> > On Sat, Dec 23, 2017 at 8:47 AM Steve Ebersole > >>> wrote: > >>> > > >>> >> I already did a PR for the `Automatic-Module-Name` yesterday and > >>> added you > >>> >> as a reviewer. when you get a chance... > >>> >> > >>> >> > >>> >> > >>> >> On Sat, Dec 23, 2017 at 8:36 AM Gunnar Morling < > gunnar at hibernate.org> > >>> >> wrote: > >>> >> > >>> >>> 2017-12-22 23:07 GMT+01:00 Steve Ebersole : > >>> >>> > >>> >>>> I created a Jira to track this: > >>> >>>> https://hibernate.atlassian.net/browse/HHH-12188 > >>> >>>> > >>> >>>> On Fri, Dec 22, 2017 at 5:33 AM Steve Ebersole < > steve at hibernate.org > >>> > > >>> >>>> wrote: > >>> >>>> > >>> >>>>> Thanks for investigating this Gunnar. > >>> >>>>> > >>> >>>>> Some thoughts inline... > >>> >>>>> > >>> >>>>> On Wed, Dec 20, 2017 at 3:54 PM Gunnar Morling < > >>> gunnar at hibernate.org> > >>> >>>>> wrote: > >>> >>>>> > >>> >>>>> > >>> >>>>>> * JDK 9 comes with an incomplete JTA module (java.transaction), > >>> so a > >>> >>>>>> complete one must be provided via --upgrade-module-path (I'm > >>> using the > >>> >>>>>> 2.0.0.Alpha1 version Tomaz Cerar has created for that purpose) > >>> >>>>>> > >>> >>>>> Do you know if there is a plan to fix this in Java 9? Seems > >>> bizarre > >>> >>>>> that Java 9 expects all kinds of strict modularity from libraries > >>> and > >>> >>>>> applications when the JDK itself can't follow that.. > >>> >>>>> > >>> >>> The "java.transaction" module of the JDK is marked with > >>> >>> @Deprecated(forRemoval=true) as of Java 9, but I don't know when > the > >>> >>> removal will happen. There's JEP 320 for this ( > >>> >>> http://openjdk.java.net/jeps/320), which also describes why the > >>> module > >>> >>> exists in its current form. It's not scheduled for Java 10 > >>> currently, and > >>> >>> given the latter is in rampdown already, I wouldn't expect this > >>> removal to > >>> >>> happen before Java 11. > >>> >>> > >>> >>> > >>> >>>>>> * hibernate-jpa-2.1-api-1.0.0.Final.jar can't be used as an > >>> automatic > >>> >>>>>> module, as the automatic naming algorithm stumples upon the > >>> numbers > >>> >>>>>> (2.1) > >>> >>>>>> within the module name it derives; I'm therefore using my > ModiTect > >>> >>>>>> tooling ( > >>> >>>>>> https://github.com/moditect/moditect/) to convert the JPA API > JAR > >>> >>>>>> into an > >>> >>>>>> explicit module on the fly > >>> >>>>>> > >>> >>>>> We actually no longer use that artifact as a dependency. 
Since > JPA > >>> >>>>> 2.2, the EG publishes a "blessed" API jar which is what we use > as a > >>> >>>>> dependency. > >>> >>>>> > >>> >>> Ah, yes, very nice. That one already defines an explicit module > name > >>> >>> ("java.persistence") via the Automatic-Module-Name manifest entry. > >>> >>> > >>> >>>>>> * When using ByteBuddy as the byte code provider, a reads > >>> relationship > >>> >>>>>> must > >>> >>>>>> be added from the user's module towards hibernate.core > ("requires > >>> >>>>>> hibernate.core"). This is due to the usage of > >>> >>>>>> org.hibernate.proxy.ProxyConfiguration within the generated > proxy > >>> >>>>>> classes. > >>> >>>>>> Ideally no dependence to the JPA provider should be needed when > >>> solely > >>> >>>>>> working with the JPA API (as this demo does), but I'm not sure > >>> whether > >>> >>>>>> this > >>> >>>>>> can be avoided when using proxies (or could we construct proxies > >>> in a > >>> >>>>>> way > >>> >>>>>> not requiring this dependence?). > >>> >>>>>> > >>> >>>>> I'm not sure what a decent solution would be here. Ultimately > the > >>> >>>>> runtime needs to be able to communicate with the generated > proxies > >>> - how > >>> >>>>> else would you suggest this happen? > >>> >>>>> > >>> >>> Not sure either. Maybe we could generate a dedicated interface into > >>> the > >>> >>> user's module and then inject a generated implementation -- living > >>> within > >>> >>> the ORM module -- of that interface into the entities. Worth some > >>> tinkering > >>> >>> I reckon. > >>> >>> > >>> >>>>> * When using ByteBuddy as the byte code provider, I still needed > >>> to have > >>> >>>>>> Javassist around, as it's used in ClassFileArchiveEntryHandler. > I > >>> >>>>>> understand that eventually this should be using Jandex, but I'm > >>> >>>>>> wondering > >>> >>>>>> whether we could (temporarily) change it to use ASM instead of > >>> >>>>>> Javassist > >>> >>>>>> (at least when using ByteBuddy as byte code provider, which is > >>> based on > >>> >>>>>> ASM), so people don't need to have Javassist *and* ByteBuddy > when > >>> >>>>>> using the > >>> >>>>>> latter as byte code provider? This seems desirable esp. once we > >>> move to > >>> >>>>>> ByteBuddy by default. > >>> >>>>>> > >>> >>>>> Yes, Sanne brought this up in Paris and it is something I will > >>> look at > >>> >>>>> prior to a 5.3.0.Final > >>> >>>>> > >>> >>> Excellent. > >>> >>> > >>> >>>>> * Multiple methods in ReflectHelper call setAccessible() without > >>> >>>>>> checking > >>> >>>>>> whether the method/field/constructor already is accessible. If > we > >>> >>>>>> changed > >>> >>>>>> that to only call setAccessible() if actually needed, people > would > >>> >>>>>> have to > >>> >>>>>> be a little bit less permissive in their module descriptor. It'd > >>> >>>>>> suffice > >>> >>>>>> for them to declare "exports com.example.entities to > >>> hibernate.core" > >>> >>>>>> instead of "opens com.example.entities to hibernate.core", > unless > >>> they > >>> >>>>>> mandate (private) field access for their entities. > >>> >>>>>> > >>> >>>>> Can you open a Jira for that? > >>> >>>>> > >>> >>> Done: https://hibernate.atlassian.net/browse/HHH-12189. > >>> >>> > >>> >>> > >>> >>>>>> The demo is very simple (insert and load of an entity with a > lazy > >>> >>>>>> association). 
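In outline, that demo boils down to something like the following condensed, hypothetical version; the entity model and persistence-unit name are invented here:

    import javax.persistence.*;

    @Entity
    class Author {
        @Id @GeneratedValue Long id;
        String name;
        String getName() { return name; }
    }

    @Entity
    class Book {
        @Id @GeneratedValue Long id;
        String title;
        @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.PERSIST)
        Author author;
    }

    public class ModulePathDemo {
        public static void main(String[] args) {
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("demo-pu"); // hypothetical unit
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();
            Author author = new Author();
            author.name = "Jane Doe";
            Book book = new Book();
            book.title = "Modules in Action";
            book.author = author; // persisted via the cascade
            em.persist(book);
            em.getTransaction().commit();
            em.clear();

            // The lazy many-to-one comes back as a generated subclass of
            // Author; touching it is what exercises the proxy/module
            // visibility questions discussed in this thread.
            Book loaded = em.find(Book.class, book.id);
            System.out.println(loaded.author.getName());

            em.close();
            emf.close();
        }
    }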
If there's anything else you'd like to try out > when > >>> >>>>>> using ORM > >>> >>>>>> as JPMS modules, let me know or just fork the demo and try it > out > >>> >>>>>> yourself > >>> >>>>>> > >>> >>>>> IIUC for jars targeting both Java 8 and Java 9 we cannot include > a > >>> >>>>> module-info file. But we need to set the module names - you > >>> mentioned > >>> >>>>> there was a "hinting" process. From what I could glean from > >>> searching > >>> >>>>> (which was oddly not many hits), this is achieved by adding a > >>> >>>>> `Automatic-Module-Name` entry in the JAR's MANIFEST.MF. Correct? > >>> >>>>> > >>> >>> Yes, exactly that's the mechanism. Jason Greene is working on a > >>> document > >>> >>> with recommendations around naming patterns, I hope it'll be > >>> published soon. > >>> >>> > >>> >>> > >>> >>>>> Also, IIRC we agreed with `org.hibernate.orm` as the base for all > >>> ORM > >>> >>>>> module names, so we'd have: > >>> >>>>> > >>> >>>>> - org.hibernate.orm.c3p0 > >>> >>>>> - org.hibernate.orm.core > >>> >>>>> - ... > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> >>>>> > >>> > _______________________________________________ > >>> > hibernate-dev mailing list > >>> > hibernate-dev at lists.jboss.org > >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > >>> > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> > >> > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gunnar at hibernate.org Tue Jan 2 16:23:38 2018 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 2 Jan 2018 22:23:38 +0100 Subject: [hibernate-dev] Using Hibernate ORM as automatic JPMS modules In-Reply-To: References: Message-ID: 2017-12-29 21:34 GMT+01:00 Steve Ebersole : > On Fri, Dec 22, 2017 at 5:33 AM Steve Ebersole > wrote: > >> >> * When using ByteBuddy as the byte code provider, I still needed to have >>> Javassist around, as it's used in ClassFileArchiveEntryHandler. I >>> understand that eventually this should be using Jandex, but I'm wondering >>> whether we could (temporarily) change it to use ASM instead of Javassist >>> (at least when using ByteBuddy as byte code provider, which is based on >>> ASM), so people don't need to have Javassist *and* ByteBuddy when using >>> the >>> latter as byte code provider? This seems desirable esp. once we move to >>> ByteBuddy by default. >>> >> >> Yes, Sanne brought this up in Paris and it is something I will look at >> prior to a 5.3.0.Final >> > > Actually this is different than what Sanne brought up. I actually cannot > reproduce what Sanne is reporting. If I had to guess he was not specifying > the bytecode provider to use "globally". This is a special kind of setting > (we used to have a few) that can only be specified per-VM : either as a > root `hibernate.properties` or as a System property. It has to do with how > Hibernate builds its mapping model, specifically `org.hibernate.mapping.Component`. > Given the redesign of the bootstrap process we may actually be able to > remove that "VM wide" requirement. I'll look into that for 5.3. BTW > Sanne, I created a repo[1] showing that this does indeed work when > specified "properly". > > Gunnar, what you are seeing is very different and I'm not sure of a way to > solve that yet. 
That is all part of auto-discovery of resources (entities, > embeddables, converters, etc) during bootstrap. We need to > inspect the file without loading the Class to look at its annotations. We > need *something* to do that, whether that is Jandex, Javassist, etc. Byte > Buddy may or may not have a similar facility. The problem here is that the > Javassist dependency is needed for a very different purpose. And without a > viable alternative solution, it's going to have to stay that way. > Yes, understood it's for a different purpose. I don't think ByteBuddy itself will help with this task, but IIUC, ASM could be used to do so. ByteBuddy depends on ASM, there's one caveat, though, ASM isn't pulled in as a (transitive) dependency, but instead shaded into the ByteBuddy JAR. See the discussion at the bottom of http://bytebuddy.net/ (section "dependency maintenance") for details, it seems they recommend to shade ASM yourself to avoid any version conflicts. Not sure really what's best here, might not be worth the hassle and perhaps easiest just to live with the situation until this code has been migrated to Jandex. > [1] https://github.com/sebersole/orm-bytebuddy-no-javassist > From steve at hibernate.org Tue Jan 2 16:50:04 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 21:50:04 +0000 Subject: [hibernate-dev] Using Hibernate ORM as automatic JPMS modules In-Reply-To: References: Message-ID: ASM is a completely different model though, unless the part you think could be used here is different. I did say though that we could leverage Jandex for this part. The problem (iiuc) there though is that Jandex would require all classes to be indexed - we could not just ask it to index a particular file and then get that "class descriptor" from the IndexView. So I agree, unless there is a better option someone proposes (and honestly is willing to work on), I think continuing with Jandex for the scanning piece is going to be the route we go for 5.3 On Tue, Jan 2, 2018 at 3:23 PM Gunnar Morling wrote: > 2017-12-29 21:34 GMT+01:00 Steve Ebersole : > >> On Fri, Dec 22, 2017 at 5:33 AM Steve Ebersole >> wrote: >> >>> >>> * When using ByteBuddy as the byte code provider, I still needed to have >>>> Javassist around, as it's used in ClassFileArchiveEntryHandler. I >>>> understand that eventually this should be using Jandex, but I'm >>>> wondering >>>> whether we could (temporarily) change it to use ASM instead of Javassist >>>> (at least when using ByteBuddy as byte code provider, which is based on >>>> ASM), so people don't need to have Javassist *and* ByteBuddy when using >>>> the >>>> latter as byte code provider? This seems desirable esp. once we move to >>>> ByteBuddy by default. >>>> >>> >>> Yes, Sanne brought this up in Paris and it is something I will look at >>> prior to a 5.3.0.Final >>> >> >> Actually this is different than what Sanne brought up. I actually cannot >> reproduce what Sanne is reporting. If I had to guess he was not specifying >> the bytecode provider to use "globally". This is a special kind of setting >> (we used to have a few) that can only be specified per-VM : either as a >> root `hibernate.properties` or as a System property. It has to do with how >> Hibernate builds its mapping model, specifically >> `org.hibernate.mapping.Component`. Given the redesign of the bootstrap >> process we may actually be able to remove that "VM wide" requirement. I'll >> look into that for 5.3.
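In practice "globally" means the switch must be visible to the whole VM before any Hibernate bootstrap runs. A sketch of the two options; the property name mirrors AvailableSettings.BYTECODE_PROVIDER, and the exact value strings should be treated as 5.3-era assumptions:

    public class BytecodeProviderSetup {
        public static void main(String[] args) {
            // Option 1: a System property, e.g. on the JVM command line as
            //   -Dhibernate.bytecode.provider=bytebuddy
            // or set programmatically before any Hibernate class is touched:
            System.setProperty("hibernate.bytecode.provider", "bytebuddy");

            // Option 2: a hibernate.properties file at the root of the
            // classpath containing the line:
            //   hibernate.bytecode.provider=bytebuddy

            // Passing the setting only to an individual persistence unit is
            // what leads to the mixed Javassist/ByteBuddy behavior described
            // here, since the mapping model reads the setting VM-wide.
        }
    }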
BTW Sanne, I created a repo[1] showing that this >> does indeed work when specified "properly". >> >> Gunnar, what you are seeing is very different and I'm not sure of a way >> to solve that yet. That is all part of auto-discovery of resources >> (entities, embeddables, converters, etc) during bootstrap. We need to >> inspect the file without loading the Class to look at its annotations. We >> need *something* to do that, whether that is Jandex, Javassist, etc. Byte >> Buddy may or may not have a similar facility. The problem here is that the >> Javassist dependency is needed for a very different purpose. And without a >> viable alternative solution, its going to have to stay that way. >> > > Yes, understood it's for a different purpose. I don't think ByteBuddy > itself will help with this task, but IIUC, ASM could be used to do so. > ButeBuddy depends on ASM, there's one caveat, though, ASM isn't pulled in > as a (transitive) dependency, but instead shaded into the ByteBuddy JAR. > See the discussion at the bottom of http://bytebuddy.net/ (section > "dependency maintenance") for details, it seems they recommend to shade ASM > yourself to avoid any version conflicts. > > Not sure really what's the best here, might not be worth the hazzle and > perhaps easiest just to live with the situation until this code has been > migrated to Jandex. > > >> [1] https://github.com/sebersole/orm-bytebuddy-no-javassist >> > > From steve at hibernate.org Tue Jan 2 18:18:19 2018 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jan 2018 23:18:19 +0000 Subject: [hibernate-dev] Using Hibernate ORM as automatic JPMS modules In-Reply-To: References: Message-ID: Of course I meant "continuing with Javassist..." :P On Tue, Jan 2, 2018 at 3:50 PM Steve Ebersole wrote: > ASM is a completely different model though, unless the part you think > could be used here is different. > > I did say though that we could leverage Jandex for this part. The problem > (iiuc) there though is that Jandex would require all classes to be indexed > - we could not just ask it to index a particular file and then get that > "class descriptor" from the IndexView. > > So I agree, unless there is a better option someone proposes (and honestly > is willing to work on), I think continuing with Jandex for the scanning > piece is going to be the route we go for 5.3 > > On Tue, Jan 2, 2018 at 3:23 PM Gunnar Morling > wrote: > >> 2017-12-29 21:34 GMT+01:00 Steve Ebersole : >> >>> On Fri, Dec 22, 2017 at 5:33 AM Steve Ebersole >>> wrote: >>> >>>> >>>> * When using ByteBuddy as the byte code provider, I still needed to have >>>>> Javassist around, as it's used in ClassFileArchiveEntryHandler. I >>>>> understand that eventually this should be using Jandex, but I'm >>>>> wondering >>>>> whether we could (temporarily) change it to use ASM instead of >>>>> Javassist >>>>> (at least when using ByteBuddy as byte code provider, which is based on >>>>> ASM), so people don't need to have Javassist *and* ByteBuddy when >>>>> using the >>>>> latter as byte code provider? This seems desirable esp. once we move to >>>>> ByteBuddy by default. >>>>> >>>> >>>> Yes, Sanne brought this up in Paris and it is something I will look at >>>> prior to a 5.3.0.Final >>>> >>> >>> Actually this is different than what Sanne brought up. I actually >>> cannot reproduce what Sanne is reporting. If I had to guess he was not >>> specifying the bytecode provider to use "globally". 
This is a special kind >>> of setting (we used to have a few) that can only be specified per-VM : >>> either as a root `hibernate.properties` or as a System property. It has to >>> do with how Hibernate builds its mapping model, specifically >>> `org.hibernate.mapping.Component`. Given the redesign of the bootstrap >>> process we may actually be able to remove that "VM wide" requirement. I'll >>> look into that for 5.3. BTW Sanne, I created a repo[1] showing that this >>> does indeed work when specified "properly". >>> >>> Gunnar, what you are seeing is very different and I'm not sure of a way >>> to solve that yet. That is all part of auto-discovery of resources >>> (entities, embeddables, converters, etc) during bootstrap. We need to >>> inspect the file without loading the Class to look at its annotations. We >>> need *something* to do that, whether that is Jandex, Javassist, etc. Byte >>> Buddy may or may not have a similar facility. The problem here is that the >>> Javassist dependency is needed for a very different purpose. And without a >>> viable alternative solution, its going to have to stay that way. >>> >> >> Yes, understood it's for a different purpose. I don't think ByteBuddy >> itself will help with this task, but IIUC, ASM could be used to do so. >> ButeBuddy depends on ASM, there's one caveat, though, ASM isn't pulled in >> as a (transitive) dependency, but instead shaded into the ByteBuddy JAR. >> See the discussion at the bottom of http://bytebuddy.net/ (section >> "dependency maintenance") for details, it seems they recommend to shade ASM >> yourself to avoid any version conflicts. >> >> Not sure really what's the best here, might not be worth the hazzle and >> perhaps easiest just to live with the situation until this code has been >> migrated to Jandex. >> >> >>> [1] https://github.com/sebersole/orm-bytebuddy-no-javassist >>> >> >> From yoann at hibernate.org Wed Jan 3 04:31:39 2018 From: yoann at hibernate.org (Yoann Rodiere) Date: Wed, 3 Jan 2018 10:31:39 +0100 Subject: [hibernate-dev] ORM & Java 9 - strange javadoc failure In-Reply-To: References: Message-ID: Steve, there is a reference to org.hibernate.engine.Mapping in a non-javadoc comment with a javadoc tag ("@see"): org.hibernate.spatial.dialect.oracle.SDOObjectProperty#getReturnType Also: org.hibernate.spatial.dialect.oracle.SDOObjectMethod#getReturnType Maybe you could try removing/fixing this comment and see how it goes? The bug may be about the javadoc processor trying to process non-javadoc comments whenever it sees a javadoc tag... Which could be worked around easily. Yoann Rodi?re Hibernate NoORM Team yoann at hibernate.org On 2 January 2018 at 20:15, Steve Ebersole wrote: > Sanne, have you had a chance to look at this? If not, I may have to just > disable Java 9 from Travis > > On Wed, Dec 27, 2017 at 8:37 PM Steve Ebersole > wrote: > > > I worked on getting Travis CI set up on ORM for reasons discussed here > > previously. But I am running into a really strange error when I enabled > > Java 9: > > > > javadoc: error - An exception occurred while building a component: > > ClassSerializedForm > > (com.sun.tools.javac.code.Symbol$CompletionFailure: class file for > > org.hibernate.engine.Mapping not found) > > Please file a bug against the javadoc tool via the Java bug reporting > page > > (http://bugreport.java.com) after checking the Bug Database ( > > http://bugs.java.com) > > for duplicates. Include error messages and the following diagnostic in > > your report. Thank you. 
> > com.sun.tools.javac.code.Symbol$CompletionFailure: class file for > > org.hibernate.engine.Mapping not found > > > > It seems like javadoc is complaining because it sees a reference to a > > class (org.hibernate.engine.Mapping) that it cannot find. It is true > that > > there is no class named org.hibernate.engine.Mapping, the real name is > > org.hibernate.engine.spi.Mapping - but what is strange is that I search > the > > entire ORM project and found zero references to the String > > org.hibernate.engine.Mapping. > > > > I just kicked off a run of the ORM / Java 9 Jenkins job to see if it has > > the same failure. > > > > Anyone have any ideas? > > > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Wed Jan 3 07:47:57 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Wed, 3 Jan 2018 12:47:57 +0000 Subject: [hibernate-dev] Repository renamed: lucene-modules -> lucene-jbossmodules Message-ID: Hi all, we renamed the Git repository name, and the respective GitHub project, from lucene-modules to lucene-jbossmodules. Obviously "modules" alone was getting a bit too ambiguous. We decided to not call them "WildFly modules" as these are not used only for WildFly, and the modular technology is called "JBoss Modules" [1]. Please don't forget to update your references in any git clone you might have! Thanks, Sanne 1- https://jboss-modules.github.io/jboss-modules/ From steve at hibernate.org Wed Jan 3 09:36:12 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jan 2018 14:36:12 +0000 Subject: [hibernate-dev] ORM & Java 9 - strange javadoc failure In-Reply-To: References: Message-ID: Here is the version that triggered the Travis job: https://github.com/sebersole/hibernate-core/blob/5.3/hibernate-spatial/src/main/java/org/hibernate/spatial/dialect/oracle/SDOObjectMethod.java As you can see those (non-)references are removed. Same error. On Wed, Jan 3, 2018 at 3:32 AM Yoann Rodiere wrote: > Steve, there is a reference to org.hibernate.engine.Mapping in a > non-javadoc comment with a javadoc tag > ("@see"): org.hibernate.spatial.dialect.oracle.SDOObjectProperty#getReturnType > Also: org.hibernate.spatial.dialect.oracle.SDOObjectMethod#getReturnType > Maybe you could try removing/fixing this comment and see how it goes? The > bug may be about the javadoc processor trying to process non-javadoc > comments whenever it sees a javadoc tag... Which could be worked around > easily. > > Yoann Rodi?re > Hibernate NoORM Team > yoann at hibernate.org > > On 2 January 2018 at 20:15, Steve Ebersole wrote: > >> Sanne, have you had a chance to look at this? If not, I may have to just >> disable Java 9 from Travis >> > >> On Wed, Dec 27, 2017 at 8:37 PM Steve Ebersole >> wrote: >> >> > I worked on getting Travis CI set up on ORM for reasons discussed here >> > previously. But I am running into a really strange error when I enabled >> > Java 9: >> > >> > javadoc: error - An exception occurred while building a component: >> > ClassSerializedForm >> > (com.sun.tools.javac.code.Symbol$CompletionFailure: class file for >> > org.hibernate.engine.Mapping not found) >> > Please file a bug against the javadoc tool via the Java bug reporting >> page >> > (http://bugreport.java.com) after checking the Bug Database ( >> > http://bugs.java.com) >> > for duplicates. Include error messages and the following diagnostic in >> > your report. Thank you. 
>> > com.sun.tools.javac.code.Symbol$CompletionFailure: class file for >> > org.hibernate.engine.Mapping not found >> > >> > It seems like javadoc is complaining because it sees a reference to a >> > class (org.hibernate.engine.Mapping) that it cannot find. It is true >> that >> > there is no class named org.hibernate.engine.Mapping, the real name is >> > org.hibernate.engine.spi.Mapping - but what is strange is that I search >> the >> > entire ORM project and found zero references to the String >> > org.hibernate.engine.Mapping. >> > >> > I just kicked off a run of the ORM / Java 9 Jenkins job to see if it has >> > the same failure. >> > >> > Anyone have any ideas? >> > >> > >> > _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > > From smarlow at redhat.com Wed Jan 3 11:09:12 2018 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 3 Jan 2018 11:09:12 -0500 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole wrote: > Scott, how would we register a listener for this event? > If we want a standard solution, we could ask for an earlier CDI pre-destroy listener. The problem we have had with most CDI "listeners" so far is that they are > non-contextual, meaning there has been no way to link that back to a > specific SessionFactory.. If I can register this listener with a reference > back to the Sessionfactory, this should actually be fine. > I could pass the EMF to the org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, if that helps. > > On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: > >> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >> wrote: >> >> > Any dependency injection framework will have some capability to define >> > the graph of dependencies across components, and such graph could be >> > very complex, with details only known to the framework. >> > >> > I don't think we can solve the integration by having "before all >> > others" / "after all others" phases as that's too coarse grained to >> > define a full graph; we need to find a way to have the DI framework >> > take in consideration our additional components both in terms of DI >> > consumers and providers - then let the framework wire up things in the >> > order it prefers. This is also to allow the DI engine to print >> > appropriate warnings for un-resolvable situations with its native >> > error handling, which would resolve in more familiar error messages. >> > >> > If that's not doable *or a priority* then all we can do is try to make >> > it clear enough that there will be limitations and hopefully describe >> > these clearly. Some of such limitations might be puzzling as you >> > describe. >> > >> > >> > >> > On 20 December 2017 at 12:50, Yoann Rodiere >> wrote: >> > > Hello all, >> > > >> > > TL;DR: Application-scoped beans cannot be used as part of the >> @PreDestroy >> > > method of ORM-instantiated CDI beans, and it's a bit odd because they >> can >> > > be used as part of the @PostConstruct method. >> > > >> > > I've been testing the CDI integration in Hibernate ORM for the past >> few >> > > days, trying to integrate it into Search. I think I've discovered >> > something >> > > odd: when CDI-managed beans are destroyed, they cannot access other >> > > Application-scoped CDI beans anymore. 
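For context, the pattern Yoann diagnosed looks like the sketch below: a plain block comment that nevertheless carries a javadoc @see tag pointing at the pre-reorganization location of the Mapping interface. This is a paraphrased, hypothetical reconstruction of the Oracle-dialect code rather than the literal source, and per Steve's follow-up above, removing these references alone did not make the aggregated-javadoc error go away:

    class SDOObjectMethod {
        /*
         * (non-Javadoc) comment as it stood; the Java 9 tool apparently
         * still tries to resolve the stale reference:
         *
         * @see org.hibernate.engine.Mapping
         */

        /**
         * Fixed form: a proper javadoc comment pointing at the type's
         * current location.
         *
         * @see org.hibernate.engine.spi.Mapping
         */
        Object getReturnType() {
            return null; // body irrelevant to the javadoc problem
        }
    }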
Not sure whether this is a >> problem >> > or >> > > not, so maybe we should discuss it a bit before going forward with the >> > > current behavior. >> > > >> > > Short reminder: scopes define when CDI beans are created and >> destroyed. >> > > @ApplicationScoped is pretty self-explanatory: created when the >> > application >> > > starts and destroyed when it stops. Some other scopes are a bit more >> > > convoluted: @Singleton basically means created *before* the >> application >> > > starts and destroyed *after* the application stops (and also means >> "this >> > > bean shall not be proxied"), @Dependent means created when an >> instance is >> > > requested and destroyed when the instance is released, etc. >> > > >> > > The thing is, Hibernate ORM is typically started very early and shut >> down >> > > very late in the CDI lifecycle - at least within WildFly. So when >> > Hibernate >> > > starts, CDI Application-scoped beans haven't been instantiated yet, >> and >> > it >> > > turns out that when Hibernate ORM shuts down, CDI has already >> destroyed >> > > Application-scoped beans. >> > > >> > > Regarding startup, Steve and Scott solved the problem by delaying bean >> > > instantiation to some point in the future when the Application scope >> is >> > > active (and thus Application-scoped beans are available). This makes >> it >> > > possible to use Application-scoped beans within ORM-instantiated >> beans as >> > > soon as the latter are constructed (i.e. within their @PostConstruct >> > > methods). >> > > However, when Hibernate ORM shuts down, the Application scope has >> already >> > > been terminated. So when ORM destroys the beans it instantiated, those >> > > ORM-instantiated beans cannot call a method on referenced >> > > Application-scoped beans (CDI proxies will throw an exception). >> > > >> > > All in all, the only type of beans we can currently use in a >> @PreDestroy >> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >> beans >> > > will work, but only because they are not proxied and thus you can >> cheat >> > and >> > > use them even after they have been destroyed... which I definitely >> > wouldn't >> > > recommend. >> > > >> > > I see two ways to handle the issue: >> > > >> > > 1. We don't change anything, and simply document somewhere that >> beans >> > > instantiated as part of the CDI integration are instantiated within >> > the >> > > Application scope, but are destroyed outside of it. And we suggest >> > that any >> > > bean used in @PostDestroy method in an ORM-instantiated bean >> > (directly or >> > > not) must have either a @Dependent scope, or a @Singleton scope >> and no >> > > @PostDestroy method. >> > > 2. We implement an "early shut-down" somehow, which would bring >> > forward >> > > bean destruction to some time when the Application scope is still >> > active. >> > >> >> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we >> could >> look at introducing a beanManagerDestroyed notification, if that is useful >> and we can find a way to implement it (javax.enterprise.spi. >> BeforeShutdown >> [1] is not early enough to meet your requirements). >> >> Scott >> >> [1] >> https://docs.oracle.com/javaee/7/api/javax/enterprise/ >> inject/spi/BeforeShutdown.html >> >> >> > > >> > > #1 may be enough for now, even though the behavior feels a bit odd, >> and >> > > forces users to resort to less-than-ideal practices (using a >> @Singleton >> > > bean after it has been destroyed). 
>> > >
>> > > #2 would require changes in WildFly and may be a bit complex. In
>> > > particular, if we aren't careful, Application-scoped beans may not be
>> > able
>> > > to use Hibernate ORM from within their @PreDestroy methods... Which is
>> > > probably not a good idea. So we would have to find a solution together
>> > with
>> > > the WildFly team. Also to be considered: Hibernate Search would have to
>> > be
>> > > shut down just before the "early shut-down" of Hibernate ORM occurs,
>> > > because Hibernate Search cannot function at all without the beans it
>> > > retrieves from the CDI context.
>> > >
>> > > Thoughts?
>> > >
>> > >
>> > > Yoann Rodière
>> > > Hibernate NoORM Team
>> > > yoann at hibernate.org
>> > > _______________________________________________
>> > > hibernate-dev mailing list
>> > > hibernate-dev at lists.jboss.org
>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> >
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org Wed Jan 3 11:22:34 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 03 Jan 2018 16:22:34 +0000
Subject: [hibernate-dev] ORM & Java 9 - strange javadoc failure
In-Reply-To:
References:
Message-ID:

What's even more strange is that if I build just spatial's javadoc it
works fine. It's only when I build the aggregated javadoc that I see this
(even though I've removed those lines).

On Wed, Jan 3, 2018 at 8:36 AM Steve Ebersole wrote:

> Here is the version that triggered the Travis job:
>
>
> https://github.com/sebersole/hibernate-core/blob/5.3/hibernate-spatial/src/main/java/org/hibernate/spatial/dialect/oracle/SDOObjectMethod.java
>
> As you can see those (non-)references are removed. Same error.
>
> On Wed, Jan 3, 2018 at 3:32 AM Yoann Rodiere wrote:
>
>> Steve, there is a reference to org.hibernate.engine.Mapping in a
>> non-javadoc comment with a javadoc tag
>> ("@see"): org.hibernate.spatial.dialect.oracle.SDOObjectProperty#getReturnType
>> Also: org.hibernate.spatial.dialect.oracle.SDOObjectMethod#getReturnType
>> Maybe you could try removing/fixing this comment and see how it goes? The
>> bug may be about the javadoc processor trying to process non-javadoc
>> comments whenever it sees a javadoc tag... Which could be worked around
>> easily.
>>
>> Yoann Rodière
>> Hibernate NoORM Team
>> yoann at hibernate.org
>>
>> On 2 January 2018 at 20:15, Steve Ebersole wrote:
>>
>>> Sanne, have you had a chance to look at this? If not, I may have to just
>>> disable Java 9 from Travis
>>>
>>
>>> On Wed, Dec 27, 2017 at 8:37 PM Steve Ebersole
>>> wrote:
>>>
>>> > I worked on getting Travis CI set up on ORM for reasons discussed here
>>> > previously. But I am running into a really strange error when I
>>> enabled
>>> > Java 9:
>>> >
>>> > javadoc: error - An exception occurred while building a component:
>>> > ClassSerializedForm
>>> > (com.sun.tools.javac.code.Symbol$CompletionFailure: class file for
>>> > org.hibernate.engine.Mapping not found)
>>> > Please file a bug against the javadoc tool via the Java bug reporting
>>> page
>>> > (http://bugreport.java.com) after checking the Bug Database (
>>> > http://bugs.java.com)
>>> > for duplicates. Include error messages and the following diagnostic in
>>> > your report. Thank you.
>>> > com.sun.tools.javac.code.Symbol$CompletionFailure: class file for >>> > org.hibernate.engine.Mapping not found >>> > >>> > It seems like javadoc is complaining because it sees a reference to a >>> > class (org.hibernate.engine.Mapping) that it cannot find. It is true >>> that >>> > there is no class named org.hibernate.engine.Mapping, the real name is >>> > org.hibernate.engine.spi.Mapping - but what is strange is that I >>> search the >>> > entire ORM project and found zero references to the String >>> > org.hibernate.engine.Mapping. >>> > >>> > I just kicked off a run of the ORM / Java 9 Jenkins job to see if it >>> has >>> > the same failure. >>> > >>> > Anyone have any ideas? >>> > >>> > >>> >> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> >> >> From steve at hibernate.org Wed Jan 3 12:10:18 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jan 2018 17:10:18 +0000 Subject: [hibernate-dev] ORM & Java 9 - strange javadoc failure In-Reply-To: References: Message-ID: OK, I've wasted way too much time on this. I'm just going to remove Java 9 from the Travis script. On Wed, Jan 3, 2018 at 10:22 AM Steve Ebersole wrote: > What's even more strange is that if I build just spatial's javadoc it > works fine. If I try to build the aggregated javadoc is when I see this > (even though I've removed those lines) > > On Wed, Jan 3, 2018 at 8:36 AM Steve Ebersole wrote: > >> Here is the version that triggered the Travis job: >> >> >> https://github.com/sebersole/hibernate-core/blob/5.3/hibernate-spatial/src/main/java/org/hibernate/spatial/dialect/oracle/SDOObjectMethod.java >> >> As you can see those (non-)references are removed. Same error. >> >> On Wed, Jan 3, 2018 at 3:32 AM Yoann Rodiere wrote: >> >>> Steve, there is a reference to org.hibernate.engine.Mapping in a >>> non-javadoc comment with a javadoc tag >>> ("@see"): org.hibernate.spatial.dialect.oracle.SDOObjectProperty#getReturnType >>> Also: org.hibernate.spatial.dialect.oracle.SDOObjectMethod#getReturnType >>> Maybe you could try removing/fixing this comment and see how it goes? >>> The bug may be about the javadoc processor trying to process non-javadoc >>> comments whenever it sees a javadoc tag... Which could be worked around >>> easily. >>> >>> Yoann Rodi?re >>> Hibernate NoORM Team >>> yoann at hibernate.org >>> >>> On 2 January 2018 at 20:15, Steve Ebersole wrote: >>> >>>> Sanne, have you had a chance to look at this? If not, I may have to >>>> just >>>> disable Java 9 from Travis >>>> >>> >>>> On Wed, Dec 27, 2017 at 8:37 PM Steve Ebersole >>>> wrote: >>>> >>>> > I worked on getting Travis CI set up on ORM for reasons discussed here >>>> > previously. But I am running into a really strange error when I >>>> enabled >>>> > Java 9: >>>> > >>>> > javadoc: error - An exception occurred while building a component: >>>> > ClassSerializedForm >>>> > (com.sun.tools.javac.code.Symbol$CompletionFailure: class file for >>>> > org.hibernate.engine.Mapping not found) >>>> > Please file a bug against the javadoc tool via the Java bug reporting >>>> page >>>> > (http://bugreport.java.com) after checking the Bug Database ( >>>> > http://bugs.java.com) >>>> > for duplicates. Include error messages and the following diagnostic in >>>> > your report. Thank you. 
>>>> > com.sun.tools.javac.code.Symbol$CompletionFailure: class file for >>>> > org.hibernate.engine.Mapping not found >>>> > >>>> > It seems like javadoc is complaining because it sees a reference to a >>>> > class (org.hibernate.engine.Mapping) that it cannot find. It is true >>>> that >>>> > there is no class named org.hibernate.engine.Mapping, the real name is >>>> > org.hibernate.engine.spi.Mapping - but what is strange is that I >>>> search the >>>> > entire ORM project and found zero references to the String >>>> > org.hibernate.engine.Mapping. >>>> > >>>> > I just kicked off a run of the ORM / Java 9 Jenkins job to see if it >>>> has >>>> > the same failure. >>>> > >>>> > Anyone have any ideas? >>>> > >>>> > >>>> >>> _______________________________________________ >>>> hibernate-dev mailing list >>>> hibernate-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>> >>> >>> From steve at hibernate.org Wed Jan 3 12:35:38 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jan 2018 17:35:38 +0000 Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers In-Reply-To: References: Message-ID: So I just pushed to the ORM master branch, which has caused the following jobs to be queued up: - hibernate-orm-5.0-h2 - hibernate-orm-5.1-h2 - hibernate-orm-master-h2-main Only one of those jobs is configured to "watch" master. So why do these other jobs keep getting triggered? I see the same exact thing on my personal fork as well. At the same time I pushed to my fork's 5.3 branch, which triggered the 6.0 job to be queued. On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole wrote: > The legacy ORM jobs (5.1-based ones at least) are getting triggered when > they should not be. Generally they all show they the run is triggered by a > "SCM change", but it does not show any changes. The underlying problem > (although I am at a loss as to why) is that there has indeed been SCM > changes pushed to Github, but against completely different branches. As > far as I can tell these job's Github setting are correct. Any ideas what > is going on? > > This would not be such a big deal if the CI environment did not throttle > all waiting jobs down to one active job. So the jobs I am actually > interested in are forced to wait (sometimes over an hour) for these jobs > that should not even be running. > > > From sanne at hibernate.org Wed Jan 3 13:12:15 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Wed, 3 Jan 2018 18:12:15 +0000 Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers In-Reply-To: References: Message-ID: Hi Steve, this rings a bell, we had this bug in the past and apparently it's regressed again :( The latest Jenkins bug seems to be: - https://issues.jenkins-ci.org/browse/JENKINS-42161 I'll try the suggested workarount, aka to enable SCM poll without any frequency. Thanks, Sanne On 3 January 2018 at 17:35, Steve Ebersole wrote: > So I just pushed to the ORM master branch, which has caused the following > jobs to be queued up: > > > - hibernate-orm-5.0-h2 > - hibernate-orm-5.1-h2 > - hibernate-orm-master-h2-main > > Only one of those jobs is configured to "watch" master. So why do these > other jobs keep getting triggered? > > I see the same exact thing on my personal fork as well. At the same time I > pushed to my fork's 5.3 branch, which triggered the 6.0 job to be queued. > > > > On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole wrote: > >> The legacy ORM jobs (5.1-based ones at least) are getting triggered when >> they should not be. 
Generally they all show that the run is triggered by a
>> "SCM change", but it does not show any changes. The underlying problem
>> (although I am at a loss as to why) is that there have indeed been SCM
>> changes pushed to Github, but against completely different branches. As
>> far as I can tell these jobs' Github settings are correct. Any ideas what
>> is going on?
>>
>> This would not be such a big deal if the CI environment did not throttle
>> all waiting jobs down to one active job. So the jobs I am actually
>> interested in are forced to wait (sometimes over an hour) for these jobs
>> that should not even be running.
>>
>>
>>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org Wed Jan 3 13:12:15 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 3 Jan 2018 18:12:15 +0000
Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers
In-Reply-To:
References:
Message-ID:

Hi Steve,

this rings a bell, we had this bug in the past and apparently it's
regressed again :(

The latest Jenkins bug seems to be:
 - https://issues.jenkins-ci.org/browse/JENKINS-42161

I'll try the suggested workaround, aka to enable SCM poll without any frequency.

Thanks,
Sanne


On 3 January 2018 at 17:35, Steve Ebersole wrote:
> So I just pushed to the ORM master branch, which has caused the following
> jobs to be queued up:
>
>
>    - hibernate-orm-5.0-h2
>    - hibernate-orm-5.1-h2
>    - hibernate-orm-master-h2-main
>
> Only one of those jobs is configured to "watch" master. So why do these
> other jobs keep getting triggered?
>
> I see the same exact thing on my personal fork as well. At the same time I
> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be queued.
>
>
>
> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole wrote:
>
>> The legacy ORM jobs (5.1-based ones at least) are getting triggered when
>> they should not be. Generally they all show that the run is triggered by a
>> "SCM change", but it does not show any changes. The underlying problem
>> (although I am at a loss as to why) is that there have indeed been SCM
>> changes pushed to Github, but against completely different branches. As
>> far as I can tell these jobs' Github settings are correct. Any ideas what
>> is going on?
>>
>> This would not be such a big deal if the CI environment did not throttle
>> all waiting jobs down to one active job. So the jobs I am actually
>> interested in are forced to wait (sometimes over an hour) for these jobs
>> that should not even be running.
>>
>>
>>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org Wed Jan 3 13:15:51 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 3 Jan 2018 18:15:51 +0000
Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers
In-Reply-To:
References:
Message-ID:

I've made the change on:
 - hibernate-orm-5.0-h2
 - hibernate-orm-5.1-h2
 - hibernate-orm-master-h2-main

Let's see if it helps, then we can figure out a way to check all jobs
are using this workaround.


On 3 January 2018 at 18:12, Sanne Grinovero wrote:
> Hi Steve,
>
> this rings a bell, we had this bug in the past and apparently it's
> regressed again :(
>
> The latest Jenkins bug seems to be:
> - https://issues.jenkins-ci.org/browse/JENKINS-42161
>
> I'll try the suggested workaround, aka to enable SCM poll without any frequency.
>
> Thanks,
> Sanne
>
>
> On 3 January 2018 at 17:35, Steve Ebersole wrote:
>> So I just pushed to the ORM master branch, which has caused the following
>> jobs to be queued up:
>>
>>
>>    - hibernate-orm-5.0-h2
>>    - hibernate-orm-5.1-h2
>>    - hibernate-orm-master-h2-main
>>
>> Only one of those jobs is configured to "watch" master. So why do these
>> other jobs keep getting triggered?
>>
>> I see the same exact thing on my personal fork as well. At the same time I
>> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be queued.
>>
>>
>>
>> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole wrote:
>>
>>> The legacy ORM jobs (5.1-based ones at least) are getting triggered when
>>> they should not be. Generally they all show that the run is triggered by a
>>> "SCM change", but it does not show any changes. The underlying problem
>>> (although I am at a loss as to why) is that there have indeed been SCM
>>> changes pushed to Github, but against completely different branches. As
>>> far as I can tell these jobs' Github settings are correct. Any ideas what
>>> is going on?
>>>
>>> This would not be such a big deal if the CI environment did not throttle
>>> all waiting jobs down to one active job. So the jobs I am actually
>>> interested in are forced to wait (sometimes over an hour) for these jobs
>>> that should not even be running.
>>>
>>>
>>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org Wed Jan 3 14:15:18 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 03 Jan 2018 19:15:18 +0000
Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers
In-Reply-To:
References:
Message-ID:

Nice! Glad you found something. Thanks for making the changes.
On Wed, Jan 3, 2018 at 12:16 PM Sanne Grinovero wrote: > I've made the change on: > - hibernate-orm-5.0-h2 > - hibernate-orm-5.1-h2 > - hibernate-orm-master-h2-main > > Let's see if it helps, then we can figure out a way to check all jobs > are using this workaround. > > > On 3 January 2018 at 18:12, Sanne Grinovero wrote: > > Hi Steve, > > > > this rings a bell, we had this bug in the past and apparently it's > > regressed again :( > > > > The latest Jenkins bug seems to be: > > - https://issues.jenkins-ci.org/browse/JENKINS-42161 > > > > I'll try the suggested workarount, aka to enable SCM poll without any > frequency. > > > > Thanks, > > Sanne > > > > > > On 3 January 2018 at 17:35, Steve Ebersole wrote: > >> So I just pushed to the ORM master branch, which has caused the > following > >> jobs to be queued up: > >> > >> > >> - hibernate-orm-5.0-h2 > >> - hibernate-orm-5.1-h2 > >> - hibernate-orm-master-h2-main > >> > >> Only one of those jobs is configured to "watch" master. So why do these > >> other jobs keep getting triggered? > >> > >> I see the same exact thing on my personal fork as well. At the same > time I > >> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be > queued. > >> > >> > >> > >> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole > wrote: > >> > >>> The legacy ORM jobs (5.1-based ones at least) are getting triggered > when > >>> they should not be. Generally they all show they the run is triggered > by a > >>> "SCM change", but it does not show any changes. The underlying problem > >>> (although I am at a loss as to why) is that there has indeed been SCM > >>> changes pushed to Github, but against completely different branches. > As > >>> far as I can tell these job's Github setting are correct. Any ideas > what > >>> is going on? > >>> > >>> This would not be such a big deal if the CI environment did not > throttle > >>> all waiting jobs down to one active job. So the jobs I am actually > >>> interested in are forced to wait (sometimes over an hour) for these > jobs > >>> that should not even be running. > >>> > >>> > >>> > >> _______________________________________________ > >> hibernate-dev mailing list > >> hibernate-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Wed Jan 3 16:35:48 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jan 2018 21:35:48 +0000 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: If you have access to the specific ExtendedBeanManager/LifecycleListener, that should already be enough. Those things are already properly scoped to the SessionFactory, unless you are passing the same instance to multiple SessionFactory instances. On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow wrote: > On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole > wrote: > >> Scott, how would we register a listener for this event? >> > > If we want a standard solution, we could ask for an earlier CDI > pre-destroy listener. > > The problem we have had with most CDI "listeners" so far is that they are >> non-contextual, meaning there has been no way to link that back to a >> specific SessionFactory.. If I can register this listener with a reference >> back to the Sessionfactory, this should actually be fine. >> > > I could pass the EMF to the org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, > if that helps. 
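For reference, the contract in question is tiny, and a container wires it
up along these lines. This is only a sketch: the ContainerExtendedBeanManager
class and its beanManagerReady() method are hypothetical names, while the
registerLifecycleListener()/beanManagerInitialized() signatures are the
existing ones:

    import javax.enterprise.inject.spi.BeanManager;
    import org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager;

    // Hypothetical container-side integration. One instance is created per
    // persistence unit, which is why anything registered through it is
    // implicitly tied to a single SessionFactory.
    public class ContainerExtendedBeanManager implements ExtendedBeanManager {

        private LifecycleListener lifecycleListener;

        @Override
        public void registerLifecycleListener(LifecycleListener lifecycleListener) {
            // Hibernate calls this during bootstrap, before CDI is usable.
            this.lifecycleListener = lifecycleListener;
        }

        // Called by the container once the Application scope is active.
        public void beanManagerReady(BeanManager beanManager) {
            lifecycleListener.beanManagerInitialized(beanManager);
        }
    }

The point being: whatever gets registered through that instance is already
per-SessionFactory, so a shutdown callback added to the same listener would
inherit that scoping for free.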
> > >> >> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: >> >>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >>> wrote: >>> >>> > Any dependency injection framework will have some capability to define >>> > the graph of dependencies across components, and such graph could be >>> > very complex, with details only known to the framework. >>> > >>> > I don't think we can solve the integration by having "before all >>> > others" / "after all others" phases as that's too coarse grained to >>> > define a full graph; we need to find a way to have the DI framework >>> > take in consideration our additional components both in terms of DI >>> > consumers and providers - then let the framework wire up things in the >>> > order it prefers. This is also to allow the DI engine to print >>> > appropriate warnings for un-resolvable situations with its native >>> > error handling, which would resolve in more familiar error messages. >>> > >>> > If that's not doable *or a priority* then all we can do is try to make >>> > it clear enough that there will be limitations and hopefully describe >>> > these clearly. Some of such limitations might be puzzling as you >>> > describe. >>> > >>> > >>> > >>> > On 20 December 2017 at 12:50, Yoann Rodiere >>> wrote: >>> > > Hello all, >>> > > >>> > > TL;DR: Application-scoped beans cannot be used as part of the >>> @PreDestroy >>> > > method of ORM-instantiated CDI beans, and it's a bit odd because >>> they can >>> > > be used as part of the @PostConstruct method. >>> > > >>> > > I've been testing the CDI integration in Hibernate ORM for the past >>> few >>> > > days, trying to integrate it into Search. I think I've discovered >>> > something >>> > > odd: when CDI-managed beans are destroyed, they cannot access other >>> > > Application-scoped CDI beans anymore. Not sure whether this is a >>> problem >>> > or >>> > > not, so maybe we should discuss it a bit before going forward with >>> the >>> > > current behavior. >>> > > >>> > > Short reminder: scopes define when CDI beans are created and >>> destroyed. >>> > > @ApplicationScoped is pretty self-explanatory: created when the >>> > application >>> > > starts and destroyed when it stops. Some other scopes are a bit more >>> > > convoluted: @Singleton basically means created *before* the >>> application >>> > > starts and destroyed *after* the application stops (and also means >>> "this >>> > > bean shall not be proxied"), @Dependent means created when an >>> instance is >>> > > requested and destroyed when the instance is released, etc. >>> > > >>> > > The thing is, Hibernate ORM is typically started very early and shut >>> down >>> > > very late in the CDI lifecycle - at least within WildFly. So when >>> > Hibernate >>> > > starts, CDI Application-scoped beans haven't been instantiated yet, >>> and >>> > it >>> > > turns out that when Hibernate ORM shuts down, CDI has already >>> destroyed >>> > > Application-scoped beans. >>> > > >>> > > Regarding startup, Steve and Scott solved the problem by delaying >>> bean >>> > > instantiation to some point in the future when the Application scope >>> is >>> > > active (and thus Application-scoped beans are available). This makes >>> it >>> > > possible to use Application-scoped beans within ORM-instantiated >>> beans as >>> > > soon as the latter are constructed (i.e. within their @PostConstruct >>> > > methods). >>> > > However, when Hibernate ORM shuts down, the Application scope has >>> already >>> > > been terminated. 
So when ORM destroys the beans it instantiated, >>> those >>> > > ORM-instantiated beans cannot call a method on referenced >>> > > Application-scoped beans (CDI proxies will throw an exception). >>> > > >>> > > All in all, the only type of beans we can currently use in a >>> @PreDestroy >>> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >>> beans >>> > > will work, but only because they are not proxied and thus you can >>> cheat >>> > and >>> > > use them even after they have been destroyed... which I definitely >>> > wouldn't >>> > > recommend. >>> > > >>> > > I see two ways to handle the issue: >>> > > >>> > > 1. We don't change anything, and simply document somewhere that >>> beans >>> > > instantiated as part of the CDI integration are instantiated >>> within >>> > the >>> > > Application scope, but are destroyed outside of it. And we suggest >>> > that any >>> > > bean used in @PostDestroy method in an ORM-instantiated bean >>> > (directly or >>> > > not) must have either a @Dependent scope, or a @Singleton scope >>> and no >>> > > @PostDestroy method. >>> > > 2. We implement an "early shut-down" somehow, which would bring >>> > forward >>> > > bean destruction to some time when the Application scope is still >>> > active. >>> > >>> >>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we >>> could >>> look at introducing a beanManagerDestroyed notification, if that is >>> useful >>> and we can find a way to implement it >>> (javax.enterprise.spi.BeforeShutdown >>> [1] is not early enough to meet your requirements). >>> >>> Scott >>> >>> [1] >>> >>> https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html >>> >>> >>> > > >>> > > #1 may be enough for now, even though the behavior feels a bit odd, >>> and >>> > > forces users to resort to less-than-ideal practices (using a >>> @Singleton >>> > > bean after it has been destroyed). >>> > > >>> > > #2 would require changes in WildFly and may be a bit complex. In >>> > > particular, if we aren't careful, Application-scoped beans may not be >>> > able >>> > > to use Hibernate ORM from within their @PreDestroy methods... Which >>> is >>> > > probably not a good idea. So we would have to find a solution >>> together >>> > with >>> > > the WildFly team. Also to be considered: Hibernate Search would have >>> to >>> > be >>> > > shut down just before the "early shut-down" of Hibernate ORM occurs, >>> > > because Hibernate Search cannot function at all without the beans it >>> > > retrieves from the CDI context. >>> > > >>> > > Thoughts? >>> > > >>> > > >>> > > Yoann Rodi?re >>> > > Hibernate NoORM Team >>> > > yoann at hibernate.org >>> > > _______________________________________________ >>> > > hibernate-dev mailing list >>> > > hibernate-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> > >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >> From sanne at hibernate.org Thu Jan 4 07:09:45 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 4 Jan 2018 12:09:45 +0000 Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers In-Reply-To: References: Message-ID: Also these jobs were configured to build automatically every 5 hours: - hibernate-orm-4.2-h2 - hibernate-orm-4.3-h2 I removed the schedule, they will be built when (and only when) anything is committed to their respective branches. 
On 3 January 2018 at 19:15, Steve Ebersole wrote: > Nice! Glad you found something. Thanks for making the changes. > > > > On Wed, Jan 3, 2018 at 12:16 PM Sanne Grinovero wrote: >> >> I've made the change on: >> - hibernate-orm-5.0-h2 >> - hibernate-orm-5.1-h2 >> - hibernate-orm-master-h2-main >> >> Let's see if it helps, then we can figure out a way to check all jobs >> are using this workaround. >> >> >> On 3 January 2018 at 18:12, Sanne Grinovero wrote: >> > Hi Steve, >> > >> > this rings a bell, we had this bug in the past and apparently it's >> > regressed again :( >> > >> > The latest Jenkins bug seems to be: >> > - https://issues.jenkins-ci.org/browse/JENKINS-42161 >> > >> > I'll try the suggested workarount, aka to enable SCM poll without any >> > frequency. >> > >> > Thanks, >> > Sanne >> > >> > >> > On 3 January 2018 at 17:35, Steve Ebersole wrote: >> >> So I just pushed to the ORM master branch, which has caused the >> >> following >> >> jobs to be queued up: >> >> >> >> >> >> - hibernate-orm-5.0-h2 >> >> - hibernate-orm-5.1-h2 >> >> - hibernate-orm-master-h2-main >> >> >> >> Only one of those jobs is configured to "watch" master. So why do >> >> these >> >> other jobs keep getting triggered? >> >> >> >> I see the same exact thing on my personal fork as well. At the same >> >> time I >> >> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be >> >> queued. >> >> >> >> >> >> >> >> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole >> >> wrote: >> >> >> >>> The legacy ORM jobs (5.1-based ones at least) are getting triggered >> >>> when >> >>> they should not be. Generally they all show they the run is triggered >> >>> by a >> >>> "SCM change", but it does not show any changes. The underlying >> >>> problem >> >>> (although I am at a loss as to why) is that there has indeed been SCM >> >>> changes pushed to Github, but against completely different branches. >> >>> As >> >>> far as I can tell these job's Github setting are correct. Any ideas >> >>> what >> >>> is going on? >> >>> >> >>> This would not be such a big deal if the CI environment did not >> >>> throttle >> >>> all waiting jobs down to one active job. So the jobs I am actually >> >>> interested in are forced to wait (sometimes over an hour) for these >> >>> jobs >> >>> that should not even be running. >> >>> >> >>> >> >>> >> >> _______________________________________________ >> >> hibernate-dev mailing list >> >> hibernate-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Thu Jan 4 07:18:21 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 04 Jan 2018 12:18:21 +0000 Subject: [hibernate-dev] ORM CI jobs - erroneous github triggers In-Reply-To: References: Message-ID: Lol. No idea why these are even built period. Thanks Sanne On Thu, Jan 4, 2018 at 6:13 AM Sanne Grinovero wrote: > Also these jobs were configured to build automatically every 5 hours: > - hibernate-orm-4.2-h2 > - hibernate-orm-4.3-h2 > > I removed the schedule, they will be built when (and only when) > anything is committed to their respective branches. > > > On 3 January 2018 at 19:15, Steve Ebersole wrote: > > Nice! Glad you found something. Thanks for making the changes. > > > > > > > > On Wed, Jan 3, 2018 at 12:16 PM Sanne Grinovero > wrote: > >> > >> I've made the change on: > >> - hibernate-orm-5.0-h2 > >> - hibernate-orm-5.1-h2 > >> - hibernate-orm-master-h2-main > >> > >> Let's see if it helps, then we can figure out a way to check all jobs > >> are using this workaround. 
> >> > >> > >> On 3 January 2018 at 18:12, Sanne Grinovero > wrote: > >> > Hi Steve, > >> > > >> > this rings a bell, we had this bug in the past and apparently it's > >> > regressed again :( > >> > > >> > The latest Jenkins bug seems to be: > >> > - https://issues.jenkins-ci.org/browse/JENKINS-42161 > >> > > >> > I'll try the suggested workarount, aka to enable SCM poll without any > >> > frequency. > >> > > >> > Thanks, > >> > Sanne > >> > > >> > > >> > On 3 January 2018 at 17:35, Steve Ebersole > wrote: > >> >> So I just pushed to the ORM master branch, which has caused the > >> >> following > >> >> jobs to be queued up: > >> >> > >> >> > >> >> - hibernate-orm-5.0-h2 > >> >> - hibernate-orm-5.1-h2 > >> >> - hibernate-orm-master-h2-main > >> >> > >> >> Only one of those jobs is configured to "watch" master. So why do > >> >> these > >> >> other jobs keep getting triggered? > >> >> > >> >> I see the same exact thing on my personal fork as well. At the same > >> >> time I > >> >> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be > >> >> queued. > >> >> > >> >> > >> >> > >> >> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole > >> >> wrote: > >> >> > >> >>> The legacy ORM jobs (5.1-based ones at least) are getting triggered > >> >>> when > >> >>> they should not be. Generally they all show they the run is > triggered > >> >>> by a > >> >>> "SCM change", but it does not show any changes. The underlying > >> >>> problem > >> >>> (although I am at a loss as to why) is that there has indeed been > SCM > >> >>> changes pushed to Github, but against completely different branches. > >> >>> As > >> >>> far as I can tell these job's Github setting are correct. Any ideas > >> >>> what > >> >>> is going on? > >> >>> > >> >>> This would not be such a big deal if the CI environment did not > >> >>> throttle > >> >>> all waiting jobs down to one active job. So the jobs I am actually > >> >>> interested in are forced to wait (sometimes over an hour) for these > >> >>> jobs > >> >>> that should not even be running. > >> >>> > >> >>> > >> >>> > >> >> _______________________________________________ > >> >> hibernate-dev mailing list > >> >> hibernate-dev at lists.jboss.org > >> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Thu Jan 4 08:58:06 2018 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 4 Jan 2018 08:58:06 -0500 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: I can arrange to keep access to the specific ExtendedBeanManager/LifecycleListener, that is not difficult. What changes do we need from the CDI implementation? On Jan 3, 2018 4:36 PM, "Steve Ebersole" wrote: If you have access to the specific ExtendedBeanManager/LifecycleListener, that should already be enough. Those things are already properly scoped to the SessionFactory, unless you are passing the same instance to multiple SessionFactory instances. On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow wrote: > On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole > wrote: > >> Scott, how would we register a listener for this event? >> > > If we want a standard solution, we could ask for an earlier CDI > pre-destroy listener. 
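As a straw man for that "earlier pre-destroy listener": the
beanManagerDestroyed notification already hinted at in the
ExtendedBeanManager javadoc could become a second callback next to the
existing beanManagerInitialized one. A sketch only, not an agreed API:

    import javax.enterprise.inject.spi.BeanManager;

    public interface ExtendedBeanManager {

        void registerLifecycleListener(LifecycleListener lifecycleListener);

        interface LifecycleListener {

            // Existing callback: fired by the container once the
            // Application scope is active.
            void beanManagerInitialized(BeanManager beanManager);

            // Proposed callback: fired before the Application scope is
            // destroyed, so ORM could release its CDI-managed beans while
            // Application-scoped beans are still usable.
            void beanManagerDestroyed(BeanManager beanManager);
        }
    }

The open question would then be whether WildFly can invoke the new callback
at a point where the Application scope is still active.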
> > The problem we have had with most CDI "listeners" so far is that they are >> non-contextual, meaning there has been no way to link that back to a >> specific SessionFactory.. If I can register this listener with a reference >> back to the Sessionfactory, this should actually be fine. >> > > I could pass the EMF to the org.hibernate.jpa.event.spi.jpa. > ExtendedBeanManager.LifecycleListener, if that helps. > > >> >> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: >> >>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >>> wrote: >>> >>> > Any dependency injection framework will have some capability to define >>> > the graph of dependencies across components, and such graph could be >>> > very complex, with details only known to the framework. >>> > >>> > I don't think we can solve the integration by having "before all >>> > others" / "after all others" phases as that's too coarse grained to >>> > define a full graph; we need to find a way to have the DI framework >>> > take in consideration our additional components both in terms of DI >>> > consumers and providers - then let the framework wire up things in the >>> > order it prefers. This is also to allow the DI engine to print >>> > appropriate warnings for un-resolvable situations with its native >>> > error handling, which would resolve in more familiar error messages. >>> > >>> > If that's not doable *or a priority* then all we can do is try to make >>> > it clear enough that there will be limitations and hopefully describe >>> > these clearly. Some of such limitations might be puzzling as you >>> > describe. >>> > >>> > >>> > >>> > On 20 December 2017 at 12:50, Yoann Rodiere >>> wrote: >>> > > Hello all, >>> > > >>> > > TL;DR: Application-scoped beans cannot be used as part of the >>> @PreDestroy >>> > > method of ORM-instantiated CDI beans, and it's a bit odd because >>> they can >>> > > be used as part of the @PostConstruct method. >>> > > >>> > > I've been testing the CDI integration in Hibernate ORM for the past >>> few >>> > > days, trying to integrate it into Search. I think I've discovered >>> > something >>> > > odd: when CDI-managed beans are destroyed, they cannot access other >>> > > Application-scoped CDI beans anymore. Not sure whether this is a >>> problem >>> > or >>> > > not, so maybe we should discuss it a bit before going forward with >>> the >>> > > current behavior. >>> > > >>> > > Short reminder: scopes define when CDI beans are created and >>> destroyed. >>> > > @ApplicationScoped is pretty self-explanatory: created when the >>> > application >>> > > starts and destroyed when it stops. Some other scopes are a bit more >>> > > convoluted: @Singleton basically means created *before* the >>> application >>> > > starts and destroyed *after* the application stops (and also means >>> "this >>> > > bean shall not be proxied"), @Dependent means created when an >>> instance is >>> > > requested and destroyed when the instance is released, etc. >>> > > >>> > > The thing is, Hibernate ORM is typically started very early and shut >>> down >>> > > very late in the CDI lifecycle - at least within WildFly. So when >>> > Hibernate >>> > > starts, CDI Application-scoped beans haven't been instantiated yet, >>> and >>> > it >>> > > turns out that when Hibernate ORM shuts down, CDI has already >>> destroyed >>> > > Application-scoped beans. 
>>> > > >>> > > Regarding startup, Steve and Scott solved the problem by delaying >>> bean >>> > > instantiation to some point in the future when the Application scope >>> is >>> > > active (and thus Application-scoped beans are available). This makes >>> it >>> > > possible to use Application-scoped beans within ORM-instantiated >>> beans as >>> > > soon as the latter are constructed (i.e. within their @PostConstruct >>> > > methods). >>> > > However, when Hibernate ORM shuts down, the Application scope has >>> already >>> > > been terminated. So when ORM destroys the beans it instantiated, >>> those >>> > > ORM-instantiated beans cannot call a method on referenced >>> > > Application-scoped beans (CDI proxies will throw an exception). >>> > > >>> > > All in all, the only type of beans we can currently use in a >>> @PreDestroy >>> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >>> beans >>> > > will work, but only because they are not proxied and thus you can >>> cheat >>> > and >>> > > use them even after they have been destroyed... which I definitely >>> > wouldn't >>> > > recommend. >>> > > >>> > > I see two ways to handle the issue: >>> > > >>> > > 1. We don't change anything, and simply document somewhere that >>> beans >>> > > instantiated as part of the CDI integration are instantiated >>> within >>> > the >>> > > Application scope, but are destroyed outside of it. And we suggest >>> > that any >>> > > bean used in @PostDestroy method in an ORM-instantiated bean >>> > (directly or >>> > > not) must have either a @Dependent scope, or a @Singleton scope >>> and no >>> > > @PostDestroy method. >>> > > 2. We implement an "early shut-down" somehow, which would bring >>> > forward >>> > > bean destruction to some time when the Application scope is still >>> > active. >>> > >>> >>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we >>> could >>> look at introducing a beanManagerDestroyed notification, if that is >>> useful >>> and we can find a way to implement it (javax.enterprise.spi.BeforeSh >>> utdown >>> [1] is not early enough to meet your requirements). >>> >>> Scott >>> >>> [1] >>> https://docs.oracle.com/javaee/7/api/javax/enterprise/inject >>> /spi/BeforeShutdown.html >>> >>> >>> > > >>> > > #1 may be enough for now, even though the behavior feels a bit odd, >>> and >>> > > forces users to resort to less-than-ideal practices (using a >>> @Singleton >>> > > bean after it has been destroyed). >>> > > >>> > > #2 would require changes in WildFly and may be a bit complex. In >>> > > particular, if we aren't careful, Application-scoped beans may not be >>> > able >>> > > to use Hibernate ORM from within their @PreDestroy methods... Which >>> is >>> > > probably not a good idea. So we would have to find a solution >>> together >>> > with >>> > > the WildFly team. Also to be considered: Hibernate Search would have >>> to >>> > be >>> > > shut down just before the "early shut-down" of Hibernate ORM occurs, >>> > > because Hibernate Search cannot function at all without the beans it >>> > > retrieves from the CDI context. >>> > > >>> > > Thoughts? 
>>> > > >>> > > >>> > > Yoann Rodi?re >>> > > Hibernate NoORM Team >>> > > yoann at hibernate.org >>> > > _______________________________________________ >>> > > hibernate-dev mailing list >>> > > hibernate-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> > >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >> From steve at hibernate.org Thu Jan 4 09:19:37 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 04 Jan 2018 14:19:37 +0000 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: Well there seems to be some disagreement about that. I personally think we do not need anything other than a pre-shutdown hook so that we can release our CDI references. Sanne seemed to think we needed something more "integrated". I think we should start with the simple and add deeper integration (which requires actual CDI changes) only if we see that is necessary. Sanne? On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow wrote: > I can arrange to keep access to the specific > ExtendedBeanManager/LifecycleListener, that is not difficult. > > What changes do we need from the CDI implementation? > > > On Jan 3, 2018 4:36 PM, "Steve Ebersole" wrote: > > If you have access to the specific ExtendedBeanManager/LifecycleListener, > that should already be enough. Those things are already properly scoped to > the SessionFactory, unless you are passing the same instance to multiple > SessionFactory instances. > > On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow wrote: > >> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole >> wrote: >> >>> Scott, how would we register a listener for this event? >>> >> >> If we want a standard solution, we could ask for an earlier CDI >> pre-destroy listener. >> >> The problem we have had with most CDI "listeners" so far is that they are >>> non-contextual, meaning there has been no way to link that back to a >>> specific SessionFactory.. If I can register this listener with a reference >>> back to the Sessionfactory, this should actually be fine. >>> >> >> I could pass the EMF to the org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, >> if that helps. >> >> >>> >>> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: >>> >>>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >>>> wrote: >>>> >>>> > Any dependency injection framework will have some capability to define >>>> > the graph of dependencies across components, and such graph could be >>>> > very complex, with details only known to the framework. >>>> > >>>> > I don't think we can solve the integration by having "before all >>>> > others" / "after all others" phases as that's too coarse grained to >>>> > define a full graph; we need to find a way to have the DI framework >>>> > take in consideration our additional components both in terms of DI >>>> > consumers and providers - then let the framework wire up things in the >>>> > order it prefers. This is also to allow the DI engine to print >>>> > appropriate warnings for un-resolvable situations with its native >>>> > error handling, which would resolve in more familiar error messages. >>>> > >>>> > If that's not doable *or a priority* then all we can do is try to make >>>> > it clear enough that there will be limitations and hopefully describe >>>> > these clearly. 
Some of such limitations might be puzzling as you >>>> > describe. >>>> > >>>> > >>>> > >>>> > On 20 December 2017 at 12:50, Yoann Rodiere >>>> wrote: >>>> > > Hello all, >>>> > > >>>> > > TL;DR: Application-scoped beans cannot be used as part of the >>>> @PreDestroy >>>> > > method of ORM-instantiated CDI beans, and it's a bit odd because >>>> they can >>>> > > be used as part of the @PostConstruct method. >>>> > > >>>> > > I've been testing the CDI integration in Hibernate ORM for the past >>>> few >>>> > > days, trying to integrate it into Search. I think I've discovered >>>> > something >>>> > > odd: when CDI-managed beans are destroyed, they cannot access other >>>> > > Application-scoped CDI beans anymore. Not sure whether this is a >>>> problem >>>> > or >>>> > > not, so maybe we should discuss it a bit before going forward with >>>> the >>>> > > current behavior. >>>> > > >>>> > > Short reminder: scopes define when CDI beans are created and >>>> destroyed. >>>> > > @ApplicationScoped is pretty self-explanatory: created when the >>>> > application >>>> > > starts and destroyed when it stops. Some other scopes are a bit more >>>> > > convoluted: @Singleton basically means created *before* the >>>> application >>>> > > starts and destroyed *after* the application stops (and also means >>>> "this >>>> > > bean shall not be proxied"), @Dependent means created when an >>>> instance is >>>> > > requested and destroyed when the instance is released, etc. >>>> > > >>>> > > The thing is, Hibernate ORM is typically started very early and >>>> shut down >>>> > > very late in the CDI lifecycle - at least within WildFly. So when >>>> > Hibernate >>>> > > starts, CDI Application-scoped beans haven't been instantiated yet, >>>> and >>>> > it >>>> > > turns out that when Hibernate ORM shuts down, CDI has already >>>> destroyed >>>> > > Application-scoped beans. >>>> > > >>>> > > Regarding startup, Steve and Scott solved the problem by delaying >>>> bean >>>> > > instantiation to some point in the future when the Application >>>> scope is >>>> > > active (and thus Application-scoped beans are available). This >>>> makes it >>>> > > possible to use Application-scoped beans within ORM-instantiated >>>> beans as >>>> > > soon as the latter are constructed (i.e. within their @PostConstruct >>>> > > methods). >>>> > > However, when Hibernate ORM shuts down, the Application scope has >>>> already >>>> > > been terminated. So when ORM destroys the beans it instantiated, >>>> those >>>> > > ORM-instantiated beans cannot call a method on referenced >>>> > > Application-scoped beans (CDI proxies will throw an exception). >>>> > > >>>> > > All in all, the only type of beans we can currently use in a >>>> @PreDestroy >>>> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >>>> beans >>>> > > will work, but only because they are not proxied and thus you can >>>> cheat >>>> > and >>>> > > use them even after they have been destroyed... which I definitely >>>> > wouldn't >>>> > > recommend. >>>> > > >>>> > > I see two ways to handle the issue: >>>> > > >>>> > > 1. We don't change anything, and simply document somewhere that >>>> beans >>>> > > instantiated as part of the CDI integration are instantiated >>>> within >>>> > the >>>> > > Application scope, but are destroyed outside of it. 
And we
>>>> suggest
>>>> > that any
>>>> > > bean used in a @PreDestroy method in an ORM-instantiated bean
>>>> > (directly or
>>>> > > not) must have either a @Dependent scope, or a @Singleton scope
>>>> and no
>>>> > > @PreDestroy method.
>>>> > > 2. We implement an "early shut-down" somehow, which would bring
>>>> > forward
>>>> > > bean destruction to some time when the Application scope is still
>>>> > active.
>>>> >
>>>>
>>>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we
>>>> could
>>>> look at introducing a beanManagerDestroyed notification, if that is
>>>> useful
>>>> and we can find a way to implement it
>>>> (javax.enterprise.inject.spi.BeforeShutdown
>>>> [1] is not early enough to meet your requirements).
>>>>
>>>> Scott
>>>>
>>>> [1]
>>>>
>>>> https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html
>>>>
>>>>
>>>> > >
>>>> > > #1 may be enough for now, even though the behavior feels a bit odd,
>>>> and
>>>> > > forces users to resort to less-than-ideal practices (using a
>>>> @Singleton
>>>> > > bean after it has been destroyed).
>>>> > >
>>>> > > #2 would require changes in WildFly and may be a bit complex. In
>>>> > > particular, if we aren't careful, Application-scoped beans may not
>>>> be
>>>> > able
>>>> > > to use Hibernate ORM from within their @PreDestroy methods... Which
>>>> is
>>>> > > probably not a good idea. So we would have to find a solution
>>>> together
>>>> > with
>>>> > > the WildFly team. Also to be considered: Hibernate Search would
>>>> have to
>>>> > be
>>>> > > shut down just before the "early shut-down" of Hibernate ORM occurs,
>>>> > > because Hibernate Search cannot function at all without the beans it
>>>> > > retrieves from the CDI context.
>>>> > >
>>>> > > Thoughts?
>>>> > >
>>>> > >
>>>> > > Yoann Rodière
>>>> > > Hibernate NoORM Team
>>>> > > yoann at hibernate.org
>>>> > > _______________________________________________
>>>> > > hibernate-dev mailing list
>>>> > > hibernate-dev at lists.jboss.org
>>>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>> >
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>
>>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org Thu Jan 4 11:39:06 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Thu, 4 Jan 2018 16:39:06 +0000
Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope
In-Reply-To:
References:
Message-ID:

On 4 January 2018 at 14:19, Steve Ebersole wrote:
> Well there seems to be some disagreement about that. I personally think we
> do not need anything other than a pre-shutdown hook so that we can release
> our CDI references. Sanne seemed to think we needed something more
> "integrated". I think we should start with the simple and add deeper
> integration (which requires actual CDI changes) only if we see that is
> necessary. Sanne?

I guess it's totally possible that the current solution you all have
been working on covers most practical use cases and most immediate
users' needs, so that's great, but I wonder if we can clearly document
the limitations which I'm assuming we have (I can't).

I don't believe we can handle all complex dependency graphs that a CDI
user might expect with before & after phases; however, I had no time to
prove this with a meaningful example.
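To at least make the shape of the problem concrete (not the complex-graph
proof, just the basic pattern Yoann described; the bean names here are
hypothetical):

    import javax.annotation.PreDestroy;
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    @ApplicationScoped
    class AuditService {
        void flush() { /* ... */ }
    }

    // A bean instantiated by ORM through the CDI integration,
    // e.g. an entity listener.
    class OrmManagedListener {

        @Inject
        AuditService auditService;

        @PreDestroy
        void shutdown() {
            // If ORM only releases this bean after the Application scope
            // has been destroyed, this call goes through a CDI proxy into
            // a dead context and throws. Only the DI engine knows the full
            // graph well enough to order these destructions correctly.
            auditService.flush();
        }
    }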
If someone with more CDI experience could experiment with complex dependency graphs then we should be able to better document the limitations - which I strongly suspect exist - and make a good case to need the JPA/CDI integration deeper at spec level, however "make it work as users expect" might not be worthwhile of a spec update, one could say it's the implementation's job so essentially a problem in how we deal with integration details. It's possible that there's no practical need for such a deeper integration but it makes me a bit nervous to not be able to specify the limitations to users. More concrete example: Steve mentions having a "PRE-shutdown hook" to release our references to managed beans; what if some other beans depend on these? What if these other beans have wider scopes, like app scope? Clearly the CDI engine is in the position to figure this out and might want to initiate a cascade shutdown of such other beans (which we don't manage directly) so this is essentially initiating a whole-shutdown (not just a PRE-shutdown). Vice-versa, same situation can arise during initialization; I'm afraid this would get hairy quickly, while supposedly any CDI implementation should have the means to handle ordering details appropriately, so I'd hope we delegate it all to it to happen during its normal phases rather than layering outer/inner phases around. I'm not sure who to ask for a better opinion; I'll add Stuart in CC as he's the only smart person I know with deep expertise in both Hibernate and CDI, with some luck he'll say I'm wrong and we're good :) Thanks, Sanne > > On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow wrote: >> >> I can arrange to keep access to the specific >> ExtendedBeanManager/LifecycleListener, that is not difficult. >> >> What changes do we need from the CDI implementation? >> >> >> On Jan 3, 2018 4:36 PM, "Steve Ebersole" wrote: >> >> If you have access to the specific ExtendedBeanManager/LifecycleListener, >> that should already be enough. Those things are already properly scoped to >> the SessionFactory, unless you are passing the same instance to multiple >> SessionFactory instances. >> >> On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow wrote: >>> >>> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole >>> wrote: >>>> >>>> Scott, how would we register a listener for this event? >>> >>> >>> If we want a standard solution, we could ask for an earlier CDI >>> pre-destroy listener. >>> >>>> The problem we have had with most CDI "listeners" so far is that they >>>> are non-contextual, meaning there has been no way to link that back to a >>>> specific SessionFactory.. If I can register this listener with a reference >>>> back to the Sessionfactory, this should actually be fine. >>> >>> >>> I could pass the EMF to the >>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, if >>> that helps. >>> >>>> >>>> >>>> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: >>>>> >>>>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >>>>> wrote: >>>>> >>>>> > Any dependency injection framework will have some capability to >>>>> > define >>>>> > the graph of dependencies across components, and such graph could be >>>>> > very complex, with details only known to the framework. 
>>>>> > >>>>> > I don't think we can solve the integration by having "before all >>>>> > others" / "after all others" phases as that's too coarse grained to >>>>> > define a full graph; we need to find a way to have the DI framework >>>>> > take in consideration our additional components both in terms of DI >>>>> > consumers and providers - then let the framework wire up things in >>>>> > the >>>>> > order it prefers. This is also to allow the DI engine to print >>>>> > appropriate warnings for un-resolvable situations with its native >>>>> > error handling, which would resolve in more familiar error messages. >>>>> > >>>>> > If that's not doable *or a priority* then all we can do is try to >>>>> > make >>>>> > it clear enough that there will be limitations and hopefully describe >>>>> > these clearly. Some of such limitations might be puzzling as you >>>>> > describe. >>>>> > >>>>> > >>>>> > >>>>> > On 20 December 2017 at 12:50, Yoann Rodiere >>>>> > wrote: >>>>> > > Hello all, >>>>> > > >>>>> > > TL;DR: Application-scoped beans cannot be used as part of the >>>>> > > @PreDestroy >>>>> > > method of ORM-instantiated CDI beans, and it's a bit odd because >>>>> > > they can >>>>> > > be used as part of the @PostConstruct method. >>>>> > > >>>>> > > I've been testing the CDI integration in Hibernate ORM for the past >>>>> > > few >>>>> > > days, trying to integrate it into Search. I think I've discovered >>>>> > something >>>>> > > odd: when CDI-managed beans are destroyed, they cannot access other >>>>> > > Application-scoped CDI beans anymore. Not sure whether this is a >>>>> > > problem >>>>> > or >>>>> > > not, so maybe we should discuss it a bit before going forward with >>>>> > > the >>>>> > > current behavior. >>>>> > > >>>>> > > Short reminder: scopes define when CDI beans are created and >>>>> > > destroyed. >>>>> > > @ApplicationScoped is pretty self-explanatory: created when the >>>>> > application >>>>> > > starts and destroyed when it stops. Some other scopes are a bit >>>>> > > more >>>>> > > convoluted: @Singleton basically means created *before* the >>>>> > > application >>>>> > > starts and destroyed *after* the application stops (and also means >>>>> > > "this >>>>> > > bean shall not be proxied"), @Dependent means created when an >>>>> > > instance is >>>>> > > requested and destroyed when the instance is released, etc. >>>>> > > >>>>> > > The thing is, Hibernate ORM is typically started very early and >>>>> > > shut down >>>>> > > very late in the CDI lifecycle - at least within WildFly. So when >>>>> > Hibernate >>>>> > > starts, CDI Application-scoped beans haven't been instantiated yet, >>>>> > > and >>>>> > it >>>>> > > turns out that when Hibernate ORM shuts down, CDI has already >>>>> > > destroyed >>>>> > > Application-scoped beans. >>>>> > > >>>>> > > Regarding startup, Steve and Scott solved the problem by delaying >>>>> > > bean >>>>> > > instantiation to some point in the future when the Application >>>>> > > scope is >>>>> > > active (and thus Application-scoped beans are available). This >>>>> > > makes it >>>>> > > possible to use Application-scoped beans within ORM-instantiated >>>>> > > beans as >>>>> > > soon as the latter are constructed (i.e. within their >>>>> > > @PostConstruct >>>>> > > methods). >>>>> > > However, when Hibernate ORM shuts down, the Application scope has >>>>> > > already >>>>> > > been terminated. 
So when ORM destroys the beans it instantiated, >>>>> > > those >>>>> > > ORM-instantiated beans cannot call a method on referenced >>>>> > > Application-scoped beans (CDI proxies will throw an exception). >>>>> > > >>>>> > > All in all, the only type of beans we can currently use in a >>>>> > > @PreDestroy >>>>> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >>>>> > > beans >>>>> > > will work, but only because they are not proxied and thus you can >>>>> > > cheat >>>>> > and >>>>> > > use them even after they have been destroyed... which I definitely >>>>> > wouldn't >>>>> > > recommend. >>>>> > > >>>>> > > I see two ways to handle the issue: >>>>> > > >>>>> > > 1. We don't change anything, and simply document somewhere that >>>>> > > beans >>>>> > > instantiated as part of the CDI integration are instantiated >>>>> > > within >>>>> > the >>>>> > > Application scope, but are destroyed outside of it. And we >>>>> > > suggest >>>>> > that any >>>>> > > bean used in @PreDestroy method in an ORM-instantiated bean >>>>> > (directly or >>>>> > > not) must have either a @Dependent scope, or a @Singleton scope >>>>> > > and no >>>>> > > @PreDestroy method. >>>>> > > 2. We implement an "early shut-down" somehow, which would bring >>>>> > forward >>>>> > > bean destruction to some time when the Application scope is >>>>> > > still >>>>> > active. >>>>> > >>>>> >>>>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we >>>>> could >>>>> look at introducing a beanManagerDestroyed notification, if that is >>>>> useful >>>>> and we can find a way to implement it >>>>> (javax.enterprise.inject.spi.BeforeShutdown >>>>> [1] is not early enough to meet your requirements). >>>>> >>>>> Scott >>>>> >>>>> [1] >>>>> >>>>> https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html >>>>> >>>>> >>>>> > > >>>>> > > #1 may be enough for now, even though the behavior feels a bit odd, >>>>> > > and >>>>> > > forces users to resort to less-than-ideal practices (using a >>>>> > > @Singleton >>>>> > > bean after it has been destroyed). >>>>> > > >>>>> > > #2 would require changes in WildFly and may be a bit complex. In >>>>> > > particular, if we aren't careful, Application-scoped beans may not >>>>> > > be >>>>> > able >>>>> > > to use Hibernate ORM from within their @PreDestroy methods... Which >>>>> > > is >>>>> > > probably not a good idea. So we would have to find a solution >>>>> > > together >>>>> > with >>>>> > > the WildFly team. Also to be considered: Hibernate Search would >>>>> > > have to >>>>> > be >>>>> > > shut down just before the "early shut-down" of Hibernate ORM >>>>> > > occurs, >>>>> > > because Hibernate Search cannot function at all without the beans >>>>> > > it >>>>> > > retrieves from the CDI context. >>>>> > > >>>>> > > Thoughts? 
>>>>> > > >>>>> > > >>>>> > > Yoann Rodière >>>>> > > Hibernate NoORM Team >>>>> > > yoann at hibernate.org >>>>> > > _______________________________________________ >>>>> > > hibernate-dev mailing list >>>>> > > hibernate-dev at lists.jboss.org >>>>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>> > >>>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >> > From sanne at hibernate.org Thu Jan 4 11:41:32 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 4 Jan 2018 16:41:32 +0000 Subject: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope In-Reply-To: References: Message-ID: On 4 January 2018 at 16:39, Sanne Grinovero wrote: > On 4 January 2018 at 14:19, Steve Ebersole wrote: >> Well there seems to be some disagreement about that. I personally think we >> do not need anything other than a pre-shutdown hook so that we can release >> our CDI references. Sanne seemed to think we needed something more >> "integrated". I think we should start with the simple and add deeper >> integration (which requires actual CDI changes) only if we see that is >> necessary. Sanne? > > I guess it's totally possible that the current solution you all have > been working on covers most practical use cases and most immediate > user's needs, so that's great, but I wonder if we can clearly document > the limitations which I'm assuming we have (I can't). > > I don't believe we can handle all complex dependency graphs that a CDI > user might expect with before & after phases, however I had no time to > prove this with a meaningful example. > > If someone with more CDI experience could experiment with complex > dependency graphs then we should be able to better document the > limitations - which I strongly suspect exist - and make a good case to > need the JPA/CDI integration deeper at spec level; however "make it > work as users expect" might not be worth a spec update, one > could say it's the implementation's job so essentially a problem in > how we deal with integration details. > > It's possible that there's no practical need for such a deeper > integration but it makes me a bit nervous to not be able to specify > the limitations to users. > > More concrete example: Steve mentions having a "PRE-shutdown hook" to > release our references to managed beans; what if some other beans > depend on these? What if these other beans have wider scopes, like app > scope? Clearly the CDI engine is in the position to figure this out > and might want to initiate a cascade shutdown of such other beans > (which we don't manage directly) so this is essentially initiating a > whole-shutdown (not just a PRE-shutdown). > > Vice-versa, the same situation can arise during initialization; I'm afraid > this would get hairy quickly, while supposedly any CDI implementation > should have the means to handle ordering details appropriately, so I'd > hope we can delegate it all to the CDI engine and let this happen > during its normal phases rather than layering outer/inner phases around. > > I'm not sure who to ask for a better opinion; I'll add Stuart in CC as > he's the only smart person I know with deep expertise in both > Hibernate and CDI, with some luck he'll say I'm wrong and we're good > :) Lol, re-reading "Hibernate + CDI expertise" it's hilarious I forgot the most obvious expert name :) Not sure if Gavin is interested but I'll add him too.
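To make the scope mismatch concrete, here is a minimal sketch of the failure mode Yoann describes earlier in the thread. The class names are invented purely for illustration, and it assumes a bean that ORM instantiates through the CDI integration:

    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    // An ordinary Application-scoped collaborator: created when the
    // application starts, destroyed when the Application scope ends.
    @ApplicationScoped
    class AuditService {
        void record(String event) {
            System.out.println("audit: " + event);
        }
    }

    // A bean instantiated by Hibernate ORM through the CDI integration.
    class OrmManagedListener {

        @Inject
        AuditService auditService;

        @PostConstruct
        void init() {
            // Works: thanks to the delayed instantiation Steve and Scott
            // implemented, this runs while the Application scope is active.
            auditService.record("listener created");
        }

        @PreDestroy
        void release() {
            // Fails: ORM shuts down after the Application scope has ended,
            // so the injected proxy throws (typically a
            // ContextNotActiveException) when this bean is destroyed.
            auditService.record("listener destroyed");
        }
    }

Per Yoann's option #1, making AuditService @Dependent instead would keep the @PreDestroy path safe, since @Dependent instances are destroyed together with the bean that owns them.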
Thanks, Sanne > > Thanks, > Sanne > > >> >> On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow wrote: >>> >>> I can arrange to keep access to the specific >>> ExtendedBeanManager/LifecycleListener, that is not difficult. >>> >>> What changes do we need from the CDI implementation? >>> >>> >>> On Jan 3, 2018 4:36 PM, "Steve Ebersole" wrote: >>> >>> If you have access to the specific ExtendedBeanManager/LifecycleListener, >>> that should already be enough. Those things are already properly scoped to >>> the SessionFactory, unless you are passing the same instance to multiple >>> SessionFactory instances. >>> >>> On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow wrote: >>>> >>>> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole >>>> wrote: >>>>> >>>>> Scott, how would we register a listener for this event? >>>> >>>> >>>> If we want a standard solution, we could ask for an earlier CDI >>>> pre-destroy listener. >>>> >>>>> The problem we have had with most CDI "listeners" so far is that they >>>>> are non-contextual, meaning there has been no way to link that back to a >>>>> specific SessionFactory.. If I can register this listener with a reference >>>>> back to the Sessionfactory, this should actually be fine. >>>> >>>> >>>> I could pass the EMF to the >>>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, if >>>> that helps. >>>> >>>>> >>>>> >>>>> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow wrote: >>>>>> >>>>>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero >>>>>> wrote: >>>>>> >>>>>> > Any dependency injection framework will have some capability to >>>>>> > define >>>>>> > the graph of dependencies across components, and such graph could be >>>>>> > very complex, with details only known to the framework. >>>>>> > >>>>>> > I don't think we can solve the integration by having "before all >>>>>> > others" / "after all others" phases as that's too coarse grained to >>>>>> > define a full graph; we need to find a way to have the DI framework >>>>>> > take in consideration our additional components both in terms of DI >>>>>> > consumers and providers - then let the framework wire up things in >>>>>> > the >>>>>> > order it prefers. This is also to allow the DI engine to print >>>>>> > appropriate warnings for un-resolvable situations with its native >>>>>> > error handling, which would resolve in more familiar error messages. >>>>>> > >>>>>> > If that's not doable *or a priority* then all we can do is try to >>>>>> > make >>>>>> > it clear enough that there will be limitations and hopefully describe >>>>>> > these clearly. Some of such limitations might be puzzling as you >>>>>> > describe. >>>>>> > >>>>>> > >>>>>> > >>>>>> > On 20 December 2017 at 12:50, Yoann Rodiere >>>>>> > wrote: >>>>>> > > Hello all, >>>>>> > > >>>>>> > > TL;DR: Application-scoped beans cannot be used as part of the >>>>>> > > @PreDestroy >>>>>> > > method of ORM-instantiated CDI beans, and it's a bit odd because >>>>>> > > they can >>>>>> > > be used as part of the @PostConstruct method. >>>>>> > > >>>>>> > > I've been testing the CDI integration in Hibernate ORM for the past >>>>>> > > few >>>>>> > > days, trying to integrate it into Search. I think I've discovered >>>>>> > something >>>>>> > > odd: when CDI-managed beans are destroyed, they cannot access other >>>>>> > > Application-scoped CDI beans anymore. Not sure whether this is a >>>>>> > > problem >>>>>> > or >>>>>> > > not, so maybe we should discuss it a bit before going forward with >>>>>> > > the >>>>>> > > current behavior. 
>>>>>> > > >>>>>> > > Short reminder: scopes define when CDI beans are created and >>>>>> > > destroyed. >>>>>> > > @ApplicationScoped is pretty self-explanatory: created when the >>>>>> > application >>>>>> > > starts and destroyed when it stops. Some other scopes are a bit >>>>>> > > more >>>>>> > > convoluted: @Singleton basically means created *before* the >>>>>> > > application >>>>>> > > starts and destroyed *after* the application stops (and also means >>>>>> > > "this >>>>>> > > bean shall not be proxied"), @Dependent means created when an >>>>>> > > instance is >>>>>> > > requested and destroyed when the instance is released, etc. >>>>>> > > >>>>>> > > The thing is, Hibernate ORM is typically started very early and >>>>>> > > shut down >>>>>> > > very late in the CDI lifecycle - at least within WildFly. So when >>>>>> > Hibernate >>>>>> > > starts, CDI Application-scoped beans haven't been instantiated yet, >>>>>> > > and >>>>>> > it >>>>>> > > turns out that when Hibernate ORM shuts down, CDI has already >>>>>> > > destroyed >>>>>> > > Application-scoped beans. >>>>>> > > >>>>>> > > Regarding startup, Steve and Scott solved the problem by delaying >>>>>> > > bean >>>>>> > > instantiation to some point in the future when the Application >>>>>> > > scope is >>>>>> > > active (and thus Application-scoped beans are available). This >>>>>> > > makes it >>>>>> > > possible to use Application-scoped beans within ORM-instantiated >>>>>> > > beans as >>>>>> > > soon as the latter are constructed (i.e. within their >>>>>> > > @PostConstruct >>>>>> > > methods). >>>>>> > > However, when Hibernate ORM shuts down, the Application scope has >>>>>> > > already >>>>>> > > been terminated. So when ORM destroys the beans it instantiated, >>>>>> > > those >>>>>> > > ORM-instantiated beans cannot call a method on referenced >>>>>> > > Application-scoped beans (CDI proxies will throw an exception). >>>>>> > > >>>>>> > > All in all, the only type of beans we can currently use in a >>>>>> > > @PreDestroy >>>>>> > > method of an ORM-instantiated bean is @Dependent beans. @Singleton >>>>>> > > beans >>>>>> > > will work, but only because they are not proxied and thus you can >>>>>> > > cheat >>>>>> > and >>>>>> > > use them even after they have been destroyed... which I definitely >>>>>> > wouldn't >>>>>> > > recommend. >>>>>> > > >>>>>> > > I see two ways to handle the issue: >>>>>> > > >>>>>> > > 1. We don't change anything, and simply document somewhere that >>>>>> > > beans >>>>>> > > instantiated as part of the CDI integration are instantiated >>>>>> > > within >>>>>> > the >>>>>> > > Application scope, but are destroyed outside of it. And we >>>>>> > > suggest >>>>>> > that any >>>>>> > > bean used in @PreDestroy method in an ORM-instantiated bean >>>>>> > (directly or >>>>>> > > not) must have either a @Dependent scope, or a @Singleton scope >>>>>> > > and no >>>>>> > > @PreDestroy method. >>>>>> > > 2. We implement an "early shut-down" somehow, which would bring >>>>>> > forward >>>>>> > > bean destruction to some time when the Application scope is >>>>>> > > still >>>>>> > active. >>>>>> > >>>>>> >>>>>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager mentions that we >>>>>> could >>>>>> look at introducing a beanManagerDestroyed notification, if that is >>>>>> useful >>>>>> and we can find a way to implement it >>>>>> (javax.enterprise.inject.spi.BeforeShutdown >>>>>> [1] is not early enough to meet your requirements). 
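For reference, that proposal would roughly amount to adding a second callback next to the existing one. A sketch against the 5.2-era SPI follows; the interface shape is abridged from memory and should be checked against the real org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager, and beanManagerDestroyed is hypothetical - it is exactly the addition under discussion, not existing API:

    import javax.enterprise.inject.spi.BeanManager;

    // Abridged sketch of the ExtendedBeanManager contract; verify the
    // exact shape against the real SPI before relying on it.
    public interface ExtendedBeanManager {

        void registerLifecycleListener(LifecycleListener lifecycleListener);

        interface LifecycleListener {

            // Exists today: the container signals that the real BeanManager
            // is usable, which is how ORM delays bean instantiation until
            // the Application scope is active.
            void beanManagerInitialized(BeanManager beanManager);

            // Hypothetical - the proposed notification: the container would
            // signal that the BeanManager is about to go away, so ORM could
            // release its bean references while Application-scoped
            // dependencies are still usable.
            void beanManagerDestroyed(BeanManager beanManager);
        }
    }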
>>>>>> >>>>>> Scott >>>>>> >>>>>> [1] >>>>>> >>>>>> https://docs.oracle.com/javaee/7/api/javax/enterprise/inject/spi/BeforeShutdown.html >>>>>> >>>>>> >>>>>> > > >>>>>> > > #1 may be enough for now, even though the behavior feels a bit odd, >>>>>> > > and >>>>>> > > forces users to resort to less-than-ideal practices (using a >>>>>> > > @Singleton >>>>>> > > bean after it has been destroyed). >>>>>> > > >>>>>> > > #2 would require changes in WildFly and may be a bit complex. In >>>>>> > > particular, if we aren't careful, Application-scoped beans may not >>>>>> > > be >>>>>> > able >>>>>> > > to use Hibernate ORM from within their @PreDestroy methods... Which >>>>>> > > is >>>>>> > > probably not a good idea. So we would have to find a solution >>>>>> > > together >>>>>> > with >>>>>> > > the WildFly team. Also to be considered: Hibernate Search would >>>>>> > > have to >>>>>> > be >>>>>> > > shut down just before the "early shut-down" of Hibernate ORM >>>>>> > > occurs, >>>>>> > > because Hibernate Search cannot function at all without the beans >>>>>> > > it >>>>>> > > retrieves from the CDI context. >>>>>> > > >>>>>> > > Thoughts? >>>>>> > > >>>>>> > > >>>>>> > > Yoann Rodière >>>>>> > > Hibernate NoORM Team >>>>>> > > yoann at hibernate.org >>>>>> > > _______________________________________________ >>>>>> > > hibernate-dev mailing list >>>>>> > > hibernate-dev at lists.jboss.org >>>>>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>>> > >>>>>> _______________________________________________ >>>>>> hibernate-dev mailing list >>>>>> hibernate-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> >>> >> From sanne at hibernate.org Thu Jan 4 18:03:53 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 4 Jan 2018 23:03:53 +0000 Subject: [hibernate-dev] New CI slaves now available! Message-ID: Hi all,

we're having shiny new boxes running CI: more secure, way faster and less "out of disk space" problems I hope.

# Slaves

Slaves have been rebuilt from scratch:
- from Fedora 25 to Fedora 27
- NVMe disks for all storage, including databases, JDKs, dependency stores, indexes and journals
- Now using C5 instances to benefit from Amazon's new "Nitro" engines [1]
- hardware offloading of network operations by enabling ENA [2]
- NVMe drives also using provisioned IO

This took a bit of unexpected low level work as .. Fedora images don't support ENA yet so I had to create a custom Fedora re-distribution AMI first, it wasn't possible to simply compile the kernel modules for the standard Fedora images. These features are expected to come in future Fedora Cloud images but I didn't want to wait so made our own :) [3]

# Cloud scaling

Idle slaves will self-terminate after some timeout (currently 30m). When there are many jobs queueing up, more slaves (up to 5) will automatically start.

If you're the first to trigger a build you'll have to be patient, as it's possible after some quiet time (after the night?) all slaves are gone; the system will boot up new ones automatically ASAP but this initial boot takes some extra couple of minutes.

# Master node

Well, security patching mostly, but also finally figured out how to workaround the bugs which were preventing us to upgrade Jenkins.

So now Jenkins is upgraded to latest, including *all plugins*. It seems to work but let's keep an eye on it, those plugins are not all maintained at the quality one would expect. 
In particular attempting to change EC2 configuration properties will now trigger a super annoying NPE [4]; either don't make further changes or resort to XML editing of the configuration. # Next I'm not entirely done; eventually I'd like to convert our master node to ENA/C5/NVMe as well - especially to be able to move all master and slaves into the same physical cluster - but I'll stop now and get back to Java so you all get a chance to identify problems caused by the new slaves before I cause more trouble.. Thanks, Sanne 1 - https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/ 2 - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html 3 - https://pagure.io/atomic-wg/issue/271 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856 From steve at hibernate.org Thu Jan 4 18:52:53 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 04 Jan 2018 23:52:53 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: Awesome Sanne! Great work. Anything you need us to do to our jobs? On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero wrote: > Hi all, > > we're having shiny new boxes running CI: more secure, way faster and > less "out of disk space" prolems I hope. > > # Slaves > > Slaves have been rebuilt from scratch: > - from Fedora 25 to Fedora 27 > - NVMe disks for all storage, including databases, JDKs, dependency > stores, indexes and journals > - Now using C5 instances to benefit from Amazon's new "Nitro" engines [1] > - hardware offloading of network operations by enabling ENA [2] > - NVMe drives also using provisioned IO > > This took a bit of unexpected low level work as .. Fedora images don't > support ENA yet so I had to create a custom Fedora re-distribution AMI > first, it wasn't possible to simply compile the kernel modules for the > standard Fedora images. These features are expected to come in future > Fedora Cloud images but I didn't want to wait so made our own :) [3] > > # Cloud scaling > > Idle slaves will self-terminate after some timeout (currently 30m). > When there are many jobs queueing up, more slaves (up to 5) will > automatically start. > > If you're the first to trigger a build you'll have to be patient, as > it's possible after some quiet time (after the night?) all slaves are > gone; the system will boot up new ones automatically ASAP but this > initial boot takes some extra couple of minutes. > > # Master node > > Well, security patching mostly, but also finally figured out how to > workaround the bugs which were preventing us to upgrade Jenkins. > > So now Jenkins is upgraded to latest, including *all plugins*. It > seems to work but let's keep an eye on it, those plugins are not all > maintained at the quality one would expect. > > In particular attempting to change EC2 configuration properties will > now trigger a super annoying NPE [4]; either don't make further > changes or resort to XML editing of the configuration. > > # Next > > I'm not entirely done; eventually I'd like to convert our master node > to ENA/C5/NVMe as well - especially to be able to move all master and > slaves into the same physical cluster - but I'll stop now and get back > to Java so you all get a chance to identify problems caused by the new > slaves before I cause more trouble.. 
> > Thanks, > Sanne > > 1 - > https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/ > 2 - > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html > 3 - https://pagure.io/atomic-wg/issue/271 > 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gbadner at redhat.com Thu Jan 4 20:28:16 2018 From: gbadner at redhat.com (Gail Badner) Date: Thu, 4 Jan 2018 17:28:16 -0800 Subject: [hibernate-dev] Plans to release 5.2.13? Message-ID: We discussed stopping 5.2 releases at the F2F, but I can't remember what was decided. I see that there is a 5.2 branch. Should we be backporting to 5.2 branch? Thanks, Gail From yoann at hibernate.org Fri Jan 5 02:57:03 2018 From: yoann at hibernate.org (Yoann Rodiere) Date: Fri, 5 Jan 2018 08:57:03 +0100 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: Great, thanks for all the work!

Now that we have on-demand slave spawning, maybe we could get rid of our "hack" consisting in assigning 5 slots to each slave and a weight of 3 to each job? I would expect the website and release jobs to rarely wait in the queue, and if they do we can always set up a specific "priority queue" for those jobs, with a dedicated slave pool.

Just asking for this because last time I checked, it was not possible to assign weight to jobs defined as Jenkins pipelines. So these jobs ended up with a weight of 1, and we ended up running multiple instances of those on the same slave... which is obviously not good.

I can do the boring job editing work on each and every job, I'm just asking if it seems ok to you... ?

Yoann Rodière Hibernate NoORM Team yoann at hibernate.org

On 5 January 2018 at 00:52, Steve Ebersole wrote: > Awesome Sanne! Great work. > > Anything you need us to do to our jobs? > > On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero wrote: > > > Hi all, > > > > we're having shiny new boxes running CI: more secure, way faster and > > less "out of disk space" problems I hope. > > > > # Slaves > > > > Slaves have been rebuilt from scratch: > > - from Fedora 25 to Fedora 27 > > - NVMe disks for all storage, including databases, JDKs, dependency > > stores, indexes and journals > > - Now using C5 instances to benefit from Amazon's new "Nitro" engines > [1] > > - hardware offloading of network operations by enabling ENA [2] > > - NVMe drives also using provisioned IO > > > > This took a bit of unexpected low level work as .. Fedora images don't > > support ENA yet so I had to create a custom Fedora re-distribution AMI > > first, it wasn't possible to simply compile the kernel modules for the > > standard Fedora images. These features are expected to come in future > > Fedora Cloud images but I didn't want to wait so made our own :) [3] > > > > # Cloud scaling > > > > Idle slaves will self-terminate after some timeout (currently 30m). > > When there are many jobs queueing up, more slaves (up to 5) will > > automatically start. > > > > If you're the first to trigger a build you'll have to be patient, as > > it's possible after some quiet time (after the night?) all slaves are > > gone; the system will boot up new ones automatically ASAP but this > > initial boot takes some extra couple of minutes. 
> > > > # Master node > > > > Well, security patching mostly, but also finally figured out how to > > workaround the bugs which were preventing us to upgrade Jenkins. > > > > So now Jenkins is upgraded to latest, including *all plugins*. It > > seems to work but let's keep an eye on it, those plugins are not all > > maintained at the quality one would expect. > > > > In particular attempting to change EC2 configuration properties will > > now trigger a super annoying NPE [4]; either don't make further > > changes or resort to XML editing of the configuration. > > > > # Next > > > > I'm not entirely done; eventually I'd like to convert our master node > > to ENA/C5/NVMe as well - especially to be able to move all master and > > slaves into the same physical cluster - but I'll stop now and get back > > to Java so you all get a chance to identify problems caused by the new > > slaves before I cause more trouble.. > > > > Thanks, > > Sanne > > > > 1 - > > https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/ > > 2 - > > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html > > 3 - https://pagure.io/atomic-wg/issue/271 > > 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856 > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jan 5 05:59:17 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 10:59:17 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: On 4 January 2018 at 23:52, Steve Ebersole wrote: > Awesome Sanne! Great work. > > Anything you need us to do to our jobs?

No changes *should* be needed. It would help me if you could all manually trigger the jobs you consider important and highlight suspicious problems so that we get awareness of regressions in short time.

I've already triggered some ~20 jobs last night; saw no problems so far but haven't tested any release yet, nor website related tasks.

Thanks, Sanne

From sanne at hibernate.org Fri Jan 5 06:05:47 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 11:05:47 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: On 5 January 2018 at 07:57, Yoann Rodiere wrote: > Great, thanks for all the work! > > Now that we have on-demand slave spawning, maybe we could get rid of our > "hack" consisting in assigning 5 slots to each slave and a weight of 3 to > each job? I would expect the website and release jobs to rarely wait in the > queue, and if they do we can always set up a specific "priority queue" for > those jobs, with a dedicated slave pool. > Just asking for this because last time I checked, it was not possible to > assign weight to jobs defined as Jenkins pipelines. So these jobs ended up > with a weight of 1, and we ended up running multiple instances of those on > the same slave... which is obviously not good. > I can do the boring job editing work on each and every job, I'm just asking > if it seems ok to you... ?

That weight system has been very useful for many things, so if you're having an urge to remove it just to "cleanup" I'd say no.. 
but if you see good use of those pipelines go ahead. Sorry but I forgot what we think pipelines would bring us. Thanks, Sanne > > Yoann Rodi?re > Hibernate NoORM Team > yoann at hibernate.org > > On 5 January 2018 at 00:52, Steve Ebersole wrote: >> >> Awesome Sanne! Great work. >> >> Anything you need us to do to our jobs? >> >> On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero wrote: >> >> > Hi all, >> > >> > we're having shiny new boxes running CI: more secure, way faster and >> > less "out of disk space" prolems I hope. >> > >> > # Slaves >> > >> > Slaves have been rebuilt from scratch: >> > - from Fedora 25 to Fedora 27 >> > - NVMe disks for all storage, including databases, JDKs, dependency >> > stores, indexes and journals >> > - Now using C5 instances to benefit from Amazon's new "Nitro" engines >> > [1] >> > - hardware offloading of network operations by enabling ENA [2] >> > - NVMe drives also using provisioned IO >> > >> > This took a bit of unexpected low level work as .. Fedora images don't >> > support ENA yet so I had to create a custom Fedora re-distribution AMI >> > first, it wasn't possible to simply compile the kernel modules for the >> > standard Fedora images. These features are expected to come in future >> > Fedora Cloud images but I didn't want to wait so made our own :) [3] >> > >> > # Cloud scaling >> > >> > Idle slaves will self-terminate after some timeout (currently 30m). >> > When there are many jobs queueing up, more slaves (up to 5) will >> > automatically start. >> > >> > If you're the first to trigger a build you'll have to be patient, as >> > it's possible after some quiet time (after the night?) all slaves are >> > gone; the system will boot up new ones automatically ASAP but this >> > initial boot takes some extra couple of minutes. >> > >> > # Master node >> > >> > Well, security patching mostly, but also finally figured out how to >> > workaround the bugs which were preventing us to upgrade Jenkins. >> > >> > So now Jenkins is upgraded to latest, including *all plugins*. It >> > seems to work but let's keep an eye on it, those plugins are not all >> > maintained at the quality one would expect. >> > >> > In particular attempting to change EC2 configuration properties will >> > now trigger a super annoying NPE [4]; either don't make further >> > changes or resort to XML editing of the configuration. >> > >> > # Next >> > >> > I'm not entirely done; eventually I'd like to convert our master node >> > to ENA/C5/NVMe as well - especially to be able to move all master and >> > slaves into the same physical cluster - but I'll stop now and get back >> > to Java so you all get a chance to identify problems caused by the new >> > slaves before I cause more trouble.. 
>> > >> > Thanks, >> > Sanne >> > >> > 1 - >> > >> > https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/ >> > 2 - >> > >> > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html >> > 3 - https://pagure.io/atomic-wg/issue/271 >> > 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856 >> > _______________________________________________ >> > hibernate-dev mailing list >> > hibernate-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > From steve at hibernate.org Fri Jan 5 06:53:34 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 11:53:34 +0000 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: We should definitely stop doing 5.2 releases once we release 5.3. Of course 5.3 is held up waiting for answers from a few people... On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote: > We discussed stopping 5.2 releases at the F2F, but I can't remember what > was decided. > > I see that there is a 5.2 branch. Should we be backporting to 5.2 branch? > > Thanks, > Gail > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Fri Jan 5 07:20:56 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 12:20:56 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: I went to manually kick off the main ORM job, but saw that you already had - however it had failed with GC/memory problems[1]. I kicked off a new run... [1] http://ci.hibernate.org/job/hibernate-orm-master-h2-main/951/console On Fri, Jan 5, 2018 at 4:59 AM Sanne Grinovero wrote: > On 4 January 2018 at 23:52, Steve Ebersole wrote: > > Awesome Sanne! Great work. > > > > Anything you need us to do to our jobs? > > No changes *should* be needed. It would help me if you could all > manually trigger the jobs you consider important and highlight > suspucious problems so that we get awareness of regressions in short > time. > > I've already triggered some ~20 jobs last night; saw no problems so > far but haven't tested any release yet, nor website related tasks. > > Thanks, > Sanne > From steve at hibernate.org Fri Jan 5 07:28:32 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 12:28:32 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: FWIW... I do not know the rules about how these slaves spin up, but in the 10+ minutes since I kicked off that job it is still waiting in queue. And there is actually a job (Debezium Deploy Snapshots) in front of it that has been waiting over 3.5 hours On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole wrote: > I went to manually kick off the main ORM job, but saw that you already had > - however it had failed with GC/memory problems[1]. I kicked off a new > run... > > [1] http://ci.hibernate.org/job/hibernate-orm-master-h2-main/951/console > > > On Fri, Jan 5, 2018 at 4:59 AM Sanne Grinovero > wrote: > >> On 4 January 2018 at 23:52, Steve Ebersole wrote: >> > Awesome Sanne! Great work. >> > >> > Anything you need us to do to our jobs? >> >> No changes *should* be needed. 
It would help me if you could all >> manually trigger the jobs you consider important and highlight >> suspicious problems so that we get awareness of regressions in short >> time. >> >> I've already triggered some ~20 jobs last night; saw no problems so >> far but haven't tested any release yet, nor website related tasks. >> >> Thanks, >> Sanne From guillaume.smet at gmail.com Fri Jan 5 08:05:48 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 5 Jan 2018 14:05:48 +0100 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: Hi,

AFAICS there are 52 issues fixed for 5.2.13.

And there are a couple of PRs waiting for review AFAICS (which might be ready to be integrated or not).

So I think it would be really beneficial to continue doing 5.2.x releases.

5.3 is not there yet. And once it's going to be released, we would still need the integrators to support it (be it WildFly or Spring) before considering it fully consumable by the end users. And probably some time to get it field tested too before we can consider 5.2 as being more or less "dead" and just say to the users "upgrade to 5.3".

My 2 cents.

-- Guillaume

On Fri, Jan 5, 2018 at 12:53 PM, Steve Ebersole wrote: >> We should definitely stop doing 5.2 releases once we release 5.3. >> >> Of course 5.3 is held up waiting for answers from a few people... >> >> On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote: >> >> > We discussed stopping 5.2 releases at the F2F, but I can't remember what >> > was decided. >> > >> > I see that there is a 5.2 branch. Should we be backporting to 5.2 branch? >> > >> > Thanks, >> > Gail >> From sanne at hibernate.org Fri Jan 5 08:12:21 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 13:12:21 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: On 5 January 2018 at 12:28, Steve Ebersole wrote: > FWIW... I do not know the rules about how these slaves spin up, but in the > 10+ minutes since I kicked off that job it is still waiting in queue.

When there are no slaves it might take some extra minutes; on top of that I was manually killing some leftover machines from yesterday's night experiments, so maybe I bothered it in some way.

Let's keep an eye on it, if it happens regularly we'll see what can be done. I'll likely want to keep a slave "always on"..

> And there is actually a job (Debezium Deploy Snapshots) in front of it that has > been waiting over 3.5 hours

That was my fault, thanks for spotting it! (the job was misconfigured, fixed now).

> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole wrote: >> >> I went to manually kick off the main ORM job, but saw that you already had >> - however it had failed with GC/memory problems[1]. I kicked off a new >> run...

These boxes have 4 core and 8GB RAM each. We can probably use larger heaps: I've reconfigured the gradle and Maven environment options to allow 4GB of heap, and kicked a new ORM build.

Thanks, Sanne

>> >> [1] http://ci.hibernate.org/job/hibernate-orm-master-h2-main/951/console >> >> >> On Fri, Jan 5, 2018 at 4:59 AM Sanne Grinovero >> wrote: >>> >>> On 4 January 2018 at 23:52, Steve Ebersole wrote: >>> > Awesome Sanne! Great work. >>> > >>> > Anything you need us to do to our jobs? >>> >>> No changes *should* be needed. It would help me if you could all >>> manually trigger the jobs you consider important and highlight >>> suspicious problems so that we get awareness of regressions in short >>> time. 
>>> >>> I've already triggered some ~20 jobs last night; saw no problems so >>> far but haven't tested any release yet, nor website related tasks. >>> >>> Thanks, >>> Sanne From steve at hibernate.org Fri Jan 5 08:13:47 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 13:13:47 +0000 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: While I understand the sentiment of continuing to develop old lines (branches) of code, that's just not viable. And this is something we have all discussed as a (full) team a few times now. Once the next development line is stable we stop developing the older line. That's even more true within a release family - there is no need to continue to develop 5.x once 5.x+1 is stable. We do adjust that procedure slightly around major releases, meaning that even after 6.0 is stable we continue to do those 5.x+1 releases *for a short time* On Fri, Jan 5, 2018 at 7:06 AM Guillaume Smet wrote: > Hi, > > AFAICS there are 52 issues fixed for 5.2.13. > > And there are a couple of PRs waiting for review AFAICS (which might be > ready to be integrated or not). > > So I think it would be really beneficial to continue doing 5.2.x releases. > > 5.3 is not there yet. And once it's going to be released, we would still > need the integrators to support it (be it WildFly or Spring) before > considering it fully consumable by the end users. And probably some time to > get it field tested too before we can consider 5.2 as being more or less > "dead" and just say to the users "upgrade to 5.3". > > My 2 cents. > > -- > Guillaume > > > On Fri, Jan 5, 2018 at 12:53 PM, Steve Ebersole > wrote: > >> We should definitely stop doing 5.2 releases once we release 5.3. >> >> Of course 5.3 is held up waiting for answers from a few people... >> >> On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote: >> >> > We discussed stopping 5.2 releases at the F2F, but I can't remember what >> > was decided. >> > >> > I see that there is a 5.2 branch. Should we be backporting to 5.2 >> branch? >> > >> > Thanks, >> > Gail >> > > From sanne at hibernate.org Fri Jan 5 08:16:33 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 13:16:33 +0000 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: On 5 January 2018 at 13:05, Guillaume Smet wrote: > Hi, > > AFAICS there are 52 issues fixed for 5.2.13. > > And there are a couple of PRs waiting for review AFAICS (which might be > ready to be integrated or not). > > So I think it would be really beneficial to continue doing 5.2.x releases. > > 5.3 is not there yet. And once it's going to be released, we would still > need the integrators to support it (be it WildFly or Spring) before > considering it fully consumable by the end users. And probably some time to > get it field tested too before we can consider 5.2 as being more or less > "dead" and just say to the users "upgrade to 5.3". +1 I'd love to see more regular OGM releases, and some benevolent maintenance time on 5.2 for a while longer even after 5.3 is available. We're all willing to help with the release process, including automate it more; if someone on the ORM team could volunteer brains for the organizational work we can make sure it's quick and painless by delegating the annoying labour to Jenkins. Thanks, Sanne > > My 2 cents. > > -- > Guillaume > > On Fri, Jan 5, 2018 at 12:53 PM, Steve Ebersole wrote: > >> We should definitely stop doing 5.2 releases once we release 5.3. 
>> >> Of course 5.3 is held up waiting for answers from a few people... >> >> On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote: >> >> > We discussed stopping 5.2 releases at the F2F, but I can't remember what >> > was decided. >> > >> > I see that there is a 5.2 branch. Should we be backporting to 5.2 >> branch? >> > >> > Thanks, >> > Gail >> From steve at hibernate.org Fri Jan 5 08:32:15 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 13:32:15 +0000 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: Certain parts of the release process are easy to automate, assuming nothing goes wrong of course. Other parts are not. Which actually circles back to some things I've been contemplating about (ORM at least) releases. Basically we have an elaborate set of steps we go through for a release beyond just the "simple" aspects like tagging, building/uploading jars... things like blog posts, forum announcements, announcement emails... and we do these even for each bug fix release. IMO we really should only be doing some of these for a family (5.2, 5.3) initially going stable (Final). I'd love to see the release task (the actual Gradle tasks) do an announcement when any release is performed - Gradle has an "announce" plugin that can announce via twitter, etc. To me that is enough for a generalized "hey this new release is out" notification. The initial stable release of a family (5.2.0.Final, 5.3.0.Final, 6.0.0.Final...) is special and the one we should handle specially by doing some of these other things.

But even on top of that stuff, it's often just managing the backporting that is resource intensive - identifying what should be backported and what should not, not to mention managing the conflicts as we get further down that path.

On Fri, Jan 5, 2018 at 7:16 AM Sanne Grinovero wrote: > On 5 January 2018 at 13:05, Guillaume Smet > wrote: > > Hi, > > > > AFAICS there are 52 issues fixed for 5.2.13. > > > > And there are a couple of PRs waiting for review AFAICS (which might be > > ready to be integrated or not). > > > > So I think it would be really beneficial to continue doing 5.2.x > releases. > > > > 5.3 is not there yet. And once it's going to be released, we would still > > need the integrators to support it (be it WildFly or Spring) before > > considering it fully consumable by the end users. And probably some time > to > > get it field tested too before we can consider 5.2 as being more or less > > "dead" and just say to the users "upgrade to 5.3". > > +1 I'd love to see more regular OGM releases, and some benevolent > maintenance time on 5.2 for a while longer even after 5.3 is > available. > > We're all willing to help with the release process, including automate > it more; if someone on the ORM team could volunteer brains for the > organizational work we can make sure it's quick and painless by > delegating the annoying labour to Jenkins. > > Thanks, > Sanne > > > > > My 2 cents. > > > > -- > > Guillaume > > > > On Fri, Jan 5, 2018 at 12:53 PM, Steve Ebersole > wrote: > > > >> We should definitely stop doing 5.2 releases once we release 5.3. 
> >> > >> On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote: > >> > >> > We discussed stopping 5.2 releases at the F2F, but I can't remember > what > >> > was decided. > >> > > >> > I see that there is a 5.2 branch. Should we be backporting to 5.2 > branch? > >> > > >> > Thanks, > >> > Gail > >> > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jan 5 08:38:06 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 13:38:06 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: On 5 January 2018 at 13:12, Sanne Grinovero wrote: > On 5 January 2018 at 12:28, Steve Ebersole wrote: >> FWIW... I do not know the rules about how these slaves spin up, but in the >> 10+ minutes since I kicked off that job it is still waiting in queue. > > When there are no slaves it might take some extra minutes; on top of > that I was manually killing some leftover machines from yesterday's > night experiments, so maybe I bothered it in some way. > > Let's keep an eye on it, if it happens regularly we'll see what can be > done. I'll likely want to keep a slave "always on".. > >> And there is actually a job (Debezium Deploy Snapshots) in front of it that has >> been waiting over 3.5 hours > > That was my fault, thanks for spotting it! (the job was misconfigured, > fixed now). > >> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole wrote: >>> >>> I went to manually kick off the main ORM job, but saw that you already had >>> - however it had failed with GC/memory problems[1]. I kicked off a new >>> run... > > These boxes have 4 core and 8GB RAM heach. We can probably use larger > heaps: I've reconfigured the gradle and Maven environment options to > allow 4GB of heap, and kicked a new ORM build. The Gradle build completed fine now, in just 12 minutes. However then the job gets stuck on: Using GitBlamer to create author and commit information for all warnings. It's been busy 15 minutes already at this point - so taking longer than the build and tests - and still going.. What is it? is it worth it? something wrong with it? - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/953/console Thanks, Sanne From guillaume.smet at gmail.com Fri Jan 5 09:03:57 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 5 Jan 2018 15:03:57 +0100 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 2:13 PM, Steve Ebersole wrote: > While I understand the sentiment of continuing to develop old lines > (branches) of code, that's just not viable. And this is something we have > all discussed as a (full) team a few times now. Once the next development > line is stable we stop developing the older line. That's even more true > within a release family - there is no need to continue to develop 5.x once > 5.x+1 is stable. We do adjust that procedure slightly around major > releases, meaning that even after 6.0 is stable we continue to do those > 5.x+1 releases *for a short time* > Well, from my experience, it takes some time to get a 5.x version stable. The first .0.Finals always have a few annoying regressions (not judging, just stating the fact - same for HV or Search). And the end users usually wait for the integrators to do the work. 
You can't migrate to 5.3 to get your bug fixed if Spring isn't compatible with it yet (and they usually do the work very early). I'm not saying we should maintain the versions indefinitely. I'm just saying that first ".0.Final" != usable. That's why I'm using the term "consumable by the end users". Note that in the case of 5.2.13.Final, we don't even have a 5.3.0.Final yet, so I'm not sure why we are even considering not releasing it. The last one is from October 19th, we have 50+ issues fixed. And even the Spring guys are playing it nice asking when we plan to release it so that they can integrate it in their next Spring Boot release. -- Guillaume From steve at hibernate.org Fri Jan 5 09:07:27 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 14:07:27 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: I have no idea what GitBlamer is. Never heard of it On Fri, Jan 5, 2018, 7:38 AM Sanne Grinovero wrote: > On 5 January 2018 at 13:12, Sanne Grinovero wrote: > > On 5 January 2018 at 12:28, Steve Ebersole wrote: > >> FWIW... I do not know the rules about how these slaves spin up, but in > the > >> 10+ minutes since I kicked off that job it is still waiting in queue. > > > > When there are no slaves it might take some extra minutes; on top of > > that I was manually killing some leftover machines from yesterday's > > night experiments, so maybe I bothered it in some way. > > > > Let's keep an eye on it, if it happens regularly we'll see what can be > > done. I'll likely want to keep a slave "always on".. > > > >> And there is actually a job (Debezium Deploy Snapshots) in front of it > that has > >> been waiting over 3.5 hours > > > > That was my fault, thanks for spotting it! (the job was misconfigured, > > fixed now). > > > >> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole > wrote: > >>> > >>> I went to manually kick off the main ORM job, but saw that you already > had > >>> - however it had failed with GC/memory problems[1]. I kicked off a new > >>> run... > > > > These boxes have 4 core and 8GB RAM heach. We can probably use larger > > heaps: I've reconfigured the gradle and Maven environment options to > > allow 4GB of heap, and kicked a new ORM build. > > The Gradle build completed fine now, in just 12 minutes. > However then the job gets stuck on: > > Using GitBlamer to create author and commit > information for all warnings. > > It's been busy 15 minutes already at this point - so taking longer > than the build and tests - and still going.. > What is it? is it worth it? something wrong with it? > > - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/953/console > > Thanks, > Sanne > From guillaume.smet at gmail.com Fri Jan 5 09:22:43 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 5 Jan 2018 15:22:43 +0100 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: On Fri, Jan 5, 2018 at 2:32 PM, Steve Ebersole wrote: > Certain parts of the release process are easy to automate, assuming > nothing goes wrong of course. Other parts are not. Which actually circles > back to some things I've been comtemplating about (ORM at least) releases. > Basically we have an elaborate set of steps we go through for a release > beyond just the "simple" aspects like tagging, building/uploading jars... > things like blog posts, forum announcements, announcement emails... and we > do these even for each bug fix release. 
IMO we really should only be doing > some of these for a family (5.2, 5.3) initially going stable (Final). I'd > love to see the release task (the actual Gradle tasks) do an announcement > when any release is performed - Gradle has an "announce" plugin that can > announce via twitter, etc. To me that is enough for a generalized "hey > this new release is out" notification. The initial stable release of a > family (5.2.0.Final, 5.3.0.Final, 6.0.0.Final...) is special and the one > we should handle specially by doing some of these other things. >

It should take no more than half a day to do the release itself. A full day with a detailed blog post.

I agree it's still one day not spent on other things. But having a release per month should be doable. In the case of 5.2.13, we are talking about nearly 3 months of work that are not in the hands of the users.

If you release something every month, it's not that bad if a bugfix slips to the next release. If a PR is not completely ready, well, it's going to be in the next one, no need to wait. It helps make release coordination easier.

It's also easier to detect and fix regressions when you release more frequently.

The good thing is that we are not considering a bugfix release as something traumatizing anymore. It's just day to day work.

> But even on top of that stuff, it's often just managing the backporting > that is resource intensive - identifying what should be backported and what > should not, not to mention managing the conflicts as we get further down > that path. >

That I can understand. But I think not releasing periodically doesn't help as if you backport a 3-month-old fix, it's hard to go back to it then.

FWIW, in the active community branches, I usually do the backport right away - if I think the issue requires backporting, sometimes, it's just not worth it or too risky. And I'm doing the "what should I backport?" thing only on product-only branches.

I'm not saying it would be that easy with ORM as the flow of issues is significantly larger. Just stating how we do it.

-- Guillaume

From sanne at hibernate.org Fri Jan 5 09:50:25 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jan 2018 14:50:25 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: On 5 January 2018 at 14:07, Steve Ebersole wrote: > I have no idea what GitBlamer is. Never heard of it

I figured it out; it's implicitly (by default) invoked by the job tasks of finding "TODO"'s and similar markers in the project, to add "blame" information to the final report.

So for each and every warning the report would produce, it will dig into the git history of the project to figure out who introduced the marker. I disabled this "blame" process, I hope that's alright - it worked fine now, producing a full build in 15 minutes:

- http://ci.hibernate.org/job/hibernate-orm-master-h2-main/955/console

Since we didn't see this problem before I guess it's a new "feature" caused by me by upgrading the plugins. I disabled that "feature" globally as I don't think it's reasonable for any of our projects..

Thanks, Sanne

> > > On Fri, Jan 5, 2018, 7:38 AM Sanne Grinovero wrote: >> >> On 5 January 2018 at 13:12, Sanne Grinovero wrote: >> > On 5 January 2018 at 12:28, Steve Ebersole wrote: >> >> FWIW... I do not know the rules about how these slaves spin up, but in >> >> the >> >> 10+ minutes since I kicked off that job it is still waiting in queue. 
>> > >> > When there are no slaves it might take some extra minutes; on top of >> > that I was manually killing some leftover machines from yesterday's >> > night experiments, so maybe I bothered it in some way. >> > >> > Let's keep an eye on it, if it happens regularly we'll see what can be >> > done. I'll likely want to keep a slave "always on".. >> > >> >> And there is actually a job (Debezium Deploy Snapshots) in front of it >> >> that has >> >> been waiting over 3.5 hours >> > >> > That was my fault, thanks for spotting it! (the job was misconfigured, >> > fixed now). >> > >> >> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole >> >> wrote: >> >>> >> >>> I went to manually kick off the main ORM job, but saw that you already >> >>> had >> >>> - however it had failed with GC/memory problems[1]. I kicked off a >> >>> new >> >>> run... >> > >> > These boxes have 4 core and 8GB RAM heach. We can probably use larger >> > heaps: I've reconfigured the gradle and Maven environment options to >> > allow 4GB of heap, and kicked a new ORM build. >> >> The Gradle build completed fine now, in just 12 minutes. >> However then the job gets stuck on: >> >> Using GitBlamer to create author and commit >> information for all warnings. >> >> It's been busy 15 minutes already at this point - so taking longer >> than the build and tests - and still going.. >> What is it? is it worth it? something wrong with it? >> >> - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/953/console >> >> Thanks, >> Sanne From steve at hibernate.org Fri Jan 5 10:04:55 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jan 2018 15:04:55 +0000 Subject: [hibernate-dev] New CI slaves now available! In-Reply-To: References: Message-ID: TBH, I'm ok with just dropping the TODO collection as a part of the Jenkins jobs. On Fri, Jan 5, 2018 at 8:56 AM Sanne Grinovero wrote: > On 5 January 2018 at 14:07, Steve Ebersole wrote: > > I have no idea what GitBlamer is. Never heard of it > > I figured it out; it's implicitly (by default) invoked by the job > tasks of finding "TODO"'s and similar markers in the project, > to add "blame" information to the final report. > > So for each and every warning the report would produce, it will dig > into the git history of the project to figure out who introduced the > marker. I disabled this "blame" process, I hope that's allright - it > worked fine now, producing a full build in 15 minutes: > > - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/955/console > > Since we didn't see this problem before I guess it's a new "feature" > caused by me by upgrading the plugins. > I disabled that "feature" globally as I don't think it's reasonable > for any of our projects.. > > Thanks, > Sanne > > > > > > > On Fri, Jan 5, 2018, 7:38 AM Sanne Grinovero > wrote: > >> > >> On 5 January 2018 at 13:12, Sanne Grinovero > wrote: > >> > On 5 January 2018 at 12:28, Steve Ebersole > wrote: > >> >> FWIW... I do not know the rules about how these slaves spin up, but > in > >> >> the > >> >> 10+ minutes since I kicked off that job it is still waiting in queue. > >> > > >> > When there are no slaves it might take some extra minutes; on top of > >> > that I was manually killing some leftover machines from yesterday's > >> > night experiments, so maybe I bothered it in some way. > >> > > >> > Let's keep an eye on it, if it happens regularly we'll see what can be > >> > done. I'll likely want to keep a slave "always on".. 
> >> >
> >> >> And there is actually a job (Debezium Deploy Snapshots) in front of
> it
> >> >> that has
> >> >> been waiting over 3.5 hours
> >> >
> >> > That was my fault, thanks for spotting it! (the job was misconfigured,
> >> > fixed now).
> >> >
> >> >> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole
> >> >> wrote:
> >> >>>
> >> >>> I went to manually kick off the main ORM job, but saw that you
> already
> >> >>> had
> >> >>> - however it had failed with GC/memory problems[1]. I kicked off a
> >> >>> new
> >> >>> run...
> >> >
> >> > These boxes have 4 core and 8GB RAM each. We can probably use larger
> >> > heaps: I've reconfigured the Gradle and Maven environment options to
> >> > allow 4GB of heap, and kicked a new ORM build.
> >>
> >> The Gradle build completed fine now, in just 12 minutes.
> >> However then the job gets stuck on:
> >>
> >> Using GitBlamer to create author and commit
> >> information for all warnings.
> >>
> >> It's been busy 15 minutes already at this point - so taking longer
> >> than the build and tests - and still going..
> >> What is it? is it worth it? something wrong with it?
> >>
> >> - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/953/console
> >>
> >> Thanks,
> >> Sanne

From steve at hibernate.org Fri Jan 5 10:24:59 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Fri, 05 Jan 2018 15:24:59 +0000
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To: References: Message-ID:

On Fri, Jan 5, 2018 at 8:23 AM Guillaume Smet wrote:
> On Fri, Jan 5, 2018 at 2:32 PM, Steve Ebersole
> wrote:
>> Certain parts of the release process are easy to automate, assuming
>> nothing goes wrong of course. Other parts are not. Which actually circles
>> back to some things I've been contemplating about (ORM at least) releases.
>> Basically we have an elaborate set of steps we go through for a release
>> beyond just the "simple" aspects like tagging, building/uploading jars...
>> things like blog posts, forum announcements, announcement emails... and we
>> do these even for each bug fix release. IMO we really should only be doing
>> some of these for a family (5.2, 5.3) initially going stable (Final). I'd
>> love to see the release task (the actual Gradle tasks) do an announcement
>> when any release is performed - Gradle has an "announce" plugin that can
>> announce via twitter, etc. To me that is enough for a generalized "hey
>> this new release is out" notifications. The initial stable release of a
>> family (5.2.0.Final, 5.3.0.Final, 6.0.0.Final...) is special and the one
>> we should handle specially by doing some of these other things.
>>
>
> It should take no more than half a day to do the release itself. A full
> day with a detailed blog post.
>
> I agree it's still one day not spent on other things. But having a release
> per month should be doable. In the case of 5.2.13, we are talking about
> nearly 3 months of work that are not in the hands of the users.
>

Yep, I know how long it takes to do a release - I've been doing them for
almost 15 years ;)

I'm not sure if you are agreeing or disagreeing about blogging every
bugfix release. But anyway, Sanne asked what would help automate the
release process, so I am listing things that would help. Of course you can
feel free to contribute blogging and emailing announcement plugins for
Gradle for us to use in the automated release tasks ;)

If you release something every month, it's not that bad if a bugfix slips
> to the next release.
If a PR is not completely ready, well, it's going to
> be in the next one, no need to wait. It makes the release
> coordination easier.
>

5.2 just got lost in the cracks as Andrea, Chris and I were all working on
6.0.

It's also easier to detect and fix regressions when you release more
> frequently.
>

That's a fallacy. Or at least it's not true in isolation. It depends on
the things that would highlight the regression picking up that release and
playing with it, since your entire premise here is that the regression is
not tested as part of the test suite. But that's actually not what happens
today in terms of our inter-project integrations... really we find out many
releases later when OGM or Search update to these newer ORM releases.

> FWIW, in the active community branches, I usually do the backport right
> away - if I think the issue requires backporting at all; sometimes it's
> just not worth it or too risky. And I'm doing the "what should I
> backport?" thing only on product-only branches.
>

This right here is the crux - "active community branch". By definition no
branch is in active community development. Again, we have discussed this
as a team multiple times. Once the next release is stable we stop
developing the previous one, with a few caveats. E.g.:

 - Once 5.3 is stable we do generally expect to do a *few* additional
 5.2 releases. But let's be careful about the expectation about the phrase
 "few" here. I really mean one or 2...
 - For major releases (5.x -> 6.x) we generally expect to do a larger
 number of releases of the 5.3 line. Again though, not indefinite.

The basic gist is that we are an open source community. We simply do not
have the resources to maintain infinite lines of development. We need to
focus on what is important. I think we all agree that currently 5.2 is
still important, but I think we may all have different expectations for
what that means moving forward as 5.3 becomes the stable release. I cannot
give a concrete "we will only do X more 5.2 releases after 5.3 is stable"
answer. It might be 2. It might be 3. And it might be 1.

I'm not saying it would be that easy with ORM as the flow of issues is
> significantly larger. Just stating how we do it.
>

Sure. And time-boxed releases are what we normally strive for as well in
ORM. 5.2 is largely an aberration in this regard. Again - Andrea, Chris
and I were focused on 6.0 work and since there is no 5.2-based Red Hat work
this fell between the cracks.

From sanne at hibernate.org Fri Jan 5 10:25:49 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Fri, 5 Jan 2018 15:25:49 +0000
Subject: [hibernate-dev] New CI slaves now available!
In-Reply-To: References: Message-ID:

On 5 January 2018 at 15:04, Steve Ebersole wrote:
> TBH, I'm ok with just dropping the TODO collection as a part of the Jenkins
> jobs.

Even better, that will bring down the times from 15m to 12m :)
Doing that now.

>
> On Fri, Jan 5, 2018 at 8:56 AM Sanne Grinovero wrote:
>>
>> On 5 January 2018 at 14:07, Steve Ebersole wrote:
>> > I have no idea what GitBlamer is. Never heard of it
>>
>> I figured it out; it's implicitly (by default) invoked by the job
>> tasks of finding "TODO"'s and similar markers in the project,
>> to add "blame" information to the final report.
>>
>> So for each and every warning the report would produce, it will dig
>> into the git history of the project to figure out who introduced the
>> marker.
I disabled this "blame" process, I hope that's all right - it
>> worked fine now, producing a full build in 15 minutes:
>>
>> - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/955/console
>>
>> Since we didn't see this problem before I guess it's a new "feature"
>> introduced when I upgraded the plugins.
>> I disabled that "feature" globally as I don't think it's reasonable
>> for any of our projects.
>>
>> Thanks,
>> Sanne
>>
>> >
>> >
>> > On Fri, Jan 5, 2018, 7:38 AM Sanne Grinovero
>> > wrote:
>> >>
>> >> On 5 January 2018 at 13:12, Sanne Grinovero
>> >> wrote:
>> >> > On 5 January 2018 at 12:28, Steve Ebersole
>> >> > wrote:
>> >> >> FWIW... I do not know the rules about how these slaves spin up, but
>> >> >> in
>> >> >> the
>> >> >> 10+ minutes since I kicked off that job it is still waiting in
>> >> >> queue.
>> >> >
>> >> > When there are no slaves it might take some extra minutes; on top of
>> >> > that I was manually killing some leftover machines from yesterday's
>> >> > night experiments, so maybe I bothered it in some way.
>> >> >
>> >> > Let's keep an eye on it, if it happens regularly we'll see what can
>> >> > be
>> >> > done. I'll likely want to keep a slave "always on"..
>> >> >
>> >> >> And there is actually a job (Debezium Deploy Snapshots) in front of
>> >> >> it
>> >> >> that has
>> >> >> been waiting over 3.5 hours
>> >> >
>> >> > That was my fault, thanks for spotting it! (the job was
>> >> > misconfigured,
>> >> > fixed now).
>> >> >
>> >> >> On Fri, Jan 5, 2018 at 6:20 AM Steve Ebersole
>> >> >> wrote:
>> >> >>>
>> >> >>> I went to manually kick off the main ORM job, but saw that you
>> >> >>> already
>> >> >>> had
>> >> >>> - however it had failed with GC/memory problems[1]. I kicked off a
>> >> >>> new
>> >> >>> run...
>> >> >
>> >> > These boxes have 4 core and 8GB RAM each. We can probably use larger
>> >> > heaps: I've reconfigured the Gradle and Maven environment options to
>> >> > allow 4GB of heap, and kicked a new ORM build.
>> >>
>> >> The Gradle build completed fine now, in just 12 minutes.
>> >> However then the job gets stuck on:
>> >>
>> >> Using GitBlamer to create author and commit
>> >> information for all warnings.
>> >>
>> >> It's been busy 15 minutes already at this point - so taking longer
>> >> than the build and tests - and still going..
>> >> What is it? is it worth it? something wrong with it?
>> >>
>> >> - http://ci.hibernate.org/job/hibernate-orm-master-h2-main/953/console
>> >>
>> >> Thanks,
>> >> Sanne

From steve at hibernate.org Fri Jan 5 10:26:57 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Fri, 05 Jan 2018 15:26:57 +0000
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To: References: Message-ID:

BTW Gail... back to part of your original question... a branch in git is
very cheap, so I like to create a branch anytime we jump to a new "line of
development". It's just much more flexible moving forward. So yes, there
is a 5.2 branch - but that does not mean we have to do something with it
(e.g. backport)

On Fri, Jan 5, 2018 at 9:24 AM Steve Ebersole wrote:
> On Fri, Jan 5, 2018 at 8:23 AM Guillaume Smet
> wrote:
>
>> On Fri, Jan 5, 2018 at 2:32 PM, Steve Ebersole
>> wrote:
>>
>>> Certain parts of the release process are easy to automate, assuming
>>> nothing goes wrong of course. Other parts are not. Which actually circles
>>> back to some things I've been contemplating about (ORM at least) releases.
>>> Basically we have an elaborate set of steps we go through for a release
>>> beyond just the "simple" aspects like tagging, building/uploading jars...
>>> things like blog posts, forum announcements, announcement emails... and we
>>> do these even for each bug fix release. IMO we really should only be doing
>>> some of these for a family (5.2, 5.3) initially going stable (Final). I'd
>>> love to see the release task (the actual Gradle tasks) do an announcement
>>> when any release is performed - Gradle has an "announce" plugin that can
>>> announce via twitter, etc. To me that is enough for a generalized "hey
>>> this new release is out" notifications. The initial stable release of a
>>> family (5.2.0.Final, 5.3.0.Final, 6.0.0.Final...) is special and the one
>>> we should handle specially by doing some of these other things.
>>>
>>
>> It should take no more than half a day to do the release itself. A full
>> day with a detailed blog post.
>>
>> I agree it's still one day not spent on other things. But having a
>> release per month should be doable. In the case of 5.2.13, we are talking
>> about nearly 3 months of work that are not in the hands of the users.
>>
>
> Yep, I know how long it takes to do a release - I've been doing them for
> almost 15 years ;)
>
> I'm not sure if you are agreeing or disagreeing about blogging every
> bugfix release. But anyway, Sanne asked what would help automate the
> release process, so I am listing things that would help. Of course you can
> feel free to contribute blogging and emailing announcement plugins for
> Gradle for us to use in the automated release tasks ;)
>
>
> If you release something every month, it's not that bad if a bugfix slips
>> to the next release. If a PR is not completely ready, well, it's going to
>> be in the next one, no need to wait. It makes the release
>> coordination easier.
>>
>
> 5.2 just got lost in the cracks as Andrea, Chris and I were all working on
> 6.0.
>
>
> It's also easier to detect and fix regressions when you release more
>> frequently.
>>
>
> That's a fallacy. Or at least it's not true in isolation. It depends on
> the things that would highlight the regression picking up that release and
> playing with it, since your entire premise here is that the regression is
> not tested as part of the test suite. But that's actually not what happens
> today in terms of our inter-project integrations... really we find out many
> releases later when OGM or Search update to these newer ORM releases.
>
>
>
>> FWIW, in the active community branches, I usually do the backport right
>> away - if I think the issue requires backporting at all; sometimes it's
>> just not worth it or too risky. And I'm doing the "what should I
>> backport?" thing only on product-only branches.
>>
>
>
> This right here is the crux - "active community branch". By definition no
> branch is in active community development. Again, we have discussed this
> as a team multiple times. Once the next release is stable we stop
> developing the previous one, with a few caveats. E.g.:
>
> - Once 5.3 is stable we do generally expect to do a *few* additional
> 5.2 releases. But let's be careful about the expectation about the phrase
> "few" here. I really mean one or 2...
> - For major releases (5.x -> 6.x) we generally expect to do a larger
> number of releases of the 5.3 line. Again though, not indefinite.
>
> The basic gist is that we are an open source community. We simply do not
> have the resources to maintain infinite lines of development.
We need to
> focus on what is important. I think we all agree that currently 5.2 is
> still important, but I think we may all have different expectations for
> what that means moving forward as 5.3 becomes the stable release. I cannot
> give a concrete "we will only do X more 5.2 releases after 5.3 is stable"
> answer. It might be 2. It might be 3. And it might be 1.
>
>
> I'm not saying it would be that easy with ORM as the flow of issues is
>> significantly larger. Just stating how we do it.
>>
>
> Sure. And time-boxed releases are what we normally strive for as well in
> ORM. 5.2 is largely an aberration in this regard. Again - Andrea, Chris
> and I were focused on 6.0 work and since there is no 5.2-based Red Hat work
> this fell between the cracks.
>

From sanne at hibernate.org Fri Jan 5 10:54:51 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Fri, 5 Jan 2018 15:54:51 +0000
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To: References: Message-ID:

On 5 January 2018 at 13:32, Steve Ebersole wrote:
> Certain parts of the release process are easy to automate, assuming nothing
> goes wrong of course. Other parts are not. Which actually circles back to
> some things I've been contemplating about (ORM at least) releases.
> Basically we have an elaborate set of steps we go through for a release
> beyond just the "simple" aspects like tagging, building/uploading jars...
> things like blog posts, forum announcements, announcement emails... and we
> do these even for each bug fix release. IMO we really should only be doing
> some of these for a family (5.2, 5.3) initially going stable (Final). I'd
> love to see the release task (the actual Gradle tasks) do an announcement
> when any release is performed - Gradle has an "announce" plugin that can
> announce via twitter, etc. To me that is enough for a generalized "hey this
> new release is out" notifications. The initial stable release of a family
> (5.2.0.Final, 5.3.0.Final, 6.0.0.Final...) is special and the one we should
> handle specially by doing some of these other things.

+1 to automate the "crude announcements", especially when there's not
much to add. We should still blog, but would be nice to blog about
specifically interesting things to try rather than focus our blogging
effort on announcements.
Certainly the blogging aspect should not be a burden directly related
to the release process, so let's decouple that to begin with;
templates are a good start - later we can see if there's interest in
adding a bit more flexibility to the automated process to add a
personal touch to the announcements.

> But even on top of that stuff, it's often just managing the backporting that
> is resource intensive - identifying what should be backported and what
> should not, not to mention managing the conflicts as we get further down
> that path.

Yes I understand, and that's the pain point where our capacity to help
is limited, as we don't have all the insight of the ORM team.
But I hope making the release process "cheap" will allow us to release
what was backported already. If then the need arises to backport more
stuff, one will just press the release button again (within reason,
you know what I mean).

I'd try the discipline of releasing every 2 weeks with the rule of
"what's in is in", aka if you wanted it included you should include it
- or it will have to wait 2 weeks, which shouldn't be too bad, so it
should mitigate the outcry of people needing X reviewed/merged urgently.
Unless it's me of course :P

Thanks,
Sanne

>
> On Fri, Jan 5, 2018 at 7:16 AM Sanne Grinovero wrote:
>>
>> On 5 January 2018 at 13:05, Guillaume Smet
>> wrote:
>> > Hi,
>> >
>> > AFAICS there are 52 issues fixed for 5.2.13.
>> >
>> > And there are a couple of PRs waiting for review AFAICS (which might be
>> > ready to be integrated or not).
>> >
>> > So I think it would be really beneficial to continue doing 5.2.x
>> > releases.
>> >
>> > 5.3 is not there yet. And once it's going to be released, we would still
>> > need the integrators to support it (be it WildFly or Spring) before
>> > considering it fully consumable by the end users. And probably some time
>> > to
>> > get it field tested too before we can consider 5.2 as being more or less
>> > "dead" and just say to the users "upgrade to 5.3".
>>
>> +1 I'd love to see more regular OGM releases, and some benevolent
>> maintenance time on 5.2 for a while longer even after 5.3 is
>> available.
>>
>> We're all willing to help with the release process, including automating
>> it more; if someone on the ORM team could volunteer brains for the
>> organizational work we can make sure it's quick and painless by
>> delegating the annoying labour to Jenkins.
>>
>> Thanks,
>> Sanne
>>
>> >
>> > My 2 cents.
>> >
>> > --
>> > Guillaume
>> >
>> > On Fri, Jan 5, 2018 at 12:53 PM, Steve Ebersole
>> > wrote:
>> >
>> >> We should definitely stop doing 5.2 releases once we release 5.3.
>> >>
>> >> Of course 5.3 is held up waiting for answers from a few people...
>> >>
>> >> On Thu, Jan 4, 2018, 7:29 PM Gail Badner wrote:
>> >>
>> >> > We discussed stopping 5.2 releases at the F2F, but I can't remember
>> >> > what
>> >> > was decided.
>> >> >
>> >> > I see that there is a 5.2 branch. Should we be backporting to 5.2
>> >> > branch?
>> >> >
>> >> > Thanks,
>> >> > Gail
>> >>
>> > _______________________________________________
>> > hibernate-dev mailing list
>> > hibernate-dev at lists.jboss.org
>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev

From hilac at msn.com Sat Jan 6 15:10:00 2018
From: hilac at msn.com (Hilmer Chona)
Date: Sat, 6 Jan 2018 20:10:00 +0000
Subject: [hibernate-dev] proposition to create a new Constraint for @Age
Message-ID:

Hi guys

I have created a new issue on Jira, HV-1552; it is to add a new Constraint
to check that the number of years from a given date to today is equal to
or greater than a specified value.

This validation can be very useful when someone who wants to access, sign
up, or buy something must be over an established age.

What do you think?

Hillmer Chona
MedellinJUG.org Leader

From guillaume.smet at gmail.com Mon Jan 8 07:04:49 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Mon, 8 Jan 2018 13:04:49 +0100
Subject: [hibernate-dev] proposition to create a new Constraint for @Age
In-Reply-To: References: Message-ID:

Hi Hilmer,

On Sat, Jan 6, 2018 at 9:10 PM, Hilmer Chona wrote:
> I have created a new issue on Jira HV-1552
> <https://hibernate.atlassian.net/browse/HV-1552>; it is to add a new
> Constraint to check that the number of years from a given date to today
> is equal to or greater than a specified value.
>
> This validation can be very useful when someone who wants to access, sign
> up, or buy something must be over an established age.
>
> What do you think?
>

Looks interesting.

I think I would mimic what we do with Min/Max i.e. have AgeMin/AgeMax and
an inclusive option.

What I'm not sure about is whether we should limit that to years, or be
more flexible and also support months/days for instance.
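To give an idea of the shape this could take, here is a minimal sketch of
a years-only variant, assuming the AgeMin name and the inclusive option
mentioned above, and the javax.validation (Bean Validation 2.0) APIs. The
package, message key and class names are illustrative only - this is not
the final Hibernate Validator API, and in real code the annotation and the
validator would each live in their own file:

    import java.lang.annotation.Documented;
    import java.lang.annotation.Retention;
    import java.lang.annotation.Target;
    import java.time.LocalDate;
    import java.time.Period;
    import javax.validation.Constraint;
    import javax.validation.ConstraintValidator;
    import javax.validation.ConstraintValidatorContext;
    import javax.validation.Payload;

    import static java.lang.annotation.ElementType.FIELD;
    import static java.lang.annotation.ElementType.METHOD;
    import static java.lang.annotation.ElementType.PARAMETER;
    import static java.lang.annotation.RetentionPolicy.RUNTIME;

    @Documented
    @Constraint(validatedBy = AgeMinValidator.class)
    @Target({ METHOD, FIELD, PARAMETER })
    @Retention(RUNTIME)
    public @interface AgeMin {

        String message() default "{com.example.constraints.AgeMin.message}";

        Class<?>[] groups() default { };

        Class<? extends Payload>[] payload() default { };

        // the minimum age, in years
        int value();

        // mirrors the "inclusive" option of Min/Max discussed above
        boolean inclusive() default true;
    }

    class AgeMinValidator implements ConstraintValidator<AgeMin, LocalDate> {

        private int minAge;
        private boolean inclusive;

        @Override
        public void initialize(AgeMin constraint) {
            this.minAge = constraint.value();
            this.inclusive = constraint.inclusive();
        }

        @Override
        public boolean isValid(LocalDate birthDate, ConstraintValidatorContext context) {
            // null values are left to @NotNull, as for the other constraints
            if ( birthDate == null ) {
                return true;
            }
            int age = Period.between( birthDate, LocalDate.now() ).getYears();
            return inclusive ? age >= minAge : age > minAge;
        }
    }

Usage would then be something like "@AgeMin(18) private LocalDate
birthDate;". Supporting the other date/time types would mean one such
validator per type, which is the tedious part mentioned below.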
But I'm not sure it's going to be easy (or useful?). Might be worth a try
though.

I would recommend doing an experiment with one date type before writing
all the validators as it's going to be a bit tedious.

I suppose you have seen Marko's post about how to contribute a constraint:
http://in.relation.to/2018/01/04/adding-new-constraint-to-engine/ ?

Have a nice day.

--
Guillaume

From guillaume.smet at gmail.com Tue Jan 9 08:46:13 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Tue, 9 Jan 2018 14:46:13 +0100
Subject: [hibernate-dev] Move beanvalidation-benchmark to Hibernate org
Message-ID:

Hi,

Now that we added some more value to the Bean Validation benchmark [1]
(Marko converted them to JMH), I think it's time to move the repo to the
Hibernate org.

Anyone against it?

FYI, it's Apache 2 licensed as it's derived from previous work from the
Apache BVal people.

[1] https://github.com/gsmet/beanvalidation-benchmark

--
Guillaume

From gunnar at hibernate.org Wed Jan 10 03:33:53 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Wed, 10 Jan 2018 09:33:53 +0100
Subject: [hibernate-dev] Move beanvalidation-benchmark to Hibernate org
In-Reply-To: References: Message-ID:

+1, good idea. Thanks!

2018-01-09 14:46 GMT+01:00 Guillaume Smet :

> Hi,
>
> Now that we added some more value to the Bean Validation benchmark [1]
> (Marko converted them to JMH), I think it's time to move the repo to the
> Hibernate org.
>
> Anyone against it?
>
> FYI, it's Apache 2 licensed as it's derived from previous work from the
> Apache BVal people.
>
> [1] https://github.com/gsmet/beanvalidation-benchmark
>
> --
> Guillaume
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From yoann at hibernate.org Wed Jan 10 05:06:09 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Wed, 10 Jan 2018 10:06:09 +0000
Subject: [hibernate-dev] Jenkins job priorities
Message-ID:

Hello,

TL;DR: I installed a plugin to prioritize Jenkins jobs; please let me know
if you notice anything wrong. Also, I will remove the Heavy Job plugin
soon; let me know if you're not okay with that.

I recently raised the issue on HipChat that some Jenkins builds are
triggered in batch, something like 4 or 5 at a time. Since builds are
executed in the order they are requested, this forces the next requested
builds to wait for more than one hour before being executed, regardless of
their urgency. One example of such a batch is whenever something is pushed
to Hibernate ORM master (or Search master, probably): one build is
triggered for tests against H2, another for tests against PostgreSQL,
another for tests against MariaDB, and so on.

It turns out there is a solution for this problem: the PrioritySorter
plugin. I installed the plugin on CI and configured it to give higher
priority to the following builds:

 - Builds triggered by users (highest priority)
 - Release builds (builds in the "Release" view)
 - Website builds (builds in the "Website" view)
 - PR builds (builds in the "PR" view)

In practice, such builds will be moved to the front of the queue whenever
they are triggered, resulting in reduced waiting times.

I hope we will be able to use this priority feature instead of the Heavy
Job plugin (which allows assigning weights to jobs), and avoid concurrent
builds completely.
With the current setup, someone releasing his/her
project will only have to wait for the currently executing build to finish,
and will get the highest priority on the release builds. Maybe this is
enough? If you disagree, please raise your concerns now: I will disable the
Heavy Job plugin soon and set each slave to only offer one execution slot.

Please let me know if you notice anything wrong. I tested the plugin on a
local Jenkins instance, but who knows...

Yoann

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team

From guillaume.smet at gmail.com Wed Jan 10 05:25:57 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 11:25:57 +0100
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

Hi,

On Wed, Jan 10, 2018 at 11:06 AM, Yoann Rodiere wrote:
>
> I hope we will be able to use this priority feature instead of the Heavy
> Job plugin (which allows assigning weights to jobs), and avoid concurrent
> builds completely. With the current setup, someone releasing his/her
> project will only have to wait for the currently executing build to finish,
> and will get the highest priority on the release builds. Maybe this is
> enough? If you disagree, please raise your concerns now: I will disable the
> Heavy Job plugin soon and set each slave to only offer one execution slot.
>

I'm not really convinced by this solution. Some jobs still take quite a lot
of time and having to wait 20 minutes for each job I would trigger is a bit
annoying.

If it was for only one job, it would be acceptable, but let's take the
worst case of a coordinated HV release:
- TCK release
- API release
- HV release
- website
- blog

I won't have to wait for each of them as some of them will be grouped by
the prioritization but I'm pretty sure I will have to wait for several of
them.

So, I'm +1 on having this plugin as it seems to be helpful on its own but
I'm -1 on considering it a solution to the "let's roll a release" thing.

--
Guillaume

From guillaume.smet at gmail.com Wed Jan 10 05:43:42 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 11:43:42 +0100
Subject: [hibernate-dev] Move beanvalidation-benchmark to Hibernate org
In-Reply-To: References: Message-ID:

Hi,

So I did the move.

And then I thought why not the beanvalidation organization?

I didn't think of it before as I'm not sure this benchmark will stay
totally HV-agnostic, but for now it is and it could probably stay this way.
If we want some HV-specific things, we can still add a specific benchmark
module and keep the BV ones separated.

It would be nice if it was considered a Bean Validation effort rather than
a purely HV one.

But maybe it makes it too official to have it in the BV organization?

WDYT?

--
Guillaume

On Wed, Jan 10, 2018 at 9:33 AM, Gunnar Morling
wrote:

> +1, good idea. Thanks!
>
> 2018-01-09 14:46 GMT+01:00 Guillaume Smet :
>
>> Hi,
>>
>> Now that we added some more value to the Bean Validation benchmark [1]
>> (Marko converted them to JMH), I think it's time to move the repo to the
>> Hibernate org.
>>
>> Anyone against it?
>>
>> FYI, it's Apache 2 licensed as it's derived from previous work from the
>> Apache BVal people.
>> >> [1] https://github.com/gsmet/beanvalidation-benchmark
>> >>
>> >> --
>> >> Guillaume
>> >> _______________________________________________
>> >> hibernate-dev mailing list
>> >> hibernate-dev at lists.jboss.org
>> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> >
>>
>

From davide at hibernate.org Wed Jan 10 06:08:10 2018
From: davide at hibernate.org (Davide D'Alto)
Date: Wed, 10 Jan 2018 11:08:10 +0000
Subject: [hibernate-dev] Awestruct upgrade to version 0.5.7
Message-ID:

Hello,
I've upgraded awestruct to version 0.5.7.

Except for the minification of our stylesheets it seems to work fine,
but it would be nice if someone else could have a look at the generated
site and confirm that it's OK to apply the changes in production.

On the same note, I've noticed that we are using some custom
extensions to execute the minification, specifically the one in
_ext/css_minifier.rb.

Is there any reason to do that? I'm asking because Awestruct comes
with its own minification class called: Awestruct::Extension::Minify

The main difference I noticed is that this extension doesn't copy the
original file during deploy on the server and only uses the minified
one.
Also currently, this extension doesn't work for CSS but I guess they
are going to fix it in the next releases (it works for html and js).

Cheers,
Davide

From sanne at hibernate.org Wed Jan 10 06:08:35 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 10 Jan 2018 11:08:35 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

On 10 January 2018 at 10:25, Guillaume Smet wrote:
> Hi,
>
> On Wed, Jan 10, 2018 at 11:06 AM, Yoann Rodiere wrote:
>>
>> I hope we will be able to use this priority feature instead of the Heavy
>> Job plugin (which allows assigning weights to jobs), and avoid concurrent
>> builds completely. With the current setup, someone releasing his/her
>> project will only have to wait for the currently executing build to finish,
>> and will get the highest priority on the release builds. Maybe this is
>> enough? If you disagree, please raise your concerns now: I will disable the
>> Heavy Job plugin soon and set each slave to only offer one execution slot.

Thanks Yoann! That sounds great.

>>
>
> I'm not really convinced by this solution. Some jobs still take quite a lot
> of time and having to wait 20 minutes for each job I would trigger is a bit
> annoying.
>
> If it was for only one job, it would be acceptable, but let's take the
> worst case of a coordinated HV release:
> - TCK release
> - API release
> - HV release
> - website
> - blog
>
> I won't have to wait for each of them as some of them will be grouped by
> the prioritization but I'm pretty sure I will have to wait for several of
> them.
>
> So, I'm +1 on having this plugin as it seems to be helpful on its own but
> I'm -1 on considering it a solution to the "let's roll a release" thing.

Some of our test suites used to take 2 hours to run (even 5 days some
years ago); now you say waiting 20 minutes is not good enough? You'll
have to optimise our code better :P

It's very easy to spin up extra nodes; my recommendation is that when
you know you're about to release [for example approximately one hour
in advance while you might be double-checking JIRA state and such
things] hit that manual scale-up button and have CI "warmed up" with
one or two extra nodes.
By the time you need to trigger the release job you'll have the build
queue flushed, the priority plugin helping you out, and still
additional extra slaves running to run it all in parallel.

And of course for many releases we don't care for an extra 30 minutes
so you're free to skip this all if it's not important; incidentally,
for "work in progress" milestones like the module packs, which we
recently re-released several times while polishing up the PR, I've been
releasing from my local machine; it's good to have CI automate things
but I don't think we should get in a position to require 100%
availability from CI: practice releases locally sometimes.

If we really wanted to invest more in it (both time and budget),
there's the option of spinning up new containers for each job as soon
as you need one but I feel like we've spent too much time on CI
already; such technology is maturing so my take is let it mature a bit
more, and in 6 months we'll do another step of improvement; jumping on
those things otherwise makes us the beta testers and steals critical
time from our own projects.
Let's not forget that many Apache projects take a week or two to
perform a release, we all know of other projects needing months, so by
the law of diminishing returns I don't think we should invest much
more of our time to shave off the last 10 minutes... just spin up some
extra nodes :)

Thanks,
Sanne

>
> --
> Guillaume
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gunnar at hibernate.org Wed Jan 10 06:12:26 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Wed, 10 Jan 2018 12:12:26 +0100
Subject: [hibernate-dev] Move beanvalidation-benchmark to Hibernate org
In-Reply-To: References: Message-ID:

I had been wondering about that, too, but I felt for now it's better
located on the impl side (Hibernate org).

It'd definitely be nice for this to be a BV effort, but for that I'd also
like to get some feedback and input from others on the EG. My thinking was
to keep that discussion for the 2.1 lifecycle, but if you like, feel free
to reach out to the list and ask for feedback; we can also do it now if
there's some interest.

--Gunnar

2018-01-10 11:43 GMT+01:00 Guillaume Smet :

> Hi,
>
> So I did the move.
>
> And then I thought why not the beanvalidation organization?
>
> I didn't think of it before as I'm not sure this benchmark will stay
> totally HV-agnostic, but for now it is and it could probably stay this way.
> If we want some HV-specific things, we can still add a specific benchmark
> module and keep the BV ones separated.
>
> It would be nice if it was considered a Bean Validation effort rather than
> a purely HV one.
>
> But maybe it makes it too official to have it in the BV organization?
>
> WDYT?
>
> --
> Guillaume
>
> On Wed, Jan 10, 2018 at 9:33 AM, Gunnar Morling
> wrote:
>
>> +1, good idea. Thanks!
>>
>> 2018-01-09 14:46 GMT+01:00 Guillaume Smet :
>>
>>> Hi,
>>>
>>> Now that we added some more value to the Bean Validation benchmark [1]
>>> (Marko converted them to JMH), I think it's time to move the repo to the
>>> Hibernate org.
>>>
>>> Anyone against it?
>>>
>>> FYI, it's Apache 2 licensed as it's derived from previous work from the
>>> Apache BVal people.
>>> >>> [1] https://github.com/gsmet/beanvalidation-benchmark
>>> >>>
>>> >>> --
>>> >>> Guillaume
>>> >>> _______________________________________________
>>> >>> hibernate-dev mailing list
>>> >>> hibernate-dev at lists.jboss.org
>>> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>> >>
>>> >
>>

From davide at hibernate.org Wed Jan 10 06:15:34 2018
From: davide at hibernate.org (Davide D'Alto)
Date: Wed, 10 Jan 2018 11:15:34 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

> Let's not forget that many Apache projects take a week or two to
> perform a release, we all know of other projects needing months, so by
> the law of diminishing returns I don't think we should invest much
> more of our time to shave off the last 10 minutes... just spin up some
> extra nodes :)

+1

On Wed, Jan 10, 2018 at 11:08 AM, Sanne Grinovero wrote:
> On 10 January 2018 at 10:25, Guillaume Smet wrote:
>> Hi,
>>
>> On Wed, Jan 10, 2018 at 11:06 AM, Yoann Rodiere wrote:
>>>
>>> I hope we will be able to use this priority feature instead of the Heavy
>>> Job plugin (which allows assigning weights to jobs), and avoid concurrent
>>> builds completely. With the current setup, someone releasing his/her
>>> project will only have to wait for the currently executing build to finish,
>>> and will get the highest priority on the release builds. Maybe this is
>>> enough? If you disagree, please raise your concerns now: I will disable the
>>> Heavy Job plugin soon and set each slave to only offer one execution slot.
>
> Thanks Yoann! That sounds great.
>
>>>
>>
>> I'm not really convinced by this solution. Some jobs still take quite a lot
>> of time and having to wait 20 minutes for each job I would trigger is a bit
>> annoying.
>>
>> If it was for only one job, it would be acceptable, but let's take the
>> worst case of a coordinated HV release:
>> - TCK release
>> - API release
>> - HV release
>> - website
>> - blog
>>
>> I won't have to wait for each of them as some of them will be grouped by
>> the prioritization but I'm pretty sure I will have to wait for several of
>> them.
>>
>> So, I'm +1 on having this plugin as it seems to be helpful on its own but
>> I'm -1 on considering it a solution to the "let's roll a release" thing.
>
> Some of our test suites used to take 2 hours to run (even 5 days some
> years ago); now you say waiting 20 minutes is not good enough? You'll
> have to optimise our code better :P
>
> It's very easy to spin up extra nodes; my recommendation is that when
> you know you're about to release [for example approximately one hour
> in advance while you might be double-checking JIRA state and such
> things] hit that manual scale-up button and have CI "warmed up" with
> one or two extra nodes.
>
> By the time you need to trigger the release job you'll have the build
> queue flushed, the priority plugin helping you out, and still
> additional extra slaves running to run it all in parallel.
>
> And of course for many releases we don't care for an extra 30 minutes
> so you're free to skip this all if it's not important; incidentally,
> for "work in progress" milestones like the module packs, which we
> recently re-released several times while polishing up the PR, I've been
> releasing from my local machine; it's good to have CI automate things
> but I don't think we should get in a position to require 100%
> availability from CI: practice releases locally sometimes.
> If we really wanted to invest more in it (both time and budget),
> there's the option of spinning up new containers for each job as soon
> as you need one but I feel like we've spent too much time on CI
> already; such technology is maturing so my take is let it mature a bit
> more, and in 6 months we'll do another step of improvement; jumping on
> those things otherwise makes us the beta testers and steals critical
> time from our own projects.
> Let's not forget that many Apache projects take a week or two to
> perform a release, we all know of other projects needing months, so by
> the law of diminishing returns I don't think we should invest much
> more of our time to shave off the last 10 minutes... just spin up some
> extra nodes :)
>
> Thanks,
> Sanne
>
>>
>> --
>> Guillaume
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From davide at hibernate.org Wed Jan 10 06:25:46 2018
From: davide at hibernate.org (Davide D'Alto)
Date: Wed, 10 Jan 2018 11:25:46 +0000
Subject: [hibernate-dev] Awestruct upgrade to version 0.5.7
In-Reply-To: References: Message-ID:

Just to clarify, the websites to check are:
http://staging.in.relation.to and http://staging.hibernate.org
Sorry, I forgot to add the links in the previous mail.

On Wed, Jan 10, 2018 at 11:08 AM, Davide D'Alto
wrote:
> Hello,
> I've upgraded awestruct to version 0.5.7.
>
> Except for the minification of our stylesheets it seems to work fine,
> but it would be nice if someone else could have a look at the generated
> site and confirm that it's OK to apply the changes in production.
>
> On the same note, I've noticed that we are using some custom
> extensions to execute the minification, specifically the one in
> _ext/css_minifier.rb.
>
> Is there any reason to do that? I'm asking because Awestruct comes
> with its own minification class called: Awestruct::Extension::Minify
>
> The main difference I noticed is that this extension doesn't copy the
> original file during deploy on the server and only uses the minified
> one.
> Also currently, this extension doesn't work for CSS but I guess they
> are going to fix it in the next releases (it works for html and js).
>
> Cheers,
> Davide

From guillaume.smet at gmail.com Wed Jan 10 06:33:10 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 12:33:10 +0100
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

On Wed, Jan 10, 2018 at 12:08 PM, Sanne Grinovero wrote:
> Some of our test suites used to take 2 hours to run (even 5 days some
> years ago); now you say waiting 20 minutes is not good enough? You'll
> have to optimise our code better :P
>

What I'm saying is that in the current setup, I don't wait at all when I
have something to release.

Everything runs in parallel with the currently running jobs.

And it works well.

> It's very easy to spin up extra nodes; my recommendation is that when
> you know you're about to release [for example approximately one hour
> in advance while you might be double-checking JIRA state and such
> things] hit that manual scale-up button and have CI "warmed up" with
> one or two extra nodes.
>
> By the time you need to trigger the release job you'll have the build
> queue flushed, the priority plugin helping you out, and still
> additional extra slaves running to run it all in parallel.
>
> And of course for many releases we don't care for an extra 30 minutes
> so you're free to skip this all if it's not important; incidentally,
> for "work in progress" milestones like the module packs, which we
> recently re-released several times while polishing up the PR, I've been
> releasing from my local machine; it's good to have CI automate things
> but I don't think we should get in a position to require 100%
> availability from CI: practice releases locally sometimes.
>

Well, the ultimate goal of releasing on CI is to have consistent releases
and an automated process.

I really don't want to build a release locally and be at risk of doing
something wrong.

That's the main purpose of an automated process and having a stable machine
doing it.

> Let's not forget that many Apache projects take a week or two to
> perform a release, we all know of other projects needing months, so by
> the law of diminishing returns I don't think we should invest much
> more of our time to shave off the last 10 minutes... just spin up some
> extra nodes :)
>

What I'm saying is that the current setup is working very well for releases
and the proposed setup won't work as well.

You can find all sorts of workarounds but it won't work as well and be as
practical as it used to be. Yeah, you can think of starting another node 1
hour before doing your release and hope it will still be there and you won't
have another project's commit triggering 4 jobs just before you start. Sure.
But I'm pretty sure it's going to be a pain.

I'm probably the one doing releases the most frequently with HV; that's why
I am vocal about it.

And maybe I'm the only one but, when I'm working on a release, I don't like
to do stuff in parallel because I don't want to forget something or make a
mistake. So I'm fully focused on it. Waiting 20 minutes before having my job
running will be a complete waste of time. And if it has to happen more than
once during a given release, I can predict I will get grumpy :).

That being said, if you have nothing against me cancelling the running jobs
because they are in the way, we can do that. But I'm not sure people will
like it very much.

--
Guillaume

From guillaume.smet at gmail.com Wed Jan 10 06:41:01 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 12:41:01 +0100
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To: References: Message-ID:

Hi,

On Fri, Jan 5, 2018 at 4:24 PM, Steve Ebersole wrote:

> Yep, I know how long it takes to do a release - I've been doing them for
> almost 15 years ;)
>
> I'm not sure if you are agreeing or disagreeing about blogging every
> bugfix release. But anyway, Sanne asked what would help automate the
> release process, so I am listing things that would help. Of course you can
> feel free to contribute blogging and emailing announcement plugins for
> Gradle for us to use in the automated release tasks ;)
>

AFAICS, lately, the ORM bugfix release announcements are just a link to the
changelog. I don't think it would buy you a lot to automate it.

For the NoORM projects, the announcement part (Twitter, Mail, Blog) is
still manual. I don't think it's that bad.

> If you release something every month, it's not that bad if a bugfix slips
>> to the next release.
If a PR is not completely ready, well, it's going to
>> be in the next one, no need to wait. It makes the release
>> coordination easier.
>>
>
> 5.2 just got lost in the cracks as Andrea, Chris and I were all working on
> 6.0.
>
> It's also easier to detect and fix regressions when you release more
>> frequently.
>>
>
> That's a fallacy. Or at least it's not true in isolation. It depends on
> the things that would highlight the regression picking up that release and
> playing with it, since your entire premise here is that the regression is
> not tested as part of the test suite. But that's actually not what happens
> today in terms of our inter-project integrations... really we find out many
> releases later when OGM or Search update to these newer ORM releases.
>

I did quite a lot of regression hunting myself in $previousJob (mostly on
Search but a bit on ORM too), and it did help to upgrade often, when the
releases were not too big. Easier to find the commit causing the
regression.

I don't know if there are a lot of companies doing that (I know mine
stopped doing that after I left) but it did really help to upgrade in
smaller steps. That's what I was trying to explain.

> FWIW, in the active community branches, I usually do the backport right
>> away - if I think the issue requires backporting at all; sometimes it's
>> just not worth it or too risky. And I'm doing the "what should I
>> backport?" thing only on product-only branches.
>>
>
> This right here is the crux - "active community branch". By definition no
> branch is in active community development. Again, we have discussed this
> as a team multiple times. Once the next release is stable we stop
> developing the previous one, with a few caveats. E.g.:
>
> - Once 5.3 is stable we do generally expect to do a *few* additional
> 5.2 releases. But let's be careful about the expectation about the phrase
> "few" here. I really mean one or 2...
> - For major releases (5.x -> 6.x) we generally expect to do a larger
> number of releases of the 5.3 line. Again though, not indefinite.
>
> The basic gist is that we are an open source community. We simply do not
> have the resources to maintain infinite lines of development. We need to
> focus on what is important. I think we all agree that currently 5.2 is
> still important, but I think we may all have different expectations for
> what that means moving forward as 5.3 becomes the stable release. I cannot
> give a concrete "we will only do X more 5.2 releases after 5.3 is stable"
> answer. It might be 2. It might be 3. And it might be 1.
>

I think we agree on the principles. We just need to have a viable
definition of "stable" for the users.

> I'm not saying it would be that easy with ORM as the flow of issues is
>> significantly larger. Just stating how we do it.
>>
>
> Sure. And time-boxed releases are what we normally strive for as well in
> ORM. 5.2 is largely an aberration in this regard. Again - Andrea, Chris
> and I were focused on 6.0 work and since there is no 5.2-based Red Hat work
> this fell between the cracks.
>

So I think we all agree that the situation with 5.2 is less than ideal.
And it's the version currently recommended for community usage. Which is a
large part of Hibernate usage.

Could we agree on releasing it regularly from now on and at least plan a
5.2.13 release soon to release all the fixes already in?

Thanks!
--
Guillaume

From guillaume.smet at gmail.com Wed Jan 10 06:45:08 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 12:45:08 +0100
Subject: [hibernate-dev] Awestruct upgrade to version 0.5.7
In-Reply-To: References: Message-ID:

Thanks for taking care of that.

I clicked here and there, especially on the complex pages, and it looks
good to me.

Be careful when merging, we don't want the Corporate contributors page on
the public website yet.

Thanks!

--
Guillaume

On Wed, Jan 10, 2018 at 12:25 PM, Davide D'Alto wrote:
> Just to clarify, the websites to check are:
> http://staging.in.relation.to and http://staging.hibernate.org
> Sorry, I forgot to add the links in the previous mail.
>
> On Wed, Jan 10, 2018 at 11:08 AM, Davide D'Alto
> wrote:
> > Hello,
> > I've upgraded awestruct to version 0.5.7.
> >
> > Except for the minification of our stylesheets it seems to work fine,
> > but it would be nice if someone else could have a look at the generated
> > site and confirm that it's OK to apply the changes in production.
> >
> > On the same note, I've noticed that we are using some custom
> > extensions to execute the minification, specifically the one in
> > _ext/css_minifier.rb.
> >
> > Is there any reason to do that? I'm asking because Awestruct comes
> > with its own minification class called: Awestruct::Extension::Minify
> >
> > The main difference I noticed is that this extension doesn't copy the
> > original file during deploy on the server and only uses the minified
> > one.
> > Also currently, this extension doesn't work for CSS but I guess they
> > are going to fix it in the next releases (it works for html and js).
> >
> > Cheers,
> > Davide
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From davide at hibernate.org Wed Jan 10 06:59:09 2018
From: davide at hibernate.org (Davide D'Alto)
Date: Wed, 10 Jan 2018 11:59:09 +0000
Subject: [hibernate-dev] Awestruct upgrade to version 0.5.7
In-Reply-To: References: Message-ID:

Thanks, I will only add the additional commits I made.

On Wed, Jan 10, 2018 at 11:45 AM, Guillaume Smet wrote:
> Thanks for taking care of that.
>
> I clicked here and there, especially on the complex pages, and it looks
> good to me.
>
> Be careful when merging, we don't want the Corporate contributors page on
> the public website yet.
>
> Thanks!
>
> --
> Guillaume
>
> On Wed, Jan 10, 2018 at 12:25 PM, Davide D'Alto
> wrote:
>>
>> Just to clarify, the websites to check are:
>> http://staging.in.relation.to and http://staging.hibernate.org
>> Sorry, I forgot to add the links in the previous mail.
>>
>>
>> On Wed, Jan 10, 2018 at 11:08 AM, Davide D'Alto
>> wrote:
>> > Hello,
>> > I've upgraded awestruct to version 0.5.7.
>> >
>> > Except for the minification of our stylesheets it seems to work fine,
>> > but it would be nice if someone else could have a look at the generated
>> > site and confirm that it's OK to apply the changes in production.
>> >
>> > On the same note, I've noticed that we are using some custom
>> > extensions to execute the minification, specifically the one in
>> > _ext/css_minifier.rb.
>> >
>> > Is there any reason to do that? I'm asking because Awestruct comes
>> > with its own minification class called: Awestruct::Extension::Minify
>> >
>> > The main difference I noticed is that this extension doesn't copy the
>> > original file during deploy on the server and only uses the minified
>> > one.
>> > Also currently, this extension doesn't work for CSS but I guess they
>> > are going to fix it in the next releases (it works for html and js).
>> >
>> > Cheers,
>> > Davide
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From sanne at hibernate.org Wed Jan 10 10:50:15 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 10 Jan 2018 15:50:15 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

On 10 January 2018 at 11:33, Guillaume Smet wrote:
> On Wed, Jan 10, 2018 at 12:08 PM, Sanne Grinovero
> wrote:
>>
>> Some of our test suites used to take 2 hours to run (even 5 days some
>> years ago); now you say waiting 20 minutes is not good enough? You'll
>> have to optimise our code better :P
>
>
> What I'm saying is that in the current setup, I don't wait at all when I
> have something to release.
>
> Everything runs in parallel with the currently running jobs.
>
> And it works well.

I'm confused now. AFAIK this has never been the case? I understand
that the release process itself runs without running the tests, but
I'd still run the tests by triggering a full build before.
You made the example of the TCK and various tests; to run them you'd
not be allowed to run them in parallel with other builds, so if you
wanted to release and the jobs happened to be building ORM against all
its RDBMSes, you'd have had to wait for a couple of hours.

>
>> It's very easy to spin up extra nodes; my recommendation is that when
>> you know you're about to release [for example approximately one hour
>> in advance while you might be double-checking JIRA state and such
>> things] hit that manual scale-up button and have CI "warmed up" with
>> one or two extra nodes.
>>
>> By the time you need to trigger the release job you'll have the build
>> queue flushed, the priority plugin helping you out, and still
>> additional extra slaves running to run it all in parallel.
>>
>> And of course for many releases we don't care for an extra 30 minutes
>> so you're free to skip this all if it's not important; incidentally,
>> for "work in progress" milestones like the module packs, which we
>> recently re-released several times while polishing up the PR, I've been
>> releasing from my local machine; it's good to have CI automate things
>> but I don't think we should get in a position to require 100%
>> availability from CI: practice releases locally sometimes.
>
>
> Well, the ultimate goal of releasing on CI is to have consistent releases
> and an automated process.
>
> I really don't want to build a release locally and be at risk of doing
> something wrong.
>
> That's the main purpose of an automated process and having a stable machine
> doing it.
>
>> Let's not forget that many Apache projects take a week or two to
>> perform a release, we all know of other projects needing months, so by
>> the law of diminishing returns I don't think we should invest much
>> more of our time to shave off the last 10 minutes... just spin up some
>> extra nodes :)
>
>
> What I'm saying is that the current setup is working very well for releases
> and the proposed setup won't work as well.
>
> You can find all sorts of workarounds but it won't work as well and be as
> practical as it used to be.
Yeah, you can think of starting another node 1
> hour before doing your release and hope it will still be there and you won't
> have another project's commit triggering 4 jobs just before you start. Sure.
> But I'm pretty sure it's going to be a pain.

Still I don't really understand if you're having a better idea. In a
nutshell these jobs need resources, if they are busy you either add
more resources, or change priorities, or you wait. Those are the three
aspects you can play with "safely".

Then there's the option of playing with parallelism, but it's really
dangerous: it risks failing both your release and causing failures in
the other tests which are hard to explain, cause confusion among us
all, and ultimately lead to having to repeat all involved jobs,
unnecessarily consuming more resources and time.
In many cases parallelism isn't even an option, for example the ORM
builds consume most system memory so you just can't run additional
JVMs to run the TCK or similar jobs; if it was safe, I would be using
smaller machines.

> I'm probably the one doing releases the most frequently with HV; that's why
> I am vocal about it.
>
> And maybe I'm the only one but, when I'm working on a release, I don't like
> to do stuff in parallel because I don't want to forget something or make a
> mistake. So I'm fully focused on it. Waiting 20 minutes before having my job
> running will be a complete waste of time. And if it has to happen more than
> once during a given release, I can predict I will get grumpy :).
>
> That being said, if you have nothing against me cancelling the running jobs
> because they are in the way, we can do that. But I'm not sure people will
> like it very much.

Just make sure you ask for permissions, but yeah we've done that
previously, hopefully won't be needed often, but it's always an
option.

>
> --
> Guillaume
>

From steve at hibernate.org Wed Jan 10 11:00:01 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 10 Jan 2018 16:00:01 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

And in advance I say I would not be cool with you killing my jobs for your
job to run.

On Wed, Jan 10, 2018 at 9:52 AM Sanne Grinovero wrote:
> On 10 January 2018 at 11:33, Guillaume Smet
> wrote:
> > On Wed, Jan 10, 2018 at 12:08 PM, Sanne Grinovero
> > wrote:
> >>
> >> Some of our test suites used to take 2 hours to run (even 5 days some
> >> years ago); now you say waiting 20 minutes is not good enough? You'll
> >> have to optimise our code better :P
> >
> >
> > What I'm saying is that in the current setup, I don't wait at all when I
> > have something to release.
> >
> > Everything runs in parallel with the currently running jobs.
> >
> > And it works well.
>
> I'm confused now. AFAIK this has never been the case? I understand
> that the release process itself runs without running the tests, but
> I'd still run the tests by triggering a full build before.
> You made the example of the TCK and various tests; to run them you'd
> not be allowed to run them in parallel with other builds, so if you
> wanted to release and the jobs happened to be building ORM against all
> its RDBMSes, you'd have had to wait for a couple of hours.
>
> >
> >> It's very easy to spin up extra nodes; my recommendation is that when
> >> you know you're about to release [for example approximately one hour
> >> in advance while you might be double-checking JIRA state and such
> >> things] hit that manual scale-up button and have CI "warmed up" with
> >> one or two extra nodes.
> >>
> >> By the time you need to trigger the release job you'll have the build
> >> queue flushed, the priority plugin helping you out, and still
> >> additional extra slaves running to run it all in parallel.
> >>
> >> And of course for many releases we don't care for an extra 30 minutes
> >> so you're free to skip this all if it's not important; incidentally
> >> for "work in progress" milestones like the module packs which we
> >> recently re-released several times while polishing up the PR I've been
> >> releasing from my local machine; it's good to have CI automate things
> >> but I don't think we should get in a position to require 100%
> >> availability from CI: practice releases locally sometimes.
> >
> >
> > Well, the ultimate goal of releasing on CI is to have consistent releases
> > and an automated process.
> >
> > I really don't want to build a release locally and be at risk of doing
> > something wrong.
> >
> > That's the main purpose of an automated process and having a stable
> machine
> > doing it.
> >
> >>
> >> Let's not forget that many Apache projects take a week or two to
> >> perform a release, we all know of other projects needing months, so by
> >> the law of diminishing returns I don't think we should invest much
> >> more of our time to shave on the 10 minutes.. just spin up some extra
> >> nodes :)
> >
> >
> > What I'm saying is that the current setup is working very well for
> releases
> > and the proposed setup won't work as well.
> >
> > You can find all sorts of workarounds but it won't work as well and be as
> > practical as it used to be. Yeah, you can think of starting another node
> 1
> > hour before doing your release and hope it will still be there and you
> won't
> > have another project's commit triggering 4 jobs just before you start.
> Sure.
> > But I'm pretty sure it's going to be a pain.
>
> Still I don't really understand if you're having a better idea. In a
> nutshell these jobs need resources, if they are busy you either add
> more resources, or change priorities, or you wait. That's the three
> aspects you can play with "safely".
>
> Then there's the option of playing with parallelism, but it's really
> dangerous: it risks failing both your release and causing failures in
> the other tests which are hard to explain, cause confusion among us
> all, and ultimately lead to have to repeat all involved jobs so
> consuming unnecessarily more resources and time.
> In many cases parallelism isn't even an option, for example the ORM
> builds consume most system memory so you just can't run additional
> JVMs to run the TCK or similar jobs; if it was safe, I would be using
> smaller machines.
>
> > I'm probably the one doing releases the most frequently with HV, that's
> why
> > I am vocal about it.
> >
> > And maybe I'm the only one but, when I'm working on a release, I don't
> like
> > to do stuff in parallel because I don't want to forget something or make
> a
> > mistake. So I'm fully focused on it. Waiting 20 minutes before having my
> job
> > running will be a complete waste of time. And if it has to happen more
> than
> > once on a given release, I can predict I will get grumpy :).
> >
> > That being said, if you have nothing against me cancelling the running
> jobs
> > because they are in the way, we can do that. But I'm not sure people will
> > like it very much.
>
> Just make sure you ask for permissions, but yeah, we've done that
> previously, hopefully won't be needed often, but it's always an
> option.
> > > > > -- > > Guillaume > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Wed Jan 10 11:12:30 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 10 Jan 2018 16:12:30 +0000 Subject: [hibernate-dev] Delay 5.3.0.Beta1 until next week Message-ID: I am going to delay 5.3.0.Beta1 until next week to investigate OSSRH publishing. I noticed yesterday that Bintray has a 10G storage limit which we would hit too quickly. Feel free to add issues to Beta1, but only if you plan on having them done by next week (1/17). From yoann at hibernate.org Wed Jan 10 11:18:06 2018 From: yoann at hibernate.org (Yoann Rodiere) Date: Wed, 10 Jan 2018 16:18:06 +0000 Subject: [hibernate-dev] Delay 5.3.0.Beta1 until next week In-Reply-To: References: Message-ID: It would be nice to have https://github.com/hibernate/hibernate-orm/pull/2092 in 5.3.0.Beta1, so that we can start experimenting in Search :) Just saw there's a conflict (again), I will rebase. On Wed, 10 Jan 2018 at 17:14 Steve Ebersole wrote: > I am going to delay 5.3.0.Beta1 until next week to investigate OSSRH > publishing. I noticed yesterday that Bintray has a 10G storage limit which > we would hit too quickly. > > Feel free to add issues to Beta1, but only if you plan on having them done > by next week (1/17). > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > -- Yoann Rodiere yoann at hibernate.org / yrodiere at redhat.com Software Engineer Hibernate NoORM team From guillaume.smet at gmail.com Wed Jan 10 11:28:28 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Wed, 10 Jan 2018 17:28:28 +0100 Subject: [hibernate-dev] Jenkins job priorities In-Reply-To: References: Message-ID: On Wed, Jan 10, 2018 at 5:00 PM, Steve Ebersole wrote: > And in advance I say I would not be cool with you killing my jobs for your > job to run > Yeah, that was my understanding. I don't expect anyone to be cool with it. From steve at hibernate.org Wed Jan 10 11:33:47 2018 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 10 Jan 2018 16:33:47 +0000 Subject: [hibernate-dev] Delay 5.3.0.Beta1 until next week In-Reply-To: References: Message-ID: Yoann, yes, I will get that in. Honestly it would have even been in the one today - the Bintray concern was just very last minute that I felt it best to hold off for the week. On Wed, Jan 10, 2018 at 10:18 AM Yoann Rodiere wrote: > It would be nice to have > https://github.com/hibernate/hibernate-orm/pull/2092 in 5.3.0.Beta1, so > that we can start experimenting in Search :) > Just saw there's a conflict (again), I will rebase. > > On Wed, 10 Jan 2018 at 17:14 Steve Ebersole wrote: > >> I am going to delay 5.3.0.Beta1 until next week to investigate OSSRH >> publishing. I noticed yesterday that Bintray has a 10G storage limit >> which >> we would hit too quickly. >> >> Feel free to add issues to Beta1, but only if you plan on having them done >> by next week (1/17). 
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>
>
> --
> Yoann Rodiere
> yoann at hibernate.org / yrodiere at redhat.com
> Software Engineer
> Hibernate NoORM team
>

From guillaume.smet at gmail.com  Wed Jan 10 11:33:19 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 10 Jan 2018 17:33:19 +0100
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

On Wed, Jan 10, 2018 at 4:50 PM, Sanne Grinovero wrote:

> I'm confused now. AFAIK this has never been the case? I understand
> that the release process itself runs without running the tests, but
> I'd still run the tests by triggering a full build before.
> You made the example of the TCK and various tests; you'd not be allowed
> to run them in parallel with other builds, so if you wanted to release
> while the jobs happened to be building ORM and all its RDBMSs, you'd
> have had to wait a couple of hours.
>

When I start my release process, all my test jobs are green. That's the
precondition. I usually don't commit something in haste just before the
release.

When I start my release process, my release job has a weight of 2 so it
passes in parallel with the other jobs (be it ORM, Search, or even BV/HV,
as the release job pushes a commit so builds are triggered).

That's why I like this weight plugin.

And yes, this works because the release jobs don't run the tests so I'm
sure there's no conflict of resources with another job.

> Still I don't really understand if you're having a better idea. In a
> nutshell these jobs need resources, if they are busy you either add
> more resources, or change priorities, or you wait. That's the three
> aspects you can play with "safely".

As explained above, there's no conflict of resources in the case of the
current release jobs: they don't run tests. That's why it works.

--
Guillaume

From steve at hibernate.org  Wed Jan 10 11:40:18 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 10 Jan 2018 16:40:18 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

I know ;)

Anyway I do agree that any release jobs should be given the highest
priority in the job queue

On Wed, Jan 10, 2018, 10:29 AM Guillaume Smet wrote:

> On Wed, Jan 10, 2018 at 5:00 PM, Steve Ebersole
> wrote:
>
>> And in advance I say I would not be cool with you killing my jobs for
>> your job to run
>>
>
> Yeah, that was my understanding.
>
> I don't expect anyone to be cool with it.
>
>

From mihalcea.vlad at gmail.com  Wed Jan 10 12:19:16 2018
From: mihalcea.vlad at gmail.com (Vlad Mihalcea)
Date: Wed, 10 Jan 2018 19:19:16 +0200
Subject: [hibernate-dev] Serializable SessionFactory
Message-ID:

Hi,

While reviewing old PRs we have in the ORM project, I stumbled on this one
about serializing the SessionFactory.

I created a new PR, rebased on top of the current master branch and all
tests are passing fine.

If anyone wants to take a look, this is the PR:

https://github.com/hibernate/hibernate-orm/pull/2107

I'm thinking we should integrate it in 5.3.Alpha and stabilize it if there
are some unforeseen changes.

The only drawback is that, if we allow the SF to be Serializable, upgrading
will be much more difficult in case we change object structure.
We could make it clear that this might not be supported or use the
serialVersionUID to point to Hibernate version: major.minor.patch.
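For illustration, a minimal sketch of that idea (the class name and the
encoding scheme are hypothetical, not the actual SessionFactoryImpl):

    import java.io.Serializable;

    // Hypothetical sketch: derive the serialVersionUID from the Hibernate
    // version, e.g. 5.3.0 -> 50300, so that deserializing a factory written
    // by a different release fails fast with an InvalidClassException
    // instead of silently loading an incompatible object graph.
    public class SessionFactoryStub implements Serializable {
        private static final long serialVersionUID = 50300L; // major.minor.patch
    }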
The main benefit is that, for a microservices architecture, Hibernate could
start much faster this way.

Vlad

From steve at hibernate.org  Wed Jan 10 12:45:20 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 10 Jan 2018 17:45:20 +0000
Subject: [hibernate-dev] Serializable SessionFactory
In-Reply-To: References: Message-ID:

The SessionFactory being Serialized outside the VM? Because otherwise it
is already "serializable" via VM serialization hooks
and org.hibernate.internal.SessionFactoryRegistry. And I'm not so
convinced we should support serializing it for "out of" VM use aside from
what we already do which assumes the new target VM has a similarly named
SessionFactory in its org.hibernate.internal.SessionFactoryRegistry.

On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea wrote:

> Hi,
>
> While reviewing old PRs we have in the ORM project, I stumbled on this one
> about serializing the SessionFactory.
>
> I created a new PR, rebased on top of the current master branch and all
> tests are passing fine.
>
> If anyone wants to take a look, this is the PR:
>
> https://github.com/hibernate/hibernate-orm/pull/2107
>
> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if there
> are some unforeseen changes.
>
> The only drawback is that, if we allow the SF to be Serializable, upgrading
> will be much more difficult in case we change object structure.
> We could make it clear that this might not be supported or use the
> serialVersionUID to point to Hibernate version: major.minor.patch.
>
> The main benefit is that, for a microservices architecture, Hibernate could
> start much faster this way.
>
> Vlad
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From gunnar at hibernate.org  Wed Jan 10 15:12:21 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Wed, 10 Jan 2018 21:12:21 +0100
Subject: [hibernate-dev] proposition to create a new Constraint for @Age
In-Reply-To: References: Message-ID:

Hi Hilmer,

Welcome to the list and +1 to that constraint. I think it's a good idea
overall, will comment on the PR on some details as needed.

Cheers,

--Gunnar

2018-01-08 13:04 GMT+01:00 Guillaume Smet :

> Hi Hilmer,
>
> On Sat, Jan 6, 2018 at 9:10 PM, Hilmer Chona wrote:
>
> > I have created a new issue on Jira HV-1552
> > <https://hibernate.atlassian.net/browse/HV-1552>, it is to add a new
> > Constraint to check if the number of years from a given date to today
> > is equal to or greater than a specified value.
> >
> > This validation can be very useful when someone who wants to access, sign
> > up, or buy something must be over an established age.
> >
> > What do you think?
> >
>
> Looks interesting. I think I would mimic what we do with Min/Max i.e. have
> AgeMin/AgeMax and an inclusive option.
>
> What I'm not sure about is if we should limit that to years. Or be more
> flexible and also support months/days for instance. But I'm not sure it's
> going to be easy (or useful?). Might be worth a try though.
>
> I would recommend doing an experiment with one date type before writing all
> the validators as it's going to be a bit tedious.
>
> I suppose you have seen Marko's post about how to contribute a constraint:
> http://in.relation.to/2018/01/04/adding-new-constraint-to-engine/ ?
>
> Have a nice day.
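For the record, a rough sketch of what such a constraint could look like if
it mimics Min/Max as suggested above (the annotation name and message key
are hypothetical):

    import static java.lang.annotation.ElementType.FIELD;
    import static java.lang.annotation.ElementType.METHOD;
    import static java.lang.annotation.RetentionPolicy.RUNTIME;

    import java.lang.annotation.Documented;
    import java.lang.annotation.Retention;
    import java.lang.annotation.Target;

    import javax.validation.Constraint;
    import javax.validation.Payload;

    // Hypothetical @AgeMin, modelled on @Min plus an "inclusive" option:
    @Documented
    @Constraint(validatedBy = { })
    @Target({ METHOD, FIELD })
    @Retention(RUNTIME)
    public @interface AgeMin {
        String message() default "{org.hibernate.validator.constraints.AgeMin.message}";
        Class<?>[] groups() default { };
        Class<? extends Payload>[] payload() default { };

        // minimum age in years, matching the years-based proposal
        int value();

        // whether someone exactly this old is considered valid
        boolean inclusive() default true;
    }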
>
> --
> Guillaume
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From gbadner at redhat.com  Wed Jan 10 17:08:36 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 10 Jan 2018 14:08:36 -0800
Subject: [hibernate-dev] Preparing to release Hibernate ORM 5.1.11.Final
Message-ID:

As $subject. Please don't push anything to 5.1 branch.
Thanks,
Gail

From gbadner at redhat.com  Wed Jan 10 19:22:06 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 10 Jan 2018 16:22:06 -0800
Subject: [hibernate-dev] Hibernate ORM 5.1.11.Final Released
Message-ID:

http://in.relation.to/2018/01/10/hibernate-orm-5111-final-release/

From mihalcea.vlad at gmail.com  Thu Jan 11 03:07:46 2018
From: mihalcea.vlad at gmail.com (Vlad Mihalcea)
Date: Thu, 11 Jan 2018 10:07:46 +0200
Subject: [hibernate-dev] Serializable SessionFactory
In-Reply-To: References: Message-ID:

Yes, out of the JVM. This PR allows the SF to be serialized to a file, so
the next time we bootstrap, we reload the whole SF from the file instead.

There are many unforeseen issues probably related to this PR and it might
hurt maintenance in the long-run.

For this reason, I'm going to leave the PR open as-is, and investigate
whether we can bootstrap faster by avoiding (caching) the DB metadata
retrieving part.

Vlad

On Wed, Jan 10, 2018 at 7:45 PM, Steve Ebersole wrote:

> The SessionFactory being Serialized outside the VM? Because otherwise it
> is already "serializable" via VM serialization hooks
> and org.hibernate.internal.SessionFactoryRegistry. And I'm not so
> convinced we should support serializing it for "out of" VM use aside from
> what we already do which assumes the new target VM has a similarly named
> SessionFactory in its org.hibernate.internal.SessionFactoryRegistry.
>
> On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea
> wrote:
>
>> Hi,
>>
>> While reviewing old PRs we have in the ORM project, I stumbled on this one
>> about serializing the SessionFactory.
>>
>> I created a new PR, rebased on top of the current master branch and all
>> tests are passing fine.
>>
>> If anyone wants to take a look, this is the PR:
>>
>> https://github.com/hibernate/hibernate-orm/pull/2107
>>
>> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if there
>> are some unforeseen changes.
>>
>> The only drawback is that, if we allow the SF to be Serializable, upgrading
>> will be much more difficult in case we change object structure.
>> We could make it clear that this might not be supported or use the
>> serialVersionUID to point to Hibernate version: major.minor.patch.
>>
>> The main benefit is that, for a microservices architecture, Hibernate could
>> start much faster this way.
>>
>> Vlad
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>

From steve at hibernate.org  Thu Jan 11 07:39:24 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 11 Jan 2018 12:39:24 +0000
Subject: [hibernate-dev] Serializable SessionFactory
In-Reply-To: References: Message-ID:

I just don't see how serializing a full SessionFactory to disk is a good
idea.

What do you mean by "avoiding (caching) the DB metadata retrieving part"?

On Thu, Jan 11, 2018 at 2:08 AM Vlad Mihalcea wrote:

> Yes, out of the JVM.
This PR allows the SF to be serialized to a file, so
> the next time we bootstrap, we reload the whole SF from the file instead.
>
> There are many unforeseen issues probably related to this PR and it might
> hurt maintenance in the long-run.
>
> For this reason, I'm going to leave the PR open as-is, and investigate
> whether we can bootstrap faster by avoiding (caching) the DB metadata
> retrieving part.
>
> Vlad
>
> On Wed, Jan 10, 2018 at 7:45 PM, Steve Ebersole
> wrote:
>
>> The SessionFactory being Serialized outside the VM? Because otherwise it
>> is already "serializable" via VM serialization hooks
>> and org.hibernate.internal.SessionFactoryRegistry. And I'm not so
>> convinced we should support serializing it for "out of" VM use aside from
>> what we already do which assumes the new target VM has a similarly named
>> SessionFactory in its org.hibernate.internal.SessionFactoryRegistry.
>>
>> On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea
>> wrote:
>>
>>> Hi,
>>>
>>> While reviewing old PRs we have in the ORM project, I stumbled on this
>>> one
>>> about serializing the SessionFactory.
>>>
>>> I created a new PR, rebased on top of the current master branch and all
>>> tests are passing fine.
>>>
>>> If anyone wants to take a look, this is the PR:
>>>
>>> https://github.com/hibernate/hibernate-orm/pull/2107
>>>
>>> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if
>>> there
>>> are some unforeseen changes.
>>>
>>> The only drawback is that, if we allow the SF to be Serializable,
>>> upgrading
>>> will be much more difficult in case we change object structure.
>>> We could make it clear that this might not be supported or use the
>>> serialVersionUID to point to Hibernate version: major.minor.patch.
>>>
>>> The main benefit is that, for a microservices architecture, Hibernate
>>> could
>>> start much faster this way.
>>>
>>> Vlad
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>
>>
>

From sanne at hibernate.org  Thu Jan 11 08:05:32 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Thu, 11 Jan 2018 13:05:32 +0000
Subject: [hibernate-dev] Serializable SessionFactory
In-Reply-To: References: Message-ID:

On 11 January 2018 at 12:39, Steve Ebersole wrote:
> I just don't see how serializing a full SessionFactory to disk is a good
> idea.
>
> What do you mean by "avoiding (caching) the DB metadata retrieving
> part"?

I'm wondering too. I would be very cautious with that: if the datasource
connection is (temporarily) broken, because for example Hibernate was
restarted, we don't really know which assumptions will still be true.
The metadata is possibly no longer valid.

You can't know for sure that the users' "development cycle" doesn't
include some step which makes changes to the database, or maybe even
updates it. I actually expect this to be common and this would cause a
lot of trouble.

If we're willing to invest to make the ORM bootstrap faster, that's
great but we should work on identifying what is slow and what can be
done without making it dangerous.

>
> On Thu, Jan 11, 2018 at 2:08 AM Vlad Mihalcea
> wrote:
>
>> Yes, out of the JVM. This PR allows the SF to be serialized to a file, so
>> the next time we bootstrap, we reload the whole SF from the file instead.
>>
>> There are many unforeseen issues probably related to this PR and it might
>> hurt maintenance in the long-run.
>>
>> For this reason, I'm going to leave the PR open as-is, and investigate
>> whether we can bootstrap faster by avoiding (caching) the DB metadata
>> retrieving part.
>>
>> Vlad
>>
>> On Wed, Jan 10, 2018 at 7:45 PM, Steve Ebersole
>> wrote:
>>
>>> The SessionFactory being Serialized outside the VM? Because otherwise it
>>> is already "serializable" via VM serialization hooks
>>> and org.hibernate.internal.SessionFactoryRegistry. And I'm not so
>>> convinced we should support serializing it for "out of" VM use aside from
>>> what we already do which assumes the new target VM has a similarly named
>>> SessionFactory in its org.hibernate.internal.SessionFactoryRegistry.
>>>
>>> On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> While reviewing old PRs we have in the ORM project, I stumbled on this
>>>> one
>>>> about serializing the SessionFactory.
>>>>
>>>> I created a new PR, rebased on top of the current master branch and all
>>>> tests are passing fine.
>>>>
>>>> If anyone wants to take a look, this is the PR:
>>>>
>>>> https://github.com/hibernate/hibernate-orm/pull/2107
>>>>
>>>> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if
>>>> there
>>>> are some unforeseen changes.
>>>>
>>>> The only drawback is that, if we allow the SF to be Serializable,
>>>> upgrading
>>>> will be much more difficult in case we change object structure.
>>>> We could make it clear that this might not be supported or use the
>>>> serialVersionUID to point to Hibernate version: major.minor.patch.
>>>>
>>>> The main benefit is that, for a microservices architecture, Hibernate
>>>> could
>>>> start much faster this way.
>>>>
>>>> Vlad
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
>>>
>>
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From christian.beikov at gmail.com  Thu Jan 11 11:56:58 2018
From: christian.beikov at gmail.com (Christian Beikov)
Date: Thu, 11 Jan 2018 17:56:58 +0100
Subject: [hibernate-dev] Bulk delete behavior for collection tables
Message-ID: <9f2cf8bf-8f81-d8ef-041d-5678f6dc3ef9@gmail.com>

Hey,

so HHH-5529 defines a
feature which I'd like to work on but want to hear opinions first.

Currently, bulk deletes only clear join tables of the affected entity
type. I guess one could argue that this was done because collection
table entries, in contrast to join table entries, should be bound to the
entity table lifecycle by using an FK with delete cascading. Or maybe it
just wasn't implemented because nobody stepped up.

I'd like to fill this gap and implement the deletion of the collection
table entries, but make that and the join table entry deletion
configurable.

Does anyone have anything against that?

Would you prefer a single configuration option for join table and
collection table clearing? If we enable that option by default,
collection tables will then be cleared whereas currently users would get
an FK violation. Don't know if that can be classified as breaking behavior.

Or have two configuration options? Even then, would we enable collection
table entry deletion by default?
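To make the current failure mode concrete, a sketch assuming a hypothetical
Person entity whose @ElementCollection is stored in a PERSON_PHONES table
(all names invented for the example):

    import javax.persistence.EntityManager;

    public class BulkDeleteExample {
        static void deleteAllPersons(EntityManager em) {
            // Today this issues only "delete from PERSON", leaving the
            // PERSON_PHONES rows behind, so the database reports an FK
            // violation unless the FK cascades on delete. The proposal
            // above would additionally issue "delete from PERSON_PHONES",
            // guarded by the new configuration option(s).
            em.createQuery("delete from Person").executeUpdate();
        }
    }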
--

Mit freundlichen Grüßen,
------------------------------------------------------------------------
*Christian Beikov*

From gbadner at redhat.com  Thu Jan 11 14:43:44 2018
From: gbadner at redhat.com (Gail Badner)
Date: Thu, 11 Jan 2018 11:43:44 -0800
Subject: [hibernate-dev] Bulk delete behavior for collection tables
In-Reply-To: <9f2cf8bf-8f81-d8ef-041d-5678f6dc3ef9@gmail.com>
References: <9f2cf8bf-8f81-d8ef-041d-5678f6dc3ef9@gmail.com>
Message-ID:

Hi Christian,

I'm pretty sure this was implemented, but that it introduced a regression
and ended up being reverted. I'll try to find the issue so you can see the
code that was used.

Regards,
Gail

On Thu, Jan 11, 2018 at 8:56 AM, Christian Beikov <
christian.beikov at gmail.com> wrote:

> Hey,
>
> so HHH-5529 defines a
> feature which I'd like to work on but want to hear opinions first.
>
> Currently, bulk deletes only clear join tables of the affected entity
> type. I guess one could argue that this was done because collection
> table entries, in contrast to join table entries, should be bound to the
> entity table lifecycle by using an FK with delete cascading. Or maybe it
> just wasn't implemented because nobody stepped up.
>
> I'd like to fill this gap and implement the deletion of the collection
> table entries, but make that and the join table entry deletion
> configurable.
>
> Does anyone have anything against that?
>
> Would you prefer a single configuration option for join table and
> collection table clearing?
If we enable that option by default,
> collection tables will then be cleared whereas currently users would get
> an FK violation. Don't know if that can be classified as breaking behavior.
>
> Or have two configuration options? Even then, would we enable collection
> table entry deletion by default?
>
> --
>
> Mit freundlichen Grüßen,
> ------------------------------------------------------------------------
> *Christian Beikov*
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gbadner at redhat.com  Thu Jan 11 15:41:35 2018
From: gbadner at redhat.com (Gail Badner)
Date: Thu, 11 Jan 2018 12:41:35 -0800
Subject: [hibernate-dev] Bulk delete behavior for collection tables
In-Reply-To: References: <9f2cf8bf-8f81-d8ef-041d-5678f6dc3ef9@gmail.com>
Message-ID:

Please see https://hibernate.atlassian.net/browse/HHH-9283.

On Thu, Jan 11, 2018 at 11:43 AM, Gail Badner wrote:

> Hi Christian,
>
> I'm pretty sure this was implemented, but that it introduced a regression
> and ended up being reverted. I'll try to find the issue so you can see the
> code that was used.
>
> Regards,
> Gail
>
> On Thu, Jan 11, 2018 at 8:56 AM, Christian Beikov <
> christian.beikov at gmail.com> wrote:
>
>> Hey,
>>
>> so HHH-5529 defines a
>> feature which I'd like to work on but want to hear opinions first.
>>
>> Currently, bulk deletes only clear join tables of the affected entity
>> type. I guess one could argue that this was done because collection
>> table entries, in contrast to join table entries, should be bound to the
>> entity table lifecycle by using an FK with delete cascading. Or maybe it
>> just wasn't implemented because nobody stepped up.
>>
>> I'd like to fill this gap and implement the deletion of the collection
>> table entries, but make that and the join table entry deletion
>> configurable.
>>
>> Does anyone have anything against that?
>>
>> Would you prefer a single configuration option for join table and
>> collection table clearing? If we enable that option by default,
>> collection tables will then be cleared whereas currently users would get
>> an FK violation. Don't know if that can be classified as breaking behavior.
>>
>> Or have two configuration options? Even then, would we enable collection
>> table entry deletion by default?
>>
>> --
>>
>> Mit freundlichen Grüßen,
>> ------------------------------------------------------------------------
>> *Christian Beikov*
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
>

From gbadner at redhat.com  Thu Jan 11 18:44:18 2018
From: gbadner at redhat.com (Gail Badner)
Date: Thu, 11 Jan 2018 15:44:18 -0800
Subject: [hibernate-dev] Should HHH-12150 be fixed in 5.3.0.Beta?
Message-ID:

HHH-12150 is currently set to be fixed in 5.3.0. I have some time I can
spend on this.

There's another issue involving @MapKeyColumn, HHH-10575. Should I work on
these, or something else for 5.3.0.Beta?

From brett at hibernate.org  Thu Jan 11 21:34:00 2018
From: brett at hibernate.org (Brett Meyer)
Date: Thu, 11 Jan 2018 21:34:00 -0500
Subject: [hibernate-dev] replace Pax Exam with Docker
Message-ID: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>

I'm fed up with Pax Exam and would love to replace it as the
hibernate-osgi integration test harness. Most of the Karaf committers
I've been working with hate it more than I do. Every single time we
upgrade the Karaf version, something less-than-minor in hibernate-osgi,
upgrade/change dependencies, or attempt to upgrade Pax Exam itself,
there's some new obfuscated failure. And no matter how much I pray, it
refuses to let us get to the container logs to figure out what
happened. Tis a house of cards.

One alternative that recently came up elsewhere: use Docker to bootstrap
the container, hit it with our features.xml, install a test bundle that
exposes functionality externally (over HTTP, Karaf commands, etc), then
hit the endpoints and run assertions.

Pros: true "integration test", plain vanilla Karaf, direct access to all
logs, easier to eventually support and test other containers.

Cons: Need Docker installed for local test runs, probably safer to
isolate the integration test behind a disabled-by-default Maven profile.

Any gut reactions?

OSGi is fun and I'm not at all bitter,

-Brett-

;)

From brett at hibernate.org  Thu Jan 11 21:47:39 2018
From: brett at hibernate.org (Brett Meyer)
Date: Thu, 11 Jan 2018 21:47:39 -0500
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To: References: Message-ID:

Sorry for the late and probably irrelevant response...

We're using an in-house Artifactory instance at a gig and it's been
trash. I can't speak to the UI or management end, nor Bintray, but
Artifactory's platform doesn't seem as polished (can't believe I just
said that) or stable (can't believe I said that either) as Nexus (what
is happening).

I use OSSRH for some minor projects and have generally had decent luck
-- including a few interactions with the support team that went well.
OSSRH != JBoss Nexus, although I definitely understand the wounds...

On 12/19/17 8:34 AM, Steve Ebersole wrote:
> HHH-12172 is about moving away from the JBoss Nexus repo for publishing our
> artifacts. There is an open question about which service to use instead -
> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory).
>
> Personally I think Artifactory is far superior of a UI/platform.
We all
> know Nexus from the JBoss deployment of it, and we have all generally had
> nothing good to say about it.
>
> But I am wondering if anyone has practical experience with either, or knows
> persons/projects that do and could share their experiences. E.g., even
> though I prefer Bintray in almost every regard, I am very nervous that it
> seems next to impossible to get help/support with it. The same may be true
> with OSSRH - I don't know, hence why I am asking ;)
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From yoann at hibernate.org  Fri Jan 12 03:12:05 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Fri, 12 Jan 2018 08:12:05 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

Quick update: the priority plugin seems to be working fine, and I disabled
the Heavy Job plugin. It turns out the Heavy Job plugin was preventing the
Amazon EC2 plugin from spinning up new slaves, probably because the Amazon
EC2 plugin only saw two empty slots on an existing slave and couldn't
understand that the waiting jobs couldn't be run with only two slots.
Consequently, the Amazon EC2 plugin now spins up lots of instances, with a
limit of 5. In order to avoid a big hit on the budget, Sanne reduced the
idle timeout to 30 minutes. Please allow 2 minutes for the slave to boot if
there is no slave up when you start your job.

So now we have a working Amazon EC2 plugin, ensuring new slaves will be spun
up if there are waiting jobs, and a priority queue, ensuring release/PR
jobs will be run first in the (hopefully unlikely) event a lot of jobs are
waiting in the queue.
It looks like a reasonable setup, so let's see how it goes for the next
releases and discuss it afterwards.

On Wed, 10 Jan 2018 at 17:46 Steve Ebersole wrote:

> I know ;)
>
> Anyway I do agree that any release jobs should be given the highest
> priority in the job queue
>
> On Wed, Jan 10, 2018, 10:29 AM Guillaume Smet
> wrote:
>
> > On Wed, Jan 10, 2018 at 5:00 PM, Steve Ebersole
> > wrote:
> >
> >> And in advance I say I would not be cool with you killing my jobs for
> >> your job to run
> >>
> >
> > Yeah, that was my understanding.
> >
> > I don't expect anyone to be cool with it.
> >
> >
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team

From gunnar at hibernate.org  Fri Jan 12 03:22:15 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Fri, 12 Jan 2018 09:22:15 +0100
Subject: [hibernate-dev] replace Pax Exam with Docker
In-Reply-To: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
Message-ID:

Hi Brett,

We also had our fair share of frustration with Pax Exam in HV, and I was
(more than once) at the point of dropping it.

Docker could work, but as you say it's a bit of a heavy dependency, if not
required anyways. Not sure whether I'd like to add this as a prerequisite
for the HV build to be executed. And tests in separate profiles tend to be
"forgotten" in my experience.

One other approach could be to use Arquillian's OSGi support (see
https://github.com/arquillian/arquillian-container-osgi), did you consider
using that one as an alternative?
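For reference, a rough sketch of how such an Arquillian OSGi test is
typically structured (from memory - the exact APIs may differ slightly):

    import java.io.InputStream;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.arquillian.test.api.ArquillianResource;
    import org.jboss.osgi.metadata.OSGiManifestBuilder;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.Asset;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.osgi.framework.BundleContext;

    import static org.junit.Assert.assertNotNull;

    @RunWith(Arquillian.class)
    public class OsgiSmokeTest {

        // The test class itself gets deployed as a bundle into the container.
        @Deployment
        public static JavaArchive deployment() {
            final JavaArchive archive = ShrinkWrap.create(JavaArchive.class, "osgi-smoke-test.jar");
            archive.setManifest(new Asset() {
                public InputStream openStream() {
                    OSGiManifestBuilder builder = OSGiManifestBuilder.newInstance();
                    builder.addBundleSymbolicName(archive.getName());
                    builder.addBundleManifestVersion(2);
                    return builder.openStream();
                }
            });
            return archive;
        }

        @ArquillianResource
        BundleContext context;

        @Test
        public void bundleContextIsInjected() {
            assertNotNull(context);
        }
    }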
Cheers,

--Gunnar

2018-01-12 3:34 GMT+01:00 Brett Meyer :

>
>
> I'm fed up with Pax Exam and would love to replace it as the
> hibernate-osgi integration test harness. Most of the Karaf committers
> I've been working with hate it more than I do. Every single time we
> upgrade the Karaf version, something less-than-minor in hibernate-osgi,
> upgrade/change dependencies, or attempt to upgrade Pax Exam itself,
> there's some new obfuscated failure. And no matter how much I pray, it
> refuses to let us get to the container logs to figure out what
> happened. Tis a house of cards.
>
>
>
> One alternative that recently came up elsewhere: use Docker to bootstrap
> the container, hit it with our features.xml, install a test bundle that
> exposes functionality externally (over HTTP, Karaf commands, etc), then
> hit the endpoints and run assertions.
>
> Pros: true "integration test", plain vanilla Karaf, direct access to all
> logs, easier to eventually support and test other containers.
>
> Cons: Need Docker installed for local test runs, probably safer to
> isolate the integration test behind a disabled-by-default Maven profile.
>
> Any gut reactions?
>
> OSGi is fun and I'm not at all bitter,
>
> -Brett-
>
> ;)
>
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org  Fri Jan 12 06:27:56 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Fri, 12 Jan 2018 11:27:56 +0000
Subject: [hibernate-dev] replace Pax Exam with Docker
In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
Message-ID:

+1 to explore alternatives to Pax Exam, but I'd be wary of maintaining
our own test infrastructure.

Pax Exam was just "helping" to deploy/run things in Karaf, so I can't
imagine using Karaf without the helpers being a walk in the park; e.g.
having to deal with HTTP operations comes with its own baggage
{dependencies, complexity, speed, .. } and generally more stuff to
maintain.

So.. +1 to try out Arquillian or anything else. Or maybe you could
start your own tool, but I'd prefer to see it in a separate repository
:) e.g. a nice Gradle plugin so maybe you get more people helping?

Also: considered contributing to Pax? My personal experience with it
has always been a pain but if I had to try to identify the reason, it was
mostly caused by me being unfamiliar with Karaf and not having good
clues to track down the real failure; maybe some minor error reporting
improvements could make a big difference to its usability? Just
saying, I don't feel like Pax is bad, but it seems their developers
really expect their users to be deeply familiar with it all - feels
like the typical case in which they could use some feedback and a
hand.

Thanks,
Sanne

On 12 January 2018 at 08:22, Gunnar Morling wrote:
> Hi Brett,
>
> We also had our fair share of frustration with Pax Exam in HV, and I was
> (more than once) at the point of dropping it.
>
> Docker could work, but as you say it's a bit of a heavy dependency, if not
> required anyways. Not sure whether I'd like to add this as a prerequisite
> for the HV build to be executed. And tests in separate profiles tend to be
> "forgotten" in my experience.
>
> One other approach could be to use Arquillian's OSGi support (see
> https://github.com/arquillian/arquillian-container-osgi), did you consider
> using that one as an alternative?
>
> Cheers,
>
> --Gunnar
>
>
> 2018-01-12 3:34 GMT+01:00 Brett Meyer :
>
>>
>>
>> I'm fed up with Pax Exam and would love to replace it as the
>> hibernate-osgi integration test harness. Most of the Karaf committers
>> I've been working with hate it more than I do. Every single time we
>> upgrade the Karaf version, something less-than-minor in hibernate-osgi,
>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself,
>> there's some new obfuscated failure. And no matter how much I pray, it
>> refuses to let us get to the container logs to figure out what
>> happened. Tis a house of cards.
>>
>>
>>
>> One alternative that recently came up elsewhere: use Docker to bootstrap
>> the container, hit it with our features.xml, install a test bundle that
>> exposes functionality externally (over HTTP, Karaf commands, etc), then
>> hit the endpoints and run assertions.
>>
>> Pros: true "integration test", plain vanilla Karaf, direct access to all
>> logs, easier to eventually support and test other containers.
>>
>> Cons: Need Docker installed for local test runs, probably safer to
>> isolate the integration test behind a disabled-by-default Maven profile.
>>
>> Any gut reactions?
>>
>> OSGi is fun and I'm not at all bitter,
>>
>> -Brett-
>>
>> ;)
>>
>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From davide at hibernate.org  Fri Jan 12 06:38:15 2018
From: davide at hibernate.org (Davide D'Alto)
Date: Fri, 12 Jan 2018 11:38:15 +0000
Subject: [hibernate-dev] Jenkins job priorities
In-Reply-To: References: Message-ID:

Well done, thanks a lot.

On Fri, Jan 12, 2018 at 8:12 AM, Yoann Rodiere wrote:
> Quick update: the priority plugin seems to be working fine, and I disabled
> the Heavy Job plugin. It turns out the Heavy Job plugin was preventing the
> Amazon EC2 plugin from spinning up new slaves, probably because the Amazon
> EC2 plugin only saw two empty slots on an existing slave and couldn't
> understand that the waiting jobs couldn't be run with only two slots.
> Consequently, the Amazon EC2 plugin now spins up lots of instances, with a
> limit of 5. In order to avoid a big hit on the budget, Sanne reduced the
> idle timeout to 30 minutes. Please allow 2 minutes for the slave to boot if
> there is no slave up when you start your job.
>
> So now we have a working Amazon EC2 plugin, ensuring new slaves will be spun
> up if there are waiting jobs, and a priority queue, ensuring release/PR
> jobs will be run first in the (hopefully unlikely) event a lot of jobs are
> waiting in the queue.
> It looks like a reasonable setup, so let's see how it goes for the next
> releases and discuss it afterwards.
>
>
> On Wed, 10 Jan 2018 at 17:46 Steve Ebersole wrote:
>
>> I know ;)
>>
>> Anyway I do agree that any release jobs should be given the highest
>> priority in the job queue
>>
>> On Wed, Jan 10, 2018, 10:29 AM Guillaume Smet
>> wrote:
>>
>> > On Wed, Jan 10, 2018 at 5:00 PM, Steve Ebersole
>> > wrote:
>> >
>> >> And in advance I say I would not be cool with you killing my jobs for
>> >> your job to run
>> >>
>> >
>> > Yeah, that was my understanding.
>> >
>> > I don't expect anyone to be cool with it.
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>
>
> --
> Yoann Rodiere
> yoann at hibernate.org / yrodiere at redhat.com
> Software Engineer
> Hibernate NoORM team
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org  Fri Jan 12 06:59:46 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Fri, 12 Jan 2018 11:59:46 +0000
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To: References: Message-ID:

Personally I'm neutral. I surely wouldn't want to manage our own
Artifactory, but since JFrog will do that I'm not concerned about the
platform management being horrible.

Artifactory looks better, OSSRH has the benefit of possibly having
better integration with Maven.

There are some benefits to staying on JBoss's Nexus though; not
expressing a strong opinion but let's clarify these.

# Stats
We need download statistics, which I understand they all offer, but an
absolute number is not as useful as being able to compare the numbers
in one dashboard across our various other projects.
Also not looking forward to having to log in to multiple systems to
gather it all.

# Quality control of artifacts
I'm understanding that JBoss Nexus does several strict validations on
our poms; sure they have been in the way as it's not nice to see such
failures *during* a release but there's an upside to them as well.
AFAIK OSSRH also has similar rules, but the JBoss team one has
different ones, plus a deal with Sonatype to deem our stuff good
"pre-approved" so we don't have to satisfy the Sonatype rules too.

# Signing
Also I'm understanding that to release on OSSRH we need to sign all
artifacts; not a bad idea but it's quite more paperwork and key
management. Such paperwork is handled for us by the JBoss Nexus team.
We'd need to install GPG on our release servers, get an organization
RSA key signed, and people stubbornly releasing manually will have to
create a key each, and have it approved by Sonatype.

Not against migrating if this is what you all want - just making sure
we're taking these into account.

Thanks,
Sanne

On 12 January 2018 at 02:47, Brett Meyer wrote:
> Sorry for the late and probably irrelevant response...
>
> We're using an in-house Artifactory instance at a gig and it's been
> trash. I can't speak to the UI or management end, nor Bintray, but
> Artifactory's platform doesn't seem as polished (can't believe I just
> said that) or stable (can't believe I said that either) as Nexus (what
> is happening).
>
> I use OSSRH for some minor projects and have generally had decent luck
> -- including a few interactions with the support team that went well.
> OSSRH != JBoss Nexus, although I definitely understand the wounds...
>
>
> On 12/19/17 8:34 AM, Steve Ebersole wrote:
>> HHH-12172 is about moving away from the JBoss Nexus repo for publishing our
>> artifacts. There is an open question about which service to use instead -
>> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory).
>>
>> Personally I think Artifactory is far superior of a UI/platform. We all
>> know Nexus from the JBoss deployment of it, and we have all generally had
>> nothing good to say about it.
>>
>> But I am wondering if anyone has practical experience with either, or knows
>> persons/projects that do and could share their experiences. E.g., even
>> though I prefer Bintray in almost every regard, I am very nervous that it
>> seems next to impossible to get help/support with it. The same may be true
>> with OSSRH - I don't know, hence why I am asking ;)
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org  Fri Jan 12 07:54:07 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Fri, 12 Jan 2018 12:54:07 +0000
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To: References: Message-ID:

Firstly, if anyone looked at the Jira, you would see that Bintray simply
won't work. They have a 10G storage limit. That's something we would hit
pretty soon, depending on which projects moved there. Also, getting in
touch with them for any kind of support has been a pretty big pain; it's
generally been multiple days until I hear a reply - most recently I asked
a question on Monday or Tuesday that I still have not had a reply about.

That's a shame because the UI is IMO significantly better than Nexus.
Yes, JBoss has a particularly bad Nexus set up - but let's not lose sight
of the fact that it's still Nexus and we all still very much dislike that
UI.

On Fri, Jan 12, 2018 at 6:32 AM Sanne Grinovero wrote:

> Artifactory looks better, OSSRH has the benefit of possibly having
> better integration with Maven.
>

Simply not true, but not really relevant since we won't be using Bintray.

> There are some benefits to staying on JBoss's Nexus though; not
> expressing a strong opinion but let's clarify these.
>
> # Stats
> We need download statistics, which I understand they all offer, but an
> absolute number is not as useful as being able to compare the numbers
> in one dashboard across our various other projects.
> Also not looking forward to having to log in to multiple systems to
> gather it all.
>

You have to already. Nexus and SourceForge at the least.

> # Quality control of artifacts
> I'm understanding that JBoss Nexus does several strict validations on
> our poms; sure they have been in the way as it's not nice to see such
> failures *during* a release but there's an upside to them as well.
> AFAIK OSSRH also has similar rules, but the JBoss team one has
> different ones, plus a deal with Sonatype to deem our stuff good
> "pre-approved" so we don't have to satisfy the Sonatype rules too.
>

They both validate pretty much the same exact information. I'm not at all
sure what you mean as a point here.

> # Signing
> Also I'm understanding that to release on OSSRH we need to sign all
> artifacts; not a bad idea but it's quite more paperwork and key
> management. Such paperwork is handled for us by the JBoss Nexus team.
> We'd need to install GPG on our release servers, get an organization
> RSA key signed, and people stubbornly releasing manually will have to
> create a key each, and have it approved by Sonatype.
>

This was another key benefit of Bintray.
It actually has the capability
of signing your artifacts as you publish them, meaning you do not have to
do any of these steps that concern you - you *can* set up signing to use
your own key, but you can also just let Bintray handle it for you.

> On 12 January 2018 at 02:47, Brett Meyer wrote:
> > Sorry for the late and probably irrelevant response...
> >
> > We're using an in-house Artifactory instance at a gig and it's been
> > trash. I can't speak to the UI or management end, nor Bintray, but
> > Artifactory's platform doesn't seem as polished (can't believe I just
> > said that) or stable (can't believe I said that either) as Nexus (what
> > is happening).
>

We are only considering things we don't have to host. So one platform
being harder to set up, maintain, etc is not at all a concern.

> > I use OSSRH for some minor projects and have generally had decent luck
> > -- including a few interactions with the support team that went well.
> > OSSRH != JBoss Nexus, although I definitely understand the wounds...
>

I have also had very good experiences with Sonatype regarding OSSRH
support as well. As far as "OSSRH != JBoss Nexus" that is certainly true
to an extent. But really our troubles with JBoss Nexus break down into 2
categories:

1. Infrastructure - JBoss Nexus has often been slow, unstable. This is
the one point I definitely agree with, in that I would assume OSSRH will
be much better.
2. UI - it's still Nexus. The UI is not going to be any better, let alone
significantly better.

From gunnar at hibernate.org  Fri Jan 12 08:12:58 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Fri, 12 Jan 2018 14:12:58 +0100
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To: References: Message-ID:

2018-01-12 12:59 GMT+01:00 Sanne Grinovero :

> Personally I'm neutral. I surely wouldn't want to manage our own
> Artifactory, but since JFrog will do that I'm not concerned about the
> platform management being horrible.
>
> Artifactory looks better, OSSRH has the benefit of possibly having
> better integration with Maven.
>
> There are some benefits to staying on JBoss's Nexus though; not
> expressing a strong opinion but let's clarify these.
>
> # Stats
> We need download statistics, which I understand they all offer, but an
> absolute number is not as useful as being able to compare the numbers
> in one dashboard across our various other projects.
> Also not looking forward to having to log in to multiple systems to
> gather it all.
>
> # Quality control of artifacts
> I'm understanding that JBoss Nexus does several strict validations on
> our poms; sure they have been in the way as it's not nice to see such
> failures *during* a release but there's an upside to them as well.
> AFAIK OSSRH also has similar rules, but the JBoss team one has
> different ones, plus a deal with Sonatype to deem our stuff good
> "pre-approved" so we don't have to satisfy the Sonatype rules too.
>
> # Signing
> Also I'm understanding that to release on OSSRH we need to sign all
> artifacts; not a bad idea but it's quite more paperwork and key
> management. Such paperwork is handled for us by the JBoss Nexus team.
> We'd need to install GPG on our release servers, get an organization
> RSA key signed, and people stubbornly releasing manually will have to
> create a key each, and have it approved by Sonatype.
>

Debezium is already released to OSSRH from our CI server. May be worth
chatting to Jiri (added him to CC) about the details of setup.
Note there's no need for key approval by Sonatype (at least last time I
did it), you only need to publish them to some key server which you can do
all by yourself.

> Not against migrating if this is what you all want - just making sure
> we're taking these into account.
>
> Thanks,
> Sanne
>
>
> On 12 January 2018 at 02:47, Brett Meyer wrote:
> > Sorry for the late and probably irrelevant response...
> >
> > We're using an in-house Artifactory instance at a gig and it's been
> > trash. I can't speak to the UI or management end, nor Bintray, but
> > Artifactory's platform doesn't seem as polished (can't believe I just
> > said that) or stable (can't believe I said that either) as Nexus (what
> > is happening).
> >
> > I use OSSRH for some minor projects and have generally had decent luck
> > -- including a few interactions with the support team that went well.
> > OSSRH != JBoss Nexus, although I definitely understand the wounds...
> >
> >
> > On 12/19/17 8:34 AM, Steve Ebersole wrote:
> >> HHH-12172 is about moving away from the JBoss Nexus repo for publishing our
> >> artifacts. There is an open question about which service to use instead -
> >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory).
> >>
> >> Personally I think Artifactory is far superior of a UI/platform. We all
> >> know Nexus from the JBoss deployment of it, and we have all generally had
> >> nothing good to say about it.
> >>
> >> But I am wondering if anyone has practical experience with either, or knows
> >> persons/projects that do and could share their experiences. E.g., even
> >> though I prefer Bintray in almost every regard, I am very nervous that it
> >> seems next to impossible to get help/support with it. The same may be true
> >> with OSSRH - I don't know, hence why I am asking ;)
> >> _______________________________________________
> >> hibernate-dev mailing list
> >> hibernate-dev at lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> >
> >
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From brett at hibernate.org  Fri Jan 12 08:56:46 2018
From: brett at hibernate.org (Brett Meyer)
Date: Fri, 12 Jan 2018 08:56:46 -0500
Subject: [hibernate-dev] replace Pax Exam with Docker
In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
Message-ID:

Sorry Gunnar/Sanne, should have clarified this first:

We actually used Arquillian before Pax Exam, and the experience was
far worse (somewhat of a long story)...

> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't
imagine using Karaf without the helpers being a walk in the park

That's not actually the case. The way Pax Exam currently runs our
tests is fundamentally part of the problem. The test code is
dynamically wrapped in an actual bundle, using something like
tiny-bundles, and executed *within* the container itself. Pax
overrides runs with additional probes, overrides logging
infrastructure, etc. Those nuances can often be the source of many of
the bugs (there are a ton of classloader implications, etc. -- IIRC,
this was one area where Arquillian was much, much worse). There are
some benefits to that setup, but for Hibernate it mainly gets in the way.
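As a concrete sketch of the Docker idea proposed earlier in the thread
(the endpoint is hypothetical; 8181 is Karaf's default HTTP port): Karaf
runs in a container with our features.xml installed, a test bundle exposes
a smoke check over HTTP, and a plain JUnit test asserts from the outside:

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class DockerKarafSmokeIT {
        @Test
        public void hibernateOsgiFeatureBoots() throws Exception {
            // Hypothetical endpoint published by the test bundle running
            // inside the Dockerized Karaf instance.
            URL url = new URL("http://localhost:8181/hibernate-osgi-itest/smoke");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            assertEquals(200, connection.getResponseCode());
        }
    }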
It *does* have a "server mode" where tests run outside of the
container, but I vaguely remember going down that path early on and
hitting a roadblock. For the life of me, I can't remember the
specifics. But my pushback here is that ultimately Docker might be
preferable, giving us more of a real world scenario to do true
e2e tests without something else in the middle.

> so I can't imagine using Karaf without the helpers being a walk in
the park; e.g. having to deal with HTTP operations comes with its own
baggage {dependencies, complexity, speed, .. } and generally more
stuff to maintain.

I guess I respectfully disagree with that, but purely due to Karaf
features. Our features.xml does most of the heavy lifting for us
w/r/t getting Hibernate provisioned. The same would be true with the
test harness bundle/feature. REST is simple and out-of-the-box thanks
to Karaf + CXF or Camel. For other possible routes (Karaf commands),
we already have code available in our demo/quickstart projects.

> Also: considered contributing to Pax?

Yes, of course. But the fact that numerous Karaf *committers*
themselves have a long history of built-up frustration on it doesn't
leave me optimistic. A couple of them had tried to pitch in at one
point and weren't able to get anywhere.

> but it seems their developers really expect their users to be deeply
familiar with it all

Absolutely! But again, our struggles also come down to the
fundamental way Pax Exam works...

On 1/12/18 6:27 AM, Sanne Grinovero wrote:
> +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining
> our own test infrastructure.
>
> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't
> imagine using Karaf without the helpers being a walk in the park; e.g.
> having to deal with HTTP operations comes with its own baggage
> {dependencies, complexity, speed, .. } and generally more stuff to
> maintain.
>
> So.. +1 to try out Arquillian or anything else. Or maybe you could
> start your own tool, but I'd prefer to see it in a separate repository
> :) e.g. a nice Gradle plugin so maybe you get more people helping?
>
> Also: considered contributing to Pax? My personal experience with it
> has always been a pain but if I had to try to identify the reason, it was
> mostly caused by me being unfamiliar with Karaf and not having good
> clues to track down the real failure; maybe some minor error reporting
> improvements could make a big difference to its usability? Just
> saying, I don't feel like Pax is bad, but it seems their developers
> really expect their users to be deeply familiar with it all - feels
> like the typical case in which they could use some feedback and a
> hand.
>
> Thanks,
> Sanne
>
> On 12 January 2018 at 08:22, Gunnar Morling wrote:
>> Hi Brett,
>>
>> We also had our fair share of frustration with Pax Exam in HV, and I was
>> (more than once) at the point of dropping it.
>>
>> Docker could work, but as you say it's a bit of a heavy dependency, if not
>> required anyways. Not sure whether I'd like to add this as a prerequisite
>> for the HV build to be executed. And tests in separate profiles tend to be
>> "forgotten" in my experience.
>>
>> One other approach could be to use Arquillian's OSGi support (see
>> https://github.com/arquillian/arquillian-container-osgi), did you consider
>> using that one as an alternative?
>> >> Cheers, >> >> --Gunnar >> >> >> 2018-01-12 3:34 GMT+01:00 Brett Meyer : >> >>> >>> >>> I'm fed up with Pax Exam and would love to replace it as the >>> hibernate-osgi integration test harness. Most of the Karaf committers >>> I've been working with hate it more than I do. Every single time we >>> upgrade the Karaf version, something less-than-minor in hibernate-osgi, >>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, >>> there's some new obfuscated failure. And no matter how much I pray, it >>> refuses to let us get to the container logs to figure out what >>> happened. Tis a house of cards. >>> >>> >>> >>> One alternative that recently came up elsewhere: use Docker to bootstrap >>> the container, hit it with our features.xml, install a test bundle that >>> exposes functionality externally (over HTTP, Karaf commands, etc), then >>> hit the endpoints and run assertions. >>> >>> Pros: true "integration test", plain vanilla Karaf, direct access to all >>> logs, easier to eventually support and test other containers. >>> >>> Cons: Need Docker installed for local test runs, probably safer to >>> isolate the integration test behind a disabled-by-default Maven profile. >>> >>> Any gut reactions? >>> >>> OSGi is fun and I'm not at all bitter, >>> >>> -Brett- >>> >>> ;) >>> >>> >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From brett at hibernate.org Fri Jan 12 08:59:43 2018 From: brett at hibernate.org (Brett Meyer) Date: Fri, 12 Jan 2018 08:59:43 -0500 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> Message-ID: Plus, for me, it's more a question of time.? I only have a bit available for open source work these days, and I'd rather spend that knocking out some of the hibernate-osgi tasks we've had on our plate for a while.? I unfortunately don't have anything left to contribute to Pax Exam itself, assuming that would even fix the problem. Even worse, we're barely using the integration tests for anything more than a simple smoke test at this point, since it seems like every time we touch it something new goes wrong.? Looking for a more *consistent* solution -- need more confidence in the backbone. On 1/12/18 8:56 AM, Brett Meyer wrote: > > Sorry Gunnar/Sanne, should have clarified this first: > > We actually used Arquillian before Pax Exam, and the experience was > far worse (somewhat of a long story)... > > > Pax Exam was just "helping" to deploy/run things in Karaf, so I > can't imagine using Karaf without the helpers being a walk in the park > > That's not actually the case.? The way Pax Exam currently runs our > tests is fundamentally part of the problem.? The test code is > dynamically wrapped in an actual bundle, using something like > tiny-bundles, and executed *within* the container itself. Pax > overrides runs with additional probes, overrides logging > infrastructure, etc.? Those nuances can often be the source of many of > the bugs (there are a ton of classloader implications, etc. 
-- IIRC, > this was one area where Arquillian was much, much worse).? There are > some benefits to that setup, but for Hibernate it mainly gets in the way. > > It *does* have a "server mode" where tests run outside of the > container, but I vaguely remember going down that path early on and > hitting a roadblock.? For the life of me, I can't remember the > specifics.? But my pushback here is that ultimately Docker might be > more preferable, giving us more of a real world scenario to do true > e2e tests without something else in the middle. > > > so I can't imagine using Karaf without the helpers being a walk in > the park; e.g. having to deal with HTTP operations comes with its own > baggage {dependencies, complexity, speed, .. } and generally more > stuff to maintain. > > I guess I respectfully disagree with that, but purely due to Karaf > features.? Our features.xml does most of the heavy lifting for us > w/r/t getting Hibernate provisioned.? The same would be true with the > test harness bundle/feature.? REST is simple and out-of-the-box thanks > to Karaf + CXF or Camel.? For other possible routes (Karaf commands), > we already have code available in our demo/quickstart projects. > > > Also: considered contributing to Pax? > > Yes, of course.? But the fact that numerous Karaf *committers* > themselves have a long history of built-up frustration on it doesn't > leave me optimistic.? A couple of them had tried to pitch in at one > point and weren't able to get anywhere. > > > but it seems their developers really expect their users to be deeply > familiar with it all > > Absolutely!? But again, our struggles also come down to the > fundamental way Pax Exam works... > > > On 1/12/18 6:27 AM, Sanne Grinovero wrote: >> +1 to explore alternatives to Pax Exam, but I'd be wary of maintining >> our own test infrastructure. >> >> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't >> imagine using Karaf without the helpers being a walk in the park; e.g. >> having to deal with HTTP operations comes with its own baggage >> {dependencies, complexity, speed, .. } and generally more stuff to >> maintain. >> >> So.. +1 to try out Arquillian or anything else. Or maybe you could >> start your own tool, but I'd prefer to see it in a separate repository >> :) e.g. a nice Gradle plugin so maybe you get more people helping? >> >> Also: considered contributing to Pax? My personal experience with it >> has always been a pain but if I had to try identify the reason, it was >> mostly caused by me being unfamiliar with Karaf and not having good >> clues to track down the real failure; maybe some minor error reporting >> improvements could make a big difference to its usability? Just >> saying, I don't feel like Pax is bad, but it seems their developers >> really expect their users to be deeply familiar with it all - feels >> like the typical case in which they could use some feedback and a >> hand. >> >> Thanks, >> Sanne >> >> On 12 January 2018 at 08:22, Gunnar Morling wrote: >>> Hi Brett, >>> >>> We also had our fair share of frustration with Pax Exam in HV, and I was >>> (more than once) at the point of dropping it. >>> >>> Docker could work, but as you say it's a bit of a heavy dependency, if not >>> required anyways. Not sure whether I'd like to add this as a prerequisite >>> for the HV build to be executed. And tests in separate profiles tend to be >>> "forgotten" in my experience. 
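(For illustration, the "disabled-by-default Maven profile" from the pros/cons above would be something like this in a pom.xml -- a sketch with invented ids, not taken from the HV or ORM builds:

    <profile>
      <id>docker-it</id>
      <!-- off unless explicitly requested, e.g. mvn verify -Pdocker-it -->
      <activation>
        <activeByDefault>false</activeByDefault>
      </activation>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <executions>
              <execution>
                <goals>
                  <goal>integration-test</goal>
                  <goal>verify</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>

Exactly the kind of switch that, as noted, tends to be forgotten unless CI runs it.)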
>>> One other approach could be to use Arquillian's OSGi support (see >>> https://github.com/arquillian/arquillian-container-osgi), did you consider >>> to use that one as an alternative? >>> >>> Cheers, >>> >>> --Gunnar >>> >>> >>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: >>> >>>> >>>> >>>> I'm fed up with Pax Exam and would love to replace it as the >>>> hibernate-osgi integration test harness. Most of the Karaf committers >>>> I've been working with hate it more than I do. Every single time we >>>> upgrade the Karaf version, something less-than-minor in hibernate-osgi, >>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, >>>> there's some new obfuscated failure. And no matter how much I pray, it >>>> refuses to let us get to the container logs to figure out what >>>> happened. Tis a house of cards. >>>> >>>> >>>> >>>> One alternative that recently came up elsewhere: use Docker to bootstrap >>>> the container, hit it with our features.xml, install a test bundle that >>>> exposes functionality externally (over HTTP, Karaf commands, etc), then >>>> hit the endpoints and run assertions. >>>> >>>> Pros: true "integration test", plain vanilla Karaf, direct access to all >>>> logs, easier to eventually support and test other containers. >>>> >>>> Cons: Need Docker installed for local test runs, probably safer to >>>> isolate the integration test behind a disabled-by-default Maven profile. >>>> >>>> Any gut reactions? >>>> >>>> OSGi is fun and I'm not at all bitter, >>>> >>>> -Brett- >>>> >>>> ;)
From steve at hibernate.org Fri Jan 12 09:24:21 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 12 Jan 2018 14:24:21 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> Message-ID: Personally I have had problems with Arquillian in the past. It was either setting up CDI tests or setting up OSGi tests (or maybe both). I am open to any suggestions that make this less brittle or easier to work with. On Fri, Jan 12, 2018 at 5:34 AM Sanne Grinovero wrote: > +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining > our own test infrastructure. > > Pax Exam was just "helping" to deploy/run things in Karaf, so I can't > imagine using Karaf without the helpers being a walk in the park; e.g. > having to deal with HTTP operations comes with its own baggage > {dependencies, complexity, speed, .. } and generally more stuff to > maintain. > > So.. +1 to try out Arquillian or anything else. Or maybe you could > start your own tool, but I'd prefer to see it in a separate repository > :) e.g. a nice Gradle plugin so maybe you get more people helping? > > Also: considered contributing to Pax? My personal experience with it > has always been a pain but if I had to try to identify the reason, it was > mostly caused by me being unfamiliar with Karaf and not having good > clues to track down the real failure; maybe some minor error reporting > improvements could make a big difference to its usability? Just > saying, I don't feel like Pax is bad, but it seems their developers > really expect their users to be deeply familiar with it all - feels > like the typical case in which they could use some feedback and a > hand. > > Thanks, > Sanne > > On 12 January 2018 at 08:22, Gunnar Morling wrote: > > Hi Brett, > > > > We also had our fair share of frustration with Pax Exam in HV, and I was > > (more than once) at the point of dropping it. > > > > Docker could work, but as you say it's a bit of a heavy dependency, if > not > > required anyways. Not sure whether I'd like to add this as a prerequisite > > for the HV build to be executed. And tests in separate profiles tend to > be > > "forgotten" in my experience. > > > > One other approach could be to use Arquillian's OSGi support (see > > https://github.com/arquillian/arquillian-container-osgi), did you > consider > > to use that one as an alternative? > > > > Cheers, > > > > --Gunnar > > > > > > 2018-01-12 3:34 GMT+01:00 Brett Meyer : > > > >> > >> > >> I'm fed up with Pax Exam and would love to replace it as the > >> hibernate-osgi integration test harness. Most of the Karaf committers > >> I've been working with hate it more than I do. Every single time we > >> upgrade the Karaf version, something less-than-minor in hibernate-osgi, > >> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, > >> there's some new obfuscated failure. And no matter how much I pray, it > >> refuses to let us get to the container logs to figure out what > >> happened. Tis a house of cards. > >> > >> > >> > >> One alternative that recently came up elsewhere: use Docker to bootstrap > >> the container, hit it with our features.xml, install a test bundle that > >> exposes functionality externally (over HTTP, Karaf commands, etc), then > >> hit the endpoints and run assertions. > >> > >> Pros: true "integration test", plain vanilla Karaf, direct access to all > >> logs, easier to eventually support and test other containers. > >> > >> Cons: Need Docker installed for local test runs, probably safer to > >> isolate the integration test behind a disabled-by-default Maven profile. > >> > >> Any gut reactions? > >> > >> OSGi is fun and I'm not at all bitter, > >> > >> -Brett- > >> > >> ;)
From steve at hibernate.org Fri Jan 12 10:05:46 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 12 Jan 2018 15:05:46 +0000 Subject: [hibernate-dev] Should HHH-12150 be fixed in 5.3.0.Beta? In-Reply-To: References: Message-ID: Can you get those done by Wednesday? If so, I think that's a good plan. I can't think of anything else for you to work on atm for 5.3.
Maybe once we start to hear back about the remaining outstanding challenges... On Thu, Jan 11, 2018 at 5:44 PM Gail Badner wrote: > HHH-12150 is currently set to be fixed in 5.3.0. I have some time I can > spend on this. There's another issue involving @MapKeyColumn, HHH-10575. > > Should I work on these, or something else for 5.3.0.Beta?
From mihalcea.vlad at gmail.com Fri Jan 12 11:58:37 2018 From: mihalcea.vlad at gmail.com (Vlad Mihalcea) Date: Fri, 12 Jan 2018 18:58:37 +0200 Subject: [hibernate-dev] Serializable SessionFactory In-Reply-To: References: Message-ID: Sure, we need to profile it first. From what our users have told us, getting the metadata from the database takes some time and my goal was to identify whether we can do something about that. I'll come back once I have more info. Vlad On Thu, Jan 11, 2018 at 3:05 PM, Sanne Grinovero wrote: > On 11 January 2018 at 12:39, Steve Ebersole wrote: > > I just don't see how serializing a full SessionFactory to disk is a good > > idea. > > > > What do you mean by "avoiding (caching) the DB metadata retrieval > > part"? > > I'm wondering too. I would be very cautious with that: if the > datasource connection is (temporarily) broken, because for example > Hibernate was restarted, we don't really know which assumptions will > still be true. The metadata is possibly no longer valid. > > You can't know for sure if the "development cycle" of the users isn't > including some step which makes changes to the database, or maybe even > updates it. I actually expect this to be common and this would cause a > lot of trouble. > > If we're willing to invest to make the ORM bootstrap faster, that's > great but we should work on identifying what is being slow and what > can be done without making it dangerous. > > > > > On Thu, Jan 11, 2018 at 2:08 AM Vlad Mihalcea > > wrote: > > > >> Yes, out of the JVM. This PR allows the SF to be serialized to a file, > so > >> the next time we bootstrap, we reload the whole SF from the file > instead. > >> > >> There are many unforeseen issues probably related to this PR and it > might > >> hurt maintenance in the long-run. > >> > >> For this reason, I'm going to leave the PR open as-is, and investigate > >> whether we can bootstrap faster by avoiding (caching) the DB metadata > >> retrieval part. > >> > >> Vlad > >> > >> On Wed, Jan 10, 2018 at 7:45 PM, Steve Ebersole > >> wrote: > >> > >>> The SessionFactory being Serialized outside the VM? Because otherwise > it > >>> is already "serializable" via VM serialization hooks > >>> and org.hibernate.internal.SessionFactoryRegistry. And I'm not so > >>> convinced we should support serializing it for "out of" VM use aside > from > >>> what we already do which assumes the new target VM has a similarly > named > >>> SessionFactory in its org.hibernate.internal.SessionFactoryRegistry. > >>> > >>> On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea < mihalcea.vlad at gmail.com> > >>> wrote: > >>> > >>>> Hi, > >>>> > >>>> While reviewing old PRs we have in the ORM project, I stumbled on this > >>>> one > >>>> about serializing the SessionFactory. > >>>> > >>>> I created a new PR, rebased on top of the current master branch and > all > >>>> tests are passing fine. > >>>> > >>>> If anyone wants to take a look, this is the PR: > >>>> > >>>> https://github.com/hibernate/hibernate-orm/pull/2107 > >>>> > >>>> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if > >>>> there > >>>> are some unforeseen changes.
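(On the "profile it first" point: the shape such a bootstrap micro-benchmark could take is roughly the following JMH sketch -- it assumes an in-memory H2 database and a made-up entity, and is illustrative rather than actual project code:

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.hibernate.SessionFactory;
    import org.hibernate.boot.MetadataSources;
    import org.hibernate.boot.registry.StandardServiceRegistry;
    import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
    import org.openjdk.jmh.annotations.Benchmark;

    public class BootstrapBenchmark {

        @Entity
        public static class SomeEntity {
            @Id
            Long id;
        }

        // measures the full ServiceRegistry + Metadata + SessionFactory build
        @Benchmark
        public SessionFactory buildSessionFactory() {
            StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                    .applySetting("hibernate.connection.driver_class", "org.h2.Driver")
                    .applySetting("hibernate.connection.url", "jdbc:h2:mem:bootstrap-bench")
                    .applySetting("hibernate.dialect", "org.hibernate.dialect.H2Dialect")
                    .build();
            SessionFactory sessionFactory = new MetadataSources(registry)
                    .addAnnotatedClass(SomeEntity.class)
                    .buildMetadata()
                    .buildSessionFactory();
            sessionFactory.close();
            StandardServiceRegistryBuilder.destroy(registry);
            return sessionFactory;
        }
    }

Timing this against a real database instead of H2 would show how much of the cost is the JDBC metadata retrieval discussed above.)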
> >>>> > >>>> The only drawback is that, if we allow the SF to be Serializable, > >>>> upgrading > >>>> will be much more difficult in case we change object structure. > >>>> We could make it clear that this might not be supported or use the > >>>> serialVersionUID to point to Hibernate version: major.minor.patch. > >>>> > >>>> The main benefit is that, for a microservices architecture, Hibernate > >>>> could > >>>> start much faster this way. > >>>> > >>>> Vlad > >>>> _______________________________________________ > >>>> hibernate-dev mailing list > >>>> hibernate-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>>> > >>> > >> > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jan 12 12:20:56 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jan 2018 17:20:56 +0000 Subject: [hibernate-dev] Serializable SessionFactory In-Reply-To: References: Message-ID: On 12 January 2018 at 16:58, Vlad Mihalcea wrote: > Sure, we need to profile it first. I had started to create a micro-benchmark for it; unfortunately I had to get back on more urgent things, but if someone is willing to let it grow further, it's on github: - https://github.com/Sanne/orm-boostrap-benchmarks > From what our users have told us, getting the metadata from the database > takes some time and my > goal was to identify whether we can do something about that. > > I'll come back once I have more info. +1 thanks! > > Vlad > > On Thu, Jan 11, 2018 at 3:05 PM, Sanne Grinovero > wrote: >> >> On 11 January 2018 at 12:39, Steve Ebersole wrote: >> > I just don't see how serializing a full SessionFactory to disk is a good >> > idea. >> > >> > What do you mean by "avoiding (caching) the DB metadata retrieving the >> > part"? >> >> I'm wondering too. I would be very cautious with that: if the >> datasource connection is (temporarily) broken, because for example >> Hibernate was restarted, we don't really know which assumptions will >> still be true. The metadata is possibly no longer valid. >> >> You can't know for sure if the "development cycle" of the users isn't >> including some step which makes changes to the database, or maybe even >> updates it. I actually expect this to be common and this would cause a >> lot of trouble. >> >> If we're willing to invest to make the ORM bootstrap faster, that's >> great but we should work on identifying what is being slow and what >> can be done without making it dangerous. >> >> > >> > On Thu, Jan 11, 2018 at 2:08 AM Vlad Mihalcea >> > wrote: >> > >> >> Yes, out of the JVM. This PR allows the SF to be serialized to a file, >> >> so >> >> the next time we bootstrap, we reload the whole SF from the file >> >> instead. >> >> >> >> There are many unforeseen issues probably related to this PR and it >> >> might >> >> hurt maintenance in the long-run. >> >> >> >> For this reason, I'm going to leave the PR open as-is, and investigate >> >> whether we can bootstrap faster by avoiding (cacahing) the DB metadata >> >> retrieving the part. >> >> >> >> Vlad >> >> >> >> On Wed, Jan 10, 2018 at 7:45 PM, Steve Ebersole >> >> wrote: >> >> >> >>> The SessionFactory being Serialized outside the VM? Because otherwise >> >>> it >> >>> is already "serializable" via VM serialization hooks >> >>> and org.hibernate.internal.SessionFactoryRegistry. 
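(A minimal sketch of the version-keyed serialVersionUID idea quoted above -- illustrative only, not ORM's actual SessionFactory implementation:

    import java.io.Serializable;

    // Encode major.minor.patch into serialVersionUID, so that a factory
    // serialized by 5.3.0 fails fast with java.io.InvalidClassException
    // when any other Hibernate version attempts to deserialize it.
    public class VersionedSessionFactoryStub implements Serializable {
        // 5.3.0  ->  5 * 1_000_000 + 3 * 1_000 + 0
        private static final long serialVersionUID = 5_003_000L;
    }

)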
And I'm not so >> >>> convinced we should support serializing it for "out of" VM use aside >> >>> from >> >>> what we already do which assumes the new target VM has a similarly >> >>> named >> >>> SessionFactory in its org.hibernate.internal.SessionFactoryRegistry. >> >>> >> >>> On Wed, Jan 10, 2018 at 11:20 AM Vlad Mihalcea >> >>> >> >>> wrote: >> >>> >> >>>> Hi, >> >>>> >> >>>> While reviewing old PRs we have in the ORM project, I stumbled on >> >>>> this >> >>>> one >> >>>> about serializing the SessionFactory. >> >>>> >> >>>> I created a new PR, rebased on top of the current master branch and >> >>>> all >> >>>> tests are passing fine. >> >>>> >> >>>> If anyone wants to take a look, this is the PR: >> >>>> >> >>>> https://github.com/hibernate/hibernate-orm/pull/2107 >> >>>> >> >>>> I'm thinking we should integrate it in 5.3.Alpha and stabilize it if >> >>>> there >> >>>> are some unforeseen changes. >> >>>> >> >>>> The only drawback is that, if we allow the SF to be Serializable, >> >>>> upgrading >> >>>> will be much more difficult in case we change object structure. >> >>>> We could make it clear that this might not be supported or use the >> >>>> serialVersionUID to point to Hibernate version: major.minor.patch. >> >>>> >> >>>> The main benefit is that, for a microservices architecture, Hibernate >> >>>> could >> >>>> start much faster this way. >> >>>> >> >>>> Vlad >> >>>> _______________________________________________ >> >>>> hibernate-dev mailing list >> >>>> hibernate-dev at lists.jboss.org >> >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>>> >> >>> >> >> >> > _______________________________________________ >> > hibernate-dev mailing list >> > hibernate-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > From sanne at hibernate.org Fri Jan 12 12:27:05 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jan 2018 17:27:05 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> Message-ID: Ok, looks like you really should start something new :) Hopefully many of those other annoyed Karaf developers will follow. On 12 January 2018 at 13:59, Brett Meyer wrote: > Plus, for me, it's more a question of time. I only have a bit available > for open source work these days, and I'd rather spend that knocking out > some of the hibernate-osgi tasks we've had on our plate for a while. I > unfortunately don't have anything left to contribute to Pax Exam itself, > assuming that would even fix the problem. > > Even worse, we're barely using the integration tests for anything more > than a simple smoke test at this point, since it seems like every time > we touch it something new goes wrong. Looking for a more *consistent* > solution -- need more confidence in the backbone. > > > On 1/12/18 8:56 AM, Brett Meyer wrote: >> >> Sorry Gunnar/Sanne, should have clarified this first: >> >> We actually used Arquillian before Pax Exam, and the experience was >> far worse (somewhat of a long story)... >> >> > Pax Exam was just "helping" to deploy/run things in Karaf, so I >> can't imagine using Karaf without the helpers being a walk in the park >> >> That's not actually the case. The way Pax Exam currently runs our >> tests is fundamentally part of the problem. The test code is >> dynamically wrapped in an actual bundle, using something like >> tiny-bundles, and executed *within* the container itself. 
Pax Exam >> runs with additional probes, overrides the logging >> infrastructure, etc. Those nuances can often be the source of many of >> the bugs (there are a ton of classloader implications, etc. -- IIRC, >> this was one area where Arquillian was much, much worse). There are >> some benefits to that setup, but for Hibernate it mainly gets in the way. >> >> It *does* have a "server mode" where tests run outside of the >> container, but I vaguely remember going down that path early on and >> hitting a roadblock. For the life of me, I can't remember the >> specifics. But my pushback here is that ultimately Docker might be >> preferable, giving us more of a real world scenario to do true >> e2e tests without something else in the middle. >> >> > so I can't imagine using Karaf without the helpers being a walk in >> the park; e.g. having to deal with HTTP operations comes with its own >> baggage {dependencies, complexity, speed, .. } and generally more >> stuff to maintain. >> >> I guess I respectfully disagree with that, but purely due to Karaf >> features. Our features.xml does most of the heavy lifting for us >> w/r/t getting Hibernate provisioned. The same would be true with the >> test harness bundle/feature. REST is simple and out-of-the-box thanks >> to Karaf + CXF or Camel. For other possible routes (Karaf commands), >> we already have code available in our demo/quickstart projects. >> >> > Also: considered contributing to Pax? >> Yes, of course. But the fact that numerous Karaf *committers* >> themselves have a long history of built-up frustration on it doesn't >> leave me optimistic. A couple of them had tried to pitch in at one >> point and weren't able to get anywhere. >> >> > but it seems their developers really expect their users to be deeply >> familiar with it all >> >> Absolutely! But again, our struggles also come down to the >> fundamental way Pax Exam works... >> >> >> On 1/12/18 6:27 AM, Sanne Grinovero wrote: >>> +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining >>> our own test infrastructure. >>> >>> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't >>> imagine using Karaf without the helpers being a walk in the park; e.g. >>> having to deal with HTTP operations comes with its own baggage >>> {dependencies, complexity, speed, .. } and generally more stuff to >>> maintain. >>> >>> So.. +1 to try out Arquillian or anything else. Or maybe you could >>> start your own tool, but I'd prefer to see it in a separate repository >>> :) e.g. a nice Gradle plugin so maybe you get more people helping? >>> >>> Also: considered contributing to Pax? My personal experience with it >>> has always been a pain but if I had to try to identify the reason, it was >>> mostly caused by me being unfamiliar with Karaf and not having good >>> clues to track down the real failure; maybe some minor error reporting >>> improvements could make a big difference to its usability? Just >>> saying, I don't feel like Pax is bad, but it seems their developers >>> really expect their users to be deeply familiar with it all - feels >>> like the typical case in which they could use some feedback and a >>> hand. >>> >>> Thanks, >>> Sanne >>> >>> On 12 January 2018 at 08:22, Gunnar Morling wrote: >>>> Hi Brett, >>>> >>>> We also had our fair share of frustration with Pax Exam in HV, and I was >>>> (more than once) at the point of dropping it. >>>> >>>> Docker could work, but as you say it's a bit of a heavy dependency, if not >>>> required anyways.
Not sure whether I'd like to add this as a prerequisite >>>> for the HV build to be executed. And tests in separate profiles tend to be >>>> "forgotten" in my experience. >>>> >>>> One other approach could be to use Arquillian's OSGi support (see >>>> https://github.com/arquillian/arquillian-container-osgi), did you consider >>>> to use that one as an alternative? >>>> >>>> Cheers, >>>> >>>> --Gunnar >>>> >>>> >>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: >>>> >>>>> >>>>> >>>>> I'm fed up with Pax Exam and would love to replace it as the >>>>> hibernate-osgi integration test harness. Most of the Karaf committers >>>>> I've been working with hate it more than I do. Every single time we >>>>> upgrade the Karaf version, something less-than-minor in hibernate-osgi, >>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, >>>>> there's some new obfuscated failure. And no matter how much I pray, it >>>>> refuses to let us get to the container logs to figure out what >>>>> happened. Tis a house of cards. >>>>> >>>>> >>>>> >>>>> One alternative that recently came up elsewhere: use Docker to bootstrap >>>>> the container, hit it with our features.xml, install a test bundle that >>>>> exposes functionality externally (over HTTP, Karaf commands, etc), then >>>>> hit the endpoints and run assertions. >>>>> >>>>> Pros: true "integration test", plain vanilla Karaf, direct access to all >>>>> logs, easier to eventually support and test other containers. >>>>> >>>>> Cons: Need Docker installed for local test runs, probably safer to >>>>> isolate the integration test behind a disabled-by-default Maven profile. >>>>> >>>>> Any gut reactions? >>>>> >>>>> OSGi is fun and I'm not at all bitter, >>>>> >>>>> -Brett- >>>>> >>>>> ;) >>>>> >>>>> >>>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>> _______________________________________________ >>>> hibernate-dev mailing list >>>> hibernate-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From brett at hibernate.org Fri Jan 12 12:32:39 2018 From: brett at hibernate.org (Brett Meyer) Date: Fri, 12 Jan 2018 12:32:39 -0500 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> Message-ID: <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> If I don't have time to contribute to Pax Exam, I certainly don't have time to start a new project haha... And realistically, that "something new" would likely involve containers anyway. At this point, mostly a question of 1) status quo, 2) Docker (or any other container-based solution), or 3) try screwing around with Pax Exam in "server-only" mode (but I don't have high hopes there). On 1/12/18 12:27 PM, Sanne Grinovero wrote: > Ok, looks like you really should start something new :) > > Hopefully many of those other annoyed Karaf developers will follow. > > On 12 January 2018 at 13:59, Brett Meyer wrote: >> Plus, for me, it's more a question of time. 
I only have a bit available >> for open source work these days, and I'd rather spend that knocking out >> some of the hibernate-osgi tasks we've had on our plate for a while. I >> unfortunately don't have anything left to contribute to Pax Exam itself, >> assuming that would even fix the problem. >> >> Even worse, we're barely using the integration tests for anything more >> than a simple smoke test at this point, since it seems like every time >> we touch it something new goes wrong. Looking for a more *consistent* >> solution -- need more confidence in the backbone. >> >> >> On 1/12/18 8:56 AM, Brett Meyer wrote: >>> Sorry Gunnar/Sanne, should have clarified this first: >>> >>> We actually used Arquillian before Pax Exam, and the experience was >>> far worse (somewhat of a long story)... >>> >>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I >>> can't imagine using Karaf without the helpers being a walk in the park >>> >>> That's not actually the case. The way Pax Exam currently runs our >>> tests is fundamentally part of the problem. The test code is >>> dynamically wrapped in an actual bundle, using something like >>> tiny-bundles, and executed *within* the container itself. Pax Exam >>> runs with additional probes, overrides the logging >>> infrastructure, etc. Those nuances can often be the source of many of >>> the bugs (there are a ton of classloader implications, etc. -- IIRC, >>> this was one area where Arquillian was much, much worse). There are >>> some benefits to that setup, but for Hibernate it mainly gets in the way. >>> >>> It *does* have a "server mode" where tests run outside of the >>> container, but I vaguely remember going down that path early on and >>> hitting a roadblock. For the life of me, I can't remember the >>> specifics. But my pushback here is that ultimately Docker might be >>> preferable, giving us more of a real world scenario to do true >>> e2e tests without something else in the middle. >>> >>>> so I can't imagine using Karaf without the helpers being a walk in >>> the park; e.g. having to deal with HTTP operations comes with its own >>> baggage {dependencies, complexity, speed, .. } and generally more >>> stuff to maintain. >>> >>> I guess I respectfully disagree with that, but purely due to Karaf >>> features. Our features.xml does most of the heavy lifting for us >>> w/r/t getting Hibernate provisioned. The same would be true with the >>> test harness bundle/feature. REST is simple and out-of-the-box thanks >>> to Karaf + CXF or Camel. For other possible routes (Karaf commands), >>> we already have code available in our demo/quickstart projects. >>> >>>> Also: considered contributing to Pax? >>> Yes, of course. But the fact that numerous Karaf *committers* >>> themselves have a long history of built-up frustration on it doesn't >>> leave me optimistic. A couple of them had tried to pitch in at one >>> point and weren't able to get anywhere. >>> >>>> but it seems their developers really expect their users to be deeply >>> familiar with it all >>> >>> Absolutely! But again, our struggles also come down to the >>> fundamental way Pax Exam works... >>> >>> >>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: >>>> +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining >>>> our own test infrastructure. >>>> >>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't >>>> imagine using Karaf without the helpers being a walk in the park; e.g. >>>> having to deal with HTTP operations comes with its own baggage >>>> {dependencies, complexity, speed, .. } and generally more stuff to >>>> maintain. >>>> >>>> So.. +1 to try out Arquillian or anything else. Or maybe you could >>>> start your own tool, but I'd prefer to see it in a separate repository >>>> :) e.g. a nice Gradle plugin so maybe you get more people helping? >>>> >>>> Also: considered contributing to Pax? My personal experience with it >>>> has always been a pain but if I had to try to identify the reason, it was >>>> mostly caused by me being unfamiliar with Karaf and not having good >>>> clues to track down the real failure; maybe some minor error reporting >>>> improvements could make a big difference to its usability? Just >>>> saying, I don't feel like Pax is bad, but it seems their developers >>>> really expect their users to be deeply familiar with it all - feels >>>> like the typical case in which they could use some feedback and a >>>> hand. >>>> >>>> Thanks, >>>> Sanne >>>> >>>> On 12 January 2018 at 08:22, Gunnar Morling wrote: >>>>> Hi Brett, >>>>> >>>>> We also had our fair share of frustration with Pax Exam in HV, and I was >>>>> (more than once) at the point of dropping it. >>>>> >>>>> Docker could work, but as you say it's a bit of a heavy dependency, if not >>>>> required anyways. Not sure whether I'd like to add this as a prerequisite >>>>> for the HV build to be executed. And tests in separate profiles tend to be >>>>> "forgotten" in my experience. >>>>> >>>>> One other approach could be to use Arquillian's OSGi support (see >>>>> https://github.com/arquillian/arquillian-container-osgi), did you consider >>>>> to use that one as an alternative? >>>>> >>>>> Cheers, >>>>> >>>>> --Gunnar >>>>> >>>>> >>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: >>>>> >>>>>> >>>>>> >>>>>> I'm fed up with Pax Exam and would love to replace it as the >>>>>> hibernate-osgi integration test harness. Most of the Karaf committers >>>>>> I've been working with hate it more than I do. Every single time we >>>>>> upgrade the Karaf version, something less-than-minor in hibernate-osgi, >>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, >>>>>> there's some new obfuscated failure. And no matter how much I pray, it >>>>>> refuses to let us get to the container logs to figure out what >>>>>> happened. Tis a house of cards. >>>>>> >>>>>> >>>>>> >>>>>> One alternative that recently came up elsewhere: use Docker to bootstrap >>>>>> the container, hit it with our features.xml, install a test bundle that >>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), then >>>>>> hit the endpoints and run assertions. >>>>>> >>>>>> Pros: true "integration test", plain vanilla Karaf, direct access to all >>>>>> logs, easier to eventually support and test other containers. >>>>>> >>>>>> Cons: Need Docker installed for local test runs, probably safer to >>>>>> isolate the integration test behind a disabled-by-default Maven profile. >>>>>> >>>>>> Any gut reactions?
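(Sketching the proposed flow in JUnit form -- the image name, feature names, client path and endpoint below are all invented for illustration; only the overall shape matters:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class KarafDockerSmokeIT {

        @BeforeClass
        public static void startKaraf() throws Exception {
            // plain vanilla Karaf, bootstrapped by Docker
            run("docker", "run", "-d", "--name", "karaf-it", "-p", "8181:8181", "apache/karaf");
            Thread.sleep(30_000); // crude; a real harness would poll the container
            // provision Hibernate plus a test-harness feature via features.xml
            run("docker", "exec", "karaf-it", "bin/client",
                    "feature:repo-add mvn:org.hibernate/hibernate-osgi/5.3.0.Final/xml/karaf; "
                            + "feature:install hibernate-orm hibernate-test-harness");
        }

        @Test
        public void smokeEndpointResponds() throws IOException {
            // the test bundle exposes functionality externally over HTTP
            HttpURLConnection connection = (HttpURLConnection) new URL(
                    "http://localhost:8181/cxf/hibernate-it/smoke").openConnection();
            assertEquals(200, connection.getResponseCode());
        }

        @AfterClass
        public static void stopKaraf() throws Exception {
            run("docker", "logs", "karaf-it"); // direct access to all logs, per the pros
            run("docker", "rm", "-f", "karaf-it");
        }

        private static void run(String... command) throws IOException, InterruptedException {
            Process process = new ProcessBuilder(command).inheritIO().start();
            if (process.waitFor() != 0) {
                throw new IllegalStateException("command failed: " + String.join(" ", command));
            }
        }
    }

)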
>>>>>> >>>>>> OSGi is fun and I'm not at all bitter, >>>>>> >>>>>> -Brett- >>>>>> >>>>>> ;) >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> hibernate-dev mailing list >>>>>> hibernate-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>> _______________________________________________ >>>> hibernate-dev mailing list >>>> hibernate-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at hibernate.org Fri Jan 12 12:54:11 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jan 2018 17:54:11 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> Message-ID: On 12 January 2018 at 17:32, Brett Meyer wrote: > If I don't have time to contribute to Pax Exam, I certainly don't have > time to start a new project haha... > > And realistically, that "something new" would likely involve containers > anyway. > > At this point, mostly a question of 1) status quo, 2) Docker (or any > other container-based solution), or 3) try screwing around with Pax Exam > in "server-only" mode (but I don't have high hopes there). Sure. Docker is now available on the CI slaves too, so that's not a problem. The only annoyance is that the whole ORM team - and anyone contributing - would need to have Docker as well, but that doesn't seem too bad to me... and was likely bound to happen for other tools :) Steve, Chris and Andrea? Ok with that? Maybe you have Docker running already? > > > On 1/12/18 12:27 PM, Sanne Grinovero wrote: >> Ok, looks like you really should start something new :) >> >> Hopefully many of those other annoyed Karaf developers will follow. >> >> On 12 January 2018 at 13:59, Brett Meyer wrote: >>> Plus, for me, it's more a question of time. I only have a bit available >>> for open source work these days, and I'd rather spend that knocking out >>> some of the hibernate-osgi tasks we've had on our plate for a while. I >>> unfortunately don't have anything left to contribute to Pax Exam itself, >>> assuming that would even fix the problem. >>> >>> Even worse, we're barely using the integration tests for anything more >>> than a simple smoke test at this point, since it seems like every time >>> we touch it something new goes wrong. Looking for a more *consistent* >>> solution -- need more confidence in the backbone. >>> >>> >>> On 1/12/18 8:56 AM, Brett Meyer wrote: >>>> Sorry Gunnar/Sanne, should have clarified this first: >>>> >>>> We actually used Arquillian before Pax Exam, and the experience was >>>> far worse (somewhat of a long story)... >>>> >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I >>>> can't imagine using Karaf without the helpers being a walk in the park >>>> >>>> That's not actually the case. The way Pax Exam currently runs our >>>> tests is fundamentally part of the problem. 
The test code is >>>> dynamically wrapped in an actual bundle, using something like >>>> tiny-bundles, and executed *within* the container itself. Pax Exam >>>> runs with additional probes, overrides the logging >>>> infrastructure, etc. Those nuances can often be the source of many >>>> of >>>> the bugs (there are a ton of classloader implications, etc. -- IIRC, >>>> this was one area where Arquillian was much, much worse). There are >>>> some benefits to that setup, but for Hibernate it mainly gets in the >>>> way. >>>> >>>> It *does* have a "server mode" where tests run outside of the >>>> container, but I vaguely remember going down that path early on and >>>> hitting a roadblock. For the life of me, I can't remember the >>>> specifics. But my pushback here is that ultimately Docker might be >>>> preferable, giving us more of a real world scenario to do true >>>> e2e tests without something else in the middle. >>>> >>>>> so I can't imagine using Karaf without the helpers being a walk in >>>> the park; e.g. having to deal with HTTP operations comes with its own >>>> baggage {dependencies, complexity, speed, .. } and generally more >>>> stuff to maintain. >>>> >>>> I guess I respectfully disagree with that, but purely due to Karaf >>>> features. Our features.xml does most of the heavy lifting for us >>>> w/r/t getting Hibernate provisioned. The same would be true with the >>>> test harness bundle/feature. REST is simple and out-of-the-box >>>> thanks >>>> to Karaf + CXF or Camel. For other possible routes (Karaf commands), >>>> we already have code available in our demo/quickstart projects. >>>> >>>>> Also: considered contributing to Pax? >>>> Yes, of course. But the fact that numerous Karaf *committers* >>>> themselves have a long history of built-up frustration on it doesn't >>>> leave me optimistic. A couple of them had tried to pitch in at one >>>> point and weren't able to get anywhere. >>>> >>>>> but it seems their developers really expect their users to be deeply >>>> familiar with it all >>>> >>>> Absolutely! But again, our struggles also come down to the >>>> fundamental way Pax Exam works... >>>> >>>> >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining >>>>> our own test infrastructure. >>>>> >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't >>>>> imagine using Karaf without the helpers being a walk in the park; e.g. >>>>> having to deal with HTTP operations comes with its own baggage >>>>> {dependencies, complexity, speed, .. } and generally more stuff to >>>>> maintain. >>>>> >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you could >>>>> start your own tool, but I'd prefer to see it in a separate repository >>>>> :) e.g. a nice Gradle plugin so maybe you get more people helping? >>>>> >>>>> Also: considered contributing to Pax? My personal experience with it >>>>> has always been a pain but if I had to try to identify the reason, it was >>>>> mostly caused by me being unfamiliar with Karaf and not having good >>>>> clues to track down the real failure; maybe some minor error reporting >>>>> improvements could make a big difference to its usability? Just >>>>> saying, I don't feel like Pax is bad, but it seems their developers >>>>> really expect their users to be deeply familiar with it all - feels >>>>> like the typical case in which they could use some feedback and a >>>>> hand.
>>>>> >>>>> Thanks, >>>>> Sanne >>>>> >>>>> On 12 January 2018 at 08:22, Gunnar Morling wrote: >>>>>> Hi Brett, >>>>>> >>>>>> We also had our fair share of frustration with Pax Exam in HV, and I was >>>>>> (more than once) at the point of dropping it. >>>>>> >>>>>> Docker could work, but as you say it's a bit of a heavy dependency, if not >>>>>> required anyways. Not sure whether I'd like to add this as a prerequisite >>>>>> for the HV build to be executed. And tests in separate profiles tend to be >>>>>> "forgotten" in my experience. >>>>>> >>>>>> One other approach could be to use Arquillian's OSGi support (see >>>>>> https://github.com/arquillian/arquillian-container-osgi), did you consider >>>>>> to use that one as an alternative? >>>>>> >>>>>> Cheers, >>>>>> >>>>>> --Gunnar >>>>>> >>>>>> >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: >>>>>> >>>>>>> >>>>>>> >>>>>>> I'm fed up with Pax Exam and would love to replace it as the >>>>>>> hibernate-osgi integration test harness. Most of the Karaf committers >>>>>>> I've been working with hate it more than I do. Every single time we >>>>>>> upgrade the Karaf version, something less-than-minor in hibernate-osgi, >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, >>>>>>> there's some new obfuscated failure. And no matter how much I pray, it >>>>>>> refuses to let us get to the container logs to figure out what >>>>>>> happened. Tis a house of cards. >>>>>>> >>>>>>> >>>>>>> >>>>>>> One alternative that recently came up elsewhere: use Docker to bootstrap >>>>>>> the container, hit it with our features.xml, install a test bundle that >>>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), then >>>>>>> hit the endpoints and run assertions. >>>>>>> >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct access to all >>>>>>> logs, easier to eventually support and test other containers. >>>>>>> >>>>>>> Cons: Need Docker installed for local test runs, probably safer to >>>>>>> isolate the integration test behind a disabled-by-default Maven profile. >>>>>>> >>>>>>> Any gut reactions? >>>>>>> >>>>>>> OSGi is fun and I'm not at all bitter, >>>>>>> >>>>>>> -Brett- >>>>>>> >>>>>>> ;) >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> hibernate-dev mailing list >>>>>>> hibernate-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>>> _______________________________________________ >>>>>> hibernate-dev mailing list >>>>>> hibernate-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Fri Jan 12 13:04:02 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 12 Jan 2018 18:04:02 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> Message-ID: I do not. 
But from what I understand it's trivial to install on Fedora, unlike some other tools y'all like to use ;) On Fri, Jan 12, 2018 at 11:55 AM Sanne Grinovero wrote: > On 12 January 2018 at 17:32, Brett Meyer wrote: > > If I don't have time to contribute to Pax Exam, I certainly don't have > > time to start a new project haha... > > > > And realistically, that "something new" would likely involve containers > > anyway. > > > > At this point, mostly a question of 1) status quo, 2) Docker (or any > > other container-based solution), or 3) try screwing around with Pax Exam > > in "server-only" mode (but I don't have high hopes there). > > Sure. Docker is now available on the CI slaves too, so that's not a > problem. > > The only annoyance is that the whole ORM team - and anyone > contributing - would need to have Docker as well, but that doesn't > seem too bad to me... and was likely bound to happen for other tools > :) > > Steve, Chris and Andrea? Ok with that? Maybe you have Docker running > already? > > > > > > > On 1/12/18 12:27 PM, Sanne Grinovero wrote: > >> Ok, looks like you really should start something new :) > >> > >> Hopefully many of those other annoyed Karaf developers will follow. > >> > >> On 12 January 2018 at 13:59, Brett Meyer wrote: > >>> Plus, for me, it's more a question of time. I only have a bit > available > >>> for open source work these days, and I'd rather spend that knocking out > >>> some of the hibernate-osgi tasks we've had on our plate for a while. I > >>> unfortunately don't have anything left to contribute to Pax Exam > itself, > >>> assuming that would even fix the problem. > >>> > >>> Even worse, we're barely using the integration tests for anything more > >>> than a simple smoke test at this point, since it seems like every time > >>> we touch it something new goes wrong. Looking for a more *consistent* > >>> solution -- need more confidence in the backbone. > >>> > >>> > >>> On 1/12/18 8:56 AM, Brett Meyer wrote: > >>>> Sorry Gunnar/Sanne, should have clarified this first: > >>>> > >>>> We actually used Arquillian before Pax Exam, and the experience was > >>>> far worse (somewhat of a long story)... > >>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I > >>>> can't imagine using Karaf without the helpers being a walk in the park > >>>> > >>>> That's not actually the case. The way Pax Exam currently runs our > >>>> tests is fundamentally part of the problem. The test code is > >>>> dynamically wrapped in an actual bundle, using something like > >>>> tiny-bundles, and executed *within* the container itself. Pax Exam > >>>> runs with additional probes, overrides the logging > >>>> infrastructure, etc. Those nuances can often be the source of many of > >>>> the bugs (there are a ton of classloader implications, etc. -- IIRC, > >>>> this was one area where Arquillian was much, much worse). There are > >>>> some benefits to that setup, but for Hibernate it mainly gets in the > way. > >>>> > >>>> It *does* have a "server mode" where tests run outside of the > >>>> container, but I vaguely remember going down that path early on and > >>>> hitting a roadblock. For the life of me, I can't remember the > >>>> specifics. But my pushback here is that ultimately Docker might be > >>>> preferable, giving us more of a real world scenario to do true > >>>> e2e tests without something else in the middle. > >>>> > >>>>> so I can't imagine using Karaf without the helpers being a walk in > >>>> the park; e.g. having to deal with HTTP operations comes with its own > >>>> baggage {dependencies, complexity, speed, .. } and generally more > >>>> stuff to maintain. > >>>> > >>>> I guess I respectfully disagree with that, but purely due to Karaf > >>>> features. Our features.xml does most of the heavy lifting for us > >>>> w/r/t getting Hibernate provisioned. The same would be true with the > >>>> test harness bundle/feature. REST is simple and out-of-the-box thanks > >>>> to Karaf + CXF or Camel. For other possible routes (Karaf commands), > >>>> we already have code available in our demo/quickstart projects. > >>>> > >>>>> Also: considered contributing to Pax? > >>>> Yes, of course. But the fact that numerous Karaf *committers* > >>>> themselves have a long history of built-up frustration on it doesn't > >>>> leave me optimistic. A couple of them had tried to pitch in at one > >>>> point and weren't able to get anywhere. > >>>> > >>>>> but it seems their developers really expect their users to be deeply > >>>> familiar with it all > >>>> > >>>> Absolutely! But again, our struggles also come down to the > >>>> fundamental way Pax Exam works... > >>>> > >>>> > >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: > >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of maintaining > >>>>> our own test infrastructure. > >>>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't > >>>>> imagine using Karaf without the helpers being a walk in the park; > e.g. > >>>>> having to deal with HTTP operations comes with its own baggage > >>>>> {dependencies, complexity, speed, .. } and generally more stuff to > >>>>> maintain. > >>>>> > >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you could > >>>>> start your own tool, but I'd prefer to see it in a separate > repository > >>>>> :) e.g. a nice Gradle plugin so maybe you get more people helping? > >>>>> > >>>>> Also: considered contributing to Pax? My personal experience with it > >>>>> has always been a pain but if I had to try to identify the reason, it > was > >>>>> mostly caused by me being unfamiliar with Karaf and not having good > >>>>> clues to track down the real failure; maybe some minor error > reporting > >>>>> improvements could make a big difference to its usability? Just > >>>>> saying, I don't feel like Pax is bad, but it seems their developers > >>>>> really expect their users to be deeply familiar with it all - feels > >>>>> like the typical case in which they could use some feedback and a > >>>>> hand. > >>>>> > >>>>> Thanks, > >>>>> Sanne > >>>>> > >>>>> On 12 January 2018 at 08:22, Gunnar Morling > wrote: > >>>>>> Hi Brett, > >>>>>> > >>>>>> We also had our fair share of frustration with Pax Exam in HV, and > I was > >>>>>> (more than once) at the point of dropping it. > >>>>>> > >>>>>> Docker could work, but as you say it's a bit of a heavy dependency, > if not > >>>>>> required anyways. Not sure whether I'd like to add this as a > prerequisite > >>>>>> for the HV build to be executed. And tests in separate profiles > tend to be > >>>>>> "forgotten" in my experience. > >>>>>> > >>>>>> One other approach could be to use Arquillian's OSGi support (see > >>>>>> https://github.com/arquillian/arquillian-container-osgi), did you > consider > >>>>>> to use that one as an alternative?
> >>>>>> > >>>>>> Cheers, > >>>>>> > >>>>>> --Gunnar > >>>>>> > >>>>>> > >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: > >>>>>> > >>>>>>> > >>>>>>> > >>>>>>> I'm fed up with Pax Exam and would love to replace it as the > >>>>>>> hibernate-osgi integration test harness. Most of the Karaf > committers > >>>>>>> I've been working with hate it more than I do. Every single time > we > >>>>>>> upgrade the Karaf version, something less-than-minor in > hibernate-osgi, > >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, > >>>>>>> there's some new obfuscated failure. And no matter how much I > pray, it > >>>>>>> refuses to let us get to the container logs to figure out what > >>>>>>> happened. Tis a house of cards. > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> One alternative that recently came up elsewhere: use Docker to > bootstrap > >>>>>>> the container, hit it with our features.xml, install a test bundle > that > >>>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), > then > >>>>>>> hit the endpoints and run assertions. > >>>>>>> > >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct access > to all > >>>>>>> logs, easier to eventually support and test other containers. > >>>>>>> > >>>>>>> Cons: Need Docker installed for local test runs, probably safer to > >>>>>>> isolate the integration test behind a disabled-by-default Maven > profile. > >>>>>>> > >>>>>>> Any gut reactions? > >>>>>>> > >>>>>>> OSGi is fun and I'm not at all bitter, > >>>>>>> > >>>>>>> -Brett- > >>>>>>> > >>>>>>> ;) > >>>>>>> > >>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> hibernate-dev mailing list > >>>>>>> hibernate-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>>>>> _______________________________________________ > >>>>>> hibernate-dev mailing list > >>>>>> hibernate-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>>>> _______________________________________________ > >>>>> hibernate-dev mailing list > >>>>> hibernate-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jan 12 13:10:10 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jan 2018 18:10:10 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> Message-ID: On 12 January 2018 at 18:04, Steve Ebersole wrote: > I do not. But from what I understand its trivial to install on Fedora, > unlike some other tools y'all like to use ;) Right, this one is easy to install :) Although my questions was more about if you're all ok with that. 
I know I've personally been skeptical: I don't like putting up barriers for contributors, nor requiring any service to be running. However, I've given up on fighting Docker; besides, it's best to require just Docker, since via Docker we can then get all the other goodies which would otherwise be hard to install. > > > On Fri, Jan 12, 2018 at 11:55 AM Sanne Grinovero > wrote: >> >> On 12 January 2018 at 17:32, Brett Meyer wrote: >> > If I don't have time to contribute to Pax Exam, I certainly don't have >> > time to start a new project haha... >> > >> > And realistically, that "something new" would likely involve containers >> > anyway. >> > >> > At this point, mostly a question of 1) status quo, 2) Docker (or any >> > other container-based solution), or 3) try screwing around with Pax Exam >> > in "server-only" mode (but I don't have high hopes there). >> >> Sure. Docker is now available on the CI slaves too, so that's not a >> problem. >> >> The only annoyance is that the whole ORM team - and anyone >> contributing - would need to have Docker as well, but that doesn't >> seem too bad to me... and was likely bound to happen for other tools >> :) >> >> Steve, Chris and Andrea? Ok with that? Maybe you have Docker running >> already? >> >> > >> > >> > On 1/12/18 12:27 PM, Sanne Grinovero wrote: >> >> Ok, looks like you really should start something new :) >> >> >> >> Hopefully many of those other annoyed Karaf developers will follow. >> >> >> >> On 12 January 2018 at 13:59, Brett Meyer wrote: >> >>> Plus, for me, it's more a question of time. I only have a bit >> >>> available >> >>> for open source work these days, and I'd rather spend that knocking >> >>> out >> >>> some of the hibernate-osgi tasks we've had on our plate for a while. >> >>> I >> >>> unfortunately don't have anything left to contribute to Pax Exam >> >>> itself, >> >>> assuming that would even fix the problem. >> >>> >> >>> Even worse, we're barely using the integration tests for anything more >> >>> than a simple smoke test at this point, since it seems like every time >> >>> we touch it something new goes wrong. Looking for a more *consistent* >> >>> solution -- need more confidence in the backbone. >> >>> >> >>> >> >>> On 1/12/18 8:56 AM, Brett Meyer wrote: >> >>>> Sorry Gunnar/Sanne, should have clarified this first: >> >>>> >> >>>> We actually used Arquillian before Pax Exam, and the experience was >> >>>> far worse (somewhat of a long story)... >> >>>> >> >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I >> >>>> can't imagine using Karaf without the helpers being a walk in the >> >>>> park >> >>>> >> >>>> That's not actually the case. The way Pax Exam currently runs our >> >>>> tests is fundamentally part of the problem. The test code is >> >>>> dynamically wrapped in an actual bundle, using something like >> >>>> tiny-bundles, and executed *within* the container itself. Pax >> >>>> overrides runs with additional probes, overrides logging >> >>>> infrastructure, etc. Those nuances can often be the source of many >> >>>> of >> >>>> the bugs (there are a ton of classloader implications, etc. -- IIRC, >> >>>> this was one area where Arquillian was much, much worse). There are >> >>>> some benefits to that setup, but for Hibernate it mainly gets in the >> >>>> way. >> >>>> >> >>>> It *does* have a "server mode" where tests run outside of the >> >>>> container, but I vaguely remember going down that path early on and >> >>>> hitting a roadblock. For the life of me, I can't remember the >> >>>> specifics. 
But my pushback here is that ultimately Docker might be >> >>>> more preferable, giving us more of a real world scenario to do true >> >>>> e2e tests without something else in the middle. >> >>>> >> >>>>> so I can't imagine using Karaf without the helpers being a walk in >> >>>> the park; e.g. having to deal with HTTP operations comes with its own >> >>>> baggage {dependencies, complexity, speed, .. } and generally more >> >>>> stuff to maintain. >> >>>> >> >>>> I guess I respectfully disagree with that, but purely due to Karaf >> >>>> features. Our features.xml does most of the heavy lifting for us >> >>>> w/r/t getting Hibernate provisioned. The same would be true with the >> >>>> test harness bundle/feature. REST is simple and out-of-the-box >> >>>> thanks >> >>>> to Karaf + CXF or Camel. For other possible routes (Karaf commands), >> >>>> we already have code available in our demo/quickstart projects. >> >>>> >> >>>>> Also: considered contributing to Pax? >> >>>> Yes, of course. But the fact that numerous Karaf *committers* >> >>>> themselves have a long history of built-up frustration on it doesn't >> >>>> leave me optimistic. A couple of them had tried to pitch in at one >> >>>> point and weren't able to get anywhere. >> >>>> >> >>>>> but it seems their developers really expect their users to be deeply >> >>>> familiar with it all >> >>>> >> >>>> Absolutely! But again, our struggles also come down to the >> >>>> fundamental way Pax Exam works... >> >>>> >> >>>> >> >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: >> >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of >> >>>>> maintining >> >>>>> our own test infrastructure. >> >>>>> >> >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I >> >>>>> can't >> >>>>> imagine using Karaf without the helpers being a walk in the park; >> >>>>> e.g. >> >>>>> having to deal with HTTP operations comes with its own baggage >> >>>>> {dependencies, complexity, speed, .. } and generally more stuff to >> >>>>> maintain. >> >>>>> >> >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you could >> >>>>> start your own tool, but I'd prefer to see it in a separate >> >>>>> repository >> >>>>> :) e.g. a nice Gradle plugin so maybe you get more people helping? >> >>>>> >> >>>>> Also: considered contributing to Pax? My personal experience with it >> >>>>> has always been a pain but if I had to try identify the reason, it >> >>>>> was >> >>>>> mostly caused by me being unfamiliar with Karaf and not having good >> >>>>> clues to track down the real failure; maybe some minor error >> >>>>> reporting >> >>>>> improvements could make a big difference to its usability? Just >> >>>>> saying, I don't feel like Pax is bad, but it seems their developers >> >>>>> really expect their users to be deeply familiar with it all - feels >> >>>>> like the typical case in which they could use some feedback and a >> >>>>> hand. >> >>>>> >> >>>>> Thanks, >> >>>>> Sanne >> >>>>> >> >>>>> On 12 January 2018 at 08:22, Gunnar Morling >> >>>>> wrote: >> >>>>>> Hi Brett, >> >>>>>> >> >>>>>> We also had our fair share of frustration with Pax Exam in HV, and >> >>>>>> I was >> >>>>>> (more than once) at the point of dropping it. >> >>>>>> >> >>>>>> Docker could work, but as you say it's a bit of a heavy dependency, >> >>>>>> if not >> >>>>>> required anyways. Not sure whether I'd like to add this as a >> >>>>>> prerequisite >> >>>>>> for the HV build to be executed. 
And tests in separate profiles >> >>>>>> tend to be >> >>>>>> "forgotten" in my experience. >> >>>>>> >> >>>>>> One other approach could be to use Arquillian's OSGi support (see >> >>>>>> https://github.com/arquillian/arquillian-container-osgi), did you >> >>>>>> consider >> >>>>>> to use that one as an alternative? >> >>>>>> >> >>>>>> Cheers, >> >>>>>> >> >>>>>> --Gunnar >> >>>>>> >> >>>>>> >> >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: >> >>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> I'm fed up with Pax Exam and would love to replace it as the >> >>>>>>> hibernate-osgi integration test harness. Most of the Karaf >> >>>>>>> committers >> >>>>>>> I've been working with hate it more than I do. Every single time >> >>>>>>> we >> >>>>>>> upgrade the Karaf version, something less-than-minor in >> >>>>>>> hibernate-osgi, >> >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam >> >>>>>>> itself, >> >>>>>>> there's some new obfuscated failure. And no matter how much I >> >>>>>>> pray, it >> >>>>>>> refuses to let us get to the container logs to figure out what >> >>>>>>> happened. Tis a house of cards. >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> One alternative that recently came up elsewhere: use Docker to >> >>>>>>> bootstrap >> >>>>>>> the container, hit it with our features.xml, install a test bundle >> >>>>>>> that >> >>>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), >> >>>>>>> then >> >>>>>>> hit the endpoints and run assertions. >> >>>>>>> >> >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct access >> >>>>>>> to all >> >>>>>>> logs, easier to eventually support and test other containers. >> >>>>>>> >> >>>>>>> Cons: Need Docker installed for local test runs, probably safer to >> >>>>>>> isolate the integration test behind a disabled-by-default Maven >> >>>>>>> profile. >> >>>>>>> >> >>>>>>> Any gut reactions? 
>> >>>>>>> >> >>>>>>> OSGi is fun and I'm not at all bitter, >> >>>>>>> >> >>>>>>> -Brett- >> >>>>>>> >> >>>>>>> ;) >> >>>>>>> >> >>>>>>> >> >>>>>>> _______________________________________________ >> >>>>>>> hibernate-dev mailing list >> >>>>>>> hibernate-dev at lists.jboss.org >> >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>>>>> _______________________________________________ >> >>>>>> hibernate-dev mailing list >> >>>>>> hibernate-dev at lists.jboss.org >> >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>>>> _______________________________________________ >> >>>>> hibernate-dev mailing list >> >>>>> hibernate-dev at lists.jboss.org >> >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>> _______________________________________________ >> >>> hibernate-dev mailing list >> >>> hibernate-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > >> > >> > _______________________________________________ >> > hibernate-dev mailing list >> > hibernate-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev From andrea at hibernate.org Fri Jan 12 13:13:45 2018 From: andrea at hibernate.org (andrea boriero) Date: Fri, 12 Jan 2018 18:13:45 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> Message-ID: I already have Docker running on my machine, so it seems not a big issue for me, but I'm not sure about the impact for others. Anyway, it's worth giving it a try. On 12 January 2018 at 17:54, Sanne Grinovero wrote: > On 12 January 2018 at 17:32, Brett Meyer wrote: > > If I don't have time to contribute to Pax Exam, I certainly don't have > > time to start a new project haha... > > > > And realistically, that "something new" would likely involve containers > > anyway. > > > > At this point, mostly a question of 1) status quo, 2) Docker (or any > > other container-based solution), or 3) try screwing around with Pax Exam > > in "server-only" mode (but I don't have high hopes there). > > Sure. Docker is now available on the CI slaves too, so that's not a > problem. > > The only annoyance is that the whole ORM team - and anyone > contributing - would need to have Docker as well, but that doesn't > seem too bad to me... and was likely bound to happen for other tools > :) > > Steve, Chris and Andrea? Ok with that? Maybe you have Docker running > already? > > > > > > > On 1/12/18 12:27 PM, Sanne Grinovero wrote: > >> Ok, looks like you really should start something new :) > >> > >> Hopefully many of those other annoyed Karaf developers will follow. > >> > >> On 12 January 2018 at 13:59, Brett Meyer wrote: > >>> Plus, for me, it's more a question of time. I only have a bit > available > >>> for open source work these days, and I'd rather spend that knocking out > >>> some of the hibernate-osgi tasks we've had on our plate for a while. I > >>> unfortunately don't have anything left to contribute to Pax Exam > itself, > >>> assuming that would even fix the problem. > >>> > >>> Even worse, we're barely using the integration tests for anything more > >>> than a simple smoke test at this point, since it seems like every time > >>> we touch it something new goes wrong. 
Looking for a more *consistent* > >>> solution -- need more confidence in the backbone. > >>> > >>> > >>> On 1/12/18 8:56 AM, Brett Meyer wrote: > >>>> Sorry Gunnar/Sanne, should have clarified this first: > >>>> > >>>> We actually used Arquillian before Pax Exam, and the experience was > >>>> far worse (somewhat of a long story)... > >>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I > >>>> can't imagine using Karaf without the helpers being a walk in the park > >>>> > >>>> That's not actually the case. The way Pax Exam currently runs our > >>>> tests is fundamentally part of the problem. The test code is > >>>> dynamically wrapped in an actual bundle, using something like > >>>> tiny-bundles, and executed *within* the container itself. Pax > >>>> overrides runs with additional probes, overrides logging > >>>> infrastructure, etc. Those nuances can often be the source of many of > >>>> the bugs (there are a ton of classloader implications, etc. -- IIRC, > >>>> this was one area where Arquillian was much, much worse). There are > >>>> some benefits to that setup, but for Hibernate it mainly gets in the > way. > >>>> > >>>> It *does* have a "server mode" where tests run outside of the > >>>> container, but I vaguely remember going down that path early on and > >>>> hitting a roadblock. For the life of me, I can't remember the > >>>> specifics. But my pushback here is that ultimately Docker might be > >>>> more preferable, giving us more of a real world scenario to do true > >>>> e2e tests without something else in the middle. > >>>> > >>>>> so I can't imagine using Karaf without the helpers being a walk in > >>>> the park; e.g. having to deal with HTTP operations comes with its own > >>>> baggage {dependencies, complexity, speed, .. } and generally more > >>>> stuff to maintain. > >>>> > >>>> I guess I respectfully disagree with that, but purely due to Karaf > >>>> features. Our features.xml does most of the heavy lifting for us > >>>> w/r/t getting Hibernate provisioned. The same would be true with the > >>>> test harness bundle/feature. REST is simple and out-of-the-box thanks > >>>> to Karaf + CXF or Camel. For other possible routes (Karaf commands), > >>>> we already have code available in our demo/quickstart projects. > >>>> > >>>>> Also: considered contributing to Pax? > >>>> Yes, of course. But the fact that numerous Karaf *committers* > >>>> themselves have a long history of built-up frustration on it doesn't > >>>> leave me optimistic. A couple of them had tried to pitch in at one > >>>> point and weren't able to get anywhere. > >>>> > >>>>> but it seems their developers really expect their users to be deeply > >>>> familiar with it all > >>>> > >>>> Absolutely! But again, our struggles also come down to the > >>>> fundamental way Pax Exam works... > >>>> > >>>> > >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: > >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of maintining > >>>>> our own test infrastructure. > >>>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I can't > >>>>> imagine using Karaf without the helpers being a walk in the park; > e.g. > >>>>> having to deal with HTTP operations comes with its own baggage > >>>>> {dependencies, complexity, speed, .. } and generally more stuff to > >>>>> maintain. > >>>>> > >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you could > >>>>> start your own tool, but I'd prefer to see it in a separate > repository > >>>>> :) e.g. 
a nice Gradle plugin so maybe you get more people helping? > >>>>> > >>>>> Also: considered contributing to Pax? My personal experience with it > >>>>> has always been a pain but if I had to try identify the reason, it > was > >>>>> mostly caused by me being unfamiliar with Karaf and not having good > >>>>> clues to track down the real failure; maybe some minor error > reporting > >>>>> improvements could make a big difference to its usability? Just > >>>>> saying, I don't feel like Pax is bad, but it seems their developers > >>>>> really expect their users to be deeply familiar with it all - feels > >>>>> like the typical case in which they could use some feedback and a > >>>>> hand. > >>>>> > >>>>> Thanks, > >>>>> Sanne > >>>>> > >>>>> On 12 January 2018 at 08:22, Gunnar Morling > wrote: > >>>>>> Hi Brett, > >>>>>> > >>>>>> We also had our fair share of frustration with Pax Exam in HV, and > I was > >>>>>> (more than once) at the point of dropping it. > >>>>>> > >>>>>> Docker could work, but as you say it's a bit of a heavy dependency, > if not > >>>>>> required anyways. Not sure whether I'd like to add this as a > prerequisite > >>>>>> for the HV build to be executed. And tests in separate profiles > tend to be > >>>>>> "forgotten" in my experience. > >>>>>> > >>>>>> One other approach could be to use Arquillian's OSGi support (see > >>>>>> https://github.com/arquillian/arquillian-container-osgi), did you > consider > >>>>>> to use that one as an alternative? > >>>>>> > >>>>>> Cheers, > >>>>>> > >>>>>> --Gunnar > >>>>>> > >>>>>> > >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: > >>>>>> > >>>>>>> > >>>>>>> > >>>>>>> I'm fed up with Pax Exam and would love to replace it as the > >>>>>>> hibernate-osgi integration test harness. Most of the Karaf > committers > >>>>>>> I've been working with hate it more than I do. Every single time > we > >>>>>>> upgrade the Karaf version, something less-than-minor in > hibernate-osgi, > >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam itself, > >>>>>>> there's some new obfuscated failure. And no matter how much I > pray, it > >>>>>>> refuses to let us get to the container logs to figure out what > >>>>>>> happened. Tis a house of cards. > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> One alternative that recently came up elsewhere: use Docker to > bootstrap > >>>>>>> the container, hit it with our features.xml, install a test bundle > that > >>>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), > then > >>>>>>> hit the endpoints and run assertions. > >>>>>>> > >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct access > to all > >>>>>>> logs, easier to eventually support and test other containers. > >>>>>>> > >>>>>>> Cons: Need Docker installed for local test runs, probably safer to > >>>>>>> isolate the integration test behind a disabled-by-default Maven > profile. > >>>>>>> > >>>>>>> Any gut reactions? 
> >>>>>>> > >>>>>>> OSGi is fun and I'm not at all bitter, > >>>>>>> > >>>>>>> -Brett- > >>>>>>> > >>>>>>> ;) > >>>>>>> > >>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> hibernate-dev mailing list > >>>>>>> hibernate-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>>>>> _______________________________________________ > >>>>>> hibernate-dev mailing list > >>>>>> hibernate-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>>>> _______________________________________________ > >>>>> hibernate-dev mailing list > >>>>> hibernate-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From brett at hibernate.org Fri Jan 12 13:16:49 2018 From: brett at hibernate.org (Brett Meyer) Date: Fri, 12 Jan 2018 13:16:49 -0500 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> Message-ID: <509a8137-a082-63a1-df57-7c9c55704db6@hibernate.org> I guess the way I'm looking at this is Docker will be primarily used by Jenkins, and myself or anyone working directly on hibernate-osgi itself. Otherwise, it'll be disabled by default and hidden behind a profile. We'll make sure that most contributors running the entire Hibernate test suite won't be affected... On 1/12/18 1:13 PM, andrea boriero wrote: > I already have Docker running on my machine, so it seems not a big > issue for me,but not sure about the impact for others. > > Anyway It's worth giving a try. > > On 12 January 2018 at 17:54, Sanne Grinovero > wrote: > > On 12 January 2018 at 17:32, Brett Meyer > wrote: > > If I don't have time to contribute to Pax Exam, I certainly > don't have > > time to start a new project haha... > > > > And realistically, that "something new" would likely involve > containers > > anyway. > > > > At this point, mostly a question of 1) status quo, 2) Docker (or any > > other container-based solution), or 3) try screwing around with > Pax Exam > > in "server-only" mode (but I don't have high hopes there). > > Sure. Docker is now available on the CI slaves too, so that's not > a problem. > > The only annoyance is that the whole ORM team - and anyone > contributing - would need to have Docker as well, but that doesn't > seem too bad to me... and was likely bound to happen for other tools > :) > > Steve, Chris and Andrea? Ok with that? Maybe you have Docker > running already? > > > > > > > On 1/12/18 12:27 PM, Sanne Grinovero wrote: > >> Ok, looks like you really should start something new :) > >> > >> Hopefully many of those other annoyed Karaf developers will follow. > >> > >> On 12 January 2018 at 13:59, Brett Meyer > wrote: > >>> Plus, for me, it's more a question of time. 
I only have a bit > available > >>> for open source work these days, and I'd rather spend that > knocking out > >>> some of the hibernate-osgi tasks we've had on our plate for a > while. I > >>> unfortunately don't have anything left to contribute to Pax > Exam itself, > >>> assuming that would even fix the problem. > >>> > >>> Even worse, we're barely using the integration tests for > anything more > >>> than a simple smoke test at this point, since it seems like > every time > >>> we touch it something new goes wrong. Looking for a more > *consistent* > >>> solution -- need more confidence in the backbone. > >>> > >>> > >>> On 1/12/18 8:56 AM, Brett Meyer wrote: > >>>> Sorry Gunnar/Sanne, should have clarified this first: > >>>> > >>>> We actually used Arquillian before Pax Exam, and the > experience was > >>>> far worse (somewhat of a long story)... > >>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I > >>>> can't imagine using Karaf without the helpers being a walk in > the park > >>>> > >>>> That's not actually the case. The way Pax Exam currently > runs our > >>>> tests is fundamentally part of the problem. The test code is > >>>> dynamically wrapped in an actual bundle, using something like > >>>> tiny-bundles, and executed *within* the container itself. Pax > >>>> overrides runs with additional probes, overrides logging > >>>> infrastructure, etc. Those nuances can often be the source > of many of > >>>> the bugs (there are a ton of classloader implications, etc. > -- IIRC, > >>>> this was one area where Arquillian was much, much worse). > There are > >>>> some benefits to that setup, but for Hibernate it mainly gets > in the way. > >>>> > >>>> It *does* have a "server mode" where tests run outside of the > >>>> container, but I vaguely remember going down that path early > on and > >>>> hitting a roadblock. For the life of me, I can't remember the > >>>> specifics. But my pushback here is that ultimately Docker > might be > >>>> more preferable, giving us more of a real world scenario to > do true > >>>> e2e tests without something else in the middle. > >>>> > >>>>> so I can't imagine using Karaf without the helpers being a > walk in > >>>> the park; e.g. having to deal with HTTP operations comes with > its own > >>>> baggage {dependencies, complexity, speed, .. } and generally more > >>>> stuff to maintain. > >>>> > >>>> I guess I respectfully disagree with that, but purely due to > Karaf > >>>> features. Our features.xml does most of the heavy lifting for us > >>>> w/r/t getting Hibernate provisioned. The same would be true > with the > >>>> test harness bundle/feature. REST is simple and > out-of-the-box thanks > >>>> to Karaf + CXF or Camel. For other possible routes (Karaf > commands), > >>>> we already have code available in our demo/quickstart projects. > >>>> > >>>>> Also: considered contributing to Pax? > >>>> Yes, of course. But the fact that numerous Karaf *committers* > >>>> themselves have a long history of built-up frustration on it > doesn't > >>>> leave me optimistic. A couple of them had tried to pitch in > at one > >>>> point and weren't able to get anywhere. > >>>> > >>>>> but it seems their developers really expect their users to > be deeply > >>>> familiar with it all > >>>> > >>>> Absolutely! But again, our struggles also come down to the > >>>> fundamental way Pax Exam works... 
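A minimal sketch of what such a test harness bundle might look like on the Karaf + CXF route described above. All names here (TestEndpoint, TestEntity, the paths) are hypothetical rather than actual hibernate-osgi code; the EntityManager is assumed to be injected by whatever wiring the bundle uses (Blueprint, Declarative Services, ...), and transaction handling is elided:

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.QueryParam;

    // Published over HTTP by Karaf + CXF once the bundle is installed via features.xml.
    @Path("/hibernate-osgi-test")
    public class TestEndpoint {

        // Assumed to be injected by the bundle's own wiring; not shown here.
        private EntityManager entityManager;

        // Persist a simple entity inside the container on behalf of an external test.
        @POST
        @Path("/persist")
        public String persist(@QueryParam("name") String name) {
            entityManager.persist(new TestEntity(name));
            return name;
        }

        // Fetch it back so the external test can assert on the round trip.
        @GET
        @Path("/fetch")
        public String fetch(@QueryParam("name") String name) {
            return entityManager.find(TestEntity.class, name).name;
        }
    }

    // Trivial mapped entity, only here to keep the sketch self-contained.
    @Entity
    class TestEntity {
        @Id
        String name;

        TestEntity() {
        }

        TestEntity(String name) {
            this.name = name;
        }
    }

The point being: everything inside the container stays a plain, fully-logged Karaf deployment -- nothing is wrapped or probed.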
> >>>> > >>>> > >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: > >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of > maintining > >>>>> our own test infrastructure. > >>>>> > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I > can't > >>>>> imagine using Karaf without the helpers being a walk in the park; > e.g. > >>>>> having to deal with HTTP operations comes with its own baggage > >>>>> {dependencies, complexity, speed, .. } and generally more stuff to > >>>>> maintain. > >>>>> > >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you could > >>>>> start your own tool, but I'd prefer to see it in a separate > repository > >>>>> :) e.g. a nice Gradle plugin so maybe you get more people helping? > >>>>> > >>>>> Also: considered contributing to Pax? My personal experience with it > >>>>> has always been a pain but if I had to try identify the reason, it > was > >>>>> mostly caused by me being unfamiliar with Karaf and not having good > >>>>> clues to track down the real failure; maybe some minor error > reporting > >>>>> improvements could make a big difference to its usability? Just > >>>>> saying, I don't feel like Pax is bad, but it seems their developers > >>>>> really expect their users to be deeply familiar with it all - feels > >>>>> like the typical case in which they could use some feedback and a > >>>>> hand. > >>>>> > >>>>> Thanks, > >>>>> Sanne > >>>>> > >>>>> On 12 January 2018 at 08:22, Gunnar Morling > >>>>> wrote: > >>>>>> Hi Brett, > >>>>>> > >>>>>> We also had our fair share of frustration with Pax Exam in HV, and > >>>>>> I was > >>>>>> (more than once) at the point of dropping it. > >>>>>> > >>>>>> Docker could work, but as you say it's a bit of a heavy dependency, > >>>>>> if not > >>>>>> required anyways. Not sure whether I'd like to add this as a > >>>>>> prerequisite > >>>>>> for the HV build to be executed. And tests in separate profiles > >>>>>> tend to be > >>>>>> "forgotten" in my experience. > >>>>>> > >>>>>> One other approach could be to use Arquillian's OSGi support (see > >>>>>> https://github.com/arquillian/arquillian-container-osgi), did you > >>>>>> consider > >>>>>> to use that one as an alternative? > >>>>>> > >>>>>> Cheers, > >>>>>> > >>>>>> --Gunnar > >>>>>> > >>>>>> > >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer: > >>>>>> > >>>>>>> > >>>>>>> > >>>>>>> I'm fed up with Pax Exam and would love to replace it as the > >>>>>>> hibernate-osgi integration test harness. Most of the > Karaf committers > >>>>>>> I've been working with hate it more than I do. Every > single time we > >>>>>>> upgrade the Karaf version, something less-than-minor in > hibernate-osgi, > >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax Exam > >>>>>>> itself, > >>>>>>> there's some new obfuscated failure. And no matter how much I > >>>>>>> pray, it > >>>>>>> refuses to let us get to the container logs to figure out what > >>>>>>> happened. Tis a house of cards. > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> One alternative that recently came up elsewhere: use Docker to > >>>>>>> bootstrap > >>>>>>> the container, hit it with our features.xml, install a test bundle > >>>>>>> that > >>>>>>> exposes functionality externally (over HTTP, Karaf commands, etc), > >>>>>>> then > >>>>>>> hit the endpoints and run assertions. > >>>>>>> > >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct access > >>>>>>> to all > >>>>>>> logs, easier to eventually support and test other containers. 
> >>>>>>> > >>>>>>> Cons: Need Docker installed for local test runs, probably > safer to > >>>>>>> isolate the integration test behind a disabled-by-default > Maven profile. > >>>>>>> > >>>>>>> Any gut reactions? > >>>>>>> > >>>>>>> OSGi is fun and I'm not at all bitter, > >>>>>>> > >>>>>>> -Brett- > >>>>>>> > >>>>>>> ;) > >>>>>>> > >>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> hibernate-dev mailing list > >>>>>>> hibernate-dev at lists.jboss.org > > >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > >>>>>> _______________________________________________ > >>>>>> hibernate-dev mailing list > >>>>>> hibernate-dev at lists.jboss.org > > >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > >>>>> _______________________________________________ > >>>>> hibernate-dev mailing list > >>>>> hibernate-dev at lists.jboss.org > > >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > From gunnar at hibernate.org Fri Jan 12 15:44:45 2018 From: gunnar at hibernate.org (Gunnar Morling) Date: Fri, 12 Jan 2018 21:44:45 +0100 Subject: [hibernate-dev] Plans to release 5.2.13? In-Reply-To: References: Message-ID: Hey all, > at least plan a 5.2.13 release soon to release all the fixes already in? +1 for doing another 5.2 release if there are lots of issues already merged and 5.3 isn't going through the door very soon. It'd be great to get those nice fixes you all did out to users. I realize I won't be the one doing this work, so consider it just as my 2c, but from the sidelines it seems like an effort well spent. Cheers, --Gunnar 2018-01-10 12:41 GMT+01:00 Guillaume Smet : > Hi, > > On Fri, Jan 5, 2018 at 4:24 PM, Steve Ebersole > wrote: > > > Yep, I know how long it takes to do a release - I've been doing them for > > almost 15 years ;) > > > > I'm not sure if you are agreeing or disagreeing about blogging every > > bugfix release. But anyway, Sanne asked what would help automate the > > release process, so I am listing things that would help. Of course you > can > > feel free to contribute blogging and emailing announcement plugins for > > Gradle for us to use in the automated release tasks ;) > > > > AFAICS, lately, the ORM bugfix releases announcement is just a link to the > changelog. I don't think it would buy you a lot to automate it. > > For the NoORM projects, the announcement part (Twitter, Mail, Blog) is > still manual. I don't think it's that bad. > > > > If you release something every month, it's not that bad if a bugfix slips > >> to the next release. If a PR is not completely ready, well, it's going > to > >> be in the next one, no need to wait. It helps getting the release > >> coordination easier. > >> > > > > 5.2 just got lost in the cracks as Andrea, Chris and I were all working > on > > 6.0. > > > > > > It's also easier to detect and fix regressions when you release more > >> frequently. > >> > > > > That's a fallacy. Or at least its not true in isolation. 
It depends on > > the things that would highlight the regression picking up that release > and > > playing with it, since your entire premise here is that the regression is > > not tested as part of the test suite. But that's actually not what > happens > > today in terms of our inter-project integrations... really we find out > many > > releases later when OGM or Search update to these newer ORM releases. > > > > I did a quite a lot of regression hunt myself in $previousJob (mostly on > Search but a bit on ORM too), and it did help to upgrade often and when the > releases were not too big. Easier to find the commit causing the > regression. > > I don't know if there are a lot of companies doing that (I know mine > stopped to do that after I left) but it did really help to upgrade in > smaller steps. > > That's what I was trying to explain. > > FWIW, in the active community branches, I usually do the backport right > >> away - if I think the issue requires backporting, sometimes, it's just > not > >> worth it or too risky. And I'm doing the "what should I backport?" thing > >> only on product only branches. > >> > > > > > > This right here is the crux - "active community branch". By definition > no > > branch is in active community development. Again, we have discussed this > > as a team multiple times. Once the next release is stable we stop > > developing the previous one, with a few caveats. E.g.: > > > > - Once 5.3 is stable we do generally expect to do a *few* additional > > 5.2 releases. But let's be careful about the expectation about the > phrase > > "few" here. I really mean one or 2... > > - For major releases (5.x -> 6.x) we generally expect to do a larger > > number of releases of the 5.3 line. Again though, not indefinite. > > > > The basic gist is that we are an open source community. We simply do not > > have the resources to maintain infinite lines of development. We need to > > focus on what is important. I think we all agree that currently 5.2 is > > still important, but I think we may all have different expectations for > > what that means moving forward as 5.3 becomes the stable release. I > cannot > > give a concrete "we will only do X more 5.2 releases after 5.3 is stable" > > answer. It might be 2. It might be 3. And it might be 1. > > > > I think we agree on the principles. We just need to have a viable > definition of "stable" for the users. > > > > I'm not saying it would be that easy with ORM as the flow of issues is > >> significantly larger. Just stating how we do it. > >> > > > > Sure. And time-boxed releases are what we normally strive for as well in > > ORM. 5.2 is largely an aberration in this regard. Again - Andrea, Chris > > and I were focused on 6.0 work and since there is no 5.2 based Red Hat > work > > this fell between the cracks > > > > So I think we all agree that the situation with 5.2 is less than ideal. > > And it's the version currently recommended for community usage. Which is a > large part of Hibernate usage. > > Could we agree on releasing it regularly from now on and at least plan a > 5.2.13 release soon to release all the fixes already in? > > Thanks! 
> > -- > Guillaume > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gunnar at hibernate.org Fri Jan 12 15:52:04 2018 From: gunnar at hibernate.org (Gunnar Morling) Date: Fri, 12 Jan 2018 21:52:04 +0100 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: <509a8137-a082-63a1-df57-7c9c55704db6@hibernate.org> References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> <509a8137-a082-63a1-df57-7c9c55704db6@hibernate.org> Message-ID: Brett, What's still unclear to me is: when going the Docker route, won't you still need some code which deploys your tests to Karaf, runs them there, and fetches the test results, so that e.g. your Gradle build will fail if there are test failures? Would you envision writing these bits yourself? And wouldn't this amount to re-implementing Pax Exam yourself? Seems I'm still missing a piece of the story :) Cheers, --Gunnar 2018-01-12 19:16 GMT+01:00 Brett Meyer : > I guess the way I'm looking at this is Docker will be primarily used by > Jenkins, and myself or anyone working directly on hibernate-osgi > itself. Otherwise, it'll be disabled by default and hidden behind a > profile. We'll make sure that most contributors running the entire > Hibernate test suite won't be affected... > > > On 1/12/18 1:13 PM, andrea boriero wrote: > > I already have Docker running on my machine, so it seems not a big > > issue for me,but not sure about the impact for others. > > > > Anyway It's worth giving a try. 
Looking for a more > > *consistent* > > >>> solution -- need more confidence in the backbone. > > >>> > > >>> > > >>> On 1/12/18 8:56 AM, Brett Meyer wrote: > > >>>> Sorry Gunnar/Sanne, should have clarified this first: > > >>>> > > >>>> We actually used Arquillian before Pax Exam, and the > > experience was > > >>>> far worse (somewhat of a long story)... > > >>>> > > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, so I > > >>>> can't imagine using Karaf without the helpers being a walk in > > the park > > >>>> > > >>>> That's not actually the case. The way Pax Exam currently > > runs our > > >>>> tests is fundamentally part of the problem. The test code is > > >>>> dynamically wrapped in an actual bundle, using something like > > >>>> tiny-bundles, and executed *within* the container itself. Pax > > >>>> overrides runs with additional probes, overrides logging > > >>>> infrastructure, etc. Those nuances can often be the source > > of many of > > >>>> the bugs (there are a ton of classloader implications, etc. > > -- IIRC, > > >>>> this was one area where Arquillian was much, much worse). > > There are > > >>>> some benefits to that setup, but for Hibernate it mainly gets > > in the way. > > >>>> > > >>>> It *does* have a "server mode" where tests run outside of the > > >>>> container, but I vaguely remember going down that path early > > on and > > >>>> hitting a roadblock. For the life of me, I can't remember the > > >>>> specifics. But my pushback here is that ultimately Docker > > might be > > >>>> more preferable, giving us more of a real world scenario to > > do true > > >>>> e2e tests without something else in the middle. > > >>>> > > >>>>> so I can't imagine using Karaf without the helpers being a > > walk in > > >>>> the park; e.g. having to deal with HTTP operations comes with > > its own > > >>>> baggage {dependencies, complexity, speed, .. } and generally > more > > >>>> stuff to maintain. > > >>>> > > >>>> I guess I respectfully disagree with that, but purely due to > > Karaf > > >>>> features. Our features.xml does most of the heavy lifting for > us > > >>>> w/r/t getting Hibernate provisioned. The same would be true > > with the > > >>>> test harness bundle/feature. REST is simple and > > out-of-the-box thanks > > >>>> to Karaf + CXF or Camel. For other possible routes (Karaf > > commands), > > >>>> we already have code available in our demo/quickstart projects. > > >>>> > > >>>>> Also: considered contributing to Pax? > > >>>> Yes, of course. But the fact that numerous Karaf *committers* > > >>>> themselves have a long history of built-up frustration on it > > doesn't > > >>>> leave me optimistic. A couple of them had tried to pitch in > > at one > > >>>> point and weren't able to get anywhere. > > >>>> > > >>>>> but it seems their developers really expect their users to > > be deeply > > >>>> familiar with it all > > >>>> > > >>>> Absolutely! But again, our struggles also come down to the > > >>>> fundamental way Pax Exam works... > > >>>> > > >>>> > > >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: > > >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of > > maintining > > >>>>> our own test infrastructure. > > >>>>> > > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, > > so I can't > > >>>>> imagine using Karaf without the helpers being a walk in the > > park; e.g. > > >>>>> having to deal with HTTP operations comes with its own baggage > > >>>>> {dependencies, complexity, speed, .. 
} and generally more > > stuff to > > >>>>> maintain. > > >>>>> > > >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you > > could > > >>>>> start your own tool, but I'd prefer to see it in a separate > > repository > > >>>>> :) e.g. a nice Gradle plugin so maybe you get more people > > helping? > > >>>>> > > >>>>> Also: considered contributing to Pax? My personal experience > > with it > > >>>>> has always been a pain but if I had to try identify the > > reason, it was > > >>>>> mostly caused by me being unfamiliar with Karaf and not > > having good > > >>>>> clues to track down the real failure; maybe some minor error > > reporting > > >>>>> improvements could make a big difference to its usability? Just > > >>>>> saying, I don't feel like Pax is bad, but it seems their > > developers > > >>>>> really expect their users to be deeply familiar with it all > > - feels > > >>>>> like the typical case in which they could use some feedback > > and a > > >>>>> hand. > > >>>>> > > >>>>> Thanks, > > >>>>> Sanne > > >>>>> > > >>>>> On 12 January 2018 at 08:22, Gunnar > > Morling> wrote: > > >>>>>> Hi Brett, > > >>>>>> > > >>>>>> We also had our fair share of frustration with Pax Exam in > > HV, and I was > > >>>>>> (more than once) at the point of dropping it. > > >>>>>> > > >>>>>> Docker could work, but as you say it's a bit of a heavy > > dependency, if not > > >>>>>> required anyways. Not sure whether I'd like to add this as > > a prerequisite > > >>>>>> for the HV build to be executed. And tests in separate > > profiles tend to be > > >>>>>> "forgotten" in my experience. > > >>>>>> > > >>>>>> One other approach could be to use Arquillian's OSGi > > support (see > > >>>>>> https://github.com/arquillian/arquillian-container-osgi > > ), did > > you consider > > >>>>>> to use that one as an alternative? > > >>>>>> > > >>>>>> Cheers, > > >>>>>> > > >>>>>> --Gunnar > > >>>>>> > > >>>>>> > > >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer > >: > > >>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> I'm fed up with Pax Exam and would love to replace it as the > > >>>>>>> hibernate-osgi integration test harness. Most of the > > Karaf committers > > >>>>>>> I've been working with hate it more than I do. Every > > single time we > > >>>>>>> upgrade the Karaf version, something less-than-minor in > > hibernate-osgi, > > >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax > > Exam itself, > > >>>>>>> there's some new obfuscated failure. And no matter how > > much I pray, it > > >>>>>>> refuses to let us get to the container logs to figure out > what > > >>>>>>> happened. Tis a house of cards. > > >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> One alternative that recently came up elsewhere: use > > Docker to bootstrap > > >>>>>>> the container, hit it with our features.xml, install a > > test bundle that > > >>>>>>> exposes functionality externally (over HTTP, Karaf > > commands, etc), then > > >>>>>>> hit the endpoints and run assertions. > > >>>>>>> > > >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct > > access to all > > >>>>>>> logs, easier to eventually support and test other containers. > > >>>>>>> > > >>>>>>> Cons: Need Docker installed for local test runs, probably > > safer to > > >>>>>>> isolate the integration test behind a disabled-by-default > > Maven profile. > > >>>>>>> > > >>>>>>> Any gut reactions? 
> > >>>>>>> > > >>>>>>> OSGi is fun and I'm not at all bitter, > > >>>>>>> > > >>>>>>> -Brett- > > >>>>>>> > > >>>>>>> ;) > > >>>>>>> > > >>>>>>> > > >>>>>>> _______________________________________________ > > >>>>>>> hibernate-dev mailing list > > >>>>>>> hibernate-dev at lists.jboss.org > > > > >>>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > >>>>>> _______________________________________________ > > >>>>>> hibernate-dev mailing list > > >>>>>> hibernate-dev at lists.jboss.org > > > > >>>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > >>>>> _______________________________________________ > > >>>>> hibernate-dev mailing list > > >>>>> hibernate-dev at lists.jboss.org > > > > >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > >>> _______________________________________________ > > >>> hibernate-dev mailing list > > >>> hibernate-dev at lists.jboss.org > > > > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > > > > > > _______________________________________________ > > > hibernate-dev mailing list > > > hibernate-dev at lists.jboss.org jboss.org> > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Fri Jan 12 16:04:14 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 12 Jan 2018 21:04:14 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> <509a8137-a082-63a1-df57-7c9c55704db6@hibernate.org> Message-ID: I believe most of this is already handled for various build tools (like the gradle docker plugin) and simplified by docker image repos (bintray, e.g.). But this is a good point. I'd be interested to see if you've done this before Brett.. On Fri, Jan 12, 2018 at 2:56 PM Gunnar Morling wrote: > Brett, > > What's still unclear to me is when going the Docker route, won't you still > need some code which deploys your tests to Karaf, runs them there and > fetches the test results, so e.g. your Gradle build will fail if there are > test failures? Would you envision to write these bits yourself? And > wouldn't this amount to re-implementing PaxExam yourself? Seems I'm still > missing a piece of the story :) > > Cheers, > > --Gunnar > > > 2018-01-12 19:16 GMT+01:00 Brett Meyer : > > > I guess the way I'm looking at this is Docker will be primarily used by > > Jenkins, and myself or anyone working directly on hibernate-osgi > > itself. Otherwise, it'll be disabled by default and hidden behind a > > profile. We'll make sure that most contributors running the entire > > Hibernate test suite won't be affected... > > > > > > On 1/12/18 1:13 PM, andrea boriero wrote: > > > I already have Docker running on my machine, so it seems not a big > > > issue for me,but not sure about the impact for others. > > > > > > Anyway It's worth giving a try. 
> > > > > > On 12 January 2018 at 17:54, Sanne Grinovero > > > wrote: > > > > > > On 12 January 2018 at 17:32, Brett Meyer > > > wrote: > > > > If I don't have time to contribute to Pax Exam, I certainly > > > don't have > > > > time to start a new project haha... > > > > > > > > And realistically, that "something new" would likely involve > > > containers > > > > anyway. > > > > > > > > At this point, mostly a question of 1) status quo, 2) Docker (or > > any > > > > other container-based solution), or 3) try screwing around with > > > Pax Exam > > > > in "server-only" mode (but I don't have high hopes there). > > > > > > Sure. Docker is now available on the CI slaves too, so that's not > > > a problem. > > > > > > The only annoyance is that the whole ORM team - and anyone > > > contributing - would need to have Docker as well, but that doesn't > > > seem too bad to me... and was likely bound to happen for other > tools > > > :) > > > > > > Steve, Chris and Andrea? Ok with that? Maybe you have Docker > > > running already? > > > > > > > > > > > > > > > On 1/12/18 12:27 PM, Sanne Grinovero wrote: > > > >> Ok, looks like you really should start something new :) > > > >> > > > >> Hopefully many of those other annoyed Karaf developers will > > follow. > > > >> > > > >> On 12 January 2018 at 13:59, Brett Meyer > > > wrote: > > > >>> Plus, for me, it's more a question of time. I only have a bit > > > available > > > >>> for open source work these days, and I'd rather spend that > > > knocking out > > > >>> some of the hibernate-osgi tasks we've had on our plate for a > > > while. I > > > >>> unfortunately don't have anything left to contribute to Pax > > > Exam itself, > > > >>> assuming that would even fix the problem. > > > >>> > > > >>> Even worse, we're barely using the integration tests for > > > anything more > > > >>> than a simple smoke test at this point, since it seems like > > > every time > > > >>> we touch it something new goes wrong. Looking for a more > > > *consistent* > > > >>> solution -- need more confidence in the backbone. > > > >>> > > > >>> > > > >>> On 1/12/18 8:56 AM, Brett Meyer wrote: > > > >>>> Sorry Gunnar/Sanne, should have clarified this first: > > > >>>> > > > >>>> We actually used Arquillian before Pax Exam, and the > > > experience was > > > >>>> far worse (somewhat of a long story)... > > > >>>> > > > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, > so I > > > >>>> can't imagine using Karaf without the helpers being a walk in > > > the park > > > >>>> > > > >>>> That's not actually the case. The way Pax Exam currently > > > runs our > > > >>>> tests is fundamentally part of the problem. The test code is > > > >>>> dynamically wrapped in an actual bundle, using something like > > > >>>> tiny-bundles, and executed *within* the container itself. Pax > > > >>>> overrides runs with additional probes, overrides logging > > > >>>> infrastructure, etc. Those nuances can often be the source > > > of many of > > > >>>> the bugs (there are a ton of classloader implications, etc. > > > -- IIRC, > > > >>>> this was one area where Arquillian was much, much worse). > > > There are > > > >>>> some benefits to that setup, but for Hibernate it mainly gets > > > in the way. > > > >>>> > > > >>>> It *does* have a "server mode" where tests run outside of the > > > >>>> container, but I vaguely remember going down that path early > > > on and > > > >>>> hitting a roadblock. For the life of me, I can't remember the > > > >>>> specifics. 
But my pushback here is that ultimately Docker > > > might be > > > >>>> more preferable, giving us more of a real world scenario to > > > do true > > > >>>> e2e tests without something else in the middle. > > > >>>> > > > >>>>> so I can't imagine using Karaf without the helpers being a > > > walk in > > > >>>> the park; e.g. having to deal with HTTP operations comes with > > > its own > > > >>>> baggage {dependencies, complexity, speed, .. } and generally > > more > > > >>>> stuff to maintain. > > > >>>> > > > >>>> I guess I respectfully disagree with that, but purely due to > > > Karaf > > > >>>> features. Our features.xml does most of the heavy lifting for > > us > > > >>>> w/r/t getting Hibernate provisioned. The same would be true > > > with the > > > >>>> test harness bundle/feature. REST is simple and > > > out-of-the-box thanks > > > >>>> to Karaf + CXF or Camel. For other possible routes (Karaf > > > commands), > > > >>>> we already have code available in our demo/quickstart > projects. > > > >>>> > > > >>>>> Also: considered contributing to Pax? > > > >>>> Yes, of course. But the fact that numerous Karaf *committers* > > > >>>> themselves have a long history of built-up frustration on it > > > doesn't > > > >>>> leave me optimistic. A couple of them had tried to pitch in > > > at one > > > >>>> point and weren't able to get anywhere. > > > >>>> > > > >>>>> but it seems their developers really expect their users to > > > be deeply > > > >>>> familiar with it all > > > >>>> > > > >>>> Absolutely! But again, our struggles also come down to the > > > >>>> fundamental way Pax Exam works... > > > >>>> > > > >>>> > > > >>>> On 1/12/18 6:27 AM, Sanne Grinovero wrote: > > > >>>>> +1 to explore alternatives to Pax Exam, but I'd be wary of > > > maintining > > > >>>>> our own test infrastructure. > > > >>>>> > > > >>>>> Pax Exam was just "helping" to deploy/run things in Karaf, > > > so I can't > > > >>>>> imagine using Karaf without the helpers being a walk in the > > > park; e.g. > > > >>>>> having to deal with HTTP operations comes with its own > baggage > > > >>>>> {dependencies, complexity, speed, .. } and generally more > > > stuff to > > > >>>>> maintain. > > > >>>>> > > > >>>>> So.. +1 to try out Arquillian or anything else. Or maybe you > > > could > > > >>>>> start your own tool, but I'd prefer to see it in a separate > > > repository > > > >>>>> :) e.g. a nice Gradle plugin so maybe you get more people > > > helping? > > > >>>>> > > > >>>>> Also: considered contributing to Pax? My personal experience > > > with it > > > >>>>> has always been a pain but if I had to try identify the > > > reason, it was > > > >>>>> mostly caused by me being unfamiliar with Karaf and not > > > having good > > > >>>>> clues to track down the real failure; maybe some minor error > > > reporting > > > >>>>> improvements could make a big difference to its usability? > Just > > > >>>>> saying, I don't feel like Pax is bad, but it seems their > > > developers > > > >>>>> really expect their users to be deeply familiar with it all > > > - feels > > > >>>>> like the typical case in which they could use some feedback > > > and a > > > >>>>> hand. > > > >>>>> > > > >>>>> Thanks, > > > >>>>> Sanne > > > >>>>> > > > >>>>> On 12 January 2018 at 08:22, Gunnar > > > Morling> wrote: > > > >>>>>> Hi Brett, > > > >>>>>> > > > >>>>>> We also had our fair share of frustration with Pax Exam in > > > HV, and I was > > > >>>>>> (more than once) at the point of dropping it. 
> > > >>>>>> > > > >>>>>> Docker could work, but as you say it's a bit of a heavy > > > dependency, if not > > > >>>>>> required anyways. Not sure whether I'd like to add this as > > > a prerequisite > > > >>>>>> for the HV build to be executed. And tests in separate > > > profiles tend to be > > > >>>>>> "forgotten" in my experience. > > > >>>>>> > > > >>>>>> One other approach could be to use Arquillian's OSGi > > > support (see > > > >>>>>> https://github.com/arquillian/arquillian-container-osgi > > > ), did > > > you consider > > > >>>>>> to use that one as an alternative? > > > >>>>>> > > > >>>>>> Cheers, > > > >>>>>> > > > >>>>>> --Gunnar > > > >>>>>> > > > >>>>>> > > > >>>>>> 2018-01-12 3:34 GMT+01:00 Brett Meyer > > >: > > > >>>>>> > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> I'm fed up with Pax Exam and would love to replace it as > the > > > >>>>>>> hibernate-osgi integration test harness. Most of the > > > Karaf committers > > > >>>>>>> I've been working with hate it more than I do. Every > > > single time we > > > >>>>>>> upgrade the Karaf version, something less-than-minor in > > > hibernate-osgi, > > > >>>>>>> upgrade/change dependencies, or attempt to upgrade Pax > > > Exam itself, > > > >>>>>>> there's some new obfuscated failure. And no matter how > > > much I pray, it > > > >>>>>>> refuses to let us get to the container logs to figure out > > what > > > >>>>>>> happened. Tis a house of cards. > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> One alternative that recently came up elsewhere: use > > > Docker to bootstrap > > > >>>>>>> the container, hit it with our features.xml, install a > > > test bundle that > > > >>>>>>> exposes functionality externally (over HTTP, Karaf > > > commands, etc), then > > > >>>>>>> hit the endpoints and run assertions. > > > >>>>>>> > > > >>>>>>> Pros: true "integration test", plain vanilla Karaf, direct > > > access to all > > > >>>>>>> logs, easier to eventually support and test other > containers. > > > >>>>>>> > > > >>>>>>> Cons: Need Docker installed for local test runs, probably > > > safer to > > > >>>>>>> isolate the integration test behind a disabled-by-default > > > Maven profile. > > > >>>>>>> > > > >>>>>>> Any gut reactions? 
From brett at hibernate.org  Fri Jan 12 19:08:47 2018
From: brett at hibernate.org (Brett Meyer)
Date: Fri, 12 Jan 2018 19:08:47 -0500
Subject: [hibernate-dev] replace Pax Exam with Docker
In-Reply-To:
References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
 <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org>
 <509a8137-a082-63a1-df57-7c9c55704db6@hibernate.org>
Message-ID: <2d3bf217-806f-643a-53e0-70a836ec2013@hibernate.org>

Great questions. There'd be clear separation of concerns in this:

* Docker
  o Plain 'ole Karaf
  o Install our features.xml
  o Install a test bundle that enables the ability to remotely execute
    some flows. I had been thinking along the lines of what we do in
    the OSGi quickstart/demo projects, where we provide a simple
    persistence context and the ability to write/update/fetch/delete
    entities.
* External
  o Plain JUnit methods
  o Hit the test bundle through its remote API (HTTP or something
    else), executing those simple functions.
  o Assertions based on the results or metrics

So yes, there is a bit of overlap with the Pax Exam setup, but removing
the dynamic tiny-bundle, Pax Probes, and CL caveats. And most
importantly, full access to standard Karaf logging without more praying.

Before I get too far into the weeds here, one of the ActiveMQ guys gave
me some ref code from an entirely different Pax Exam setup that may
closely resemble what I'm describing, but without Docker. I'll give
that a fair shake before diving off any further, but as always, devil's
in the details.
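To make the "External" half concrete, the JUnit side could stay this
small -- a sketch only: the endpoint path and payload are made up, 8181
is just Karaf's default HTTP port, and the real remote API would be
whatever the test bundle exposes. Only the JDK is used, so the harness
itself stays dependency-free:

    // Sketch of the "External" half: plain JUnit hitting the container
    // over HTTP. Endpoint path and payload are invented for illustration.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Assert;
    import org.junit.Test;

    public class OsgiSmokeIT {

        // Karaf running inside Docker, HTTP port published to the host
        private static final String BASE_URL = "http://localhost:8181/hibernate-test";

        @Test
        public void persistAndFetchEntity() throws Exception {
            // ask the test bundle to persist a simple entity...
            String id = call("POST", "/entity?name=smoke");

            // ...then fetch it back and assert on what comes out
            String fetched = call("GET", "/entity/" + id);
            Assert.assertTrue(fetched.contains("smoke"));
        }

        private String call(String method, String path) throws Exception {
            HttpURLConnection c =
                    (HttpURLConnection) new URL(BASE_URL + path).openConnection();
            c.setRequestMethod(method);
            Assert.assertEquals(200, c.getResponseCode());
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(c.getInputStream()))) {
                return r.readLine();
            }
        }
    }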
> I believe most of this is already handled for various build tools
> (like the gradle docker plugin) and simplified by docker image repos
> (bintray, e.g.).

Exactly -- the Gradle and Maven plugins for Docker handle almost all of
the heavy lifting. Our setup has been pretty straight-forward so far.

On 1/12/18 4:04 PM, Steve Ebersole wrote:
> I believe most of this is already handled for various build tools
> (like the gradle docker plugin) and simplified by docker image repos
> (bintray, e.g.).
>
> But this is a good point. I'd be interested to see if you've done
> this before Brett..
>
> On Fri, Jan 12, 2018 at 2:56 PM Gunnar Morling wrote:
>
> > Brett,
> >
> > What's still unclear to me is when going the Docker route, won't
> > you still need some code which deploys your tests to Karaf, runs
> > them there and fetches the test results, so e.g. your Gradle build
> > will fail if there are test failures? Would you envision to write
> > these bits yourself? And wouldn't this amount to re-implementing
> > PaxExam yourself? Seems I'm still missing a piece of the story :)
> >
> > Cheers,
> >
> > --Gunnar
From gbadner at redhat.com  Fri Jan 12 22:19:17 2018
From: gbadner at redhat.com (Gail Badner)
Date: Fri, 12 Jan 2018 19:19:17 -0800
Subject: [hibernate-dev] Should HHH-12150 be fixed in 5.3.0.Beta?
In-Reply-To:
References:
Message-ID:

Done.

On Fri, Jan 12, 2018 at 7:05 AM, Steve Ebersole wrote:

> Can you get those done by Wednesday? If so, I think that's a good plan.
> I can't think of anything else for you to work on atm for 5.3. Maybe once
> we start to hear back about the remaining outstanding challenges...
>
> On Thu, Jan 11, 2018 at 5:44 PM Gail Badner wrote:
>
>> HHH-12150 is currently set to be fixed in 5.3.0. I have some time I can
>> spend on this. There's another issue involving @MapKeyColumn, HHH-10575.
>>
>> Should I work on these, or something else for 5.3.0.Beta?
>>

From sanne at hibernate.org  Sat Jan 13 19:17:15 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Sun, 14 Jan 2018 00:17:15 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
Message-ID:

Hi all,

while the new build machines are fast, some of you pointed out we're
now spending a relatively high amount of time downloading maven
dependencies, this problem being compounded by the fact we "nuke" idle
slaves shortly after they become idle.

I just spent the day testing a distributed file system, and it's now
running in "production".
It's used exclusively to store the Gradle and Maven caches. This is
stateful and independent of the lifecycle of individual slave nodes.

Unfortunately this solution is not viable for Docker images, so while
I experimented with the idea I backed off from moving the docker
storage graph to a similar device. Please don't waste time trying that
w/o carefully reading the Docker documentation or talking with me :)
Also, beyond correctness of storage semantics, it's likely far less
efficient for Docker.

To learn more about our new cache:
- https://github.com/hibernate/ci.hibernate.org/commit/dc6e0a4bd09fb3ae6347081243b4fb796a219f90
- https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html

I'd add that - because of other IO tuning in place - writes might
appear out of order to other nodes, and conflicts are not handled.
Shouldn't be a problem since snapshots now have timestamps, but this
might be something to keep in mind.

N.B.
Please never rely on this as "storage": it's just meant as cache and
we reserve the right to wipe it all out at any time.

Thanks,
Sanne
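A job consumes the shared volume simply by pointing the standard cache
locations at the mount -- a sketch only, using stock Gradle and Maven
switches; the actual wiring is in the ci.hibernate.org commit linked
above, and the .m2 path under the mount is an assumption:

    # relocate the standard Gradle cache onto the shared EFS mount
    # (Gradle keeps its artifact cache under $GRADLE_USER_HOME/caches)
    export GRADLE_USER_HOME=/efs-maven-artifacts/.gradle
    ./gradlew build

    # same idea for Maven; the repository path here is an assumption
    mvn -Dmaven.repo.local=/efs-maven-artifacts/.m2/repository clean verify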
From yoann at hibernate.org  Mon Jan 15 03:42:23 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Mon, 15 Jan 2018 08:42:23 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

Thanks Sanne !

I have one question...

> Please never rely on this as "storage": it's just meant as cache and
> we reserve the right to wipe it all out at any time.

I gather you say that so that we don't try to "release" artifacts into
this cache? But temporary storage for the duration of one build will
still be safe?

Because our builds obviously rely on the local repository for
short-term storage (for the duration of the build). For example the
dependencies are only checked and downloaded if necessary at the
beginning of the build, and then are expected to exist in the local
repository until the build stops. Another example: our WildFly modules
are first built and installed in the "modules" subproject, and later
"fetched" from the local repository in the "integrationtest/wildfly"
subproject.

If we were to clear the cache during a build, things would probably go
wrong. Worse, if two parallel builds were to install the same artifacts
(e.g. hibernate-search-engine version 5.9.0-SNAPSHOT), we would run the
risk of testing the wrong "version" of this artifact in one of the
builds...

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team
From sanne at hibernate.org  Mon Jan 15 05:28:44 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Mon, 15 Jan 2018 10:28:44 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

On 15 January 2018 at 08:42, Yoann Rodiere wrote:
> I gather you say that so that we don't try to "release" artifacts into
> this cache? But temporary storage for the duration of one build will
> still be safe?

SNAPSHOTs being installed are indeed a problem, e.g. the PR testing
jobs could conflict with the regular master jobs.
We should reconfigure those to not "install" - that's actually a bad
habit, legacy from Maven 2 times - people nowadays recommend using
"mvn clean verify", especially on CI environments.

I agree about the perils of clearing the cache during in-progress
builds too.

I just meant to warn that we don't have any backup plan in place, and
I do plan to just wipe the whole thing occasionally:
 - when we have any direct need, e.g. corrupted downloads
 - when it gets too large
 - if it gets too expensive
 - regularly, just to "practice" that everything works with an empty cache

Also our "disaster recovery" plan to rebuild all infrastructure will
always assume it's ok to reboot with having this file system empty.

Thanks,
Sanne
From yoann at hibernate.org  Mon Jan 15 05:54:20 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Mon, 15 Jan 2018 10:54:20 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

> We should reconfigure those to not "install" - that's actually a bad
> habit, legacy from Maven 2 times - people nowadays recommend using
> "mvn clean verify", especially on CI environments.

I could not agree more, that would be cleaner, but that's not possible.
And believe me, I tried hard. Last time I checked, some of the plugins
we use with dynamic dependency resolution would ignore the artifacts
being built, and would always fetch the artifacts from the Maven repos
(for SNAPSHOTs, they would end up using nightlies).
I'm not talking about when we use standard maven markup to declare
dependencies, but when the plugin itself has to fetch dependencies
"dynamically", which happens when we setup a WildFly server with our
own modules in particular. See maven-dependency-plugin's "artifactItems"
configuration.

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team
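For reference, this is the style of configuration in question -- a
cut-down sketch of the maven-dependency-plugin "copy" goal with
artifactItems; the execution id and output directory are made up, and
the coordinates are just the ones mentioned in this thread. The plugin
resolves these items itself, which is why, per Yoann's observation,
artifacts from the current build can end up being bypassed:

    <!-- Sketch only: made-up execution id and output directory. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>copy-wildfly-modules</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>copy</goal>
          </goals>
          <configuration>
            <artifactItems>
              <artifactItem>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-search-engine</artifactId>
                <version>5.9.0-SNAPSHOT</version>
                <outputDirectory>${project.build.directory}/wildfly-modules</outputDirectory>
              </artifactItem>
            </artifactItems>
          </configuration>
        </execution>
      </executions>
    </plugin>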
From guillaume.smet at gmail.com  Tue Jan 16 10:09:42 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Tue, 16 Jan 2018 16:09:42 +0100
Subject: [hibernate-dev] NoORM IRC meeting minutes
Message-ID:

Hi everyone,

Here are the minutes of the biweekly NoORM IRC meeting:

16:06 < jbott> Meeting ended Tue Jan 16 15:05:30 2018 UTC. Information
about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
16:06 < jbott> Minutes:
http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2018/hibernate-dev.2018-01-16-14.00.html
16:06 < jbott> Minutes (text):
http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2018/hibernate-dev.2018-01-16-14.00.txt
16:06 < jbott> Log:
http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2018/hibernate-dev.2018-01-16-14.00.log.html

Have a nice day.

--
Guillaume
From steve at hibernate.org  Tue Jan 16 14:41:49 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Tue, 16 Jan 2018 19:41:49 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

Did you happen to do the same for Gradle caches?

Some jobs are failing:

* What went wrong:
Could not resolve all dependencies for configuration ':buildSrc:runtimeClasspath'.
> Timeout waiting to lock artifact cache (/efs-maven-artifacts/.gradle/caches/modules-2).
  It is currently in use by another Gradle instance.
  Owner PID: 1423
  Our PID: 10249
  Owner Operation: resolve configuration ':classpath'
  Our operation:
  Lock file: /efs-maven-artifacts/.gradle/caches/modules-2/modules-2.lock
From chris at hibernate.org  Tue Jan 16 15:15:31 2018
From: chris at hibernate.org (Chris Cranford)
Date: Tue, 16 Jan 2018 15:15:31 -0500
Subject: [hibernate-dev] replace Pax Exam with Docker
In-Reply-To:
References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org>
 <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org>
Message-ID: <2b7a372c-c59f-4447-1f1f-45ddf11a562a@hibernate.org>

On 01/12/2018 12:54 PM, Sanne Grinovero wrote:
> Sure. Docker is now available on the CI slaves too, so that's not a problem.
>
> The only annoyance is that the whole ORM team - and anyone
> contributing - would need to have Docker as well, but that doesn't
> seem too bad to me... and was likely bound to happen for other tools
> :)
>
> Steve, Chris and Andrea? Ok with that? Maybe you have Docker running already?

I am fine with it; I have Docker installed already.
From gunnar at hibernate.org  Tue Jan 16 15:44:26 2018
From: gunnar at hibernate.org (Gunnar Morling)
Date: Tue, 16 Jan 2018 21:44:26 +0100
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

2018-01-15 11:54 GMT+01:00 Yoann Rodiere :

> I'm not talking about when we use standard maven markup to declare
> dependencies, but when the plugin itself has to fetch dependencies
> "dynamically", which happens when we setup a WildFly server with our
> own modules in particular. See maven-dependency-plugin's
> "artifactItems" configuration.

Yes, I wanted to bring this up, too. I believe it's a similar issue for
the OSGi-based tests.

The brute-force solution is to actually work with "install", but have a
job-local Maven repo to prevent any side-effects across jobs. That
won't be great for build times, though, unless perhaps a local Nexus
would be available as a cache on each slave.
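The job-local repository itself would be a one-flag change -- a sketch,
using Jenkins' standard $WORKSPACE variable rather than any actual job
definition of ours:

    # give each job its own local repository inside its workspace, so
    # parallel "install"s cannot see each other's SNAPSHOTs
    mvn -Dmaven.repo.local="$WORKSPACE/.m2/repository" clean install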
From sanne at hibernate.org  Tue Jan 16 16:30:04 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 16 Jan 2018 21:30:04 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

Yes I did it for Gradle too, sorry. The `/efs-maven-artifacts` is the
guilty mount point.

I don't know any quick solutions for the various concerns you all
raised, so I'll roll this back tonight.

It's good to know that it's not too hard to have a shared FS between
these machines; needs better planning though.

Thanks,
Sanne

On 16 January 2018 at 19:41, Steve Ebersole wrote:
> Did you happen to do the same for Gradle caches?
>
> Some jobs are failing:
From steve at hibernate.org  Tue Jan 16 16:33:55 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Tue, 16 Jan 2018 21:33:55 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To:
References:
Message-ID:

Well, Gradle is used in CI environments all over the place, so it must
work. But I think we need some different configurations in the Gradle
command used. For example, it is highly suggested that the Gradle
daemon be disabled in CI, but I'm not sure all of our jobs actually do
that. I'll look into that...

On Tue, Jan 16, 2018 at 3:30 PM Sanne Grinovero wrote:
> Yes I did it for Gradle too, sorry. The `/efs-maven-artifacts` is the
> guilty mount point.
>
> I don't know any quick solutions for the various concerns you all
> raised, so I'll roll this back tonight.
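Disabling the daemon per job is a one-line change either way -- a
sketch, not our current job configuration:

    # on the command line:
    ./gradlew clean test --no-daemon

    # or persistently, in the checkout's gradle.properties:
    org.gradle.daemon=false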
From steve at hibernate.org Tue Jan 16 16:33:55 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Tue, 16 Jan 2018 21:33:55 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To: References: Message-ID:

Well, Gradle is used in CI environments all over the place, so it must work. But I think we need some different configurations in the Gradle command used. For example, it is highly suggested that the Gradle daemon be disabled in CI but I'm not sure all of our jobs actually do that. I'll look into that...

On Tue, Jan 16, 2018 at 3:30 PM Sanne Grinovero wrote:
> Yes I did it for Gradle too, sorry. The `/efs-maven-artifacts` is the guilty mount point.
>
> I don't know any quick solutions for the various concerns you all raised, so I'll roll this back tonight.
>
> It's good to know that it's not too hard to have a shared FS between these machines; needs better planning though.
>
> Thanks,
> Sanne
>
> On 16 January 2018 at 19:41, Steve Ebersole wrote:
> > Did you happen to do the same for Gradle caches?
> >
> > Some jobs are failing:
> >
> > * What went wrong:
> > Could not resolve all dependencies for configuration ':buildSrc:runtimeClasspath'.
> > > Timeout waiting to lock artifact cache (/efs-maven-artifacts/.gradle/caches/modules-2). It is currently in use by another Gradle instance.
> > Owner PID: 1423
> > Our PID: 10249
> > Owner Operation: resolve configuration ':classpath'
> > Our operation:
> > Lock file: /efs-maven-artifacts/.gradle/caches/modules-2/modules-2.lock
> >
> > On Mon, Jan 15, 2018 at 5:06 AM Yoann Rodiere wrote:
> > > > We should reconfigure those to not "install" - that's actually a bad habit, a legacy from Maven 2 times - people nowadays recommend using "mvn clean verify", especially on CI environments.
> > >
> > > I could not agree more, that would be cleaner, but that's not possible. And believe me, I tried hard. Last time I checked, some of the plugins we use with dynamic dependency resolution would ignore the artifacts being built, and would always fetch the artifacts from the Maven repos (for SNAPSHOTs, they would end up using nightlies).
> > > I'm not talking about when we use standard maven markup to declare dependencies, but when the plugin itself has to fetch dependencies "dynamically", which happens when we setup a WildFly server with our own modules in particular. See maven-dependency-plugin's "artifactItems" configuration.
> > > [...]
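To make Yoann's constraint concrete: the kind of plugin configuration that resolves artifacts dynamically - and therefore needs them already installed in the local repository - looks roughly like this. A sketch only; the coordinates, phase and output directory are illustrative assumptions, not copied from the actual build:

    <!-- illustrative maven-dependency-plugin setup; the artifact fetched
         here must already sit in the local repository, which is why the
         preceding "install" step cannot simply be dropped -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>fetch-wildfly-modules</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>unpack</goal>
          </goals>
          <configuration>
            <artifactItems>
              <artifactItem>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-search-modules</artifactId>
                <version>5.9.0-SNAPSHOT</version>
                <type>zip</type>
                <outputDirectory>${project.build.directory}/wildfly/modules</outputDirectory>
              </artifactItem>
            </artifactItems>
          </configuration>
        </execution>
      </executions>
    </plugin>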
From sanne at hibernate.org Tue Jan 16 16:51:41 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 16 Jan 2018 21:51:41 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To: References: Message-ID:

On 16 January 2018 at 21:33, Steve Ebersole wrote:
> Well, Gradle is used in CI environments all over the place, so it must work. But I think we need some different configurations in the Gradle command used. For example, it is highly suggested that the Gradle daemon be disabled in CI but I'm not sure all of our jobs actually do that. I'll look into that...

I wouldn't mind having the Gradle daemon always on, if it helps we could even pre-load it with some tuned configuration. The only drawback I see is keeping it easy to upgrade the Gradle version, in case one needs to, without having to go through server configuration scripts.

We need strict isolation about writes in the cache though; for now I'll disable it, not least for the concerns that Yoann and Gunnar pointed out, then we can experiment with cool ideas more carefully.

Funny, one would expect to know by now about the perils of a distributed cache :)

> On Tue, Jan 16, 2018 at 3:30 PM Sanne Grinovero wrote:
> > Yes I did it for Gradle too, sorry. The `/efs-maven-artifacts` is the guilty mount point.
> > [...]
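For completeness, either choice is a one-line setting; a sketch, assuming nothing about the current job definitions:

    # gradle.properties in the CI workspace: keep the daemon off, as Steve suggests
    org.gradle.daemon=false

    # or equivalently per invocation, from the job's command line
    ./gradlew clean build --no-daemon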
From sanne at hibernate.org Tue Jan 16 17:22:58 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 16 Jan 2018 22:22:58 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To: References: Message-ID:

Version F27v17 of the slaves is running now, with NFS drive removed.

sorry for the experiment :)

Thanks
Sanne

On 16 January 2018 at 21:51, Sanne Grinovero wrote:
> [...]

From yoann at hibernate.org Wed Jan 17 03:34:05 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Wed, 17 Jan 2018 08:34:05 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To: References: Message-ID:

Thanks for trying :) We might try to improve the Maven plugins in question, but as you said we already spent quite some time on infrastructure...

That being said, if we are ever to follow-up on this caching idea, Gunnar's idea of a local Nexus got me thinking... Was your change only about performance, or is it also about the bill? From what I understood we pay for outbound network traffic, so those recurring downloads might be a problem...

To reduce the bill, what if we simply added an AWS node running a Nexus repository or an HTTP cache acting as proxy to the various Maven repositories we use? If we pay for outbound network traffic but not for network traffic between our own nodes, that would be a solution. Not sure if we would gain much in performance, since network speed would probably be similar, but that might reduce the bill (depending on the cost of this additional node, of course).

On Tue, 16 Jan 2018 at 23:24 Sanne Grinovero wrote:
> Version F27v17 of the slaves is running now, with NFS drive removed.
>
> sorry for the experiment :)
> [...]

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team
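If that route is ever taken, the client side is just a mirror declaration on each slave; a minimal sketch, where the host name is a placeholder for whatever internal proxy would be set up:

    <!-- ~/.m2/settings.xml on the CI slaves -->
    <settings>
      <mirrors>
        <mirror>
          <id>internal-proxy</id>
          <name>Caching proxy in front of Maven Central</name>
          <!-- placeholder address: point this at the internal Nexus/HTTP cache -->
          <url>http://proxy.ci.internal:8081/repository/maven-central/</url>
          <mirrorOf>central</mirrorOf>
        </mirror>
      </mirrors>
    </settings>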
From sanne at hibernate.org Wed Jan 17 05:38:09 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 17 Jan 2018 10:38:09 +0000
Subject: [hibernate-dev] ci.hibernate.org : announcing distributed cache for maven artifacts
In-Reply-To: References: Message-ID:

On 17 January 2018 at 08:34, Yoann Rodiere wrote:
> Thanks for trying :) We might try to improve the Maven plugins in question, but as you said we already spent quite some time on infrastructure...
>
> That being said, if we are ever to follow-up on this caching idea, Gunnar's idea of a local Nexus got me thinking... Was your change only about performance, or is it also about the bill? From what I understood we pay for outbound network traffic, so those recurring downloads might be a problem...
Right, my goal was to both improve performance and cost, but I was mainly driven by concerns about the Docker images. Turned out this could not be applied for Docker as the documentation was more explicit about such dangers, but (I thought) I had a nice Maven cache as a side effect.

> To reduce the bill, what if we simply added an AWS node running a Nexus repository or an HTTP cache acting as proxy to the various Maven repositories we use?

That was my first idea as well, but after a quick look - I might be wrong - it looked like a significant time investment. Unfortunately, while there are various "proxy as a service" options, they are not meant for inbound traffic to your own cluster. I have managed Nexus instances in the past and would not want to do it again - also I doubt that it would be very effective either on the bill or on the download speeds.. essentially while it might help a tiny bit it's not worth the effort. Consider also that there's a mirror of Maven Central in our same data center.

> If we pay for outbound network traffic but not for network traffic between our own nodes, that would be a solution. Not sure if we would gain much in performance, since network speed would probably be similar, but that might reduce the bill (depending on the cost of this additional node, of course).

Bandwidth on AWS is metered both inbound and outbound to external IPs. Internal services are also metered, but in a different cost category (way cheaper). Finally, when our own nodes are provisioned by enabling co-location or cluster features then the communication among those in the same group is free.

It's not possible to "launch" any of the other AWS provided services directly within your own cluster, so unless we can reuse data from our direct peers one might as well hit the local mirror of Central.

All in all, I was hoping to learn something at low risk to then apply this to Docker images, that's higher priority so we can suspend the work on Maven stuff. There's no free "Nexus service for Docker" - maybe we should deploy OpenShift as I guess it contains such functionality.

Thanks,
Sanne

> On Tue, 16 Jan 2018 at 23:24 Sanne Grinovero wrote:
> > Version F27v17 of the slaves is running now, with NFS drive removed.
> > [...]

From steve at hibernate.org Wed Jan 17 22:53:14 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 03:53:14 +0000
Subject: [hibernate-dev] Jira - release notes in text format
Message-ID:

Anyone know what happened to the ability to export release notes in text format from Jira? When I click on a version's release notes it takes me to the HTML format which it always did, but there is no longer an option to choose the text format.

From steve at hibernate.org Wed Jan 17 23:39:37 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 04:39:37 +0000
Subject: [hibernate-dev] 5.3.0.Beta1 delay
Message-ID:

God bless JBoss Nexus...

JBoss Nexus is doing some over-zealous validations of a relocation POM. As far as I have found, Maven/Sonatype only expect a very minimal POM for relocation artifacts, yet JBoss Nexus' validations are checking that all normal POM values are defined. I'll have to figure this one out. In the meantime, anyone know the proper place to ask about this? JBoss Jira?

From yoann at hibernate.org Thu Jan 18 02:43:04 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Thu, 18 Jan 2018 07:43:04 +0000
Subject: [hibernate-dev] Jira - release notes in text format
In-Reply-To: References: Message-ID:

Top right of the page, click "Configure release notes", select "text" in "Please select style", and there will be a textarea at the bottom of the page with the text-formatted release notes. OR add "&styleName=Text&Create=Create" to the URL of the release notes. See for instance https://hibernate.atlassian.net/secure/ReleaseNote.jspa?projectId=10061&version=31616&styleName=Text&Create=Create .

On Thu, 18 Jan 2018 at 04:54 Steve Ebersole wrote:
> Anyone know what happened to the ability to export release notes in text format from Jira?
> [...]

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team

From sanne at hibernate.org Thu Jan 18 06:58:04 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Thu, 18 Jan 2018 11:58:04 +0000
Subject: [hibernate-dev] 5.3.0.Beta1 delay
In-Reply-To: References: Message-ID:

As far as I know they just require some useful information: license, developers, description and a URL. Isn't it easier to just add these? In case of the relocations we have in Hibernate Search, I just declare a description and then declare the parent pom so that most other things are inherited.

On 18 January 2018 at 04:39, Steve Ebersole wrote:
> God bless JBoss Nexus...
> [...]
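Putting Steve's constraint and Sanne's advice together, a relocation POM carrying the extra metadata would look roughly like this. A sketch only - the coordinates are picked for illustration, and in practice the license/developer blocks can be inherited from a parent pom as Sanne describes:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-entitymanager</artifactId>
      <version>5.3.0.Beta1</version>
      <!-- the extra metadata the repository validations ask for -->
      <description>Relocation artifact; the module moved into hibernate-core</description>
      <url>http://hibernate.org/orm</url>
      <licenses>
        <license>
          <name>GNU Lesser General Public License v2.1</name>
        </license>
      </licenses>
      <developers>
        <developer>
          <name>The Hibernate team</name>
        </developer>
      </developers>
      <distributionManagement>
        <relocation>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>5.3.0.Beta1</version>
          <message>This module was merged into hibernate-core</message>
        </relocation>
      </distributionManagement>
    </project>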
From steve at hibernate.org Thu Jan 18 13:22:11 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 18:22:11 +0000
Subject: [hibernate-dev] staging.hibernate.org CI failure
Message-ID:

I am trying to finish up the announce and website changes for 5.3.0.Beta1.

staging.in.relation.to changes built fine.

However I am getting incomprehensible (to me) failures on the CI server trying to build the staging website. Help?

From guillaume.smet at gmail.com Thu Jan 18 13:38:16 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Thu, 18 Jan 2018 19:38:16 +0100
Subject: [hibernate-dev] staging.hibernate.org CI failure
In-Reply-To: References: Message-ID:

Hi Steve,

I have to go very soon so I can't test myself right now, but that might be because you're missing this file in the 5.3 directory:
https://github.com/hibernate/hibernate.org/blob/d22b1461ee5ae7aac56cff90e4869b57599bb321/_data/projects/orm/releases/5.2/series.yml

You need one to describe the series.

--
Guillaume

On Thu, Jan 18, 2018 at 7:22 PM, Steve Ebersole wrote:
> I am trying to finish up the announce and website changes for 5.3.0.Beta1.
> [...]
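The fix Guillaume points at amounts to seeding a 5.3 series file from the 5.2 one; a sketch, assuming the repository layout shown in the link above:

    # create the 5.3 series description from the 5.2 one, then adjust its contents
    mkdir -p _data/projects/orm/releases/5.3
    cp _data/projects/orm/releases/5.2/series.yml \
       _data/projects/orm/releases/5.3/series.yml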
From steve at hibernate.org Thu Jan 18 13:40:45 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 18:40:45 +0000
Subject: [hibernate-dev] staging.hibernate.org CI failure
In-Reply-To: References: Message-ID:

Yes, I had seen that.

We definitely need a bunch of notes in the release process about all these new steps.

On Thu, Jan 18, 2018 at 12:38 PM Guillaume Smet wrote:
> Hi Steve,
> [...]

From guillaume.smet at gmail.com Thu Jan 18 13:42:53 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Thu, 18 Jan 2018 19:42:53 +0100
Subject: [hibernate-dev] staging.hibernate.org CI failure
In-Reply-To: References: Message-ID:

See the thread "ORM release process" I started in late September.

HTH.
[...]

From steve at hibernate.org Thu Jan 18 13:59:29 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 18:59:29 +0000
Subject: [hibernate-dev] staging.hibernate.org CI failure
In-Reply-To: References: Message-ID:

That did help, thanks! One additional step is to add `hibernate.org/orm/releases/5.3/index.adoc` too.

I am capturing all of this on our release steps wiki.

On Thu, Jan 18, 2018 at 12:43 PM Guillaume Smet wrote:
> See the thread "ORM release process" I started in late September.
> [...]

From steve at hibernate.org Thu Jan 18 15:21:45 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 18 Jan 2018 20:21:45 +0000
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To: References: Message-ID:

Bintray said they would increase the storage limit to 30G for Hibernate. However that limit is per organization, which is the top-level thing (https://bintray.com/hibernate). I think we'd eat that up in no time, especially if other projects plan on moving to Bintray at any time.

One way around that would be to have each project be its own Bintray organization.

On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling wrote:
> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero:
> > # Signing
> > Also I'm understanding that to release on OSSRH we need to sign all artifacts; not a bad idea but it's quite a bit more paperwork and key management. Such paperwork is handled for us by the JBoss Nexus team. We'd need to install GPG on our release servers, get an organization RSA key signed, and people stubbornly releasing manually will have to create a key each, and have it approved by Sonatype.
> > [...]
>
> Debezium already is released to OSSRH from our CI server. May be worth chatting to Jiri (added him to CC) about the details of setup. Note there's no need for key approval by Sonatype (at least last time I did it), you only need to publish them to some key server which you can do all by yourself.
> [...]
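For scale, the signing mechanics Sanne and Gunnar discuss above are small; a sketch, with the key id as a placeholder and assuming the maven-gpg-plugin is wired into the release profile:

    # one-off: create the release key and publish it to a public key server
    gpg --gen-key
    gpg --keyserver hkp://pool.sks-keyservers.net --send-keys <KEYID>

    # per release: artifacts get signed while deploying
    mvn clean deploy -Dgpg.keyname=<KEYID>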
Note there's > no need for key approval by Sonatype (at least last time I did it), you > only need to publish them to some key server which you can do all by > yourself. > > > > > > Not against migrating if this is what you all want - just making sure > > we're keeping these into account. > > > > Thanks, > > Sanne > > > > > > On 12 January 2018 at 02:47, Brett Meyer wrote: > > > Sorry for the late and probably irrelevant response... > > > > > > We're using an in-house Artifactory instance at a gig and it's been > > > trash. I can't speak to the UI or management end, nor Bintray, but > > > Artifactory's platform doesn't seem as polished (can't believe I just > > > said that) or stable (can't believe I said that either) as Nexus (what > > > is happening). > > > > > > I use OSSRH for some minor projects and have generally had decent luck > > > -- including a few interactions with the support team that went well. > > > OSSRH != JBoss Nexus, although I definitely understand the wounds... > > > > > > > > > On 12/19/17 8:34 AM, Steve Ebersole wrote: > > >> HHH-12172 is about moving away from the JBoss Nexus repo for publishing > our > > >> artifacts. There is an open question about which service to use > instead - > > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). > > >> > > >> Personally I think Artifactory is far superior as a UI/platform. We all > > >> know Nexus from the JBoss deployment of it, and we have all generally > had > > >> nothing good to say about it. > > >> > > >> But I am wondering if anyone has practical experience with either, or > knows > > >> persons/projects that do and could share their experiences. E.g., even > > >> though I prefer Bintray in almost every regard, I am very nervous that > it > > >> seems next to impossible to get help/support with it. The same may be > true > > >> with OSSRH - I don't know, hence why I am asking ;) > > >> _______________________________________________ > > >> hibernate-dev mailing list > > >> hibernate-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > > > > _______________________________________________ > > > hibernate-dev mailing list > > > hibernate-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From smarlow at redhat.com Thu Jan 18 16:39:49 2018 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 18 Jan 2018 16:39:49 -0500 Subject: [hibernate-dev] Does Hibernate ORM bytecode enhance application entity classes by default? Message-ID: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> Hi, Did we change Hibernate to automatically enhance/rewrite application entity classes by default? I remember hearing a few times that we would but don't remember if that happened. This is related to WildFly jira WFLY-8858 [1], which I'm not yet sure of how to completely fix (it involves dealing with multiple deployment ordering problems with application-level datasources, CDI BeanManagers, parallel application deployment and application entity class enhancement).
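For context on the hook discussed below: the JPA spec models these "classloader level transformers" as javax.persistence.spi.ClassTransformer instances, registered through PersistenceUnitInfo.addTransformer(...). A minimal sketch of the shape, purely illustrative and not WildFly's actual integration code:

    import java.security.ProtectionDomain;
    import javax.persistence.spi.ClassTransformer;

    // Sketch only: a transformer the container could register early.
    // A real implementation would delegate to Hibernate's enhancer once
    // the persistence unit has bootstrapped.
    public class EnhancingClassTransformer implements ClassTransformer {
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            // Returning null means "leave the class unchanged".
            return null;
        }
    }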
One WildFly issue is that the application datasources aren't available until late in WildFly deployment, but the JPA container needs to register the JPA classloader level transformers very early, so Hibernate can rewrite application classes. This is further complicated by our WildFly CDI implementation needing to read application class definitions. I wonder if it could make sense for org.hibernate.jpa.boot.spi.EntityManagerFactoryBuilder to have a separate way to register the ClassTransformer early and trigger the PU bootstrap on the first call to the registered ClassTransformers. If that doesn't happen, then we defer bootstrap until EntityManagerFactoryBuilder.build() is called. Scott [1] https://issues.jboss.org/browse/WFLY-8858 From brett at hibernate.org Thu Jan 18 17:14:05 2018 From: brett at hibernate.org (Brett Meyer) Date: Thu, 18 Jan 2018 17:14:05 -0500 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: <2b7a372c-c59f-4447-1f1f-45ddf11a562a@hibernate.org> References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> <2b7a372c-c59f-4447-1f1f-45ddf11a562a@hibernate.org> Message-ID: <5fb13b9b-63e5-8528-9f5b-1e8cad33a2b4@hibernate.org> Thanks Chris! Steve, not sure if you officially voted -- any concerns? Andrea? Anyone else? Again, this would all be disabled by default, behind a Maven profile... On 1/16/18 3:15 PM, Chris Cranford wrote: > On 01/12/2018 12:54 PM, Sanne Grinovero wrote: >> On 12 January 2018 at 17:32, Brett Meyer wrote: >>> If I don't have time to contribute to Pax Exam, I certainly don't have >>> time to start a new project haha...
>>> >>> And realistically, that "something new" would likely involve containers >>> anyway. >>> >>> At this point, mostly a question of 1) status quo, 2) Docker (or any >>> other container-based solution), or 3) try screwing around with Pax Exam >>> in "server-only" mode (but I don't have high hopes there). >> Sure. Docker is now available on the CI slaves too, so that's not a problem. >> >> The only annoyance is that the whole ORM team - and anyone >> contributing - would need to have Docker as well, but that doesn't >> seem too bad to me... and was likely bound to happen for other tools >> :) >> >> Steve, Chris and Andrea? Ok with that? Maybe you have Docker running already? > I am fine with it; I have Docker installed already. > _______________________________________________ hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Thu Jan 18 17:21:25 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 18 Jan 2018 22:21:25 +0000 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: <5fb13b9b-63e5-8528-9f5b-1e8cad33a2b4@hibernate.org> References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> <2b7a372c-c59f-4447-1f1f-45ddf11a562a@hibernate.org> <5fb13b9b-63e5-8528-9f5b-1e8cad33a2b4@hibernate.org> Message-ID: I'm ok with it. But we use Gradle, so it'll be hard to use a Maven profile ;) On Thu, Jan 18, 2018, 4:14 PM Brett Meyer wrote: > Thanks Chris! Steve, not sure if you officially voted -- any concerns? > Andrea? Anyone else? > > Again, this would all be disabled by default, behind a Maven profile... > > > On 1/16/18 3:15 PM, Chris Cranford wrote: > > On 01/12/2018 12:54 PM, Sanne Grinovero wrote: > >> On 12 January 2018 at 17:32, Brett Meyer wrote: > >>> If I don't have time to contribute to Pax Exam, I certainly don't have > >>> time to start a new project haha... > >>> > >>> And realistically, that "something new" would likely involve containers > >>> anyway. > >>> > >>> At this point, mostly a question of 1) status quo, 2) Docker (or any > >>> other container-based solution), or 3) try screwing around with Pax Exam > >>> in "server-only" mode (but I don't have high hopes there). > >> Sure. Docker is now available on the CI slaves too, so that's not a problem. > >> > >> The only annoyance is that the whole ORM team - and anyone > >> contributing - would need to have Docker as well, but that doesn't > >> seem too bad to me... and was likely bound to happen for other tools > >> :) > >> > >> Steve, Chris and Andrea? Ok with that? Maybe you have Docker running already? > > I am fine with it; I have Docker installed already. > > _______________________________________________ hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev From brett at hibernate.org Thu Jan 18 17:23:04 2018 From: brett at hibernate.org (Brett Meyer) Date: Thu, 18 Jan 2018 17:23:04 -0500 Subject: [hibernate-dev] replace Pax Exam with Docker In-Reply-To: References: <60d1e11a-88a5-0cbc-d0fa-dc8e70540eb4@hibernate.org> <2ee15737-0681-d998-52ca-7424a698fed8@hibernate.org> <2b7a372c-c59f-4447-1f1f-45ddf11a562a@hibernate.org> <5fb13b9b-63e5-8528-9f5b-1e8cad33a2b4@hibernate.org> Message-ID: One of them there build tool profile or switch thingies. Sorry, been in Maven land for too long these days... On 1/18/18 5:21 PM, Steve Ebersole wrote: > > I'm ok with it. But we use Gradle, so it'll be hard to use a Maven > profile ;) > > > On Thu, Jan 18, 2018, 4:14 PM Brett Meyer > wrote: > > Thanks Chris! Steve, not sure if you officially voted -- any > concerns? > Andrea? Anyone else? > > Again, this would all be disabled by default, behind a Maven > profile... > > > On 1/16/18 3:15 PM, Chris Cranford wrote: > > On 01/12/2018 12:54 PM, Sanne Grinovero wrote: > >> On 12 January 2018 at 17:32, Brett Meyer > wrote: > >>> If I don't have time to contribute to Pax Exam, I certainly > don't have > >>> time to start a new project haha... > >>> > >>> And realistically, that "something new" would likely involve > containers > >>> anyway. > >>> > >>> At this point, mostly a question of 1) status quo, 2) Docker > (or any > >>> other container-based solution), or 3) try screwing around > with Pax Exam > >>> in "server-only" mode (but I don't have high hopes there). > >> Sure. Docker is now available on the CI slaves too, so that's > not a problem. > >> > >> The only annoyance is that the whole ORM team - and anyone > >> contributing - would need to have Docker as well, but that doesn't > >> seem too bad to me... and was likely bound to happen for other > tools > >> :) > >> > >> Steve, Chris and Andrea? Ok with that? Maybe you have Docker > running already? > > I am fine with it; I have Docker installed already. > > > From lbarreiro at redhat.com Thu Jan 18 19:09:52 2018 From: lbarreiro at redhat.com (Luis Barreiro) Date: Fri, 19 Jan 2018 00:09:52 +0000 Subject: [hibernate-dev] Does Hibernate ORM bytecode enhance application entity classes by default?
In-Reply-To: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> References: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> Message-ID: <9a11580d-4e93-0656-d465-93a95eec14c3@redhat.com> Hi Scott, I can only comment on the initial question. No, Hibernate does not automatically enhance entities. Entities are either enhanced at build time or after WildFly has pushed the right ClassTransformer (based on "hibernate.enhance.*" properties in persistence.xml). I'm not aware of any plans to change the current behavior. Regards, Luis Barreiro Middleware Performance Team On 01/18/2018 09:39 PM, Scott Marlow wrote: > Hi, > > Did we change Hibernate to automatically enhance/rewrite application > entity classes by default? I remember hearing a few times that we would > but don't remember if that happened. > > This is related to WildFly jira WFLY-8858 [1], which I'm not yet sure of > how to completely fix (it involves dealing with multiple deployment ordering > problems with application-level datasources, CDI BeanManagers, parallel > application deployment and application entity class enhancement). > > One WildFly issue is that the application datasources aren't available > until late in WildFly deployment, but the JPA container needs to register > the JPA classloader level transformers very early, so Hibernate can > rewrite application classes. This is further complicated by our WildFly > CDI implementation needing to read application class definitions. > > I wonder if it could make sense for > org.hibernate.jpa.boot.spi.EntityManagerFactoryBuilder to have a > separate way to register the ClassTransformer early and > trigger the PU bootstrap on the first call to the registered > ClassTransformers. If that doesn't happen, then we defer bootstrap > until EntityManagerFactoryBuilder.build() is called. > > Scott > > [1] https://issues.jboss.org/browse/WFLY-8858 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From guillaume.smet at gmail.com Fri Jan 19 07:24:07 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 19 Jan 2018 13:24:07 +0100 Subject: [hibernate-dev] staging.hibernate.org CI failure In-Reply-To: References: Message-ID: Hi Steve, As for ORM, you also need to add a per-version Documentation page, as they are specific to each version (this predates the changes we made with Yoann, which is why I didn't mention it in my email): https://github.com/hibernate/hibernate.org/tree/staging/orm/documentation In passing, I parameterized the 5.2 page with a version variable to help, but the text and links in the copied page should be carefully reviewed (typically, you would need to update the JPA 2.1 mention to 2.2). HTH -- Guillaume On Thu, Jan 18, 2018 at 7:59 PM, Steve Ebersole wrote: > That did help, thanks! > > One additional step is to add `hibernate.org/orm/releases/5.3/index.adoc` > too > > I am capturing all of this on our release steps wiki > > On Thu, Jan 18, 2018 at 12:43 PM Guillaume Smet > wrote: > >> See the thread "ORM release process" I started in late September. >> >> HTH. >> >> On Thu, Jan 18, 2018 at 7:40 PM, Steve Ebersole >> wrote: >> >>> Yes, I had seen that.
>>> >>> We definitely need a bunch of notes in the release process about all >>> these new steps >>> >>> On Thu, Jan 18, 2018 at 12:38 PM Guillaume Smet >>> wrote: >>> >>>> Hi Steve, >>>> >>>> I have to go very soon so I can't test myself right now but that might >>>> be because you're missing this file in the 5.3 directory: >>>> https://github.com/hibernate/hibernate.org/blob/ >>>> d22b1461ee5ae7aac56cff90e4869b57599bb321/_data/projects/orm/ >>>> releases/5.2/series.yml >>>> >>>> You need one to describe the series. >>>> >>>> -- >>>> Guillaume >>>> >>>> On Thu, Jan 18, 2018 at 7:22 PM, Steve Ebersole >>>> wrote: >>>> >>>>> I am trying to finish up the announce and website changes for >>>>> 5.3.0.Beta1. >>>>> >>>>> staging.in.relation.to changes built fine. >>>>> >>>>> However I am getting incomprehensible (to me) failures on the CI server >>>>> trying to build the staging website. Help? >>>>> >>>> _______________________________________________ >>>>> hibernate-dev mailing list >>>>> hibernate-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>>> >>>> >>>> >> From steve at hibernate.org Fri Jan 19 08:05:09 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 13:05:09 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: I sat down and did some calculations to get a better idea of whether this is feasible. 5.3.0.Beta1 had a total size of 135M (31M in "maven artifacts", 104M in release bundles). At a 30G limit, we'd be able to do ~222 releases before we hit that limit (30 / .135 = 222.2222). So if only ORM is going to move to Bintray, I think the 30G limit is not a hindrance. Do we see other projects moving away from publishing to JBoss Nexus, and if so what publishing repo do y'all plan to use? On Thu, Jan 18, 2018 at 2:21 PM Steve Ebersole wrote: > Bintray said they would increase the storage limit to 30G for Hibernate. > However that limit is per organization, which is the top-level thing ( > https://bintray.com/hibernate). I think we'd eat that up in no time, > especially if other projects plan on moving to Bintray at any time. > > One way around that would be to have each project be its own Bintray > organization. > > > On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling > wrote: > >> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero : >> >> > Personally I'm neutral. I surely wouldn't want to manage our own >> > Artifactory, but since JFrog will do that I'm not concerned about the >> > platform management being horrible. >> > >> > Artifactory looks better, OSSRH has the benefit of possibly having >> > better integration with Maven. >> > >> > There are some benefits on staying to JBoss's nexus though; not >> > expressing a strong opinion but let's clarify these. >> > >> > # Stats >> > We need download statistics, which I understand they all offer, but an >> > absolute number is not as useful as being able to compare the numbers >> > in one dashboard across various others of our projects. >> > Also not looking forward to have to login to multiple systems to gather >> it >> > all. >> > >> > # Quality control of artifacts >> > I'm understanding that JBoss Nexus does several strict validations on >> > our poms; sure they have been in the way as it's not nice to see such >> > failures *during* a release but there's an upside to them as well.
>> > AFAIK OSSRH also has similar rules, but the JBoss team one has >> > different ones, plus a deal with Sonatype to deem our stuff good >> > "pre-approved" so we don't have to satisfy the Sonatype rules too. >> > >> > # Signing >> > Also I'm understanding that to release on OSSRH we need to sign all >> > artifacts; not a bad idea but it's quite more papework and key >> > management. Such paperwork is handled for us by the JBoss Nexus team. >> > We'd need to install GPG on our release servers, get a organization >> > RSA key signed, and people stubbornly releasing manually will have to >> > create a key each, and have it approved by Sonatype. >> > >> >> Debezium already is released to OSSRH from our CI server. May be worth >> chatting to Jiri (added him to CC) about the details of setup. Note >> there's >> no need for key approval by Sonatype (at least last time I did it), you >> only need to publish them to some key server which you can do all by >> yourself. >> >> >> > >> > Not against migrating if this is what you all want - just making sure >> > we're keeping these into account. >> > >> > Thanks, >> > Sanne >> > >> > >> > On 12 January 2018 at 02:47, Brett Meyer wrote: >> > > Sorry for the late and probably irrelevant response... >> > > >> > > We're using an in-house Artifactory instance at a gig and it's been >> > > trash. I can't speak to the UI or management end, nor Bintray, but >> > > Artifactory's platform doesn't seem as polished (can't believe I just >> > > said that) or stable (can't believe I said that either) as Nexus (what >> > > is happening). >> > > >> > > I use OSSRH for some minor projects and have generally had decent luck >> > > -- including a few interactions with the support team that went well. >> > > OSSRH != JBoss Nexus, although I definitely understand the wounds... >> > > >> > > >> > > On 12/19/17 8:34 AM, Steve Ebersole wrote: >> > >> HHH-12172 is about moving away from the JBoss Nexus repo for >> publishing >> > our >> > >> artifacts. There is an open question about which service to use >> > instead - >> > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). >> > >> >> > >> Personally I think Artifactory is far superior of a UI/platform. We >> all >> > >> know Nexus from the JBoss deployment of it, and we have all generally >> > had >> > >> nothing good to say about it. >> > >> >> > >> But I am wondering if anyone has practical experience with either, or >> > knows >> > >> persons/projects tyay do and could share their experiences. E.g., >> even >> > >> though I prefer Bintray in almost every regard, I am very nervous >> that >> > it >> > >> seems next to impossible to get help/support with it. 
The same may >> be >> true >> >> with OSSRH - I don't know, hence why I am asking ;) >> >> _______________________________________________ >> >> hibernate-dev mailing list >> >> hibernate-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > >> > >> > _______________________________________________ >> > hibernate-dev mailing list >> > hibernate-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at hibernate.org Fri Jan 19 08:18:49 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 19 Jan 2018 13:18:49 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: On 19 January 2018 at 13:05, Steve Ebersole wrote: > I sat down and did some calculations to get a better idea of whether this is > feasible. 5.3.0.Beta1 had a total size of 135M (31M in "maven artifacts", > 104M in release bundles). At a 30G limit, we'd be able to do ~222 releases > before we hit that limit (30 / .135 = 222.2222). > > So if only ORM is going to move to Bintray, I think the 30G limit is not a > hindrance. Do we see other projects moving away from publishing to JBoss > Nexus, and if so what publishing repo do y'all plan to use? Yes, as I said before I'm neutral on which one we use, but I was somewhat expecting us to all eventually use the same solution. Seems important to be consistent for the sake of end users' experience, but also for us to share tooling, scripts, practices, lessons learned.. That said we didn't start looking at that in other Hibernate projects so there would certainly be a lag. The work we're doing on feature-packs might significantly reduce the size of each release, but I think it will only have an impact on the "maven artifacts", which according to your estimates are not the main issue. Maybe we could stick to sourceforge for the release bundles? We all seem to agree that "release bundles" are meant for the more "old school" devs; I'd say they won't be swayed away from Sourceforge anyway, and we should probably keep some continuity there. That would also happen to solve the storage limit problem? Thanks, Sanne > > > On Thu, Jan 18, 2018 at 2:21 PM Steve Ebersole wrote: >> >> Bintray said they would increase the storage limit to 30G for Hibernate. >> However that limit is per organization, which is the top-level thing >> (https://bintray.com/hibernate). I think we'd eat that up in no time, >> especially if other projects plan on moving to Bintray at any time. >> >> One way around that would be to have each project be its own Bintray >> organization. >> >> >> On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling >> wrote: >>> >>> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero : >>> >>> > Personally I'm neutral. I surely wouldn't want to manage our own >>> > Artifactory, but since JFrog will do that I'm not concerned about the >>> > platform management being horrible. >>> > >>> > Artifactory looks better, OSSRH has the benefit of possibly having >>> > better integration with Maven.
>>> > >>> > There are some benefits on staying to JBoss's nexus though; not >>> > expressing a strong opinion but let's clarify these. >>> > >>> > # Stats >>> > We need download statistics, which I understand they all offer, but an >>> > absolute number is not as useful as being able to compare the numbers >>> > in one dashboard across various others of our projects. >>> > Also not looking forward to have to login to multiple systems to gather >>> > it >>> > all. >>> > >>> > # Quality control of artifacts >>> > I'm understanding that JBoss Nexus does several strict validations on >>> > our poms; sure they have been in the way as it's not nice to see such >>> > failures *during* a release but there's an upside to them as well. >>> > AFAIK OSSRH also has similar rules, but the JBoss team one has >>> > different ones, plus a deal with Sonatype to deem our stuff good >>> > "pre-approved" so we don't have to satisfy the Sonatype rules too. >>> > >>> > # Signing >>> > Also I'm understanding that to release on OSSRH we need to sign all >>> > artifacts; not a bad idea but it's quite more papework and key >>> > management. Such paperwork is handled for us by the JBoss Nexus team. >>> > We'd need to install GPG on our release servers, get a organization >>> > RSA key signed, and people stubbornly releasing manually will have to >>> > create a key each, and have it approved by Sonatype. >>> > >>> >>> Debezium already is released to OSSRH from our CI server. May be worth >>> chatting to Jiri (added him to CC) about the details of setup. Note >>> there's >>> no need for key approval by Sonatype (at least last time I did it), you >>> only need to publish them to some key server which you can do all by >>> yourself. >>> >>> >>> > >>> > Not against migrating if this is what you all want - just making sure >>> > we're keeping these into account. >>> > >>> > Thanks, >>> > Sanne >>> > >>> > >>> > On 12 January 2018 at 02:47, Brett Meyer wrote: >>> > > Sorry for the late and probably irrelevant response... >>> > > >>> > > We're using an in-house Artifactory instance at a gig and it's been >>> > > trash. I can't speak to the UI or management end, nor Bintray, but >>> > > Artifactory's platform doesn't seem as polished (can't believe I just >>> > > said that) or stable (can't believe I said that either) as Nexus >>> > > (what >>> > > is happening). >>> > > >>> > > I use OSSRH for some minor projects and have generally had decent >>> > > luck >>> > > -- including a few interactions with the support team that went well. >>> > > OSSRH != JBoss Nexus, although I definitely understand the wounds... >>> > > >>> > > >>> > > On 12/19/17 8:34 AM, Steve Ebersole wrote: >>> > >> HHH-12172 is about moving away from the JBoss Nexus repo for >>> > >> publishing >>> > our >>> > >> artifacts. There is an open question about which service to use >>> > instead - >>> > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). >>> > >> >>> > >> Personally I think Artifactory is far superior of a UI/platform. We >>> > >> all >>> > >> know Nexus from the JBoss deployment of it, and we have all >>> > >> generally >>> > had >>> > >> nothing good to say about it. >>> > >> >>> > >> But I am wondering if anyone has practical experience with either, >>> > >> or >>> > knows >>> > >> persons/projects tyay do and could share their experiences. E.g., >>> > >> even >>> > >> though I prefer Bintray in almost every regard, I am very nervous >>> > >> that >>> > it >>> > >> seems next to impossible to get help/support with it. 
The same may >>> > >> be >>> > true >>> > >> with OSSRH - I don't know, hence why I am asking ;) >>> > >> _______________________________________________ >>> > >> hibernate-dev mailing list >>> > >> hibernate-dev at lists.jboss.org >>> > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> > > >>> > > >>> > > _______________________________________________ >>> > > hibernate-dev mailing list >>> > > hibernate-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> > _______________________________________________ >>> > hibernate-dev mailing list >>> > hibernate-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev From smarlow at redhat.com Fri Jan 19 08:40:54 2018 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 19 Jan 2018 08:40:54 -0500 Subject: [hibernate-dev] Does Hibernate ORM bytecode enhance application entity classes by default? In-Reply-To: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> References: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> Message-ID: > One WildFly issue is that the application datasources aren't available > until late in WildFly deployment, but the JPA container needs to register > the JPA classloader level transformers very early, so Hibernate can rewrite > application classes. This is further complicated by our WildFly CDI > implementation needing to read application class definitions. > > I wonder if it could make sense for org.hibernate.jpa.boot.spi.EntityManagerFactoryBuilder > to have a separate way to register the ClassTransformer early > and trigger the PU bootstrap on the first call to the registered > ClassTransformers. If that doesn't happen, then we defer bootstrap until > EntityManagerFactoryBuilder.build() is called. > Related question, does ORM need the DatabaseMetaData to perform bytecode enhancing? Knowing the answer to that will help answer my question above. Scott _______________________________________________ hibernate-dev mailing list hibernate-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Fri Jan 19 09:56:51 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 14:56:51 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: I think it is reasonable to only publish the maven artifacts to Bintray and continue to publish the bundles to SourceForge. On Fri, Jan 19, 2018 at 7:19 AM Sanne Grinovero wrote: > On 19 January 2018 at 13:05, Steve Ebersole wrote: > > I sat down and did some calculations to get a better idea of whether > this is > > feasible. 5.3.0.Beta1 had a total size of 135M (31M in "maven > artifacts", > > 104M in release bundles). At a 30G limit, we'd be able to do ~222 > releases > > before we hit that limit (30 / .135 = 222.2222). > > > > So if only ORM is going to move to Bintray, I think the 30G limit is not > a > > hindrance. Do we see other projects moving away from publishing to JBoss > > Nexus, and if so what publishing repo do y'all plan to use? > > Yes, as I said before I'm neutral on which one we use, but I was > somewhat expecting us to all eventually use the same solution. > Seems important to be consistent for the sake of end users' experience, > but also for us to share tooling, scripts, practices, lessons > learned.. > > That said we didn't start looking at that in other Hibernate projects > so there would certainly be a lag.
> > The work we're doing on feature-packs might significantly reduce the > size of each release, but I think it will only have an impact on the > "maven artifacts", which according to your estimates are not the main > issue. > > Maybe we could stick to sourceforge for the release bundles? We all > seems to agree that "release bundles" are meant for the more "old > school" devs; I'd say they won't be swayed away from Sourceforge > anyway, and we should probably keep some continuity there. > That would also happen to solve the storage limit problem? > > Thanks, > Sanne > > > > > > > > > > On Thu, Jan 18, 2018 at 2:21 PM Steve Ebersole > wrote: > >> > >> Bintray said they would increase the storage limit to 30G for Hibernate. > >> However that limit is per organization, which is the top-level thing > >> (https://bintray.com/hibernate). I think we'd eat that up in no time, > >> especially if other projects plan on moving to Bintray at any time. > >> > >> One way around that would be to have each project be its own Bintray > >> organization. > >> > >> > >> On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling > >> wrote: > >>> > >>> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero : > >>> > >>> > Personally I'm neutral. I surely wouldn't want to manage our own > >>> > Artifactory, but since JFrog will do that I'm not concerned about the > >>> > platform management being horrible. > >>> > > >>> > Artifactory looks better, OSSRH has the benefit of possibly having > >>> > better integration with Maven. > >>> > > >>> > There are some benefits on staying to JBoss's nexus though; not > >>> > expressing a strong opinion but let's clarify these. > >>> > > >>> > # Stats > >>> > We need download statistics, which I understand they all offer, but > an > >>> > absolute number is not as useful as being able to compare the numbers > >>> > in one dashboard across various others of our projects. > >>> > Also not looking forward to have to login to multiple systems to > gather > >>> > it > >>> > all. > >>> > > >>> > # Quality control of artifacts > >>> > I'm understanding that JBoss Nexus does several strict validations on > >>> > our poms; sure they have been in the way as it's not nice to see such > >>> > failures *during* a release but there's an upside to them as well. > >>> > AFAIK OSSRH also has similar rules, but the JBoss team one has > >>> > different ones, plus a deal with Sonatype to deem our stuff good > >>> > "pre-approved" so we don't have to satisfy the Sonatype rules too. > >>> > > >>> > # Signing > >>> > Also I'm understanding that to release on OSSRH we need to sign all > >>> > artifacts; not a bad idea but it's quite more papework and key > >>> > management. Such paperwork is handled for us by the JBoss Nexus team. > >>> > We'd need to install GPG on our release servers, get a organization > >>> > RSA key signed, and people stubbornly releasing manually will have to > >>> > create a key each, and have it approved by Sonatype. > >>> > > >>> > >>> Debezium already is released to OSSRH from our CI server. May be worth > >>> chatting to Jiri (added him to CC) about the details of setup. Note > >>> there's > >>> no need for key approval by Sonatype (at least last time I did it), you > >>> only need to publish them to some key server which you can do all by > >>> yourself. > >>> > >>> > >>> > > >>> > Not against migrating if this is what you all want - just making sure > >>> > we're keeping these into account. 
> >>> > > >>> > Thanks, > >>> > Sanne > >>> > > >>> > > >>> > On 12 January 2018 at 02:47, Brett Meyer > wrote: > >>> > > Sorry for the late and probably irrelevant response... > >>> > > > >>> > > We're using an in-house Artifactory instance at a gig and it's been > >>> > > trash. I can't speak to the UI or management end, nor Bintray, but > >>> > > Artifactory's platform doesn't seem as polished (can't believe I > just > >>> > > said that) or stable (can't believe I said that either) as Nexus > >>> > > (what > >>> > > is happening). > >>> > > > >>> > > I use OSSRH for some minor projects and have generally had decent > >>> > > luck > >>> > > -- including a few interactions with the support team that went > well. > >>> > > OSSRH != JBoss Nexus, although I definitely understand the > wounds... > >>> > > > >>> > > > >>> > > On 12/19/17 8:34 AM, Steve Ebersole wrote: > >>> > >> HHH-12172 is about moving away from the JBoss Nexus repo for > >>> > >> publishing > >>> > our > >>> > >> artifacts. There is an open question about which service to use > >>> > instead - > >>> > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). > >>> > >> > >>> > >> Personally I think Artifactory is far superior of a UI/platform. > We > >>> > >> all > >>> > >> know Nexus from the JBoss deployment of it, and we have all > >>> > >> generally > >>> > had > >>> > >> nothing good to say about it. > >>> > >> > >>> > >> But I am wondering if anyone has practical experience with either, > >>> > >> or > >>> > knows > >>> > >> persons/projects tyay do and could share their experiences. E.g., > >>> > >> even > >>> > >> though I prefer Bintray in almost every regard, I am very nervous > >>> > >> that > >>> > it > >>> > >> seems next to impossible to get help/support with it. The same > may > >>> > >> be > >>> > true > >>> > >> with OSSRH - I don't know, hence why I am asking ;) > >>> > >> _______________________________________________ > >>> > >> hibernate-dev mailing list > >>> > >> hibernate-dev at lists.jboss.org > >>> > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > > > >>> > > > >>> > > _______________________________________________ > >>> > > hibernate-dev mailing list > >>> > > hibernate-dev at lists.jboss.org > >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > _______________________________________________ > >>> > hibernate-dev mailing list > >>> > hibernate-dev at lists.jboss.org > >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Fri Jan 19 10:14:31 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 15:14:31 +0000 Subject: [hibernate-dev] Does Hibernate ORM bytecode enhance application entity classes by default? In-Reply-To: References: <726a1332-3f4e-e146-43ed-136636b7a674@redhat.com> Message-ID: Yes. The enhancing is done based on the underlying BytcodeProvider's Enhancer. However knowing what classes to enhance is driven by Hibernate's boot-time model. Building this boot-time model requires access to DatabaseMetaData. 
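For completeness, build-time enhancement (the alternative Luis mentions upthread) avoids that runtime ordering problem entirely, since the Enhancer runs during compilation rather than against a live deployment. A rough sketch using the ORM 5.x Gradle plugin; the plugin version is illustrative and the flag names should be double-checked against the ORM documentation:

    // build.gradle -- sketch of build-time bytecode enhancement
    buildscript {
        repositories { mavenCentral() }
        // version illustrative; use the ORM version your project targets
        dependencies { classpath 'org.hibernate:hibernate-gradle-plugin:5.2.12.Final' }
    }
    apply plugin: 'org.hibernate.orm'

    hibernate {
        enhance {
            enableLazyInitialization = true
            enableDirtyTracking = true
            enableAssociationManagement = false
        }
    }

With this in place the entity classes in the jar are already enhanced, so nothing needs to consult DatabaseMetaData at deployment time for the rewriting itself.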
On Fri, Jan 19, 2018 at 7:41 AM Scott Marlow wrote: > > One WildFly issue is that the application datasources aren't available > > until late in WildFly deployment but the JPA container needs to register > > the JPA classloader level transformers very early, so Hibernate can > rewrite > > application classes. This is further complicated by our WildFly CDI > > implementation needing to read application class definitions. > > > > I wonder if it could make sense for > org.hibernate.jpa.boot.spi.EntityManagerFactoryBuilder > > to have a separate way to register the ClassTransformer transformer early > > and trigger the PU bootstrap on the first call to registered > > ClassTransformer's. If that doesn't happen, then we defer bootstrap > until > > EntityManagerFactoryBuilder.build() is called. > > > > Related question, does ORM need the DatabaseMetaData to performs bytecode > enhancing? Knowing the answer to that, will help answer my question above. > > Scott > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Fri Jan 19 10:33:37 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 15:33:37 +0000 Subject: [hibernate-dev] ORM 5.3.0.Beta1 released Message-ID: http://in.relation.to/2018/01/18/hibernate-orm-530-beta1-release/ From sanne at hibernate.org Fri Jan 19 10:35:16 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 19 Jan 2018 15:35:16 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: On 19 January 2018 at 14:56, Steve Ebersole wrote: > I think it is reasonable to only publish the maven artifacts to Bintray and > continue to publish the bundles to SourceForge. Great, so that means we can have about a thousand releases on Bintray; should be enough for all our projects for at least 10 years. Thanks, Sanne > > > On Fri, Jan 19, 2018 at 7:19 AM Sanne Grinovero wrote: >> >> On 19 January 2018 at 13:05, Steve Ebersole wrote: >> > I sat down and did some calculations to get a better idea of whether >> > this is >> > feasible. 5.3.0.Beta1 had a total size of 135M (31M in "maven >> > artifacts", >> > 104 in release bundles). At 30G limit, we'd be able to do ~222 releases >> > before we hit that limit (30 / .135 = 222.2222) >> > >> > So if only ORM is going to move to Bintray, I think the 30G limit is not >> > a >> > hindrance. Do we see other projects moving away from publishing to >> > JBoss >> > Nexus, and if so what publishing repo do y'all plan to use? >> >> Yes, as I said before I'm neutral on which one we use, but I was >> somewhat expecting us to all eventually use the same solution. >> Seems important to be consistent for sake of end user's experience, >> but also for us to share tooling, scripts, practices, lessons >> learned.. >> >> That said we didn't start looking at that in other Hibernate projects >> so there would certainly be a lag. >> >> The work we're doing on feature-packs might significantly reduce the >> size of each release, but I think it will only have an impact on the >> "maven artifacts", which according to your estimates are not the main >> issue. >> >> Maybe we could stick to sourceforge for the release bundles? We all >> seems to agree that "release bundles" are meant for the more "old >> school" devs; I'd say they won't be swayed away from Sourceforge >> anyway, and we should probably keep some continuity there. 
>> That would also happen to solve the storage limit problem? >> >> Thanks, >> Sanne >> >> >> >> >> > >> > >> > On Thu, Jan 18, 2018 at 2:21 PM Steve Ebersole >> > wrote: >> >> >> >> Bintray said they would increase the storage limit to 30G for >> >> Hibernate. >> >> However that limit is per organization, which is the top-level thing >> >> (https://bintray.com/hibernate). I think we'd eat that up in no time, >> >> especially if other projects plan on moving to Bintray at any time. >> >> >> >> One way around that would be to have each project be its own Bintray >> >> organization. >> >> >> >> >> >> On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling >> >> wrote: >> >>> >> >>> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero : >> >>> >> >>> > Personally I'm neutral. I surely wouldn't want to manage our own >> >>> > Artifactory, but since JFrog will do that I'm not concerned about >> >>> > the >> >>> > platform management being horrible. >> >>> > >> >>> > Artifactory looks better, OSSRH has the benefit of possibly having >> >>> > better integration with Maven. >> >>> > >> >>> > There are some benefits on staying to JBoss's nexus though; not >> >>> > expressing a strong opinion but let's clarify these. >> >>> > >> >>> > # Stats >> >>> > We need download statistics, which I understand they all offer, but >> >>> > an >> >>> > absolute number is not as useful as being able to compare the >> >>> > numbers >> >>> > in one dashboard across various others of our projects. >> >>> > Also not looking forward to have to login to multiple systems to >> >>> > gather >> >>> > it >> >>> > all. >> >>> > >> >>> > # Quality control of artifacts >> >>> > I'm understanding that JBoss Nexus does several strict validations >> >>> > on >> >>> > our poms; sure they have been in the way as it's not nice to see >> >>> > such >> >>> > failures *during* a release but there's an upside to them as well. >> >>> > AFAIK OSSRH also has similar rules, but the JBoss team one has >> >>> > different ones, plus a deal with Sonatype to deem our stuff good >> >>> > "pre-approved" so we don't have to satisfy the Sonatype rules too. >> >>> > >> >>> > # Signing >> >>> > Also I'm understanding that to release on OSSRH we need to sign all >> >>> > artifacts; not a bad idea but it's quite more papework and key >> >>> > management. Such paperwork is handled for us by the JBoss Nexus >> >>> > team. >> >>> > We'd need to install GPG on our release servers, get a organization >> >>> > RSA key signed, and people stubbornly releasing manually will have >> >>> > to >> >>> > create a key each, and have it approved by Sonatype. >> >>> > >> >>> >> >>> Debezium already is released to OSSRH from our CI server. May be worth >> >>> chatting to Jiri (added him to CC) about the details of setup. Note >> >>> there's >> >>> no need for key approval by Sonatype (at least last time I did it), >> >>> you >> >>> only need to publish them to some key server which you can do all by >> >>> yourself. >> >>> >> >>> >> >>> > >> >>> > Not against migrating if this is what you all want - just making >> >>> > sure >> >>> > we're keeping these into account. >> >>> > >> >>> > Thanks, >> >>> > Sanne >> >>> > >> >>> > >> >>> > On 12 January 2018 at 02:47, Brett Meyer >> >>> > wrote: >> >>> > > Sorry for the late and probably irrelevant response... >> >>> > > >> >>> > > We're using an in-house Artifactory instance at a gig and it's >> >>> > > been >> >>> > > trash. 
I can't speak to the UI or management end, nor Bintray, >> >>> > > but >> >>> > > Artifactory's platform doesn't seem as polished (can't believe I >> >>> > > just >> >>> > > said that) or stable (can't believe I said that either) as Nexus >> >>> > > (what >> >>> > > is happening). >> >>> > > >> >>> > > I use OSSRH for some minor projects and have generally had decent >> >>> > > luck >> >>> > > -- including a few interactions with the support team that went >> >>> > > well. >> >>> > > OSSRH != JBoss Nexus, although I definitely understand the >> >>> > > wounds... >> >>> > > >> >>> > > >> >>> > > On 12/19/17 8:34 AM, Steve Ebersole wrote: >> >>> > >> HHH-12172 is about moving away from the JBoss Nexus repo for >> >>> > >> publishing >> >>> > our >> >>> > >> artifacts. There is an open question about which service to use >> >>> > instead - >> >>> > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). >> >>> > >> >> >>> > >> Personally I think Artifactory is far superior of a UI/platform. >> >>> > >> We >> >>> > >> all >> >>> > >> know Nexus from the JBoss deployment of it, and we have all >> >>> > >> generally >> >>> > had >> >>> > >> nothing good to say about it. >> >>> > >> >> >>> > >> But I am wondering if anyone has practical experience with >> >>> > >> either, >> >>> > >> or >> >>> > knows >> >>> > >> persons/projects tyay do and could share their experiences. >> >>> > >> E.g., >> >>> > >> even >> >>> > >> though I prefer Bintray in almost every regard, I am very nervous >> >>> > >> that >> >>> > it >> >>> > >> seems next to impossible to get help/support with it. The same >> >>> > >> may >> >>> > >> be >> >>> > true >> >>> > >> with OSSRH - I don't know, hence why I am asking ;) >> >>> > >> _______________________________________________ >> >>> > >> hibernate-dev mailing list >> >>> > >> hibernate-dev at lists.jboss.org >> >>> > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>> > > >> >>> > > >> >>> > > _______________________________________________ >> >>> > > hibernate-dev mailing list >> >>> > > hibernate-dev at lists.jboss.org >> >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>> > _______________________________________________ >> >>> > hibernate-dev mailing list >> >>> > hibernate-dev at lists.jboss.org >> >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>> > >> >>> _______________________________________________ >> >>> hibernate-dev mailing list >> >>> hibernate-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at hibernate.org Fri Jan 19 10:35:50 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 19 Jan 2018 15:35:50 +0000 Subject: [hibernate-dev] [hibernate-announce] ORM 5.3.0.Beta1 released In-Reply-To: References: Message-ID: Awesome, congratulations all! On 19 January 2018 at 15:33, Steve Ebersole wrote: > http://in.relation.to/2018/01/18/hibernate-orm-530-beta1-release/ > _______________________________________________ > hibernate-announce mailing list > hibernate-announce at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-announce From steve at hibernate.org Fri Jan 19 11:15:24 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 16:15:24 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: Agreed. So unless someone has an argument against, I will plan on switching over to Bintray publishing for the next 5.3 release. 
This means we will need to update the older versions we still want to release to publish to Bintray. Gail, that's just which versions? On Fri, Jan 19, 2018 at 9:35 AM Sanne Grinovero wrote: > On 19 January 2018 at 14:56, Steve Ebersole wrote: > > I think it is reasonable to only publish the maven artifacts to Bintray > and > > continue to publish the bundles to SourceForge. > > Great, so that means we can have about a thousand releases on Bintray; > should be enough for all our projects for at least 10 years. > > Thanks, > Sanne > > > > > > > On Fri, Jan 19, 2018 at 7:19 AM Sanne Grinovero > wrote: > >> > >> On 19 January 2018 at 13:05, Steve Ebersole > wrote: > >> > I sat down and did some calculations to get a better idea of whether > >> > this is > >> > feasible. 5.3.0.Beta1 had a total size of 135M (31M in "maven > >> > artifacts", > >> > 104 in release bundles). At 30G limit, we'd be able to do ~222 > releases > >> > before we hit that limit (30 / .135 = 222.2222) > >> > > >> > So if only ORM is going to move to Bintray, I think the 30G limit is > not > >> > a > >> > hindrance. Do we see other projects moving away from publishing to > >> > JBoss > >> > Nexus, and if so what publishing repo do y'all plan to use? > >> > >> Yes, as I said before I'm neutral on which one we use, but I was > >> somewhat expecting us to all eventually use the same solution. > >> Seems important to be consistent for sake of end user's experience, > >> but also for us to share tooling, scripts, practices, lessons > >> learned.. > >> > >> That said we didn't start looking at that in other Hibernate projects > >> so there would certainly be a lag. > >> > >> The work we're doing on feature-packs might significantly reduce the > >> size of each release, but I think it will only have an impact on the > >> "maven artifacts", which according to your estimates are not the main > >> issue. > >> > >> Maybe we could stick to sourceforge for the release bundles? We all > >> seems to agree that "release bundles" are meant for the more "old > >> school" devs; I'd say they won't be swayed away from Sourceforge > >> anyway, and we should probably keep some continuity there. > >> That would also happen to solve the storage limit problem? > >> > >> Thanks, > >> Sanne > >> > >> > >> > >> > >> > > >> > > >> > On Thu, Jan 18, 2018 at 2:21 PM Steve Ebersole > >> > wrote: > >> >> > >> >> Bintray said they would increase the storage limit to 30G for > >> >> Hibernate. > >> >> However that limit is per organization, which is the top-level thing > >> >> (https://bintray.com/hibernate). I think we'd eat that up in no > time, > >> >> especially if other projects plan on moving to Bintray at any time. > >> >> > >> >> One way around that would be to have each project be its own Bintray > >> >> organization. > >> >> > >> >> > >> >> On Fri, Jan 12, 2018 at 7:33 AM Gunnar Morling > > >> >> wrote: > >> >>> > >> >>> 2018-01-12 12:59 GMT+01:00 Sanne Grinovero : > >> >>> > >> >>> > Personally I'm neutral. I surely wouldn't want to manage our own > >> >>> > Artifactory, but since JFrog will do that I'm not concerned about > >> >>> > the > >> >>> > platform management being horrible. > >> >>> > > >> >>> > Artifactory looks better, OSSRH has the benefit of possibly having > >> >>> > better integration with Maven. > >> >>> > > >> >>> > There are some benefits on staying to JBoss's nexus though; not > >> >>> > expressing a strong opinion but let's clarify these. 
> >> >>> > > >> >>> > # Stats > >> >>> > We need download statistics, which I understand they all offer, > but > >> >>> > an > >> >>> > absolute number is not as useful as being able to compare the > >> >>> > numbers > >> >>> > in one dashboard across various others of our projects. > >> >>> > Also not looking forward to have to login to multiple systems to > >> >>> > gather > >> >>> > it > >> >>> > all. > >> >>> > > >> >>> > # Quality control of artifacts > >> >>> > I'm understanding that JBoss Nexus does several strict validations > >> >>> > on > >> >>> > our poms; sure they have been in the way as it's not nice to see > >> >>> > such > >> >>> > failures *during* a release but there's an upside to them as well. > >> >>> > AFAIK OSSRH also has similar rules, but the JBoss team one has > >> >>> > different ones, plus a deal with Sonatype to deem our stuff good > >> >>> > "pre-approved" so we don't have to satisfy the Sonatype rules too. > >> >>> > > >> >>> > # Signing > >> >>> > Also I'm understanding that to release on OSSRH we need to sign > all > >> >>> > artifacts; not a bad idea but it's quite more papework and key > >> >>> > management. Such paperwork is handled for us by the JBoss Nexus > >> >>> > team. > >> >>> > We'd need to install GPG on our release servers, get a > organization > >> >>> > RSA key signed, and people stubbornly releasing manually will have > >> >>> > to > >> >>> > create a key each, and have it approved by Sonatype. > >> >>> > > >> >>> > >> >>> Debezium already is released to OSSRH from our CI server. May be > worth > >> >>> chatting to Jiri (added him to CC) about the details of setup. Note > >> >>> there's > >> >>> no need for key approval by Sonatype (at least last time I did it), > >> >>> you > >> >>> only need to publish them to some key server which you can do all by > >> >>> yourself. > >> >>> > >> >>> > >> >>> > > >> >>> > Not against migrating if this is what you all want - just making > >> >>> > sure > >> >>> > we're keeping these into account. > >> >>> > > >> >>> > Thanks, > >> >>> > Sanne > >> >>> > > >> >>> > > >> >>> > On 12 January 2018 at 02:47, Brett Meyer > >> >>> > wrote: > >> >>> > > Sorry for the late and probably irrelevant response... > >> >>> > > > >> >>> > > We're using an in-house Artifactory instance at a gig and it's > >> >>> > > been > >> >>> > > trash. I can't speak to the UI or management end, nor Bintray, > >> >>> > > but > >> >>> > > Artifactory's platform doesn't seem as polished (can't believe I > >> >>> > > just > >> >>> > > said that) or stable (can't believe I said that either) as Nexus > >> >>> > > (what > >> >>> > > is happening). > >> >>> > > > >> >>> > > I use OSSRH for some minor projects and have generally had > decent > >> >>> > > luck > >> >>> > > -- including a few interactions with the support team that went > >> >>> > > well. > >> >>> > > OSSRH != JBoss Nexus, although I definitely understand the > >> >>> > > wounds... > >> >>> > > > >> >>> > > > >> >>> > > On 12/19/17 8:34 AM, Steve Ebersole wrote: > >> >>> > >> HHH-12172 is about moving away from the JBoss Nexus repo for > >> >>> > >> publishing > >> >>> > our > >> >>> > >> artifacts. There is an open question about which service to > use > >> >>> > instead - > >> >>> > >> Sonatype's OSSRH (Nexus) or JFrog's Bintray (Artifactory). > >> >>> > >> > >> >>> > >> Personally I think Artifactory is far superior of a > UI/platform. 
> >> >>> > >> We > >> >>> > >> all > >> >>> > >> know Nexus from the JBoss deployment of it, and we have all > >> >>> > >> generally > >> >>> > had > >> >>> > >> nothing good to say about it. > >> >>> > >> > >> >>> > >> But I am wondering if anyone has practical experience with > >> >>> > >> either, > >> >>> > >> or > >> >>> > knows > >> >>> > >> persons/projects tyay do and could share their experiences. > >> >>> > >> E.g., > >> >>> > >> even > >> >>> > >> though I prefer Bintray in almost every regard, I am very > nervous > >> >>> > >> that > >> >>> > it > >> >>> > >> seems next to impossible to get help/support with it. The same > >> >>> > >> may > >> >>> > >> be > >> >>> > true > >> >>> > >> with OSSRH - I don't know, hence why I am asking ;) > >> >>> > >> _______________________________________________ > >> >>> > >> hibernate-dev mailing list > >> >>> > >> hibernate-dev at lists.jboss.org > >> >>> > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> >>> > > > >> >>> > > > >> >>> > > _______________________________________________ > >> >>> > > hibernate-dev mailing list > >> >>> > > hibernate-dev at lists.jboss.org > >> >>> > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> >>> > _______________________________________________ > >> >>> > hibernate-dev mailing list > >> >>> > hibernate-dev at lists.jboss.org > >> >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> >>> > > >> >>> _______________________________________________ > >> >>> hibernate-dev mailing list > >> >>> hibernate-dev at lists.jboss.org > >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > From guillaume.smet at gmail.com Fri Jan 19 11:28:07 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 19 Jan 2018 17:28:07 +0100 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 5:15 PM, Steve Ebersole wrote: > Agreed. So unless someone has an argument against, I will plan on > switching over to Bintray publishing for the next 5.3 release. > Not an argument against but just to be sure, you will be able to synchronize artifacts from the org.hibernate groupId from Bintray to Central even with the others (Validator, Search) coming from the JBoss Nexus? -- Guillaume From steve at hibernate.org Fri Jan 19 11:33:35 2018 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jan 2018 16:33:35 +0000 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: yessir On Fri, Jan 19, 2018 at 10:28 AM Guillaume Smet wrote: > On Fri, Jan 19, 2018 at 5:15 PM, Steve Ebersole > wrote: > >> Agreed. So unless someone has an argument against, I will plan on >> switching over to Bintray publishing for the next 5.3 release. >> > > Not an argument against but just to be sure, you will be able to > synchronize artifacts from the org.hibernate groupId from Bintray to > Central even with the others (Validator, Search) coming from the JBoss > Nexus? > > -- > Guillaume > From guillaume.smet at gmail.com Fri Jan 19 11:37:45 2018 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Fri, 19 Jan 2018 17:37:45 +0100 Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 5:33 PM, Steve Ebersole wrote: > yessir > Nice, thanks for confirming. 
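As a coda to this thread, the Gradle side of such a switch is fairly small. A rough sketch assuming JFrog's gradle-bintray-plugin and an existing maven-publish publication; the plugin version, property names and package coordinates below are all illustrative, and this is not the actual ORM release setup:

    // build.gradle -- rough sketch, not the actual ORM release scripts
    plugins {
        id 'maven-publish'
        id 'com.jfrog.bintray' version '1.7.3'   // version illustrative
    }

    bintray {
        user = findProperty('bintrayUser')       // hypothetical property names
        key = findProperty('bintrayKey')
        publications = ['mavenJava']             // assumes a maven-publish publication
        pkg {
            repo = 'artifacts'                   // illustrative repo/package names
            name = 'hibernate-orm'
        }
    }

Syncing a published version from Bintray to Maven Central is then a separate step, handled through Bintray itself, which is what makes the per-project groupId synchronization discussed above possible.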
From rory.odonnell at oracle.com  Mon Jan 22 06:01:44 2018
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Mon, 22 Jan 2018 11:01:44 +0000
Subject: [hibernate-dev] JDK 10 Early Access b40 & JDK 8u172 Early Access b02 are available on jdk.java.net
Message-ID: <1b36b505-677c-cf4f-7ecd-a1dd753c3cac@oracle.com>

Hi Sanne,

Happy New Year!

*OpenJDK builds* - JDK 10 Early Access build 40 is available at http://jdk.java.net/10/

 * These early-access, open-source builds are provided under the GNU
   General Public License, version 2, with the Classpath Exception.
 * Summary of changes: https://download.java.net/java/jdk10/archive/40/jdk-10+40.html

*JDK 10 will enter Rampdown Phase Two on Thursday the 18th of January, 2018.*

 * For more details, see Mark Reinhold's email to the jdk-dev mailing list [1].
 * The Rampdown Phase Two process will be similar to that of JDK 9 [2].
 * The JDK 10 schedule, status & features are available [3].

*JDK 8u172 Early-Access build 03* is available at: http://jdk.java.net/8/

 * Summary of changes here: https://download.java.net/java/jdk8u172/changes/jdk8u172-b02.html

Regards,
Rory

[1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-January/000416.html
[2] http://openjdk.java.net/projects/jdk/10/rdp-2
[3] http://openjdk.java.net/projects/jdk/10/

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland

From sanne at hibernate.org  Mon Jan 22 10:39:11 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Mon, 22 Jan 2018 15:39:11 +0000
Subject: [hibernate-dev] [Hibernate Search] Release plans, schedule adjusted
Message-ID:

We decided to roll over some minor pending issues to version 5.10, which
will target Hibernate ORM 5.3 / JPA 2.2.

So, since we're all eager to start working with ORM 5.3, we're wrapping up
the work on Search 5.9. Version 5.9 now has only a couple of minor, mostly
optional polishing tasks open.

We plan to tag Hibernate Search 5.9.0.CR1 tomorrow evening, which means a
Final is coming soon after.

Thanks

From sanne at hibernate.org  Tue Jan 23 12:22:31 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 23 Jan 2018 17:22:31 +0000
Subject: [hibernate-dev] Released: org.hibernate.javax.persistence:hibernate-jpa-2.1-api version 1.0.2.Final
Message-ID:

I've re-released the hibernate-jpa-2.1-api with a single, minor change: it
now declares the official Jigsaw module name "java.persistence" in the
Automatic-Module-Name header of the MANIFEST.

This was mostly done at the request of the WildFly team; there's no strong
reason to upgrade any project, unless you want to experiment with Jigsaw
of course.

FYI, version 1.0.1.Final never existed; my bad, I misinterpreted the
repository status.

Reminder: for JPA 2.2 we'll be using the API bundle from the spec group,
as it's now finally being released to Maven Central. So going forward it
will be javax.persistence:javax.persistence-api.
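For the record, the change amounts to a single MANIFEST.MF line, and the
new spec artifact can be consumed as a plain dependency - a sketch (the
manifest header is as stated above; the 2.2 version number shown is an
assumption about what the spec group publishes):

    Automatic-Module-Name: java.persistence

    <!-- Maven coordinates of the JPA 2.2 API going forward -->
    <dependency>
      <groupId>javax.persistence</groupId>
      <artifactId>javax.persistence-api</artifactId>
      <version>2.2</version>
    </dependency>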
Thanks,
Sanne

From sanne at hibernate.org  Wed Jan 24 05:51:05 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 24 Jan 2018 10:51:05 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
Message-ID:

Hi all,
especially Chris, and anyone else having problems with the integration
tests using WildFly,

the problem seems to be caused by not having the JBoss Nexus repository
enabled in your *Maven* configuration. (Yes, even though we use Gradle..)

For the time being could you create a ~/.m2/settings.xml having the
content you can copy from:
 - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml

This is just a temporary solution so that you're not stuck today, while
I'm looking for a better fix.

For details, see:
 - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html

Thanks, and sorry for the inconvenience!

Sanne

From yoann at hibernate.org  Wed Jan 24 07:28:20 2018
From: yoann at hibernate.org (Yoann Rodiere)
Date: Wed, 24 Jan 2018 12:28:20 +0000
Subject: [hibernate-dev] Hibernate Search 5.9.0.CR1 released
Message-ID:

Hello,

We just released Hibernate Search 5.9.0.CR1, with WildFly feature packs
and various bugfixes. This is the last step before 5.9 is released. Be
sure to check it out so you can share your thoughts with us before the
release!

You can find more information about 5.9.0.CR1 on our blog:
http://in.relation.to/2018/01/24/hibernate-search-5-9-0-CR1/

--
Yoann Rodiere
yoann at hibernate.org / yrodiere at redhat.com
Software Engineer
Hibernate NoORM team

From steve at hibernate.org  Wed Jan 24 08:16:59 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 24 Jan 2018 13:16:59 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

I'm confused. You're saying it's not enough to include it in the Gradle
script?

On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:

> Hi all,
> especially Chris, and anyone else having problems with the integration
> tests using WildFly,
>
> the problem seems to be caused by not having the JBoss Nexus repository
> enabled in your *Maven* configuration. (Yes, even though we use Gradle..)
>
> For the time being could you create a ~/.m2/settings.xml having the
> content you can copy from:
>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>
> This is just a temporary solution so that you're not stuck today, while
> I'm looking for a better fix.
>
> For details, see:
>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>
> Thanks, and sorry for the inconvenience!
>
> Sanne
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From sanne at hibernate.org  Wed Jan 24 08:33:34 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 24 Jan 2018 13:33:34 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

That's right :(

Including it in the build script gets our provisioning plugin to know how
to resolve things, and all works fine in the phase of creating the server
copy.

But next the produced WildFly server starts in a new JVM, an entirely new
context, and expects to find the dependencies "as configured" for Maven,
for the current user.
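("As configured" here meaning whatever the user's own ~/.m2/settings.xml
activates; the linked settings-example.xml boils down to a profile along
these lines - a sketch, where the profile ids are illustrative and the URL
is the public JBoss Nexus group repository:)

    <settings>
      <profiles>
        <profile>
          <id>jboss-public</id>
          <repositories>
            <repository>
              <id>jboss-public-repository-group</id>
              <url>https://repository.jboss.org/nexus/content/groups/public/</url>
            </repository>
          </repositories>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>jboss-public</activeProfile>
      </activeProfiles>
    </settings>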
If the current user's configuration doesn't list the JBoss Nexus it will
ignore the locally cached artifacts, even if we made sure to download them
during provisioning.

I'm looking for settings we might use today, if I fail I'll revert it.

On 24 Jan 2018 13:17, "Steve Ebersole" wrote:

I'm confused. You're saying it's not enough to include it in the Gradle
script?

On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:

> Hi all,
> especially Chris, and anyone else having problems with the integration
> tests using WildFly,
>
> the problem seems to be caused by not having the JBoss Nexus repository
> enabled in your *Maven* configuration. (Yes, even though we use Gradle..)
>
> For the time being could you create a ~/.m2/settings.xml having the
> content you can copy from:
>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>
> This is just a temporary solution so that you're not stuck today, while
> I'm looking for a better fix.
>
> For details, see:
>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>
> Thanks, and sorry for the inconvenience!
>
> Sanne
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From sanne at hibernate.org  Wed Jan 24 10:01:55 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 24 Jan 2018 15:01:55 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

This is fixed now: HHH-12250

You should all be free to remove JBoss Nexus from your settings.xml again,
if you prefer.

Thanks

On 24 January 2018 at 13:33, Sanne Grinovero wrote:
> That's right :(
>
> Including it in the build script gets our provisioning plugin to know how
> to resolve things, and all works fine in the phase of creating the server
> copy.
>
> But next the produced WildFly server starts in a new JVM, an entirely new
> context, and expects to find the dependencies "as configured" for Maven,
> for the current user. If the current user's configuration doesn't list
> the JBoss Nexus it will ignore the locally cached artifacts, even if we
> made sure to download them during provisioning.
>
> I'm looking for settings we might use today, if I fail I'll revert it.
>
> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
>
> I'm confused. You're saying it's not enough to include it in the Gradle
> script?
>
>
> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
>>
>> Hi all,
>> especially Chris, and anyone else having problems with the integration
>> tests using WildFly,
>>
>> the problem seems to be caused by not having the JBoss Nexus repository
>> enabled in your *Maven* configuration. (Yes, even though we use Gradle..)
>>
>> For the time being could you create a ~/.m2/settings.xml having the
>> content you can copy from:
>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>>
>> This is just a temporary solution so that you're not stuck today, while
>> I'm looking for a better fix.
>>
>> For details, see:
>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>>
>> Thanks, and sorry for the inconvenience!
>>
>> Sanne
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From chris at hibernate.org  Wed Jan 24 10:03:06 2018
From: chris at hibernate.org (Chris Cranford)
Date: Wed, 24 Jan 2018 10:03:06 -0500
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

Thanks Sanne.

On 01/24/2018 10:01 AM, Sanne Grinovero wrote:
> This is fixed now: HHH-12250
>
> You should all be free to remove JBoss Nexus from your settings.xml
> again, if you prefer.
>
> Thanks
>
> On 24 January 2018 at 13:33, Sanne Grinovero wrote:
>> That's right :(
>>
>> Including it in the build script gets our provisioning plugin to know
>> how to resolve things, and all works fine in the phase of creating the
>> server copy.
>>
>> But next the produced WildFly server starts in a new JVM, an entirely
>> new context, and expects to find the dependencies "as configured" for
>> Maven, for the current user. If the current user's configuration
>> doesn't list the JBoss Nexus it will ignore the locally cached
>> artifacts, even if we made sure to download them during provisioning.
>>
>> I'm looking for settings we might use today, if I fail I'll revert it.
>>
>> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
>>
>> I'm confused. You're saying it's not enough to include it in the Gradle
>> script?
>>
>>
>> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
>>> Hi all,
>>> especially Chris, and anyone else having problems with the integration
>>> tests using WildFly,
>>>
>>> the problem seems to be caused by not having the JBoss Nexus
>>> repository enabled in your *Maven* configuration. (Yes, even though we
>>> use Gradle..)
>>>
>>> For the time being could you create a ~/.m2/settings.xml having the
>>> content you can copy from:
>>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>>>
>>> This is just a temporary solution so that you're not stuck today,
>>> while I'm looking for a better fix.
>>>
>>> For details, see:
>>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>>>
>>> Thanks, and sorry for the inconvenience!
>>>
>>> Sanne
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>

From steve at hibernate.org  Wed Jan 24 10:33:42 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 24 Jan 2018 15:33:42 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

To be clear... this fixes errors like:

org.hibernate.wildfly.integrationtest.HibernateEnversOnWildflyTest >
classMethod FAILED
    org.jboss.arquillian.container.spi.client.container.LifecycleException
        Caused by: java.util.concurrent.TimeoutException

?

On Wed, Jan 24, 2018 at 9:03 AM Chris Cranford wrote:

> Thanks Sanne.
>
> On 01/24/2018 10:01 AM, Sanne Grinovero wrote:
> > This is fixed now: HHH-12250
> >
> > You should all be free to remove JBoss Nexus from your settings.xml
> > again, if you prefer.
> >
> > Thanks
> >
> > On 24 January 2018 at 13:33, Sanne Grinovero wrote:
> >> That's right :(
> >>
> >> Including it in the build script gets our provisioning plugin to know
> >> how to resolve things, and all works fine in the phase of creating the
> >> server copy.
> >>
> >> But next the produced WildFly server starts in a new JVM, an entirely
> >> new context, and expects to find the dependencies "as configured" for
> >> Maven, for the current user. If the current user's configuration
> >> doesn't list the JBoss Nexus it will ignore the locally cached
> >> artifacts, even if we made sure to download them during provisioning.
> >>
> >> I'm looking for settings we might use today, if I fail I'll revert it.
> >>
> >> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
> >>
> >> I'm confused. You're saying it's not enough to include it in the
> >> Gradle script?
> >>
> >>
> >> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
> >>> Hi all,
> >>> especially Chris, and anyone else having problems with the
> >>> integration tests using WildFly,
> >>>
> >>> the problem seems to be caused by not having the JBoss Nexus
> >>> repository enabled in your *Maven* configuration. (Yes, even though
> >>> we use Gradle..)
> >>>
> >>> For the time being could you create a ~/.m2/settings.xml having the
> >>> content you can copy from:
> >>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
> >>>
> >>> This is just a temporary solution so that you're not stuck today,
> >>> while I'm looking for a better fix.
> >>>
> >>> For details, see:
> >>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
> >>>
> >>> Thanks, and sorry for the inconvenience!
> >>>
> >>> Sanne
> >>> _______________________________________________
> >>> hibernate-dev mailing list
> >>> hibernate-dev at lists.jboss.org
> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> >>
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From sanne at hibernate.org  Wed Jan 24 10:43:03 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 24 Jan 2018 15:43:03 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

On 24 January 2018 at 15:33, Steve Ebersole wrote:
> To be clear... this fixes errors like:
>
> org.hibernate.wildfly.integrationtest.HibernateEnversOnWildflyTest >
> classMethod FAILED
>     org.jboss.arquillian.container.spi.client.container.LifecycleException
>         Caused by: java.util.concurrent.TimeoutException
>
> ?

If your TimeoutException was caused by the same underlying issue as Vlad
and Chris had hit.. yes.

>
> On Wed, Jan 24, 2018 at 9:03 AM Chris Cranford wrote:
>>
>> Thanks Sanne.
>>
>> On 01/24/2018 10:01 AM, Sanne Grinovero wrote:
>> > This is fixed now: HHH-12250
>> >
>> > You should all be free to remove JBoss Nexus from your settings.xml
>> > again, if you prefer.
>> >
>> > Thanks
>> >
>> > On 24 January 2018 at 13:33, Sanne Grinovero wrote:
>> >> That's right :(
>> >>
>> >> Including it in the build script gets our provisioning plugin to know
>> >> how to resolve things, and all works fine in the phase of creating
>> >> the server copy.
>> >>
>> >> But next the produced WildFly server starts in a new JVM, an entirely
>> >> new context, and expects to find the dependencies "as configured" for
>> >> Maven, for the current user. If the current user's configuration
>> >> doesn't list the JBoss Nexus it will ignore the locally cached
>> >> artifacts, even if we made sure to download them during provisioning.
>> >>
>> >> I'm looking for settings we might use today, if I fail I'll revert it.
>> >>
>> >> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
>> >>
>> >> I'm confused. You're saying it's not enough to include it in the
>> >> Gradle script?
>> >>
>> >>
>> >> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
>> >>> Hi all,
>> >>> especially Chris, and anyone else having problems with the
>> >>> integration tests using WildFly,
>> >>>
>> >>> the problem seems to be caused by not having the JBoss Nexus
>> >>> repository enabled in your *Maven* configuration. (Yes, even though
>> >>> we use Gradle..)
>> >>>
>> >>> For the time being could you create a ~/.m2/settings.xml having the
>> >>> content you can copy from:
>> >>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>> >>>
>> >>> This is just a temporary solution so that you're not stuck today,
>> >>> while I'm looking for a better fix.
>> >>>
>> >>> For details, see:
>> >>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>> >>>
>> >>> Thanks, and sorry for the inconvenience!
>> >>>
>> >>> Sanne
>> >>> _______________________________________________
>> >>> hibernate-dev mailing list
>> >>> hibernate-dev at lists.jboss.org
>> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org  Wed Jan 24 10:46:30 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 24 Jan 2018 15:46:30 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

The stacktrace did not say. Anyway, I pulled and the errors in those tests
are gone. Thanks!

On Wed, Jan 24, 2018 at 9:43 AM Sanne Grinovero wrote:

> On 24 January 2018 at 15:33, Steve Ebersole wrote:
> > To be clear... this fixes errors like:
> >
> > org.hibernate.wildfly.integrationtest.HibernateEnversOnWildflyTest >
> > classMethod FAILED
> >     org.jboss.arquillian.container.spi.client.container.LifecycleException
> >         Caused by: java.util.concurrent.TimeoutException
> >
> > ?
>
> If your TimeoutException was caused by the same underlying issue as
> Vlad and Chris had hit.. yes.
>
> >
> > On Wed, Jan 24, 2018 at 9:03 AM Chris Cranford wrote:
> >>
> >> Thanks Sanne.
> >>
> >> On 01/24/2018 10:01 AM, Sanne Grinovero wrote:
> >> > This is fixed now: HHH-12250
> >> >
> >> > You should all be free to remove JBoss Nexus from your settings.xml
> >> > again, if you prefer.
> >> >
> >> > Thanks
> >> >
> >> > On 24 January 2018 at 13:33, Sanne Grinovero wrote:
> >> >> That's right :(
> >> >>
> >> >> Including it in the build script gets our provisioning plugin to
> >> >> know how to resolve things, and all works fine in the phase of
> >> >> creating the server copy.
> >> >>
> >> >> But next the produced WildFly server starts in a new JVM, an
> >> >> entirely new context, and expects to find the dependencies "as
> >> >> configured" for Maven, for the current user. If the current user's
> >> >> configuration doesn't list the JBoss Nexus it will ignore the
> >> >> locally cached artifacts, even if we made sure to download them
> >> >> during provisioning.
> >> >>
> >> >> I'm looking for settings we might use today, if I fail I'll revert
> >> >> it.
> >> >>
> >> >> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
> >> >>
> >> >> I'm confused. You're saying it's not enough to include it in the
> >> >> Gradle script?
> >> >>
> >> >>
> >> >> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
> >> >>> Hi all,
> >> >>> especially Chris, and anyone else having problems with the
> >> >>> integration tests using WildFly,
> >> >>>
> >> >>> the problem seems to be caused by not having the JBoss Nexus
> >> >>> repository enabled in your *Maven* configuration. (Yes, even
> >> >>> though we use Gradle..)
> >> >>>
> >> >>> For the time being could you create a ~/.m2/settings.xml having
> >> >>> the content you can copy from:
> >> >>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
> >> >>>
> >> >>> This is just a temporary solution so that you're not stuck today,
> >> >>> while I'm looking for a better fix.
> >> >>>
> >> >>> For details, see:
> >> >>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
> >> >>>
> >> >>> Thanks, and sorry for the inconvenience!
> >> >>>
> >> >>> Sanne
> >> >>> _______________________________________________
> >> >>> hibernate-dev mailing list
> >> >>> hibernate-dev at lists.jboss.org
> >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> >>
> >> _______________________________________________
> >> hibernate-dev mailing list
> >> hibernate-dev at lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org  Wed Jan 24 10:48:04 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 24 Jan 2018 15:48:04 +0000
Subject: [hibernate-dev] About the WildFly bootstrap errors in Hibernate ORM / master
In-Reply-To:
References:
Message-ID:

On 24 January 2018 at 15:46, Steve Ebersole wrote:
> The stacktrace did not say. Anyway, I pulled and the errors in those
> tests are gone. Thanks!

Great to hear it confirmed! Thank you

>
> On Wed, Jan 24, 2018 at 9:43 AM Sanne Grinovero wrote:
>>
>> On 24 January 2018 at 15:33, Steve Ebersole wrote:
>> > To be clear... this fixes errors like:
>> >
>> > org.hibernate.wildfly.integrationtest.HibernateEnversOnWildflyTest >
>> > classMethod FAILED
>> >     org.jboss.arquillian.container.spi.client.container.LifecycleException
>> >         Caused by: java.util.concurrent.TimeoutException
>> >
>> > ?
>>
>> If your TimeoutException was caused by the same underlying issue as
>> Vlad and Chris had hit.. yes.
>>
>> >
>> > On Wed, Jan 24, 2018 at 9:03 AM Chris Cranford wrote:
>> >>
>> >> Thanks Sanne.
>> >>
>> >> On 01/24/2018 10:01 AM, Sanne Grinovero wrote:
>> >> > This is fixed now: HHH-12250
>> >> >
>> >> > You should all be free to remove JBoss Nexus from your settings.xml
>> >> > again, if you prefer.
>> >> >
>> >> > Thanks
>> >> >
>> >> > On 24 January 2018 at 13:33, Sanne Grinovero wrote:
>> >> >> That's right :(
>> >> >>
>> >> >> Including it in the build script gets our provisioning plugin to
>> >> >> know how to resolve things, and all works fine in the phase of
>> >> >> creating the server copy.
>> >> >>
>> >> >> But next the produced WildFly server starts in a new JVM, an
>> >> >> entirely new context, and expects to find the dependencies "as
>> >> >> configured" for Maven, for the current user. If the current
>> >> >> user's configuration doesn't list the JBoss Nexus it will ignore
>> >> >> the locally cached artifacts, even if we made sure to download
>> >> >> them during provisioning.
>> >> >>
>> >> >> I'm looking for settings we might use today, if I fail I'll
>> >> >> revert it.
>> >> >>
>> >> >> On 24 Jan 2018 13:17, "Steve Ebersole" wrote:
>> >> >>
>> >> >> I'm confused. You're saying it's not enough to include it in the
>> >> >> Gradle script?
>> >> >>
>> >> >>
>> >> >> On Wed, Jan 24, 2018, 5:05 AM Sanne Grinovero wrote:
>> >> >>> Hi all,
>> >> >>> especially Chris, and anyone else having problems with the
>> >> >>> integration tests using WildFly,
>> >> >>>
>> >> >>> the problem seems to be caused by not having the JBoss Nexus
>> >> >>> repository enabled in your *Maven* configuration. (Yes, even
>> >> >>> though we use Gradle..)
>> >> >>>
>> >> >>> For the time being could you create a ~/.m2/settings.xml having
>> >> >>> the content you can copy from:
>> >> >>>  - https://raw.githubusercontent.com/hibernate/hibernate-search/8f7e87bf282877a7a5554035abb709cc9813fec2/settings-example.xml
>> >> >>>
>> >> >>> This is just a temporary solution so that you're not stuck
>> >> >>> today, while I'm looking for a better fix.
>> >> >>>
>> >> >>> For details, see:
>> >> >>>  - http://lists.jboss.org/pipermail/wildfly-dev/2018-January/006335.html
>> >> >>>
>> >> >>> Thanks, and sorry for the inconvenience!
>> >> >>>
>> >> >>> Sanne
>> >> >>> _______________________________________________
>> >> >>> hibernate-dev mailing list
>> >> >>> hibernate-dev at lists.jboss.org
>> >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> >>
>> >> _______________________________________________
>> >> hibernate-dev mailing list
>> >> hibernate-dev at lists.jboss.org
>> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org  Wed Jan 24 10:51:07 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 24 Jan 2018 15:51:07 +0000
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To:
References:
Message-ID:

Do y'all mind then if I get rid of the hibernate/bundles[1] repo? It's a
"generic" Artifactory repo meant to hold release bundles. ORM[2] and
OGM[3] both have played with it, as we both have versions published there.
But for ORM at least those are also on SourceForge and it's not important
that I keep this around.

[1] https://bintray.com/hibernate/bundles
[2] https://bintray.com/hibernate/bundles/hibernate-orm
[3] https://bintray.com/hibernate/bundles/hibernate-ogm

On Fri, Jan 19, 2018 at 10:38 AM Guillaume Smet wrote:

> On Fri, Jan 19, 2018 at 5:33 PM, Steve Ebersole wrote:
>
>> yessir
>>
>
> Nice, thanks for confirming.
>

From andrea at hibernate.org  Wed Jan 24 10:58:47 2018
From: andrea at hibernate.org (andrea boriero)
Date: Wed, 24 Jan 2018 15:58:47 +0000
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To:
References:
Message-ID:

no objection

On 24 January 2018 at 15:51, Steve Ebersole wrote:

> Do y'all mind then if I get rid of the hibernate/bundles[1] repo? It's a
> "generic" Artifactory repo meant to hold release bundles. ORM[2] and
> OGM[3] both have played with it, as we both have versions published
> there. But for ORM at least those are also on SourceForge and it's not
> important that I keep this around.
>
> [1] https://bintray.com/hibernate/bundles
> [2] https://bintray.com/hibernate/bundles/hibernate-orm
> [3] https://bintray.com/hibernate/bundles/hibernate-ogm
>
> On Fri, Jan 19, 2018 at 10:38 AM Guillaume Smet wrote:
>
> > On Fri, Jan 19, 2018 at 5:33 PM, Steve Ebersole wrote:
> >
> >> yessir
> >>
> >
> > Nice, thanks for confirming.
> >
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From guillaume.smet at gmail.com  Wed Jan 24 11:23:51 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 24 Jan 2018 17:23:51 +0100
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To:
References:
Message-ID:

Sure you can get rid of it.

The only version of OGM there is a very old one people shouldn't use.

Thanks!

On Wed, Jan 24, 2018 at 4:58 PM, andrea boriero wrote:

> no objection
>
> On 24 January 2018 at 15:51, Steve Ebersole wrote:
>
>> Do y'all mind then if I get rid of the hibernate/bundles[1] repo? It's a
>> "generic" Artifactory repo meant to hold release bundles. ORM[2] and
>> OGM[3] both have played with it, as we both have versions published
>> there. But for ORM at least those are also on SourceForge and it's not
>> important that I keep this around.
>>
>> [1] https://bintray.com/hibernate/bundles
>> [2] https://bintray.com/hibernate/bundles/hibernate-orm
>> [3] https://bintray.com/hibernate/bundles/hibernate-ogm
>>
>> On Fri, Jan 19, 2018 at 10:38 AM Guillaume Smet wrote:
>>
>> > On Fri, Jan 19, 2018 at 5:33 PM, Steve Ebersole wrote:
>> >
>> >> yessir
>> >>
>> >
>> > Nice, thanks for confirming.
>> >
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>

From crancran at gmail.com  Wed Jan 24 11:36:56 2018
From: crancran at gmail.com (Chris Cranford)
Date: Wed, 24 Jan 2018 11:36:56 -0500
Subject: [hibernate-dev] HHH-12172 - Bintray v. OSSRH
In-Reply-To:
References:
Message-ID:

Fine with me.

On 01/24/2018 11:23 AM, Guillaume Smet wrote:
> Sure you can get rid of it.
>
> The only version of OGM there is a very old one people shouldn't use.
>
> Thanks!
>
> On Wed, Jan 24, 2018 at 4:58 PM, andrea boriero wrote:
>
>> no objection
>>
>> On 24 January 2018 at 15:51, Steve Ebersole wrote:
>>
>>> Do y'all mind then if I get rid of the hibernate/bundles[1] repo? It's
>>> a "generic" Artifactory repo meant to hold release bundles. ORM[2] and
>>> OGM[3] both have played with it, as we both have versions published
>>> there. But for ORM at least those are also on SourceForge and it's not
>>> important that I keep this around.
>>>
>>> [1] https://bintray.com/hibernate/bundles
>>> [2] https://bintray.com/hibernate/bundles/hibernate-orm
>>> [3] https://bintray.com/hibernate/bundles/hibernate-ogm
>>>
>>> On Fri, Jan 19, 2018 at 10:38 AM Guillaume Smet wrote:
>>>
>>>> On Fri, Jan 19, 2018 at 5:33 PM, Steve Ebersole wrote:
>>>>
>>>>> yessir
>>>>>
>>>> Nice, thanks for confirming.
>>>>
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org  Wed Jan 24 17:47:06 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 24 Jan 2018 22:47:06 +0000
Subject: [hibernate-dev] Realising the JavaDoc jars as well
In-Reply-To:
References: <64d228b0-011e-dd94-1d55-78a4efc37120@hibernate.org>
Message-ID:

Follow-up question... Snapshots are only ever hosted on the JBoss snapshot
repo. They are never sync'ed to Central. As such we are not under the same
validation constraints.

So should we publish Javadocs for snapshot builds? Not doing so would
speed up the CI jobs a bit.

On Tue, Jan 2, 2018 at 1:49 PM Steve Ebersole wrote:

> This is already what I have done over a week ago ;)
>
> On Tue, Jan 2, 2018 at 1:43 PM Chris Cranford wrote:
>
>> I agree with Andrea.
>>
>>
>> On 12/29/2017 09:14 AM, andrea boriero wrote:
>>
>> +1 for filtering out internal packages.
>>
>> not a strong opinion on grouping
>>
>> On 24 December 2017 at 14:23, Steve Ebersole wrote:
>>
>> Sure, but the question remains :P It just adds another one:
>>
>> 1. Should internal packages be generated into the javadocs (individual
>>    and/or aggregated)?
>> 2. Should the individual javadocs (only intended for publishing to
>>    Central) group the packages into api/spi(/internal) the way we do
>>    for the aggregated javadocs?
>>
>> Personally I think filtering out internal packages is a great idea.
>>
>> Regarding grouping packages, I think it's not worth the effort for the
>> individual ones - just have an overview for these that notes this
>> distinction.
>>
>> On Sat, Dec 23, 2017 at 6:53 AM Sanne Grinovero wrote:
>>
>> On 22 December 2017 at 18:16, Steve Ebersole wrote:
>>
>> I wanted to get everyone's opinion about the api/spi/internal package
>> grouping we do in the aggregated Javadoc in regards to the per-module
>> javadocs. Adding this logic adds significant overhead to the process of
>> building the Javadoc, to the point where I am considering not performing
>> that grouping there.
>>
>> Thoughts?
>>
>> For Hibernate Search we recently decided to not produce javadocs at all
>> for "internal"; everything else is just documented as a single group.
>>
>> That cuts on the "need to know" complexity of end users. Advanced users
>> who could have benefitted from knowing more about the internals will
>> likely have sources.
>>
>> On Tue, Dec 12, 2017 at 11:37 AM Vlad Mihalcea wrote:
>>
>> I tested it locally, and when publishing the jars to Maven local, the
>> JavaDoc is now included.
>>
>> Don't know if there's anything to be done about it.
>>
>> Vlad
>>
>> On Mon, Dec 11, 2017 at 9:32 PM, Sanne Grinovero wrote:
>>
>> +1 to merge it (if it works - which I didn't check)
>>
>> Some history can easily be found:
>> - http://lists.jboss.org/pipermail/hibernate-dev/2017-January/015758.html
>>
>> Thanks,
>> Sanne
>>
>> On 11 December 2017 at 15:24, Vlad Mihalcea wrote:
>>
>> Hi,
>>
>> I've noticed this Pull Request which is valid and worth integrating:
>>
>> https://github.com/hibernate/hibernate-orm/pull/2078
>>
>> Before I merge it, I wanted to make sure whether this change was
>> accidental or intentional.
>>
>> Was there any reason not to ship the JavaDoc jars along with the release
>> artifacts and the sources jars as well?
>>
>> Thanks,
>> Vlad
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>

From gbadner at redhat.com  Wed Jan 24 18:12:09 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 24 Jan 2018 15:12:09 -0800
Subject: [hibernate-dev] Hibernate ORM 5.2 backports
Message-ID:

People seem to be relying on me to backport to 5.2. From a product
standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is
released. Maybe 5.3.0 and 5.2.13 should be released together, with 5.2.13
being the last 5.2 release.

From an earlier thread, it sounds like there is concern outside the ORM
team that we need to keep 5.2.x releases going for some time.

What are the criteria for backporting?

Steve, Andrea, and I had a brief discussion about backporting bugfixes
from master to 5.2 when the bugs apply to master. Is there more discussion
to be done about this?

FWIW, backporting to active community branches is everyone's
responsibility, not mine alone.

Regards,
Gail

From brett at hibernate.org  Wed Jan 24 21:47:26 2018
From: brett at hibernate.org (Brett Meyer)
Date: Wed, 24 Jan 2018 21:47:26 -0500
Subject: [hibernate-dev] Hibernate ORM 5.2 backports
In-Reply-To:
References:
Message-ID:

For what it's worth, +1 on this. At least back in the day, we'd continue
to backport bugfixes to the previous minor release, until a new final
minor release was deployed. That was the responsibility of whoever was
committing to master. Since the baselines were typically "close enough",
commits generally cherry-picked fairly cleanly.

On 1/24/18 6:12 PM, Gail Badner wrote:
> People seem to be relying on me to backport to 5.2. From a product
> standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is
> released. Maybe 5.3.0 and 5.2.13 should be released together, with
> 5.2.13 being the last 5.2 release.
>
> From an earlier thread, it sounds like there is concern outside the ORM
> team that we need to keep 5.2.x releases going for some time.
>
> What are the criteria for backporting?
>
> Steve, Andrea, and I had a brief discussion about backporting bugfixes
> from master to 5.2 when the bugs apply to master.
> Is there more discussion to be done about this?
>
> FWIW, backporting to active community branches is everyone's
> responsibility, not mine alone.
>
> Regards,
> Gail
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org  Thu Jan 25 09:55:53 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 25 Jan 2018 14:55:53 +0000
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To:
References:
Message-ID:

On Wed, Jan 10, 2018 at 5:41 AM Guillaume Smet wrote:

> AFAICS, lately, the ORM bugfix releases announcement is just a link to
> the changelog. I don't think it would buy you a lot to automate it.
>

It's been a long time since I've personally done an ORM bugfix release.
But AFAIK we still do a detailed "release announcement". It's just that we
centralize that in one place (the blog) and have the other forms (email,
twitter, etc) simply refer back to that one. But regardless of how simple
or elaborate these "secondary" announcements are, the change information
still needs to be collected and "written up" - and that's not something
that can really be automated.

Again, the question here was about automating this process of doing a
release. So everything I am bringing up is in relation to that point. So
from one POV there are 2 parts to releasing:

1. the actual release (tagging, publishing artifacts, publishing docs, etc)
2. various announcements (blog, email, twitter, etc)

This "automated release job" people keep bringing up can really only do
some of these tasks. The "problem" is that those are tasks we have already
"automated" in Gradle itself - having an actual CI job to do the release
really isn't buying us anything.

And btw... the release announcement emails, tweets, forum posts, etc have
been simple links back to the blog post for YEARS. So not sure what you
mean about that happening "lately".

For the NoORM projects, the announcement part (Twitter, Mail, Blog) is
> still manual. I don't think it's that bad.
>

Of course, because we all like our own devised processes :)

But I can tell you this... the question of the release announcement blog
URL aside... if I could automate the announcement to the "other forms", I
absolutely would. And I think you'd be lying if you said you wouldn't want
to.

> I think we agree on the principles. We just need to have a viable
> definition of "stable" for the users.
>

It's more than having a definition of "stable". Perhaps that's the
problem. Technically ORM 2.0 is still "stable". We normally mean that as a
counter-point to "(in) development". So just like 2.0 is stable, so are
3.0, 4.0, 5.0, 5.1, 5.2, et al.

"Stable" is really just "beyond the pre-release versions" (Alpha, Beta,
CR)... in other words: Final. So once we release 5.3.0.Final, 5.3 is
"stable". But 5.2 is also still "stable". So it's not this
"stable"/"development" that is the important distinction here. It's really
more about "current" (or "active") versus "older". So at the time of
5.3.0.Final, that's the more important transition here - 5.3 becoming the
*current* stable release.

Could we agree on releasing it regularly from now on and at least plan a
> 5.2.13 release soon to release all the fixes already in?
>

In isolation 5.2 is not what we should be recommending to use - once 5.3
goes Final for sure. The 5.2 -> 5.3 step does have a minor hitch, though,
in that it also represents a bump in the JPA version. Which means that a
user is not going to be able to simply drop 5.3 on top of 5.2 in a
container (EE, Spring, etc) written to work with JPA 2.1.

But even given that, I still think we are only going to do a limited
number of additional 5.2 releases, stopping once 5.3 becomes stable.

The real question is who is going to handle:

1. identifying what should get ported from 5.3/master to 5.2
2. performing the needed back ports
3. performing the release(s)

Because the longer y'all want 5.2 releases to continue, the more help we
are going to need.

From guillaume.smet at gmail.com  Thu Jan 25 10:22:12 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Thu, 25 Jan 2018 16:22:12 +0100
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To:
References:
Message-ID:

On Thu, Jan 25, 2018 at 3:55 PM, Steve Ebersole wrote:

> And btw... the release announcement emails, tweets, forum posts, etc
> have been simple links back to the blog post for YEARS. So not sure what
> you mean about that happening "lately".
>

I'm talking about the blog post itself.

I mean that for micro releases, the ORM blog announcement is like this
one: http://in.relation.to/2018/01/10/hibernate-orm-5111-final-release/ or
this one http://in.relation.to/2017/10/19/hibernate-orm-5212-final-release/
for quite a while now.

So it does not require as much work as what you put in the 5.3.0.Beta1
blog post for instance.

> For the NoORM projects, the announcement part (Twitter, Mail, Blog) is
>> still manual. I don't think it's that bad.
>>
>
> Of course, because we all like our own devised processes :)
>
> But I can tell you this... the question of the release announcement blog
> URL aside... if I could automate the announcement to the "other forms",
> I absolutely would. And I think you'd be lying if you said you wouldn't
> want to.
>

Well, it might be just me but I like to try to give a personal touch to
the announcements when I have the time to do it. So when I don't, I mostly
reuse the previous one but, otherwise, I like to give a more "human" touch
to the communication.

That's why I invested so much time in having an entirely automated release
process for the NoORM side (based on Davide's previous work for OGM, to be
fair) but not so much in the automated communication part.

I think we agree on the principles. We just need to have a viable
>> definition of "stable" for the users.
>>
>
> It's more than having a definition of "stable". Perhaps that's the
> problem. Technically ORM 2.0 is still "stable". We normally mean that as
> a counter-point to "(in) development". So just like 2.0 is stable, so
> are 3.0, 4.0, 5.0, 5.1, 5.2, et al.
>
> "Stable" is really just "beyond the pre-release versions" (Alpha, Beta,
> CR)... in other words: Final. So once we release 5.3.0.Final, 5.3 is
> "stable". But 5.2 is also still "stable". So it's not this
> "stable"/"development" that is the important distinction here. It's
> really more about "current" (or "active") versus "older". So at the time
> of 5.3.0.Final, that's the more important transition here - 5.3 becoming
> the *current* stable release.
>

Yeah, that's why I quoted "stable".

For me, we can safely stop maintaining a back branch as soon as the new
one is consumable by the end users (i.e. application developers in our
case). By this, I mean:
- the obvious regressions have been fixed and the new version is usable in
most cases (it usually takes at least one micro)
- the integrators have done their work on integrating the new version - by
integrators, these days, I mostly mean Spring. I wouldn't put this
condition if they weren't reactive enough, but they are.

I.e. we can now consider that the users can safely move to the new
version.

They can do it or not, it's their business, but they don't have the
obvious excuse of not being able to do it.

As long as the new one is not consumable by the end users, I would
continue to maintain the back branch, be it a product branch or not.

--
Guillaume

From coladict at gmail.com  Thu Jan 25 10:38:19 2018
From: coladict at gmail.com (Jordan Gigov)
Date: Thu, 25 Jan 2018 17:38:19 +0200
Subject: [hibernate-dev] Hibernate ORM 5.2 backports
In-Reply-To:
References:
Message-ID:

I don't know how applicable this is to the Hibernate project, but a
workflow I've seen is you do the fixes in the old maintenance-only
branches (say 5.1) and then merge or cherry-pick those changes into the
current-release branch. Of course, if the branches have diverged too
drastically, even across very few commits, that's not an option.

On 25 January 2018 at 04:47, Brett Meyer wrote:

> For what it's worth, +1 on this. At least back in the day, we'd continue
> to backport bugfixes to the previous minor release, until a new final
> minor release was deployed. That was the responsibility of whoever was
> committing to master. Since the baselines were typically "close enough",
> commits generally cherry-picked fairly cleanly.
>
> On 1/24/18 6:12 PM, Gail Badner wrote:
> > People seem to be relying on me to backport to 5.2. From a product
> > standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is
> > released. Maybe 5.3.0 and 5.2.13 should be released together, with
> > 5.2.13 being the last 5.2 release.
> >
> > From an earlier thread, it sounds like there is concern outside the
> > ORM team that we need to keep 5.2.x releases going for some time.
> >
> > What are the criteria for backporting?
> >
> > Steve, Andrea, and I had a brief discussion about backporting bugfixes
> > from master to 5.2 when the bugs apply to master. Is there more
> > discussion to be done about this?
> >
> > FWIW, backporting to active community branches is everyone's
> > responsibility, not mine alone.
> >
> > Regards,
> > Gail
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From steve at hibernate.org  Thu Jan 25 12:32:38 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 25 Jan 2018 17:32:38 +0000
Subject: [hibernate-dev] Plans to release 5.2.13?
In-Reply-To:
References:
Message-ID:

On Thu, Jan 25, 2018 at 9:22 AM Guillaume Smet wrote:

> On Thu, Jan 25, 2018 at 3:55 PM, Steve Ebersole wrote:
>
>> And btw... the release announcement emails, tweets, forum posts, etc
>> have been simple links back to the blog post for YEARS. So not sure
>> what you mean about that happening "lately".
>>
>
> I'm talking about the blog post itself.
>
> I mean that for micro releases, the ORM blog announcement is like this
> one: http://in.relation.to/2018/01/10/hibernate-orm-5111-final-release/
> or this one
> http://in.relation.to/2017/10/19/hibernate-orm-5212-final-release/ for
> quite a while now.
>
> So it does not require as much work as what you put in the 5.3.0.Beta1
> blog post for instance.
>
But at the same time, at some point there is only so much variation on "this release fixes bugs". Except for rare cases where there is one or more *major* bug fixes that we want to highlight, there is generally not a need to say anything more than "this release fixed a number of bugs - look [here] to see the details; go [here] to get it; ...". And 90+% of the time, "[here]" is the only part that changes. YMMV, but the only releases I end up writing a lot of announcement details for are the "development" (Alpha, Beta, CR) releases and then that first Final release - outlining the new features and improvements. In fact I'd argue that this information (the "[here]"s) is really *metadata* about the release and should be kept with the release details whether that be entries in the release's yml file or somewhere else. Its not really worth getting further into this whole discussion because we all seem to have different opinions. For the NoORM projects, the announcement part (Twitter, Mail, Blog) is >>> still manual. I don't think it's that bad. >>> >> >> Of course, because we all like our own devised processes :) >> >> But I can tell you this... the question of the release announcement blog >> URL aside... if I could automate the announcement to the "other forms", I >> absolutely would. And I think you'd be lying if you said you wouldn't want >> to. >> > > Well, it might be just me but I like to try to give a personal touch to > the announcements when I have the time to do it. > It's not a question of a "form letter" versus "personal touch" - its a question of where you put that "personal touch". The reason its an important distinction is that "personal touch" precludes automation. > For me, we can safely stop maintaining a back branch as soon as the new > one is consumable by the end users (i.e. application developers in our > case). By this, I mean: > - the obvious regressions have been fixed and the new version is usable in > most cases (it usually takes at least one micro) > - the integrators have done their work on integrating the new version - by > integrators, these days, I mostly mean Spring. I wouldn't put this > condition if they weren't reactive enough but they are. > > I.e. we can now consider that the users can safely move to the new version. > > They can do it or not, it's their business, but they don't have the > obvious excuse of not being able to do it. > > As long as the new one is not consumable by the end users, I would > continue to maintain the back branch, be it a product branch or not. > So realistically how do you gauge "integrators have done their work on integrating the new version"? Let's say I am about to release 5.3.2.Final. I need to know whether I also need to release 5.2.999999999999.Final at the same time - how would I know this? Specifically how would I know that some particular integrator has not "done their work on integrating the new version"? Let's assume you are right and its just Spring we care about here. How do I know that they have "done their work on integrating the new version"? Is there an issue in Hibernate Jira for "Make sure Spring have worked on integrating 5.3" that they come and update when they have? I'm not sure what this is supposed to mean *practically*. Also, we know 100% that Spring is not the only one we care about. We also care about WildFly, and we know close to 100% that WildFly has not "done their work on integrating the new version" at this juncture. And lets take this to the extreme... 
let's say 5.3 has been out for a year, but neither Spring nor WildFly have "done their work on integrating the new version". Do you propose that we keep doing 5.2 releases in this case? WRT "the obvious regressions have been fixed". I am assuming you mean, currently, any "obvious regressions" from 5.1 -> 5.2 and continuing to do 5.2 releases until all of those are resolved. I get that in theory. But what happens in 5 years when someone reports a new "obvious regression" between 5.1 and 5.2? IMO there is just too many variables to "flowchart" the decision here. From steve at hibernate.org Thu Jan 25 12:51:53 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 25 Jan 2018 17:51:53 +0000 Subject: [hibernate-dev] Hibernate ORM 5.2 backports In-Reply-To: References: Message-ID: I don't think anyone mentioned expecting you to do this. In fact the discussions I have had about this was that others would help with this (after 5.2.13). On Wed, Jan 24, 2018 at 5:20 PM Gail Badner wrote: > People seem to be relying on me to backport to 5.2. From a product > standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is > released. Maybe 5.3.0 and 5.2.13 should be released together, with 5.2.13 > being the last 5.2 release. > > >From an earlier thread, it sounds like there is concern outside the ORM > team that we need to keep 5.2.x releases going for some time. > > What is the criteria for backporting? > > Steve, Andrea, and I had a brief discussion about backporting bugfixes from > master to 5.2 when the bugs apply to master. Is there more discussion to be > done about this? > > FWIW, backporting to active community branches is everyone's > responsibility, not mine alone. > > Regards, > Gail > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gbadner at redhat.com Fri Jan 26 21:49:15 2018 From: gbadner at redhat.com (Gail Badner) Date: Fri, 26 Jan 2018 18:49:15 -0800 Subject: [hibernate-dev] Hibernate ORM 5.1.12.Final has been released Message-ID: For details: http://in.relation.to/2018/01/26/hibernate-orm-5112-final-release/ From davide at hibernate.org Mon Jan 29 10:39:18 2018 From: davide at hibernate.org (Davide D'Alto) Date: Mon, 29 Jan 2018 15:39:18 +0000 Subject: [hibernate-dev] Hibernate OGM 5.2 CR1 has been released! Message-ID: Hibernate OGM 5.2 CR1 is arrived! This will become the next 5.2 Final soon and we added support for Geospatial integration and new native operator support with MongoDB, Neo4j queries performance improvements and integration with cluster counters for Infinispan embedded. More details in the blog post: http://in.relation.to/2018/01/29/hibernate-ogm-5-2-CR1-released/ Thanks, Davide From gbadner at redhat.com Mon Jan 29 19:59:09 2018 From: gbadner at redhat.com (Gail Badner) Date: Mon, 29 Jan 2018 16:59:09 -0800 Subject: [hibernate-dev] Hibernate ORM 5.2 backports In-Reply-To: References: Message-ID: I've seen at least one jira comment asking if I approve backporting to 5.2, and I also remember seeing a PR that had "Requires Gail" label with a comment suggesting bugfixes in older branches (including 5.2) must be approved by me. I just wanted to make this clear so that bugfixes that should be backported don't fall through the cracks. On Thu, Jan 25, 2018 at 9:51 AM, Steve Ebersole wrote: > I don't think anyone mentioned expecting you to do this. 
In fact the > discussions I have had about this was that others would help with this > (after 5.2.13). > > > > On Wed, Jan 24, 2018 at 5:20 PM Gail Badner wrote: > >> People seem to be relying on me to backport to 5.2. From a product >> standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is >> released. Maybe 5.3.0 and 5.2.13 should be released together, with 5.2.13 >> being the last 5.2 release. >> >> >From an earlier thread, it sounds like there is concern outside the ORM >> team that we need to keep 5.2.x releases going for some time. >> >> What is the criteria for backporting? >> >> Steve, Andrea, and I had a brief discussion about backporting bugfixes >> from >> master to 5.2 when the bugs apply to master. Is there more discussion to >> be >> done about this? >> >> FWIW, backporting to active community branches is everyone's >> responsibility, not mine alone. >> >> Regards, >> Gail >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > From sanne at hibernate.org Tue Jan 30 08:52:50 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 30 Jan 2018 13:52:50 +0000 Subject: [hibernate-dev] Hibernate ORM 5.2 backports In-Reply-To: References: Message-ID: In some cases - maybe not these specifically - I know people ask your review when it gets tricky, as you're the most thorough reviewer and have deep knowledge of most areas :) Maybe that was the case, maybe not. Good to clarify either way. Thanks! Sanne On 30 January 2018 at 00:59, Gail Badner wrote: > I've seen at least one jira comment asking if I approve backporting to 5.2, > and I also remember seeing a PR that had "Requires Gail" label with a > comment suggesting bugfixes in older branches (including 5.2) must be > approved by me. > > I just wanted to make this clear so that bugfixes that should be backported > don't fall through the cracks. > > On Thu, Jan 25, 2018 at 9:51 AM, Steve Ebersole wrote: > >> I don't think anyone mentioned expecting you to do this. In fact the >> discussions I have had about this was that others would help with this >> (after 5.2.13). >> >> >> >> On Wed, Jan 24, 2018 at 5:20 PM Gail Badner wrote: >> >>> People seem to be relying on me to backport to 5.2. From a product >>> standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is >>> released. Maybe 5.3.0 and 5.2.13 should be released together, with 5.2.13 >>> being the last 5.2 release. >>> >>> >From an earlier thread, it sounds like there is concern outside the ORM >>> team that we need to keep 5.2.x releases going for some time. >>> >>> What is the criteria for backporting? >>> >>> Steve, Andrea, and I had a brief discussion about backporting bugfixes >>> from >>> master to 5.2 when the bugs apply to master. Is there more discussion to >>> be >>> done about this? >>> >>> FWIW, backporting to active community branches is everyone's >>> responsibility, not mine alone. 
>>>
>>> Regards,
>>> Gail
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>
>>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gbadner at redhat.com Tue Jan 30 13:23:35 2018
From: gbadner at redhat.com (Gail Badner)
Date: Tue, 30 Jan 2018 10:23:35 -0800
Subject: [hibernate-dev] Hibernate ORM 5.2 backports
In-Reply-To: References: Message-ID:
If it's a matter of reviewing code, I'm happy to help. In that case, my
name should be added as a reviewer.

BTW, I forget to check all the various ways I can be contacted, so a
personal email or HipChat message helps.

On Tue, Jan 30, 2018 at 5:52 AM, Sanne Grinovero wrote:

> In some cases - maybe not these specifically - I know people ask for your
> review when it gets tricky, as you're the most thorough reviewer and
> have deep knowledge of most areas :) Maybe that was the case, maybe
> not. Good to clarify either way.
>
> Thanks!
> Sanne
>
>
> On 30 January 2018 at 00:59, Gail Badner wrote:
> > I've seen at least one jira comment asking if I approve backporting to
> 5.2,
> > and I also remember seeing a PR that had a "Requires Gail" label with a
> > comment suggesting bugfixes in older branches (including 5.2) must be
> > approved by me.
> >
> > I just wanted to make this clear so that bugfixes that should be
> backported
> > don't fall through the cracks.
> >
> > On Thu, Jan 25, 2018 at 9:51 AM, Steve Ebersole
> wrote:
> >
> >> I don't think anyone mentioned expecting you to do this. In fact the
> >> discussions I have had about this were that others would help with this
> >> (after 5.2.13).
> >>
> >>
> >>
> >> On Wed, Jan 24, 2018 at 5:20 PM Gail Badner wrote:
> >>
> >>> People seem to be relying on me to backport to 5.2. From a product
> >>> standpoint, there is no need to backport to 5.2 anymore once 5.3.0 is
> >>> released. Maybe 5.3.0 and 5.2.13 should be released together, with
> 5.2.13
> >>> being the last 5.2 release.
> >>>
> >>> From an earlier thread, it sounds like there is concern outside the
> ORM
> >>> team that we need to keep 5.2.x releases going for some time.
> >>>
> >>> What are the criteria for backporting?
> >>>
> >>> Steve, Andrea, and I had a brief discussion about backporting bugfixes
> >>> from
> >>> master to 5.2 when the bugs apply to master. Is there more discussion
> to
> >>> be
> >>> done about this?
> >>>
> >>> FWIW, backporting to active community branches is everyone's
> >>> responsibility, not mine alone.
> >>>
> >>> Regards,
> >>> Gail
> >>> _______________________________________________
> >>> hibernate-dev mailing list
> >>> hibernate-dev at lists.jboss.org
> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
> >>>
> >>
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From steve at hibernate.org Tue Jan 30 17:00:08 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Tue, 30 Jan 2018 22:00:08 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
Message-ID:
Wanted to remind everyone that tomorrow is the next time-boxed release for
5.3.

I wanted to get everyone's opinions about the version number, whether this
should be Beta2 or CR1.
IMO it depends how you view the remaining
challenges with the JPA TCK, with CR1 being the optimistic view.

From smarlow at redhat.com Wed Jan 31 01:31:04 2018
From: smarlow at redhat.com (Scott Marlow)
Date: Wed, 31 Jan 2018 07:31:04 +0100
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: Message-ID:
On Tue, Jan 30, 2018 at 11:00 PM, Steve Ebersole wrote:

> Wanted to remind everyone that tomorrow is the next time-boxed release for
> 5.3.
>
> I wanted to get everyone's opinions about the version number, whether this
> should be Beta2 or CR1. IMO it depends how you view the remaining
> challenges with the JPA TCK, with CR1 being the optimistic view.
>

I like Beta2 better but if you need to move on, CR1 would be okay as a JPA
2.2 preview, as long as we later release 5.3.1 with any further changes
that are needed (we still have to also pass the EE TCK).

> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From smarlow at redhat.com Wed Jan 31 01:43:17 2018
From: smarlow at redhat.com (Scott Marlow)
Date: Wed, 31 Jan 2018 07:43:17 +0100
Subject: [hibernate-dev] Could we have a Hibernate 5.3 compatibility layer
that includes the ORM 5.1 Hibernate Session class
Message-ID:
WildFly would like to have a version of 5.3+ that is compatible with ORM
5.1, with regard to the org.hibernate.Session changes (including mapping of
exceptions thrown, so that the same exceptions are thrown).

Is it even possible to have an extra org.hibernate.Session interface + impl
(bridge) that matches the same session included in 5.1? The impl would
delegate to the real underlying org.hibernate.Session impl classes and also
wrap thrown exceptions, so that Hibernate 5.1 native ORM apps continue to
work without code changes.

Or something like that.

I could see how some users wouldn't want to use the compatibility layer to
avoid extra overhead, so in WildFly, we would have to make that possible
also.

What do you think?

We would need something similar in ORM 6.0+ that is also compatible with
5.1, if this is possible.

Scott

From sanne at hibernate.org Wed Jan 31 05:26:17 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 31 Jan 2018 10:26:17 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: Message-ID:
I would suggest a Beta2, as we were hoping to still do some work on
it. No strong take though, as far as I know our pending work is
optional / low impact:
A. produce the feature packs in ORM
B. test OGM integration

Status of these:

A#
The feature packs have low impact on ORM's risk and quality, although
it would be nice to be able to test the feature packs "as released"
from ORM within Search and OGM.
It requires a second Gradle plugin; Andrea created a first POC last
week, but we still need to do some work on it, release it, and then
have the ORM build use it.
Finally we'll need to update the documentation and guides to explain
to users how to consume it.

B#
The OGM integration is a bit late; we should be able to verify it next
week. We didn't start converting OGM into feature packs; that would
take even longer but I guess we can regard this one as optional.

All of this could be done in a CR1 if you see reasons to accelerate;
I'd just suggest a preference for a little more time.

Thanks,
Sanne


On 30 January 2018 at 22:00, Steve Ebersole wrote:
> Wanted to remind everyone that tomorrow is the next time-boxed release for
> 5.3.
>
> I wanted to get everyone's opinions about the version number, whether this
> should be Beta2 or CR1. IMO it depends how you view the remaining
> challenges with the JPA TCK, with CR1 being the optimistic view.
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From andrea at hibernate.org Wed Jan 31 06:20:43 2018
From: andrea at hibernate.org (andrea boriero)
Date: Wed, 31 Jan 2018 11:20:43 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: Message-ID:
Having a Beta2 release is fine with me too.

On 31 January 2018 at 10:26, Sanne Grinovero wrote:

> I would suggest a Beta2, as we were hoping to still do some work on
> it. No strong take though, as far as I know our pending work is
> optional / low impact:
> A. produce the feature packs in ORM
> B. test OGM integration
>
> Status of these:
>
> A#
> The feature packs have low impact on ORM's risk and quality, although
> it would be nice to be able to test the feature packs "as released"
> from ORM within Search and OGM.
> It requires a second Gradle plugin; Andrea created a first POC last
> week, but we still need to do some work on it, release it, and then
> have the ORM build use it.
> Finally we'll need to update the documentation and guides to explain
> to users how to consume it.
>
> B#
> The OGM integration is a bit late; we should be able to verify it next
> week. We didn't start converting OGM into feature packs; that would
> take even longer but I guess we can regard this one as optional.
>
> All of this could be done in a CR1 if you see reasons to accelerate;
> I'd just suggest a preference for a little more time.
>
> Thanks,
> Sanne
>
>
> On 30 January 2018 at 22:00, Steve Ebersole wrote:
> > Wanted to remind everyone that tomorrow is the next time-boxed release
> for
> > 5.3.
> >
> > I wanted to get everyone's opinions about the version number, whether
> this
> > should be Beta2 or CR1. IMO it depends how you view the remaining
> > challenges with the JPA TCK, with CR1 being the optimistic view.
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From guillaume.smet at gmail.com Wed Jan 31 07:15:04 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 31 Jan 2018 13:15:04 +0100
Subject: [hibernate-dev] Minor changes to the website
Message-ID:
Hi,

Just so you know, I just made a few minor changes to the website:
- in the dynamic Releases submenu, you now have a label stating "latest
stable"/"development" to make it clearer what the versions are (it's
especially useful at this point of the ORM development, for instance, as
5.3 is the latest but not yet considered stable)
- I made the ORM Documentation menu entry dynamic, with a submenu
following the same principles as for Releases. It avoids going to the
latest version and then using the dropdown at the top (I kept the dropdown
anyway)

In passing, I also added a placeholder page for the ORM 5.3 doc stating it
will be available soon.
HTH

--
Guillaume

From sanne at hibernate.org Wed Jan 31 09:16:59 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 31 Jan 2018 14:16:59 +0000
Subject: [hibernate-dev] Minor changes to the website
In-Reply-To: References: Message-ID:
Sounds great, thanks!

On 31 January 2018 at 12:15, Guillaume Smet wrote:
> Hi,
>
> Just so you know, I just made a few minor changes to the website:
> - in the dynamic Releases submenu, you now have a label stating "latest
> stable"/"development" to make it clearer what the versions are (it's
> especially useful at this point of the ORM development, for instance, as
> 5.3 is the latest but not yet considered stable)
> - I made the ORM Documentation menu entry dynamic, with a submenu
> following the same principles as for Releases. It avoids going to the
> latest version and then using the dropdown at the top (I kept the dropdown
> anyway)
>
> In passing, I also added a placeholder page for the ORM 5.3 doc stating it
> will be available soon.
>
> HTH
>
> --
> Guillaume
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org Wed Jan 31 09:32:50 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 14:32:50 +0000
Subject: [hibernate-dev] Minor changes to the website
In-Reply-To: References: Message-ID:
Looks great! Thanks

On Wed, Jan 31, 2018 at 8:18 AM Sanne Grinovero wrote:

> Sounds great, thanks!
>
> On 31 January 2018 at 12:15, Guillaume Smet
> wrote:
> > Hi,
> >
> > Just so you know, I just made a few minor changes to the website:
> > - in the dynamic Releases submenu, you now have a label stating "latest
> > stable"/"development" to make it clearer what the versions are (it's
> > especially useful at this point of the ORM development, for instance, as
> 5.3
> > is the latest but not yet considered stable)
> > - I made the ORM Documentation menu entry dynamic, with a submenu
> > following the same principles as for Releases. It avoids going to the
> > latest version and then using the dropdown at the top (I kept the
> dropdown anyway)
> >
> > In passing, I also added a placeholder page for the ORM 5.3 doc stating
> it
> > will be available soon.
> >
> > HTH
> >
> > --
> > Guillaume
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From chris at hibernate.org Wed Jan 31 10:11:36 2018
From: chris at hibernate.org (Chris Cranford)
Date: Wed, 31 Jan 2018 10:11:36 -0500
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: Message-ID: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
I have no strong preference either way.

On 01/30/2018 05:00 PM, Steve Ebersole wrote:
> Wanted to remind everyone that tomorrow is the next time-boxed release for
> 5.3.
>
> I wanted to get everyone's opinions about the version number, whether this
> should be Beta2 or CR1. IMO it depends how you view the remaining
> challenges with the JPA TCK, with CR1 being the optimistic view.
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From guillaume.smet at gmail.com Wed Jan 31 10:22:05 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 31 Jan 2018 16:22:05 +0100
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
I would say let's go for Beta2. We are not in a hurry considering the
challenges still pending, so no need to rush into the CR phase.

On the NoORM side:
- Search - Yoann has prepared a PR with the 5.3 support so we should be OK
on this side. 5.9 will be tagged early next week and we'll release an alpha
of 5.10 for OGM consumption.
- OGM - Davide is going to release OGM 5.2 (still with ORM 5.1) early next
week, then we will merge the ORM 5.2 support that has been pending for
quite some time (the PR is ready, just waiting for the 5.2 release) and we
will experiment with the 5.3 support on top of that. I don't expect many
issues as it's not as big a step as 5.1 -> 5.2.

So you can expect some feedback from us in the next 2 weeks.

On Wed, Jan 31, 2018 at 4:11 PM, Chris Cranford wrote:

> I have no strong preference either way.
>
> On 01/30/2018 05:00 PM, Steve Ebersole wrote:
> > Wanted to remind everyone that tomorrow is the next time-boxed release
> for
> > 5.3.
> >
> > I wanted to get everyone's opinions about the version number, whether
> this
> > should be Beta2 or CR1. IMO it depends how you view the remaining
> > challenges with the JPA TCK, with CR1 being the optimistic view.
>

From sanne at hibernate.org Wed Jan 31 10:28:01 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 31 Jan 2018 15:28:01 +0000
Subject: [hibernate-dev] Could we have a Hibernate 5.3 compatibility layer
that includes the ORM 5.1 Hibernate Session class
In-Reply-To: References: Message-ID:
Didn't we discuss this at our last meeting?

The general opinion - and final verdict - was that doing what you're
asking would be way too much work, not generally worth it, and not in
the best interest of all our users. Technically it's certainly
possible, but it would be a lot of work coming at the cost of more
useful progress, and it could well confuse users in various ways.

Our proposal is unchanged (from the meeting):

we'll have - experimentally - two versions in WildFly.

In a first stage this would be versions 5.1 and 5.3, so that people
needing backwards compatibility with 5.1 can use 5.1 (can't have
better compatibility than that!), while people wanting to use a later
version can opt in for that.

In a second stage, possibly applying some lessons learned, version 5.3
will be gradually replaced with 6.0. MAYBE 6.0 will be API compatible
with 5.3, but definitely not with 5.1.

I'm not sure if we can have this in WildFly.next, but the sooner the
better. As soon as we have a stable release of Hibernate ORM 5.3 we'll
start looking into details of such a double integration.

Thanks,
Sanne


On 31 January 2018 at 06:43, Scott Marlow wrote:
> WildFly would like to have a version of 5.3+ that is compatible with ORM
> 5.1, with regard to the org.hibernate.Session changes (including mapping of
> exceptions thrown, so that the same exceptions are thrown).
>
> Is it even possible to have an extra org.hibernate.Session interface + impl
> (bridge) that matches the same session included in 5.1?
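Technically, yes. A delegating wrapper is easy enough to sketch; something
like the following, which is purely illustrative and untested -
"LegacySession" is a name I'm inventing here, it would have to be a
re-published copy of the 5.1-era contract:

import org.hibernate.FlushMode;
import org.hibernate.Session;

// Hypothetical sketch only: "LegacySession" does not exist anywhere,
// it stands in for a re-published copy of the 5.1 Session contract.
public class Session51Bridge implements LegacySession {

    private final Session delegate; // the real 5.3 session

    public Session51Bridge(Session delegate) {
        this.delegate = delegate;
    }

    @Override
    public FlushMode getFlushMode() {
        // 5.2+ moved the native accessor to dodge the JPA signature clash
        return delegate.getHibernateFlushMode();
    }

    // ... and so on for every method whose signature or thrown
    // exceptions changed between 5.1 and 5.3, translating exceptions
    // back into their 5.1 equivalents as needed
}

But multiply that by the entire Session surface, keep it in sync with
every 5.3.x release, and map all the exception differences too: that's
the "lot of work" I mean.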
> The impl would
> delegate to the real underlying org.hibernate.Session impl classes and also
> wrap thrown exceptions, so that Hibernate 5.1 native ORM apps continue to
> work without code changes.
>
> Or something like that.
>
> I could see how some users wouldn't want to use the compatibility layer to
> avoid extra overhead, so in WildFly, we would have to make that possible
> also.
>
> What do you think?
>
> We would need something similar in ORM 6.0+ that is also compatible with
> 5.1, if this is possible.
>
> Scott
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org Wed Jan 31 10:42:17 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 15:42:17 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
The next time-box is in fact 2 weeks... :)  Hopefully we have these JPA TCK
challenges resolved by then and can do CR then. So yes, please have
feedback by then.

Yoann and I have been working on Search and ORM 5.3 (at least parts) so I
think that will work out fine. But yes, OGM makes me a little nervous as,
as far as I know, we have no idea about that integration yet.

On Wed, Jan 31, 2018 at 9:22 AM Guillaume Smet wrote:

> I would say let's go for Beta2. We are not in a hurry considering the
> challenges still pending, so no need to rush into the CR phase.
>
> On the NoORM side:
> - Search - Yoann has prepared a PR with the 5.3 support so we should be OK
> on this side. 5.9 will be tagged early next week and we'll release an alpha
> of 5.10 for OGM consumption.
> - OGM - Davide is going to release OGM 5.2 (still with ORM 5.1) early next
> week, then we will merge the ORM 5.2 support that has been pending for
> quite some time (the PR is ready, just waiting for the 5.2 release) and we
> will experiment with the 5.3 support on top of that. I don't expect many
> issues as it's not as big a step as 5.1 -> 5.2.
>
> So you can expect some feedback from us in the next 2 weeks.
>
> On Wed, Jan 31, 2018 at 4:11 PM, Chris Cranford
> wrote:
>
>> I have no strong preference either way.
>>
>> On 01/30/2018 05:00 PM, Steve Ebersole wrote:
>> > Wanted to remind everyone that tomorrow is the next time-boxed release
>> for
>> > 5.3.
>> >
>> > I wanted to get everyone's opinions about the version number, whether
>> this
>> > should be Beta2 or CR1. IMO it depends how you view the remaining
>> > challenges with the JPA TCK, with CR1 being the optimistic view.
>>

From guillaume.smet at gmail.com Wed Jan 31 10:48:35 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 31 Jan 2018 16:48:35 +0100
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
On Wed, Jan 31, 2018 at 4:42 PM, Steve Ebersole wrote:

> The next time-box is in fact 2 weeks... :)
>
> But yes, OGM makes me a little nervous as, as far as I know, we have no
> idea about that integration yet.
>

We discussed it on Tuesday at our meeting.

I think we can commit to getting you our feedback on OGM in the next 2
weeks.

Personally, I would like to have an OGM release supporting ORM 5.3 not
long after the ORM 5.3 release, even if we don't have any new features.
--
Guillaume

From steve at hibernate.org Wed Jan 31 10:49:41 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 15:49:41 +0000
Subject: [hibernate-dev] Could we have a Hibernate 5.3 compatibility layer
that includes the ORM 5.1 Hibernate Session class
In-Reply-To: References: Message-ID:
Not to mention, I'm really not even sure what this request "means". As we
all understand, 5.1 -> 5.2 unified SessionFactory/EntityManagerFactory and
Session/EntityManager, and that caused us to have to make changes to
certain method signatures - most notably `Session#getFlushMode` was one of
the problems. Session defined it as returning a FlushMode; however, JPA
also defines this same method, poorly named IMO since it instead returns
JPA's FlushModeType (why the method is not called `#getFlushModeType` is
beyond me). Anyway, the point is that there is no way to rectify these -
there is no way that we can define a contract that simultaneously conforms
to both.

As Sanne said, and as we all agreed during f2f, the best approach is to
have both versions available for use.

On Wed, Jan 31, 2018 at 9:28 AM Sanne Grinovero wrote:

> Didn't we discuss this at our last meeting?
>
> The general opinion - and final verdict - was that doing what you're
> asking would be way too much work, not generally worth it, and not in
> the best interest of all our users. Technically it's certainly
> possible, but it would be a lot of work coming at the cost of more
> useful progress, and it could well confuse users in various ways.
>
> Our proposal is unchanged (from the meeting):
>
> we'll have - experimentally - two versions in WildFly.
>
> In a first stage this would be versions 5.1 and 5.3, so that people
> needing backwards compatibility with 5.1 can use 5.1 (can't have
> better compatibility than that!), while people wanting to use a later
> version can opt in for that.
>
> In a second stage, possibly applying some lessons learned, version 5.3
> will be gradually replaced with 6.0. MAYBE 6.0 will be API compatible
> with 5.3, but definitely not with 5.1.
>
> I'm not sure if we can have this in WildFly.next, but the sooner the
> better. As soon as we have a stable release of Hibernate ORM 5.3 we'll
> start looking into details of such a double integration.
>
> Thanks,
> Sanne
>
>
>
>
> On 31 January 2018 at 06:43, Scott Marlow wrote:
> > WildFly would like to have a version of 5.3+ that is compatible with ORM
> > 5.1, with regard to the org.hibernate.Session changes (including mapping
> of
> > exceptions thrown, so that the same exceptions are thrown).
> >
> > Is it even possible to have an extra org.hibernate.Session interface +
> impl
> > (bridge) that matches the same session included in 5.1? The impl would
> > delegate to the real underlying org.hibernate.Session impl classes and
> also
> > wrap thrown exceptions, so that Hibernate 5.1 native ORM apps continue
> to
> > work without code changes.
> >
> > Or something like that.
> >
> > I could see how some users wouldn't want to use the compatibility layer
> to
> > avoid extra overhead, so in WildFly, we would have to make that possible
> > also.
> >
> > What do you think?
> >
> > We would need something similar in ORM 6.0+ that is also compatible with
> > 5.1, if this is possible.
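To make the clash concrete, here are the two declarations side by side
(signatures only, quoted from memory - double-check the sources before
quoting me):

// org.hibernate.Session, as it looked in 5.1:
FlushMode getFlushMode();        // returns org.hibernate.FlushMode

// javax.persistence.EntityManager, which Session extends as of 5.2:
FlushModeType getFlushMode();    // returns javax.persistence.FlushModeType

// Java cannot overload on return type alone, so no single interface can
// declare both; 5.2 kept the JPA signature and moved the native one to
// getHibernateFlushMode().

That is exactly the kind of conflict a bridge cannot paper over without
changing one of the two contracts.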
> >
> > Scott
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>

From steve at hibernate.org Wed Jan 31 10:54:52 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 15:54:52 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
Yes, I agree. As it is, it is very likely that in 2 weeks we will have ORM
5.3.0.CR1. So even if you did do an OGM release at that time, we are going
to be limited in what exactly we can change if we find changes are needed.

Interestingly, this goes back to earlier discussions about "release
early/often" for the purpose of identifying regressions. Granted, y'all
were talking about 5.2 there, but the same happens here from the ORM
perspective in 5.3. We need to not be dragging non-stable releases out as
we continue to wait for +1's from all integrators (Search, OGM, Spring,
etc).

Anyway, we'll hear what we hear in 2 weeks.

On Wed, Jan 31, 2018 at 9:49 AM Guillaume Smet wrote:

> On Wed, Jan 31, 2018 at 4:42 PM, Steve Ebersole
> wrote:
>
>> The next time-box is in fact 2 weeks... :)
>>
>> But yes, OGM makes me a little nervous as, as far as I know, we have no
>> idea about that integration yet.
>>
>
> We discussed it on Tuesday at our meeting.
>
> I think we can commit to getting you our feedback on OGM in the next 2
> weeks.
>
> Personally, I would like to have an OGM release supporting ORM 5.3 not
> long after the ORM 5.3 release, even if we don't have any new features.
>
> --
> Guillaume
>

From guillaume.smet at gmail.com Wed Jan 31 11:05:23 2018
From: guillaume.smet at gmail.com (Guillaume Smet)
Date: Wed, 31 Jan 2018 17:05:23 +0100
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
On Wed, Jan 31, 2018 at 4:54 PM, Steve Ebersole wrote:

> Yes, I agree. As it is, it is very likely that in 2 weeks we will have
> ORM 5.3.0.CR1. So even if you did do an OGM release at that time, we are
> going to be limited in what exactly we can change if we find changes are
> needed.
>
> Interestingly, this goes back to earlier discussions about "release
> early/often" for the purpose of identifying regressions. Granted, y'all
> were talking about 5.2 there, but the same happens here from the ORM
> perspective in 5.3. We need to not be dragging non-stable releases out as
> we continue to wait for +1's from all integrators (Search, OGM, Spring,
> etc).
>

Yes, for a lot of reasons (good and bad) we were really bad with the ORM
5.2 support in OGM.

We are very aware of that and the idea is to not do that again :).

We (probably Davide) will let you know about our progress soon.

--
Guillaume

From gbadner at redhat.com Wed Jan 31 15:51:54 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 31 Jan 2018 12:51:54 -0800
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
Message-ID:
HHH-12257 involves refreshing an entity that already has a pessimistic
lock.
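Condensed, the scenario looks something like this (my own boiled-down
version, not the actual test code; the entity and values are made up):

// em is a JPA EntityManager with an active transaction
Person person = em.find( Person.class, 1L, LockModeType.PESSIMISTIC_WRITE );
assert em.getLockMode( person ) == LockModeType.PESSIMISTIC_WRITE;

em.refresh( person ); // no LockModeType argument

// after the refresh, the lock mode reported for the entity is NONE
assert em.getLockMode( person ) == LockModeType.NONE;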
In the test case attached to the jira, EntityManager#refresh(Object
entity) is used to refresh the entity, instead of a method that specifies a
particular LockModeType (e.g., #refresh(Object entity, LockModeType
lockMode)). The lock on the refreshed entity is dropped.

A workaround is to determine the current lock mode using
Session#getCurrentLockMode, which returns an org.hibernate.LockMode object,
which can be converted to a LockModeType that can be used to call
EntityManager#refresh(Object entity, LockModeType lockMode).

Unfortunately, the code that converts org.hibernate.LockMode to
LockModeType is "internal" (org.hibernate.internal.util.LockModeConverter).

I'm on the fence about how this should work.

The API for EntityManager#refresh(Object entity) does not say that an
existing lock mode on the entity should be retained.

On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
seems to indicate that locks on an entity apply to the transaction, and
does say that a lock on an entity should be dropped when refreshed without
an specified LockModeType.

Does anyone have any guidance on how this should work?

Thanks,
Gail

From sanne at hibernate.org Wed Jan 31 16:20:43 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 31 Jan 2018 21:20:43 +0000
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
Hi Gail,

personally I wouldn't expect the pessimistic lock to be dropped.
In case of optimistic locking, I would expect the version to be
updated to the latest read - the one triggered by the refresh.

I just read section 3.4 as you suggested but I couldn't find where it
suggests that "a lock on an entity should be dropped when refreshed";
what makes you think it indicates that?

On the other hand, section 3.4.3 is quite explicit about no other
changes being allowed by other transactions until the end of the
transaction, which I guess makes sense.

Would it even be possible to "unlock" a row on which we have a
pessimistic lock without committing the transaction? I don't think
that's possible, so that should clarify what needs to be done.

Thanks,
Sanne


On 31 January 2018 at 20:51, Gail Badner wrote:
> HHH-12257 involves refreshing an entity that already has a pessimistic
> lock. In the test case attached to the jira, EntityManager#refresh(Object
> entity) is used to refresh the entity, instead of a method that specifies a
> particular LockModeType (e.g., #refresh(Object entity, LockModeType
> lockMode)). The lock on the refreshed entity is dropped.
>
> A workaround is to determine the current lock mode using
> Session#getCurrentLockMode, which returns an org.hibernate.LockMode object,
> which can be converted to a LockModeType that can be used to call
> EntityManager#refresh(Object entity, LockModeType lockMode).
>
> Unfortunately, the code that converts org.hibernate.LockMode to
> LockModeType is "internal" (org.hibernate.internal.util.LockModeConverter).
>
> I'm on the fence about how this should work.
>
> The API for EntityManager#refresh(Object entity) does not say that an
> existing lock mode on the entity should be retained.
>
> On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
> seems to indicate that locks on an entity apply to the transaction, and
> does say that a lock on an entity should be dropped when refreshed without
> an specified LockModeType.
>
> Does anyone have any guidance on how this should work?
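PS: for anyone hitting this in the meantime, I believe the workaround you
describe boils down to something like the following - untested, and it
leans on an internal class, so nothing we should recommend long term:

// untested sketch of the workaround described above;
// Session and LockMode are org.hibernate types, LockModeConverter is
// org.hibernate.internal.util.LockModeConverter, em the EntityManager
Session session = em.unwrap( Session.class );
LockMode currentLockMode = session.getCurrentLockMode( entity );
em.refresh( entity, LockModeConverter.convertToLockModeType( currentLockMode ) );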
>
> Thanks,
> Gail
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gbadner at redhat.com Wed Jan 31 16:48:47 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 31 Jan 2018 13:48:47 -0800
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
See below...

On Wed, Jan 31, 2018 at 1:20 PM, Sanne Grinovero wrote:
>
> Hi Gail,
>
> personally I wouldn't expect the pessimistic lock to be dropped.
> In case of optimistic locking, I would expect the version to be
> updated to the latest read - the one triggered by the refresh.

Yes, the version is updated, if necessary, on a refresh.

>
> I just read section 3.4 as you suggested but I couldn't find where it
> suggests that "a lock on an entity should be dropped when refreshed";
> what makes you think it indicates that?

Ah, that was a typo on my part, it should have said:

> On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
> seems to indicate that locks on an entity apply to the transaction, and
> *doesn't* say that a lock on an entity should be dropped when refreshed
> without
> a specified LockModeType.

I updated the thread below to make the correction (including a correction
of a grammatical error).

>
> On the other hand, section 3.4.3 is quite explicit about no other
> changes being allowed by other transactions until the end of the
> transaction, which I guess makes sense.
>
> Would it even be possible to "unlock" a row on which we have a
> pessimistic lock without committing the transaction? I don't think
> that's possible, so that should clarify what needs to be done.
>

It is possible to call EntityManager#lock(Object entity, LockModeType
lockMode) with a lower-level lock, but that request will be ignored.
Hibernate will only upgrade a lock.

I think that clarifies retaining the same lock-level for the entity when
calling EntityManager#refresh(Object entity).

If no one has any comments that disagree with this in the next couple of
days, I'll go with that.

Thanks!
Gail

> Thanks,
> Sanne
>
>
>
> On 31 January 2018 at 20:51, Gail Badner wrote:
> > HHH-12257 involves refreshing an entity that already has a pessimistic
> > lock. In the test case attached to the jira, EntityManager#refresh(Object
> > entity) is used to refresh the entity, instead of a method that
> specifies a
> > particular LockModeType (e.g., #refresh(Object entity, LockModeType
> > lockMode)). The lock on the refreshed entity is dropped.
> >
> > A workaround is to determine the current lock mode using
> > Session#getCurrentLockMode, which returns an org.hibernate.LockMode
> object,
> > which can be converted to a LockModeType that can be used to call
> > EntityManager#refresh(Object entity, LockModeType lockMode).
> >
> > Unfortunately, the code that converts org.hibernate.LockMode to
> > LockModeType is "internal"
> > (org.hibernate.internal.util.LockModeConverter).
> >
> > I'm on the fence about how this should work.
> >
> > The API for EntityManager#refresh(Object entity) does not say that an
> > existing lock mode on the entity should be retained.
> >
> >

The following contains a correction from the original:

> > On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
> > seems to indicate that locks on an entity apply to the transaction, and
> > *doesn't* say that a lock on an entity should be dropped when refreshed
> > without
> > a specified LockModeType.
> >
> > Does anyone have any guidance on how this should work?
> >
> > Thanks,
> > Gail
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev

From sanne at hibernate.org Wed Jan 31 17:11:35 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 31 Jan 2018 22:11:35 +0000
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
On 31 January 2018 at 21:48, Gail Badner wrote:
> See below...
>
> On Wed, Jan 31, 2018 at 1:20 PM, Sanne Grinovero
> wrote:
>>
>> Hi Gail,
>>
>> personally I wouldn't expect the pessimistic lock to be dropped.
>> In case of optimistic locking, I would expect the version to be
>> updated to the latest read - the one triggered by the refresh.
>
>
> Yes, the version is updated, if necessary, on a refresh.
>
>>
>>
>> I just read section 3.4 as you suggested but I couldn't find where it
>> suggests that "a lock on an entity should be dropped when refreshed";
>> what makes you think it indicates that?
>
>
> Ah, that was a typo on my part, it should have said:
>
>> On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
>> seems to indicate that locks on an entity apply to the transaction, and
>> *doesn't* say that a lock on an entity should be dropped when refreshed
>> without
>> a specified LockModeType.
>
> I updated the thread below to make the correction (including a correction
> of a grammatical error).
>
>>
>> On the other hand, section 3.4.3 is quite explicit about no other
>> changes being allowed by other transactions until the end of the
>> transaction, which I guess makes sense.
>>
>> Would it even be possible to "unlock" a row on which we have a
>> pessimistic lock without committing the transaction? I don't think
>> that's possible, so that should clarify what needs to be done.
>>
>
> It is possible to call EntityManager#lock(Object entity, LockModeType
> lockMode) with a lower-level lock, but that request will be ignored.
> Hibernate will only upgrade a lock.
>
>> Thanks,
>> Sanne
>>
>>
>>
>> On 31 January 2018 at 20:51, Gail Badner wrote:
>> > HHH-12257 involves refreshing an entity that already has a
>> > pessimistic
>> > lock. In the test case attached to the jira,
>> > EntityManager#refresh(Object
>> > entity) is used to refresh the entity, instead of a method that
>> > specifies a
>> > particular LockModeType (e.g., #refresh(Object entity, LockModeType
>> > lockMode)).
The lock on the refreshed entity is dropped.
>> >
>> > A workaround is to determine the current lock mode using
>> > Session#getCurrentLockMode, which returns an org.hibernate.LockMode
>> > object,
>> > which can be converted to a LockModeType that can be used to call
>> > EntityManager#refresh(Object entity, LockModeType lockMode).
>> >
>> > Unfortunately, the code that converts org.hibernate.LockMode to
>> > LockModeType is "internal"
>> > (org.hibernate.internal.util.LockModeConverter).
>> >
>> > I'm on the fence about how this should work.
>> >
>> > The API for EntityManager#refresh(Object entity) does not say that an
>> > existing lock mode on the entity should be retained.
>> >

The following contains a correction from the original:

>> > On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
>> > seems to indicate that locks on an entity apply to the transaction, and
>> > *doesn't* say that a lock on an entity should be dropped when refreshed
>> > without
>> > a specified LockModeType.
>> >
>> > Does anyone have any guidance on how this should work?
>> >
>> > Thanks,
>> > Gail
>> > _______________________________________________
>> > hibernate-dev mailing list
>> > hibernate-dev at lists.jboss.org
>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gbadner at redhat.com Wed Jan 31 17:35:29 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 31 Jan 2018 14:35:29 -0800
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
Ah, I see.

Using H2, the lock is still held after calling EntityManager#refresh(Object
entity), in spite of Hibernate setting the lock mode to NONE for the
entity in the PersistenceContext.

On Wed, Jan 31, 2018 at 2:11 PM, Sanne Grinovero wrote:

> On 31 January 2018 at 21:48, Gail Badner wrote:
> > See below...
> >
> > On Wed, Jan 31, 2018 at 1:20 PM, Sanne Grinovero
> > wrote:
> >>
> >> Hi Gail,
> >>
> >> personally I wouldn't expect the pessimistic lock to be dropped.
> >> In case of optimistic locking, I would expect the version to be
> >> updated to the latest read - the one triggered by the refresh.
> >
> >
> > Yes, the version is updated, if necessary, on a refresh.
> >
> >>
> >>
> >> I just read section 3.4 as you suggested but I couldn't find where it
> >> suggests that "a lock on an entity should be dropped when refreshed";
> >> what makes you think it indicates that?
> >
> >
> > Ah, that was a typo on my part, it should have said:
> >
> >> On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency section
> >> seems to indicate that locks on an entity apply to the transaction, and
> >> *doesn't* say that a lock on an entity should be dropped when refreshed
> >> without
> >> a specified LockModeType.
> >
> > I updated the thread below to make the correction (including a
> correction of a grammatical error).
> >
> >>
> >> On the other hand, section 3.4.3 is quite explicit about no other
> >> changes being allowed by other transactions until the end of the
> >> transaction, which I guess makes sense.
> >>
> >> Would it even be possible to "unlock" a row on which we have a
> >> pessimistic lock without committing the transaction? I don't think
> >> that's possible, so that should clarify what needs to be done.
> >>
> >
> > It is possible to call EntityManager#lock(Object entity, LockModeType
> > lockMode) with a lower-level lock, but that request will be ignored.
> > Hibernate will only upgrade a lock.
>
> Yes I understand what Hibernate does.
I meant I don't think it would
> be possible to have it do otherwise, as I'm not aware of SQL
> instructions or JDBC methods to unlock a database entry w/o committing
> the transaction.
> I might be wrong: haven't used JDBC in years, hence I phrased it as a
> question... but if I'm right then clearly we can't "undo" the
> pessimistic lock.
>
> > I think that clarifies retaining the same lock-level for the entity when
> > calling EntityManager#refresh(Object entity).
>
> +1
>
> Thanks,
> Sanne
>
> > If no one has any comments that disagree with this in the next couple of
> > days, I'll go with that.
> >
> > Thanks!
> > Gail
> >
> >> Thanks,
> >> Sanne
> >>
> >>
> >>
> >> On 31 January 2018 at 20:51, Gail Badner wrote:
> >> > HHH-12257 involves refreshing an entity that already has a
> >> > pessimistic
> >> > lock. In the test case attached to the jira,
> >> > EntityManager#refresh(Object
> >> > entity) is used to refresh the entity, instead of a method that
> >> > specifies a
> >> > particular LockModeType (e.g., #refresh(Object entity, LockModeType
> >> > lockMode)). The lock on the refreshed entity is dropped.
> >> >
> >> > A workaround is to determine the current lock mode using
> >> > Session#getCurrentLockMode, which returns an org.hibernate.LockMode
> >> > object,
> >> > which can be converted to a LockModeType that can be used to call
> >> > EntityManager#refresh(Object entity, LockModeType lockMode).
> >> >
> >> > Unfortunately, the code that converts org.hibernate.LockMode to
> >> > LockModeType is "internal"
> >> > (org.hibernate.internal.util.LockModeConverter).
> >> >
> >> > I'm on the fence about how this should work.
> >> >
> >> > The API for EntityManager#refresh(Object entity) does not say that an
> >> > existing lock mode on the entity should be retained.
> >> >

The following contains a correction from the original:

> >> > On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency
> section
> >> > seems to indicate that locks on an entity apply to the transaction,
> and
> >> > *doesn't* say that a lock on an entity should be dropped when refreshed
> >> > without
> >> > a specified LockModeType.
> >> >
> >> > Does anyone have any guidance on how this should work?
> >> >
> >> > Thanks,
> >> > Gail
> >> > _______________________________________________
> >> > hibernate-dev mailing list
> >> > hibernate-dev at lists.jboss.org
> >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev

From steve at hibernate.org Wed Jan 31 17:38:05 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 22:38:05 +0000
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
>
> It is possible to call EntityManager#lock(Object entity, LockModeType
> lockMode) with a lower-level lock, but that request will be ignored.
> Hibernate will only upgrade a lock.
>

Sure, this is in keeping with most (all?) databases - a transaction can
only acquire more restrictive locks.

> I think that clarifies retaining the same lock-level for the entity when
> calling EntityManager#refresh(Object entity).
>
> If no one has any comments that disagree with this in the next couple of
> days, I'll go with that.
>

That's the correct handling.

From steve at hibernate.org Wed Jan 31 18:02:18 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Wed, 31 Jan 2018 23:02:18 +0000
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
It should not even set NONE in the PC. It should only overwrite if the new
lock mode is "greater than" the current one. For sure we used to have these
checks, but apparently there are no regression tests for them.

On Wed, Jan 31, 2018 at 4:55 PM Gail Badner wrote:

> Ah, I see.
>
> Using H2, the lock is still held after calling EntityManager#refresh(Object
> entity), in spite of Hibernate setting the lock mode to NONE for the
> entity in the PersistenceContext.
>
> On Wed, Jan 31, 2018 at 2:11 PM, Sanne Grinovero
> wrote:
>
> > On 31 January 2018 at 21:48, Gail Badner wrote:
> > > See below...
> > >
> > > On Wed, Jan 31, 2018 at 1:20 PM, Sanne Grinovero
> > > wrote:
> > >>
> > >> Hi Gail,
> > >>
> > >> personally I wouldn't expect the pessimistic lock to be dropped.
> > >> In case of optimistic locking, I would expect the version to be
> > >> updated to the latest read - the one triggered by the refresh.
> > >
> > >
> > > Yes, the version is updated, if necessary, on a refresh.
> > >
> > >>
> > >>
> > >> I just read section 3.4 as you suggested but I couldn't find where it
> > >> suggests that "a lock on an entity should be dropped when refreshed";
> > >> what makes you think it indicates that?
> > >
> > >
> > > Ah, that was a typo on my part, it should have said:
> > >
> > >> On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency
> section
> > >> seems to indicate that locks on an entity apply to the transaction,
> and
> > >> *doesn't* say that a lock on an entity should be dropped when refreshed
> > >> without
> > >> a specified LockModeType.
> > >
> > > I updated the thread below to make the correction (including a
> > correction of a grammatical error).
> > >
> > >>
> > >> On the other hand, section 3.4.3 is quite explicit about no other
> > >> changes being allowed by other transactions until the end of the
> > >> transaction, which I guess makes sense.
> > >>
> > >> Would it even be possible to "unlock" a row on which we have a
> > >> pessimistic lock without committing the transaction? I don't think
> > >> that's possible, so that should clarify what needs to be done.
> > >>
> > >
> > > It is possible to call EntityManager#lock(Object entity, LockModeType
> > > lockMode) with a lower-level lock, but that request will be ignored.
> > > Hibernate will only upgrade a lock.
> >
> > Yes I understand what Hibernate does. I meant I don't think it would
> > be possible to have it do otherwise, as I'm not aware of SQL
> > instructions or JDBC methods to unlock a database entry w/o committing
> > the transaction.
> > I might be wrong: haven't used JDBC in years, hence I phrased it as a
> > question... but if I'm right then clearly we can't "undo" the
> > pessimistic lock.
> >
> > > I think that clarifies retaining the same lock-level for the entity
> when
> > > calling EntityManager#refresh(Object entity).
> >
> > +1
> >
> > Thanks,
> > Sanne
> >
> > > If no one has any comments that disagree with this in the next couple
> of
> > > days, I'll go with that.
> > >
> > > Thanks!
> > > Gail
> > >
> > >> Thanks,
> > >> Sanne
> > >>
> > >>
> > >>
> > >> On 31 January 2018 at 20:51, Gail Badner wrote:
> > >> > HHH-12257 involves refreshing an entity that already has a
> > >> > pessimistic
> > >> > lock.
In the test case attached to the jira,
> >> > EntityManager#refresh(Object
> >> > entity) is used to refresh the entity, instead of a method that
> >> > specifies a
> >> > particular LockModeType (e.g., #refresh(Object entity, LockModeType
> >> > lockMode)). The lock on the refreshed entity is dropped.
> >> >
> >> > A workaround is to determine the current lock mode using
> >> > Session#getCurrentLockMode, which returns an org.hibernate.LockMode
> >> > object,
> >> > which can be converted to a LockModeType that can be used to call
> >> > EntityManager#refresh(Object entity, LockModeType lockMode).
> >> >
> >> > Unfortunately, the code that converts org.hibernate.LockMode to
> >> > LockModeType is "internal"
> >> > (org.hibernate.internal.util.LockModeConverter).
> >> >
> >> > I'm on the fence about how this should work.
> >> >
> >> > The API for EntityManager#refresh(Object entity) does not say that an
> >> > existing lock mode on the entity should be retained.
> >> >

The following contains a correction from the original:

> >> > On the other hand, in JPA 2.1 spec, 3.4 Locking and Concurrency
> section
> >> > seems to indicate that locks on an entity apply to the transaction,
> and
> >> > *doesn't* say that a lock on an entity should be dropped when refreshed
> >> > without
> >> > a specified LockModeType.
> >> >
> >> > Does anyone have any guidance on how this should work?
> >> >
> >> > Thanks,
> >> > Gail
> >> > _______________________________________________
> >> > hibernate-dev mailing list
> >> > hibernate-dev at lists.jboss.org
> >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev

From gbadner at redhat.com Wed Jan 31 18:12:36 2018
From: gbadner at redhat.com (Gail Badner)
Date: Wed, 31 Jan 2018 15:12:36 -0800
Subject: [hibernate-dev] Should EntityManager#refresh retain an existing lock?
In-Reply-To: References: Message-ID:
OK, sounds good.

Thanks,
Gail

On Wed, Jan 31, 2018 at 2:38 PM, Steve Ebersole wrote:

> It is possible to call EntityManager#lock(Object entity, LockModeType
>> lockMode) with a lower-level lock, but that request will be ignored.
>> Hibernate will only upgrade a lock.
>>
>
> Sure, this is in keeping with most (all?) databases - a transaction can
> only acquire more restrictive locks.
>
>
>
>> I think that clarifies retaining the same lock-level for the entity when
>> calling EntityManager#refresh(Object entity).
>>
>> If no one has any comments that disagree with this in the next couple of
>> days, I'll go with that.
>>
>
> That's the correct handling.
>
>

From steve at hibernate.org Wed Jan 31 21:03:04 2018
From: steve at hibernate.org (Steve Ebersole)
Date: Thu, 01 Feb 2018 02:03:04 +0000
Subject: [hibernate-dev] 5.3.0 release tomorrow
In-Reply-To: References: <06cca534-ddef-ce5e-31cd-ee418094c360@hibernate.org>
Message-ID:
I am waiting until I hear back from Joel from Sonatype regarding disabling
the JBoss Nexus -> OSSRH sync for ORM artifacts before I can release.

On Wed, Jan 31, 2018 at 10:06 AM Guillaume Smet wrote:

> On Wed, Jan 31, 2018 at 4:54 PM, Steve Ebersole
> wrote:
>> Yes, I agree. As it is, it is very likely that in 2 weeks we will have
>> ORM 5.3.0.CR1.
So even if you did do an OGM release at that time, we are
>> going to be limited in what exactly we can change if we find changes are
>> needed.
>>
>> Interestingly, this goes back to earlier discussions about "release
>> early/often" for the purpose of identifying regressions. Granted, y'all
>> were talking about 5.2 there, but the same happens here from the ORM
>> perspective in 5.3. We need to not be dragging non-stable releases
>> out as we continue to wait for +1's from all integrators (Search, OGM,
>> Spring, etc).
>>
>
> Yes, for a lot of reasons (good and bad) we were really bad with the ORM
> 5.2 support in OGM.
>
> We are very aware of that and the idea is to not do that again :).
>
> We (probably Davide) will let you know about our progress soon.
>
> --
> Guillaume
>