From louis.collet at skynet.be Mon Feb 5 10:03:05 2018
From: louis.collet at skynet.be (LouisCollet)
Date: Mon, 5 Feb 2018 08:03:05 -0700 (MST)
Subject: [wildfly-dev] WildFly 12 Plans, EE8, and Move to Quarterly Iterative Releases
In-Reply-To:
References:
Message-ID: <1517842985661-0.post@n5.nabble.com>

Proposed WildFly 12 Goals [Target Release Date = Feb 28, 2018]
---------------------------------------------------------------
+ Adopt new release model
+ Java 9 improvements
+ Servlet 4
+ JSON-B (incorporating Yasson)
+ CDI 2
+ JSF 2.3
+ Metaspace usage improvements
+ early/initial changes to accommodate the new provisioning effort (easy
slimming, updates, etc)

This message is to underline the JSF 2.3 priority in WildFly 12 for me!

Kind regards,
Louis

--
Sent from: http://wildfly-development.1055759.n5.nabble.com/

From rory.odonnell at oracle.com Tue Feb 13 05:25:01 2018
From: rory.odonnell at oracle.com (Rory O'Donnell)
Date: Tue, 13 Feb 2018 10:25:01 +0000
Subject: [wildfly-dev] JDK 10: First Release Candidate - JDK 10 b43
Message-ID: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com>

Hi Jason/Tomaz,

JDK 10 build 43 is our first JDK 10 Release Candidate [1]

JDK 10 Early Access build 43 is available at : - jdk.java.net/10/

Notable changes since previous email.

build 43

* JDK-8194764 - javac incorrectly flags deprecated for removal imports
* JDK-8196678 - avoid printing uninitialized buffer in os::print_memory_info on AIX
* JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c

Bug fixes reported by Open Source Projects:

* JDK-8196296 Lucene test crashes C2 compilation

Security Manager Survey

If you have written or maintain code that uses the SecurityManager or
related APIs such as the AccessController, then we would appreciate if you
would complete this survey: https://www.surveymonkey.com/r/RSGMF3K
More info on the survey [2]

Regards,
Rory

[1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html
[2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180213/0e6a8813/attachment.html

From sanne at hibernate.org Tue Feb 13 06:23:27 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 13 Feb 2018 11:23:27 +0000
Subject: [wildfly-dev] Automatic module names: JBoss Logger please?
Message-ID:

Hi all,

I would like to start converting some of our basic Hibernate building
blocks into proper Jigsaw modules, so as to enable better experimentation
with the more complex projects, but all our projects depend on JBoss
Logger.

Just noticed it doesn't even have an Automatic-Module-Name manifest entry;
that's kind of a huge blocker for any further progress.

Would it be possible to add this header quickly and release?

Thanks,
Sanne

From sanne at hibernate.org Tue Feb 13 09:53:14 2018
From: sanne at hibernate.org (Sanne Grinovero)
Date: Tue, 13 Feb 2018 14:53:14 +0000
Subject: [wildfly-dev] Automatic module names: JBoss Logger please?
In-Reply-To:
References:
Message-ID:

On 13 February 2018 at 14:49, David Lloyd wrote:
> Can you file an issue at https://issues.jboss.org/browse/JBLOGGING ?
Done, thank you David - https://issues.jboss.org/browse/JBLOGGING-130 From david.lloyd at redhat.com Tue Feb 13 09:53:47 2018 From: david.lloyd at redhat.com (David Lloyd) Date: Tue, 13 Feb 2018 08:53:47 -0600 Subject: [wildfly-dev] JDK 10: First Release Candidate - JDK 10 b43 In-Reply-To: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com> References: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com> Message-ID: Hi Rory, the security manager survey seems to limit respondents to single projects ("Which category below best describes your application?") but your average OSS hacker generally works on several different projects, and we in fact have hundreds of them, all of which can run with SM or in some cases actually implement it. So, I'm not quite sure how that's going to give you any sort of accurate picture of anything, especially as it's upstream projects that do (or do not) support SMs, but it's the downstream users that actually _using_ them. On Tue, Feb 13, 2018 at 4:25 AM, Rory O'Donnell wrote: > Hi Jason/Tomaz, > > JDK 10 build 43 is our first JDK 10 Release Candidate [1] > > JDK 10 Early Access build 43 is available at : - jdk.java.net/10/ > > Notable changes since previous email. > > build 43 > > JDK-8194764 - javac incorrectly flags deprecated for removal imports > JDK-8196678 - avoid printing uninitialized buffer in os::print_memory_info > on AIX > JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c > > Bug fixes reported by Open Source Projects : > > JDK-8196296 Lucene test crashes C2 compilation > > Security Manager Survey > > If you have written or maintain code that uses the SecurityManager or > related APIs such as the AccessController, > then we would appreciate if you would complete this survey: > https://www.surveymonkey.com/r/RSGMF3K > More info on the survey [2] > > > Regards, > Rory > > [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html > [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html > > -- > Rgds,Rory O'Donnell > Quality Engineering Manager > Oracle EMEA , Dublin, Ireland > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- - DML From rory.odonnell at oracle.com Tue Feb 13 09:57:21 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 13 Feb 2018 14:57:21 +0000 Subject: [wildfly-dev] JDK 10: First Release Candidate - JDK 10 b43 In-Reply-To: References: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com> Message-ID: <5ca46a62-7cb2-e186-4c85-e99f044edad4@oracle.com> Hi David, I'll pass on your feedback to the owner of the survey. Thanks,Rory On 13/02/2018 14:53, David Lloyd wrote: > Hi Rory, the security manager survey seems to limit respondents to > single projects ("Which category below best describes your > application?") but your average OSS hacker generally works on several > different projects, and we in fact have hundreds of them, all of which > can run with SM or in some cases actually implement it. So, I'm not > quite sure how that's going to give you any sort of accurate picture > of anything, especially as it's upstream projects that do (or do not) > support SMs, but it's the downstream users that actually _using_ them. 
> > On Tue, Feb 13, 2018 at 4:25 AM, Rory O'Donnell > wrote: >> Hi Jason/Tomaz, >> >> JDK 10 build 43 is our first JDK 10 Release Candidate [1] >> >> JDK 10 Early Access build 43 is available at : - jdk.java.net/10/ >> >> Notable changes since previous email. >> >> build 43 >> >> JDK-8194764 - javac incorrectly flags deprecated for removal imports >> JDK-8196678 - avoid printing uninitialized buffer in os::print_memory_info >> on AIX >> JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c >> >> Bug fixes reported by Open Source Projects : >> >> JDK-8196296 Lucene test crashes C2 compilation >> >> Security Manager Survey >> >> If you have written or maintain code that uses the SecurityManager or >> related APIs such as the AccessController, >> then we would appreciate if you would complete this survey: >> https://www.surveymonkey.com/r/RSGMF3K >> More info on the survey [2] >> >> >> Regards, >> Rory >> >> [1] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html >> [2] http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html >> >> -- >> Rgds,Rory O'Donnell >> Quality Engineering Manager >> Oracle EMEA , Dublin, Ireland >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin,Ireland From rory.odonnell at oracle.com Tue Feb 13 10:29:49 2018 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Tue, 13 Feb 2018 15:29:49 +0000 Subject: [wildfly-dev] JDK 10: First Release Candidate - JDK 10 b43 In-Reply-To: <5ca46a62-7cb2-e186-4c85-e99f044edad4@oracle.com> References: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com> <5ca46a62-7cb2-e186-4c85-e99f044edad4@oracle.com> Message-ID: <3f715d94-4882-6120-bf27-8404684d3591@oracle.com> Hi David, I have been asked to pass on the following to you, we really would appreciate your feedback ? I would suggest that David fill out the survey at least once and use the open text boxes to explain or list as many of the different use cases and applications that they support and the overall challenges they have. One of the last questions is very open-ended and asks for general thoughts on improving the SecurityManager . Thanks, Rory On 13/02/2018 14:57, Rory O'Donnell wrote: > Hi David, > > I'll pass on your feedback to the owner of the survey. > > Thanks,Rory > > > On 13/02/2018 14:53, David Lloyd wrote: >> Hi Rory, the security manager survey seems to limit respondents to >> single projects ("Which category below best describes your >> application?") but your average OSS hacker generally works on several >> different projects, and we in fact have hundreds of them, all of which >> can run with SM or in some cases actually implement it. So, I'm not >> quite sure how that's going to give you any sort of accurate picture >> of anything, especially as it's upstream projects that do (or do not) >> support SMs, but it's the downstream users that actually _using_ them. >> >> On Tue, Feb 13, 2018 at 4:25 AM, Rory O'Donnell >> wrote: >>> Hi Jason/Tomaz, >>> >>> JDK 10 build 43 is our first JDK 10 Release Candidate [1] >>> >>> JDK 10 Early Access build 43 is available at : - jdk.java.net/10/ >>> >>> Notable changes since previous email. 
>>> >>> build 43 >>> >>> JDK-8194764 - javac incorrectly flags deprecated for removal imports >>> JDK-8196678 - avoid printing uninitialized buffer in >>> os::print_memory_info >>> on AIX >>> JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c >>> >>> Bug fixes reported by Open Source Projects : >>> >>> JDK-8196296 Lucene test crashes C2 compilation >>> >>> Security Manager Survey >>> >>> If you have written or maintain code that uses the SecurityManager or >>> related APIs such as the AccessController, >>> then we would appreciate if you would complete this survey: >>> https://www.surveymonkey.com/r/RSGMF3K >>> More info on the survey [2] >>> >>> >>> Regards, >>> Rory >>> >>> [1] >>> http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html >>> >>> [2] >>> http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html >>> >>> >>> -- >>> Rgds,Rory O'Donnell >>> Quality Engineering Manager >>> Oracle EMEA , Dublin, Ireland >>> >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> > -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin,Ireland From david.lloyd at redhat.com Tue Feb 13 10:37:43 2018 From: david.lloyd at redhat.com (David Lloyd) Date: Tue, 13 Feb 2018 09:37:43 -0600 Subject: [wildfly-dev] JDK 10: First Release Candidate - JDK 10 b43 In-Reply-To: <3f715d94-4882-6120-bf27-8404684d3591@oracle.com> References: <5c195f3d-fc49-b18f-1914-1b182429e7c7@oracle.com> <5ca46a62-7cb2-e186-4c85-e99f044edad4@oracle.com> <3f715d94-4882-6120-bf27-8404684d3591@oracle.com> Message-ID: OK, I'll try to find time to do it. That's a big task though. On Tue, Feb 13, 2018 at 9:29 AM, Rory O'Donnell wrote: > Hi David, > > I have been asked to pass on the following to you, we really would > appreciate your feedback ? > > I would suggest that David fill out the survey at least once and use the > open text boxes to explain or list as many of the different use cases and > applications that they support and the overall challenges they have. One of > the last questions is very open-ended and asks for general thoughts on > improving the SecurityManager . > > Thanks, Rory > > > > On 13/02/2018 14:57, Rory O'Donnell wrote: >> >> Hi David, >> >> I'll pass on your feedback to the owner of the survey. >> >> Thanks,Rory >> >> >> On 13/02/2018 14:53, David Lloyd wrote: >>> >>> Hi Rory, the security manager survey seems to limit respondents to >>> single projects ("Which category below best describes your >>> application?") but your average OSS hacker generally works on several >>> different projects, and we in fact have hundreds of them, all of which >>> can run with SM or in some cases actually implement it. So, I'm not >>> quite sure how that's going to give you any sort of accurate picture >>> of anything, especially as it's upstream projects that do (or do not) >>> support SMs, but it's the downstream users that actually _using_ them. >>> >>> On Tue, Feb 13, 2018 at 4:25 AM, Rory O'Donnell >>> wrote: >>>> >>>> Hi Jason/Tomaz, >>>> >>>> JDK 10 build 43 is our first JDK 10 Release Candidate [1] >>>> >>>> JDK 10 Early Access build 43 is available at : - jdk.java.net/10/ >>>> >>>> Notable changes since previous email. 
>>>>
>>>> build 43
>>>>
>>>> JDK-8194764 - javac incorrectly flags deprecated for removal imports
>>>> JDK-8196678 - avoid printing uninitialized buffer in
>>>> os::print_memory_info
>>>> on AIX
>>>> JDK-8195837 - (tz) Upgrade time-zone data to tzdata2018c
>>>>
>>>> Bug fixes reported by Open Source Projects :
>>>>
>>>> JDK-8196296 Lucene test crashes C2 compilation
>>>>
>>>> Security Manager Survey
>>>>
>>>> If you have written or maintain code that uses the SecurityManager or
>>>> related APIs such as the AccessController,
>>>> then we would appreciate if you would complete this survey:
>>>> https://www.surveymonkey.com/r/RSGMF3K
>>>> More info on the survey [2]
>>>>
>>>>
>>>> Regards,
>>>> Rory
>>>>
>>>> [1]
>>>> http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000742.html
>>>> [2]
>>>> http://mail.openjdk.java.net/pipermail/jdk-dev/2018-February/000649.html
>>>>
>>>> --
>>>> Rgds,Rory O'Donnell
>>>> Quality Engineering Manager
>>>> Oracle EMEA , Dublin, Ireland
>>>>
>>>>
>>>> _______________________________________________
>>>> wildfly-dev mailing list
>>>> wildfly-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>
>>>
>>
>
> --
> Rgds,Rory O'Donnell
> Quality Engineering Manager
> Oracle EMEA, Dublin,Ireland
>

--
- DML

From stuart.w.douglas at gmail.com Tue Feb 13 21:24:01 2018
From: stuart.w.douglas at gmail.com (Stuart Douglas)
Date: Wed, 14 Feb 2018 03:24:01 +0100
Subject: [wildfly-dev] Error reporting on deployment failure
Message-ID:

Hi Everyone,

I have been thinking a bit about the way we report errors in WildFly, and
I think this is something that we can improve on. At the moment I think we
are way too liberal with what we report, which results in a ton of services
being listed in the error report that have nothing to do with the actual
failure.

As an example to work from I have created [1], which is a simple EJB
application. This consists of 10 EJB's, one of which has a reference to a
non-existent data source, the rest are simply empty no-op EJB's (just
@Stateless on an empty class).

This app fails to deploy because the java:global/NonExistant data source
is missing, which gives the failure description in [2]. This is ~120 lines
long and lists multiple services for every single component in the
application (part of the reason this is so long is because the failures are
reported twice, once when the deployment fails and once when the server
starts).

I think we can improve on this. I think in every failure case there will
be some root causes that are all the end user cares about, and we should
limit our reporting to just these cases, rather than listing every internal
service that can no longer start due to missing transitive deps.

In particular these root causes are:
1) A service threw an exception in its start() method and failed to start
2) A dependency is actually missing (i.e. not installed, not just not
started)

I think that one or both of these two cases will be the root cause of any
failure, and as such that is all we should be reporting on.

We already do an OK job of handling case 1), services that have failed, as
they get their own line item in the error report, however case 2) results
in a huge report that lists every service that has not come up, no matter
how far removed they are from the actual problem.
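To make the two categories concrete, here is a minimal, self-contained sketch
of that filtering rule. Everything in it (ProblemService, RootCauseFilter,
rootCauses) is invented for illustration; it is not the jboss-msc API and not
the actual wildfly-core change, it just shows the classification in isolation:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Invented model for illustration only; not the jboss-msc API.
final class ProblemService {
    final String name;
    final Throwable startException;   // non-null => the service failed in start()
    final Set<String> unresolvedDeps; // dependencies this service is still waiting on

    ProblemService(String name, Throwable startException, Set<String> unresolvedDeps) {
        this.name = name;
        this.startException = startException;
        this.unresolvedDeps = unresolvedDeps;
    }
}

public class RootCauseFilter {

    // Keeps only root causes: services whose start() threw, and services with a
    // dependency that is not installed at all. Services that are down purely
    // because an installed dependency is down are omitted from the report.
    static List<String> rootCauses(List<ProblemService> problems, Set<String> installed) {
        List<String> report = new ArrayList<>();
        for (ProblemService p : problems) {
            if (p.startException != null) {                       // case 1
                report.add(p.name + " failed: " + p.startException);
                continue;
            }
            for (String dep : p.unresolvedDeps) {
                if (!installed.contains(dep)) {                   // case 2
                    report.add(p.name + " is missing [" + dep + "]");
                }
            }
            // anything else is only transitively affected and is not reported
        }
        return report;
    }

    public static void main(String[] args) {
        // ErrorEjb waits on a binding that is not installed; OtherEjb waits on ErrorEjb.
        Set<String> installed = Set.of("ErrorEjb", "OtherEjb");
        List<ProblemService> problems = List.of(
                new ProblemService("ErrorEjb", null, Set.of("java:global/NonExistant")),
                new ProblemService("OtherEjb", null, Set.of("ErrorEjb")));
        // Prints only the ErrorEjb line; OtherEjb is down transitively and stays out.
        rootCauses(problems, installed).forEach(System.out::println);
    }
}

A real implementation of course has to work against the MSC service registry
rather than a toy model like this; the sketch only illustrates the two-case
rule described above.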
I think we could make a change to the way this is reported so that only direct problems are reported [3], so the error report would look something like [4] (note that this commit only changes the operation report, the container state reporting after boot is still quite verbose). I am guessing that this is not as simple as it sounds, otherwise it would have already been addressed, but I think we can do better that the current state of affairs so I thought I would get a discussion started. Stuart [1] https://github.com/stuartwdouglas/errorreporting [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301eeb1f389fae8 [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1fbc831edf290971d54c13dd1c5d15719454f85 [4] https://gist.github.com/stuartwdouglas/14040534da8d07f937d02f2f08099e8d -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180214/c77bd282/attachment.html From brian.stansberry at redhat.com Wed Feb 14 10:43:47 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 14 Feb 2018 09:43:47 -0600 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas wrote: > Hi Everyone, > > I have been thinking a bit about the way we report errors in WildFly, and > I think this is something that we can improve on. At the moment I think we > are way to liberal with what we report, which results in a ton of services > being listed in the error report that have nothing to do with the actual > failure. > > As an example to work from I have created [1], which is a simple EJB > application. This consists of 10 EJB's, one of which has a reference to a > non-existant data source, the rest are simply empty no-op EJB's (just > @Stateless on an empty class). > > This app fails to deploy because the java:global/NonExistant data source > is missing, which gives the failure description in [2]. This is ~120 lines > long and lists multiple services for every single component in the > application (part of the reason this is so long is because the failures are > reported twice, once when the deployment fails and once when the server > starts). > > I think we can improve on this. I think in every failure case there will > be some root causes that are all the end user cares about, and we should > limit our reporting to just these cases, rather than listing every internal > service that can no longer start due to missing transitive deps. > > In particular these root causes are: > 1) A service threw and exception in its start() method and failed to start > 2) A dependency is actually missing (i.e. not installed, not just not > started) > > I think that one or both of these two cases will be the root cause of any > failure, and as such that is all we should be reporting on. > > We already do an OK job of handing case 1), services that have failed, as > they get their own line item in the error report, however case 2) results > in a huge report that lists every service that has not come up, no matter > how far removed they are from the actual problem. > If the 2) case can be correctly determined, then +1 to reporting some new section and not reporting the current "WFLYCTL0180: Services with missing/unavailable dependencies" section. The WFLYCTL0180 section could only be reported as a fallback if for some reason the 1) and 2) stuff is empty. 
> > I think we could make a change to the way this is reported so that only > direct problems are reported [3], so the error report would look something > like [4] (note that this commit only changes the operation report, the > container state reporting after boot is still quite verbose). > I think the container state reporting is ok. IMHO the proper fix to the container state reporting is to rollback and fail boot if Stage.RUNTIME failures occur. Configurable, but rollback by default. If we did that there would be no container state reporting. If you deploy your broken app post-boot you shouldn't see the container state reporting because by the time the report is written the op should have rolled back and the services are no longer "missing". It's only because we don't rollback on boot that this is reported. > > I am guessing that this is not as simple as it sounds, otherwise it would > have already been addressed, but I think we can do better that the current > state of affairs so I thought I would get a discussion started. > It sounds pretty simple. Any "problem" ServiceController exposes its ServiceContainer, and if relying on that registry to check if a missing dependency is installed is not correct for some reason, the ModelControllerImpl exposes its ServiceRegistry via a package protected getter. So AbstractOperationContext can provide that to the SVH. > Stuart > > [1] https://github.com/stuartwdouglas/errorreporting > [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301eeb1f389fa > e8 > [3] https://github.com/stuartwdouglas/wildfly-core/commit/ > a1fbc831edf290971d54c13dd1c5d15719454f85 > [4] https://gist.github.com/stuartwdouglas/14040534da8d07f937d02f2f08099e > 8d > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180214/d9056298/attachment-0001.html From stuart.w.douglas at gmail.com Wed Feb 14 22:37:13 2018 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 15 Feb 2018 04:37:13 +0100 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas < > stuart.w.douglas at gmail.com> wrote: > >> Hi Everyone, >> >> I have been thinking a bit about the way we report errors in WildFly, and >> I think this is something that we can improve on. At the moment I think we >> are way to liberal with what we report, which results in a ton of services >> being listed in the error report that have nothing to do with the actual >> failure. >> >> As an example to work from I have created [1], which is a simple EJB >> application. This consists of 10 EJB's, one of which has a reference to a >> non-existant data source, the rest are simply empty no-op EJB's (just >> @Stateless on an empty class). >> >> This app fails to deploy because the java:global/NonExistant data source >> is missing, which gives the failure description in [2]. 
This is ~120 lines
>> long and lists multiple services for every single component in the
>> application (part of the reason this is so long is because the failures are
>> reported twice, once when the deployment fails and once when the server
>> starts).
>>
>> I think we can improve on this. I think in every failure case there will
>> be some root causes that are all the end user cares about, and we should
>> limit our reporting to just these cases, rather than listing every internal
>> service that can no longer start due to missing transitive deps.
>>
>> In particular these root causes are:
>> 1) A service threw and exception in its start() method and failed to start
>> 2) A dependency is actually missing (i.e. not installed, not just not
>> started)
>>
>> I think that one or both of these two cases will be the root cause of any
>> failure, and as such that is all we should be reporting on.
>>
>> We already do an OK job of handing case 1), services that have failed, as
>> they get their own line item in the error report, however case 2) results
>> in a huge report that lists every service that has not come up, no matter
>> how far removed they are from the actual problem.
>>
>
> If the 2) case can be correctly determined, then +1 to reporting some new
> section and not reporting the current "WFLYCTL0180: Services with
> missing/unavailable dependencies" section. The WFLYCTL0180 section could
> only be reported as a fallback if for some reason the 1) and 2) stuff is
> empty.
>

I have adjusted this a bit so a service with mode NEVER is treated the
same as if it is missing. I am pretty sure that with this change 1) and 2)
will cover 100% of cases.


>
>
>>
>> I think we could make a change to the way this is reported so that only
>> direct problems are reported [3], so the error report would look something
>> like [4] (note that this commit only changes the operation report, the
>> container state reporting after boot is still quite verbose).
>>
>
> I think the container state reporting is ok. IMHO the proper fix to the
> container state reporting is to rollback and fail boot if Stage.RUNTIME
> failures occur. Configurable, but rollback by default. If we did that there
> would be no container state reporting. If you deploy your broken app
> post-boot you shouldn't see the container state reporting because by the
> time the report is written the op should have rolled back and the services
> are no longer "missing". It's only because we don't rollback on boot that
> this is reported.
>

I don't think it is necessary to report on services that are only down
because their dependencies are down. It basically just adds noise, as they
are not really related to the underlying issue. I have expanded my branch
to also do this:

https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:error-reporting?expand=1

This ends up with very concise reports that just detail the services that
are the root cause of the problem:
https://gist.github.com/stuartwdouglas/42a68aaaa130ceee38ca5f66d0040de3

Does this approach seem reasonable? If a user really does want a complete
dump of all services that are down that information is still available
directly from MSC anyway.

Stuart

>
>>
>>
>> I am guessing that this is not as simple as it sounds, otherwise it would
>> have already been addressed, but I think we can do better that the current
>> state of affairs so I thought I would get a discussion started.
>>
>
> It sounds pretty simple.
Any "problem" ServiceController exposes its > ServiceContainer, and if relying on that registry to check if a missing > dependency is installed is not correct for some reason, the > ModelControllerImpl exposes its ServiceRegistry via a package protected > getter. So AbstractOperationContext can provide that to the SVH. > > >> Stuart >> >> [1] https://github.com/stuartwdouglas/errorreporting >> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301e >> eb1f389fae8 >> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1 >> fbc831edf290971d54c13dd1c5d15719454f85 >> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f93 >> 7d02f2f08099e8d >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180215/19cb1338/attachment.html From kkhan at redhat.com Thu Feb 15 05:31:36 2018 From: kkhan at redhat.com (Kabir Khan) Date: Thu, 15 Feb 2018 10:31:36 +0000 Subject: [wildfly-dev] WildFly Feature Freeze Message-ID: <62AE2C0D-32BE-432A-A5E3-F09B446A9600@redhat.com> Hi, Please note that the feature freeze for WildFly 12 is now effective. You can still open pull requests with new features, but they will not be merged until WildFly 12 has been tagged. PRs containing bug fixes can still be opened, and we will do our best to merge those. Please take care not to add features in any component upgrades created to fix bugs. Thanks, Kabir From bmcwhirt at redhat.com Thu Feb 15 11:20:05 2018 From: bmcwhirt at redhat.com (Bob McWhirter) Date: Thu, 15 Feb 2018 16:20:05 +0000 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: Agreed. I?ve had to track giant errors from A to B to C etc only to figure out Z was missing. On Wed, Feb 14, 2018 at 10:38 PM Stuart Douglas wrote: > On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry < > brian.stansberry at redhat.com> wrote: > >> On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas < >> stuart.w.douglas at gmail.com> wrote: >> >>> Hi Everyone, >>> >>> I have been thinking a bit about the way we report errors in WildFly, >>> and I think this is something that we can improve on. At the moment I think >>> we are way to liberal with what we report, which results in a ton of >>> services being listed in the error report that have nothing to do with the >>> actual failure. >>> >>> As an example to work from I have created [1], which is a simple EJB >>> application. This consists of 10 EJB's, one of which has a reference to a >>> non-existant data source, the rest are simply empty no-op EJB's (just >>> @Stateless on an empty class). >>> >>> This app fails to deploy because the java:global/NonExistant data source >>> is missing, which gives the failure description in [2]. This is ~120 lines >>> long and lists multiple services for every single component in the >>> application (part of the reason this is so long is because the failures are >>> reported twice, once when the deployment fails and once when the server >>> starts). >>> >>> I think we can improve on this. 
I think in every failure case there will >>> be some root causes that are all the end user cares about, and we should >>> limit our reporting to just these cases, rather than listing every internal >>> service that can no longer start due to missing transitive deps. >>> >>> In particular these root causes are: >>> 1) A service threw and exception in its start() method and failed to >>> start >>> 2) A dependency is actually missing (i.e. not installed, not just not >>> started) >>> >>> I think that one or both of these two cases will be the root cause of >>> any failure, and as such that is all we should be reporting on. >>> >>> We already do an OK job of handing case 1), services that have failed, >>> as they get their own line item in the error report, however case 2) >>> results in a huge report that lists every service that has not come up, no >>> matter how far removed they are from the actual problem. >>> >> >> If the 2) case can be correctly determined, then +1 to reporting some new >> section and not reporting the current "WFLYCTL0180: Services with >> missing/unavailable dependencies" section. The WFLYCTL0180 section could >> only be reported as a fallback if for some reason the 1) and 2) stuff is >> empty. >> > > I have adjusted this a bit so a service with mode NEVER is treated the > same as if it is missing. I am pretty sure that with this change 1) and 2) > will cover 100% of cases. > > > >> >> >>> >>> I think we could make a change to the way this is reported so that only >>> direct problems are reported [3], so the error report would look something >>> like [4] (note that this commit only changes the operation report, the >>> container state reporting after boot is still quite verbose). >>> >> >> I think the container state reporting is ok. IMHO the proper fix to the >> container state reporting is to rollback and fail boot if Stage.RUNTIME >> failures occur. Configurable, but rollback by default. If we did that there >> would be no container state reporting. If you deploy your broken app >> post-boot you shouldn't see the container state reporting because by the >> time the report is written the op should have rolled back and the services >> are no longer "missing". It's only because we don't rollback on boot that >> this is reported. >> > > I don't think it is nessesary to report on services that are only down > because their dependents are down. It basically just adds noise, as they > are not really related to the underlying issue. I have expanded my branch > to also do this: > > > https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:error-reporting?expand=1 > > This ends up with very concise reports that just detail the services that > are the root cause of the problem: > https://gist.github.com/stuartwdouglas/42a68aaaa130ceee38ca5f66d0040de3 > > Does this approach seem reasonable? lf a user really does want a complete > dump of all services that are down that information is still available > directly from MSC anyway. > > Stuart > > >> >>> >>> I am guessing that this is not as simple as it sounds, otherwise it >>> would have already been addressed, but I think we can do better that the >>> current state of affairs so I thought I would get a discussion started. >>> >> >> It sounds pretty simple. 
Any "problem" ServiceController exposes its
>> ServiceContainer, and if relying on that registry to check if a missing
>> dependency is installed is not correct for some reason, the
>> ModelControllerImpl exposes its ServiceRegistry via a package protected
>> getter. So AbstractOperationContext can provide that to the SVH.
>>
>>
>>> Stuart
>>>
>>> [1] https://github.com/stuartwdouglas/errorreporting
>>> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301eeb1f389fae8
>>> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1fbc831edf290971d54c13dd1c5d15719454f85
>>> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f937d02f2f08099e8d
>>>
>>> _______________________________________________
>>> wildfly-dev mailing list
>>> wildfly-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>
>>
>>
>> --
>> Brian Stansberry
>> Manager, Senior Principal Software Engineer
>> Red Hat
>>
> _______________________________________________
> wildfly-dev mailing list
> wildfly-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/wildfly-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180215/9a031587/attachment-0001.html

From tomaz.cerar at gmail.com Thu Feb 15 11:28:09 2018
From: tomaz.cerar at gmail.com (Tomaž Cerar)
Date: Thu, 15 Feb 2018 17:28:09 +0100
Subject: [wildfly-dev] Error reporting on deployment failure
In-Reply-To:
References:
Message-ID:

Hey,

One of the things we talked about at the f2f, but never got into the
details of, that would help with this is adding capabilities to
deployments. That way, on a failure you would get an error message telling
you which capability is not available: for example, that a datasource is
missing and where you can define it, or that the datasource defined at
address xyz is in error, so you know where to look to fix it.

To start we would need to expose the capability registry and a few other
things to DUPs and continue from there.

--
tomaz

On Thu, Feb 15, 2018 at 4:37 AM, Stuart Douglas wrote:
>
>
> On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry <
> brian.stansberry at redhat.com> wrote:
>
>> On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas <
>> stuart.w.douglas at gmail.com> wrote:
>>
>>> Hi Everyone,
>>>
>>> I have been thinking a bit about the way we report errors in WildFly,
>>> and I think this is something that we can improve on. At the moment I think
>>> we are way to liberal with what we report, which results in a ton of
>>> services being listed in the error report that have nothing to do with the
>>> actual failure.
>>>
>>> As an example to work from I have created [1], which is a simple EJB
>>> application. This consists of 10 EJB's, one of which has a reference to a
>>> non-existant data source, the rest are simply empty no-op EJB's (just
>>> @Stateless on an empty class).
>>>
>>> This app fails to deploy because the java:global/NonExistant data source
>>> is missing, which gives the failure description in [2]. This is ~120 lines
>>> long and lists multiple services for every single component in the
>>> application (part of the reason this is so long is because the failures are
>>> reported twice, once when the deployment fails and once when the server
>>> starts).
>>>
>>> I think we can improve on this.
I think in every failure case there will >>> be some root causes that are all the end user cares about, and we should >>> limit our reporting to just these cases, rather than listing every internal >>> service that can no longer start due to missing transitive deps. >>> >>> In particular these root causes are: >>> 1) A service threw and exception in its start() method and failed to >>> start >>> 2) A dependency is actually missing (i.e. not installed, not just not >>> started) >>> >>> I think that one or both of these two cases will be the root cause of >>> any failure, and as such that is all we should be reporting on. >>> >>> We already do an OK job of handing case 1), services that have failed, >>> as they get their own line item in the error report, however case 2) >>> results in a huge report that lists every service that has not come up, no >>> matter how far removed they are from the actual problem. >>> >> >> If the 2) case can be correctly determined, then +1 to reporting some new >> section and not reporting the current "WFLYCTL0180: Services with >> missing/unavailable dependencies" section. The WFLYCTL0180 section could >> only be reported as a fallback if for some reason the 1) and 2) stuff is >> empty. >> > > I have adjusted this a bit so a service with mode NEVER is treated the > same as if it is missing. I am pretty sure that with this change 1) and 2) > will cover 100% of cases. > > > >> >> >>> >>> I think we could make a change to the way this is reported so that only >>> direct problems are reported [3], so the error report would look something >>> like [4] (note that this commit only changes the operation report, the >>> container state reporting after boot is still quite verbose). >>> >> >> I think the container state reporting is ok. IMHO the proper fix to the >> container state reporting is to rollback and fail boot if Stage.RUNTIME >> failures occur. Configurable, but rollback by default. If we did that there >> would be no container state reporting. If you deploy your broken app >> post-boot you shouldn't see the container state reporting because by the >> time the report is written the op should have rolled back and the services >> are no longer "missing". It's only because we don't rollback on boot that >> this is reported. >> > > I don't think it is nessesary to report on services that are only down > because their dependents are down. It basically just adds noise, as they > are not really related to the underlying issue. I have expanded my branch > to also do this: > > https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:error-reporting?expand=1 > > This ends up with very concise reports that just detail the services that > are the root cause of the problem: https://gist.github.com/stuartwdouglas/ > 42a68aaaa130ceee38ca5f66d0040de3 > > Does this approach seem reasonable? lf a user really does want a complete > dump of all services that are down that information is still available > directly from MSC anyway. > > Stuart > > >> >>> >>> I am guessing that this is not as simple as it sounds, otherwise it >>> would have already been addressed, but I think we can do better that the >>> current state of affairs so I thought I would get a discussion started. >>> >> >> It sounds pretty simple. 
Any "problem" ServiceController exposes its >> ServiceContainer, and if relying on that registry to check if a missing >> dependency is installed is not correct for some reason, the >> ModelControllerImpl exposes its ServiceRegistry via a package protected >> getter. So AbstractOperationContext can provide that to the SVH. >> >> >>> Stuart >>> >>> [1] https://github.com/stuartwdouglas/errorreporting >>> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301e >>> eb1f389fae8 >>> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1 >>> fbc831edf290971d54c13dd1c5d15719454f85 >>> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f93 >>> 7d02f2f08099e8d >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180215/c6558e6e/attachment.html From brian.stansberry at redhat.com Thu Feb 15 12:51:04 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 15 Feb 2018 11:51:04 -0600 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 9:37 PM, Stuart Douglas wrote: > > > On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry < > brian.stansberry at redhat.com> wrote: > >> On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas < >> stuart.w.douglas at gmail.com> wrote: >> >>> Hi Everyone, >>> >>> I have been thinking a bit about the way we report errors in WildFly, >>> and I think this is something that we can improve on. At the moment I think >>> we are way to liberal with what we report, which results in a ton of >>> services being listed in the error report that have nothing to do with the >>> actual failure. >>> >>> As an example to work from I have created [1], which is a simple EJB >>> application. This consists of 10 EJB's, one of which has a reference to a >>> non-existant data source, the rest are simply empty no-op EJB's (just >>> @Stateless on an empty class). >>> >>> This app fails to deploy because the java:global/NonExistant data source >>> is missing, which gives the failure description in [2]. This is ~120 lines >>> long and lists multiple services for every single component in the >>> application (part of the reason this is so long is because the failures are >>> reported twice, once when the deployment fails and once when the server >>> starts). >>> >>> I think we can improve on this. I think in every failure case there will >>> be some root causes that are all the end user cares about, and we should >>> limit our reporting to just these cases, rather than listing every internal >>> service that can no longer start due to missing transitive deps. >>> >>> In particular these root causes are: >>> 1) A service threw and exception in its start() method and failed to >>> start >>> 2) A dependency is actually missing (i.e. not installed, not just not >>> started) >>> >>> I think that one or both of these two cases will be the root cause of >>> any failure, and as such that is all we should be reporting on. 
>>> >>> We already do an OK job of handing case 1), services that have failed, >>> as they get their own line item in the error report, however case 2) >>> results in a huge report that lists every service that has not come up, no >>> matter how far removed they are from the actual problem. >>> >> >> If the 2) case can be correctly determined, then +1 to reporting some new >> section and not reporting the current "WFLYCTL0180: Services with >> missing/unavailable dependencies" section. The WFLYCTL0180 section could >> only be reported as a fallback if for some reason the 1) and 2) stuff is >> empty. >> > > I have adjusted this a bit so a service with mode NEVER is treated the > same as if it is missing. I am pretty sure that with this change 1) and 2) > will cover 100% of cases. > > > >> >> >>> >>> I think we could make a change to the way this is reported so that only >>> direct problems are reported [3], so the error report would look something >>> like [4] (note that this commit only changes the operation report, the >>> container state reporting after boot is still quite verbose). >>> >> >> I think the container state reporting is ok. IMHO the proper fix to the >> container state reporting is to rollback and fail boot if Stage.RUNTIME >> failures occur. Configurable, but rollback by default. If we did that there >> would be no container state reporting. If you deploy your broken app >> post-boot you shouldn't see the container state reporting because by the >> time the report is written the op should have rolled back and the services >> are no longer "missing". It's only because we don't rollback on boot that >> this is reported. >> > > I don't think it is nessesary to report on services that are only down > because their dependents are down. It basically just adds noise, as they > are not really related to the underlying issue. I have expanded my branch > to also do this: > > https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:error-reporting?expand=1 > > This ends up with very concise reports that just detail the services that > are the root cause of the problem: https://gist.github.com/stuartwdouglas/ > 42a68aaaa130ceee38ca5f66d0040de3 > > Does this approach seem reasonable? lf a user really does want a complete > dump of all services that are down that information is still available > directly from MSC anyway. > It seems reasonable. I'm going to get all lawyerly now. This is because while we don't treat our failure messages as "API" requiring compatibility, for these particular ones I think we should be as careful as possible. 1) "WFLYCTL0180: Services with missing/unavailable dependencies" => ["jboss.naming.context.java.comp.\"error-reporting-1.0-SNAPSHOT\".\"error-reporting-1.0-SNAPSHOT\".ErrorEjb.env.\"com.stuartdouglas.ErrorEjb\".nonExistant is missing [jboss.naming.context.java.global.NonExistant]"] Here you've somewhat repurposed an existing message. That can be quite ok IMHO so long as what's gone is just noise and the English meaning of the message is still correct. Basically, what did "missing/unavailable dependencies" mean before, what does it mean now, and is there a clear story behind the shift from A to B. The "missing" part is pretty clear -- not installed or NEVER is "missing". For "unavailable" now we've dropped the installed but unstarted ones. If we're including the ones that directly depend on *failed* services then that's a coherent definition of "unavailable". If we're not then "unavailable" is misleading. 
Sorry, I'm juggling stuff so I haven't checked the code. :( 2) I think "38 additional services are down due to their dependencies being missing or failed" should have a message code, not NONE. It's a separate message that may or may not appear. Plus it's new. And I think we're better off in these complex message structures to be precise vs trying to avoid codes for cosmetic reasons. > Stuart > > >> >>> >>> I am guessing that this is not as simple as it sounds, otherwise it >>> would have already been addressed, but I think we can do better that the >>> current state of affairs so I thought I would get a discussion started. >>> >> >> It sounds pretty simple. Any "problem" ServiceController exposes its >> ServiceContainer, and if relying on that registry to check if a missing >> dependency is installed is not correct for some reason, the >> ModelControllerImpl exposes its ServiceRegistry via a package protected >> getter. So AbstractOperationContext can provide that to the SVH. >> >> >>> Stuart >>> >>> [1] https://github.com/stuartwdouglas/errorreporting >>> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301e >>> eb1f389fae8 >>> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1 >>> fbc831edf290971d54c13dd1c5d15719454f85 >>> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f93 >>> 7d02f2f08099e8d >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180215/cf8dae55/attachment-0001.html From kkhan at redhat.com Thu Feb 15 13:19:49 2018 From: kkhan at redhat.com (Kabir Khan) Date: Thu, 15 Feb 2018 18:19:49 +0000 Subject: [wildfly-dev] WildFly 12.0.0.Beta1 Message-ID: <28755EE0-E513-4B61-BF86-7C0F05C596E9@redhat.com> Hi, WildFly 12.0.0.Beta1 has been tagged. The tag can be found at https://github.com/wildfly/wildfly/tree/12.0.0.Beta1. I am in the process of releasing it on Nexus, and it should be available shortly. I don't know if Jason intends to upload this for general download, or if we will wait until we release Final in a few weeks. I'll let him decide :) As for Jira housekeeping I deleted some intermediate 12 releases for WildFly 12, and renamed 12.0.0.Alpha1 (which we have been using up to now) to 12.0.0.Beta1. All unresolved issues when releasing 12.0.0.Beta1 in Jira have been moved to the next release 12.0.0.CR1. Please use 12.0.0.CR1 to resolve issues until the feature freeze. 
Thanks, Kabir From stuart.w.douglas at gmail.com Thu Feb 15 17:32:07 2018 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 15 Feb 2018 23:32:07 +0100 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: On Thu, Feb 15, 2018 at 6:51 PM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > On Wed, Feb 14, 2018 at 9:37 PM, Stuart Douglas < > stuart.w.douglas at gmail.com> wrote: > >> >> >> On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry < >> brian.stansberry at redhat.com> wrote: >> >>> On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas < >>> stuart.w.douglas at gmail.com> wrote: >>> >>>> Hi Everyone, >>>> >>>> I have been thinking a bit about the way we report errors in WildFly, >>>> and I think this is something that we can improve on. At the moment I think >>>> we are way to liberal with what we report, which results in a ton of >>>> services being listed in the error report that have nothing to do with the >>>> actual failure. >>>> >>>> As an example to work from I have created [1], which is a simple EJB >>>> application. This consists of 10 EJB's, one of which has a reference to a >>>> non-existant data source, the rest are simply empty no-op EJB's (just >>>> @Stateless on an empty class). >>>> >>>> This app fails to deploy because the java:global/NonExistant data >>>> source is missing, which gives the failure description in [2]. This is ~120 >>>> lines long and lists multiple services for every single component in the >>>> application (part of the reason this is so long is because the failures are >>>> reported twice, once when the deployment fails and once when the server >>>> starts). >>>> >>>> I think we can improve on this. I think in every failure case there >>>> will be some root causes that are all the end user cares about, and we >>>> should limit our reporting to just these cases, rather than listing every >>>> internal service that can no longer start due to missing transitive deps. >>>> >>>> In particular these root causes are: >>>> 1) A service threw and exception in its start() method and failed to >>>> start >>>> 2) A dependency is actually missing (i.e. not installed, not just not >>>> started) >>>> >>>> I think that one or both of these two cases will be the root cause of >>>> any failure, and as such that is all we should be reporting on. >>>> >>>> We already do an OK job of handing case 1), services that have failed, >>>> as they get their own line item in the error report, however case 2) >>>> results in a huge report that lists every service that has not come up, no >>>> matter how far removed they are from the actual problem. >>>> >>> >>> If the 2) case can be correctly determined, then +1 to reporting some >>> new section and not reporting the current "WFLYCTL0180: Services with >>> missing/unavailable dependencies" section. The WFLYCTL0180 section could >>> only be reported as a fallback if for some reason the 1) and 2) stuff is >>> empty. >>> >> >> I have adjusted this a bit so a service with mode NEVER is treated the >> same as if it is missing. I am pretty sure that with this change 1) and 2) >> will cover 100% of cases. >> >> >> >>> >>> >>>> >>>> I think we could make a change to the way this is reported so that only >>>> direct problems are reported [3], so the error report would look something >>>> like [4] (note that this commit only changes the operation report, the >>>> container state reporting after boot is still quite verbose). >>>> >>> >>> I think the container state reporting is ok. 
IMHO the proper fix to the >>> container state reporting is to rollback and fail boot if Stage.RUNTIME >>> failures occur. Configurable, but rollback by default. If we did that there >>> would be no container state reporting. If you deploy your broken app >>> post-boot you shouldn't see the container state reporting because by the >>> time the report is written the op should have rolled back and the services >>> are no longer "missing". It's only because we don't rollback on boot that >>> this is reported. >>> >> >> I don't think it is nessesary to report on services that are only down >> because their dependents are down. It basically just adds noise, as they >> are not really related to the underlying issue. I have expanded my branch >> to also do this: >> >> https://github.com/wildfly/wildfly-core/compare/master...stu >> artwdouglas:error-reporting?expand=1 >> >> This ends up with very concise reports that just detail the services that >> are the root cause of the problem: https://gist.github.c >> om/stuartwdouglas/42a68aaaa130ceee38ca5f66d0040de3 >> >> Does this approach seem reasonable? lf a user really does want a complete >> dump of all services that are down that information is still available >> directly from MSC anyway. >> > > It seems reasonable. > > I'm going to get all lawyerly now. This is because while we don't treat > our failure messages as "API" requiring compatibility, for these particular > ones I think we should be as careful as possible. > > 1) "WFLYCTL0180: Services with missing/unavailable dependencies" => > ["jboss.naming.context.java.comp.\"error-reporting-1.0- > SNAPSHOT\".\"error-reporting-1.0-SNAPSHOT\".ErrorEjb.env.\" > com.stuartdouglas.ErrorEjb\".nonExistant is missing > [jboss.naming.context.java.global.NonExistant]"] > > Here you've somewhat repurposed an existing message. That can be quite ok > IMHO so long as what's gone is just noise and the English meaning of the > message is still correct. Basically, what did "missing/unavailable > dependencies" mean before, what does it mean now, and is there a clear > story behind the shift from A to B. The "missing" part is pretty clear -- > not installed or NEVER is "missing". For "unavailable" now we've dropped > the installed but unstarted ones. If we're including the ones that directly > depend on *failed* services then that's a coherent definition of > "unavailable". If we're not then "unavailable" is misleading. Sorry, I'm > juggling stuff so I haven't checked the code. :( > Previously this section would display every service that was down due to its dependencies being down. This would include services that were many levels away from the actual problem (e.g. if A depends on B which depends on C which depends on D which is down, A, B and C would all be listed in this section). This change displays the same information, but only for direct dependents, so in the example about only C would be listed in this section. The 'New missing/unsatisfied dependencies:' section in the container state report is similar. Previously it would list every service that had failed to come up, now it will only list services that are directly affected by a problem. > > 2) I think "38 additional services are down due to their dependencies > being missing or failed" should have a message code, not NONE. It's a > separate message that may or may not appear. Plus it's new. And I think > we're better off in these complex message structures to be precise vs > trying to avoid codes for cosmetic reasons. > Ok. 
Stuart > > > >> Stuart >> >> >>> >>>> >>>> I am guessing that this is not as simple as it sounds, otherwise it >>>> would have already been addressed, but I think we can do better that the >>>> current state of affairs so I thought I would get a discussion started. >>>> >>> >>> It sounds pretty simple. Any "problem" ServiceController exposes its >>> ServiceContainer, and if relying on that registry to check if a missing >>> dependency is installed is not correct for some reason, the >>> ModelControllerImpl exposes its ServiceRegistry via a package protected >>> getter. So AbstractOperationContext can provide that to the SVH. >>> >>> >>>> Stuart >>>> >>>> [1] https://github.com/stuartwdouglas/errorreporting >>>> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301e >>>> eb1f389fae8 >>>> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1 >>>> fbc831edf290971d54c13dd1c5d15719454f85 >>>> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f93 >>>> 7d02f2f08099e8d >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> >>> >>> >>> -- >>> Brian Stansberry >>> Manager, Senior Principal Software Engineer >>> Red Hat >>> >> >> > > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180215/2f8509d1/attachment.html From stuart.w.douglas at gmail.com Thu Feb 15 18:15:24 2018 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Fri, 16 Feb 2018 00:15:24 +0100 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: I have opened https://github.com/wildfly/wildfly-core/pull/3114 to allow for testing/further review. Stuart On Thu, Feb 15, 2018 at 11:32 PM, Stuart Douglas wrote: > > > On Thu, Feb 15, 2018 at 6:51 PM, Brian Stansberry < > brian.stansberry at redhat.com> wrote: > >> On Wed, Feb 14, 2018 at 9:37 PM, Stuart Douglas < >> stuart.w.douglas at gmail.com> wrote: >> >>> >>> >>> On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry < >>> brian.stansberry at redhat.com> wrote: >>> >>>> On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas < >>>> stuart.w.douglas at gmail.com> wrote: >>>> >>>>> Hi Everyone, >>>>> >>>>> I have been thinking a bit about the way we report errors in WildFly, >>>>> and I think this is something that we can improve on. At the moment I think >>>>> we are way to liberal with what we report, which results in a ton of >>>>> services being listed in the error report that have nothing to do with the >>>>> actual failure. >>>>> >>>>> As an example to work from I have created [1], which is a simple EJB >>>>> application. This consists of 10 EJB's, one of which has a reference to a >>>>> non-existant data source, the rest are simply empty no-op EJB's (just >>>>> @Stateless on an empty class). >>>>> >>>>> This app fails to deploy because the java:global/NonExistant data >>>>> source is missing, which gives the failure description in [2]. This is ~120 >>>>> lines long and lists multiple services for every single component in the >>>>> application (part of the reason this is so long is because the failures are >>>>> reported twice, once when the deployment fails and once when the server >>>>> starts). >>>>> >>>>> I think we can improve on this. 
I think in every failure case there >>>>> will be some root causes that are all the end user cares about, and we >>>>> should limit our reporting to just these cases, rather than listing every >>>>> internal service that can no longer start due to missing transitive deps. >>>>> >>>>> In particular these root causes are: >>>>> 1) A service threw and exception in its start() method and failed to >>>>> start >>>>> 2) A dependency is actually missing (i.e. not installed, not just not >>>>> started) >>>>> >>>>> I think that one or both of these two cases will be the root cause of >>>>> any failure, and as such that is all we should be reporting on. >>>>> >>>>> We already do an OK job of handing case 1), services that have failed, >>>>> as they get their own line item in the error report, however case 2) >>>>> results in a huge report that lists every service that has not come up, no >>>>> matter how far removed they are from the actual problem. >>>>> >>>> >>>> If the 2) case can be correctly determined, then +1 to reporting some >>>> new section and not reporting the current "WFLYCTL0180: Services with >>>> missing/unavailable dependencies" section. The WFLYCTL0180 section could >>>> only be reported as a fallback if for some reason the 1) and 2) stuff is >>>> empty. >>>> >>> >>> I have adjusted this a bit so a service with mode NEVER is treated the >>> same as if it is missing. I am pretty sure that with this change 1) and 2) >>> will cover 100% of cases. >>> >>> >>> >>>> >>>> >>>>> >>>>> I think we could make a change to the way this is reported so that >>>>> only direct problems are reported [3], so the error report would look >>>>> something like [4] (note that this commit only changes the operation >>>>> report, the container state reporting after boot is still quite verbose). >>>>> >>>> >>>> I think the container state reporting is ok. IMHO the proper fix to the >>>> container state reporting is to rollback and fail boot if Stage.RUNTIME >>>> failures occur. Configurable, but rollback by default. If we did that there >>>> would be no container state reporting. If you deploy your broken app >>>> post-boot you shouldn't see the container state reporting because by the >>>> time the report is written the op should have rolled back and the services >>>> are no longer "missing". It's only because we don't rollback on boot that >>>> this is reported. >>>> >>> >>> I don't think it is nessesary to report on services that are only down >>> because their dependents are down. It basically just adds noise, as they >>> are not really related to the underlying issue. I have expanded my branch >>> to also do this: >>> >>> https://github.com/wildfly/wildfly-core/compare/master...stu >>> artwdouglas:error-reporting?expand=1 >>> >>> This ends up with very concise reports that just detail the services >>> that are the root cause of the problem: https://gist.github.c >>> om/stuartwdouglas/42a68aaaa130ceee38ca5f66d0040de3 >>> >>> Does this approach seem reasonable? lf a user really does want a >>> complete dump of all services that are down that information is still >>> available directly from MSC anyway. >>> >> >> It seems reasonable. >> >> I'm going to get all lawyerly now. This is because while we don't treat >> our failure messages as "API" requiring compatibility, for these particular >> ones I think we should be as careful as possible. 
>> >> 1) "WFLYCTL0180: Services with missing/unavailable dependencies" => [" >> jboss.naming.context.java.comp.\"error-reporting-1.0-SNAPS >> HOT\".\"error-reporting-1.0-SNAPSHOT\".ErrorEjb.env.\"com. >> stuartdouglas.ErrorEjb\".nonExistant is missing >> [jboss.naming.context.java.global.NonExistant]"] >> >> Here you've somewhat repurposed an existing message. That can be quite ok >> IMHO so long as what's gone is just noise and the English meaning of the >> message is still correct. Basically, what did "missing/unavailable >> dependencies" mean before, what does it mean now, and is there a clear >> story behind the shift from A to B. The "missing" part is pretty clear -- >> not installed or NEVER is "missing". For "unavailable" now we've dropped >> the installed but unstarted ones. If we're including the ones that directly >> depend on *failed* services then that's a coherent definition of >> "unavailable". If we're not then "unavailable" is misleading. Sorry, I'm >> juggling stuff so I haven't checked the code. :( >> > > Previously this section would display every service that was down due to > its dependencies being down. This would include services that were many > levels away from the actual problem (e.g. if A depends on B which depends > on C which depends on D which is down, A, B and C would all be listed in > this section). This change displays the same information, but only for > direct dependents, so in the example about only C would be listed in this > section. > > The 'New missing/unsatisfied dependencies:' section in the container state > report is similar. Previously it would list every service that had failed > to come up, now it will only list services that are directly affected by a > problem. > > >> >> 2) I think "38 additional services are down due to their dependencies >> being missing or failed" should have a message code, not NONE. It's a >> separate message that may or may not appear. Plus it's new. And I think >> we're better off in these complex message structures to be precise vs >> trying to avoid codes for cosmetic reasons. >> > > Ok. > > Stuart > > >> >> >> >>> Stuart >>> >>> >>>> >>>>> >>>>> I am guessing that this is not as simple as it sounds, otherwise it >>>>> would have already been addressed, but I think we can do better that the >>>>> current state of affairs so I thought I would get a discussion started. >>>>> >>>> >>>> It sounds pretty simple. Any "problem" ServiceController exposes its >>>> ServiceContainer, and if relying on that registry to check if a missing >>>> dependency is installed is not correct for some reason, the >>>> ModelControllerImpl exposes its ServiceRegistry via a package protected >>>> getter. So AbstractOperationContext can provide that to the SVH. 
>>>> >>>> >>>>> Stuart >>>>> >>>>> [1] https://github.com/stuartwdouglas/errorreporting >>>>> [2] https://gist.github.com/stuartwdouglas/b52a85813913f3304301e >>>>> eb1f389fae8 >>>>> [3] https://github.com/stuartwdouglas/wildfly-core/commit/a1 >>>>> fbc831edf290971d54c13dd1c5d15719454f85 >>>>> [4] https://gist.github.com/stuartwdouglas/14040534da8d07f93 >>>>> 7d02f2f08099e8d >>>>> >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> >>>> >>>> >>>> -- >>>> Brian Stansberry >>>> Manager, Senior Principal Software Engineer >>>> Red Hat >>>> >>> >>> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180216/b3d9b184/attachment-0001.html From bmaxwell at redhat.com Fri Feb 16 10:30:03 2018 From: bmaxwell at redhat.com (Brad Maxwell) Date: Fri, 16 Feb 2018 09:30:03 -0600 Subject: [wildfly-dev] Error reporting on deployment failure In-Reply-To: References: Message-ID: Yes, we had some bz/jiras opened about this before.? We get cases where customers application is failing and has pages upon pages of dependency errors and the customer cannot easily determine the issue.? And even support has difficultly, we usually try searching for common things like datasources or other JNDI references that might be missing, but I have seen several where it was not a datasource and took a while of tearing the apps apart to resolve. It looks like there was some improvement in EAP 7.1 [2], but it sounds like Stuart's PR may be even better. I found one example deployment on [1] that we could try and see what the logging looks like with the new PR. I figure the service dump would show all of the failed dependencies in case there was a need to look at the others? [1] https://bugzilla.redhat.com/show_bug.cgi?id=1283294 [2] https://issues.jboss.org/browse/JBEAP-5311 On 2/15/18 5:15 PM, Stuart Douglas wrote: > I have opened https://github.com/wildfly/wildfly-core/pull/3114 to > allow for testing/further review. > > Stuart > > On Thu, Feb 15, 2018 at 11:32 PM, Stuart Douglas > > wrote: > > > > On Thu, Feb 15, 2018 at 6:51 PM, Brian Stansberry > > > wrote: > > On Wed, Feb 14, 2018 at 9:37 PM, Stuart Douglas > > wrote: > > > > On Wed, Feb 14, 2018 at 4:43 PM, Brian Stansberry > > wrote: > > On Tue, Feb 13, 2018 at 8:24 PM, Stuart Douglas > > wrote: > > Hi Everyone, > > I have been thinking a bit about the way we report > errors in WildFly, and I think this is something > that we can improve on. At the moment I think we > are way to liberal with what we report, which > results in a ton of services being listed in the > error report that have nothing to do with the > actual failure. > > As an example to work from I have created [1], > which is a simple EJB application. This consists > of 10 EJB's, one of which has a reference to a > non-existant data source, the rest are simply > empty no-op EJB's (just @Stateless on an empty class). > > This app fails to deploy because the > java:global/NonExistant data source is missing, > which gives the failure description in [2]. 
This > is ~120 lines long and lists multiple services for > every single component in the application (part of > the reason this is so long is because the failures > are reported twice, once when the deployment fails > and once when the server starts). > > I think we can improve on this. I think in every > failure case there will be some root causes that > are all the end user cares about, and we should > limit our reporting to just these cases, rather > than listing every internal service that can no > longer start due to missing transitive deps. > > In particular these root causes are: > 1) A service threw and exception in its start() > method and failed to start > 2) A dependency is actually missing (i.e. not > installed, not just not started) > > I think that one or both of these two cases will > be the root cause of any failure, and as such that > is all we should be reporting on. > > We already do an OK job of handing case 1), > services that have failed, as they get their own > line item in the error report, however case 2) > results in a huge report that lists every service > that has not come up, no matter how far removed > they are from the actual problem. > > > If the 2) case can be correctly determined, then +1 to > reporting some new section and not reporting the > current?"WFLYCTL0180: Services with > missing/unavailable dependencies" section. The > WFLYCTL0180 section could only be reported as a > fallback if for some reason the 1) and 2) stuff is empty. > > > I have adjusted this a bit so a service with mode NEVER is > treated the same as if it is missing. I am pretty sure > that with this change 1) and 2) will cover 100% of cases. > > > I think we could make a change to the way this is > reported so that only direct problems are reported > [3], so the error report would look something like > [4] (note that this commit only changes the > operation report, the container state reporting > after boot is still quite verbose). > > > I think the container state reporting is ok. IMHO the > proper fix to the container state reporting is to > rollback and fail boot if Stage.RUNTIME failures > occur. Configurable, but rollback by default. If we > did that there would be no container state reporting. > If you deploy your broken app post-boot you shouldn't > see the container state reporting because by the time > the report is written the op should have rolled back > and the services are no longer "missing". It's only > because we don't rollback on boot that this is reported. > > > I don't think it is nessesary to report on services that > are only down because their dependents are down. It > basically just adds noise, as they are not really related > to the underlying issue. I have expanded my branch to also > do this: > > https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:error-reporting?expand=1 > > This ends up with very concise reports that just detail > the services that are the root cause of the problem: > https://gist.github.com/stuartwdouglas/42a68aaaa130ceee38ca5f66d0040de3 > > > Does this approach seem reasonable? lf a user really does > want a complete dump of all services that are down that > information is still available directly from MSC anyway. > > > It seems reasonable. > > I'm going to get all lawyerly now. This is because while we > don't treat our failure messages as "API" requiring > compatibility, for these particular ones I think we should be > as careful as possible. 
> > 1)??"WFLYCTL0180: Services with missing/unavailable > dependencies" => ["jboss.naming.context.java.co > mp.\"error-reporting-1.0-SNAPSHOT\".\"error-reporting-1.0-SNAPSHOT\".ErrorEjb.env.\"com.stuartdouglas.ErrorEjb\".nonExistant > is missing [jboss.naming.context.java.global.NonExistant]"] > > Here you've somewhat repurposed an existing message. That can > be quite ok IMHO so long as what's gone is just noise and the > English meaning of the message is still correct. Basically, > what did "missing/unavailable dependencies" mean before, what > does it mean now, and is there a clear story behind the shift > from A to B.? The "missing" part is pretty clear -- not > installed or NEVER is "missing". For "unavailable" now we've > dropped the installed but unstarted ones. If we're including > the ones that directly depend on *failed* services then that's > a coherent definition of "unavailable". If we're not then > "unavailable" is misleading. Sorry, I'm juggling stuff so I > haven't checked the code. :( > > > Previously this section would display every service that was down > due to its dependencies being down. This would include services > that were many levels away from the actual problem (e.g. if A > depends on B which depends on C which depends on D which is down, > A, B and C would all be listed in this section). This change > displays the same information, but only for direct dependents, so > in the example about only C would be listed in this section. > > The 'New missing/unsatisfied dependencies:' section in the > container state report is similar. Previously it would list every > service that had failed to come up, now it will only list services > that are directly affected by a problem. > > > 2) I think "38 additional services are down due to their > dependencies being missing or failed" should have a message > code, not NONE. It's a separate message that may or may not > appear. Plus it's new. And I think we're better off in these > complex message structures to be precise vs trying to avoid > codes for cosmetic reasons. > > > Ok. > > Stuart > > > > > Stuart > > > I am guessing that this is not as simple as it > sounds, otherwise it would have already been > addressed, but I think we can do better that the > current state of affairs so I thought I would get > a discussion started. > > > It sounds pretty simple. Any "problem" > ServiceController exposes its ServiceContainer, and if > relying on that registry to check if a missing > dependency is installed is not correct for some > reason, the ModelControllerImpl exposes its > ServiceRegistry via a package protected getter. So > AbstractOperationContext can provide that to the SVH. 
> > > Stuart > > [1] > https://github.com/stuartwdouglas/errorreporting > > [2] > https://gist.github.com/stuartwdouglas/b52a85813913f3304301eeb1f389fae8 > > > [3] > https://github.com/stuartwdouglas/wildfly-core/commit/a1fbc831edf290971d54c13dd1c5d15719454f85 > > [4] > https://gist.github.com/stuartwdouglas/14040534da8d07f937d02f2f08099e8d > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > Red Hat > > > > > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > Red Hat > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180216/735bfa2c/attachment-0001.html From jperkins at redhat.com Fri Feb 16 15:50:59 2018 From: jperkins at redhat.com (James Perkins) Date: Fri, 16 Feb 2018 12:50:59 -0800 Subject: [wildfly-dev] WFCORE-3205 Fix Logging for Embedded Containers Message-ID: Hello All, Embedded containers offer some interesting issues with regards to logging. If the logging subsystem is present the container may attempt to configure logging via the logging subsystem. If logging has already been configured by the application starting the embedded container, this could cause errors if the log manager was not installed correctly. While just the first draft, I've created a design/requirements doc [1] on how this should likely work or what's expected. Please, especially those interested, have a look and let me know if I've missed something or I'm off base on anything. [1]: https://github.com/wildfly/wildfly-proposals/pull/28 [2]: https://issues.jboss.org/browse/WFCORE-3205 -- James R. Perkins JBoss by Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180216/c27294f4/attachment.html From sstark at redhat.com Sat Feb 17 15:43:58 2018 From: sstark at redhat.com (Scott Stark) Date: Sat, 17 Feb 2018 12:43:58 -0800 Subject: [wildfly-dev] ee4j-build mailing list for specs, RIs, TCKs Message-ID: We should have someone from our EAP/Wildfly build teams join the ee4j-build list to keep abreast and help steer the migration of the Oracle Java EE project into the Eclipse build infrastructure. On the last EE4j call, it was brought up that the discussion around how the TCKs needed to be updated to integrate into more modern CI environments. At some point we should be able to reduce our TCK run efforts, and improve the TCK codebase by leveraging the public EE4j TCKs, but we need to be involved to help that move in the right direction to achieve this. Subscribe here: https://accounts.eclipse.org/mailing-list/ee4j-build -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180217/9b70119c/attachment.html From kustos at gmx.net Sun Feb 18 06:12:26 2018 From: kustos at gmx.net (Philippe Marschall) Date: Sun, 18 Feb 2018 12:12:26 +0100 Subject: [wildfly-dev] Reducing Startup Time and Footprint with AppCDS Message-ID: <6a9de19d-bf91-05cd-e93e-ee28cb2ab9d9@gmx.net> Hello There has been a lot of talk on this list about how startup time and footprint of WildFly can be reduced even further. I have experimented with AppCDS and the first results are encouraging. As you may be aware Application Class-Data Sharing, or AppCDS for short, is available in OpenJDK 10 [1]. My work is based on the excellent talk of Volker Simonis on the subject at FOSEM [2] and his cl4cds [3] tool. I recommend viewing his presentation first, it will provide a lot of background. To get started you first need download an OpenJDK 10 early access build from [4]. It is important to not download an OracleJDK build as AppCDS is missing there for some reason. Then you need to dump the list of loaded classes export PREPEND_JAVA_OPTS="-Xlog:class+load=debug:file=/tmp/wildfly.classtrace" ./bin/standalone.sh Followed by converting that to a class list suitable for AppCDS, this is where the cl4cds tool comes in $JAVA_HOME/bin/java -jar ~/git/cl4cds/target/cl4cds-1.0.0-SNAPSHOT.jar /tmp/wildfly.classtrace /tmp/wildfly.cls Then you can create the shared archive export PREPEND_JAVA_OPTS="-Xshare:dump -XX:+UseAppCDS -XX:SharedClassListFile=/tmp/wildfly.cls -XX:+UnlockDiagnosticVMOptions -XX:SharedArchiveFile=/tmp/wildfly.jsa" ./bin/standalone.sh and then finally you can start WildFly with the shared archive export PREPEND_JAVA_OPTS="-Xshare:on -XX:+UseAppCDS -XX:+UnlockDiagnosticVMOptions -XX:SharedArchiveFile=/tmp/wildfly.jsa" ./bin/standalone.sh I checked the startup time reported by an "empty" WildFly 11. I realize this is not the most scientific way. The startup time went down from about 2000ms to about 1500ms or by about 25%. I did not have a look at the memory savings when running multiple WildFly versions is parallel. One thing I noted is that the Xerces classes should probably be recompiled with bytecode major version 49 (Java 1.5) or later, otherwise they can not be processed by AppCDS. Unfortunately AppCDS is quite hard to use, I don't know if the WildFly project can somehow help to make this easier. One option would be to ship a class list file but I don't know how portable that is. Also as WildFly services are lazy it only a fraction of the required classes may be in there. [1] http://openjdk.java.net/jeps/310 [2] https://fosdem.org/2018/schedule/event/class_data_sharing/ [3] https://simonis.github.io/cl4cds/ [4] http://jdk.java.net/10/ Cheers Philippe From wu at cybersolon.com Sun Feb 18 21:25:13 2018 From: wu at cybersolon.com (Noah Wu) Date: Mon, 19 Feb 2018 11:25:13 +0900 Subject: [wildfly-dev] Wildfly multi domains with one certificate Message-ID: Suppose I have a few websites with different domain names, domainA.com, domainB.jp, etc. I have setup the correct virtual hosts configuration for these sites in wildfly 11. And I am planning on buying mutlti domain SSL certificate from GoDaddy. A Multi-domain SSL Certificate can secure your main domain + several SAN ( Subject Alternative Name) domain names in one Certificate. My question is that can wildfly recognize this kind of multi domain SSL certificate. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180219/918aacad/attachment.html From david.lloyd at redhat.com Mon Feb 19 09:05:52 2018 From: david.lloyd at redhat.com (David Lloyd) Date: Mon, 19 Feb 2018 08:05:52 -0600 Subject: [wildfly-dev] Reducing Startup Time and Footprint with AppCDS In-Reply-To: <6a9de19d-bf91-05cd-e93e-ee28cb2ab9d9@gmx.net> References: <6a9de19d-bf91-05cd-e93e-ee28cb2ab9d9@gmx.net> Message-ID: On Sun, Feb 18, 2018 at 5:12 AM, Philippe Marschall wrote: > Hello > > There has been a lot of talk on this list about how startup time and > footprint of WildFly can be reduced even further. I have experimented > with AppCDS and the first results are encouraging. Cool! > [...] > I checked the startup time reported by an "empty" WildFly 11. I realize > this is not the most scientific way. The startup time went down from > about 2000ms to about 1500ms or by about 25%. I did not have a look at > the memory savings when running multiple WildFly versions is parallel. > > One thing I noted is that the Xerces classes should probably be > recompiled with bytecode major version 49 (Java 1.5) or later, otherwise > they can not be processed by AppCDS. > > Unfortunately AppCDS is quite hard to use, I don't know if the WildFly > project can somehow help to make this easier. One option would be to > ship a class list file but I don't know how portable that is. Also as > WildFly services are lazy it only a fraction of the required classes may > be in there. What about simply adding *all* the classes (I think there are about 78,000 of them) to AppCDS? -- - DML From smarlow at redhat.com Mon Feb 19 10:27:32 2018 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 19 Feb 2018 10:27:32 -0500 Subject: [wildfly-dev] ee4j-build mailing list for specs, RIs, TCKs In-Reply-To: References: Message-ID: Thanks, I just signed up. On 02/17/2018 03:43 PM, Scott Stark wrote: > We should have someone from our EAP/Wildfly build teams join the > ee4j-build list to keep abreast and help steer the migration of the > Oracle Java EE project into the Eclipse build infrastructure. On the > last EE4j call, it was brought up that the discussion around how the > TCKs needed to be updated to integrate into more modern CI environments. > At some point we should be able to reduce our TCK run efforts, and > improve the TCK codebase by leveraging the public EE4j TCKs, but we need > to be involved to help that move in the right direction to achieve this. +100 > > Subscribe here: > https://accounts.eclipse.org/mailing-list/ee4j-build > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From kustos at gmx.net Mon Feb 19 14:14:49 2018 From: kustos at gmx.net (Philippe Marschall) Date: Mon, 19 Feb 2018 20:14:49 +0100 Subject: [wildfly-dev] Reducing Startup Time and Footprint with AppCDS In-Reply-To: References: <6a9de19d-bf91-05cd-e93e-ee28cb2ab9d9@gmx.net> Message-ID: On 19.02.2018 15:05, David Lloyd wrote: > On Sun, Feb 18, 2018 at 5:12 AM, Philippe Marschall wrote: >> ... >> Unfortunately AppCDS is quite hard to use, I don't know if the WildFly >> project can somehow help to make this easier. One option would be to >> ship a class list file but I don't know how portable that is. Also as >> WildFly services are lazy it only a fraction of the required classes may >> be in there. > > What about simply adding *all* the classes (I think there are about > 78,000 of them) to AppCDS? 
That would work in theory. In practice I see two challenges, the output of cl4cds (the input for dumping the classes) looks something like this: org/xnio/IoUtils$1 id: 0x00000001007f51d8 super: 0x0000000100000eb0 interfaces: 0x00000001001633d8 source: /home/user/wildfly/wildfly-11.0.0.Final/modules/system/layers/base/org/jboss/xnio/main/xnio-api-3.5.4.Final.jar 1. all the paths to JARs are absolute, that could probably be solved somehow 2. we somehow have to come up with the class ids, I have no idea where they come from Cheers Philippe From smarlow at redhat.com Wed Feb 21 15:20:08 2018 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 21 Feb 2018 15:20:08 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... Message-ID: Any suggestions for what we should call the new Hibernate ORM 5.3 module? We can probably drop the slot and just include the version in the module name. Some ideas: name="org.hibernate5.3" name="org.hibernate5_3" name="org.hibernate5-3" Any suggestions? Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/3e53f4df/attachment-0001.html From jperkins at redhat.com Wed Feb 21 15:36:59 2018 From: jperkins at redhat.com (James Perkins) Date: Wed, 21 Feb 2018 12:36:59 -0800 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: Is there a need to create a new module? On Wed, Feb 21, 2018 at 12:20 PM, Scott Marlow wrote: > Any suggestions for what we should call the new Hibernate ORM 5.3 module? > We can probably drop the slot and just include the version in the module > name. > > Some ideas: > > name="org.hibernate5.3" > name="org.hibernate5_3" > name="org.hibernate5-3" > > Any suggestions? > > Scott > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- James R. Perkins JBoss by Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/33d6ff67/attachment.html From david.lloyd at redhat.com Wed Feb 21 15:42:30 2018 From: david.lloyd at redhat.com (David Lloyd) Date: Wed, 21 Feb 2018 14:42:30 -0600 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: I would either keep the slot: Or use a module 1.7+ descriptor and do it this way: (still in the org/hibernate/5.3/module.xml path in this case). If you don't like using slots, then I'd suggest: On Wed, Feb 21, 2018 at 2:20 PM, Scott Marlow wrote: > Any suggestions for what we should call the new Hibernate ORM 5.3 module? > We can probably drop the slot and just include the version in the module > name. > > Some ideas: > > name="org.hibernate5.3" > name="org.hibernate5_3" > name="org.hibernate5-3" > > Any suggestions? > > Scott > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- - DML From smarlow at redhat.com Wed Feb 21 16:20:45 2018 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 21 Feb 2018 16:20:45 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 3:36 PM, James Perkins wrote: > Is there a need to create a new module? 
> Yes, WildFly needs to keep including the current Hibernate ORM 5.1 jar for application compatibility but we also need to include the newer Hibernate ORM 5.3+ jar for EE 8/JPA 2.2 use. So, we really need a new ORM module, since we will keep both Hibernate versions in WildFly for a while (ORM 5.3 may get swapped out with a newer version later, TBD). Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/e08fefd1/attachment.html From alexey.loubyansky at redhat.com Wed Feb 21 17:40:02 2018 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Wed, 21 Feb 2018 23:40:02 +0100 Subject: [wildfly-dev] new feature-pack repo coords, id and streams Message-ID: As many of you know we are planning to move to the new feature-packs and the provisioning mechanism for our wildfly(-based) distributions. New feature-packs will be artifacts in a repository (currently Maven). In this email I'd like to raise a question about how to express a location (coordinates) of a feature-pack, its identify (id) and a stream information which is the source of version updates and patches. Until this moment I've used the GAV (group, artifact, version) as both the feature-pack ID and its coordinates in the repository. This is pretty much enough for a static installation config (which is a list of feature-pack GAVs and config options). The GAV-based config also makes the installation build reproducible. Which is a hard requirement for the provisioning mechanism. On the other hand, we also want to be able to check for the updates in the repository for the installed feature-packs and apply them to an existing installation. Which means that the installation has to be also described in terms of the consumed update streams. This will be a description of the installation in terms of sources of the latest available versions. A build from this kind of config is not guaranteed to be reproducible. This is where the GAVs don't fit as well. What I would like to achieve is to combine the static and dynamic parts of the config into one. Here is what I'm considering. When I install a feature-pack (using a tool or adding it manually into the installation config) what ends up in the config is the following expression: universe:family:branch:classifier:build_id, e.g. org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to be the feature-pack coordinates. The meaning behind the parts. UNIVERSE Universe is supposed to be a registry of feature-pack streams for various projects and products. In the example above the org.jboss universe would include wildfly-core, wildfly and related projects that are consumed by wildfly that also choose to provide feature-packs. FAMILY The family part would designate the project or product. BRANCH The branch would normally be a major version. The assumption is that anything that comes from the branch is API and config backward compatible. CLASSIFIER Branch + classifier is what identifies a stream. The idea is that there could be multiple streams originating from the same branch. I.e. a stream of final releases, a stream of betas, alphas, etc. A user could choose which stream to subscribe to by providing the classifier. BUILD ID In most cases that would be the release version. universe:family:branch:build_id is going to be the feature-pack identity. The classifier is not taken into account because the same feature-pack build/release might appear in more than one stream. 
And so the build_id must be unique for the branch. Given the full feature-pack coordinates, the target feature-pack can unmistakably be identified and the installation can be reproduced. At the same time, the coordinates include the stream information, so a tool can check the stream for the updates, apply them and update the installation config with the new feature-pack build_id. If you see any problem with this approach or have a better idea, please share. Thanks! Alexey -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/db00fa1d/attachment.html From jperkins at redhat.com Wed Feb 21 17:47:35 2018 From: jperkins at redhat.com (James Perkins) Date: Wed, 21 Feb 2018 14:47:35 -0800 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: With other Java EE 8 tech previews we handle it by loading the correct resource based on a system property. Is this not possible with Hibernate for some reason? For example have a look at the javax.validation.api module [1]. [1]: https://github.com/wildfly/wildfly/blob/master/ servlet-feature-pack/src/main/resources/modules/system/ layers/base/javax/validation/api/main/module.xml#L26-L42 On Wed, Feb 21, 2018 at 1:20 PM, Scott Marlow wrote: > > On Wed, Feb 21, 2018 at 3:36 PM, James Perkins > wrote: > >> Is there a need to create a new module? >> > > Yes, WildFly needs to keep including the current Hibernate ORM 5.1 jar > for application compatibility but we also need to include the newer > Hibernate ORM 5.3+ jar for EE 8/JPA 2.2 use. So, we really need a new ORM > module, since we will keep both Hibernate versions in WildFly for a while > (ORM 5.3 may get swapped out with a newer version later, TBD). > > Scott > > -- James R. Perkins JBoss by Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/037b72d2/attachment-0001.html From smarlow at redhat.com Wed Feb 21 18:26:28 2018 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 21 Feb 2018 18:26:28 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 5:47 PM, James Perkins wrote: > With other Java EE 8 tech previews we handle it by loading the correct > resource based on a system property. Is this not possible with Hibernate > for some reason? For example have a look at the javax.validation.api module > [1]. > If we are running in EE 8 mode, the JPA container should use/expose the JPA 2.2 spec jars; however, either Hibernate ORM 5.1 or 5.3 could be used. Also, in EE 8 mode, applications should use Hibernate ORM 5.3 by default but could also use Hibernate ORM 5.1. In EE 7 mode, only the Hibernate ORM 5.1 jars should be available to applications. I think this should work; it has in the past (e.g. the JPA 2.1 container implementation could work with JPA 1.0-2.0 persistence providers), and I have an alternative in mind if it doesn't. Since not all applications will want to use Hibernate ORM 5.3 by default (in EE 8 tech preview), I think we should have a system property way to change the default JPA persistence provider module name (to be handled by the JPA container).
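For what it's worth, per persistence unit this kind of selection already exists today via the jboss.as.jpa.providerModule hint; a minimal sketch, assuming the new module ends up being called org.hibernate.orm with a 5.3 slot (the exact module name and the name:slot syntax of the value are still open, so treat the value below as a placeholder):

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
        <persistence-unit name="examplePU">
            <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
            <properties>
                <!-- selects the persistence provider module for this unit only;
                     placeholder value until the ORM 5.3 module name is settled -->
                <property name="jboss.as.jpa.providerModule" value="org.hibernate.orm:5.3"/>
            </properties>
        </persistence-unit>
    </persistence>

The new piece would just be a system property that changes which provider module the JPA container falls back to when a persistence unit does not set that hint.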
> > [1]: https://github.com/wildfly/wildfly/blob/master/ > servlet-feature-pack/src/main/resources/modules/system/ > layers/base/javax/validation/api/main/module.xml#L26-L42 > > On Wed, Feb 21, 2018 at 1:20 PM, Scott Marlow wrote: > >> >> On Wed, Feb 21, 2018 at 3:36 PM, James Perkins >> wrote: >> >>> Is there a need to create a new module? >>> >> >> Yes, WildFly needs to keep including the current Hibernate ORM 5.1 jar >> for application compatibility but we also need to include the newer >> Hibernate ORM 5.3+ jar for EE 8/JPA 2.2 use. So, we really need a new ORM >> module, since we will keep both Hibernate versions in WildFly for a while >> (ORM 5.3 may get swapped out with a newer version later, TBD). >> >> Scott >> >> > > > -- > James R. Perkins > JBoss by Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/ee4cefb0/attachment.html From smarlow at redhat.com Wed Feb 21 21:00:48 2018 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 21 Feb 2018 21:00:48 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 3:42 PM, David Lloyd wrote: > I would either keep the slot: > > > > Or use a module 1.7+ descriptor and do it this way: > > > > (still in the org/hibernate/5.3/module.xml path in this case). > > If you don't like using slots, then I'd suggest: > Slots are great, think I will upgrade to the module 1.7+ descriptor and include the slot in the name, as you suggest above. Thanks! Scott > > > > > On Wed, Feb 21, 2018 at 2:20 PM, Scott Marlow wrote: > > Any suggestions for what we should call the new Hibernate ORM 5.3 module? > > We can probably drop the slot and just include the version in the module > > name. > > > > Some ideas: > > > > name="org.hibernate5.3" > > name="org.hibernate5_3" > > name="org.hibernate5-3" > > > > Any suggestions? > > > > Scott > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > -- > - DML > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180221/5963ac81/attachment.html From sanne at hibernate.org Thu Feb 22 04:40:50 2018 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 22 Feb 2018 09:40:50 +0000 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: Hi all, Scott, the Hibernate ORM and Hibernate Search projects are now producing and releasing feature packs, so please use them rather than re-creating a different set of modules. We also produced feature packs of several of our dependencies, for example Apache Lucene feature packs have their own repository and versioning: more flexible and consistent with the Lucene version. We also have chosen different names; e.g. the Hibernate ORM one is called "org.hibernate.orm" and has a slot "5.3". Technically that's an alias, the real module has a slot with the full version. We also included a deprecated module using the older name which imports and exports the new one, to ease transition. The feature packs for Hibernate Search have been released already, the ones for Hibernate ORM are merged in master but have not been released yet. Of course we can still make changes to any of them, I'll try the new module descriptors if you prefer them? 
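To make the transition concrete, the deprecated module mentioned above is essentially just a delegating descriptor; a rough sketch of the idea (illustrative only, not the exact descriptor that ships):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- e.g. org/hibernate/main/module.xml: old name kept so existing deployments still resolve -->
    <module xmlns="urn:jboss:module:1.5" name="org.hibernate">
        <dependencies>
            <!-- re-export everything from the new module under the old name -->
            <module name="org.hibernate.orm" slot="5.3" export="true"/>
        </dependencies>
    </module>

Deployments that already depend on org.hibernate keep working, while new ones can depend on org.hibernate.orm slot 5.3 (or the full version slot) directly.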
Scott: you remember we discussed including a JipiJapa adaptor within the ORM codebase so that a matching version would be released together? This would be a good time, so we include it, want to help me with that? Thanks, Sanne On 22 Feb 2018 02:06, "Scott Marlow" wrote: > > > On Wed, Feb 21, 2018 at 3:42 PM, David Lloyd > wrote: > >> I would either keep the slot: >> >> >> >> Or use a module 1.7+ descriptor and do it this way: >> >> >> >> (still in the org/hibernate/5.3/module.xml path in this case). >> >> If you don't like using slots, then I'd suggest: >> > > Slots are great, think I will upgrade to the module 1.7+ descriptor and > include the slot in the name, as you suggest above. > > Thanks! > Scott > > >> >> >> >> >> On Wed, Feb 21, 2018 at 2:20 PM, Scott Marlow wrote: >> > Any suggestions for what we should call the new Hibernate ORM 5.3 >> module? >> > We can probably drop the slot and just include the version in the module >> > name. >> > >> > Some ideas: >> > >> > name="org.hibernate5.3" >> > name="org.hibernate5_3" >> > name="org.hibernate5-3" >> > >> > Any suggestions? >> > >> > Scott >> > >> > _______________________________________________ >> > wildfly-dev mailing list >> > wildfly-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> >> >> -- >> - DML >> > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/db0e3b85/attachment.html From hpehl at redhat.com Thu Feb 22 08:00:54 2018 From: hpehl at redhat.com (Harald Pehl) Date: Thu, 22 Feb 2018 14:00:54 +0100 Subject: [wildfly-dev] WildFly Model Graph on OpenShift Message-ID: <734F25F8-5CF6-487E-BC33-86DF13AFD530@redhat.com> Hi, I finally managed to deploy WildFly Model Graph to OpenShift [1]. WildFly Model Graph lets you analyse the WildFly management model using a Neo4J graph database. For more information see [2]. The Neo4J databases are running on the Red Hat OpenShift Online employee cluster. Applications on this cluster have limited resources. So you might experience some latency when there's too much traffic. Have fun! Harald [1] https://hal.github.io/model-graph/ [2] https://github.com/hal/model-graph From smarlow at redhat.com Thu Feb 22 08:43:37 2018 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 22 Feb 2018 08:43:37 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: On Thu, Feb 22, 2018 at 4:40 AM, Sanne Grinovero wrote: > Hi all, Scott, > > the Hibernate ORM and Hibernate Search projects are now producing and > releasing feature packs, so please use them rather than re-creating a > different set of modules. > > We also produced feature packs of several of our dependencies, for example > Apache Lucene feature packs have their own repository and versioning: more > flexible and consistent with the Lucene version. > > We also have chosen different names; e.g. the Hibernate ORM one is called > "org.hibernate.orm" and has a slot "5.3". > Name sounds fine. > Technically that's an alias, the real module has a slot with the full > version. > > We also included a deprecated module using the older name which imports > and exports the new one, to ease transition. > What is the deprecated module name/slot? 
> > The feature packs for Hibernate Search have been released already, the > ones for Hibernate ORM are merged in master but have not been released yet. > > Of course we can still make changes to any of them, I'll try the new > module descriptors if you prefer them? > > Scott: you remember we discussed including a JipiJapa adaptor within the > ORM codebase so that a matching version would be released together? This > would be a good time, so we include it, want to help me with that? > Excellent, yes I want to help with that. > > Thanks, > Sanne > > > > > > On 22 Feb 2018 02:06, "Scott Marlow" wrote: > >> >> >> On Wed, Feb 21, 2018 at 3:42 PM, David Lloyd >> wrote: >> >>> I would either keep the slot: >>> >>> >>> >>> Or use a module 1.7+ descriptor and do it this way: >>> >>> >>> >>> (still in the org/hibernate/5.3/module.xml path in this case). >>> >>> If you don't like using slots, then I'd suggest: >>> >> >> Slots are great, think I will upgrade to the module 1.7+ descriptor and >> include the slot in the name, as you suggest above. >> >> Thanks! >> Scott >> >> >>> >>> >>> >>> >>> On Wed, Feb 21, 2018 at 2:20 PM, Scott Marlow >>> wrote: >>> > Any suggestions for what we should call the new Hibernate ORM 5.3 >>> module? >>> > We can probably drop the slot and just include the version in the >>> module >>> > name. >>> > >>> > Some ideas: >>> > >>> > name="org.hibernate5.3" >>> > name="org.hibernate5_3" >>> > name="org.hibernate5-3" >>> > >>> > Any suggestions? >>> > >>> > Scott >>> > >>> > _______________________________________________ >>> > wildfly-dev mailing list >>> > wildfly-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >>> >>> >>> -- >>> - DML >>> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/4610a385/attachment-0001.html From steve at hibernate.org Thu Feb 22 09:47:35 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 22 Feb 2018 14:47:35 +0000 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: References: Message-ID: On Thu, Feb 22, 2018 at 7:44 AM Scott Marlow wrote: > On Thu, Feb 22, 2018 at 4:40 AM, Sanne Grinovero > wrote: > >> >> Scott: you remember we discussed including a JipiJapa adaptor within the >> ORM codebase so that a matching version would be released together? This >> would be a good time, so we include it, want to help me with that? >> > > Excellent, yes I want to help with that. > This would let us completely replace container-managed JPA support? If so, that would be amazing -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/91ccd35a/attachment.html From smarlow at redhat.com Thu Feb 22 10:48:28 2018 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 22 Feb 2018 10:48:28 -0500 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... 
In-Reply-To: References: Message-ID: <3ebe5725-e7ac-c8c0-e65a-b18b6569bda9@redhat.com> On 02/22/2018 09:47 AM, Steve Ebersole wrote: > On Thu, Feb 22, 2018 at 7:44 AM Scott Marlow > wrote: > > On Thu, Feb 22, 2018 at 4:40 AM, Sanne Grinovero > > wrote: > > > Scott: you remember we discussed including a JipiJapa adaptor > within the ORM codebase so that a matching version would be > released together? This would be a good time, so we include it, > want to help me with that? > > > Excellent, yes I want to help with that. > > > This would let us completely replace container-managed JPA support?? If > so, that would be amazing I think that Sanne was only talking about the Hibernate ORM 5.3 persistence provider modules and the JipiJapa module that integrates WildFly with ORM 5.3. Which other parts of container managed JPA support do you want to be able to replace? From brian.stansberry at redhat.com Thu Feb 22 11:04:15 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 22 Feb 2018 10:04:15 -0600 Subject: [wildfly-dev] WildFly Model Graph on OpenShift In-Reply-To: <734F25F8-5CF6-487E-BC33-86DF13AFD530@redhat.com> References: <734F25F8-5CF6-487E-BC33-86DF13AFD530@redhat.com> Message-ID: Nice! The model graph tool is really cool but I don't use it often enough to be able to quickly set up the db locally when I have something I'd like to check. This will make this much simpler. On Thu, Feb 22, 2018 at 7:00 AM, Harald Pehl wrote: > Hi, > > I finally managed to deploy WildFly Model Graph to OpenShift [1]. > > WildFly Model Graph lets you analyse the WildFly management model using a > Neo4J graph database. For more information see [2]. > > The Neo4J databases are running on the Red Hat OpenShift Online employee > cluster. Applications on this cluster have limited resources. So you might > experience some latency when there's too much traffic. > > Have fun! > Harald > > [1] https://hal.github.io/model-graph/ > [2] https://github.com/hal/model-graph > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/459cb3be/attachment.html From steve at hibernate.org Thu Feb 22 15:41:53 2018 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 22 Feb 2018 20:41:53 +0000 Subject: [wildfly-dev] Hibernate 5.3+ module name suggestions... In-Reply-To: <3ebe5725-e7ac-c8c0-e65a-b18b6569bda9@redhat.com> References: <3ebe5725-e7ac-c8c0-e65a-b18b6569bda9@redhat.com> Message-ID: Just that. The pieces that allow us to plug in any version of Hibernate and any version of JPA (like 2.2 here recently) into WildFly for testing, without having to wait for WildFly to support everything else to have a release that includes the little bit we need On Thu, Feb 22, 2018 at 9:48 AM Scott Marlow wrote: > > > On 02/22/2018 09:47 AM, Steve Ebersole wrote: > > On Thu, Feb 22, 2018 at 7:44 AM Scott Marlow > > wrote: > > > > On Thu, Feb 22, 2018 at 4:40 AM, Sanne Grinovero > > > wrote: > > > > > > Scott: you remember we discussed including a JipiJapa adaptor > > within the ORM codebase so that a matching version would be > > released together? This would be a good time, so we include it, > > want to help me with that? > > > > > > Excellent, yes I want to help with that. 
> > > > > > This would let us completely replace container-managed JPA support? If > > so, that would be amazing > > I think that Sanne was only talking about the Hibernate ORM 5.3 > persistence provider modules and the JipiJapa module that integrates > WildFly with ORM 5.3. > > Which other parts of container managed JPA support do you want to be > able to replace? > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/a3fde160/attachment.html From brian.stansberry at redhat.com Thu Feb 22 16:24:40 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 22 Feb 2018 15:24:40 -0600 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: I'm describing my thinking process of understanding this in hopes that it's helpful to others. Or that I'm all wrong and you can correct me. ;) AIUI you want to still want to use maven and GAVs for actually pulling items from the repo, but the additional stream info allows you to work out how to identify other related items. So I'm a bit unclear on the relationships of this coordinate to a GAV. I initially thought it's universe:family:build-id org.jboss:wildfly:12.0.5.Beta4 That would mean though that BUILD_ID is not just unique for the branch, it is unique for the family. That sounds wrong, as you state it's unique to the branch. So now I think it's family:branch:build-id wildfly:12:12.0.5.Beta4 One concern with that is the 'A' in the GAV is no longer something rarely changing. In the WildFly case it would change every 3 months. This has some implications for the process of producing the feature packs. I'm not saying that's a show-stopper problem; more that it's something that we'll have to be aware of as we think through the process of creating these. Most readers can safely skip the rest of this as I'm probably getting ahead of myself.... An example of the kind of thing I'm talking about is in the current root pom for WildFly we have: .... .... ${project.groupId} wildfly-feature-pack pom ${project.version} Thereafter any other child poms that declare a dependency on that feature pack just have .... .... ${project.groupId} wildfly-feature-pack pom There's no need to specify the version all over the place, as the dependencyManagement mechanism takes care of that in a central location. But that kind of approach doesn't work as readily when it comes to artifactId. One possibility is in the root pom there's .... 12 .... .... ${project.groupId} ${feature.pack.branch} ${project.version} And then in other child poms: .... .... ${project.groupId} ${feature.pack.branch} pom On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < alexey.loubyansky at redhat.com> wrote: > As many of you know we are planning to move to the new feature-packs and > the provisioning mechanism for our wildfly(-based) distributions. New > feature-packs will be artifacts in a repository (currently Maven). In this > email I'd like to raise a question about how to express a location > (coordinates) of a feature-pack, its identify (id) and a stream information > which is the source of version updates and patches. > > Until this moment I've used the GAV (group, artifact, version) as both the > feature-pack ID and its coordinates in the repository. This is pretty much > enough for a static installation config (which is a list of feature-pack > GAVs and config options). The GAV-based config also makes the installation > build reproducible. 
Which is a hard requirement for the provisioning > mechanism. > > On the other hand, we also want to be able to check for the updates in the > repository for the installed feature-packs and apply them to an existing > installation. Which means that the installation has to be also described in > terms of the consumed update streams. This will be a description of the > installation in terms of sources of the latest available versions. A build > from this kind of config is not guaranteed to be reproducible. This is > where the GAVs don't fit as well. > > What I would like to achieve is to combine the static and dynamic parts of > the config into one. Here is what I'm considering. When I install a > feature-pack (using a tool or adding it manually into the installation > config) what ends up in the config is the following expression: > universe:family:branch:classifier:build_id, e.g. > org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to be > the feature-pack coordinates. > > The meaning behind the parts. > > UNIVERSE > > Universe is supposed to be a registry of feature-pack streams for various > projects and products. In the example above the org.jboss universe would > include wildfly-core, wildfly and related projects that are consumed by > wildfly that also choose to provide feature-packs. > > FAMILY > > The family part would designate the project or product. > > BRANCH > > The branch would normally be a major version. The assumption is that > anything that comes from the branch is API and config backward compatible. > > CLASSIFIER > > Branch + classifier is what identifies a stream. The idea is that there > could be multiple streams originating from the same branch. I.e. a stream > of final releases, a stream of betas, alphas, etc. A user could choose > which stream to subscribe to by providing the classifier. > > BUILD ID > > In most cases that would be the release version. > universe:family:branch:build_id is going to be the feature-pack identity. > The classifier is not taken into account because the same feature-pack > build/release might appear in more than one stream. And so the build_id > must be unique for the branch. > > > Given the full feature-pack coordinates, the target feature-pack can > unmistakenly be identified and the installation can be reproduced. At the > same time, the coordinates include the stream information, so a tool can > check the stream for the updates, apply them and update the installation > config with the new feature-pack build_id. > > If you see any problem with this approach or have a better idea, please > share. Thanks! > > Alexey > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/a34f73e1/attachment-0001.html From alexey.loubyansky at redhat.com Thu Feb 22 17:41:03 2018 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Thu, 22 Feb 2018 23:41:03 +0100 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: On Thu, Feb 22, 2018 at 10:24 PM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > I'm describing my thinking process of understanding this in hopes that > it's helpful to others. Or that I'm all wrong and you can correct me. 
;) > > AIUI you want to still want to use maven and GAVs for actually pulling > items from the repo, but the additional stream info allows you to work out > how to identify other related items. So I'm a bit unclear on the > relationships of this coordinate to a GAV. > GAV has been used initially because of the Maven repo. As long as we use Maven whatever coordinate expression we choose it will have to translate to GAV at the end. I imagine there will be an artifact (target repo coordinate) resolver that will take care of that. I initially thought it's > > universe:family:build-id > > org.jboss:wildfly:12.0.5.Beta4 > > That would mean though that BUILD_ID is not just unique for the branch, it > is unique for the family. That sounds wrong, as you state it's unique to > the branch. > > So now I think it's > > family:branch:build-id > > wildfly:12:12.0.5.Beta4 > To me that looks like a variation of a GAV which is both a coordinate and an ID. That could be ok. Actually, the examples above do contain a lot of info that seems sufficient to have a clue about what this is and where it belongs. My approach was based on what pieces of info I wanted to extract from those expressions and that would include (taking into account the tooling and the user interface): universe, family, branch, release stream classifier, release id. This is what I will be extracting and dealing with whatever format we choose. So I might as well expose these directly and let project/product owners decide how those map into their preferred versioning, compatibility and update rules. I could provide a default GAV coordinate resolver based on how we are used to define our GAVs and also let the user (project owner) provide a custom one. > One concern with that is the 'A' in the GAV is no longer something rarely > changing. In the WildFly case it would change every 3 months. This has some > implications for the process of producing the feature packs. I'm not > saying that's a show-stopper problem; more that it's something that we'll > have to be aware of as we think through the process of creating these. > One of the advantages of not using actual Maven GAVs directly is to make them an implementation detail. If one day we decide to redefine our GAV approach or support non-Maven repo for some reason, the end user of the tool will not have to know about that. Thanks, Alexey Most readers can safely skip the rest of this as I'm probably getting ahead > of myself.... > > An example of the kind of thing I'm talking about is in the current root > pom for WildFly we have: > > > .... > > > .... > > ${project.groupId} > wildfly-feature-pack > pom > ${project.version} > > > Thereafter any other child poms that declare a dependency on that feature > pack just have > > > .... > > .... > > ${project.groupId} > wildfly-feature-pack > pom > > > There's no need to specify the version all over the place, as the > dependencyManagement mechanism takes care of that in a central location. > But that kind of approach doesn't work as readily when it comes to > artifactId. > > One possibility is in the root pom there's > > > .... > > 12 > .... > > > .... > > ${project.groupId} > ${feature.pack.branch} > ${project.version} > > > And then in other child poms: > > > .... > > .... 
> > ${project.groupId} > ${feature.pack.branch} > pom > > > On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < > alexey.loubyansky at redhat.com> wrote: > >> As many of you know we are planning to move to the new feature-packs and >> the provisioning mechanism for our wildfly(-based) distributions. New >> feature-packs will be artifacts in a repository (currently Maven). In this >> email I'd like to raise a question about how to express a location >> (coordinates) of a feature-pack, its identify (id) and a stream information >> which is the source of version updates and patches. >> >> Until this moment I've used the GAV (group, artifact, version) as both >> the feature-pack ID and its coordinates in the repository. This is pretty >> much enough for a static installation config (which is a list of >> feature-pack GAVs and config options). The GAV-based config also makes the >> installation build reproducible. Which is a hard requirement for the >> provisioning mechanism. >> >> On the other hand, we also want to be able to check for the updates in >> the repository for the installed feature-packs and apply them to an >> existing installation. Which means that the installation has to be also >> described in terms of the consumed update streams. This will be a >> description of the installation in terms of sources of the latest available >> versions. A build from this kind of config is not guaranteed to be >> reproducible. This is where the GAVs don't fit as well. >> >> What I would like to achieve is to combine the static and dynamic parts >> of the config into one. Here is what I'm considering. When I install a >> feature-pack (using a tool or adding it manually into the installation >> config) what ends up in the config is the following expression: >> universe:family:branch:classifier:build_id, e.g. >> org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to be >> the feature-pack coordinates. >> >> The meaning behind the parts. >> >> UNIVERSE >> >> Universe is supposed to be a registry of feature-pack streams for various >> projects and products. In the example above the org.jboss universe would >> include wildfly-core, wildfly and related projects that are consumed by >> wildfly that also choose to provide feature-packs. >> >> FAMILY >> >> The family part would designate the project or product. >> >> BRANCH >> >> The branch would normally be a major version. The assumption is that >> anything that comes from the branch is API and config backward compatible. >> >> CLASSIFIER >> >> Branch + classifier is what identifies a stream. The idea is that there >> could be multiple streams originating from the same branch. I.e. a stream >> of final releases, a stream of betas, alphas, etc. A user could choose >> which stream to subscribe to by providing the classifier. >> >> BUILD ID >> >> In most cases that would be the release version. >> universe:family:branch:build_id is going to be the feature-pack >> identity. The classifier is not taken into account because the same >> feature-pack build/release might appear in more than one stream. And so the >> build_id must be unique for the branch. >> >> >> Given the full feature-pack coordinates, the target feature-pack can >> unmistakenly be identified and the installation can be reproduced. At the >> same time, the coordinates include the stream information, so a tool can >> check the stream for the updates, apply them and update the installation >> config with the new feature-pack build_id. 
>> >> If you see any problem with this approach or have a better idea, please >> share. Thanks! >> >> Alexey >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/2dbed293/attachment-0001.html From brian.stansberry at redhat.com Thu Feb 22 18:11:04 2018 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 22 Feb 2018 17:11:04 -0600 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: Having maven GAVs be an internal detail of the tool sounds fine, but we are going to need to produce and distribute the feature packs, and for that I figured we're talking maven. With a specialized plugin involved, sure, but for now and probably for quite a while, it's fundamentally maven. One thing I didn't say before because I was focused on my question, is that the expression segments you outlined sound conceptually correct to me. Because they sound right is why I jumped to practical questions. I don't want to sidetrack this too much though with the details of how this relates to maven, at least not at the cost of people giving you feedback on the basic concept. On Thu, Feb 22, 2018 at 4:41 PM, Alexey Loubyansky < alexey.loubyansky at redhat.com> wrote: > On Thu, Feb 22, 2018 at 10:24 PM, Brian Stansberry < > brian.stansberry at redhat.com> wrote: > >> I'm describing my thinking process of understanding this in hopes that >> it's helpful to others. Or that I'm all wrong and you can correct me. ;) >> >> AIUI you want to still want to use maven and GAVs for actually pulling >> items from the repo, but the additional stream info allows you to work out >> how to identify other related items. So I'm a bit unclear on the >> relationships of this coordinate to a GAV. >> > > GAV has been used initially because of the Maven repo. As long as we use > Maven whatever coordinate expression we choose it will have to translate to > GAV at the end. I imagine there will be an artifact (target repo > coordinate) resolver that will take care of that. > > I initially thought it's >> >> universe:family:build-id >> >> org.jboss:wildfly:12.0.5.Beta4 >> >> That would mean though that BUILD_ID is not just unique for the branch, >> it is unique for the family. That sounds wrong, as you state it's unique >> to the branch. >> >> So now I think it's >> >> family:branch:build-id >> >> wildfly:12:12.0.5.Beta4 >> > > To me that looks like a variation of a GAV which is both a coordinate and > an ID. That could be ok. Actually, the examples above do contain a lot of > info that seems sufficient to have a clue about what this is and where it > belongs. My approach was based on what pieces of info I wanted to extract > from those expressions and that would include (taking into account the > tooling and the user interface): universe, family, branch, release stream > classifier, release id. This is what I will be extracting and dealing with > whatever format we choose. So I might as well expose these directly and let > project/product owners decide how those map into their preferred > versioning, compatibility and update rules. 
I could provide a default GAV > coordinate resolver based on how we are used to define our GAVs and also > let the user (project owner) provide a custom one. > > >> One concern with that is the 'A' in the GAV is no longer something rarely >> changing. In the WildFly case it would change every 3 months. This has some >> implications for the process of producing the feature packs. I'm not >> saying that's a show-stopper problem; more that it's something that we'll >> have to be aware of as we think through the process of creating these. >> > > One of the advantages of not using actual Maven GAVs directly is to make > them an implementation detail. If one day we decide to redefine our GAV > approach or support non-Maven repo for some reason, the end user of the > tool will not have to know about that. > > Thanks, > Alexey > > Most readers can safely skip the rest of this as I'm probably getting >> ahead of myself.... >> >> An example of the kind of thing I'm talking about is in the current root >> pom for WildFly we have: >> >> >> .... >> >> >> .... >> >> ${project.groupId} >> wildfly-feature-pack >> pom >> ${project.version} >> >> >> Thereafter any other child poms that declare a dependency on that feature >> pack just have >> >> >> .... >> >> .... >> >> ${project.groupId} >> wildfly-feature-pack >> pom >> >> >> There's no need to specify the version all over the place, as the >> dependencyManagement mechanism takes care of that in a central location. >> But that kind of approach doesn't work as readily when it comes to >> artifactId. >> >> One possibility is in the root pom there's >> >> >> .... >> >> 12 >> .... >> >> >> .... >> >> ${project.groupId} >> ${feature.pack.branch} >> ${project.version} >> >> >> And then in other child poms: >> >> >> .... >> >> .... >> >> ${project.groupId} >> ${feature.pack.branch} >> pom >> >> >> On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < >> alexey.loubyansky at redhat.com> wrote: >> >>> As many of you know we are planning to move to the new feature-packs and >>> the provisioning mechanism for our wildfly(-based) distributions. New >>> feature-packs will be artifacts in a repository (currently Maven). In this >>> email I'd like to raise a question about how to express a location >>> (coordinates) of a feature-pack, its identify (id) and a stream information >>> which is the source of version updates and patches. >>> >>> Until this moment I've used the GAV (group, artifact, version) as both >>> the feature-pack ID and its coordinates in the repository. This is pretty >>> much enough for a static installation config (which is a list of >>> feature-pack GAVs and config options). The GAV-based config also makes the >>> installation build reproducible. Which is a hard requirement for the >>> provisioning mechanism. >>> >>> On the other hand, we also want to be able to check for the updates in >>> the repository for the installed feature-packs and apply them to an >>> existing installation. Which means that the installation has to be also >>> described in terms of the consumed update streams. This will be a >>> description of the installation in terms of sources of the latest available >>> versions. A build from this kind of config is not guaranteed to be >>> reproducible. This is where the GAVs don't fit as well. >>> >>> What I would like to achieve is to combine the static and dynamic parts >>> of the config into one. Here is what I'm considering. 
When I install a >>> feature-pack (using a tool or adding it manually into the installation >>> config) what ends up in the config is the following expression: >>> universe:family:branch:classifier:build_id, e.g. >>> org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to be >>> the feature-pack coordinates. >>> >>> The meaning behind the parts. >>> >>> UNIVERSE >>> >>> Universe is supposed to be a registry of feature-pack streams for >>> various projects and products. In the example above the org.jboss universe >>> would include wildfly-core, wildfly and related projects that are consumed >>> by wildfly that also choose to provide feature-packs. >>> >>> FAMILY >>> >>> The family part would designate the project or product. >>> >>> BRANCH >>> >>> The branch would normally be a major version. The assumption is that >>> anything that comes from the branch is API and config backward compatible. >>> >>> CLASSIFIER >>> >>> Branch + classifier is what identifies a stream. The idea is that there >>> could be multiple streams originating from the same branch. I.e. a stream >>> of final releases, a stream of betas, alphas, etc. A user could choose >>> which stream to subscribe to by providing the classifier. >>> >>> BUILD ID >>> >>> In most cases that would be the release version. >>> universe:family:branch:build_id is going to be the feature-pack >>> identity. The classifier is not taken into account because the same >>> feature-pack build/release might appear in more than one stream. And so the >>> build_id must be unique for the branch. >>> >>> >>> Given the full feature-pack coordinates, the target feature-pack can >>> unmistakenly be identified and the installation can be reproduced. At the >>> same time, the coordinates include the stream information, so a tool can >>> check the stream for the updates, apply them and update the installation >>> config with the new feature-pack build_id. >>> >>> If you see any problem with this approach or have a better idea, please >>> share. Thanks! >>> >>> Alexey >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180222/bc038780/attachment-0001.html From alexey.loubyansky at redhat.com Fri Feb 23 13:02:55 2018 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Fri, 23 Feb 2018 19:02:55 +0100 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 12:11 AM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > Having maven GAVs be an internal detail of the tool sounds fine, but we > are going to need to produce and distribute the feature packs, and for that > I figured we're talking maven. With a specialized plugin involved, sure, > but for now and probably for quite a while, it's fundamentally maven. > By distributing you mean deploying them to the repo? Let's clarify who will care about the actual GAVs. Will feature-packs need to be located by anything else than the provisioning tool? People taking a snapshot of the repo for offline use? 
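Whoever ends up caring, the translation itself is mechanical. Here is a
rough, purely illustrative sketch of the kind of coordinate-to-GAV mapping a
default resolver could do. None of the class or method names below exist
anywhere today, they are made up just to show the idea:

// Hypothetical sketch only; not an existing API.
// Parses "universe:family:branch:classifier:build_id" and shows one
// possible default mapping of such a coordinate to a Maven GAV.
public final class FeaturePackCoordinate {

    private final String universe;   // e.g. "org.jboss"
    private final String family;     // e.g. "wildfly"
    private final String branch;     // e.g. "12"
    private final String classifier; // e.g. "final", "beta"
    private final String buildId;    // e.g. "12.0.5.Beta4"

    private FeaturePackCoordinate(String universe, String family, String branch,
                                  String classifier, String buildId) {
        this.universe = universe;
        this.family = family;
        this.branch = branch;
        this.classifier = classifier;
        this.buildId = buildId;
    }

    // e.g. parse("org.jboss:wildfly:12:beta:12.0.5.Beta4")
    public static FeaturePackCoordinate parse(String expr) {
        final String[] parts = expr.split(":");
        if (parts.length != 5) {
            throw new IllegalArgumentException(
                    "Expected universe:family:branch:classifier:build_id but got " + expr);
        }
        return new FeaturePackCoordinate(parts[0], parts[1], parts[2], parts[3], parts[4]);
    }

    // The identity drops the classifier, since the same build may show up
    // in more than one stream of the branch.
    public String identity() {
        return universe + ':' + family + ':' + branch + ':' + buildId;
    }

    // One possible default translation to a Maven groupId:artifactId:version.
    // A project could plug in its own resolver with different rules instead.
    public String toDefaultMavenGav() {
        return universe + ':' + family + "-feature-pack:" + buildId;
    }
}

With something like that in place, checking for updates would boil down to
asking the universe for the latest build_id available on the installed
family:branch:classifier and, if it differs, resolving the corresponding
repository artifact and recording the new build_id in the config.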
Once feature-packs are in the repo, they become consumable by the tool (which is capable of discovering them by means of a resolver). The tool can also create feature-packs and install/deploy them into the repo. So it serves both the end users and teams producing the feature-packs. The location in the repo will still be 100% predictable. It's just the coordinates in the provisioning configs will not be the actual Maven GAVs. I'm thinking who would care about that. The end user will deal with the notions of the family, branch, stream, etc and not need to set the coordinates resolver up. It will be provided by the stream they subscribe to. BTW, conceptually the artifact resolver component will be there either way just be able to implement the notion of the universe and a stream of updates. Alexey One thing I didn't say before because I was focused on my question, is that > the expression segments you outlined sound conceptually correct to me. > Because they sound right is why I jumped to practical questions. I don't > want to sidetrack this too much though with the details of how this relates > to maven, at least not at the cost of people giving you feedback on the > basic concept. > > > On Thu, Feb 22, 2018 at 4:41 PM, Alexey Loubyansky < > alexey.loubyansky at redhat.com> wrote: > >> On Thu, Feb 22, 2018 at 10:24 PM, Brian Stansberry < >> brian.stansberry at redhat.com> wrote: >> >>> I'm describing my thinking process of understanding this in hopes that >>> it's helpful to others. Or that I'm all wrong and you can correct me. ;) >>> >>> AIUI you want to still want to use maven and GAVs for actually pulling >>> items from the repo, but the additional stream info allows you to work out >>> how to identify other related items. So I'm a bit unclear on the >>> relationships of this coordinate to a GAV. >>> >> >> GAV has been used initially because of the Maven repo. As long as we use >> Maven whatever coordinate expression we choose it will have to translate to >> GAV at the end. I imagine there will be an artifact (target repo >> coordinate) resolver that will take care of that. >> >> I initially thought it's >>> >>> universe:family:build-id >>> >>> org.jboss:wildfly:12.0.5.Beta4 >>> >>> That would mean though that BUILD_ID is not just unique for the branch, >>> it is unique for the family. That sounds wrong, as you state it's unique >>> to the branch. >>> >>> So now I think it's >>> >>> family:branch:build-id >>> >>> wildfly:12:12.0.5.Beta4 >>> >> >> To me that looks like a variation of a GAV which is both a coordinate and >> an ID. That could be ok. Actually, the examples above do contain a lot of >> info that seems sufficient to have a clue about what this is and where it >> belongs. My approach was based on what pieces of info I wanted to extract >> from those expressions and that would include (taking into account the >> tooling and the user interface): universe, family, branch, release stream >> classifier, release id. This is what I will be extracting and dealing with >> whatever format we choose. So I might as well expose these directly and let >> project/product owners decide how those map into their preferred >> versioning, compatibility and update rules. I could provide a default GAV >> coordinate resolver based on how we are used to define our GAVs and also >> let the user (project owner) provide a custom one. >> >> >>> One concern with that is the 'A' in the GAV is no longer something >>> rarely changing. In the WildFly case it would change every 3 months. 
This >>> has some implications for the process of producing the feature packs. I'm >>> not saying that's a show-stopper problem; more that it's something that >>> we'll have to be aware of as we think through the process of creating these. >>> >> >> One of the advantages of not using actual Maven GAVs directly is to make >> them an implementation detail. If one day we decide to redefine our GAV >> approach or support non-Maven repo for some reason, the end user of the >> tool will not have to know about that. >> > > >> Thanks, >> Alexey >> >> Most readers can safely skip the rest of this as I'm probably getting >>> ahead of myself.... >>> >>> An example of the kind of thing I'm talking about is in the current root >>> pom for WildFly we have: >>> >>> >>> .... >>> >>> >>> .... >>> >>> ${project.groupId} >>> wildfly-feature-pack >>> pom >>> ${project.version} >>> >>> >>> Thereafter any other child poms that declare a dependency on that >>> feature pack just have >>> >>> >>> .... >>> >>> .... >>> >>> ${project.groupId} >>> wildfly-feature-pack >>> pom >>> >>> >>> There's no need to specify the version all over the place, as the >>> dependencyManagement mechanism takes care of that in a central location. >>> But that kind of approach doesn't work as readily when it comes to >>> artifactId. >>> >>> One possibility is in the root pom there's >>> >>> >>> .... >>> >>> 12 >>> .... >>> >>> >>> .... >>> >>> ${project.groupId} >>> ${feature.pack.branch} >>> ${project.version} >>> >>> >>> And then in other child poms: >>> >>> >>> .... >>> >>> .... >>> >>> ${project.groupId} >>> ${feature.pack.branch} >>> pom >>> >>> >>> On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < >>> alexey.loubyansky at redhat.com> wrote: >>> >>>> As many of you know we are planning to move to the new feature-packs >>>> and the provisioning mechanism for our wildfly(-based) distributions. New >>>> feature-packs will be artifacts in a repository (currently Maven). In this >>>> email I'd like to raise a question about how to express a location >>>> (coordinates) of a feature-pack, its identify (id) and a stream information >>>> which is the source of version updates and patches. >>>> >>>> Until this moment I've used the GAV (group, artifact, version) as both >>>> the feature-pack ID and its coordinates in the repository. This is pretty >>>> much enough for a static installation config (which is a list of >>>> feature-pack GAVs and config options). The GAV-based config also makes the >>>> installation build reproducible. Which is a hard requirement for the >>>> provisioning mechanism. >>>> >>>> On the other hand, we also want to be able to check for the updates in >>>> the repository for the installed feature-packs and apply them to an >>>> existing installation. Which means that the installation has to be also >>>> described in terms of the consumed update streams. This will be a >>>> description of the installation in terms of sources of the latest available >>>> versions. A build from this kind of config is not guaranteed to be >>>> reproducible. This is where the GAVs don't fit as well. >>>> >>>> What I would like to achieve is to combine the static and dynamic parts >>>> of the config into one. Here is what I'm considering. When I install a >>>> feature-pack (using a tool or adding it manually into the installation >>>> config) what ends up in the config is the following expression: >>>> universe:family:branch:classifier:build_id, e.g. >>>> org.jboss:wildfly:12:beta:12.0.5.Beta4. 
>>>> This expression is going to be the feature-pack coordinates.
>>>>
>>>> The meaning behind the parts.
>>>>
>>>> UNIVERSE
>>>>
>>>> Universe is supposed to be a registry of feature-pack streams for
>>>> various projects and products. In the example above the org.jboss
>>>> universe would include wildfly-core, wildfly and related projects that
>>>> are consumed by wildfly that also choose to provide feature-packs.
>>>>
>>>> FAMILY
>>>>
>>>> The family part would designate the project or product.
>>>>
>>>> BRANCH
>>>>
>>>> The branch would normally be a major version. The assumption is that
>>>> anything that comes from the branch is API and config backward compatible.
>>>>
>>>> CLASSIFIER
>>>>
>>>> Branch + classifier is what identifies a stream. The idea is that there
>>>> could be multiple streams originating from the same branch. I.e. a stream
>>>> of final releases, a stream of betas, alphas, etc. A user could choose
>>>> which stream to subscribe to by providing the classifier.
>>>>
>>>> BUILD ID
>>>>
>>>> In most cases that would be the release version.
>>>> universe:family:branch:build_id is going to be the feature-pack identity.
>>>> The classifier is not taken into account because the same feature-pack
>>>> build/release might appear in more than one stream. And so the build_id
>>>> must be unique for the branch.
>>>>
>>>> Given the full feature-pack coordinates, the target feature-pack can
>>>> unmistakably be identified and the installation can be reproduced. At the
>>>> same time, the coordinates include the stream information, so a tool can
>>>> check the stream for the updates, apply them and update the installation
>>>> config with the new feature-pack build_id.
>>>>
>>>> If you see any problem with this approach or have a better idea, please
>>>> share. Thanks!
>>>>
>>>> Alexey
>>>>
>>>> _______________________________________________
>>>> wildfly-dev mailing list
>>>> wildfly-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>
>>> --
>>> Brian Stansberry
>>> Manager, Senior Principal Software Engineer
>>> Red Hat
>>
>
> --
> Brian Stansberry
> Manager, Senior Principal Software Engineer
> Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180223/21c9d9a8/attachment-0001.html
From jason.greene at redhat.com  Mon Feb 26 22:32:21 2018
From: jason.greene at redhat.com (Jason Greene)
Date: Mon, 26 Feb 2018 21:32:21 -0600
Subject: [wildfly-dev] 12.0.0.CR1 released!
Message-ID: <8E8FB3D2-54E2-4B96-A48E-70FCC090DAF7@redhat.com>

Hi Everyone,

In preparation for WildFly 12 Final, CR1 is now available for build testing:
http://wildfly.org/downloads/

Provided no blocking issues are discovered we will be releasing Final shortly.

WildFly 12 is the first release in our new quarterly delivery model. The
most significant feature is delivery of new EE8 capabilities. As mentioned
during the original 12 announcement, we are delivering EE8 functionality
incrementally, as opposed to waiting for a big bang. WildFly 12 includes
Servlet 4, JAX-RS 2.1, CDI 2.0, Bean Validation 2.0, JSF 2.3, JSON-B,
JSON-P 1.1, and JavaMail 1.6.

By default WildFly 12 runs in EE7 mode, but you can enable EE8 variants of
the standard by starting the server with the special parameter
"-Dee8.preview.mode=true".

Thanks!
-Jason
From brian.stansberry at redhat.com  Tue Feb 27 11:44:16 2018
From: brian.stansberry at redhat.com (Brian Stansberry)
Date: Tue, 27 Feb 2018 10:44:16 -0600
Subject: [wildfly-dev] new feature-pack repo coords, id and streams
In-Reply-To: 
References: 
Message-ID: 

On Fri, Feb 23, 2018 at 12:02 PM, Alexey Loubyansky <
alexey.loubyansky at redhat.com> wrote:

> On Fri, Feb 23, 2018 at 12:11 AM, Brian Stansberry <
> brian.stansberry at redhat.com> wrote:
>
>> Having maven GAVs be an internal detail of the tool sounds fine, but we
>> are going to need to produce and distribute the feature packs, and for that
>> I figured we're talking maven. With a specialized plugin involved, sure,
>> but for now and probably for quite a while, it's fundamentally maven.
>
> By distributing you mean deploying them to the repo?

Sorry for the delay on this. I mean building them and making them available
for use, in whatever ways we have to do that. Precisely how we intend to do
that was something of a question mark for me, even before this discussion.
But in a naive kind of way, if we were just talking about building maven
artifacts and making them available via a maven repo, well, that's something
we've done a ton of and it's well understood. But we (or at least I) need
more clarity on how this will work, and this discussion has just made me
more aware of that.

Within the WildFly build itself, AIUI this "provisioning repo" is both an
output and an input. It's an input because the existing build and dist
maven modules need to continue to exist, and those will need this
provisioning repo in order for the pm tool to produce the build/dist
artifacts.

I agree that this "provisioning repo" does not need to be internally
structured as a maven repo. It just needs to be producible and consumable
by a maven-based build that uses a plugin that uses the provisioning tool.

> Let's clarify who will care about the actual GAVs. Will feature-packs need
> to be located by anything else than the provisioning tool? People taking
> a snapshot of the repo for offline use?
>> >> >> On Thu, Feb 22, 2018 at 4:41 PM, Alexey Loubyansky < >> alexey.loubyansky at redhat.com> wrote: >> >>> On Thu, Feb 22, 2018 at 10:24 PM, Brian Stansberry < >>> brian.stansberry at redhat.com> wrote: >>> >>>> I'm describing my thinking process of understanding this in hopes that >>>> it's helpful to others. Or that I'm all wrong and you can correct me. ;) >>>> >>>> AIUI you want to still want to use maven and GAVs for actually pulling >>>> items from the repo, but the additional stream info allows you to work out >>>> how to identify other related items. So I'm a bit unclear on the >>>> relationships of this coordinate to a GAV. >>>> >>> >>> GAV has been used initially because of the Maven repo. As long as we use >>> Maven whatever coordinate expression we choose it will have to translate to >>> GAV at the end. I imagine there will be an artifact (target repo >>> coordinate) resolver that will take care of that. >>> >>> I initially thought it's >>>> >>>> universe:family:build-id >>>> >>>> org.jboss:wildfly:12.0.5.Beta4 >>>> >>>> That would mean though that BUILD_ID is not just unique for the branch, >>>> it is unique for the family. That sounds wrong, as you state it's unique >>>> to the branch. >>>> >>>> So now I think it's >>>> >>>> family:branch:build-id >>>> >>>> wildfly:12:12.0.5.Beta4 >>>> >>> >>> To me that looks like a variation of a GAV which is both a coordinate >>> and an ID. That could be ok. Actually, the examples above do contain a lot >>> of info that seems sufficient to have a clue about what this is and where >>> it belongs. My approach was based on what pieces of info I wanted to >>> extract from those expressions and that would include (taking into account >>> the tooling and the user interface): universe, family, branch, release >>> stream classifier, release id. This is what I will be extracting and >>> dealing with whatever format we choose. So I might as well expose these >>> directly and let project/product owners decide how those map into their >>> preferred versioning, compatibility and update rules. I could provide a >>> default GAV coordinate resolver based on how we are used to define our GAVs >>> and also let the user (project owner) provide a custom one. >>> >>> >>>> One concern with that is the 'A' in the GAV is no longer something >>>> rarely changing. In the WildFly case it would change every 3 months. This >>>> has some implications for the process of producing the feature packs. I'm >>>> not saying that's a show-stopper problem; more that it's something that >>>> we'll have to be aware of as we think through the process of creating these. >>>> >>> >>> One of the advantages of not using actual Maven GAVs directly is to make >>> them an implementation detail. If one day we decide to redefine our GAV >>> approach or support non-Maven repo for some reason, the end user of the >>> tool will not have to know about that. >>> >> >> >>> Thanks, >>> Alexey >>> >>> Most readers can safely skip the rest of this as I'm probably getting >>>> ahead of myself.... >>>> >>>> An example of the kind of thing I'm talking about is in the current >>>> root pom for WildFly we have: >>>> >>>> >>>> .... >>>> >>>> >>>> .... >>>> >>>> ${project.groupId} >>>> wildfly-feature-pack >>>> pom >>>> ${project.version} >>>> >>>> >>>> Thereafter any other child poms that declare a dependency on that >>>> feature pack just have >>>> >>>> >>>> .... >>>> >>>> .... 
>>>> >>>> ${project.groupId} >>>> wildfly-feature-pack >>>> pom >>>> >>>> >>>> There's no need to specify the version all over the place, as the >>>> dependencyManagement mechanism takes care of that in a central location. >>>> But that kind of approach doesn't work as readily when it comes to >>>> artifactId. >>>> >>>> One possibility is in the root pom there's >>>> >>>> >>>> .... >>>> >>>> 12 >>>> .... >>>> >>>> >>>> .... >>>> >>>> ${project.groupId} >>>> ${feature.pack.branch} >>>> ${project.version} >>>> >>>> >>>> And then in other child poms: >>>> >>>> >>>> .... >>>> >>>> .... >>>> >>>> ${project.groupId} >>>> ${feature.pack.branch} >>>> pom >>>> >>>> >>>> On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < >>>> alexey.loubyansky at redhat.com> wrote: >>>> >>>>> As many of you know we are planning to move to the new feature-packs >>>>> and the provisioning mechanism for our wildfly(-based) distributions. New >>>>> feature-packs will be artifacts in a repository (currently Maven). In this >>>>> email I'd like to raise a question about how to express a location >>>>> (coordinates) of a feature-pack, its identify (id) and a stream information >>>>> which is the source of version updates and patches. >>>>> >>>>> Until this moment I've used the GAV (group, artifact, version) as both >>>>> the feature-pack ID and its coordinates in the repository. This is pretty >>>>> much enough for a static installation config (which is a list of >>>>> feature-pack GAVs and config options). The GAV-based config also makes the >>>>> installation build reproducible. Which is a hard requirement for the >>>>> provisioning mechanism. >>>>> >>>>> On the other hand, we also want to be able to check for the updates in >>>>> the repository for the installed feature-packs and apply them to an >>>>> existing installation. Which means that the installation has to be also >>>>> described in terms of the consumed update streams. This will be a >>>>> description of the installation in terms of sources of the latest available >>>>> versions. A build from this kind of config is not guaranteed to be >>>>> reproducible. This is where the GAVs don't fit as well. >>>>> >>>>> What I would like to achieve is to combine the static and dynamic >>>>> parts of the config into one. Here is what I'm considering. When I install >>>>> a feature-pack (using a tool or adding it manually into the installation >>>>> config) what ends up in the config is the following expression: >>>>> universe:family:branch:classifier:build_id, e.g. >>>>> org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to >>>>> be the feature-pack coordinates. >>>>> >>>>> The meaning behind the parts. >>>>> >>>>> UNIVERSE >>>>> >>>>> Universe is supposed to be a registry of feature-pack streams for >>>>> various projects and products. In the example above the org.jboss universe >>>>> would include wildfly-core, wildfly and related projects that are consumed >>>>> by wildfly that also choose to provide feature-packs. >>>>> >>>>> FAMILY >>>>> >>>>> The family part would designate the project or product. >>>>> >>>>> BRANCH >>>>> >>>>> The branch would normally be a major version. The assumption is that >>>>> anything that comes from the branch is API and config backward compatible. >>>>> >>>>> CLASSIFIER >>>>> >>>>> Branch + classifier is what identifies a stream. The idea is that >>>>> there could be multiple streams originating from the same branch. I.e. a >>>>> stream of final releases, a stream of betas, alphas, etc. 
A user could >>>>> choose which stream to subscribe to by providing the classifier. >>>>> >>>>> BUILD ID >>>>> >>>>> In most cases that would be the release version. >>>>> universe:family:branch:build_id is going to be the feature-pack >>>>> identity. The classifier is not taken into account because the same >>>>> feature-pack build/release might appear in more than one stream. And so the >>>>> build_id must be unique for the branch. >>>>> >>>>> >>>>> Given the full feature-pack coordinates, the target feature-pack can >>>>> unmistakenly be identified and the installation can be reproduced. At the >>>>> same time, the coordinates include the stream information, so a tool can >>>>> check the stream for the updates, apply them and update the installation >>>>> config with the new feature-pack build_id. >>>>> >>>>> If you see any problem with this approach or have a better idea, >>>>> please share. Thanks! >>>>> >>>>> Alexey >>>>> >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> >>>> >>>> >>>> -- >>>> Brian Stansberry >>>> Manager, Senior Principal Software Engineer >>>> Red Hat >>>> >>> >>> >> >> >> -- >> Brian Stansberry >> Manager, Senior Principal Software Engineer >> Red Hat >> > > -- Brian Stansberry Manager, Senior Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180227/f7f6a5b6/attachment-0001.html From jason.greene at redhat.com Tue Feb 27 12:18:46 2018 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 27 Feb 2018 11:18:46 -0600 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: <288672B5-2538-45F8-B2AE-8AC0C00BC1C7@redhat.com> > On Feb 22, 2018, at 3:24 PM, Brian Stansberry wrote: > > I'm describing my thinking process of understanding this in hopes that it's helpful to others. Or that I'm all wrong and you can correct me. ;) > > AIUI you want to still want to use maven and GAVs for actually pulling items from the repo, but the additional stream info allows you to work out how to identify other related items. So I'm a bit unclear on the relationships of this coordinate to a GAV. > > I initially thought it's > > universe:family:build-id > > org.jboss:wildfly:12.0.5.Beta4 > > That would mean though that BUILD_ID is not just unique for the branch, it is unique for the family. That sounds wrong, as you state it's unique to the branch. > > So now I think it's > > family:branch:build-id > > wildfly:12:12.0.5.Beta4 > > One concern with that is the 'A' in the GAV is no longer something rarely changing. In the WildFly case it would change every 3 months. This has some implications for the process of producing the feature packs. I?m not saying that's a show-stopper problem; more that it's something that we'll have to be aware of as we think through the process of creating these. For WildFly the ?branch? (which is really a stream) we would want would, should, IMO, just be called ?wildfly?, since we allow compatible updates across the stream. That may seem sketchy since it implies ?forever?, but If we ever did decide to make a radical incompatible architectural change in the distant future, we would then just come up with a new moniker. Note that stream compatibility doesn?t necessarily mean we never prune deprecated content, it just means an update is possible. 
-Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180227/f8541ecb/attachment.html From jason.greene at redhat.com Tue Feb 27 12:43:59 2018 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 27 Feb 2018 11:43:59 -0600 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: References: Message-ID: <28BC58C6-C35F-456D-877A-FC425F02F904@redhat.com> > On Feb 22, 2018, at 4:41 PM, Alexey Loubyansky wrote: > > I could provide a default GAV coordinate resolver based on how we are used to define our GAVs and also let the user (project owner) provide a custom one. Ideally the specification approach we decide on is consistent across the family and even the universe. It?s easier IMO to look at this from a user interaction perspective, and to consider which source of info is authoritative. For initial consumption there is two ways a user will specify what they initially want: A) Give me the latest WildFly (latest on a given branch:classifer) B) Give me exactly this version of WildFly (e.g. WildFly 12.0.0.Final) As you mentioned earlier in the thread, even when A is used we need to store B so that the prov config is reproducible. Whats also interesting is that in most cases the user does not care what the policy details are for updating to the latest, they will happily accept the default, and really in that scenario the authoritative source is a registry artifact in the maven repo, not necessarily whats in the provisioning file. The policy details also aren?t important for reproducibility, since the full version already gives you what you need. Where it would be important in the provisioning file is when the user has overridden the rules. As an example, just because a user installs 12.0.0.Beta1, doesn?t mean that from now on they always want betas. They might just want to try a particular beta, and then update back to the main stream. So to resolve the ambiguity, in any case other than the default, the user would have to specify the stream they are interested in. One other thing to keep in mind, from a usability perspective, is to be careful with using things that look like GAVs but aren?t GAVs -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180227/48670b96/attachment.html From alexey.loubyansky at redhat.com Tue Feb 27 16:10:27 2018 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Tue, 27 Feb 2018 22:10:27 +0100 Subject: [wildfly-dev] new feature-pack repo coords, id and streams In-Reply-To: <28BC58C6-C35F-456D-877A-FC425F02F904@redhat.com> References: <28BC58C6-C35F-456D-877A-FC425F02F904@redhat.com> Message-ID: On Tue, Feb 27, 2018 at 6:43 PM, Jason Greene wrote: > > On Feb 22, 2018, at 4:41 PM, Alexey Loubyansky < > alexey.loubyansky at redhat.com> wrote: > > I could provide a default GAV coordinate resolver based on how we are > used to define our GAVs and also let the user (project owner) provide a > custom one. > > > Ideally the specification approach we decide on is consistent across the > family and even the universe. It?s easier IMO to look at this from a user > interaction perspective, and to consider which source of info is > authoritative. > > For initial consumption there is two ways a user will specify what they > initially want: > > A) Give me the latest WildFly (latest on a given branch:classifer) > > B) Give me exactly this version of WildFly (e.g. 
WildFly 12.0.0.Final)
>
> As you mentioned earlier in the thread, even when A is used we need to
> store B so that the prov config is reproducible. What's also interesting is
> that in most cases the user does not care what the policy details are for
> updating to the latest, they will happily accept the default, and really in
> that scenario the authoritative source is a registry artifact in the maven
> repo, not necessarily what's in the provisioning file. The policy details
> also aren't important for reproducibility, since the full version already
> gives you what you need. Where it would be important in the provisioning
> file is when the user has overridden the rules. As an example, just because
> a user installs 12.0.0.Beta1, doesn't mean that from now on they always
> want betas. They might just want to try a particular beta, and then update
> back to the main stream. So to resolve the ambiguity, in any case other
> than the default, the user would have to specify the stream they are
> interested in.
>

What I want to achieve with this is to make the initial choice of A) or B)
not important, in the sense that if the user chose to install the latest
version, the provisioned installation will remain 100% reproducible since
the recorded state will include specific versions of the installed
feature-packs. And if the user chose B), i.e. to provision a specific
version, then the usual update command would still work, because the
family:branch:stream is a part of the coordinates of the initially chosen
specific version.

The same applies to the dependencies of the explicitly installed
feature-packs. The dependencies will always be of specific versions to
guarantee the reproducibility of the release. But from their coordinates we
can figure out the branch and the stream they originate from. And when
executing update we can poll for dependency updates as well.

The user would be able to switch to a different stream of the same branch,
that would have to be an explicit instruction though. I think the default
stream should be final releases. If the user wants something else then it
will need to be expressed explicitly. Switching between branches of the same
family could work too. I guess that would depend on a specific project and
the compatibility between major version releases.

> One other thing to keep in mind, from a usability perspective, is to be
> careful with using things that look like GAVs but aren't GAVs
>

That's a good one, agreed.

Thanks,
Alexey

>
> -Jason
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180227/91c6e97a/attachment-0001.html
From alexey.loubyansky at redhat.com  Tue Feb 27 16:22:58 2018
From: alexey.loubyansky at redhat.com (Alexey Loubyansky)
Date: Tue, 27 Feb 2018 22:22:58 +0100
Subject: [wildfly-dev] new feature-pack repo coords, id and streams
In-Reply-To: 
References: 
Message-ID: 

On Tue, Feb 27, 2018 at 5:44 PM, Brian Stansberry <
brian.stansberry at redhat.com> wrote:

> On Fri, Feb 23, 2018 at 12:02 PM, Alexey Loubyansky <
> alexey.loubyansky at redhat.com> wrote:
>
>> On Fri, Feb 23, 2018 at 12:11 AM, Brian Stansberry <
>> brian.stansberry at redhat.com> wrote:
>>
>>> Having maven GAVs be an internal detail of the tool sounds fine, but we
>>> are going to need to produce and distribute the feature packs, and for that
>>> I figured we're talking maven. With a specialized plugin involved, sure,
>>> but for now and probably for quite a while, it's fundamentally maven.
>>> >> >> By distributing you mean deploying them to the repo? >> > > Sorry for the delay on this. I mean building them making them available > for use, in whatever ways we have to do that. Precisely how we intend to do > that was something of a question mark for me, even before this discussion. > But in a naive kind of way if we were just talking about building maven > artifacts and making them available via a maven repo, well that's something > we've done a ton of and it's well understood. But we (or at least I) need > more clarity on how this will work, and this discussion has just made me > more aware of that. > > Within the WildFly build itself, AIUI then this "provisioning repo" is > both an output, and an input. It's an input because the existing build and > dist maven modules need to continue to exist, and those will need this > provisioning repo in order for the pm tool to produce the build/dist > artifacts. > What I'm proposing in this thread affects only the feature-pack coordinates, not the module artifacts. The build and dist will remain as they are now. I agree that this "provisioning repo" does not need to be internally > structured as a maven repo. It just needs to be producible and consumable > by a maven-based build that uses a plugin that uses the provisioning tool. > The repo will remain the Maven for us. There won't be a separate provisioning-specific internal repo. The feature-pack coordinates will simply be translated into the Maven GAVs when we need to resolve the feature-pack artifact. Thanks, Alexey Let's clarify who will care about the actual GAVs. Will feature-packs need >> to be located by anything else than the provisioning tool? People taking >> a snapshot of the repo for offline use? >> > > I don't think so no. > > >> Once feature-packs are in the repo, they become consumable by the tool >> (which is capable of discovering them by means of a resolver). The tool can >> also create feature-packs and install/deploy them into the repo. So it >> serves both the end users and teams producing the feature-packs. The >> location in the repo will still be 100% predictable. It's just the >> coordinates in the provisioning configs will not be the actual Maven GAVs. >> I'm thinking who would care about that. The end user will deal with the >> notions of the family, branch, stream, etc and not need to set the >> coordinates resolver up. It will be provided by the stream they subscribe >> to. >> >> BTW, conceptually the artifact resolver component will be there either >> way just be able to implement the notion of the universe and a stream of >> updates. >> >> Alexey >> >> >> One thing I didn't say before because I was focused on my question, is >>> that the expression segments you outlined sound conceptually correct to me. >>> Because they sound right is why I jumped to practical questions. I don't >>> want to sidetrack this too much though with the details of how this relates >>> to maven, at least not at the cost of people giving you feedback on the >>> basic concept. >>> >>> >>> On Thu, Feb 22, 2018 at 4:41 PM, Alexey Loubyansky < >>> alexey.loubyansky at redhat.com> wrote: >>> >>>> On Thu, Feb 22, 2018 at 10:24 PM, Brian Stansberry < >>>> brian.stansberry at redhat.com> wrote: >>>> >>>>> I'm describing my thinking process of understanding this in hopes that >>>>> it's helpful to others. Or that I'm all wrong and you can correct me. 
;) >>>>> >>>>> AIUI you want to still want to use maven and GAVs for actually pulling >>>>> items from the repo, but the additional stream info allows you to work out >>>>> how to identify other related items. So I'm a bit unclear on the >>>>> relationships of this coordinate to a GAV. >>>>> >>>> >>>> GAV has been used initially because of the Maven repo. As long as we >>>> use Maven whatever coordinate expression we choose it will have to >>>> translate to GAV at the end. I imagine there will be an artifact (target >>>> repo coordinate) resolver that will take care of that. >>>> >>>> I initially thought it's >>>>> >>>>> universe:family:build-id >>>>> >>>>> org.jboss:wildfly:12.0.5.Beta4 >>>>> >>>>> That would mean though that BUILD_ID is not just unique for the >>>>> branch, it is unique for the family. That sounds wrong, as you state it's >>>>> unique to the branch. >>>>> >>>>> So now I think it's >>>>> >>>>> family:branch:build-id >>>>> >>>>> wildfly:12:12.0.5.Beta4 >>>>> >>>> >>>> To me that looks like a variation of a GAV which is both a coordinate >>>> and an ID. That could be ok. Actually, the examples above do contain a lot >>>> of info that seems sufficient to have a clue about what this is and where >>>> it belongs. My approach was based on what pieces of info I wanted to >>>> extract from those expressions and that would include (taking into account >>>> the tooling and the user interface): universe, family, branch, release >>>> stream classifier, release id. This is what I will be extracting and >>>> dealing with whatever format we choose. So I might as well expose these >>>> directly and let project/product owners decide how those map into their >>>> preferred versioning, compatibility and update rules. I could provide a >>>> default GAV coordinate resolver based on how we are used to define our GAVs >>>> and also let the user (project owner) provide a custom one. >>>> >>>> >>>>> One concern with that is the 'A' in the GAV is no longer something >>>>> rarely changing. In the WildFly case it would change every 3 months. This >>>>> has some implications for the process of producing the feature packs. I'm >>>>> not saying that's a show-stopper problem; more that it's something that >>>>> we'll have to be aware of as we think through the process of creating these. >>>>> >>>> >>>> One of the advantages of not using actual Maven GAVs directly is to >>>> make them an implementation detail. If one day we decide to redefine our >>>> GAV approach or support non-Maven repo for some reason, the end user of the >>>> tool will not have to know about that. >>>> >>> >>> >>>> Thanks, >>>> Alexey >>>> >>>> Most readers can safely skip the rest of this as I'm probably getting >>>>> ahead of myself.... >>>>> >>>>> An example of the kind of thing I'm talking about is in the current >>>>> root pom for WildFly we have: >>>>> >>>>> >>>>> .... >>>>> >>>>> >>>>> .... >>>>> >>>>> ${project.groupId} >>>>> wildfly-feature-pack >>>>> pom >>>>> ${project.version} >>>>> >>>>> >>>>> Thereafter any other child poms that declare a dependency on that >>>>> feature pack just have >>>>> >>>>> >>>>> .... >>>>> >>>>> .... >>>>> >>>>> ${project.groupId} >>>>> wildfly-feature-pack >>>>> pom >>>>> >>>>> >>>>> There's no need to specify the version all over the place, as the >>>>> dependencyManagement mechanism takes care of that in a central location. >>>>> But that kind of approach doesn't work as readily when it comes to >>>>> artifactId. >>>>> >>>>> One possibility is in the root pom there's >>>>> >>>>> >>>>> .... 
>>>>> >>>>> 12 >>>>> .... >>>>> >>>>> >>>>> .... >>>>> >>>>> ${project.groupId} >>>>> ${feature.pack.branch} >>>>> ${project.version} >>>>> >>>>> >>>>> And then in other child poms: >>>>> >>>>> >>>>> .... >>>>> >>>>> .... >>>>> >>>>> ${project.groupId} >>>>> ${feature.pack.branch} >>>>> pom >>>>> >>>>> >>>>> On Wed, Feb 21, 2018 at 4:40 PM, Alexey Loubyansky < >>>>> alexey.loubyansky at redhat.com> wrote: >>>>> >>>>>> As many of you know we are planning to move to the new feature-packs >>>>>> and the provisioning mechanism for our wildfly(-based) distributions. New >>>>>> feature-packs will be artifacts in a repository (currently Maven). In this >>>>>> email I'd like to raise a question about how to express a location >>>>>> (coordinates) of a feature-pack, its identify (id) and a stream information >>>>>> which is the source of version updates and patches. >>>>>> >>>>>> Until this moment I've used the GAV (group, artifact, version) as >>>>>> both the feature-pack ID and its coordinates in the repository. This is >>>>>> pretty much enough for a static installation config (which is a list of >>>>>> feature-pack GAVs and config options). The GAV-based config also makes the >>>>>> installation build reproducible. Which is a hard requirement for the >>>>>> provisioning mechanism. >>>>>> >>>>>> On the other hand, we also want to be able to check for the updates >>>>>> in the repository for the installed feature-packs and apply them to an >>>>>> existing installation. Which means that the installation has to be also >>>>>> described in terms of the consumed update streams. This will be a >>>>>> description of the installation in terms of sources of the latest available >>>>>> versions. A build from this kind of config is not guaranteed to be >>>>>> reproducible. This is where the GAVs don't fit as well. >>>>>> >>>>>> What I would like to achieve is to combine the static and dynamic >>>>>> parts of the config into one. Here is what I'm considering. When I install >>>>>> a feature-pack (using a tool or adding it manually into the installation >>>>>> config) what ends up in the config is the following expression: >>>>>> universe:family:branch:classifier:build_id, e.g. >>>>>> org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to >>>>>> be the feature-pack coordinates. >>>>>> >>>>>> The meaning behind the parts. >>>>>> >>>>>> UNIVERSE >>>>>> >>>>>> Universe is supposed to be a registry of feature-pack streams for >>>>>> various projects and products. In the example above the org.jboss universe >>>>>> would include wildfly-core, wildfly and related projects that are consumed >>>>>> by wildfly that also choose to provide feature-packs. >>>>>> >>>>>> FAMILY >>>>>> >>>>>> The family part would designate the project or product. >>>>>> >>>>>> BRANCH >>>>>> >>>>>> The branch would normally be a major version. The assumption is that >>>>>> anything that comes from the branch is API and config backward compatible. >>>>>> >>>>>> CLASSIFIER >>>>>> >>>>>> Branch + classifier is what identifies a stream. The idea is that >>>>>> there could be multiple streams originating from the same branch. I.e. a >>>>>> stream of final releases, a stream of betas, alphas, etc. A user could >>>>>> choose which stream to subscribe to by providing the classifier. >>>>>> >>>>>> BUILD ID >>>>>> >>>>>> In most cases that would be the release version. >>>>>> universe:family:branch:build_id is going to be the feature-pack >>>>>> identity. 
>>>>>>
>>>>>> Given the full feature-pack coordinates, the target feature-pack can be
>>>>>> unambiguously identified and the installation can be reproduced. At the
>>>>>> same time, the coordinates include the stream information, so a tool can
>>>>>> check the stream for updates, apply them and update the installation
>>>>>> config with the new feature-pack build_id.
>>>>>>
>>>>>> If you see any problem with this approach or have a better idea, please
>>>>>> share. Thanks!
>>>>>>
>>>>>> Alexey
>>>>>>
>>>>>> _______________________________________________
>>>>>> wildfly-dev mailing list
>>>>>> wildfly-dev at lists.jboss.org
>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian Stansberry
>>>>> Manager, Senior Principal Software Engineer
>>>>> Red Hat
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Brian Stansberry
>>> Manager, Senior Principal Software Engineer
>>> Red Hat
>>>
>>
>>
>
>
> --
> Brian Stansberry
> Manager, Senior Principal Software Engineer
> Red Hat
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20180227/c3cca001/attachment-0001.html

From jai.forums2013 at gmail.com Wed Feb 28 23:52:08 2018
From: jai.forums2013 at gmail.com (Jaikiran Pai)
Date: Thu, 1 Mar 2018 10:22:08 +0530
Subject: [wildfly-dev] 12.0.0.CR1 released!
In-Reply-To: <8E8FB3D2-54E2-4B96-A48E-70FCC090DAF7@redhat.com>
References: <8E8FB3D2-54E2-4B96-A48E-70FCC090DAF7@redhat.com>
Message-ID: <98bef8dd-9e9d-67a2-2ebb-edb793706710@gmail.com>

Hi Jason,

Over the past few weeks there have been multiple reports in the forums and
in JIRA related to remote EJB invocations (especially over HTTP) that seem
to be failing due to bugs in our remote EJB libraries. From what I see,
most of them should be resolved in the newly released 12.0.0.CR1. However,
I think there are still one or two issues (still being discussed in the
forums and JIRA [1] [2]) which I suspect are bugs in either WildFly itself
or one of our libraries. Related to those, I believe the fix in the
currently open PR https://github.com/wildfly/wildfly/pull/10929 is an
important one in the context of the 12.0 Final release. I haven't had the
chance to check whether this PR would solve the rest of the issues, but at
least from the looks of it, it's an important fix in itself. I think we
should include that fix in 12.0.0.Final.

[1] https://issues.jboss.org/browse/WFLY-9896
[2] https://developer.jboss.org/message/980718#980718

-Jaikiran
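A minimal sketch of the kind of remote EJB invocation over HTTP being
discussed, assuming the server's HTTP invocation endpoint at the default
wildfly-services context; the application, module, bean and remote interface
names are hypothetical:

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Hypothetical remote view of the bean being invoked.
    interface GreeterRemote {
        String greet(String name);
    }

    public class RemoteEjbOverHttpClient {

        public static void main(String[] args) throws NamingException {
            Properties props = new Properties();
            props.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.wildfly.naming.client.WildFlyInitialContextFactory");
            // HTTP-based invocation goes through the server's wildfly-services endpoint.
            props.put(Context.PROVIDER_URL, "http://localhost:8080/wildfly-services");

            Context ctx = new InitialContext(props);
            try {
                // ejb:<app>/<module>/<bean>!<fully qualified remote interface>
                GreeterRemote greeter = (GreeterRemote) ctx.lookup(
                        "ejb:myapp/myejbmodule/GreeterBean!GreeterRemote");
                System.out.println(greeter.greet("wildfly-dev"));
            } finally {
                ctx.close();
            }
        }
    }

The client side additionally needs the WildFly naming and HTTP client
libraries on its classpath for this style of invocation to work.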
On 27/02/18 9:02 AM, Jason Greene wrote:
> Hi Everyone,
>
> In preparation for WildFly 12 Final, CR1 is now available for build testing:
> http://wildfly.org/downloads/
>
> Provided no blocking issues are discovered, we will be releasing Final shortly.
>
> WildFly 12 is the first release in our new quarterly delivery model. The most
> significant feature is the delivery of new EE8 capabilities. As mentioned
> during the original 12 announcement, we are delivering EE8 functionality
> incrementally, as opposed to waiting for a big bang. WildFly 12 includes
> Servlet 4, JAX-RS 2.1, CDI 2.0, Bean Validation 2.0, JSF 2.3, JSON-B,
> JSON-P 1.1, and JavaMail 1.6.
>
> By default WildFly 12 runs in EE7 mode, but you can enable the EE8 variants
> of the standards by starting the server with the special parameter
> -Dee8.preview.mode=true.
>
> Thanks!
>
> -Jason
>
>
> _______________________________________________
> wildfly-dev mailing list
> wildfly-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/wildfly-dev
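Since the flag above is passed as a plain -D JVM system property, code or
tests that want to know whether the server was started in EE8 preview mode
could check it directly. This is only a sketch based on that assumption, not
an officially documented API:

    public class Ee8PreviewModeCheck {
        public static void main(String[] args) {
            // True only when the JVM was launched with -Dee8.preview.mode=true
            boolean ee8Preview = Boolean.getBoolean("ee8.preview.mode");
            System.out.println("EE8 preview mode enabled: " + ee8Preview);
        }
    }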