From manovotn at redhat.com Tue May 2 01:26:39 2017 From: manovotn at redhat.com (Matej Novotny) Date: Tue, 2 May 2017 01:26:39 -0400 (EDT) Subject: [wildfly-dev] Weld 3 & Wildfly 11 integration - help with security needed In-Reply-To: References: <1297078731.331997.1493036582744.JavaMail.zimbra@redhat.com> <1852088519.338288.1493037155775.JavaMail.zimbra@redhat.com> Message-ID: <1208879242.3040593.1493702799796.JavaMail.zimbra@redhat.com> Hi Stuart, that's pretty much what we did (Darran reached out to us already). On the API side, we added a method returning a consumer[1]. And on the WildFly side this is then implemented via runAs(consumer)[2]. Thanks for answering Matej ____________________________________________________________________________________- [1]https://github.com/weld/api/blob/master/weld-spi/src/main/java/org/jboss/weld/security/spi/SecurityServices.java#L75 [2]https://github.com/manovotn/wildfly/blob/weld2380/weld/subsystem/src/main/java/org/jboss/as/weld/services/bootstrap/WeldSecurityServices.java#L102 ----- Original Message ----- > From: "Stuart Douglas" > To: "Matej Novotny" > Cc: "WildFly Dev" > Sent: Monday, May 1, 2017 1:10:16 AM > Subject: Re: [wildfly-dev] Weld 3 & Wildfly 11 integration - help with security needed > > So looking at the code I am not sure if this is possible to adapt to > Elytron without an API change on the Weld side of things. > > The issue is in the Weld SecurityContext, which just has associate and > disassociate methods, while Elytron uses a more functional approach. > > I think this API needs to be changed so SecurityContext just has a > run(PrivilegedExceptionAction action) method, where the implementation > would look something like: > > elytronDomain.getCurrentSecurityIdentity().runAs(action) > > Not sure how hard this will be to do from the Weld side, and I am not sure > how this method is actually used. > > Stuart > > > > On Mon, Apr 24, 2017 at 10:32 PM, Matej Novotny wrote: > > > Hello, > > > > recently I decided that Weld 3 (CDI 2.0, currently nearing Final at high > > speed) should be running on WildFly 11. > > Up until now, we had the integration based on 10.1.0.Final but for several > > reasons we want to move to 11. > > > > There were some changes and I figured out most of them but I am lost when > > it comes to security. > > I know Elytron was added but I don't know a damn thing about it - could > > anyone lend a hand here, please? > > > > The code is now located at this branch[1] and the very last commit shows > > the integration done. > > The vast majority is just taken from the previous integration with 10.1.0.Final > > (branch 10.1.0.Final-weld3). > > The part I am concerned about is weld/subsystem/src/main/java/ > > org/jboss/as/weld/services/bootstrap/WeldSecurityServices.java [2] > > The 'getPrincipal'[3] method was earlier adapted to Elytron, and I am thinking > > the other methods should perhaps be adjusted as well? > > But then again, I have no idea how to do that with Elytron... a penny for > > your thoughts? > > > > Regards > > Matej > > > > ____________________________________________________________ > > ________________________________________________________________________ > > [1]https://github.com/weld/wildfly/tree/11.0.0.Alpha1-weld3 > > [2]https://github.com/weld/wildfly/blob/11.0.0.Alpha1- > > weld3/weld/subsystem/src/main/java/org/jboss/as/weld/services/bootstrap/ > > WeldSecurityServices.java > > [3]https://github.com/weld/wildfly/blob/11.0.0.Alpha1- > > weld3/weld/subsystem/src/main/java/org/jboss/as/weld/services/bootstrap/ > > WeldSecurityServices.java#L69 > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > >
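A minimal sketch of the consumer/runAs pattern referenced in [1] and [2] above. The class and method names below are illustrative only - they are not the actual Weld SPI or WildFly integration code - and the Elytron calls are the general shape Stuart describes rather than a verified reference, so treat the whole thing as an assumption:

    import java.security.PrivilegedAction;
    import java.util.function.Consumer;
    import org.wildfly.security.auth.server.SecurityDomain;
    import org.wildfly.security.auth.server.SecurityIdentity;

    // Illustrative SPI shape: Weld asks the integrator for a way to run an
    // action with the caller's security context associated.
    interface SecurityContextAssociation {
        Consumer<Runnable> getSecurityContextAssociator();
    }

    // Illustrative WildFly-side implementation: capture the current Elytron
    // identity and run each action as that identity.
    class ElytronSecurityContextAssociation implements SecurityContextAssociation {

        private final SecurityDomain elytronDomain;

        ElytronSecurityContextAssociation(SecurityDomain elytronDomain) {
            this.elytronDomain = elytronDomain;
        }

        @Override
        public Consumer<Runnable> getSecurityContextAssociator() {
            // identity is captured when the associator is handed out
            SecurityIdentity identity = elytronDomain.getCurrentSecurityIdentity();
            return action -> identity.runAs((PrivilegedAction<Void>) () -> {
                action.run();
                return null;
            });
        }
    }

Whether the identity should be captured when the associator is created or when the action actually runs is exactly the kind of detail the WeldSecurityServices change linked in [2] has to get right.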
From darran.lofthouse at jboss.com Wed May 3 12:44:45 2017 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 3 May 2017 17:44:45 +0100 Subject: [wildfly-dev] Sending PRs upstream instead of wildfly-security-incubator / ladybird Message-ID: <22e42335-5b72-e795-e730-cb082ec98fad@jboss.com> We have a number of PRs working their way through the queues now with the latest component upgrades / backports from ladybird to upstream, but from this point on all pull requests should be sent directly upstream instead of going via the ladybird branches. We still have some PRs against the incubator which we will continue to merge and port over, so those don't need to be resubmitted. We will keep the ladybird branches up to date with upstream in case we need to use them again, but for the last week or so the PRs coming through have required less coordination, so we should be able to handle changes individually. Regards, Darran Lofthouse. From belaran at redhat.com Thu May 4 04:21:20 2017 From: belaran at redhat.com (Romain Pelisse) Date: Thu, 4 May 2017 10:21:20 +0200 Subject: [wildfly-dev] The future of the management console In-Reply-To: References: <234B355C-313D-4D20-87C8-AF65E794252E@redhat.com> Message-ID: About monitoring, I think we discussed such a feature a couple of years ago, but I think it would be nice to have a "live" picture / overview of the server, showing the following basic values (ideally in a graphical way): - number of incoming HTTP requests - memory usage - database connection pool consumption As you said, not turning HAL into a full-blown monitoring solution, but giving a quick and simple widget showing how the WildFly instance is faring. On Mon, Apr 24, 2017 at 4:44 PM, Harald Pehl wrote: > On Mon, Apr 24, 2017 at 4:20 PM, Brian Stansberry > wrote: > > Hi Harald, > > > > Thanks for the update; it's great that this keeps moving along! > > > > Re: macro recording, how is the recorded data made useful for the user? > > The recorded DMR operations are presented in a read-only editor. > They're already shown in CLI syntax. There's also a > "copy-to-clipboard" button for easy CLI execution. > > > > > I think this is one where we need to think through the use cases > carefully so we make sure we cover all the necessary ones or at least don't > do something that blocks covering them. > > > > One thing I know that's been requested is taking the output of this kind > of recording and being able to execute it from the CLI. But that implies > CLI syntax instead of raw DMR. And then if we start getting into variables > etc. it's important that it be done in a consistent and compatible way. > > > > Right, adding advanced features like variables and iterations needs > more research and a consistent exchange with the CLI.
> > > Cheers, > > Brian > > > >> On Apr 24, 2017, at 6:53 AM, Harald Pehl wrote: > >> > >> We're currently working on the next major version of HAL [1]. HAL.next > will > >> include all features of the current management console plus many new > features > >> such as macro recording, topology overview, better keyboard support and > >> PatternFly [2] compliance. See [3] for more details. > >> > >> We're making good progress and have migrated all of the configuration > and > >> half of the runtime screens to HAL.next. What's missing is the support > for > >> patching and the remaining runtime UI. Our goal is to ship HAL.next with > >> WildFly asap. If you don't want to wait, I encourage you to try out > HAL.next > >> today [4] and give us feedback! > >> > >> I'd like to use this post to give you the chance to participate in the > >> future of the management console. We already have some basic ideas what > >> we would like to add to HAL.next, but we also want you to give us > additional > >> input. > >> > >> # Runtime Extensions / JavaScript API > >> > >> As most of you will know both HAL and HAL.next are implemented in GWT. > >> For the current version there's a way to write extensions as GWT > modules [5]. > >> This is based on the concept of having compile time extensions provided > as > >> maven dependencies. While this gives you full access to the HAL API, > it's > >> often hard to get started for none GWT developers. > >> > >> New features in GWT 2.8 like JsInterop [6] make it very easy to export > parts > >> of your Java code to JavaScript. We've used this feature to provide a > basic > >> JavaScript API. This can be used in the future to write runtime > extensions > >> in JavaScript. A first draft is available at [7]. > >> > >> # Monitoring > >> > >> The current management console has some limited monitoring capabilities. > >> We could improve and enhance these capabilities if this is something > which > >> you want to have out of the box. However we don't want to turn HAL into > >> another monitoring tool. There are plenty of other tools and frameworks > >> which focus on monitoring. > >> > >> # Macro Recording > >> > >> We've built basic support to record macros in HAL.next. Behind the > scenes the > >> DMR operations are collected and made available for replay. We could > extend > >> this feature to be more dynamic if requested (variables, iterations, el > al). > >> > >> # What else? > >> > >> It's your turn! What else do you want to see in HAL.next? 
> >> > >> > >> [1] https://github.com/hal/hal.next > >> [2] https://www.patternfly.org/ > >> [3] https://github.com/hal/hal.next/#motivation > >> [4] https://github.com/hal/hal.next/#running > >> [5] https://hal.gitbooks.io/dev/content/building-blocks/extensions.html > >> [6] https://docs.google.com/document/d/10fmlEYIHcyead_4R1S5wKGs1t2I7Fnp_PaNaa7XTEk0/view > >> [7] https://github.com/hal/hal.next/wiki/JavaScript-API > >> > >> > >> -- > >> Harald Pehl > >> hpehl at redhat.com > >> _______________________________________________ > >> wildfly-dev mailing list > >> wildfly-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > -- > > Brian Stansberry > > Manager, Senior Principal Software Engineer > > JBoss by Red Hat > > > > > > > > > > -- > Harald Pehl > Senior Software Engineer > Red Hat > hpehl at redhat.com > Twitter: @redhatway | Instagram: @redhatinc | Snapchat: @redhatsnaps > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From mazz at redhat.com Fri May 5 13:52:16 2017 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 5 May 2017 13:52:16 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <244406146.7986901.1494006201121.JavaMail.zimbra@redhat.com> Message-ID: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> I have a stupid question and two not-so-stupid questions. 1. I think I know the answer but I really just need confirmation. Suppose I have this defined in standalone.xml: <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> Notice it defines the XA datasource class. I can share this with both a non-XA <datasource> and a XA <xa-datasource>, correct? So this is OK: <datasource jndi-name="..." pool-name="..."> <driver>h2</driver> </datasource> <xa-datasource jndi-name="..." pool-name="..."> <driver>h2</driver> </xa-datasource> That's the stupid question. 2. Here's a second question - what is the purpose of both "driver-xa-datasource-class-name" and "xa-datasource-class"? The weird thing is the XML in standalone.xml uses "xa-datasource-class" but that seems to be the value of the attribute "driver-xa-datasource-class-name" - what is this xa-datasource-class ATTRIBUTE? The docs are not clear here: https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html where it says: * driver-xa-datasource-class-name - The fully qualified class name of the javax.sql.XADataSource implementation * module-slot - The slot of the module from which the driver was loaded, if it was loaded from the module path * profile - Domain Profile in which driver is defined. Null in case of standalone server * xa-datasource-class - XA datasource class 3. Third question - what is this "jdbc-compliant" attribute used for? The docs don't indicate what it would actually be used for: * jdbc-compliant - Whether or not the driver is JDBC compliant If I am defining a JDBC driver, wouldn't you think it is JDBC compliant? :-) Thanks. From brian.stansberry at redhat.com Fri May 5 15:20:39 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Fri, 5 May 2017 14:20:39 -0500 Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> Message-ID: <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> Quick and dirty answers.. 1) Yes.
2) I believe the xa-datasource-class management attribute on the driver resources is cruft. The code related to drivers does not use it beyond storing a value in the model. 3) I *think* that relates to the method java.sql.Driver.jdbcCompliant(), whose javadoc says: "Reports whether this driver is a genuine JDBC Compliant? driver. A driver may only report true here if it passes the JDBC compliance tests; otherwise it is required to return false. JDBC compliance requires full support for the JDBC API and full support for SQL 92 Entry Level. It is expected that JDBC compliant drivers will be available for all the major commercial databases. This method is not intended to encourage the development of non-JDBC compliant drivers, but is a recognition of the fact that some vendors are interested in using the JDBC API and framework for lightweight databases that do not support full database functionality, or for special databases such as document information retrieval where a SQL implementation may not be feasible." > On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: > > I have a stupid question and two not-so-stupid questions. > > 1. I think I know the answer but I really just need confirmation. > > Suppose I have this defined in standalone.xml: > > > org.h2.jdbcx.JdbcDataSource > > > Notice it defines the XA datasource class. > > I can share this with both a non-XA and a XA , correct? So this is OK: > > > h2 > > > > h2 > > > That's the stupid question. > > 2. Here a second question - what is the purpose of both "driver-xa-datasource-class-name" and "xa-datasource-class". The weird thing is the XML in standalone.xml uses "xa-datasource-class" but that seems to be the value of the attribute "driver-xa-datasource-class-name" - what is this xa-datasource-class ATTRIBUTE? > > The docs are not clear here: https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html > > where it says: > > * driver-xa-datasource-class-name The fully qualified class name of the javax.sql.XADataSource implementation module-slot The slot of the module from which the driver was loaded, if it was loaded from the module path profile Domain Profile in which driver is defined. Null in case of standalone server > * xa-datasource-class XA datasource class > > 3. Third question - what is this "jdbc-compliant" attribute used for? The docs don't indicate what it would actually be used for: > > * jdbc-compliant - Whether or not the driver is JDBC compliant > > If I am defining a JDBC driver, wouldn't you think it is JDBC compliant? :-) > > Thanks. > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From rareddy at redhat.com Fri May 5 17:08:49 2017 From: rareddy at redhat.com (Ramesh Reddy) Date: Fri, 5 May 2017 17:08:49 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> Message-ID: <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> 2)if there more than one Driver or Data Source class in the defined module, this attribute also defines which one to use to create the connection ----- Original Message ----- > Quick and dirty answers.. > > 1) Yes. 
> > 2) I believe the xa-datasource-class management attribute on the driver > resources is cruft. The code related to drivers does not use it beyond > storing a value in the model. > > 3) I *think* that relates to the method java.sql.Driver.jdbcCompliant(), > whose javadoc says: > > "Reports whether this driver is a genuine JDBC Compliant? driver. A driver > may only report true here if it passes the JDBC compliance tests; otherwise > it is required to return false. > JDBC compliance requires full support for the JDBC API and full support for > SQL 92 Entry Level. It is expected that JDBC compliant drivers will be > available for all the major commercial databases. > > This method is not intended to encourage the development of non-JDBC > compliant drivers, but is a recognition of the fact that some vendors are > interested in using the JDBC API and framework for lightweight databases > that do not support full database functionality, or for special databases > such as document information retrieval where a SQL implementation may not be > feasible." > > > On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: > > > > I have a stupid question and two not-so-stupid questions. > > > > 1. I think I know the answer but I really just need confirmation. > > > > Suppose I have this defined in standalone.xml: > > > > > > org.h2.jdbcx.JdbcDataSource > > > > > > Notice it defines the XA datasource class. > > > > I can share this with both a non-XA and a XA > > , correct? So this is OK: > > > > > > h2 > > > > > > > > h2 > > > > > > That's the stupid question. > > > > 2. Here a second question - what is the purpose of both > > "driver-xa-datasource-class-name" and "xa-datasource-class". The weird > > thing is the XML in standalone.xml uses "xa-datasource-class" but that > > seems to be the value of the attribute "driver-xa-datasource-class-name" - > > what is this xa-datasource-class ATTRIBUTE? > > > > The docs are not clear here: > > https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html > > > > where it says: > > > > * driver-xa-datasource-class-name The fully qualified class name of the > > javax.sql.XADataSource implementation module-slot The slot of the module > > from which the driver was loaded, if it was loaded from the module path > > profile Domain Profile in which driver is defined. Null in case of > > standalone server > > * xa-datasource-class XA datasource class > > > > 3. Third question - what is this "jdbc-compliant" attribute used for? The > > docs don't indicate what it would actually be used for: > > > > * jdbc-compliant - Whether or not the driver is JDBC compliant > > > > If I am defining a JDBC driver, wouldn't you think it is JDBC compliant? > > :-) > > > > Thanks. 
> > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > JBoss by Red Hat > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From mazz at redhat.com Fri May 5 18:50:35 2017 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 5 May 2017 18:50:35 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> Message-ID: <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> Ramesh, When you say "this attribute also defines which one to use..." what is "this attribute" referring to? Are you referring to the "driver-xa-datasource-class-name" attribute or "xa-datasource-class" attribute (which is the one that Brian thinks is really not used)?? Note that I"m not referring to the XML elements (of which there is the one "xa-datasource-class") - I'm referring to the DMR model attributes. BTW: the XML element name vs. the DMR attribute name results in utter confusion :) Why is the XML element called "xa-datasource-class" - but that doesn't set the value of DMR attribute "xa-datasource-class" (which is the same name)? Instead it affects the DMR attribute "driver-xa-datasource-class-name". --John Mazz ----- Original Message ----- > 2)if there more than one Driver or Data Source class in the defined module, > this attribute also defines which one to use to create the connection > > ----- Original Message ----- > > Quick and dirty answers.. > > > > 1) Yes. > > > > 2) I believe the xa-datasource-class management attribute on the driver > > resources is cruft. The code related to drivers does not use it beyond > > storing a value in the model. > > > > 3) I *think* that relates to the method java.sql.Driver.jdbcCompliant(), > > whose javadoc says: > > > > "Reports whether this driver is a genuine JDBC Compliant? driver. A driver > > may only report true here if it passes the JDBC compliance tests; otherwise > > it is required to return false. > > JDBC compliance requires full support for the JDBC API and full support for > > SQL 92 Entry Level. It is expected that JDBC compliant drivers will be > > available for all the major commercial databases. > > > > This method is not intended to encourage the development of non-JDBC > > compliant drivers, but is a recognition of the fact that some vendors are > > interested in using the JDBC API and framework for lightweight databases > > that do not support full database functionality, or for special databases > > such as document information retrieval where a SQL implementation may not > > be > > feasible." > > > > > On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: > > > > > > I have a stupid question and two not-so-stupid questions. > > > > > > 1. I think I know the answer but I really just need confirmation. > > > > > > Suppose I have this defined in standalone.xml: > > > > > > > > > org.h2.jdbcx.JdbcDataSource > > > > > > > > > Notice it defines the XA datasource class. > > > > > > I can share this with both a non-XA and a XA > > > , correct? 
So this is OK: > > > > > > > > > h2 > > > > > > > > > > > > h2 > > > > > > > > > That's the stupid question. > > > > > > 2. Here a second question - what is the purpose of both > > > "driver-xa-datasource-class-name" and "xa-datasource-class". The weird > > > thing is the XML in standalone.xml uses "xa-datasource-class" but that > > > seems to be the value of the attribute "driver-xa-datasource-class-name" > > > - > > > what is this xa-datasource-class ATTRIBUTE? > > > > > > The docs are not clear here: > > > https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html > > > > > > where it says: > > > > > > * driver-xa-datasource-class-name The fully qualified class name of the > > > javax.sql.XADataSource implementation module-slot The slot of the > > > module > > > from which the driver was loaded, if it was loaded from the module path > > > profile Domain Profile in which driver is defined. Null in case of > > > standalone server > > > * xa-datasource-class XA datasource class > > > > > > 3. Third question - what is this "jdbc-compliant" attribute used for? The > > > docs don't indicate what it would actually be used for: > > > > > > * jdbc-compliant - Whether or not the driver is JDBC compliant > > > > > > If I am defining a JDBC driver, wouldn't you think it is JDBC compliant? > > > :-) > > > > > > Thanks. > > > _______________________________________________ > > > wildfly-dev mailing list > > > wildfly-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > -- > > Brian Stansberry > > Manager, Senior Principal Software Engineer > > JBoss by Red Hat > > > > > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From mazz at redhat.com Fri May 5 18:54:50 2017 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 5 May 2017 18:54:50 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> Message-ID: <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> Note - the reason I am asking is the ManageIQ folks are going to create some UI pages to push JDBC drivers to managed WildFly servers and there is confusion as to how to specify the XA datasource class name. Do they set "driver-xa-datasource-class-name" ? Do they set "xa-datasource-class" ?? Do they set both? This is for both EAP 6.4 and EAP 7.x (but I believe the model didn't change between 6.4 and 7.x). ----- Original Message ----- > Ramesh, > > When you say "this attribute also defines which one to use..." what is "this > attribute" referring to? > > Are you referring to the "driver-xa-datasource-class-name" attribute or > "xa-datasource-class" attribute (which is the one that Brian thinks is > really not used)?? > > Note that I"m not referring to the XML elements (of which there is the one > "xa-datasource-class") - I'm referring to the DMR model attributes. > > BTW: the XML element name vs. 
the DMR attribute name results in utter > confusion :) Why is the XML element called "xa-datasource-class" - but that > doesn't set the value of DMR attribute "xa-datasource-class" (which is the > same name)? Instead it affects the DMR attribute > "driver-xa-datasource-class-name". > > > --John Mazz > > ----- Original Message ----- > > 2)if there more than one Driver or Data Source class in the defined module, > > this attribute also defines which one to use to create the connection > > > > ----- Original Message ----- > > > Quick and dirty answers.. > > > > > > 1) Yes. > > > > > > 2) I believe the xa-datasource-class management attribute on the driver > > > resources is cruft. The code related to drivers does not use it beyond > > > storing a value in the model. > > > > > > 3) I *think* that relates to the method java.sql.Driver.jdbcCompliant(), > > > whose javadoc says: > > > > > > "Reports whether this driver is a genuine JDBC Compliant? driver. A > > > driver > > > may only report true here if it passes the JDBC compliance tests; > > > otherwise > > > it is required to return false. > > > JDBC compliance requires full support for the JDBC API and full support > > > for > > > SQL 92 Entry Level. It is expected that JDBC compliant drivers will be > > > available for all the major commercial databases. > > > > > > This method is not intended to encourage the development of non-JDBC > > > compliant drivers, but is a recognition of the fact that some vendors are > > > interested in using the JDBC API and framework for lightweight databases > > > that do not support full database functionality, or for special databases > > > such as document information retrieval where a SQL implementation may not > > > be > > > feasible." > > > > > > > On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: > > > > > > > > I have a stupid question and two not-so-stupid questions. > > > > > > > > 1. I think I know the answer but I really just need confirmation. > > > > > > > > Suppose I have this defined in standalone.xml: > > > > > > > > > > > > org.h2.jdbcx.JdbcDataSource > > > > > > > > > > > > Notice it defines the XA datasource class. > > > > > > > > I can share this with both a non-XA and a XA > > > > , correct? So this is OK: > > > > > > > > > > > > h2 > > > > > > > > > > > > > > > > h2 > > > > > > > > > > > > That's the stupid question. > > > > > > > > 2. Here a second question - what is the purpose of both > > > > "driver-xa-datasource-class-name" and "xa-datasource-class". The weird > > > > thing is the XML in standalone.xml uses "xa-datasource-class" but that > > > > seems to be the value of the attribute > > > > "driver-xa-datasource-class-name" > > > > - > > > > what is this xa-datasource-class ATTRIBUTE? > > > > > > > > The docs are not clear here: > > > > https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html > > > > > > > > where it says: > > > > > > > > * driver-xa-datasource-class-name The fully qualified class name of > > > > the > > > > javax.sql.XADataSource implementation module-slot The slot of the > > > > module > > > > from which the driver was loaded, if it was loaded from the module > > > > path > > > > profile Domain Profile in which driver is defined. Null in case of > > > > standalone server > > > > * xa-datasource-class XA datasource class > > > > > > > > 3. Third question - what is this "jdbc-compliant" attribute used for? 
> > > > The > > > > docs don't indicate what it would actually be used for: > > > > > > > > * jdbc-compliant - Whether or not the driver is JDBC compliant > > > > > > > > If I am defining a JDBC driver, wouldn't you think it is JDBC > > > > compliant? > > > > :-) > > > > > > > > Thanks. > > > > _______________________________________________ > > > > wildfly-dev mailing list > > > > wildfly-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > -- > > > Brian Stansberry > > > Manager, Senior Principal Software Engineer > > > JBoss by Red Hat > > > > > > > > > > > > > > > _______________________________________________ > > > wildfly-dev mailing list > > > wildfly-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > From rareddy at redhat.com Sat May 6 10:06:42 2017 From: rareddy at redhat.com (Ramesh Reddy) Date: Sat, 6 May 2017 10:06:42 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> Message-ID: <1281805113.5757536.1494079602422.JavaMail.zimbra@redhat.com> Sorry, I was mentioning the xml element in the standalone.xml file. Since it looks like you are trying to configure programmatically or using the CLI, we need to find the DMR property that corresponds to this xml element, and that could be the one you are mentioning. You really do not need to use this property at all, unless the situation is: 1) You deployed the JDBC driver jar file, using the CLI or programmatically. If the driver jar has the JDBC 4 service loader mechanism, it loads the jar file's JDBC "driver" automatically. 2) If this deployed jar file has multiple Driver classes in its service loader file, then WildFly has an ambiguity in choosing the driver. So, this property will define the JDBC driver name to use. The above is for when using the JDBC Driver class. If you are trying to create the DataSource or XADataSource class, then when you deploy a JDBC driver as in (1), this property will define the class to use for creating the data source. If you want to know all the properties supported for creating an xa-data-source, use jboss-cli.sh: /subsystem=datasources/xa-data-source=foo:add( or for the driver /subsystem=datasources/data-source=foo:add( So it all comes down to how you are deploying the JAR file; that defines the later actions. HTH Ramesh.. ----- Original Message ----- > Note - the reason I am asking is the ManageIQ folks are going to create some > UI pages to push JDBC drivers to managed WildFly servers and there is > confusion as to how to specify the XA datasource class name. > > Do they set "driver-xa-datasource-class-name" ? > > Do they set "xa-datasource-class" ?? > > Do they set both? > > This is for both EAP 6.4 and EAP 7.x (but I believe the model didn't change > between 6.4 and 7.x). > > ----- Original Message ----- > > Ramesh, > > > > When you say "this attribute also defines which one to use..." what is > > "this > > attribute" referring to? > > > > Are you referring to the "driver-xa-datasource-class-name" attribute or > > "xa-datasource-class" attribute (which is the one that Brian thinks is > > really not used)??
> > > > Note that I"m not referring to the XML elements (of which there is the one > > "xa-datasource-class") - I'm referring to the DMR model attributes. > > > > BTW: the XML element name vs. the DMR attribute name results in utter > > confusion :) Why is the XML element called "xa-datasource-class" - but that > > doesn't set the value of DMR attribute "xa-datasource-class" (which is the > > same name)? Instead it affects the DMR attribute > > "driver-xa-datasource-class-name". > > > > > > --John Mazz > > > > ----- Original Message ----- > > > 2)if there more than one Driver or Data Source class in the defined > > > module, > > > this attribute also defines which one to use to create the connection > > > > > > ----- Original Message ----- > > > > Quick and dirty answers.. > > > > > > > > 1) Yes. > > > > > > > > 2) I believe the xa-datasource-class management attribute on the driver > > > > resources is cruft. The code related to drivers does not use it beyond > > > > storing a value in the model. > > > > > > > > 3) I *think* that relates to the method > > > > java.sql.Driver.jdbcCompliant(), > > > > whose javadoc says: > > > > > > > > "Reports whether this driver is a genuine JDBC Compliant? driver. A > > > > driver > > > > may only report true here if it passes the JDBC compliance tests; > > > > otherwise > > > > it is required to return false. > > > > JDBC compliance requires full support for the JDBC API and full support > > > > for > > > > SQL 92 Entry Level. It is expected that JDBC compliant drivers will be > > > > available for all the major commercial databases. > > > > > > > > This method is not intended to encourage the development of non-JDBC > > > > compliant drivers, but is a recognition of the fact that some vendors > > > > are > > > > interested in using the JDBC API and framework for lightweight > > > > databases > > > > that do not support full database functionality, or for special > > > > databases > > > > such as document information retrieval where a SQL implementation may > > > > not > > > > be > > > > feasible." > > > > > > > > > On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: > > > > > > > > > > I have a stupid question and two not-so-stupid questions. > > > > > > > > > > 1. I think I know the answer but I really just need confirmation. > > > > > > > > > > Suppose I have this defined in standalone.xml: > > > > > > > > > > > > > > > org.h2.jdbcx.JdbcDataSource > > > > > > > > > > > > > > > Notice it defines the XA datasource class. > > > > > > > > > > I can share this with both a non-XA and a XA > > > > > , correct? So this is OK: > > > > > > > > > > > > > > > h2 > > > > > > > > > > > > > > > > > > > > h2 > > > > > > > > > > > > > > > That's the stupid question. > > > > > > > > > > 2. Here a second question - what is the purpose of both > > > > > "driver-xa-datasource-class-name" and "xa-datasource-class". The > > > > > weird > > > > > thing is the XML in standalone.xml uses "xa-datasource-class" but > > > > > that > > > > > seems to be the value of the attribute > > > > > "driver-xa-datasource-class-name" > > > > > - > > > > > what is this xa-datasource-class ATTRIBUTE? 
> > > > > The docs are not clear here: > > > > > https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html > > > > > > > > > > where it says: > > > > > > > > > > * driver-xa-datasource-class-name The fully qualified class name of > > > > > the > > > > > javax.sql.XADataSource implementation module-slot The slot of the > > > > > module > > > > > from which the driver was loaded, if it was loaded from the module > > > > > path > > > > > profile Domain Profile in which driver is defined. Null in case of > > > > > standalone server > > > > > * xa-datasource-class XA datasource class > > > > > > > > > > 3. Third question - what is this "jdbc-compliant" attribute used for? > > > > > The > > > > > docs don't indicate what it would actually be used for: > > > > > > > > > > * jdbc-compliant - Whether or not the driver is JDBC compliant > > > > > > > > > > If I am defining a JDBC driver, wouldn't you think it is JDBC > > > > > compliant? > > > > > :-) > > > > > > > > > > Thanks. > > > > > _______________________________________________ > > > > > wildfly-dev mailing list > > > > > wildfly-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > > > -- > > > > Brian Stansberry > > > > Manager, Senior Principal Software Engineer > > > > JBoss by Red Hat > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > wildfly-dev mailing list > > > > wildfly-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > From brian.stansberry at redhat.com Sat May 6 11:31:06 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Sat, 6 May 2017 10:31:06 -0500 Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> Message-ID: The xa-datasource-class attribute on a jdbc-driver=* management resource has no impact on any runtime service. The driver-xa-datasource-class-name attribute does. The handler for the jdbc-driver=*:add operation will store any value for xa-datasource-class in the resource's in-memory management model, but it does not pass it into any runtime service, so it's essentially cruft. The persister for the subsystem also does not persist the value, so if the server is reloaded/restarted after you add it, the value is lost. Note that the xa-data-source=* resource also has an attribute called xa-datasource-class. That's a different thing. > On May 5, 2017, at 5:54 PM, John Mazzitelli wrote: > > Note - the reason I am asking is the ManageIQ folks are going to create some UI pages to push JDBC drivers to managed WildFly servers and there is confusion as to how to specify the XA datasource class name. > > Do they set "driver-xa-datasource-class-name" ? > > Do they set "xa-datasource-class" ?? > > Do they set both? > > This is for both EAP 6.4 and EAP 7.x (but I believe the model didn't change between 6.4 and 7.x). > > ----- Original Message ----- >> Ramesh, >> >> When you say "this attribute also defines which one to use..." what is "this >> attribute" referring to?
>> >> Are you referring to the "driver-xa-datasource-class-name" attribute or >> "xa-datasource-class" attribute (which is the one that Brian thinks is >> really not used)?? >> >> Note that I"m not referring to the XML elements (of which there is the one >> "xa-datasource-class") - I'm referring to the DMR model attributes. >> >> BTW: the XML element name vs. the DMR attribute name results in utter >> confusion :) Why is the XML element called "xa-datasource-class" - but that >> doesn't set the value of DMR attribute "xa-datasource-class" (which is the >> same name)? Instead it affects the DMR attribute >> "driver-xa-datasource-class-name". >> >> >> --John Mazz >> >> ----- Original Message ----- >>> 2)if there more than one Driver or Data Source class in the defined module, >>> this attribute also defines which one to use to create the connection >>> >>> ----- Original Message ----- >>>> Quick and dirty answers.. >>>> >>>> 1) Yes. >>>> >>>> 2) I believe the xa-datasource-class management attribute on the driver >>>> resources is cruft. The code related to drivers does not use it beyond >>>> storing a value in the model. >>>> >>>> 3) I *think* that relates to the method java.sql.Driver.jdbcCompliant(), >>>> whose javadoc says: >>>> >>>> "Reports whether this driver is a genuine JDBC Compliant? driver. A >>>> driver >>>> may only report true here if it passes the JDBC compliance tests; >>>> otherwise >>>> it is required to return false. >>>> JDBC compliance requires full support for the JDBC API and full support >>>> for >>>> SQL 92 Entry Level. It is expected that JDBC compliant drivers will be >>>> available for all the major commercial databases. >>>> >>>> This method is not intended to encourage the development of non-JDBC >>>> compliant drivers, but is a recognition of the fact that some vendors are >>>> interested in using the JDBC API and framework for lightweight databases >>>> that do not support full database functionality, or for special databases >>>> such as document information retrieval where a SQL implementation may not >>>> be >>>> feasible." >>>> >>>>> On May 5, 2017, at 12:52 PM, John Mazzitelli wrote: >>>>> >>>>> I have a stupid question and two not-so-stupid questions. >>>>> >>>>> 1. I think I know the answer but I really just need confirmation. >>>>> >>>>> Suppose I have this defined in standalone.xml: >>>>> >>>>> >>>>> org.h2.jdbcx.JdbcDataSource >>>>> >>>>> >>>>> Notice it defines the XA datasource class. >>>>> >>>>> I can share this with both a non-XA and a XA >>>>> , correct? So this is OK: >>>>> >>>>> >>>>> h2 >>>>> >>>>> >>>>> >>>>> h2 >>>>> >>>>> >>>>> That's the stupid question. >>>>> >>>>> 2. Here a second question - what is the purpose of both >>>>> "driver-xa-datasource-class-name" and "xa-datasource-class". The weird >>>>> thing is the XML in standalone.xml uses "xa-datasource-class" but that >>>>> seems to be the value of the attribute >>>>> "driver-xa-datasource-class-name" >>>>> - >>>>> what is this xa-datasource-class ATTRIBUTE? >>>>> >>>>> The docs are not clear here: >>>>> https://wildscribe.github.io/Wildfly/10.0.0.Final/subsystem/datasources/jdbc-driver/index.html >>>>> >>>>> where it says: >>>>> >>>>> * driver-xa-datasource-class-name The fully qualified class name of >>>>> the >>>>> javax.sql.XADataSource implementation module-slot The slot of the >>>>> module >>>>> from which the driver was loaded, if it was loaded from the module >>>>> path >>>>> profile Domain Profile in which driver is defined. 
Null in case of >>>>> standalone server >>>>> * xa-datasource-class XA datasource class >>>>> >>>>> 3. Third question - what is this "jdbc-compliant" attribute used for? >>>>> The >>>>> docs don't indicate what it would actually be used for: >>>>> >>>>> * jdbc-compliant - Whether or not the driver is JDBC compliant >>>>> >>>>> If I am defining a JDBC driver, wouldn't you think it is JDBC >>>>> compliant? >>>>> :-) >>>>> >>>>> Thanks. >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>>> -- >>>> Brian Stansberry >>>> Manager, Senior Principal Software Engineer >>>> JBoss by Red Hat >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From mazz at redhat.com Sat May 6 19:24:05 2017 From: mazz at redhat.com (John Mazzitelli) Date: Sat, 6 May 2017 19:24:05 -0400 (EDT) Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> Message-ID: <264002881.8287092.1494113045761.JavaMail.zimbra@redhat.com> > The xa-datasource-class attribute on a jdbc-driver=* management resource has > no impact on any runtime service. The driver-xa-datasource-class-name > attribute does. > > The handler for the jdbc-driver=*:add operation will store any value for > xa-datasource-class in the resource?s in-memory management model, but it > does not pass it into any runtime service, so it?s essentially cruft. The > persister for the subsystem also does not persist the value, so if the > server is reloaded/restarted after you add it, the value is lost. OK, got it. So we'll just ignore the xa-datasource-class DMR attribute and only deal with driver-xa-datasource-class-name. That's the main answer I care about. Thanks. > Note that the xa-data-source=* resource also has an attribute called > xa-datasource-class. That?s a different thing. Understood. As you noticed, the question I was asking was about jdbc-driver, not xa-datasource. I think we're all good with datasources. 
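For reference, a driver registration that sets the XA datasource class through the DMR attribute discussed above could look something like the following CLI command - the H2 class and module names here are just an example of the general shape, not a prescription:

    /subsystem=datasources/jdbc-driver=h2:add(driver-name=h2,driver-module-name=com.h2database.h2,driver-xa-datasource-class-name=org.h2.jdbcx.JdbcDataSource)

The xa-datasource-class attribute on the same resource can simply be left unset.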
Thanks, John Mazz From lthon at redhat.com Mon May 8 07:52:30 2017 From: lthon at redhat.com (Ladislav Thon) Date: Mon, 8 May 2017 13:52:30 +0200 Subject: [wildfly-dev] using and with same driver and other related questions In-Reply-To: References: <1388850926.7989055.1494006736396.JavaMail.zimbra@redhat.com> <96234FF3-7EE8-42AD-ADF3-82958C3BA0E1@redhat.com> <1533521261.5699933.1494018529144.JavaMail.zimbra@redhat.com> <774861460.8031267.1494024635173.JavaMail.zimbra@redhat.com> <1985561260.8031479.1494024890780.JavaMail.zimbra@redhat.com> Message-ID: <92a2ce2d-e206-7247-f7b4-3241a24a11b4@redhat.com> On 6.5.2017 17:31, Brian Stansberry wrote: > The handler for the jdbc-driver=*:add operation will store any value for xa-datasource-class in the resource?s in-memory management model, but it does not pass it into any runtime service, so it?s essentially cruft. The very existence of this cruft actually caused a bug in WildFly Swarm: https://issues.jboss.org/browse/SWARM-1215 I should have done this much earlier, but better later then never I guess -- I filed https://issues.jboss.org/browse/WFLY-8718 LT From darran.lofthouse at jboss.com Wed May 10 06:08:33 2017 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 10 May 2017 11:08:33 +0100 Subject: [wildfly-dev] Sending PRs upstream instead of wildfly-security-incubator / ladybird In-Reply-To: <22e42335-5b72-e795-e730-cb082ec98fad@jboss.com> References: <22e42335-5b72-e795-e730-cb082ec98fad@jboss.com> Message-ID: Everything that was merged to ladybird has now been merged upstream. We have a few PRs raised against ladybird that were not already merged - I am currently porting them over and resubmitting upstream so no action is required on those. Regards, Darran Lofthouse. On 03/05/17 17:44, Darran Lofthouse wrote: > We have a number of PRs working there way through the queues now with > the latest component upgrades / backports from ladybird to upstream but > from this point on all pull requests should be sent directly upstream > instead of going via the ladybird branches. > > We still have some PRs against the incubator which we will continue to > merge and port over so those don't need to be resubmitted. > > We will keep the ladybird branches up to date with upstream in case we > need to use them again but for the last week or so the PRs coming > through have required less coordination so we should be able to handle > changes individually. > > Regards, > Darran Lofthouse. > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From brian.stansberry at redhat.com Wed May 10 10:38:26 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 10 May 2017 10:38:26 -0400 Subject: [wildfly-dev] Sending PRs upstream instead of wildfly-security-incubator / ladybird In-Reply-To: References: <22e42335-5b72-e795-e730-cb082ec98fad@jboss.com> Message-ID: <8D23B28F-869F-4BB7-821F-3554035EE952@redhat.com> Thanks, Darran. My impression is the folks working on ladybird have been doing a good job of reviewing the various PRs that went there. Please keep doing that now that the PRs are going directly upstream, or we?ll quickly have a bottleneck. Cheers, Brian > On May 10, 2017, at 6:08 AM, Darran Lofthouse wrote: > > Everything that was merged to ladybird has now been merged upstream. 
> > We have a few PRs raised against ladybird that were not already merged - > I am currently porting them over and resubmitting upstream so no action > is required on those. > > Regards, > Darran Lofthouse. > > On 03/05/17 17:44, Darran Lofthouse wrote: >> We have a number of PRs working there way through the queues now with >> the latest component upgrades / backports from ladybird to upstream but >> from this point on all pull requests should be sent directly upstream >> instead of going via the ladybird branches. >> >> We still have some PRs against the incubator which we will continue to >> merge and port over so those don't need to be resubmitted. >> >> We will keep the ladybird branches up to date with upstream in case we >> need to use them again but for the last week or so the PRs coming >> through have required less coordination so we should be able to handle >> changes individually. >> >> Regards, >> Darran Lofthouse. >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From darran.lofthouse at jboss.com Wed May 10 11:45:13 2017 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 10 May 2017 16:45:13 +0100 Subject: [wildfly-dev] Sending PRs upstream instead of wildfly-security-incubator / ladybird In-Reply-To: <8D23B28F-869F-4BB7-821F-3554035EE952@redhat.com> References: <22e42335-5b72-e795-e730-cb082ec98fad@jboss.com> <8D23B28F-869F-4BB7-821F-3554035EE952@redhat.com> Message-ID: <7683b5ca-df48-0a73-0fc7-482c2864dc65@jboss.com> +1 Also if you see issues in the area of security without review let me know and I can nominate someone ;-) On 10/05/17 15:38, Brian Stansberry wrote: > Thanks, Darran. > > My impression is the folks working on ladybird have been doing a good job of reviewing the various PRs that went there. Please keep doing that now that the PRs are going directly upstream, or we?ll quickly have a bottleneck. > > Cheers, > Brian > >> On May 10, 2017, at 6:08 AM, Darran Lofthouse wrote: >> >> Everything that was merged to ladybird has now been merged upstream. >> >> We have a few PRs raised against ladybird that were not already merged - >> I am currently porting them over and resubmitting upstream so no action >> is required on those. >> >> Regards, >> Darran Lofthouse. >> >> On 03/05/17 17:44, Darran Lofthouse wrote: >>> We have a number of PRs working there way through the queues now with >>> the latest component upgrades / backports from ladybird to upstream but >>> from this point on all pull requests should be sent directly upstream >>> instead of going via the ladybird branches. >>> >>> We still have some PRs against the incubator which we will continue to >>> merge and port over so those don't need to be resubmitted. >>> >>> We will keep the ladybird branches up to date with upstream in case we >>> need to use them again but for the last week or so the PRs coming >>> through have required less coordination so we should be able to handle >>> changes individually. >>> >>> Regards, >>> Darran Lofthouse. 
>>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > From stuart.w.douglas at gmail.com Sun May 14 19:36:54 2017 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 15 May 2017 09:36:54 +1000 Subject: [wildfly-dev] Speeding up WildFly boot time Message-ID: When JIRA was being screwy on Friday I used the time to investigate an idea I have had for a while about improving our boot time performance. According to Yourkit the majority of our time is spent in class loading. It seems very unlikely that we will be able to reduce the number of classes we load on boot (or at the very least it would be a massive amount of work) so I investigated a different approach. I modified ModuleClassLoader to spit out the name and module of every class that is loaded at boot time, and stored this in a properties file. I then created a simple Service that starts immediately that uses two threads to eagerly load every class on this list (I used two threads because that seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is probably the best amount, but that assumption would need to be tested on different hardware). The idea behind this is that we know the classes will be used at some point, and we generally do not fully utilise all CPUs during boot, so we can use the unused CPU to preload these classes so they are ready when they are actually required. Using this approach I saw the boot time for standalone.xml drop from ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform this test is at https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack I think these initial results are encouraging, and it is a big enough gain that I think it is worth investigating further. Firstly, it would be great if I could get others to try it out and see if they see similar gains to boot time; it may be that the gain is very system dependent. Secondly, if we do decide to do this there are two approaches that we can use that I can see: 1) A hard-coded list of class names that we generate before a release (basically what the hack already does). This is simplest, but does add a little bit of additional work to the release process (although if it is missed it would be no big deal, as ClassNotFoundException's would be suppressed, and if a few classes are missing the performance impact is negligible as long as the majority of the list is correct). 2) Generate the list dynamically on first boot, and store it in the temp directory. This would require the addition of a hook into JBoss Modules to generate the list, but is the approach I would prefer (as first boot is always a bit slower anyway). Thoughts? Stuart
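To make the idea concrete, here is a minimal sketch of the kind of preloader being described. This is not the code in the boot-performance-hack branch; the class name, the properties-file format (class name mapped to module name) and the thread handling are all assumptions made purely for illustration:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.function.Function;

    // Illustrative sketch only: eagerly load a previously recorded list of
    // class names on background threads, so the classes are already defined
    // by the time the boot threads ask for them.
    public final class BootClassPreloader {

        // loaderForModule resolves the class loader for a module name (in
        // WildFly this would come from the module loader); listSource is a
        // properties file of className=moduleName entries captured on an
        // earlier boot (assumed format).
        public static void start(Function<String, ClassLoader> loaderForModule,
                                 InputStream listSource, int threadCount) throws IOException {
            Properties recorded = new Properties();
            recorded.load(listSource);
            List<String> classNames = new ArrayList<>(recorded.stringPropertyNames());
            for (int t = 0; t < threadCount; t++) {
                final int offset = t;
                Thread preloader = new Thread(() -> {
                    // simple striping: thread N loads entries N, N+threadCount, ...
                    for (int i = offset; i < classNames.size(); i += threadCount) {
                        String className = classNames.get(i);
                        ClassLoader loader = loaderForModule.apply(recorded.getProperty(className));
                        try {
                            // initialize=false: just define/resolve the class, don't run static init
                            Class.forName(className, false, loader);
                        } catch (ClassNotFoundException | LinkageError ignored) {
                            // a stale or missing entry is harmless; skip it
                        }
                    }
                }, "boot-class-preload-" + offset);
                preloader.setDaemon(true);
                preloader.start();
            }
        }

        private BootClassPreloader() {
        }
    }

In the real prototype the list comes from a dump made by ModuleClassLoader, the loader lookup would go through the JBoss Modules module loader, and something like Runtime.getRuntime().availableProcessors()/4 threads is the knob Stuart mentions tuning.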
From tomaz.cerar at gmail.com Mon May 15 08:09:21 2017 From: tomaz.cerar at gmail.com (Tomaž Cerar) Date: Mon, 15 May 2017 14:09:21 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: Hey Stuart, we discussed this exact problem some time ago with David but didn't go as far as implementing a prototype. At the time, one of the bigger contention bottlenecks was in JDK classes (java.lang.*, java.util.*, etc.), and I think we should do something similar to what you did in jboss-modules as well, to speed this up. It would probably yield even better results in the end. -- tomaz On Mon, May 15, 2017 at 1:36 AM, Stuart Douglas wrote: > When JIRA was being screwy on Friday I used the time to investigate an > idea I have had for a while about improving our boot time performance. > According to Yourkit the majority of our time is spent in class loading. It > seems very unlikely that we will be able to reduce the number of classes we > load on boot (or at the very least it would be a massive amount of work) so > I investigated a different approach. > > I modified ModuleClassLoader to spit out the name and module of every > class that is loaded at boot time, and stored this in a properties file. I > then created a simple Service that starts immediately that uses two threads > to eagerly load every class on this list (I used two threads because that > seemed to work well on my laptop, I think Runtime.availableProcessors()/4 > is probably the best amount, but that assumption would need to be tested on > different hardware). > > The idea behind this is that we know the classes will be used at some > point, and we generally do not fully utilise all CPU's during boot, so we > can use the unused CPU to pre load these classes so they are ready when > they are actually required. > > Using this approach I saw the boot time for standalone.xml drop from ~2.9s > to ~2.3s on my laptop. The (super hacky) code I used to perform this test > is at https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:boot-performance-hack > > I think these initial results are encouraging, and it is a big enough gain > that I think it is worth investigating further. > > Firstly it would be great if I could get others to try it out and see if > they see similar gains to boot time, it may be that the gain is very system > dependent. > > Secondly if we do decide to do this there are two approaches that we can use > that I can see: > > 1) A hard coded list of class names that we generate before a release > (basically what the hack already does), this is simplest, but does add a > little bit of additional work to the release process (although if it is > missed it would be no big deal, as ClassNotFoundException's would be > suppressed, and if a few classes are missing the performance impact is > negligible as long as the majority of the list is correct). > > 2) Generate the list dynamically on first boot, and store it in the temp > directory. This would require the addition of a hook into JBoss Modules to > generate the list, but is the approach I would prefer (as first boot is > always a bit slower anyway). > > Thoughts? > > Stuart > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From rsvoboda at redhat.com Mon May 15 08:27:03 2017 From: rsvoboda at redhat.com (Rostislav Svoboda) Date: Mon, 15 May 2017 08:27:03 -0400 (EDT) Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Hi.
I can confirm I see improvements in boot time with your changes. My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base Frequency 2.10 GHz, Max Turbo 3.30 GHz) I executed 50 iterations of start - stop sequence [1], before execution 5x start - stop for "warmup" With your changes Min: 3116 Max: 3761 Average: 3247.640000 Without: Min: 3442 Max: 4081 Average: 3580.840000 > 1) A hard coded list of class names that we generate before a release This will improve first boot impression, little bit harder for maintaining the list for the final build. Property files could be located inside properties directory of dedicated module (). Properties directory could contain property files for delivered profiles. Layered products or customer modifications could deliver own property file. e.g. predefined property file for standalone-openshift.xml in EAP image in OpenShift environment, I think they boot the server just once and throw away the whole docker image when something changes. > 2) Generate the list dynamically on first boot, and store it in the temp This looks like the most elegant thing to do. Question is how it will slow down the initial boot. People care about first boot impression, some blog writers do the mistake too. This would also block boot time improvements for use-cases when you start the server just once - e.g. Docker, OpenShift. Also the logic should take into account which profile is loaded - e.g standalone.xml vs. standalone-full-ha.xml Rostislav [1] rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log for i in {1..50}; do echo $i wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null 2>&1 & sleep 8 wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 done grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } { if ($1>max) max=$1; if ($1/dev/null 2>&1 & sleep 8 wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 done grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } { if ($1>max) max=$1; if ($1 When JIRA was being screwy on Friday I used the time to investigate an idea I > have had for a while about improving our boot time performance. According to > Yourkit the majority of our time is spent in class loading. It seems very > unlikely that we will be able to reduce the number of classes we load on > boot (or at the very least it would be a massive amount of work) so I > investigated a different approach. > > I modified ModuleClassLoader to spit out the name and module of every class > that is loaded at boot time, and stored this in a properties file. I then > created a simple Service that starts immediately that uses two threads to > eagerly load every class on this list (I used two threads because that > seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is > probably the best amount, but that assumption would need to be tested on > different hardware). > > The idea behind this is that we know the classes will be used at some point, > and we generally do not fully utilise all CPU's during boot, so we can use > the unused CPU to pre load these classes so they are ready when they are > actually required. > > Using this approach I saw the boot time for standalone.xml drop from ~2.9s to > ~2.3s on my laptop. 
The (super hacky) code I used to perform this test is at > https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack > > I think these initial results are encouraging, and it is a big enough gain > that I think it is worth investigating further. > > Firstly it would be great if I could get others to try it out and see if they > see similar gains to boot time, it may be that the gain is very system > dependent. > > Secondly if we do decide to do this there are two approach that we can use > that I can see: > > 1) A hard coded list of class names that we generate before a release > (basically what the hack already does), this is simplest, but does add a > little bit of additional work to the release process (although if it is > missed it would be no big deal, as ClassNotFoundException's would be > suppressed, and if a few classes are missing the performance impact is > negligible as long as the majority of the list is correct). > > 2) Generate the list dynamically on first boot, and store it in the temp > directory. This would require the addition of a hook into JBoss Modules to > generate the list, but is the approach I would prefer (as first boot is > always a bit slower anyway). > > Thoughts? > > Stuart > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From brian.stansberry at redhat.com Mon May 15 10:13:26 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 15 May 2017 09:13:26 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: Definitely worth investigating. I?d like to have a real good understanding of why it has the benefits it has, so we can see if this is the best way to get them or if something else is better. This kicks in just before the ModelController starts and begins parsing the config. The config parsing quickly gets into parallel work; as soon as the extension elements are reached the extension modules are loaded concurrently. Then once the parsing is done each subsystem is installed concurrently, so lots of threads doing concurrent classloading. So why does adding two more make such a big difference? Is it that they gets lots of work done in that time when the regular boot thread is not doing concurrent work, i.e. the parsing and the non-parallel bits of operation execution? Is it that these threads are just chugging along doing classloading efficiently while the parallel threads are running along inefficiently getting scheduled and unscheduled? The latter doesn?t make sense to me as there?s no reason why these threads would be any more efficient than the others. - Brian > On May 14, 2017, at 6:36 PM, Stuart Douglas wrote: > > When JIRA was being screwy on Friday I used the time to investigate an idea I have had for a while about improving our boot time performance. According to Yourkit the majority of our time is spent in class loading. It seems very unlikely that we will be able to reduce the number of classes we load on boot (or at the very least it would be a massive amount of work) so I investigated a different approach. > > I modified ModuleClassLoader to spit out the name and module of every class that is loaded at boot time, and stored this in a properties file. 
I then created a simple Service that starts immediately that uses two threads to eagerly load every class on this list (I used two threads because that seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is probably the best amount, but that assumption would need to be tested on different hardware). > > The idea behind this is that we know the classes will be used at some point, and we generally do not fully utilise all CPU's during boot, so we can use the unused CPU to pre load these classes so they are ready when they are actually required. > > Using this approach I saw the boot time for standalone.xml drop from ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform this test is at https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack > > I think these initial results are encouraging, and it is a big enough gain that I think it is worth investigating further. > > Firstly it would be great if I could get others to try it out and see if they see similar gains to boot time, it may be that the gain is very system dependent. > > Secondly if we do decide to do this there are two approach that we can use that I can see: > > 1) A hard coded list of class names that we generate before a release (basically what the hack already does), this is simplest, but does add a little bit of additional work to the release process (although if it is missed it would be no big deal, as ClassNotFoundException's would be suppressed, and if a few classes are missing the performance impact is negligible as long as the majority of the list is correct). > > 2) Generate the list dynamically on first boot, and store it in the temp directory. This would require the addition of a hook into JBoss Modules to generate the list, but is the approach I would prefer (as first boot is always a bit slower anyway). > > Thoughts? > > Stuart > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Mon May 15 10:21:35 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 15 May 2017 09:21:35 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: A disadvantage of a static list is that we have other concerns besides boot speed, i.e. memory footprint. We do not want to be loading a bunch of classes that are not relevant to the configuration. You touch on this issue, Rostislav, in your point about properties file for delivered profiles. If the lists are per feature-pack (or better yet per package once our feature packs have the package notion) and then the build integrates them, that somewhat mitigates that concern, at least for people who are careful about how they provision. But it doesn?t help people who are not so careful and just rely on standalone.xml trimming or selecting a reasonalbe standarc config to tailor their server footprint. > On May 15, 2017, at 7:27 AM, Rostislav Svoboda wrote: > > Hi. > > I can confirm I see improvements in boot time with your changes. 
> My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base Frequency 2.10 GHz, Max Turbo 3.30 GHz) > > I executed 50 iterations of start - stop sequence [1], before execution 5x start - stop for "warmup" > > With your changes > Min: 3116 Max: 3761 Average: 3247.640000 > > Without: > Min: 3442 Max: 4081 Average: 3580.840000 > > >> 1) A hard coded list of class names that we generate before a release > > This will improve first boot impression, little bit harder for maintaining the list for the final build. > > Property files could be located inside properties directory of dedicated module (). Properties directory could contain property files for delivered profiles. > > Layered products or customer modifications could deliver own property file. > e.g. predefined property file for standalone-openshift.xml in EAP image in OpenShift environment, I think they boot the server just once and throw away the whole docker image when something changes. > > >> 2) Generate the list dynamically on first boot, and store it in the temp > > This looks like the most elegant thing to do. Question is how it will slow down the initial boot. People care about first boot impression, some blog writers do the mistake too. > This would also block boot time improvements for use-cases when you start the server just once - e.g. Docker, OpenShift. > > Also the logic should take into account which profile is loaded - e.g standalone.xml vs. standalone-full-ha.xml > > Rostislav > > [1] > rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log > rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT/bin/standalone.sh 1>/dev/null 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > ----- Original Message ----- >> When JIRA was being screwy on Friday I used the time to investigate an idea I >> have had for a while about improving our boot time performance. According to >> Yourkit the majority of our time is spent in class loading. It seems very >> unlikely that we will be able to reduce the number of classes we load on >> boot (or at the very least it would be a massive amount of work) so I >> investigated a different approach. >> >> I modified ModuleClassLoader to spit out the name and module of every class >> that is loaded at boot time, and stored this in a properties file. I then >> created a simple Service that starts immediately that uses two threads to >> eagerly load every class on this list (I used two threads because that >> seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is >> probably the best amount, but that assumption would need to be tested on >> different hardware). 
>> >> The idea behind this is that we know the classes will be used at some point, >> and we generally do not fully utilise all CPU's during boot, so we can use >> the unused CPU to pre load these classes so they are ready when they are >> actually required. >> >> Using this approach I saw the boot time for standalone.xml drop from ~2.9s to >> ~2.3s on my laptop. The (super hacky) code I used to perform this test is at >> https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack >> >> I think these initial results are encouraging, and it is a big enough gain >> that I think it is worth investigating further. >> >> Firstly it would be great if I could get others to try it out and see if they >> see similar gains to boot time, it may be that the gain is very system >> dependent. >> >> Secondly if we do decide to do this there are two approach that we can use >> that I can see: >> >> 1) A hard coded list of class names that we generate before a release >> (basically what the hack already does), this is simplest, but does add a >> little bit of additional work to the release process (although if it is >> missed it would be no big deal, as ClassNotFoundException's would be >> suppressed, and if a few classes are missing the performance impact is >> negligible as long as the majority of the list is correct). >> >> 2) Generate the list dynamically on first boot, and store it in the temp >> directory. This would require the addition of a hook into JBoss Modules to >> generate the list, but is the approach I would prefer (as first boot is >> always a bit slower anyway). >> >> Thoughts? >> >> Stuart >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From sanne at hibernate.org Mon May 15 10:23:37 2017 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 15 May 2017 15:23:37 +0100 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: Very interesting. >From a different perspective, but closely related, I was recently trying to profile the testsuites of Hibernate projects and also in our case ClassLoader time is significant portion of the bootstrap time. In the Hibernate case I noticed it spends quite some time to locate Service implementations over the ServiceLoader pattern. The problem seems to be that a lot of internal code has been refactored in recent versions to be "replaceable" so Hibernate ORM includes a default implementation for each internal Service but it will first check if it can find an alternative somewhere else on the classpath: looking both among its own dependencies and among the classes provided by the deployment. I'll see if we can do better in Hibernate ORM (not sure yet!), but raising it here as I suspect several other libraries could be guilty of the same approach. I also hope we'll be able to curate (trim) the dependencies more; the current JPA subsystem is including many dependencies of questionable usefulness. That should help? Thanks, Sanne On 15 May 2017 at 13:27, Rostislav Svoboda wrote: > Hi. 
> > I can confirm I see improvements in boot time with your changes. > My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base Frequency 2.10 GHz, Max Turbo 3.30 GHz) > > I executed 50 iterations of start - stop sequence [1], before execution 5x start - stop for "warmup" > > With your changes > Min: 3116 Max: 3761 Average: 3247.640000 > > Without: > Min: 3442 Max: 4081 Average: 3580.840000 > > >> 1) A hard coded list of class names that we generate before a release > > This will improve first boot impression, little bit harder for maintaining the list for the final build. > > Property files could be located inside properties directory of dedicated module (). Properties directory could contain property files for delivered profiles. > > Layered products or customer modifications could deliver own property file. > e.g. predefined property file for standalone-openshift.xml in EAP image in OpenShift environment, I think they boot the server just once and throw away the whole docker image when something changes. > > >> 2) Generate the list dynamically on first boot, and store it in the temp > > This looks like the most elegant thing to do. Question is how it will slow down the initial boot. People care about first boot impression, some blog writers do the mistake too. > This would also block boot time improvements for use-cases when you start the server just once - e.g. Docker, OpenShift. > > Also the logic should take into account which profile is loaded - e.g standalone.xml vs. standalone-full-ha.xml > > Rostislav > > [1] > rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log > rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT/bin/standalone.sh 1>/dev/null 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > ----- Original Message ----- >> When JIRA was being screwy on Friday I used the time to investigate an idea I >> have had for a while about improving our boot time performance. According to >> Yourkit the majority of our time is spent in class loading. It seems very >> unlikely that we will be able to reduce the number of classes we load on >> boot (or at the very least it would be a massive amount of work) so I >> investigated a different approach. >> >> I modified ModuleClassLoader to spit out the name and module of every class >> that is loaded at boot time, and stored this in a properties file. I then >> created a simple Service that starts immediately that uses two threads to >> eagerly load every class on this list (I used two threads because that >> seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is >> probably the best amount, but that assumption would need to be tested on >> different hardware). 
>> >> The idea behind this is that we know the classes will be used at some point, >> and we generally do not fully utilise all CPU's during boot, so we can use >> the unused CPU to pre load these classes so they are ready when they are >> actually required. >> >> Using this approach I saw the boot time for standalone.xml drop from ~2.9s to >> ~2.3s on my laptop. The (super hacky) code I used to perform this test is at >> https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack >> >> I think these initial results are encouraging, and it is a big enough gain >> that I think it is worth investigating further. >> >> Firstly it would be great if I could get others to try it out and see if they >> see similar gains to boot time, it may be that the gain is very system >> dependent. >> >> Secondly if we do decide to do this there are two approach that we can use >> that I can see: >> >> 1) A hard coded list of class names that we generate before a release >> (basically what the hack already does), this is simplest, but does add a >> little bit of additional work to the release process (although if it is >> missed it would be no big deal, as ClassNotFoundException's would be >> suppressed, and if a few classes are missing the performance impact is >> negligible as long as the majority of the list is correct). >> >> 2) Generate the list dynamically on first boot, and store it in the temp >> directory. This would require the addition of a hook into JBoss Modules to >> generate the list, but is the approach I would prefer (as first boot is >> always a bit slower anyway). >> >> Thoughts? >> >> Stuart >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From brian.stansberry at redhat.com Mon May 15 10:53:10 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 15 May 2017 09:53:10 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: <26D62A26-0C12-4F14-A2EE-6AD585E68348@redhat.com> +1 re: being careful about ServiceLoader. We were doing some perf testing work last week and that's one thing that showed up. Unfortunately it was in the app we were testing rather than in the server code but I could easily imagine similar things happening in the server. In case people are curious, the issue is the javax.json.Json class, which provides a bunch of static utility methods to create JSON related objects. The problem is they are implemented via "JsonProvider.provider().doXXX?. And that JsonProvider.provider() call uses a ServiceLoader to try and find any custom JsonProvider impls before falling back using the default. WildFly doesn?t ship any such impls. Those ServiceLoader calls all result in a FileNotFoundException as the classloader checks for META-INF/services/?JsonProvider. So IO access plus the cost of creating exception. The solution is to not use javax.json.Json.xxx all the time in the app, but to call JsonProvider.provider() once and cache it. > On May 15, 2017, at 9:23 AM, Sanne Grinovero wrote: > > Very interesting. 
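A minimal illustration of the caching approach described above (hypothetical JsonSupport class, not code from the application in question; the only point is that JsonProvider.provider() is resolved once rather than on every call to a javax.json.Json static method):

import javax.json.JsonObject;
import javax.json.spi.JsonProvider;

public final class JsonSupport {

    // Resolve the provider once. Each javax.json.Json static method otherwise goes
    // through JsonProvider.provider(), repeating the ServiceLoader lookup (and the
    // failed META-INF/services probe) per call.
    private static final JsonProvider PROVIDER = JsonProvider.provider();

    private JsonSupport() {
    }

    public static JsonObject statusMessage(String name, long durationMs) {
        return PROVIDER.createObjectBuilder()
                .add("name", name)
                .add("durationMs", durationMs)
                .build();
    }
}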
> >> From a different perspective, but closely related, I was recently > trying to profile the testsuites of Hibernate projects and also in our > case ClassLoader time is significant portion of the bootstrap time. > > In the Hibernate case I noticed it spends quite some time to locate > Service implementations over the ServiceLoader pattern. > > The problem seems to be that a lot of internal code has been > refactored in recent versions to be "replaceable" so Hibernate ORM > includes a default implementation for each internal Service but it > will first check if it can find an alternative somewhere else on the > classpath: looking both among its own dependencies and among the > classes provided by the deployment. > > I'll see if we can do better in Hibernate ORM (not sure yet!), but > raising it here as I suspect several other libraries could be guilty > of the same approach. > > I also hope we'll be able to curate (trim) the dependencies more; the > current JPA subsystem is including many dependencies of questionable > usefulness. That should help? > > Thanks, > Sanne > > > > On 15 May 2017 at 13:27, Rostislav Svoboda wrote: >> Hi. >> >> I can confirm I see improvements in boot time with your changes. >> My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base Frequency 2.10 GHz, Max Turbo 3.30 GHz) >> >> I executed 50 iterations of start - stop sequence [1], before execution 5x start - stop for "warmup" >> >> With your changes >> Min: 3116 Max: 3761 Average: 3247.640000 >> >> Without: >> Min: 3442 Max: 4081 Average: 3580.840000 >> >> >>> 1) A hard coded list of class names that we generate before a release >> >> This will improve first boot impression, little bit harder for maintaining the list for the final build. >> >> Property files could be located inside properties directory of dedicated module (). Properties directory could contain property files for delivered profiles. >> >> Layered products or customer modifications could deliver own property file. >> e.g. predefined property file for standalone-openshift.xml in EAP image in OpenShift environment, I think they boot the server just once and throw away the whole docker image when something changes. >> >> >>> 2) Generate the list dynamically on first boot, and store it in the temp >> >> This looks like the most elegant thing to do. Question is how it will slow down the initial boot. People care about first boot impression, some blog writers do the mistake too. >> This would also block boot time improvements for use-cases when you start the server just once - e.g. Docker, OpenShift. >> >> Also the logic should take into account which profile is loaded - e.g standalone.xml vs. 
standalone-full-ha.xml >> >> Rostislav >> >> [1] >> rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log >> rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log >> >> for i in {1..50}; do >> echo $i >> wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null 2>&1 & >> sleep 8 >> wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 >> done >> grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } >> { if ($1>max) max=$1; if ($1> END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' >> >> >> for i in {1..50}; do >> echo $i >> wildfly-11.0.0.Beta1-SNAPSHOT/bin/standalone.sh 1>/dev/null 2>&1 & >> sleep 8 >> wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1 >> done >> grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } >> { if ($1>max) max=$1; if ($1> END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' >> >> >> ----- Original Message ----- >>> When JIRA was being screwy on Friday I used the time to investigate an idea I >>> have had for a while about improving our boot time performance. According to >>> Yourkit the majority of our time is spent in class loading. It seems very >>> unlikely that we will be able to reduce the number of classes we load on >>> boot (or at the very least it would be a massive amount of work) so I >>> investigated a different approach. >>> >>> I modified ModuleClassLoader to spit out the name and module of every class >>> that is loaded at boot time, and stored this in a properties file. I then >>> created a simple Service that starts immediately that uses two threads to >>> eagerly load every class on this list (I used two threads because that >>> seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is >>> probably the best amount, but that assumption would need to be tested on >>> different hardware). >>> >>> The idea behind this is that we know the classes will be used at some point, >>> and we generally do not fully utilise all CPU's during boot, so we can use >>> the unused CPU to pre load these classes so they are ready when they are >>> actually required. >>> >>> Using this approach I saw the boot time for standalone.xml drop from ~2.9s to >>> ~2.3s on my laptop. The (super hacky) code I used to perform this test is at >>> https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack >>> >>> I think these initial results are encouraging, and it is a big enough gain >>> that I think it is worth investigating further. >>> >>> Firstly it would be great if I could get others to try it out and see if they >>> see similar gains to boot time, it may be that the gain is very system >>> dependent. >>> >>> Secondly if we do decide to do this there are two approach that we can use >>> that I can see: >>> >>> 1) A hard coded list of class names that we generate before a release >>> (basically what the hack already does), this is simplest, but does add a >>> little bit of additional work to the release process (although if it is >>> missed it would be no big deal, as ClassNotFoundException's would be >>> suppressed, and if a few classes are missing the performance impact is >>> negligible as long as the majority of the list is correct). 
>>> >>> 2) Generate the list dynamically on first boot, and store it in the temp >>> directory. This would require the addition of a hook into JBoss Modules to >>> generate the list, but is the approach I would prefer (as first boot is >>> always a bit slower anyway). >>> >>> Thoughts? >>> >>> Stuart >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From tomaz.cerar at gmail.com Mon May 15 11:04:05 2017 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Mon, 15 May 2017 17:04:05 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > So why does adding two more make such a big difference? Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. this leads to lots of contention as result. "preloading" some of this classes reduces contention. Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170515/3652ae49/attachment.html From david.lloyd at redhat.com Mon May 15 11:34:02 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Mon, 15 May 2017 10:34:02 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: I have a few thoughts that might be of interest. Firstly, I'd be interested to see when you are logging the class name being loaded. If you are logging it in loadClass, you might not be seeing the actual correct load order because that method is ultimately recursive. To get an accurate picture of what order that classes are actually defined - and thus what order you can load them in order to prevent contention on per-class locks within the CL - you should log immediately _after_ defineClass completes for each class. Secondly, while debugging a resource iteration performance problem a user was having with a large number of deployments, I discovered that contention for the lock on JarFile and ZipFile was a primary cause. The workaround I employed was to keep a RAM-based List of the files in the JAR, which can be iterated over without touching the lock. 
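As an illustration of that style of workaround (a sketch only, not the actual change), the entry names can be captured once up front so that later iteration never has to synchronize on the shared JarFile:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public final class JarEntryNameIndex {

    private final List<String> entryNames;

    public JarEntryNameIndex(JarFile jarFile) {
        // Walk the entries a single time, while holding the JarFile/ZipFile lock,
        // and keep just the names in plain memory.
        List<String> names = new ArrayList<>();
        Enumeration<JarEntry> entries = jarFile.entries();
        while (entries.hasMoreElements()) {
            names.add(entries.nextElement().getName());
        }
        this.entryNames = Collections.unmodifiableList(names);
    }

    // Iterating this list is lock-free; only actual byte reads still need to go
    // through the JarFile.
    public List<String> entryNames() {
        return entryNames;
    }
}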
When we're preloading classes, we're definitely going to see this same kind of contention come up, because there's only one lock per JarFile instance so you can only ever read one entry at a time, thus preventing any kind of useful concurrency on a per-module basis. Exploding the files out of the JarFile could expose this contention and therefore might be useful as a test - but it would also skew the results a little because you have no decompression overhead, and creating the separate file streams hypothetically might be somewhat more (or less) expensive. I joked about resurrecting jzipfile (which I killed off because it was something like 20% slower at decompressing entries than Jar/ZipFile) but it might be worth considering having our own JAR extractor at some point with a view towards concurrency gains. If we go this route, we could go even further and create an optimized module format, which is an idea I think we've looked at a little bit in the past; there are a few avenues of exploration here which could be interesting. At some point we also need to see how jaotc might improve things. It probably won't improve class loading time directly, but it might improve the processes by which class loading is done because all the one-off bits would be precompiled. Also it's worth exploring whether the jimage format has contention issues like this. On 05/14/2017 06:36 PM, Stuart Douglas wrote: > When JIRA was being screwy on Friday I used the time to investigate an > idea I have had for a while about improving our boot time performance. > According to Yourkit the majority of our time is spent in class loading. > It seems very unlikely that we will be able to reduce the number of > classes we load on boot (or at the very least it would be a massive > amount of work) so I investigated a different approach. > > I modified ModuleClassLoader to spit out the name and module of every > class that is loaded at boot time, and stored this in a properties file. > I then created a simple Service that starts immediately that uses two > threads to eagerly load every class on this list (I used two threads > because that seemed to work well on my laptop, I think > Runtime.availableProcessors()/4 is probably the best amount, but that > assumption would need to be tested on different hardware). > > The idea behind this is that we know the classes will be used at some > point, and we generally do not fully utilise all CPU's during boot, so > we can use the unused CPU to pre load these classes so they are ready > when they are actually required. > > Using this approach I saw the boot time for standalone.xml drop from > ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform > this test is at > https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack > > I think these initial results are encouraging, and it is a big enough > gain that I think it is worth investigating further. > > Firstly it would be great if I could get others to try it out and see if > they see similar gains to boot time, it may be that the gain is very > system dependent. 
> > Secondly if we do decide to do this there are two approach that we can > use that I can see: > > 1) A hard coded list of class names that we generate before a release > (basically what the hack already does), this is simplest, but does add a > little bit of additional work to the release process (although if it is > missed it would be no big deal, as ClassNotFoundException's would be > suppressed, and if a few classes are missing the performance impact is > negligible as long as the majority of the list is correct). > > 2) Generate the list dynamically on first boot, and store it in the temp > directory. This would require the addition of a hook into JBoss Modules > to generate the list, but is the approach I would prefer (as first boot > is always a bit slower anyway). > > Thoughts? > > Stuart > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- - DML From brian.stansberry at redhat.com Mon May 15 12:20:12 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 15 May 2017 11:20:12 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: Thanks. That?s interesting. > On May 15, 2017, at 10:04 AM, Toma? Cerar wrote: > > > On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry wrote: > So why does adding two more make such a big difference? > > Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. > > Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. > this leads to lots of contention as result. "preloading" some of this classes reduces contention. > > Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. > > Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. > This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. > > > > -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From stuart.w.douglas at gmail.com Mon May 15 17:52:27 2017 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 16 May 2017 07:52:27 +1000 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: On Tue, May 16, 2017 at 12:13 AM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > Definitely worth investigating. I?d like to have a real good understanding > of why it has the benefits it has, so we can see if this is the best way to > get them or if something else is better. > I am pretty sure it is contention related. I modified my hack to load all classes from the same module at once (so once the first class from a module in that properties file is reached, it loads all others from the same module), and this gave another small but significant speedup (so the total gain is ~2.0-2.1s down from ~2.9s). Looking at the results of monitor profiling in Yourkit it looks like the reason is reduced contention. There is 50% less thread wait time on ModuleLoader$FutureModule, contention on JarFileResourceLoader is no more. I think the reason is that we have a lot of threads active at boot and this results in a lot of contention in module/class loading. 
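In sketch form (hypothetical code, not the modified hack itself), the grouping amounts to something like the following; the background threading would be the same availableProcessors()/4 pool as in the earlier sketch and is omitted here:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public final class ModuleGroupedPreloader {

    public static void preload(Path classList, Function<String, ClassLoader> loaderForModule) throws IOException {
        // Group the recorded "module=class" entries by module first...
        Map<String, List<String>> byModule = new LinkedHashMap<>();
        for (String line : Files.readAllLines(classList)) {
            int split = line.indexOf('=');
            if (split > 0) {
                byModule.computeIfAbsent(line.substring(0, split), k -> new ArrayList<>())
                        .add(line.substring(split + 1));
            }
        }
        // ...then load each module's classes together, so a worker touches one
        // ClassLoader (and its underlying resource loader) at a time instead of
        // bouncing between modules and contending for their locks.
        for (Map.Entry<String, List<String>> entry : byModule.entrySet()) {
            ClassLoader loader = loaderForModule.apply(entry.getKey());
            for (String className : entry.getValue()) {
                try {
                    Class.forName(className, false, loader);
                } catch (ClassNotFoundException | LinkageError ignored) {
                    // Stale entries are expected and harmless.
                }
            }
        }
    }
}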
Stuart > > This kicks in just before the ModelController starts and begins parsing > the config. The config parsing quickly gets into parallel work; as soon as > the extension elements are reached the extension modules are loaded > concurrently. Then once the parsing is done each subsystem is installed > concurrently, so lots of threads doing concurrent classloading. > > So why does adding two more make such a big difference? > > Is it that they gets lots of work done in that time when the regular boot > thread is not doing concurrent work, i.e. the parsing and the non-parallel > bits of operation execution? > > Is it that these threads are just chugging along doing classloading > efficiently while the parallel threads are running along inefficiently > getting scheduled and unscheduled? > > The latter doesn?t make sense to me as there?s no reason why these threads > would be any more efficient than the others. > > - Brian > > > On May 14, 2017, at 6:36 PM, Stuart Douglas > wrote: > > > > When JIRA was being screwy on Friday I used the time to investigate an > idea I have had for a while about improving our boot time performance. > According to Yourkit the majority of our time is spent in class loading. It > seems very unlikely that we will be able to reduce the number of classes we > load on boot (or at the very least it would be a massive amount of work) so > I investigated a different approach. > > > > I modified ModuleClassLoader to spit out the name and module of every > class that is loaded at boot time, and stored this in a properties file. I > then created a simple Service that starts immediately that uses two threads > to eagerly load every class on this list (I used two threads because that > seemed to work well on my laptop, I think Runtime.availableProcessors()/4 > is probably the best amount, but that assumption would need to be tested on > different hardware). > > > > The idea behind this is that we know the classes will be used at some > point, and we generally do not fully utilise all CPU's during boot, so we > can use the unused CPU to pre load these classes so they are ready when > they are actually required. > > > > Using this approach I saw the boot time for standalone.xml drop from > ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform this > test is at https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:boot-performance-hack > > > > I think these initial results are encouraging, and it is a big enough > gain that I think it is worth investigating further. > > > > Firstly it would be great if I could get others to try it out and see if > they see similar gains to boot time, it may be that the gain is very system > dependent. > > > > Secondly if we do decide to do this there are two approach that we can > use that I can see: > > > > 1) A hard coded list of class names that we generate before a release > (basically what the hack already does), this is simplest, but does add a > little bit of additional work to the release process (although if it is > missed it would be no big deal, as ClassNotFoundException's would be > suppressed, and if a few classes are missing the performance impact is > negligible as long as the majority of the list is correct). > > > > 2) Generate the list dynamically on first boot, and store it in the temp > directory. This would require the addition of a hook into JBoss Modules to > generate the list, but is the approach I would prefer (as first boot is > always a bit slower anyway). > > > > Thoughts? 
> > > > Stuart > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > JBoss by Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/0495ac80/attachment-0001.html From stuart.w.douglas at gmail.com Mon May 15 18:15:47 2017 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 16 May 2017 08:15:47 +1000 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: On Mon, May 15, 2017 at 10:27 PM, Rostislav Svoboda wrote: > Hi. > > I can confirm I see improvements in boot time with your changes. > My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base > Frequency 2.10 GHz, Max Turbo 3.30 GHz) > > I executed 50 iterations of start - stop sequence [1], before execution 5x > start - stop for "warmup" > > With your changes > Min: 3116 Max: 3761 Average: 3247.640000 > > Without: > Min: 3442 Max: 4081 Average: 3580.840000 > > > > 1) A hard coded list of class names that we generate before a release > > This will improve first boot impression, little bit harder for maintaining > the list for the final build. > > Property files could be located inside properties directory of dedicated > module (). Properties directory could > contain property files for delivered profiles. > > Layered products or customer modifications could deliver own property file. > e.g. predefined property file for standalone-openshift.xml in EAP image > in OpenShift environment, I think they boot the server just once and throw > away the whole docker image when something changes. > > > > 2) Generate the list dynamically on first boot, and store it in the temp > > This looks like the most elegant thing to do. Question is how it will slow > down the initial boot. People care about first boot impression, some blog > writers do the mistake too. > It will not actually slow down the initial boot (at least not in a measurable way), but the first boot would not get the benefit of this optimisation. Stuart > This would also block boot time improvements for use-cases when you start > the server just once - e.g. Docker, OpenShift. > > Also the logic should take into account which profile is loaded - e.g > standalone.xml vs. 
standalone-full-ha.xml > > Rostislav > > [1] > rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log > rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null > 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown > 1>/dev/null 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT- > preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk > 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > for i in {1..50}; do > echo $i > wildfly-11.0.0.Beta1-SNAPSHOT/bin/standalone.sh 1>/dev/null 2>&1 & > sleep 8 > wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null > 2>&1 > done > grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log > | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 } > { if ($1>max) max=$1; if ($1 END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}' > > > ----- Original Message ----- > > When JIRA was being screwy on Friday I used the time to investigate an > idea I > > have had for a while about improving our boot time performance. > According to > > Yourkit the majority of our time is spent in class loading. It seems very > > unlikely that we will be able to reduce the number of classes we load on > > boot (or at the very least it would be a massive amount of work) so I > > investigated a different approach. > > > > I modified ModuleClassLoader to spit out the name and module of every > class > > that is loaded at boot time, and stored this in a properties file. I then > > created a simple Service that starts immediately that uses two threads to > > eagerly load every class on this list (I used two threads because that > > seemed to work well on my laptop, I think Runtime.availableProcessors()/4 > is > > probably the best amount, but that assumption would need to be tested on > > different hardware). > > > > The idea behind this is that we know the classes will be used at some > point, > > and we generally do not fully utilise all CPU's during boot, so we can > use > > the unused CPU to pre load these classes so they are ready when they are > > actually required. > > > > Using this approach I saw the boot time for standalone.xml drop from > ~2.9s to > > ~2.3s on my laptop. The (super hacky) code I used to perform this test > is at > > https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:boot-performance-hack > > > > I think these initial results are encouraging, and it is a big enough > gain > > that I think it is worth investigating further. > > > > Firstly it would be great if I could get others to try it out and see if > they > > see similar gains to boot time, it may be that the gain is very system > > dependent. > > > > Secondly if we do decide to do this there are two approach that we can > use > > that I can see: > > > > 1) A hard coded list of class names that we generate before a release > > (basically what the hack already does), this is simplest, but does add a > > little bit of additional work to the release process (although if it is > > missed it would be no big deal, as ClassNotFoundException's would be > > suppressed, and if a few classes are missing the performance impact is > > negligible as long as the majority of the list is correct). 
> > > > 2) Generate the list dynamically on first boot, and store it in the temp > > directory. This would require the addition of a hook into JBoss Modules > to > > generate the list, but is the approach I would prefer (as first boot is > > always a bit slower anyway). > > > > Thoughts? > > > > Stuart > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/275a3653/attachment.html From brian.stansberry at redhat.com Mon May 15 18:16:09 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 15 May 2017 17:16:09 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: <93F739F1-0419-44EC-8C58-5403CC11D817@redhat.com> >From the time I did parallel boot I?ve always wondered if the level of concurrency was valid, but I never got around to doing any experimentation. It?s quite naive; a task per extension module load and then one per subystem. I?ve wanted to look into instead dividing the work into X larger tasks with X derived from the number of cores. But for your fix to be helping things so much it must be loading a lot of these classes during the single-threaded parts of the boot, so I don?t see how my changing it to have fewer tasks would compete with that. It may be beneficial regardless though, e.g. by not spinning up more threads that can be efficiently used. > On May 15, 2017, at 4:52 PM, Stuart Douglas wrote: > > > > On Tue, May 16, 2017 at 12:13 AM, Brian Stansberry wrote: > Definitely worth investigating. I?d like to have a real good understanding of why it has the benefits it has, so we can see if this is the best way to get them or if something else is better. > > I am pretty sure it is contention related. I modified my hack to load all classes from the same module at once (so once the first class from a module in that properties file is reached, it loads all others from the same module), and this gave another small but significant speedup (so the total gain is ~2.0-2.1s down from ~2.9s). > > Looking at the results of monitor profiling in Yourkit it looks like the reason is reduced contention. There is 50% less thread wait time on ModuleLoader$FutureModule, contention on JarFileResourceLoader is no more. I think the reason is that we have a lot of threads active at boot and this results in a lot of contention in module/class loading. > > Stuart > > > > > This kicks in just before the ModelController starts and begins parsing the config. The config parsing quickly gets into parallel work; as soon as the extension elements are reached the extension modules are loaded concurrently. Then once the parsing is done each subsystem is installed concurrently, so lots of threads doing concurrent classloading. > > So why does adding two more make such a big difference? > > Is it that they gets lots of work done in that time when the regular boot thread is not doing concurrent work, i.e. the parsing and the non-parallel bits of operation execution? > > Is it that these threads are just chugging along doing classloading efficiently while the parallel threads are running along inefficiently getting scheduled and unscheduled? > > The latter doesn?t make sense to me as there?s no reason why these threads would be any more efficient than the others. 
> > - Brian > > > On May 14, 2017, at 6:36 PM, Stuart Douglas wrote: > > > > When JIRA was being screwy on Friday I used the time to investigate an idea I have had for a while about improving our boot time performance. According to Yourkit the majority of our time is spent in class loading. It seems very unlikely that we will be able to reduce the number of classes we load on boot (or at the very least it would be a massive amount of work) so I investigated a different approach. > > > > I modified ModuleClassLoader to spit out the name and module of every class that is loaded at boot time, and stored this in a properties file. I then created a simple Service that starts immediately that uses two threads to eagerly load every class on this list (I used two threads because that seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is probably the best amount, but that assumption would need to be tested on different hardware). > > > > The idea behind this is that we know the classes will be used at some point, and we generally do not fully utilise all CPU's during boot, so we can use the unused CPU to pre load these classes so they are ready when they are actually required. > > > > Using this approach I saw the boot time for standalone.xml drop from ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform this test is at https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack > > > > I think these initial results are encouraging, and it is a big enough gain that I think it is worth investigating further. > > > > Firstly it would be great if I could get others to try it out and see if they see similar gains to boot time, it may be that the gain is very system dependent. > > > > Secondly if we do decide to do this there are two approach that we can use that I can see: > > > > 1) A hard coded list of class names that we generate before a release (basically what the hack already does), this is simplest, but does add a little bit of additional work to the release process (although if it is missed it would be no big deal, as ClassNotFoundException's would be suppressed, and if a few classes are missing the performance impact is negligible as long as the majority of the list is correct). > > > > 2) Generate the list dynamically on first boot, and store it in the temp directory. This would require the addition of a hook into JBoss Modules to generate the list, but is the approach I would prefer (as first boot is always a bit slower anyway). > > > > Thoughts? > > > > Stuart > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > JBoss by Red Hat > > > > -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From stuart.w.douglas at gmail.com Mon May 15 18:21:32 2017 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 16 May 2017 08:21:32 +1000 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: On Tue, May 16, 2017 at 1:34 AM, David M. Lloyd wrote: > I have a few thoughts that might be of interest. > > Firstly, I'd be interested to see when you are logging the class name > being loaded. If you are logging it in loadClass, you might not be > seeing the actual correct load order because that method is ultimately > recursive. 
To get an accurate picture of what order that classes are > actually defined - and thus what order you can load them in order to > prevent contention on per-class locks within the CL - you should log > immediately _after_ defineClass completes for each class. > I set a breakpoint in loadClassLocal to print off the information. > > Secondly, while debugging a resource iteration performance problem a > user was having with a large number of deployments, I discovered that > contention for the lock on JarFile and ZipFile was a primary cause. The > workaround I employed was to keep a RAM-based List of the files in the > JAR, which can be iterated over without touching the lock. > > When we're preloading classes, we're definitely going to see this same > kind of contention come up, because there's only one lock per JarFile > instance so you can only ever read one entry at a time, thus preventing > any kind of useful concurrency on a per-module basis. > I think this is why I see an even bigger gain when pre-loading classes one module at a time. > > Exploding the files out of the JarFile could expose this contention and > therefore might be useful as a test - but it would also skew the results > a little because you have no decompression overhead, and creating the > separate file streams hypothetically might be somewhat more (or less) > expensive. I joked about resurrecting jzipfile (which I killed off > because it was something like 20% slower at decompressing entries than > Jar/ZipFile) but it might be worth considering having our own JAR > extractor at some point with a view towards concurrency gains. If we go > this route, we could go even further and create an optimized module > format, which is an idea I think we've looked at a little bit in the > past; there are a few avenues of exploration here which could be > interesting. > This could be worth investigating. Stuart > > At some point we also need to see how jaotc might improve things. It > probably won't improve class loading time directly, but it might improve > the processes by which class loading is done because all the one-off > bits would be precompiled. Also it's worth exploring whether the jimage > format has contention issues like this. > > On 05/14/2017 06:36 PM, Stuart Douglas wrote: > > When JIRA was being screwy on Friday I used the time to investigate an > > idea I have had for a while about improving our boot time performance. > > According to Yourkit the majority of our time is spent in class loading. > > It seems very unlikely that we will be able to reduce the number of > > classes we load on boot (or at the very least it would be a massive > > amount of work) so I investigated a different approach. > > > > I modified ModuleClassLoader to spit out the name and module of every > > class that is loaded at boot time, and stored this in a properties file. > > I then created a simple Service that starts immediately that uses two > > threads to eagerly load every class on this list (I used two threads > > because that seemed to work well on my laptop, I think > > Runtime.availableProcessors()/4 is probably the best amount, but that > > assumption would need to be tested on different hardware). > > > > The idea behind this is that we know the classes will be used at some > > point, and we generally do not fully utilise all CPU's during boot, so > > we can use the unused CPU to pre load these classes so they are ready > > when they are actually required. 
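To make the preloading idea quoted above concrete, here is a minimal, illustrative sketch of such a warm-up service. It is not the hack linked in the thread: the class names come from a plain text file, a single class loader stands in for the per-module loaders jboss-modules would use, and the class name BootClassPreloader and the file format are invented for the example.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Illustrative sketch only: eagerly loads a recorded list of class names on a
 * small background pool so they are already defined by the time boot needs them.
 */
public final class BootClassPreloader {

    public static void preload(Path listFile, ClassLoader loader) throws IOException {
        // One fully qualified class name per line, e.g. org.jboss.as.server.Main
        List<String> names = Files.readAllLines(listFile, StandardCharsets.UTF_8);

        // Use spare CPU only; roughly availableProcessors()/4 as suggested in the thread.
        int threads = Math.max(1, Runtime.getRuntime().availableProcessors() / 4);
        ExecutorService pool = Executors.newFixedThreadPool(threads, r -> {
            Thread t = new Thread(r, "boot-class-preloader");
            t.setDaemon(true); // never keep the server alive just for warm-up work
            return t;
        });

        for (String rawName : names) {
            final String name = rawName.trim();
            if (name.isEmpty() || name.startsWith("#")) {
                continue; // tolerate blank lines and comments in the list file
            }
            pool.execute(() -> {
                try {
                    // initialize=false: define the class without running static initializers
                    Class.forName(name, false, loader);
                } catch (ClassNotFoundException | LinkageError ignored) {
                    // A stale entry is harmless; the real load later will report any true error.
                }
            });
        }
        pool.shutdown(); // queued warm-up tasks drain in the background
    }

    public static void main(String[] args) throws IOException {
        preload(Paths.get(args[0]), BootClassPreloader.class.getClassLoader());
    }
}

Class.forName with initialize=false keeps static initializers from running early, so the warm-up only pays the class-definition cost that boot would have paid anyway.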
> > > > Using this approach I saw the boot time for standalone.xml drop from > > ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform > > this test is at > > https://github.com/wildfly/wildfly-core/compare/master... > stuartwdouglas:boot-performance-hack > > > > I think these initial results are encouraging, and it is a big enough > > gain that I think it is worth investigating further. > > > > Firstly it would be great if I could get others to try it out and see if > > they see similar gains to boot time, it may be that the gain is very > > system dependent. > > > > Secondly if we do decide to do this there are two approach that we can > > use that I can see: > > > > 1) A hard coded list of class names that we generate before a release > > (basically what the hack already does), this is simplest, but does add a > > little bit of additional work to the release process (although if it is > > missed it would be no big deal, as ClassNotFoundException's would be > > suppressed, and if a few classes are missing the performance impact is > > negligible as long as the majority of the list is correct). > > > > 2) Generate the list dynamically on first boot, and store it in the temp > > directory. This would require the addition of a hook into JBoss Modules > > to generate the list, but is the approach I would prefer (as first boot > > is always a bit slower anyway). > > > > Thoughts? > > > > Stuart > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > > -- > - DML > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/db8f7c2a/attachment-0001.html From heiko.rupp at redhat.com Mon May 15 23:57:41 2017 From: heiko.rupp at redhat.com (Heiko W.Rupp) Date: Tue, 16 May 2017 05:57:41 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: <6AB03629-F714-44F3-91BA-54A380EE298F@redhat.com> On 16 May 2017, at 0:15, Stuart Douglas wrote: >>> 2) Generate the list dynamically on first boot, and store it in the temp >> >> This looks like the most elegant thing to do. Question is how it will slow >> down the initial boot. People care about first boot impression, some blog >> writers do the mistake too. >> > > It will not actually slow down the initial boot (at least not in a > measurable way), but the first boot would not get the benefit of this > optimisation. A mixed mode could be interesting, where the list is created by tooling (or pseudo-boot) and then written down. As someone said on Docker/OS every boot is first boot, so doing the pre-population then would not help. But creating the list at image creation time would dynamically create the list and make the speedup available to all starts of containers from that image. 
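A sketch of what the recording side of that "generate the list at image creation time" idea could look like, assuming a hook in the module class loader existed (it does not today; the classDefined callback below is purely hypothetical): record module/class pairs as classes are defined, then write them out sorted once the recording boot finishes.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: collects "module/class" pairs during a recording boot,
 * then dumps them sorted so the list is stable from one image build to the next.
 */
public final class BootClassRecorder {

    private final Set<String> defined = ConcurrentHashMap.newKeySet();

    /** Imagined callback, e.g. invoked right after defineClass succeeds. */
    public void classDefined(String moduleName, String className) {
        defined.add(moduleName + "/" + className);
    }

    /** Called once boot is complete (or from a shutdown hook on a throwaway "pseudo-boot"). */
    public void writeList(Path target) throws IOException {
        Files.write(target, new TreeSet<>(defined), StandardCharsets.UTF_8);
    }
}

An image build could then run one throwaway boot with recording enabled and bake the resulting file into the image, so every container start gets the preload benefit without paying a per-boot recording cost.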
From jai.forums2013 at gmail.com Tue May 16 07:32:47 2017 From: jai.forums2013 at gmail.com (J Pai) Date: Tue, 16 May 2017 17:02:47 +0530 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> Not to undermine these efforts (in fact, this thread has actually brought up a couple of really interesting details), but one of the things I have always seen when we spent time trying to add relatively complex ways to squeeze some milli seconds out of the boot time, is that for the end users, most of the times it really didn?t matter in a noticeable way.I am not talking about the major improvements we have made from AS5/AS6 to the WildFly boot times today. What I have experienced is that for end users, they are mostly interested in seeing their (usually large) deployments show noticeable improvements in deployment time, not necessarily from a cold boot of the server, but when the server is already up and they either want to deploy something new or re-deploy their application. All in all, as a developer, I will be curiously following how these experiments go, but as an end user, I am not sure this will show up as something noticeable. Of course, the place where this would probably make a difference (even from an end user perspective) is something like maybe WildFly Swarm, but then again I haven?t been following that project to understand if these efforts will directly end up somehow in WildFly Swarm. -Jaikiran On 15-May-2017, at 5:06 AM, Stuart Douglas wrote: When JIRA was being screwy on Friday I used the time to investigate an idea I have had for a while about improving our boot time performance. According to Yourkit the majority of our time is spent in class loading. It seems very unlikely that we will be able to reduce the number of classes we load on boot (or at the very least it would be a massive amount of work) so I investigated a different approach. I modified ModuleClassLoader to spit out the name and module of every class that is loaded at boot time, and stored this in a properties file. I then created a simple Service that starts immediately that uses two threads to eagerly load every class on this list (I used two threads because that seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is probably the best amount, but that assumption would need to be tested on different hardware). The idea behind this is that we know the classes will be used at some point, and we generally do not fully utilise all CPU's during boot, so we can use the unused CPU to pre load these classes so they are ready when they are actually required. Using this approach I saw the boot time for standalone.xml drop from ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform this test is at https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack I think these initial results are encouraging, and it is a big enough gain that I think it is worth investigating further. Firstly it would be great if I could get others to try it out and see if they see similar gains to boot time, it may be that the gain is very system dependent. 
Secondly if we do decide to do this there are two approach that we can use that I can see: 1) A hard coded list of class names that we generate before a release (basically what the hack already does), this is simplest, but does add a little bit of additional work to the release process (although if it is missed it would be no big deal, as ClassNotFoundException's would be suppressed, and if a few classes are missing the performance impact is negligible as long as the majority of the list is correct). 2) Generate the list dynamically on first boot, and store it in the temp directory. This would require the addition of a hook into JBoss Modules to generate the list, but is the approach I would prefer (as first boot is always a bit slower anyway). Thoughts? Stuart _______________________________________________ wildfly-dev mailing list wildfly-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/wildfly-dev From hbraun at redhat.com Tue May 16 08:41:53 2017 From: hbraun at redhat.com (Heiko Braun) Date: Tue, 16 May 2017 14:41:53 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> Message-ID: > On 16. May 2017, at 13:32, J Pai wrote: > > What I have experienced is that for end users, they are mostly interested in seeing their (usually large) deployments show noticeable improvements in deployment time, not necessarily from a cold boot of the server, but when the server is already up and they either want to deploy something new or re-deploy their application. +1 the deployments increase the time until ?ready to perform work?. This is the point we should use as a reference. Anything before (i.e. blank WF without deployments) is just marketing IMO. Heiko -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/09c843ea/attachment.html From tomaz.cerar at gmail.com Tue May 16 09:14:09 2017 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Tue, 16 May 2017 15:14:09 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> Message-ID: Hey Jaikiran! On Tue, May 16, 2017 at 1:32 PM, J Pai wrote: > What I have experienced is that for end users, they are mostly interested > in seeing their (usually large) deployments show noticeable improvements in > deployment time, not necessarily from a cold boot of the server, but when > the server is already up and they either want to deploy something new or > re-deploy their application. We all agree on this and we are looking into speeding up user deployments as well. One of bottlenecks with deployments is how we read deployment contents as it has showed that java.util.jar.JarFile that we are using to load resources doesn't really scale well in concurrent environments and is causing lots of slowdown. We are now looking what could we do to mitigate this by different approaches, but as none of them are in fully workable state I wouldn't go into details yet. In short, speeding up user deployments is on our radar. - tomaz -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/fd9f04c6/attachment.html From jason.greene at redhat.com Tue May 16 09:47:11 2017 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 16 May 2017 08:47:11 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> Message-ID: <1077E3ED-4B84-4CCA-A303-74D8072B3C1C@redhat.com> > On May 16, 2017, at 7:41 AM, Heiko Braun wrote: > > >> On 16. May 2017, at 13:32, J Pai > wrote: >> >> What I have experienced is that for end users, they are mostly interested in seeing their (usually large) deployments show noticeable improvements in deployment time, not necessarily from a cold boot of the server, but when the server is already up and they either want to deploy something new or re-deploy their application. > > +1 the deployments increase the time until ?ready to perform work?. This is the point we should use as a reference. Anything before (i.e. blank WF without deployments) is just marketing IMO. I agree that deployment time is important, but I just want to point out that not all usages of WildFly involve deployments. Examples include proxy servers, static content servers, message brokers, javascript code, transaction managers, and service based applications. -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/2cc47b6c/attachment.html From smarlow at redhat.com Tue May 16 09:59:05 2017 From: smarlow at redhat.com (Scott Marlow) Date: Tue, 16 May 2017 09:59:05 -0400 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: Excellent idea! > > 1) A hard coded list of class names that we generate before a release > (basically what the hack already does), this is simplest, but does add a > little bit of additional work to the release process (although if it is > missed it would be no big deal, as ClassNotFoundException's would be > suppressed, and if a few classes are missing the performance impact is > negligible as long as the majority of the list is correct). Could the list of class names be read from the server configuration? I assume that would likely defeat the purpose of pre-loading these classes, as we would to then wait until the server configuration is read. Perhaps an alternative could be allowing a system property setting to override the class list. I am thinking that users might want some influence over which classes are pre-loaded so they can prune the list and also add to it. > > 2) Generate the list dynamically on first boot, and store it in the temp > directory. This would require the addition of a hook into JBoss Modules to > generate the list, but is the approach I would prefer (as first boot is > always a bit slower anyway). I like this best but also wonder how users would deal with updating the list, if they know it should contain a different set of class names. Perhaps they could know to delete the list from the temp directory, at the right time (e.g. after stopping the app server but before starting the app server again). If we do add a system property for allowing the user to specify the list of classes (or perhaps name of file that contains the list), IMO, I think that system property should override the list that we generate in the temp directory. > > Thoughts? 
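Building on the system-property suggestion above, a small sketch of how the preloader might resolve its list: an explicit property (the name below is made up, not an existing WildFly setting) overrides everything, otherwise the list generated into the server temp directory is used, and if neither exists preloading is simply skipped and boot behaves exactly as today.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

/**
 * Sketch: resolve the class-list location. The preload property name below is
 * purely illustrative, not an existing WildFly setting.
 */
final class PreloadListResolver {

    static Optional<Path> resolve() {
        // 1. An explicit user-supplied list always wins, so users can prune or extend it.
        String override = System.getProperty("org.wildfly.unsupported.preload.class-list");
        if (override != null) {
            return Optional.of(Paths.get(override));
        }
        // 2. Otherwise use the list generated on a previous boot, if present.
        Path generated = Paths.get(System.getProperty("jboss.server.temp.dir", "tmp"),
                "preload-classes.txt");
        return Files.exists(generated) ? Optional.of(generated) : Optional.empty();
        // 3. Empty result: skip preloading entirely.
    }
}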
> > Stuart > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From david.lloyd at redhat.com Tue May 16 10:00:00 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Tue, 16 May 2017 09:00:00 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> Message-ID: <8d54aaa7-f9b2-3000-ee49-f77dc6c98add@redhat.com> On 05/16/2017 07:41 AM, Heiko Braun wrote: > >> On 16. May 2017, at 13:32, J Pai > > wrote: >> >> What I have experienced is that for end users, they are mostly >> interested in seeing their (usually large) deployments show noticeable >> improvements in deployment time, not necessarily from a cold boot of >> the server, but when the server is already up and they either want to >> deploy something new or re-deploy their application. > > +1 the deployments increase the time until ?ready to perform work?. This > is the point we should use as a reference. Anything before (i.e. blank > WF without deployments) is just marketing IMO. Startup time to "ready" does include the server init; so such an effort isn't a total waste of time in this case. But I agree with your main point. But I think if we can squeeze a bit more speed out of initialization, there's no harm in trying for it. The performance data that comes from this analysis has already been used to target areas that will improve performance for every part of startup, including deployment (maybe substantially). -- - DML From david.lloyd at redhat.com Tue May 16 10:52:06 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Tue, 16 May 2017 09:52:06 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: Off list discussion: the NIO.2 JAR file provider appears to have substantially better performance (and no central locking). This JIRA [1] covers using this by default within JBoss Modules. https://issues.jboss.org/browse/MODULES-285 On 05/15/2017 10:34 AM, David M. Lloyd wrote: > I have a few thoughts that might be of interest. > > Firstly, I'd be interested to see when you are logging the class name > being loaded. If you are logging it in loadClass, you might not be > seeing the actual correct load order because that method is ultimately > recursive. To get an accurate picture of what order that classes are > actually defined - and thus what order you can load them in order to > prevent contention on per-class locks within the CL - you should log > immediately _after_ defineClass completes for each class. > > Secondly, while debugging a resource iteration performance problem a > user was having with a large number of deployments, I discovered that > contention for the lock on JarFile and ZipFile was a primary cause. The > workaround I employed was to keep a RAM-based List of the files in the > JAR, which can be iterated over without touching the lock. > > When we're preloading classes, we're definitely going to see this same > kind of contention come up, because there's only one lock per JarFile > instance so you can only ever read one entry at a time, thus preventing > any kind of useful concurrency on a per-module basis. 
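As an illustration of the RAM-based entry list workaround described above (a sketch only, not the actual jboss-modules change): take the JarFile lock once to snapshot the entry names, then let any number of threads iterate the immutable snapshot without touching the lock.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * Sketch: snapshot a JAR's entry names once, so later iteration (e.g. resource
 * scanning across many threads) does not contend on the JarFile's internal lock.
 */
final class JarEntryIndex {

    private final List<String> names;

    JarEntryIndex(JarFile jar) {
        // The only pass that goes through the JarFile; done once per resource loader.
        List<String> collected = new ArrayList<>();
        Enumeration<JarEntry> e = jar.entries();
        while (e.hasMoreElements()) {
            collected.add(e.nextElement().getName());
        }
        this.names = Collections.unmodifiableList(collected);
    }

    /** Safe to iterate from any number of threads without touching the JarFile. */
    List<String> entryNames() {
        return names;
    }

    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile(args[0])) {
            System.out.println(new JarEntryIndex(jar).entryNames().size() + " entries");
        }
    }
}

Reading entry contents still goes through the JarFile's single lock, as described above, which is the part the NIO.2 zip filesystem work aims at.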
> > Exploding the files out of the JarFile could expose this contention and > therefore might be useful as a test - but it would also skew the results > a little because you have no decompression overhead, and creating the > separate file streams hypothetically might be somewhat more (or less) > expensive. I joked about resurrecting jzipfile (which I killed off > because it was something like 20% slower at decompressing entries than > Jar/ZipFile) but it might be worth considering having our own JAR > extractor at some point with a view towards concurrency gains. If we go > this route, we could go even further and create an optimized module > format, which is an idea I think we've looked at a little bit in the > past; there are a few avenues of exploration here which could be > interesting. > > At some point we also need to see how jaotc might improve things. It > probably won't improve class loading time directly, but it might improve > the processes by which class loading is done because all the one-off > bits would be precompiled. Also it's worth exploring whether the jimage > format has contention issues like this. > > On 05/14/2017 06:36 PM, Stuart Douglas wrote: >> When JIRA was being screwy on Friday I used the time to investigate an >> idea I have had for a while about improving our boot time performance. >> According to Yourkit the majority of our time is spent in class loading. >> It seems very unlikely that we will be able to reduce the number of >> classes we load on boot (or at the very least it would be a massive >> amount of work) so I investigated a different approach. >> >> I modified ModuleClassLoader to spit out the name and module of every >> class that is loaded at boot time, and stored this in a properties file. >> I then created a simple Service that starts immediately that uses two >> threads to eagerly load every class on this list (I used two threads >> because that seemed to work well on my laptop, I think >> Runtime.availableProcessors()/4 is probably the best amount, but that >> assumption would need to be tested on different hardware). >> >> The idea behind this is that we know the classes will be used at some >> point, and we generally do not fully utilise all CPU's during boot, so >> we can use the unused CPU to pre load these classes so they are ready >> when they are actually required. >> >> Using this approach I saw the boot time for standalone.xml drop from >> ~2.9s to ~2.3s on my laptop. The (super hacky) code I used to perform >> this test is at >> https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack >> >> I think these initial results are encouraging, and it is a big enough >> gain that I think it is worth investigating further. >> >> Firstly it would be great if I could get others to try it out and see if >> they see similar gains to boot time, it may be that the gain is very >> system dependent. >> >> Secondly if we do decide to do this there are two approach that we can >> use that I can see: >> >> 1) A hard coded list of class names that we generate before a release >> (basically what the hack already does), this is simplest, but does add a >> little bit of additional work to the release process (although if it is >> missed it would be no big deal, as ClassNotFoundException's would be >> suppressed, and if a few classes are missing the performance impact is >> negligible as long as the majority of the list is correct). 
>> >> 2) Generate the list dynamically on first boot, and store it in the temp >> directory. This would require the addition of a hook into JBoss Modules >> to generate the list, but is the approach I would prefer (as first boot >> is always a bit slower anyway). >> >> Thoughts? >> >> Stuart >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > > -- - DML From anmiller at redhat.com Tue May 16 11:01:34 2017 From: anmiller at redhat.com (Andrig Miller) Date: Tue, 16 May 2017 09:01:34 -0600 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <1077E3ED-4B84-4CCA-A303-74D8072B3C1C@redhat.com> References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> <1077E3ED-4B84-4CCA-A303-74D8072B3C1C@redhat.com> Message-ID: One thing I would like to mention is that with our OpenShift first strategy, anything we do should also take into account memory footprint changes. We are still doing analysis on the memory footprint of EAP, but will have something to publish fairly soon. One thing we should avoid here is approaches that allocate memory that won't go away when the boot process is done. Andy On Tue, May 16, 2017 at 7:47 AM, Jason Greene wrote: > > On May 16, 2017, at 7:41 AM, Heiko Braun wrote: > > > On 16. May 2017, at 13:32, J Pai wrote: > > What I have experienced is that for end users, they are mostly interested > in seeing their (usually large) deployments show noticeable improvements in > deployment time, not necessarily from a cold boot of the server, but when > the server is already up and they either want to deploy something new or > re-deploy their application. > > > +1 the deployments increase the time until ?ready to perform work?. This > is the point we should use as a reference. Anything before (i.e. blank WF > without deployments) is just marketing IMO. > > > I agree that deployment time is important, but I just want to point out > that not all usages of WildFly involve deployments. Examples include proxy > servers, static content servers, message brokers, javascript code, > transaction managers, and service based applications. > > -- > Jason T. Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Andrig (Andy) T. Miller Global Platform Director, Middleware Red Hat, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/31467779/attachment.html From hbraun at redhat.com Tue May 16 11:02:32 2017 From: hbraun at redhat.com (Heiko Braun) Date: Tue, 16 May 2017 17:02:32 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <8d54aaa7-f9b2-3000-ee49-f77dc6c98add@redhat.com> References: <1DC1BE38-2158-4096-9A9B-AEDB4241CE95@gmail.com> <8d54aaa7-f9b2-3000-ee49-f77dc6c98add@redhat.com> Message-ID: <8490F639-76AB-4CD0-99DF-0BCEFCCEE03C@redhat.com> > On 16. May 2017, at 16:00, David M. Lloyd wrote: > > Startup time to "ready" does include the server init; so such an effort > isn't a total waste of time in this case. But I agree with your main point. > > But I think if we can squeeze a bit more speed out of initialization, > there's no harm in trying for it. 
The performance data that comes from > this analysis has already been used to target areas that will improve > performance for every part of startup, including deployment (maybe > substantially). I was exaggerating when I used the term ?marketing?. You are right and Jason has some valid points too. Heiko -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/d7eee652/attachment.html From bmcwhirt at redhat.com Tue May 16 19:54:53 2017 From: bmcwhirt at redhat.com (Bob McWhirter) Date: Tue, 16 May 2017 23:54:53 +0000 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> References: <977439009.11493674.1494851223794.JavaMail.zimbra@redhat.com> Message-ID: >From a swarm perspective I'd like something that benefits first boot because we have no place to store stuff for second boot. It's all first boots! Bob On Mon, May 15, 2017 at 8:29 AM Rostislav Svoboda wrote: > Hi. > > I can confirm I see improvements in boot time with your changes. > My HW is Lenovo T440s with Fedora 25, Intel(R) Core(TM) i7-4600U CPU (Base > Frequency 2.10 GHz, Max Turbo 3.30 GHz) > > I executed 50 iterations of start - stop sequence [1], before execution 5x > start - stop for "warmup" > > With your changes > Min: 3116 Max: 3761 Average: 3247.640000 > > Without: > Min: 3442 Max: 4081 Average: 3580.840000 > > > > 1) A hard coded list of class names that we generate before a release > > This will improve first boot impression, little bit harder for maintaining > the list for the final build. > > Property files could be located inside properties directory of dedicated > module (). Properties directory could > contain property files for delivered profiles. > > Layered products or customer modifications could deliver own property file. > e.g. predefined property file for standalone-openshift.xml in EAP image > in OpenShift environment, I think they boot the server just once and throw > away the whole docker image when something changes. > > > > 2) Generate the list dynamically on first boot, and store it in the temp > > This looks like the most elegant thing to do. Question is how it will slow > down the initial boot. People care about first boot impression, some blog > writers do the mistake too. > This would also block boot time improvements for use-cases when you start > the server just once - e.g. Docker, OpenShift. > > Also the logic should take into account which profile is loaded - e.g > standalone.xml vs. 
standalone-full-ha.xml
>
> Rostislav
>
> [1]
> rm wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log
> rm wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log
>
> for i in {1..50}; do
> echo $i
> wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/standalone.sh 1>/dev/null 2>&1 &
> sleep 8
> wildfly-11.0.0.Beta1-SNAPSHOT-preload/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1
> done
> grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT-preload/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 }
> { if ($1>max) max=$1; if ($1<min) min=$1; sum+=$1 }
> END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}'
>
>
> for i in {1..50}; do
> echo $i
> wildfly-11.0.0.Beta1-SNAPSHOT/bin/standalone.sh 1>/dev/null 2>&1 &
> sleep 8
> wildfly-11.0.0.Beta1-SNAPSHOT/bin/jboss-cli.sh -c :shutdown 1>/dev/null 2>&1
> done
> grep WFLYSRV0025 wildfly-11.0.0.Beta1-SNAPSHOT/standalone/log/server.log | sed "s/.*\(....\)ms.*/\1/g" | awk 'NR == 1 { max=$1; min=$1; sum=0 }
> { if ($1>max) max=$1; if ($1<min) min=$1; sum+=$1 }
> END {printf "Min: %d\tMax: %d\tAverage: %f\n", min, max, sum/NR}'
>
>
> ----- Original Message -----
> > When JIRA was being screwy on Friday I used the time to investigate an idea I
> > have had for a while about improving our boot time performance. According to
> > Yourkit the majority of our time is spent in class loading. It seems very
> > unlikely that we will be able to reduce the number of classes we load on
> > boot (or at the very least it would be a massive amount of work) so I
> > investigated a different approach.
> >
> > I modified ModuleClassLoader to spit out the name and module of every class
> > that is loaded at boot time, and stored this in a properties file. I then
> > created a simple Service that starts immediately that uses two threads to
> > eagerly load every class on this list (I used two threads because that
> > seemed to work well on my laptop, I think Runtime.availableProcessors()/4 is
> > probably the best amount, but that assumption would need to be tested on
> > different hardware).
> >
> > The idea behind this is that we know the classes will be used at some point,
> > and we generally do not fully utilise all CPU's during boot, so we can use
> > the unused CPU to pre load these classes so they are ready when they are
> > actually required.
> >
> > Using this approach I saw the boot time for standalone.xml drop from ~2.9s to
> > ~2.3s on my laptop. The (super hacky) code I used to perform this test is at
> > https://github.com/wildfly/wildfly-core/compare/master...stuartwdouglas:boot-performance-hack
> >
> > I think these initial results are encouraging, and it is a big enough gain
> > that I think it is worth investigating further.
> >
> > Firstly it would be great if I could get others to try it out and see if they
> > see similar gains to boot time, it may be that the gain is very system
> > dependent.
> >
> > Secondly if we do decide to do this there are two approaches that we can use
> > that I can see:
> >
> > 1) A hard coded list of class names that we generate before a release
> > (basically what the hack already does), this is simplest, but does add a
> > little bit of additional work to the release process (although if it is
> > missed it would be no big deal, as ClassNotFoundException's would be
> > suppressed, and if a few classes are missing the performance impact is
> > negligible as long as the majority of the list is correct).
> > > > 2) Generate the list dynamically on first boot, and store it in the temp > > directory. This would require the addition of a hook into JBoss Modules > to > > generate the list, but is the approach I would prefer (as first boot is > > always a bit slower anyway). > > > > Thoughts? > > > > Stuart > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170516/194ab79f/attachment-0001.html From brian.stansberry at redhat.com Wed May 17 14:42:45 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 17 May 2017 13:42:45 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> > On May 15, 2017, at 10:04 AM, Toma? Cerar wrote: > > > On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry wrote: > So why does adding two more make such a big difference? > > Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. > > Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. > this leads to lots of contention as result. "preloading" some of this classes reduces contention. > > Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. > > Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. > This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. > Stuart/Tomaz ? Please ignore this for now if your thinking has moved on to other approaches, e.g. better concurrency in classloading. :) Otherwise, are there any numbers on this last point Tomaz made? I ask because people are asking for a static list since a dynamic list is of no benefit to cloud use cases. A static list is painful to administer though, and if not administered well can result in loading unneeded classes and wasting memory. But, a static list limited to modules that are part of the WildFly Core kernel is not particularly hard to administer. So if we can get the bulk of the gains with the minimum of the pain, we might consider that. -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From jason.greene at redhat.com Wed May 17 16:29:28 2017 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 17 May 2017 15:29:28 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> References: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> Message-ID: > On May 17, 2017, at 1:42 PM, Brian Stansberry wrote: > > >> On May 15, 2017, at 10:04 AM, Toma? Cerar wrote: >> >> >> On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry wrote: >> So why does adding two more make such a big difference? >> >> Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. 
>> >> Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. >> this leads to lots of contention as result. "preloading" some of this classes reduces contention. >> >> Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. >> >> Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. >> This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. >> > > Stuart/Tomaz ? > > Please ignore this for now if your thinking has moved on to other approaches, e.g. better concurrency in classloading. :) > > Otherwise, are there any numbers on this last point Tomaz made? > > I ask because people are asking for a static list since a dynamic list is of no benefit to cloud use cases. > > A static list is painful to administer though, and if not administered well can result in loading unneeded classes and wasting memory. > > But, a static list limited to modules that are part of the WildFly Core kernel is not particularly hard to administer. So if we can get the bulk of the gains with the minimum of the pain, we might consider that. > We can also just have a dynamic offline list generation, which is ran as a build task. -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From brian.stansberry at redhat.com Wed May 17 16:45:21 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 17 May 2017 15:45:21 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> Message-ID: <275DEA75-58E8-4DC3-B372-052163D4CED6@redhat.com> > On May 17, 2017, at 3:29 PM, Jason Greene wrote: > > >> On May 17, 2017, at 1:42 PM, Brian Stansberry wrote: >> >> >>> On May 15, 2017, at 10:04 AM, Toma? Cerar wrote: >>> >>> >>> On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry wrote: >>> So why does adding two more make such a big difference? >>> >>> Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. >>> >>> Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. >>> this leads to lots of contention as result. "preloading" some of this classes reduces contention. >>> >>> Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. >>> >>> Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. >>> This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. >>> >> >> Stuart/Tomaz ? >> >> Please ignore this for now if your thinking has moved on to other approaches, e.g. better concurrency in classloading. :) >> >> Otherwise, are there any numbers on this last point Tomaz made? >> >> I ask because people are asking for a static list since a dynamic list is of no benefit to cloud use cases. >> >> A static list is painful to administer though, and if not administered well can result in loading unneeded classes and wasting memory. 
>> >> But, a static list limited to modules that are part of the WildFly Core kernel is not particularly hard to administer. So if we can get the bulk of the gains with the minimum of the pain, we might consider that. >> > > We can also just have a dynamic offline list generation, which is ran as a build task. Yes, that?s my assumption. When I say ?static? I mean static on a given installation. If it is limited to the kernel (including relevant JDK bits), then there are no issues with ensuring different feature pack maintainers are doing this, no need to combine lists from different parts of the build, no worries about ensuring only those bits relevant to what the user is actually running are loaded, etc. Those things are the ?painful to administer part?. They might very well be worth it but data should demonstrate that. -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From kabir.khan at jboss.com Thu May 18 04:28:24 2017 From: kabir.khan at jboss.com (Kabir Khan) Date: Thu, 18 May 2017 10:28:24 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <275DEA75-58E8-4DC3-B372-052163D4CED6@redhat.com> References: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> <275DEA75-58E8-4DC3-B372-052163D4CED6@redhat.com> Message-ID: > On 17 May 2017, at 22:45, Brian Stansberry wrote: > > >> On May 17, 2017, at 3:29 PM, Jason Greene wrote: >> >> >>> On May 17, 2017, at 1:42 PM, Brian Stansberry wrote: >>> >>> >>>> On May 15, 2017, at 10:04 AM, Toma? Cerar wrote: >>>> >>>> >>>> On Mon, May 15, 2017 at 4:13 PM, Brian Stansberry wrote: >>>> So why does adding two more make such a big difference? >>>> >>>> Main reason is that this two threads load most of later required classes which can later be quickly loaded from multiple parallel threads. >>>> >>>> Currently concurrency causes that 8 -16 threads (on 4-8 logical core systems) try to load same classes at same time. >>>> this leads to lots of contention as result. "preloading" some of this classes reduces contention. >>>> >>>> Looking at the list in the current "hack impl" there are lots of classes that don't need to be there, stuff like subsystem parsers which are only loaded once in any case. >>>> >>>> Main pressure is on classes from jboss-modules, controller, server & xml parsers modules, all others are not as problematic. >>>> This is also reason why lots of contention is happening on JDK classes as well as those are shared between all parts of server code. >>>> >>> >>> Stuart/Tomaz ? >>> >>> Please ignore this for now if your thinking has moved on to other approaches, e.g. better concurrency in classloading. :) >>> >>> Otherwise, are there any numbers on this last point Tomaz made? >>> >>> I ask because people are asking for a static list since a dynamic list is of no benefit to cloud use cases. >>> >>> A static list is painful to administer though, and if not administered well can result in loading unneeded classes and wasting memory. >>> >>> But, a static list limited to modules that are part of the WildFly Core kernel is not particularly hard to administer. So if we can get the bulk of the gains with the minimum of the pain, we might consider that. >>> >> >> We can also just have a dynamic offline list generation, which is ran as a build task. > > Yes, that?s my assumption. When I say ?static? I mean static on a given installation. 
> > If it is limited to the kernel (including relevant JDK bits), then there are no issues with ensuring different feature pack maintainers are doing this, no need to combine lists from different parts of the build, no worries about ensuring only those bits relevant to what the user is actually running are loaded, etc. Those things are the ?painful to administer part?. They might very well be worth it but data should demonstrate that. If it turns out to be worth it, the feature pack generation stuff could be enhanced to generate the list for each config. Perhaps only if doing -Prelease so we don't slow down everybody's builds while developing > > -- > Brian Stansberry > Manager, Senior Principal Software Engineer > JBoss by Red Hat > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From tomaz.cerar at gmail.com Thu May 18 05:33:36 2017 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Thu, 18 May 2017 11:33:36 +0200 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> <275DEA75-58E8-4DC3-B372-052163D4CED6@redhat.com> Message-ID: On Thu, May 18, 2017 at 10:28 AM, Kabir Khan wrote: > > If it is limited to the kernel (including relevant JDK bits), then there > are no issues with ensuring different feature pack maintainers are doing > this, no need to combine lists from different parts of the build, no > worries about ensuring only those bits relevant to what the user is > actually running are loaded, etc. Those things are the ?painful to > administer part?. They might very well be worth it but data should > demonstrate that. > If it turns out to be worth it, the feature pack generation stuff could be > enhanced to generate the list for each config. Perhaps only if doing > -Prelease so we don't slow down everybody's builds while developing I think that only core stuff should need this, as there is where most of contention for class loading is. As all extension need classes from core it is most contested. So I think if we go with this, jdk + core would be the yield best work / benefit results -- tomaz -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170518/7c10a7e8/attachment.html From sanne at hibernate.org Thu May 18 06:19:31 2017 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 18 May 2017 11:19:31 +0100 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <2D2C7667-4B46-4F96-B7C0-545D69A6FC00@redhat.com> <275DEA75-58E8-4DC3-B372-052163D4CED6@redhat.com> Message-ID: On 18 May 2017 at 10:33, Toma? Cerar wrote: > > On Thu, May 18, 2017 at 10:28 AM, Kabir Khan wrote: >> >> > If it is limited to the kernel (including relevant JDK bits), then there >> > are no issues with ensuring different feature pack maintainers are doing >> > this, no need to combine lists from different parts of the build, no worries >> > about ensuring only those bits relevant to what the user is actually running >> > are loaded, etc. Those things are the ?painful to administer part?. They >> > might very well be worth it but data should demonstrate that. >> If it turns out to be worth it, the feature pack generation stuff could be >> enhanced to generate the list for each config. 
Perhaps only if doing >> -Prelease so we don't slow down everybody's builds while developing > > > I think that only core stuff should need this, as there is where most of > contention for class loading is. > As all extension need classes from core it is most contested. > > So I think if we go with this, jdk + core would be the yield best work / > benefit results Be it first boot or deployment, I agree that what matters most is the time to have the deployed application running & responding. So it would be nice if such a technique could be automated and made generic, to see if other components can benefit from such an approach at minimal maintenance overhead. Incidentally I'm mostly bothered by Hibernate being slow to boot, but I don't think this particular optimisation would help. Our bootstrap isn't concurrent; there's just a lot to load - sequentially - and possibly scanning a combination of classpaths for discovery of entities & services; hopefully we can improve this by narrowing down the scope to be scanned but that's clearly an orthogonal issue. Thanks, Sanne From cdewolf at redhat.com Thu May 18 06:44:19 2017 From: cdewolf at redhat.com (Carlo de Wolf) Date: Thu, 18 May 2017 12:44:19 +0200 Subject: [wildfly-dev] Provisioning a server without some core modules Message-ID: <693e281b-d530-8718-5e11-7e5d8e3bac83@redhat.com> I'm trying to provision a server with the wildfly-server-provisioning-maven-plugin. The server needs some modules defined in the core-feature-pack excluded, so I put up excludes in the dependency management of the pom. However all modules seem to be included regardless of the projects dependency management. Is server provisioning ignoring dependency management? If so, I think it is a bug. If not, then how should it be done? Carlo From david.lloyd at redhat.com Thu May 18 09:50:24 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Thu, 18 May 2017 08:50:24 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: Message-ID: <677d5ba1-7948-5d52-5601-fce9c39a65e8@redhat.com> On 05/15/2017 05:21 PM, Stuart Douglas wrote: > On Tue, May 16, 2017 at 1:34 AM, David M. Lloyd wrote: >> Exploding the files out of the JarFile could expose this contention and >> therefore might be useful as a test - but it would also skew the results >> a little because you have no decompression overhead, and creating the >> separate file streams hypothetically might be somewhat more (or less) >> expensive. I joked about resurrecting jzipfile (which I killed off >> because it was something like 20% slower at decompressing entries than >> Jar/ZipFile) but it might be worth considering having our own JAR >> extractor at some point with a view towards concurrency gains. If we go >> this route, we could go even further and create an optimized module >> format, which is an idea I think we've looked at a little bit in the >> past; there are a few avenues of exploration here which could be >> interesting. > > This could be worth investigating. Toma? did a prototype of using the JDK JAR filesystem to back the resource loader if it is available; contention did go down but memory footprint went up, and overall the additional indexing and allocation ended up slowing down boot a little, unfortunately (though large numbers of deployments seemed to be faster). Toma? can elaborate on his findings if he wishes. 
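For anyone who wants to poke at that comparison in isolation, here is a self-contained toy (not the prototype mentioned above, just a way to observe the locking behaviour yourself): open a JAR through the JDK's NIO.2 zip/jar filesystem and read entries from several threads at once.

import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Toy demo of concurrent entry reads through the NIO.2 zip filesystem.
 * Usage: java ZipFsReadDemo /path/to/some.jar org/example/A.class org/example/B.class ...
 */
public final class ZipFsReadDemo {

    public static void main(String[] args) throws Exception {
        Path jar = Paths.get(args[0]);
        URI uri = URI.create("jar:" + jar.toUri());

        try (FileSystem zipFs = FileSystems.newFileSystem(uri, Collections.emptyMap())) {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 1; i < args.length; i++) {
                Path entry = zipFs.getPath(args[i]);
                pool.execute(() -> {
                    try {
                        // Each read goes through the zip filesystem rather than JarFile's single lock.
                        byte[] bytes = Files.readAllBytes(entry);
                        System.out.println(entry + ": " + bytes.length + " bytes");
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES); // finish reads before the filesystem closes
        }
    }
}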
I had a look in the JAR FS implementation (and its parent class, the ZIP FS implementation, which does most of the hard work), and there are a few things which add overhead and contention that we don't need, like using read/write locks to manage access and modifications (which we don't need) and (synch-based) indexing structures that might be somewhat larger than necessary. They use NIO channels to access the zip data, which is probably OK, but maybe mapped buffers could be better... or worse? They use a synchronized list per JAR file to pool Inflaters; pooling is a hard thing to do right so maybe there isn't really any better option in this case. But in any event, I think a custom extractor still might be a reasonable thing to experiment with. We could resurrect jzipfile or try a different approach (maybe see how well mapped buffers work?). Since we're read-only, any indexes we use can be immutable and thus unsynchronized, and maybe more compact as a result. We can use an unordered hash table because we generally don't care about file order the way that JarFile historically needs to, thus making indexing faster. We could save object allocation overhead by using a specialized object->int hash table that just records offsets into the index for each entry. If we try mapped buffers, we could share one buffer concurrently by using only methods that accept an offset, and track offsets independently. This would let the OS page cache work for us, especially for heavily used JARs. We would be limited to 2GB JAR files, but I don't think that's likely to be a practical problem for us; if it ever is, we can create a specialized alternative implementation for huge JARs. In Java 9, jimages become an option by way of jlink, which will also be worth experimenting with (as soon as we're booting on Java 9). Brainstorm other ideas here! -- - DML From cdewolf at redhat.com Thu May 18 09:51:02 2017 From: cdewolf at redhat.com (Carlo de Wolf) Date: Thu, 18 May 2017 15:51:02 +0200 Subject: [wildfly-dev] Provisioning a server without some core modules In-Reply-To: <693e281b-d530-8718-5e11-7e5d8e3bac83@redhat.com> References: <693e281b-d530-8718-5e11-7e5d8e3bac83@redhat.com> Message-ID: <65c0c7d9-3341-409f-bf17-9f110b027e8f@redhat.com> To answer my own question: server-provisioning.xml: Note that it matches against the module.xml path relative to the modules directory. E.g. system/layers/base/something/main/module.xml This strikes me as odd, because neither module name comes into play nor is the pattern a true Java pattern, but a weird mix-up. Carlo On 05/18/2017 12:44 PM, Carlo de Wolf wrote: > I'm trying to provision a server with the > wildfly-server-provisioning-maven-plugin. The server needs some modules > defined in the core-feature-pack excluded, so I put up excludes in the > dependency management of the pom. However all modules seem to be > included regardless of the projects dependency management. > > Is server provisioning ignoring dependency management? If so, I think it > is a bug. > If not, then how should it be done? 
> > Carlo > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > From anmiller at redhat.com Thu May 18 10:04:34 2017 From: anmiller at redhat.com (Andrig Miller) Date: Thu, 18 May 2017 08:04:34 -0600 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: <677d5ba1-7948-5d52-5601-fce9c39a65e8@redhat.com> References: <677d5ba1-7948-5d52-5601-fce9c39a65e8@redhat.com> Message-ID: On Thu, May 18, 2017 at 7:50 AM, David M. Lloyd wrote: > On 05/15/2017 05:21 PM, Stuart Douglas wrote: > > On Tue, May 16, 2017 at 1:34 AM, David M. Lloyd > wrote: > >> Exploding the files out of the JarFile could expose this contention and > >> therefore might be useful as a test - but it would also skew the results > >> a little because you have no decompression overhead, and creating the > >> separate file streams hypothetically might be somewhat more (or less) > >> expensive. I joked about resurrecting jzipfile (which I killed off > >> because it was something like 20% slower at decompressing entries than > >> Jar/ZipFile) but it might be worth considering having our own JAR > >> extractor at some point with a view towards concurrency gains. If we go > >> this route, we could go even further and create an optimized module > >> format, which is an idea I think we've looked at a little bit in the > >> past; there are a few avenues of exploration here which could be > >> interesting. > > > > This could be worth investigating. > > Toma? did a prototype of using the JDK JAR filesystem to back the > resource loader if it is available; contention did go down but memory > footprint went up, and overall the additional indexing and allocation > ended up slowing down boot a little, unfortunately (though large numbers > of deployments seemed to be faster). Toma? can elaborate on his > findings if he wishes. > > I had a look in the JAR FS implementation (and its parent class, the ZIP > FS implementation, which does most of the hard work), and there are a > few things which add overhead and contention that we don't need, like > using read/write locks to manage access and modifications (which we > don't need) and (synch-based) indexing structures that might be somewhat > larger than necessary. They use NIO channels to access the zip data, > which is probably OK, but maybe mapped buffers could be better... or > worse? They use a synchronized list per JAR file to pool Inflaters; > pooling is a hard thing to do right so maybe there isn't really any > better option in this case. > > But in any event, I think a custom extractor still might be a reasonable > thing to experiment with. We could resurrect jzipfile or try a > different approach (maybe see how well mapped buffers work?). Since > we're read-only, any indexes we use can be immutable and thus > unsynchronized, and maybe more compact as a result. We can use an > unordered hash table because we generally don't care about file order > the way that JarFile historically needs to, thus making indexing faster. > We could save object allocation overhead by using a specialized > object->int hash table that just records offsets into the index for each > entry. > > If we try mapped buffers, we could share one buffer concurrently by > using only methods that accept an offset, and track offsets > independently. This would let the OS page cache work for us, especially > for heavily used JARs. 
We would be limited to 2GB JAR files, but I > don't think that's likely to be a practical problem for us; if it ever > is, we can create a specialized alternative implementation for huge JARs. > ?I'm not so sure that the OS page cache will do anything here. I actually think it would be better if we could open the JAR files using direct I/O, but of course Java doesn't support that, and that would require native code, so not the greatest option. Andy ? > > In Java 9, jimages become an option by way of jlink, which will also be > worth experimenting with (as soon as we're booting on Java 9). > > Brainstorm other ideas here! > -- > - DML > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Andrig (Andy) T. Miller Global Platform Director, Middleware Red Hat, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170518/6c482928/attachment.html From david.lloyd at redhat.com Thu May 18 15:18:19 2017 From: david.lloyd at redhat.com (David M. Lloyd) Date: Thu, 18 May 2017 14:18:19 -0500 Subject: [wildfly-dev] Speeding up WildFly boot time In-Reply-To: References: <677d5ba1-7948-5d52-5601-fce9c39a65e8@redhat.com> Message-ID: <1054b672-7780-6b33-170b-c6dae5820880@redhat.com> On 05/18/2017 09:04 AM, Andrig Miller wrote: > On Thu, May 18, 2017 at 7:50 AM, David M. Lloyd > wrote: > >> On 05/15/2017 05:21 PM, Stuart Douglas wrote: >>> On Tue, May 16, 2017 at 1:34 AM, David M. Lloyd >> wrote: >>>> Exploding the files out of the JarFile could expose this contention and >>>> therefore might be useful as a test - but it would also skew the results >>>> a little because you have no decompression overhead, and creating the >>>> separate file streams hypothetically might be somewhat more (or less) >>>> expensive. I joked about resurrecting jzipfile (which I killed off >>>> because it was something like 20% slower at decompressing entries than >>>> Jar/ZipFile) but it might be worth considering having our own JAR >>>> extractor at some point with a view towards concurrency gains. If we go >>>> this route, we could go even further and create an optimized module >>>> format, which is an idea I think we've looked at a little bit in the >>>> past; there are a few avenues of exploration here which could be >>>> interesting. >>> >>> This could be worth investigating. >> >> Toma? did a prototype of using the JDK JAR filesystem to back the >> resource loader if it is available; contention did go down but memory >> footprint went up, and overall the additional indexing and allocation >> ended up slowing down boot a little, unfortunately (though large numbers >> of deployments seemed to be faster). Toma? can elaborate on his >> findings if he wishes. >> >> I had a look in the JAR FS implementation (and its parent class, the ZIP >> FS implementation, which does most of the hard work), and there are a >> few things which add overhead and contention that we don't need, like >> using read/write locks to manage access and modifications (which we >> don't need) and (synch-based) indexing structures that might be somewhat >> larger than necessary. They use NIO channels to access the zip data, >> which is probably OK, but maybe mapped buffers could be better... or >> worse? 
They use a synchronized list per JAR file to pool Inflaters; >> pooling is a hard thing to do right so maybe there isn't really any >> better option in this case. >> >> But in any event, I think a custom extractor still might be a reasonable >> thing to experiment with. We could resurrect jzipfile or try a >> different approach (maybe see how well mapped buffers work?). Since >> we're read-only, any indexes we use can be immutable and thus >> unsynchronized, and maybe more compact as a result. We can use an >> unordered hash table because we generally don't care about file order >> the way that JarFile historically needs to, thus making indexing faster. >> We could save object allocation overhead by using a specialized >> object->int hash table that just records offsets into the index for each >> entry. >> >> If we try mapped buffers, we could share one buffer concurrently by >> using only methods that accept an offset, and track offsets >> independently. This would let the OS page cache work for us, especially >> for heavily used JARs. We would be limited to 2GB JAR files, but I >> don't think that's likely to be a practical problem for us; if it ever >> is, we can create a specialized alternative implementation for huge JARs. >> > > ?I'm not so sure that the OS page cache will do anything here. I actually > think it would be better if we could open the JAR files using direct I/O, > but of course Java doesn't support that, and that would require native > code, so not the greatest option. What the page cache would theoretically do for us is keep "hot" areas (i.e. the index) of commonly-used JAR files in RAM, while letting "cold" JARs be paged out, without consuming Java heap or committed memory (thus avoiding GC), while allowing total random access, without any special buffer management. Because we are only reading and not writing, direct I/O won't likely help: either way you block to read from disk, but with memory mapping, you can reread an area many times and the OS will keep it handy for you. On Linux, the page cache works very similarly whether you're mapping in a file or allocating memory from the OS: recently-used pages stay in physical RAM, and old pages get flushed to disk (BUT only if they're dirty) and dropped from physical RAM. So it's effectively similar to allocating several hundred MB, copying all the JAR contents into that memory, and then referencing that, except that in this case you'd have to ensure that there is enough RAM+swap to accommodate it; behaviorally the primary difference is that the mmaped file is "paged out" by default and loaded on demand, whereas the eager allocated memory is "paged in" by default as you populate it and the pages have to age out. Since we are generally not reading entire JAR files though, the lazy behavior should theoretically be a bit better for us. On the other hand, this is a far worse option for 32-bit platforms for the same reason that it's useful on 64-bit: address space. If we map in all the JARs that *we* ship, that could be as much as 25% or more of the available address space gone instantly. So if we did explore this route, we'd need it to be switchable, with sensible defaults based on the available logical address size (and, as I said before, size of the target object). The primary resource cost here (other than address space) is page table entries. We'd be talking about probably hundreds of thousands, once every module has been referenced, on a CPU with 4k pages. 
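(Back-of-envelope, with purely illustrative numbers: if the fully-referenced module set mapped, say, 1 GB of JAR data, that's 1 GB / 4 KB = ~262,000 pages, and at roughly 8 bytes per entry on x86-64 something on the order of 2 MB of page tables.)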
In terms of RAM, that's not too much; each one is only a few bytes plus (I believe) a few more bytes for bookkeeping in the kernel, and the kernel is pretty damned good at managing them at this point. But it's not nothing. Of course all this is just educated (?) speculation unless we test & measure it. I suspect that in the end, it'll be subtle tradeoffs, just like everything else ends up being. > > Andy > ? > >> >> In Java 9, jimages become an option by way of jlink, which will also be >> worth experimenting with (as soon as we're booting on Java 9). >> >> Brainstorm other ideas here! >> -- >> - DML >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > -- - DML From rory.odonnell at oracle.com Fri May 19 06:41:30 2017 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Fri, 19 May 2017 11:41:30 +0100 Subject: [wildfly-dev] JDK 9 EA Build 170 is available on jdk.java.net Message-ID: <351eabc4-b6ea-e466-15ea-4f87d6e88dec@oracle.com> Hi Jason/Tomaz, *JDK 9 Early Access* build 170 is available at the new location : - jdk.java.net/9/ A summary of all the changes in this build are listed here . Changes which were introduced since the last availability email that may be of interest : * b168 - JDK-8175814: Update default HttpClient protocol version and optional request version o related to JEP 110 : HTTP/2 Client. * b169 - JDK-8178380 : Module system implementation refresh (5/2017) o changes in command line options * b170 - JDK-8177153 : LambdaMetafactory has default constructorIncompatible change, o release note: JDK-8180035 *New Proposal - Mark Reinhold has asked for comments on the jigsaw-dev mailing list *[1] * Proposal: Allow illegal reflective access by default in JDK 9 In short, the existing "big kill switch" of the `--permit-illegal-access` option [1] will become the default behavior of the JDK 9 run-time system, though without as many warnings. The current behavior of JDK 9, in which illegal reflective-access operations from code on the class path are not permitted, will become the default in a future release. Nothing will change at compile time. Rgds,Rory [1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012673.html -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170519/46e53eb9/attachment.html From ppalaga at redhat.com Fri May 26 05:52:40 2017 From: ppalaga at redhat.com (Peter Palaga) Date: Fri, 26 May 2017 11:52:40 +0200 Subject: [wildfly-dev] Srcdeps in WildFly and WildFly Core In-Reply-To: <6fbf9ccd-9354-86f4-7926-182d692b7d0d@redhat.com> References: <1EC27C97-CC18-4CB6-A434-2E053A8999CE@redhat.com> <60823934-405c-c800-074e-86f6da854e32@redhat.com> <6fbf9ccd-9354-86f4-7926-182d692b7d0d@redhat.com> Message-ID: Hi *, I was able to finish the missing parts (partly thanks to Carlo and Fedor recently asking me about running CI against a WildFly branch with a custom, non-released WildFly Core). * The failWithout configuration option promised in (3) below was implemented in srcdeps 3.1.0. * I have updated the PR for WildFly Core https://github.com/wildfly/wildfly-core/pull/2122 accordingly. 
* Have submitted a new PR for WildFly https://github.com/wildfly/wildfly/pull/10116 * In both named PRs, the source dependencies are opt-in: a build that contains source dependencies will only succeed if mvn is invoked with -Dsrcdeps.enabled . * Once the PRs mentioned above get accepted, the CI jobs that build PRs should be modified to contain the -Dsrcdeps.enabled mvn parameter. Is there anything more I need to do to get the above PRs merged? * As for the task (4) "create a Maven Mojo to figure out for a given Maven project if it uses source dependencies", I have not done that yet and I'd like to discuss how to implement it. One option that comes to my mind is that the mojo would simply make the Maven invocation fail with a non zero return code. E.g. mvn srcdeps:assert-none would print something like [Error] srcdeps:assert-none detected a source dependency org.groupId:artifactId:1.2.3-SRC-revision in module myModule and exit with a non-zero return code in case there are source dependencies; otherwise, the build would succeed with no output. I hope the mojo would be useful for the pull processor like this? Thanks, Peter On 01/30/2017 04:37 PM, Peter Palaga wrote: > Thanks everybody for the feedback! > > Let me try to sum up the results of the discussion so far: > > (1) Nobody except for me wants commits with source dependencies in > stable branches (such as master or 2.x in wildfly-core). I fully accept > that the ability to release anytime would go away with source dependency > merges and I thus give up in this point as long as the "release anytime" > requirement will be there. > > (2) There seems to exist some (enough?) agreement that source > dependencies could be allowed in pull requests. Such pull requests would > be there to allow fast feedback from the CI and reviewers, even before > the release of the dependency. But given (1), such PRs would have to be > upgraded to a proper dependency release before merging. > > (3) If nobody vetoes (2), I am going to implement a failWithout > configuration option in srcdeps.yaml that will allow for making srcdeps > resolution opt-in (e.g. via -Dsrcdeps.enabled) for those CI jobs that > build from PRs but will keep all other srcdeps builds failing. > > Having (3) will not prevent merges of PRs with source dependencies > directly, but will at least make the after-merge CI job fail so that the > maintainer of the branch is informed very quickly that something bad > happened. > > (4) I'd also add some sort of code (probably a Maven Mojo) to srcdeps > that would allow to figure out for a given Maven project if it uses > source dependencies. This could be used to label PRs so that the > gatekeeper sees clearly that the given PR has source dependencies. > > Thanks, > > Peter > > > On 2017-01-26 19:11, Brian Stansberry wrote: >> >>> On Jan 26, 2017, at 10:28 AM, Peter Palaga wrote: >>> >>> On 2017-01-26 15:44, Brian Stansberry wrote: >>>> There?s been a lot of discussion overnight, but I?ll reply to this >>>> one directly since my answers better align with your questions here. :) >>>> >>>>> On Jan 26, 2017, at 2:54 AM, Peter Palaga wrote: >>>>> >>>>> Hi Brian, thanks for your comments, more inline... >>>>> >>>>> On 2017-01-26 02:02, Brian Stansberry wrote: >>>>>> My only concerns with this would relate to comitting this kind of src >>>>>> dependency to the poms in the main branches in the widlfly/wildfly >>>>>> and wildfly/wildfly-core repos. 
We?ve managed to survive up to now >>>>>> with little or no need for that kind of thing, so until we get used >>>>>> to using this in other ways IMHO we should follow the KISS principle >>>>>> and forbid that. >>>>> >>>>> Maybe I overestimate the amount of changes that span over multiple >>>>> git repos. Maybe you in the Core team do not do this often. But for >>>>> us in the Sustaining Engineering Team, this is quite a typical >>>>> situation. A substantial part of the reports from customers come >>>>> with a description how to reproduce on the whole server, but they >>>>> need to be fixed in a component. Having srcdeps would make the CP >>>>> process simpler and faster, allowing us to uncover the conflicts >>>>> and regressions earlier. >>>> >>>> I don?t see how merging to the main branches is required to get this >>>> benefit. Git topic branches are fully sharable and CI jobs against >>>> them are easily done. All CI tests of pull requests are tests of >>>> topic branches. >>> >>> Yes, for me as the submitter of the PR, it is nice to get the >>> feedback from the CI and a review early, even before the component is >>> released, but it is quite bothersome to have to revisit the PR again >>> once the component gets released and rebase (in case there there are >>> conflicts) and either upgrade to the released component version or >>> remove the upgrade change (if the upgrade was merged separately). >>> >>> As long as my PR is not merged, my changes are not binding for the >>> rest of the team. I want my PR to get merged as fast as possible and >>> make others care that their changes are compatible with mine. I want >>> to happily forget about the PR as soon as possible and pick a new >>> task :) >>> >> >> You?ve convinced me! Convinced me that we shouldn?t allow this. :D >> >> We don?t have a role analogous to the ?release coordinator? used with >> EAP CPs, i.e. someone whose primary responsibility is coordinating to >> make sure that the untidy pieces get tidied. Most of our non-CR/Final >> releases are done as side tasks by people who are stealing time from >> other tasks. They need to be simple and mechanical. We also have a far >> greater volume of changes to manage than EAP CPs do. A process based >> on merging half the necessary change and then letting the issue owner >> walk away and assume someone else is going to come tidy up is a recipe >> for disaster. >> >>>> But, in any case perhaps you?ve seen clear need for merging to the >>>> main branches with the EAP CP branches. I haven?t seen it in WildFly >>>> / WildFly Core. I deliberately used specific repo names in my last >>>> comment to try and scope it. ;) >>> >>> My reasons for merging there in EAP CP branches are the same as here >>> in the community branches: it is better for PR submitters to merge as >>> early as possible to avoid conflicts, subsequent PR edits and to keep >>> the list of open tasks short. >>> >>>> Note I?m not saying we should disallow PRs with src deps in the pom. >>>> We should just disallow merging until those are replaced. >>> >>> Yes, I understand that and I appreciate that. That would be a >>> progress too. >>> >>>>>> A trick is avoiding doing that by mistake; i.e. a PR is sent up with >>>>>> a SRC dependency to get CI or review and accidentally gets merged. >>>>> >>>>> Oh, I am just realizing I have not said anything about merging. I >>>>> actually do want to propose that commits with source dependencies >>>>> get merged to e.g. wildfly-core master as early as possible. 
Those >>>>> are the key points of Continuous Integration: get feedback quickly, >>>>> and merge as soon as possible. This is exactly what Hawkular is >>>>> doing since more than a year. >>>> >>>> We regularly produce releases (ideally weekly for WildFly Core), >>>> often at short notice under pressure. Allowing merging of changes >>>> that are not acceptable for release increases the risk and effort >>>> required to do that, since now we have to scan for src deps and >>>> figure out how to get them out of the build. Perhaps needing >>>> assistance from whoever added the src dep and the lead of relevant >>>> component, both of whom are on the other side of the world asleep. >>>> (This is a real issue since we often do releases on Friday afternoon >>>> US time or Monday morning European time.) We already have too much >>>> risk and effort doing releases so adding more will need a really >>>> strong justification. >>> >>> This sounds as a valid concern. I must admit I know little about how >>> you plan and perform the releases of wildfly-core, wildfly and of the >>> components in the community. Knowing how complex the graph of WF >>> components is, I am far from underestimating any manual release >>> efforts or efforts to setup a CI jobs to do that automagically. I'll >>> have to gather more info about how you work. >>> >>>>>> But I suppose that?s not the end of the world, so long as the release >>>>>> process will eventually detect it and fail. >>>>> >>>>> Yes, source dependencies on a stable branch do not harm. They just >>>>> need to be avoided in releases (for which srcdeps offers technical >>>>> means). >>>> >>>> They do do harm as they mean the branch is no longer releasable. >>>> It?s not end-of-the-world harm but it?s harm. >>> >>> Well, I naivelly thought, that the components are obligated to >>> provide a release, say, one day before a planned wildfly-core release >>> and send a PRs that would then sweep out all source dependencies. And >>> TBH, I did not think "releasable at any time" is important in >>> wildfly-core. "Releasable once a week" still sounds good enough to me :) >> >> Unfortunately, it?s not. >> >>> >>>>>> Can making srcdeps fail (or just disabling it) be turned on via a >>>>>> maven profile? With that we could set up such a profile and turn it >>>>>> on in CI jobs that are testing branches where it?s forbidden (e.g. >>>>>> the nightly builds of master.) >>>>> >>>>> Yes, the feature is called "failWith profiles" and can be >>>>> configured in .mvn/srcdeps.yaml, like here in this srcdeps >>>>> quickstart: >>>>> https://github.com/srcdeps/srcdeps-maven/blob/master/srcdeps-maven-quickstarts/srcdeps-mvn-git-profile-quickstart/.mvn/srcdeps.yaml#L33 >>>>> >>>>> There is also "failWith properties" and "failWith goals". It is >>>>> documented here: >>>>> https://github.com/srcdeps/srcdeps-core/blob/master/doc/srcdeps.yaml#L130 >>>>> >>>>> By default there is failWith: {goals: release:prepare, >>>>> release:perform}. Projects that do not use the release plugin can >>>>> set e.g. failWith: {goals: deploy:deploy} or whatever else >>>>> distinguishes their releases. >>>>> >>>> >>>> Thanks. >>>> >>>>>> Oh, one other concern ? how robust is this in the face of poor >>>>>> maintenance? I see a lot of boilerplate in that .mvn/srcdeps.yaml. >>>>> >>>>> Which parts are boilerpate? >>>> >>>> All of it. :) >>>> >>>> I?m not using that word as an attack. 
I?m just saying it?s extra >>>> text that needs to be maintained, and since it?s separate from the >>>> usual place similar text occurs (the poms) it is more likely to >>>> diverge. >>> >>> OK, now I know what you mean :) You are right that poms can diverge >>> from srcdeps.yaml. >>> >>>>>> If >>>>>> that gets out of date or something is the only effect that using a >>>>>> src dependency for the affected item doesn't work? >>>>> >>>>> Yes, I think so. As long as the .mvn/srcdeps.yaml file is >>>>> syntactically correct, any misconfiguration there should not have >>>>> any other effect than eventually breaking an embedded build. >>>>> >>>>> Generally, the things configured in .mvn/srcdeps.yaml tend to be >>>>> quite stable - it is basically just mapping from GAVs to their >>>>> respective git URLs. Git URLs do not change often. It is true that >>>>> dependency artifacts come and go, but as long as their groupIds are >>>>> selected reasonably (one groupId occurs in not more than one git >>>>> repo) the mapping itself can be quite stable over time too. >>>> >>>> Yeah, that?s true. Where this file would be more likely to go >>>> unmaintained is adding new entries or cleaning out old ones. But the >>>> latter is just noise and if the only harm of the former is a srcdep >>>> can?t be used for that lib, then that will naturally get handled by >>>> whoever wants to use the srcdep. >>> >>> Yes, exactly. >>> >>> Thanks, >>> >>> Peter >>> >>>>> >>>>> Thanks, >>>>> >>>>> Peter >>>>> >>>>>> >>>>>>> On Jan 25, 2017, at 3:45 PM, Peter Palaga >>>>>>> wrote: >>>>>>> >>>>>>> Hi *, >>>>>>> >>>>>>> this is not new to those of you who attended my talk on the F2F >>>>>>> 2016 in Brno. Let me explain the idea here again for all others who >>>>>>> did not have a chance to be there. >>>>>>> >>>>>>> Srcdeps [1] is a tool to build Maven dependencies from their >>>>>>> sources. With srcdeps, wildfly-core can depend on a specific commit >>>>>>> of, e.g., undertow: >>>>>>> >>>>>>> 1.4.8.Final-SRC-revision-aabbccd >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>> where aabbccd is the git commit id to build when any undertow artifact >>>>>>> is requested during the build of wildfly-core. >>>>>>> >>>>>>> [1] describes in detail, how it works. >>>>>>> >>>>>>> The main advantage of srcdeps is that changes in components can be >>>>>>> integrated and tested in wildfly-core immediately after they are >>>>>>> committed to a public component branch. There is no need to wait >>>>>>> for the component release. >>>>>>> >>>>>>> Here in the WildFly family of projects, it is often the case that >>>>>>> something needs to be fixed in a component, but the verification >>>>>>> (using bug reproducer, or integration test) is possible only at the >>>>>>> level of wildfly or wildfly-core. Engineers typically work with >>>>>>> snapshots locally, but when their changes need to get shared (CI, >>>>>>> reviews) in a reproducible manner, snapshots cannot be used >>>>>>> anymore. In such situations a source dependency come in handy: it >>>>>>> is very easy to share and it is as reproducible as a Maven build >>>>>>> from a specific commit can be. All CIs and reviewers can work with >>>>>>> it, because all source dependency compilation is done under the >>>>>>> hood by Maven. >>>>>>> >>>>>>> Developers working on changes that span over multiple >>>>>>> interdependent git repos can thus get feedback (i-tests, reviews) >>>>>>> quickly without waiting for releases of components. 
>>>>>>> >>>>>>> Srcdeps emerged in the Hawkular family of projects to solve exactly >>>>>>> this kind of situation and is in use there since around October >>>>>>> 2015. >>>>>>> >>>>>>> When I said there is no need to wait for releases of components, I >>>>>>> did not mean that we can get rid of component releases altogether. >>>>>>> Clearly, we cannot, because i.a. for any tooling uninformed about >>>>>>> how srcdeps work, those source dependencies would simply be >>>>>>> non-resolvable from public Maven repositories. So, before releasing >>>>>>> the dependent component (such as wildfly-core) all its dependencies >>>>>>> need to be released. To enforce this, srcdeps is by default >>>>>>> configured to make the release fail, as long as there are source >>>>>>> dependencies. >>>>>>> >>>>>>> I have sent a PR introducing srcdeps to wildfly-core: >>>>>>> https://github.com/wildfly/wildfly-core/pull/2122 To get a feeling >>>>>>> how it works, checkout the branch, switch to e.g. >>>>>>> 1.4.8.Final-SRC-revision-1bff8c32f0eee986e83a7589ae95ebbc1d67d6bd >>>>>>> >>>>>>> (that happens to be the commit id of the 1.4.8.Final tag) and >>>>>>> build wildfly-core as usual with "mvn clean install". You'll see in >>>>>>> the build log that undertow is being cloned to >>>>>>> ~/.m2/srcdeps/io/undertow and that it is built there. After the >>>>>>> build, check that the >>>>>>> 1.4.8.Final-SRC-revision-1bff8c32f0eee986e83a7589ae95ebbc1d67d6bd >>>>>>> version of Undertow got installed to your local Maven repo (usually >>>>>>> ~/m2/repository/io/undertow/undertow-core ) >>>>>>> >>>>>>> Are there any questions or comments? >>>>>>> >>>>>>> [1] https://github.com/srcdeps/srcdeps-maven#srcdeps-maven >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Peter >>>>>>> >>>>>>> P.S.: I will be talking about srcdeps on Saturday 2017-01-28 at >>>>>>> 14:30 at DevConf Brno. >>>>>>> _______________________________________________ wildfly-dev mailing >>>>>>> list wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>> >>>>> >>>> >>> >> > From brian.stansberry at redhat.com Fri May 26 11:01:58 2017 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Fri, 26 May 2017 10:01:58 -0500 Subject: [wildfly-dev] Testing 1..2..3 Message-ID: <570D515D-324B-48AD-A40A-AF6907180A19@redhat.com> Just a test of the list delivery; we?ve gotten a report of issues. -- Brian Stansberry Manager, Senior Principal Software Engineer JBoss by Red Hat From jason.greene at redhat.com Fri May 26 11:18:09 2017 From: jason.greene at redhat.com (Jason Greene) Date: Fri, 26 May 2017 10:18:09 -0500 Subject: [wildfly-dev] Test - Please Ignore Message-ID: Test From mwessend at redhat.com Tue May 30 04:35:44 2017 From: mwessend at redhat.com (Matthias Wessendorf) Date: Tue, 30 May 2017 10:35:44 +0200 Subject: [wildfly-dev] Test - Please Ignore In-Reply-To: References: Message-ID: Did you test it, due to (very) slow delivery of mails on jboss lists ? On Fri, May 26, 2017 at 5:18 PM, Jason Greene wrote: > Test > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Project lead AeroGear.org -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170530/caa202c3/attachment.html From david.lloyd at redhat.com Tue May 30 09:10:10 2017 From: david.lloyd at redhat.com (David M. 
Lloyd) Date: Tue, 30 May 2017 08:10:10 -0500 Subject: [wildfly-dev] Test - Please Ignore In-Reply-To: References: Message-ID: <97a13368-0b9b-41ff-32b9-0411567b44cb@redhat.com> He originally sent that message in 2009. On 05/30/2017 03:35 AM, Matthias Wessendorf wrote: > Did you test it, due to (very) slow delivery of mails on jboss lists ? > > On Fri, May 26, 2017 at 5:18 PM, Jason Greene > wrote: > > Test > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > -- > Project lead AeroGear.org > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- - DML From mwessend at redhat.com Tue May 30 09:55:02 2017 From: mwessend at redhat.com (Matthias Wessendorf) Date: Tue, 30 May 2017 15:55:02 +0200 Subject: [wildfly-dev] Test - Please Ignore In-Reply-To: <97a13368-0b9b-41ff-32b9-0411567b44cb@redhat.com> References: <97a13368-0b9b-41ff-32b9-0411567b44cb@redhat.com> Message-ID: ah, that's quick :) On Tue, May 30, 2017 at 3:10 PM, David M. Lloyd wrote: > He originally sent that message in 2009. > > On 05/30/2017 03:35 AM, Matthias Wessendorf wrote: > > Did you test it, due to (very) slow delivery of mails on jboss lists ? > > > > On Fri, May 26, 2017 at 5:18 PM, Jason Greene > > wrote: > > > > Test > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > > > > > > > > -- > > Project lead AeroGear.org > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > - DML > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Project lead AeroGear.org -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20170530/c58857cc/attachment.html