From jason.greene at redhat.com Sun Jun 1 00:54:11 2014 From: jason.greene at redhat.com (Jason Greene) Date: Sat, 31 May 2014 23:54:11 -0500 Subject: [wildfly-dev] WildFly 8.1.0.Final Released! Message-ID: <2FCC8E8F-C380-4FFF-AD86-751B1EBB4CA2@redhat.com> Hi Everyone, I'm happy to announce the release of 8.1.0.Final. This release includes a few minor enhancements and a significant number of bug fixes (247 issues resolved!). Also, as requested by my recent poll, the release includes an update package that can be used with the patch command to update an 8.0.0.Final distro in-place. More details are available here: https://community.jboss.org/wiki/WildFly810FinalReleaseNotes The standard download location is here: http://wildfly.org/downloads Thank you everyone for all the hard work you put into this release! -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From jason.greene at redhat.com Sun Jun 1 01:13:11 2014 From: jason.greene at redhat.com (Jason Greene) Date: Sun, 1 Jun 2014 00:13:11 -0500 Subject: [wildfly-dev] 8.2 Release Plan Message-ID: <40B373B5-88EE-473D-9D41-5F8323DE2AE0@redhat.com> Hello Everyone, Since we were forced to pull the CDI 1.1 update out of 8.1 (TCK rules), and the work was already completed, I have added an 8.2 into jira, with a primary focus on delivering this important update. This release is dependent on an EE7 TCK update allowing it in, so I unfortunately can't yet set a date for the release. While the focus should be on 9, I am leaving the 8.x branch open for high priority bug fixes that can be included with the CDI update. As in the past, I need two pull requests for every proposed addition to 8.x, one against master, and the other against the 8.x branch. Please prefix the subject of the pull request with [8.x]. Thanks! -- Jason T.
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From sebastian.laskawiec at gmail.com Sun Jun 1 14:21:49 2014 From: sebastian.laskawiec at gmail.com (=?UTF-8?Q?Sebastian_=C5=81askawiec?=) Date: Sun, 1 Jun 2014 20:21:49 +0200 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: <537D51A9.7090803@redhat.com> References: <537D51A9.7090803@redhat.com> Message-ID: Hi Brian Thanks for the clarification and sorry for the late response. I created a Feature Request to expose the MBean server through the HTTP management interface: https://issues.jboss.org/browse/WFLY-3426 It would be great to have the MBean server exposed via the Wildfly HTTP Management interface, but I know several teams which would like to have such functionality in JBoss AS 7. This is why I started looking at Darran's port of the JMX console (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased it, detached it from the Wildfly parent and pushed it to my branch ( https://github.com/altanis/wildfly/commits/jmx-console-ported). The same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. In my opinion it would be great to have this console available publicly. Is it possible to make the WAR file available through JBoss Nexus (perhaps the thirdparty-releases repository)? If it is, I'd squash all commits and push only the jmx-console code into a new github repository (to make it separate from Wildfly). Best regards Sebastian 2014-05-22 3:23 GMT+02:00 Brian Stansberry : > I agree that if we exposed the mbean server over HTTP that it should be > via a context on our HTTP management interface. Either that or expose > mbeans as part of our standard management resource tree. That would make > integration in the web console much more practical. > > I don't see us ever bringing back the AS5-style jmx-console.war that > runs on port 8080 as part of the WildFly distribution. That would > introduce a requirement for EE into our management infrastructure, and > we won't do that.
Management is part of WildFly core, and WildFly core > does not require EE. If the Servlet-based jmx-console.war code linked > from WFLY-1197 gets further developed, I see it as a community effort > for people who want to install that on their own, not as something we'd > distribute as part of WildFly itself. > > On 5/21/14, 7:37 AM, Sebastian Łaskawiec wrote: > > Hi > > > > One of our projects is based on JBoss 5.1 and we are considering > > migrating it to Wildfly. One of our problems is Web based JMX Console... > > We have pretty complicated production environment and Web based JMX > > console with basic Auth delegated to LDAP is the simplest solution for > us. > > > > I noticed that there was a ticket opened for porting legacy JMX Console: > > https://issues.jboss.org/browse/WFLY-1197. > > However I think it would be a much better idea to have this > > functionality in the Web Administration console. In my opinion it would be > > great to have it under "Runtime" in "Status" submenu. > > > > What do you think about this idea? > > > > Best Regards > > -- > > Sebastian Łaskawiec > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Sebastian Łaskawiec -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140601/c0741664/attachment.html From darran.lofthouse at jboss.com Mon Jun 2 06:49:20 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Mon, 02 Jun 2014 11:49:20 +0100 Subject: [wildfly-dev] 8.2 Release Plan In-Reply-To: <40B373B5-88EE-473D-9D41-5F8323DE2AE0@redhat.com> References: <40B373B5-88EE-473D-9D41-5F8323DE2AE0@redhat.com> Message-ID: <538C56B0.4030500@jboss.com> On 01/06/14 06:13, Jason Greene wrote: > Hello Everyone, > > Since we were forced to pull the CDI 1.1 update out of 8.1 (TCK rules), and the work was already completed, I have added an 8.2 into jira, with a primary focus on delivering this important update. This release is dependent on an EE7 TCK update allowing it in, so I unfortunately can't yet set a date for the release. While the focus should be on 9, I am leaving the 8.x branch open for high priority bug fixes that can be included with the CDI update. > > As in the past, I need two pull requests for every proposed addition to 8.x, one against master, and the other against the 8.x branch. Please prefix the subject of the pull request with [8.x]. I think this is better. Whilst Kabir's wildfly-next branch did enable some WildFly 9 specific fixes whilst still working on 8, making sure all changes were correctly ported to WildFly 9 was more problematic - the two pull requests mean the original author takes care of this. As there is still a possibility of further changes in 8, I believe this is going to mean any schema updates in 9 are going to require a major bump of the schema version, whilst in 8 it would require a minor bump in version; using a minor bump in 9 would risk blocking the ability to bump 8. > > Thanks! > > -- > Jason T.
Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From brian.stansberry at redhat.com Mon Jun 2 08:29:56 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 02 Jun 2014 07:29:56 -0500 Subject: [wildfly-dev] 8.2 Release Plan In-Reply-To: <538C56B0.4030500@jboss.com> References: <40B373B5-88EE-473D-9D41-5F8323DE2AE0@redhat.com> <538C56B0.4030500@jboss.com> Message-ID: <538C6E44.4070400@redhat.com> On 6/2/14, 5:49 AM, Darran Lofthouse wrote: > > > On 01/06/14 06:13, Jason Greene wrote: >> Hello Everyone, >> >> Since we were forced to pull the CDI 1.1 update out of 8.1 (TCK rules), and the work was already completed, I have added an 8.2 into jira, with a primary focus on delivering this important update. This release is dependent on an EE7 TCK update allowing it in, so I unfortunately can't yet set a date for the release. While the focus should be on 9, I am leaving the 8.x branch open for high priority bug fixes that can be included with the CDI update. >> >> As in the past, I need two pull requests for every proposed addition to 8.x, one against master, and the other against the 8.x branch. Please prefix the subject of the pull request with [8.x]. > > I think this is better, whilst Kabir's wildfly-next branch did enable > some WildFly 9 specific fixes whilst still working on 8 making sure all > changes were correctly ported to WildFly 9 was more problematic - the > two pull requests means the original author takes care of this. > > As there is still a possibility of further changes in 8 I believe this > is going to mean any schema updates in 9 are going to require a major > bump of the schema version whilst in 8 it would require a minor bump in > version, using a minor bump in 9 would risk blocking the ability to bump 8. > Yes.
That was the plan regardless. Everyone, please consider it policy that we do major version bumps on the schemas when we do major version bumps of the WF project itself. >> >> Thanks! >> >> -- >> Jason T. Greene >> WildFly Lead / JBoss EAP Platform Architect >> JBoss, a division of Red Hat >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From jai.forums2013 at gmail.com Mon Jun 2 08:31:05 2014 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Mon, 02 Jun 2014 18:01:05 +0530 Subject: [wildfly-dev] WildFly 8.1.0.Final Released! In-Reply-To: <2FCC8E8F-C380-4FFF-AD86-751B1EBB4CA2@redhat.com> References: <2FCC8E8F-C380-4FFF-AD86-751B1EBB4CA2@redhat.com> Message-ID: <538C6E89.3080407@gmail.com> Congratulations everyone! -Jaikiran On Sunday 01 June 2014 10:24 AM, Jason Greene wrote: > Hi Everyone, > > I'm happy to announce the release of 8.1.0.Final. This release includes a few minor enhancements and a significant number of bug fixes (247 issues resolved!). Also, as requested by my recent poll, the release includes an update package that can be used with the patch command to update an 8.0.0.Final distro in-place. > > More details are available here: > https://community.jboss.org/wiki/WildFly810FinalReleaseNotes > > The standard download location is here: > http://wildfly.org/downloads > > Thank you everyone for all the hard work you put into this release! > > -- > Jason T.
Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From brian.stansberry at redhat.com Mon Jun 2 09:41:32 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 02 Jun 2014 08:41:32 -0500 Subject: [wildfly-dev] 2 instance cluster in master/slave In-Reply-To: <5385A70B.2040206@jboss.com> References: <8B504D45-944D-4D3A-BF75-769DE8EF2167@jboss.com> <5385A70B.2040206@jboss.com> Message-ID: <538C7F0C.6080309@redhat.com> Yes. Please follow https://issues.jboss.org/browse/WFLY-2996 On 5/28/14, 4:06 AM, Darran Lofthouse wrote: > I think it is a feature request as we know we never added support for > http-remoting in domain mode ;-) An issue may already exist. > > On 28/05/14 00:13, Kabir Khan wrote: >> >> On 27 May 2014, at 23:39, Arun Gupta wrote: >> >>> Trying to follow the instructions at: >>> >>> https://docs.jboss.org/author/display/WFLY8/WildFly+8+Cluster+Howto >>> >>> This shows how to setup a 2-instance cluster in master/slave where >>> master is on my laptop and slave is on a Raspi. Couple of questions >>> ... >>> >>> 1). Why is the following entry still referring to 9999 ? Shouldn't it be 9990 ? >>> >>> >>> >>> >>> >>> FTR it only works with 9999, not with 9990. >>> >>> Domain Controller shows the message: >>> >>> [Host Controller] 15:36:22,811 INFO [org.jboss.as.domain] (Host >>> Controller Service Threads - 28) JBAS010918: Registered remote slave >>> host "slave", WildFly 8.1.0.CR2 "Kenny" >> It looks like we hardcode the old 'remote://' protocol in RemoteDomainConnectionService rather than the new http-remoting protocol, so it is a bug. I am not sure if that is something we should attempt to negotiate explicitly, or to make the element take a 'protocol' attribute? >>> >>> >>> 2).
Master is throwing the following exception: >>> >>> 22:15:25,094 INFO [org.jboss.as.process.Server:server-one.status] >>> (ProcessController-threads - 3) JBAS012017: Starting process >>> 'Server:server-one' >>> [Server:server-one] Error occurred during initialization of VM >>> [Server:server-one] Server VM is only supported on ARMv7+ VFP >> This ^^ seems to be the real error. Try removing '-server' in the jvm-options. >> >>> 22:15:25,557 INFO [org.jboss.as.process.Server:server-one.status] >>> (reaper for Server:server-one) JBAS012010: Process 'Server:server-one' >>> finished with an exit status of 1 >>> [Host Controller] 22:15:26,408 INFO [org.jboss.as.host.controller] >>> (ProcessControllerConnection-thread - 2) JBAS010926: Unregistering >>> server server-one >>> [Host Controller] 22:15:26,495 INFO [org.jboss.as.host.controller] >>> (Controller Boot Thread) JBAS010922: Starting server server-two >>> 22:15:26,417 ERROR [org.jboss.as.process.Server:server-one.status] >>> (ProcessController-threads - 3) JBAS012006: Failed to send data bytes >>> to process 'Server:server-one' input stream: java.io.IOException: >>> Stream closed >>> at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434) >>> [rt.jar:1.7.0_40] >>> at java.io.OutputStream.write(OutputStream.java:116) [rt.jar:1.7.0_40] >>> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) >>> [rt.jar:1.7.0_40] >>> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) >>> [rt.jar:1.7.0_40] >>> at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) >>> [rt.jar:1.7.0_40] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.flush(BaseNCodecOutputStream.java:125) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.flush(BaseNCodecOutputStream.java:137) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.Base64OutputStream.flush(Base64OutputStream.java:44)
>>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.close(BaseNCodecOutputStream.java:154) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.Base64OutputStream.close(Base64OutputStream.java:44) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ManagedProcess.sendStdin(ManagedProcess.java:164) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ProcessController.sendStdin(ProcessController.java:207) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ProcessControllerServerHandler$InitMessageHandler$ConnectedMessageHandler.handleMessage(ProcessControllerServerHandler.java:140) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.protocol.ConnectionImpl.safeHandleMessage(ConnectionImpl.java:269) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.protocol.ConnectionImpl$1$1.run(ConnectionImpl.java:223) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>> [rt.jar:1.7.0_40] >>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>> [rt.jar:1.7.0_40] >>> at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_40] >>> at org.jboss.threads.JBossThread.run(JBossThread.java:122) >>> [jboss-threads-2.1.1.Final.jar:2.1.1.Final] >>> 22:15:26,573 ERROR [org.jboss.as.protocol.connection] >>> (ProcessController-threads - 3) JBAS016610: Failed to read a message: >>> java.io.IOException: Stream closed >>> at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434) >>> [rt.jar:1.7.0_40] >>> at java.io.OutputStream.write(OutputStream.java:116) [rt.jar:1.7.0_40] >>> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) >>> [rt.jar:1.7.0_40] >>> at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) >>> [rt.jar:1.7.0_40] >>> at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) >>> [rt.jar:1.7.0_40] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.flush(BaseNCodecOutputStream.java:125) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.flush(BaseNCodecOutputStream.java:137) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.Base64OutputStream.flush(Base64OutputStream.java:44) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.BaseNCodecOutputStream.close(BaseNCodecOutputStream.java:154) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.stdin.Base64OutputStream.close(Base64OutputStream.java:44) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ManagedProcess.sendStdin(ManagedProcess.java:164) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ProcessController.sendStdin(ProcessController.java:207) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.ProcessControllerServerHandler$InitMessageHandler$ConnectedMessageHandler.handleMessage(ProcessControllerServerHandler.java:140) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.protocol.ConnectionImpl.safeHandleMessage(ConnectionImpl.java:269) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at org.jboss.as.process.protocol.ConnectionImpl$1$1.run(ConnectionImpl.java:223) >>> [wildfly-process-controller-8.1.0.CR2.jar:8.1.0.CR2] >>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>> [rt.jar:1.7.0_40] >>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>> [rt.jar:1.7.0_40] >>> at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_40] >>> at 
org.jboss.threads.JBossThread.run(JBossThread.java:122) >>> [jboss-threads-2.1.1.Final.jar:2.1.1.Final] >>> >>> Same error for server-two as well. >>> >>> Trying to explicitly start server-three on slave gives the same error. >>> >>> This is all using 8.1 CR2. >>> >>> Any idea what might be wrong ? >>> >>> Thanks >>> Aru >>> >>> -- >>> http://blog.arungupta.me >>> http://twitter.com/arungupta >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From david.lloyd at redhat.com Mon Jun 2 12:42:28 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Mon, 02 Jun 2014 11:42:28 -0500 Subject: [wildfly-dev] HTTP Upgrade for AS8 Management In-Reply-To: <5154184D.7080806@gmail.com> References: <5154184D.7080806@gmail.com> Message-ID: <538CA974.4070700@redhat.com> On 03/28/2013 05:15 AM, Stuart Douglas wrote: > - ModelControllerClient.Factory.create() now allows you to specify a > protocol, which can be either remote, http or https. > > - Remote JMX will now require a service:jmx:http(s)-remoting-jmx:// URL > rather than the current service:jmx:remoting-jmx:// Unfortunately I missed-slash-bungled this way back when, but we need to sort out our URI schemes. When we have multi-layer protocol going on, the URI scheme we should use is like this: outer+middle+inner:// where "outer" is the outermost protocol (e.g. 
"remote"), and "inner" is the innermost (not counting layer 3 and lower *unless* that figures directly into the URI scheme; an example of this sub-case is "stratum+tcp" vs "stratum+udp"). So we *should* have: remote:// Direct Remoting-protocol connection remote+http:// Remoting over HTTP upgrade remote+https:// Remoting over HTTPS upgrade And (if these are even really needed; I think we dropped this distinction though maybe I'm wrong): jmx+remote:// JMX over Remoting jmx+remote+http:// JMX over Remoting over HTTP jmx+remote+https:// JMX over Remoting over HTTPS The most common "de facto" function of hyphenation in a URI scheme is to be a separator for a two-word protocol, like "view-source" or "ms-help" etc. The most common "de facto" function of using "+" is as I've described above, perhaps made most popular by Subversion's use. You may be wondering: Why not apply this to every single protocol we have? And/or every single protocol in existence? I think this goes beyond practicality - the point is to be unambiguous and consistent, and also align on the correct "remote" scheme name (we have a mix of "remote" and "remoting" today which is kind of confusing). Fixing this is not really a top priority obviously, but I would like to eventually unify our configuration on these scheme names (still supporting the old scheme names for compatibility of course). -- - DML From darran.lofthouse at jboss.com Mon Jun 2 12:46:13 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Mon, 02 Jun 2014 17:46:13 +0100 Subject: [wildfly-dev] HTTP Upgrade for AS8 Management In-Reply-To: <538CA974.4070700@redhat.com> References: <5154184D.7080806@gmail.com> <538CA974.4070700@redhat.com> Message-ID: <538CAA55.5000101@jboss.com> So just to be clear, you believe it should be 'remote' not 'remoting'? Regards, Darran Lofthouse. On 02/06/14 17:42, David M.
Lloyd wrote: > On 03/28/2013 05:15 AM, Stuart Douglas wrote: >> - ModelControllerClient.Factory.create() now allows you to specify a >> protocol, which can be either remote, http or https. >> >> - Remote JMX will now require a service:jmx:http(s)-remoting-jmx:// URL >> rather than the current service:jmx:remoting-jmx:// > > Unfortunately I missed-slash-bungled this way back when, but we need to > sort out our URI schemes. > > When we have multi-layer protocol going on, the URI scheme we should use > is like this: > > outer+middle+inner:// > > where "outer" is the outermost protocol (e.g. "remote"), and "inner" is > the innermost (not counting layer 3 and lower *unless* that figures > directly in to the URI scheme; an example of this sub-case is > "stratum+tcp" vs "stratum+udp"). > > So we *should* have: > > remote:// Direct Remoting-protocol connection > remote+http:// Remoting over HTTP upgrade > remote+https:// Remoting over HTTPS upgrade > > And (if these are even really needed; I think we dropped this > distinction though maybe I'm wrong): > > jmx+remote:// JMX over Remoting > jmx+remote+http:// JMX over Remoting over HTTP > jmx+remote+https:// JMX over Remoting over HTTPS > > The most common "de facto" function of hyphenation in a URI scheme is to > be a separator for a two-word protocol, like "view-source" or "ms-help" > etc. The most common "de facto" function of using "+" is as I've > described above, perhaps made most popular by Subversion's use. > > You may be wondering: Why not apply this to every single protocol we > have? And/or every single protocol in existence? I think this goes > beyond practicality - the point is to be unambiguous and consistent, and > also align on the correct "remote" scheme name (we have a mix of > "remote" and "remoting" today which is kind of confusing). 
> > Fixing this is not really a top priority obviously, but I would like to > eventually unify our configuration on these scheme names (still > supporting the old scheme names for compatibility of course). > From david.lloyd at redhat.com Mon Jun 2 14:18:04 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Mon, 02 Jun 2014 13:18:04 -0500 Subject: [wildfly-dev] HTTP Upgrade for AS8 Management In-Reply-To: <538CAA55.5000101@jboss.com> References: <5154184D.7080806@gmail.com> <538CA974.4070700@redhat.com> <538CAA55.5000101@jboss.com> Message-ID: <538CBFDC.2090209@redhat.com> Yes. On 06/02/2014 11:46 AM, Darran Lofthouse wrote: > So just to be clear, you believe it should be 'remote' not 'remoting'? > > Regards, > Darran Lofthouse. > > > On 02/06/14 17:42, David M. Lloyd wrote: >> On 03/28/2013 05:15 AM, Stuart Douglas wrote: >>> - ModelControllerClient.Factory.create() now allows you to specify a >>> protocol, which can be either remote, http or https. >>> >>> - Remote JMX will now require a service:jmx:http(s)-remoting-jmx:// URL >>> rather than the current service:jmx:remoting-jmx:// >> >> Unfortunately I missed-slash-bungled this way back when, but we need to >> sort out our URI schemes. >> >> When we have multi-layer protocol going on, the URI scheme we should use >> is like this: >> >> outer+middle+inner:// >> >> where "outer" is the outermost protocol (e.g. "remote"), and "inner" is >> the innermost (not counting layer 3 and lower *unless* that figures >> directly in to the URI scheme; an example of this sub-case is >> "stratum+tcp" vs "stratum+udp"). 
>> >> So we *should* have: >> >> remote:// Direct Remoting-protocol connection >> remote+http:// Remoting over HTTP upgrade >> remote+https:// Remoting over HTTPS upgrade >> >> And (if these are even really needed; I think we dropped this >> distinction though maybe I'm wrong): >> >> jmx+remote:// JMX over Remoting >> jmx+remote+http:// JMX over Remoting over HTTP >> jmx+remote+https:// JMX over Remoting over HTTPS >> >> The most common "de facto" function of hyphenation in a URI scheme is to >> be a separator for a two-word protocol, like "view-source" or "ms-help" >> etc. The most common "de facto" function of using "+" is as I've >> described above, perhaps made most popular by Subversion's use. >> >> You may be wondering: Why not apply this to every single protocol we >> have? And/or every single protocol in existence? I think this goes >> beyond practicality - the point is to be unambiguous and consistent, and >> also align on the correct "remote" scheme name (we have a mix of >> "remote" and "remoting" today which is kind of confusing). >> >> Fixing this is not really a top priority obviously, but I would like to >> eventually unify our configuration on these scheme names (still >> supporting the old scheme names for compatibility of course). >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- - DML From darran.lofthouse at jboss.com Mon Jun 2 14:34:11 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Mon, 02 Jun 2014 19:34:11 +0100 Subject: [wildfly-dev] HTTP Upgrade for AS8 Management In-Reply-To: <538CBFDC.2090209@redhat.com> References: <5154184D.7080806@gmail.com> <538CA974.4070700@redhat.com> <538CAA55.5000101@jboss.com> <538CBFDC.2090209@redhat.com> Message-ID: <538CC3A3.4070001@jboss.com> Created the following for Remoting JMX: - https://issues.jboss.org/browse/REMJMX-85 Will drop the 'jmx' portion from the scheme. 
Regards, Darran Lofthouse. On 02/06/14 19:18, David M. Lloyd wrote: > Yes. > > On 06/02/2014 11:46 AM, Darran Lofthouse wrote: >> So just to be clear, you believe it should be 'remote' not 'remoting'? >> >> Regards, >> Darran Lofthouse. >> >> >> On 02/06/14 17:42, David M. Lloyd wrote: >>> On 03/28/2013 05:15 AM, Stuart Douglas wrote: >>>> - ModelControllerClient.Factory.create() now allows you to specify a >>>> protocol, which can be either remote, http or https. >>>> >>>> - Remote JMX will now require a service:jmx:http(s)-remoting-jmx:// URL >>>> rather than the current service:jmx:remoting-jmx:// >>> >>> Unfortunately I missed-slash-bungled this way back when, but we need to >>> sort out our URI schemes. >>> >>> When we have multi-layer protocol going on, the URI scheme we should use >>> is like this: >>> >>> outer+middle+inner:// >>> >>> where "outer" is the outermost protocol (e.g. "remote"), and "inner" is >>> the innermost (not counting layer 3 and lower *unless* that figures >>> directly in to the URI scheme; an example of this sub-case is >>> "stratum+tcp" vs "stratum+udp"). >>> >>> So we *should* have: >>> >>> remote:// Direct Remoting-protocol connection >>> remote+http:// Remoting over HTTP upgrade >>> remote+https:// Remoting over HTTPS upgrade >>> >>> And (if these are even really needed; I think we dropped this >>> distinction though maybe I'm wrong): >>> >>> jmx+remote:// JMX over Remoting >>> jmx+remote+http:// JMX over Remoting over HTTP >>> jmx+remote+https:// JMX over Remoting over HTTPS >>> >>> The most common "de facto" function of hyphenation in a URI scheme is to >>> be a separator for a two-word protocol, like "view-source" or "ms-help" >>> etc. The most common "de facto" function of using "+" is as I've >>> described above, perhaps made most popular by Subversion's use. >>> >>> You may be wondering: Why not apply this to every single protocol we >>> have? And/or every single protocol in existence? 
I think this goes >>> beyond practicality - the point is to be unambiguous and consistent, and >>> also align on the correct "remote" scheme name (we have a mix of >>> "remote" and "remoting" today which is kind of confusing). >>> >>> Fixing this is not really a top priority obviously, but I would like to >>> eventually unify our configuration on these scheme names (still >>> supporting the old scheme names for compatibility of course). >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > From smarlow at redhat.com Mon Jun 2 21:59:06 2014 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 02 Jun 2014 21:59:06 -0400 Subject: [wildfly-dev] WFLY9 documentation area In-Reply-To: <5388A33F.9000709@redhat.com> References: <5388A33F.9000709@redhat.com> Message-ID: <538D2BEA.1060105@redhat.com> On 05/30/2014 11:26 AM, Alessio Soldano wrote: > Folks, > should https://docs.jboss.org/author/display/WFLY8 be cloned into > https://docs.jboss.org/author/display/WFLY9 ? Who can do that? I'll give it a shot. > The WS team needs to start updating the doc for the next WFLY version... > > Cheers > Alessio > From fjuma at redhat.com Tue Jun 3 10:41:20 2014 From: fjuma at redhat.com (Farah Juma) Date: Tue, 3 Jun 2014 10:41:20 -0400 (EDT) Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: <1529115147.13927371.1401747463715.JavaMail.zimbra@redhat.com> Message-ID: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> Since WildFly 8.1.0.Final was just released, the OpenShift WildFly cartridge has been updated as well and now includes support for JDK 8! See https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support for more details on how to get started. Please try it out and provide feedback. 
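The layered "outer+middle+inner://" scheme convention David describes in the HTTP Upgrade thread above can be sketched with a small parser. This is only an illustration of the naming convention; `split_scheme_layers` is a hypothetical helper, not part of any WildFly or Remoting API:

```python
def split_scheme_layers(uri):
    """Split a layered URI scheme such as 'jmx+remote+http' into its
    protocol layers (outermost first), following the '+' convention
    from the thread, and return the rest of the URI unchanged."""
    scheme, sep, rest = uri.partition("://")
    if not sep:
        raise ValueError("expected a URI containing '://': %r" % (uri,))
    return scheme.split("+"), rest

# Example: JMX carried over Remoting, which is carried over an HTTP upgrade.
layers, authority = split_scheme_layers("jmx+remote+http://localhost:9990")
# layers is ['jmx', 'remote', 'http']; authority is 'localhost:9990'
```

Under this convention a hyphen stays reserved for two-word protocol names ("view-source", "ms-help"), so a parser can treat every "+" as a layer boundary without ambiguity.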
From brian.stansberry at redhat.com Tue Jun 3 10:43:30 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 03 Jun 2014 09:43:30 -0500 Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> References: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> Message-ID: <538DDF12.2090606@redhat.com> Excellent! Great job. On 6/3/14, 9:41 AM, Farah Juma wrote: > Since WildFly 8.1.0.Final was just released, the OpenShift WildFly cartridge has been updated as well and now includes support for JDK 8! > > See https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support for more details on how to get started. Please try it out and provide feedback. > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From tomaz.cerar at gmail.com Tue Jun 3 10:51:00 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Tue, 3 Jun 2014 16:51:00 +0200 Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: <538DDF12.2090606@redhat.com> References: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> <538DDF12.2090606@redhat.com> Message-ID: Great news! tnx for all your effort! On Tue, Jun 3, 2014 at 4:43 PM, Brian Stansberry < brian.stansberry at redhat.com> wrote: > Excellent! Great job. > > On 6/3/14, 9:41 AM, Farah Juma wrote: > > Since WildFly 8.1.0.Final was just released, the OpenShift WildFly > cartridge has been updated as well and now includes support for JDK 8! > > > > See > https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support > for more details on how to get started. Please try it out and provide > feedback. 
> > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/cafb7b25/attachment.html From jason.greene at redhat.com Tue Jun 3 11:16:00 2014 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 3 Jun 2014 10:16:00 -0500 Subject: [wildfly-dev] 2 instance cluster in master/slave In-Reply-To: References: <8B504D45-944D-4D3A-BF75-769DE8EF2167@jboss.com> <5385DDE5.90704@redhat.com> <5385E4D5.9080803@redhat.com> <5605C912-EFDA-476C-8689-A596391E010F@redhat.com> <0FC395B8-AEEF-457B-87D5-6A9F13F624CC@redhat.com> <2D5DED6C-263A-44DC-98F0-615EDA8E8766@redhat.com> ! <975729B1-39EF-40C1-82A0-BE68795836E9@jboss.com> Message-ID: Very cool. Did you ever give the wildfly as a proxy approach a try? On May 31, 2014, at 8:37 PM, Arun Gupta wrote: > Please help spread the word. > > Let me know if you have fun ideas/projects that should run on Raspi > and help us build thought leadership :-) > > Arun > > On Sat, May 31, 2014 at 7:47 AM, David Aroca wrote: >> Thnks Arun! >> >> >> 2014-05-31 3:29 GMT-05:00 Kabir Khan : >>> >>> Nice! >>> On 31 May 2014, at 05:10, Arun Gupta wrote: >>> >>>> And finally, the three-part article showing how to setup WildFly >>>> cluster on Raspberry Pi is now available at: >>>> >>>> http://blog.arungupta.me/2014/05/wildfly-cluster-raspberrypi-techtip28/ >>>> >>>> Feedback always welcome! >>>> >>>> Weekend can now start :) >>>> >>>> Cheers >>>> Arun >>>> >>>> >>>> On Fri, May 30, 2014 at 5:58 PM, Jason T. 
Greene >>>> wrote: >>>>> >>>>> >>>>> Sent from my iPhone >>>>> >>>>>> On May 30, 2014, at 6:44 PM, Arun Gupta wrote: >>>>>> >>>>>> One (hopefully) last bit... >>>>>> >>>>>> How/where do I set the sticky session ? >>>>> >>>>> If you are using mod_proxy there is a memory based sticky session >>>>> parameter you have to set to JSESSIONID, see the apache page. >>>>> >>>>> >>>>>> >>>>>> Arun >>>>>> >>>>>>> On Fri, May 30, 2014 at 11:27 AM, Jason Greene >>>>>>> wrote: >>>>>>> >>>>>>> On May 30, 2014, at 12:54 PM, Arun Gupta >>>>>>> wrote: >>>>>>> >>>>>>>>> >>>>>>>>> A couple other options that would be fun to play with: >>>>>>>>> >>>>>>>>> 1. Using Undertow's reverse proxy like this (assuming you named the >>>>>>>>> nodes pi1, pi2, and thus they have a pi1, pi2 jvmroute, which defaults to >>>>>>>>> the host name if you didn't set it): >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> Where will I add this fragment ? >>>>>>> >>>>>>> In your domain.xml define a new profile called proxy, which is >>>>>>> derived from the "default" profile. >>>>>>> >>>>>>> In your new profile under the undertow subsystem, in the handlers >>>>>>> section, below the welcome content file handler, add the above proxy config. >>>>>>> You then need to change the location name="/" to point to the >>>>>>> "reverse-proxy" handler (instead of "welcome-content") >>>>>>> >>>>>>> You basically want 3 server instances, one proxy, web server 1, and >>>>>>> web server 2, all preferably on separate systems. The proxy would be >>>>>>> assigned the proxy profile, the two other servers would get ha profiles. >>>>>>> >>>>>>> You could have your DC collocated on the proxy or on a separate box. >>>>>>> You need to be sure that your instance-id matches the jvm route on the web 1 >>>>>>> and 2 boxes (defaults to hostname) for sticky sessions to work properly. If >>>>>>> you look at the cookie value you will see the jvmroute as a suffix. >>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> 2.
You could also use undertow 1.1 standalone which has a >>>>>>>>> mod_cluster impl (coming to WildFly soon) >>>>>>>>> >>>>>>>>> >>>>>>>>> https://github.com/undertow-io/undertow/blob/master/examples/src/main/java/io/undertow/examples/reverseproxy/ModClusterProxyServer.java >>>>>>>>> >>>>>>>>> (requires alteration for your topology) >>>>>>>> OK, let me try the simpler route first. >>>>>>>> >>>>>>>> I'm having issues building mod_cluster on ARM and following up on >>>>>>>> that >>>>>>>> separately. Seems like I may have to use mod_proxy for now since >>>>>>>> this >>>>>>>> is baked into Apache2 for ARM. >>>>>>>> >>>>>>>>>> The session ids are indeed different. Just pushed out the latest >>>>>>>>>> blog >>>>>>>>>> in this series at: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> http://blog.arungupta.me/2014/05/wildfly-managed-domain-raspberrypi-techtip27/ >>>>>>>>>> >>>>>>>>>> The session id are shown towards the end in screen snapshots, and >>>>>>>>>> are >>>>>>>>>> indeed different. >>>>>>>>> >>>>>>>>> So the problem is you need to either have a shared cookie domain, >>>>>>>>> or use an LB, since the cookie domain has to match the URL for the browser >>>>>>>>> to send the same cookie. You can do this in either the global config >>>>>>>>> (standalone.xml under servlet-container), or you can add a setting to your >>>>>>>>> web.xml like this: >>>>>>>>> >>>>>>>>> >>>>>>>>> .example >>>>>>>>> >>>>>>>> >>>>>>>> Can this element be added to domain.xml as well for the managed >>>>>>>> domain mode ? >>>>>>> >>>>>>> Yes next to the ?server? block inside the undertow subsystem you can >>>>>>> add: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Although note that you ONLY have to do this if you are not using an >>>>>>> LB. >>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Then you want to add host entries to hosts: >>>>>>>>> >>>>>>>>> pi1.example 10.x.x.x >>>>>>>>> pi2.example 10.x.x.x >>>>>>>> >>>>>>>> These entries would be made in each individual /etc/hosts ? 
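The undertow configuration fragments quoted earlier in this thread were lost to attachment scrubbing; a rough sketch of the kind of reverse-proxy handler and shared session-cookie settings being described might look like the following (element and attribute names assumed from the WildFly 8 undertow subsystem schema; host names, handler names, and socket bindings are placeholders, not the original fragment):

```xml
<!-- Sketch only: assumed WildFly 8 undertow subsystem shape; names/bindings are placeholders -->
<subsystem xmlns="urn:jboss:domain:undertow:1.1">
    <server name="default-server">
        <http-listener name="default" socket-binding="http"/>
        <host name="default-host" alias="localhost">
            <!-- point "/" at the reverse-proxy handler instead of "welcome-content" -->
            <location name="/" handler="lb-handler"/>
        </host>
    </server>
    <servlet-container name="default">
        <!-- shared cookie domain, only needed when NOT fronting with an LB -->
        <session-cookie domain=".example"/>
    </servlet-container>
    <handlers>
        <reverse-proxy name="lb-handler">
            <!-- instance-id should match each backend's jvmroute for sticky sessions -->
            <host name="pi1" outbound-socket-binding="pi1-http" instance-id="pi1"/>
            <host name="pi2" outbound-socket-binding="pi2-http" instance-id="pi2"/>
        </reverse-proxy>
    </handlers>
</subsystem>
```

As the thread notes, the session-cookie domain setting applies only to the no-LB setup, while the reverse-proxy handler belongs in the dedicated proxy profile.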
>>>>>>> You just need this on the machine with the client browser, so that >>>>>>> when it sends HTTP requests it does "Host: pi1.example" instead of "Host: >>>>>>> 10.xxxxx". >>>>>>> >>>>>>> If you decide to go the LB route, and want to have name references >>>>>>> then you could do them everywhere to make it all easy. >>>>>>> >>>>>>>> >>>>>>>> Arun >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> After you do that you should be able to stick pi1.example and >>>>>>>>> pi2.example in the browser. >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Jason T. Greene >>>>>>>>> WildFly Lead / JBoss EAP Platform Architect >>>>>>>>> JBoss, a division of Red Hat >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> http://blog.arungupta.me >>>>>>>> http://twitter.com/arungupta >>>>>>> >>>>>>> -- >>>>>>> Jason T. Greene >>>>>>> WildFly Lead / JBoss EAP Platform Architect >>>>>>> JBoss, a division of Red Hat >>>>>> >>>>>> >>>>>> -- >>>>>> http://blog.arungupta.me >>>>>> http://twitter.com/arungupta >>>> >>>> >>>> >>>> -- >>>> http://blog.arungupta.me >>>> http://twitter.com/arungupta >>> >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> > > > > -- > http://blog.arungupta.me > http://twitter.com/arungupta -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From tom.jenkinson at redhat.com Tue Jun 3 11:14:37 2014 From: tom.jenkinson at redhat.com (Tom Jenkinson) Date: Tue, 03 Jun 2014 16:14:37 +0100 Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> References: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> Message-ID: <538DE65D.208@redhat.com> Great work Farah!
On 03/06/14 15:41, Farah Juma wrote: > Since WildFly 8.1.0.Final was just released, the OpenShift WildFly cartridge has been updated as well and now includes support for JDK 8! > > See https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support for more details on how to get started. Please try it out and provide feedback. > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From javapapo at mac.com Tue Jun 3 12:57:19 2014 From: javapapo at mac.com (Paris Apostolopoulos) Date: Tue, 03 Jun 2014 19:57:19 +0300 Subject: [wildfly-dev] Embedded Widfly + Arquillian, just wondering :) Message-ID: <635542B1-7F62-4B0D-A61A-762636F9A777@mac.com> Hello, greetings from Athens, Greece. I am wondering if there will ever be in the pipeline a full-blown wildfly-embedded (uber jar) version. Why? Well, I have noticed from several teams around, working with Java EE, some of them even coding towards JBoss / WildFly 8, that for some reason, when they wanted to prepare a set of tests to run in embedded mode, the simplest way was to add 2 dependencies with the GlassFish 3.1 jar, set up an Arquillian maven profile and fire it off. So I am wondering if a fully functional embedded version of WildFly, 1 dependency away from the wildfly adapter and the current "fake" container dependency, wouldn't be a win-win case for both the use of Arquillian and WildFly among teams all over the world? One step further, maybe the use of this combo would make some of the teams reconsider and "start" adopting more related JBoss / Red Hat projects :). Thanks ------ Paris Apostolopoulos Senior Software Engineer 'The best thing about a boolean is even if you are wrong, you are only off by a bit' linkedin | blog | twitter | g+ | podcast | -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/7985951b/attachment-0001.html From ssilvert at redhat.com Tue Jun 3 13:46:30 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Tue, 03 Jun 2014 13:46:30 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 Message-ID: <538E09F6.6080701@redhat.com> In response to Jason's request for design proposals on this list, here is the proposal for Keycloak SSO in WildFly 9. Background ----------------- A major part of our console unification effort is to allow various JBoss management consoles to take advantage of Single Signon. To achieve this end, Keycloak SSO[1] will be integrated into the WildFly 9 platform. The first management consoles to use Keycloak will likely be the WildFly Web Console[2] and the JON Console. Proof of Concept ----------------------- A hacked-up proof of concept is available at https://github.com/ssilvert/wildfly/tree/kcauth. It demonstrates a WildFly standalone server using Keycloak for both authentication and authorization in the Web Console. It also shows single signon between the Web Console and the Keycloak Admin application. See https://github.com/ssilvert/wildfly/blob/kcauth/keycloak/KeycloakSetup.txt for details. One interesting finding of the POC is that the Keycloak integration required no changes to the WildFly Web Console. All the integration is done on the server side and the GWT client works perfectly as-is. Relation to the Elytron and other WildFly 9 changes ------------------------------------------------------------------------ Keycloak is expected to use Elytron at a low level. Nothing at the Keycloak integration level should be affected by the Elytron project. However, there are many other expected changes to security that may affect how Keycloak is integrated. It is likely that the initial integration of Keycloak will happen before these aforementioned changes.
This will be an advantage as the unit tests for Keycloak integration can help to validate these changes. Default Authentication Mechanism ------------------------------------------------ Keycloak is a very new technology. Given that security is so vital, we need time for Keycloak to mature. When Keycloak is first integrated, it will not be the default authentication/authorization mechanism for the WildFly Web Console. However, selecting Keycloak for authentication should be as simple as executing one or two CLI commands. We can switch to Keycloak as the default whenever we all believe that both Keycloak itself and its integration into WildFly are ready for prime time. Hopefully, that will just be a matter of months after first integration. Initial Integration ------------------------ The initial integration for most of Keycloak will only be available on standalone. However, on a domain controller, the WildFly Web Console will still be able to use Keycloak for authentication and authorization. In this case, the domain controller must be able to reach a Keycloak Authentication Server somewhere on the network. Keycloak Authentication Server and Admin Console ----------------------------------------------------------------------- The Keycloak Authentication Server is responsible for authenticating and authorizing users. The Keycloak Admin Console is an AngularJS UI that administrators use to manage users, roles, sessions, passwords, assigned applications, etc. Both the auth server and admin console are served from the same WAR. It should be possible to deploy this without using a WAR or servlets, but that is not planned for the initial WildFly integration. Because of this current limitation, the auth server and admin console will not be present in a domain controller. Keycloak Database -------------------------- The Keycloak database contains all the server-side configuration data such as the data manipulated by the Keycloak Admin Console. 
By default, it is an H2 database that lives in the standalone/data directory. It is created when the auth server is first deployed. This database will be initialized with a single "admin" user who has all rights within the Keycloak Admin Console and within the WildFly Web Console. On first login, the admin user must change his password. By default, both consoles will be in the same master realm so that users can potentially do single signon and move freely between them. H2 is not recommended for production use. Keycloak has tools available to migrate data to another database. Keycloak Adapter ------------------------ A Keycloak adapter is a bit of software that is attached to an application deployment. This software knows how to talk to the Keycloak auth server. It handles the OAuth protocol from the client side. In the case of the WildFly Web Console, the adapter will be a pure Undertow, non-servlet adapter. The reason for using a pure Undertow adapter instead of the current Keycloak WildFly adapter is that the latter adapter relies on the Servlet API, which is forbidden on a domain controller. The proof of concept mentioned above contains the code needed for a pure Undertow adapter. This code will likely be migrated into the Keycloak project. Keycloak Adapter Configuration ------------------------------------------- A Keycloak adapter configuration is a json or DMR representation of an application's client-side Keycloak configuration. It is used by the adapter to find data such as public keys and the location of the auth server. (From the POC, see master->Applications->web-console->Installation) In the case of WildFly Web Console, we actually have two application endpoints that need to be configured and protected by Keycloak. These are the GWT-based UI and the http management endpoint that accepts DMR operations. The Keycloak configuration for these applications will live in the DMR management tree under SuperUser access. 
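To make the adapter-configuration discussion above concrete, the json form of such a configuration looks roughly like the following sketch (field names taken from the Keycloak 1.0 beta adapter docs; the realm name, URL, key, and secret are placeholders):

```json
{
  "realm": "master",
  "realm-public-key": "MIGfMA0GCSqGSIb3...placeholder...",
  "auth-server-url": "http://localhost:8080/auth",
  "ssl-required": "external",
  "resource": "web-console",
  "credentials": {
    "secret": "placeholder-client-secret"
  }
}
```

In the proposed integration, an equivalent DMR representation of this data would live in the management model rather than in a json file.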
This restrictive access only applies to the Keycloak adapter configuration. Any RBAC role can be assigned to a user of the WildFly Web Console via the Keycloak Admin Console. Note that the proof of concept still uses json files for the adapter configuration. The real implementation will need to store the configuration in the DMR management tree so that it can be maintained by CLI or the WildFly Web Console. Questions for further discussion -------------------------------------------- 1. WildFly ships with pre-defined RBAC roles. [3] Should these roles be available at the realm level or only to the WildFly Web Console? Could/should other consoles make use of these roles? 2. On first login, you are required to change the admin password. What other initial setup should be required? Change realm public key? Change client secret? Others? 3. In the POC, the Keycloak Auth Server WAR is extracted into the standalone/deployments directory. Are there better options? Should it even be deployed by default? 4. By default, what Login Options should be enabled for the master realm? Currently, these options are social login, user registration, forgot password, remember me, verify email, direct grant API, and require SSL. 5. Should Keycloak audit log be enabled by default? If so, what should be the expiration value? 6. What should the initial password policy be? (length, mixed case, special chars, etc.) [1] http://keycloak.jboss.org/ http://docs.jboss.org/keycloak/docs/1.0-beta-1/userguide/html_single/index.html [2] https://github.com/hal/core [3] http://planet.jboss.org/post/role_based_access_control_in_wildfly_8_tech_tip_120 From bburke at redhat.com Tue Jun 3 14:07:48 2014 From: bburke at redhat.com (Bill Burke) Date: Tue, 03 Jun 2014 14:07:48 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E09F6.6080701@redhat.com> References: <538E09F6.6080701@redhat.com> Message-ID: <538E0EF4.2010608@redhat.com> On 6/3/2014 1:46 PM, Stan Silvert wrote: > > 2.
On first login, you are required to change the admin password. What > other initial setup should be required? Change realm public key? > Change client secret? Others? > You should be able to self-bootstrap a new install on initial boot. It's what we do for the Aerogear UPS server. > > 5. Should Keycloak audit log be enabled by default? If so, what should > be the expiration value? > Not sure. We're relying on tools like fail2ban for brute force detection at the moment, but hope to get fail2ban-like features in Keycloak after 1.0 is released. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From filippespolti at gmail.com Tue Jun 3 14:08:34 2014 From: filippespolti at gmail.com (Filippe Costa Spolti) Date: Tue, 03 Jun 2014 15:08:34 -0300 Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: References: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> <538DDF12.2090606@redhat.com> Message-ID: <538E0F22.1050808@gmail.com> I need to upgrade my cartridge.. :) Nice guys. Regards, ______________________________________ Filippe Costa Spolti Linux User nº515639 - http://linuxcounter.net/ filippespolti at gmail.com "Be yourself" On 06/03/2014 11:51 AM, Tomaž Cerar wrote: > Great news! > > tnx for all your effort! > > > On Tue, Jun 3, 2014 at 4:43 PM, Brian Stansberry > > wrote: > > Excellent! Great job. > > On 6/3/14, 9:41 AM, Farah Juma wrote: > > Since WildFly 8.1.0.Final was just released, the OpenShift WildFly cartridge has been updated as well and now includes support for JDK 8! > > > > See > https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support > for more details on how to get started. Please try it out and > provide feedback.
> > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/fa709288/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: linkedin.png Type: image/png Size: 957 bytes Desc: not available Url : http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/fa709288/attachment.png From daviaro at gmail.com Tue Jun 3 14:11:51 2014 From: daviaro at gmail.com (David Aroca) Date: Tue, 3 Jun 2014 13:11:51 -0500 Subject: [wildfly-dev] WildFly 8.1.0.Final on OpenShift - with JDK 8 support! In-Reply-To: <538E0F22.1050808@gmail.com> References: <248742028.14212206.1401806480451.JavaMail.zimbra@redhat.com> <538DDF12.2090606@redhat.com> <538E0F22.1050808@gmail.com> Message-ID: excelent news!!!! thnks for all!! 2014-06-03 13:08 GMT-05:00 Filippe Costa Spolti : > I need upgrade my cartridge.. :) Nice guys. > > Regards, > ______________________________________ > Filippe Costa Spolti > Linux User n?515639 - http://linuxcounter.net/ > filippespolti at gmail.com > "Be yourself" > > On 06/03/2014 11:51 AM, Toma? Cerar wrote: > > Great news! > > tnx for all your effort! > > > On Tue, Jun 3, 2014 at 4:43 PM, Brian Stansberry < > brian.stansberry at redhat.com> wrote: > >> Excellent! Great job. 
>> >> On 6/3/14, 9:41 AM, Farah Juma wrote: >> > Since WildFly 8.1.0.Final was just released, the OpenShift WildFly >> cartridge has been updated as well and now includes support for JDK 8! >> > >> > See >> https://community.jboss.org/people/fjuma/blog/2014/06/03/wildfly-810final-on-openshift--with-jdk-8-support >> for more details on how to get started. Please try it out and provide >> feedback. >> > >> > _______________________________________________ >> > wildfly-dev mailing list >> > wildfly-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > >> >> >> -- >> Brian Stansberry >> Senior Principal Software Engineer >> JBoss by Red Hat >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > > _______________________________________________ > wildfly-dev mailing listwildfly-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/0bb89a17/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: linkedin.png Type: image/png Size: 957 bytes Desc: not available Url : http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140603/0bb89a17/attachment-0001.png From arun.gupta at gmail.com Tue Jun 3 14:21:14 2014 From: arun.gupta at gmail.com (Arun Gupta) Date: Tue, 3 Jun 2014 14:21:14 -0400 Subject: [wildfly-dev] 2 instance cluster in master/slave In-Reply-To: References: <8B504D45-944D-4D3A-BF75-769DE8EF2167@jboss.com> <5385DDE5.90704@redhat.com> <5385E4D5.9080803@redhat.com> <5605C912-EFDA-476C-8689-A596391E010F@redhat.com> <0FC395B8-AEEF-457B-87D5-6A9F13F624CC@redhat.com> <2D5DED6C-263A-44DC-98F0-615EDA8E8766@redhat.com> <975729B1-39EF-40C1-82A0-BE68795836E9@jboss.com> Message-ID: I've not yet, but will try that on desktop first and then on Raspi. Have added that to the list of blog items. Arun On Tue, Jun 3, 2014 at 11:16 AM, Jason Greene wrote: > Very cool. Did you ever give the wildfly as a proxy approach a try? > > On May 31, 2014, at 8:37 PM, Arun Gupta wrote: > >> Please help spread the word. >> >> Let me know if you have fun ideas/projects that should run on Raspi >> and help us build thought leadership :-) >> >> Arun >> >> On Sat, May 31, 2014 at 7:47 AM, David Aroca wrote: >>> Thnks Arun! >>> >>> >>> 2014-05-31 3:29 GMT-05:00 Kabir Khan : >>>> >>>> Nice! >>>> On 31 May 2014, at 05:10, Arun Gupta wrote: >>>> >>>>> And finally, the three-part article showing how to setup WildFly >>>>> cluster on Raspberry Pi is now available at: >>>>> >>>>> http://blog.arungupta.me/2014/05/wildfly-cluster-raspberrypi-techtip28/ >>>>> >>>>> Feedback always welcome! >>>>> >>>>> Weekend can now start :) >>>>> >>>>> Cheers >>>>> Arun >>>>> >>>>> >>>>> On Fri, May 30, 2014 at 5:58 PM, Jason T. Greene >>>>> wrote: >>>>>> >>>>>> >>>>>> Sent from my iPhone >>>>>> >>>>>>> On May 30, 2014, at 6:44 PM, Arun Gupta wrote: >>>>>>> >>>>>>> One (hopefully) last bit... >>>>>>> >>>>>>> How/where do I set the sticky session ? 
>>>>>> >>>>>> If you are using mod_proxy there is a memory based sticky session >>>>>> parameter you have to set to JSESSIONID, see the apache page. >>>>>> >>>>>> >>>>>>> >>>>>>> Arun >>>>>>> >>>>>>>> On Fri, May 30, 2014 at 11:27 AM, Jason Greene >>>>>>>> wrote: >>>>>>>> >>>>>>>> On May 30, 2014, at 12:54 PM, Arun Gupta >>>>>>>> wrote: >>>>>>>> >>>>>>>>>> >>>>>>>>>> A couple other options that would be fun to play with: >>>>>>>>>> >>>>>>>>>> 1. Using Undertow?s reverse proxy like this (assuming you named the >>>>>>>>>> nodes pi1, pi2, and thus they have a pi1, pi2 jvmroute, which defaults to >>>>>>>>>> the host name if you didn?t set it): >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> Where will I add this fragment ? >>>>>>>> >>>>>>>> In your domain.xml define a new profile called proxy, which is >>>>>>>> derived from the ?default" profile. >>>>>>>> >>>>>>>> In your new profile under the undertow subsystem, in the handlers >>>>>>>> section, below the welcome content file handler, add the above proxy config. >>>>>>>> You then need to change the location name=?/? to point to the >>>>>>>> ?reverse-proxy? handler (instead of ?welcome-content?) >>>>>>>> >>>>>>>> You basically want 3 server instances, one proxy, web server 1, and >>>>>>>> web server 2, all preferably on separate systems. The proxy would be >>>>>>>> assigned the proxy profile, the two other servers would get ha profiles. >>>>>>>> >>>>>>>> You could have your DC collocated on the proxy or on a separate box. >>>>>>>> You need to be sure that your instance-id matches the jvm route on the web 1 >>>>>>>> and 2 boxes (defaults to hostname) for sticky sessions to work properly. If >>>>>>>> you look at the cookie value you will see the jvmroute as a suffix. >>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> 2. 
You could also use undertow 1.1 standalone which has a >>>>>>>>>> mod_cluster impl (coming to WildFly soon) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> https://github.com/undertow-io/undertow/blob/master/examples/src/main/java/io/undertow/examples/reverseproxy/ModClusterProxyServer.java >>>>>>>>>> >>>>>>>>>> (requires alteration for your topology) >>>>>>>>> OK, let me try the simpler route first. >>>>>>>>> >>>>>>>>> I'm having issues building mod_cluster on ARM and following up on >>>>>>>>> that >>>>>>>>> separately. Seems like I may have to use mod_proxy for now since >>>>>>>>> this >>>>>>>>> is baked into Apache2 for ARM. >>>>>>>>> >>>>>>>>>>> The session ids are indeed different. Just pushed out the latest >>>>>>>>>>> blog >>>>>>>>>>> in this series at: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> http://blog.arungupta.me/2014/05/wildfly-managed-domain-raspberrypi-techtip27/ >>>>>>>>>>> >>>>>>>>>>> The session id are shown towards the end in screen snapshots, and >>>>>>>>>>> are >>>>>>>>>>> indeed different. >>>>>>>>>> >>>>>>>>>> So the problem is you need to either have a shared cookie domain, >>>>>>>>>> or use an LB, since the cookie domain has to match the URL for the browser >>>>>>>>>> to send the same cookie. You can do this in either the global config >>>>>>>>>> (standalone.xml under servlet-container), or you can add a setting to your >>>>>>>>>> web.xml like this: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> .example >>>>>>>>>> >>>>>>>>> >>>>>>>>> Can this element be added to domain.xml as well for the managed >>>>>>>>> domain mode ? >>>>>>>> >>>>>>>> Yes next to the ?server? block inside the undertow subsystem you can >>>>>>>> add: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Although note that you ONLY have to do this if you are not using an >>>>>>>> LB. 
>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Then you want to add host entries to hosts: >>>>>>>>>> >>>>>>>>>> pi1.example 10.x.x.x >>>>>>>>>> pi2.example 10.x.x.x >>>>>>>>> >>>>>>>>> These entries would be made in each individual /etc/hosts ? >>>>>>>> >>>>>>>> You just need this on the machine with the client browser, so that >>>>>>>> when it sends HTTP requests it does ?Host: pi1.example? instead of "Host: >>>>>>>> 10.xxxxx?. >>>>>>>> >>>>>>>> If you decide to go the LB route, and want to have name references >>>>>>>> then you could do them everywhere to make it all easy. >>>>>>>> >>>>>>>>> >>>>>>>>> Arun >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> After you do that you should be able to stick pi1.example and >>>>>>>>>> pi2.example in the browser. >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Jason T. Greene >>>>>>>>>> WildFly Lead / JBoss EAP Platform Architect >>>>>>>>>> JBoss, a division of Red Hat >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> http://blog.arungupta.me >>>>>>>>> http://twitter.com/arungupta >>>>>>>> >>>>>>>> -- >>>>>>>> Jason T. Greene >>>>>>>> WildFly Lead / JBoss EAP Platform Architect >>>>>>>> JBoss, a division of Red Hat >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> http://blog.arungupta.me >>>>>>> http://twitter.com/arungupta >>>>> >>>>> >>>>> >>>>> -- >>>>> http://blog.arungupta.me >>>>> http://twitter.com/arungupta >>>> >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >>> >> >> >> >> -- >> http://blog.arungupta.me >> http://twitter.com/arungupta > > -- > Jason T. 
Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > -- http://blog.arungupta.me http://twitter.com/arungupta From darran.lofthouse at jboss.com Tue Jun 3 14:25:24 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Tue, 03 Jun 2014 19:25:24 +0100 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E09F6.6080701@redhat.com> References: <538E09F6.6080701@redhat.com> Message-ID: <538E1314.2030101@jboss.com> On 03/06/14 18:46, Stan Silvert wrote: > In response to Jason's request for design proposals on this list, here > is the proposal for Keycloak SSO in WildFly 9. > > Background > ----------------- > A major part of our console unification effort is to allow various JBoss > management consoles to take advantage of Single Signon. To achieve this > end, Keycloak SSO[1] will be integrated into the WildFly 9 platform. > The first management consoles to use Keycloak will likely be the WildFly > Web Console[2] and the JON Console. > > > Proof of Concept > ----------------------- > A hacked-up proof of concept is available at > https://github.com/ssilvert/wildfly/tree/kcauth. It demonstrates a > WildFly standalone server using Keycloak for both authentication and > authorization in the Web Console. It also shows single signon between > the Web Console and the Keycloak Admin application. See > https://github.com/ssilvert/wildfly/blob/kcauth/keycloak/KeycloakSetup.txt > for details. > > One interesting finding of the POC is that the Keycloak integration > required no changes to the WildFly Web Console. All the integration is > done on the server side and the GWT client works perfectly as-is. > > > Relation to the Elytron and other WildFly 9 changes > ------------------------------------------------------------------------ > Keycloak is expected to use Elytron at a low level. Nothing at the > Keycloak integration level should be affected by the Elytron project. 
> > However, there are many other expected changes to security that may > affect how Keycloak is integrated. It is likely that the initial > integration of Keycloak will happen before these aforementioned > changes. This will be an advantage as the unit tests for Keycloak > integration can help to validate these changes. One important change we will need to get in first is the splitting of the contexts that serve the management requests; the existing /management context needs to remain supporting the existing authentication mechanisms with cross-origin restrictions left on. > > Default Authentication Mechanism > ------------------------------------------------ > Keycloak is a very new technology. Given that security is so vital, we > need time for Keycloak to mature. When Keycloak is first integrated, it > will not be the default authentication/authorization mechanism for the > WildFly Web Console. However, selecting Keycloak for authentication > should be as simple as executing one or two CLI commands. > > We can switch to Keycloak as the default whenever we all believe that > both Keycloak itself and its integration into WildFly are ready for > prime time. Hopefully, that will just be a matter of months after first > integration. We also need to have SSL out of the box, or as soon as possible after, before this problem can be considered solved. Even then, how much does it make sense for each app server installation to have its own SSO infrastructure? > Initial Integration > ------------------------ > The initial integration for most of Keycloak will only be available on > standalone. However, on a domain controller, the WildFly Web Console > will still be able to use Keycloak for authentication and > authorization. In this case, the domain controller must be able to > reach a Keycloak Authentication Server somewhere on the network.
> > > Keycloak Authentication Server and Admin Console > ----------------------------------------------------------------------- > The Keycloak Authentication Server is responsible for authenticating and > authorizing users. The Keycloak Admin Console is an AngularJS UI that > administrators use to manage users, roles, sessions, passwords, assigned > applications, etc. > > Both the auth server and admin console are served from the same WAR. It > should be possible to deploy this without using a WAR or servlets, but > that is not planned for the initial WildFly integration. Because of > this current limitation, the auth server and admin console will not be > present in a domain controller. This goes against the current design of AS7/WildFly: exposing management-related operations over the management interface and leaving the web container purely for a user's deployments. > > Keycloak Database > -------------------------- > The Keycloak database contains all the server-side configuration data > such as the data manipulated by the Keycloak Admin Console. By default, > it is an H2 database that lives in the standalone/data directory. It is > created when the auth server is first deployed. > > This database will be initialized with a single "admin" user who has all > rights within the Keycloak Admin Console and within the WildFly Web > Console. On first login, the admin user must change his password. > > By default, both consoles will be in the same master realm so that users > can potentially do single signon and move freely between them. > > H2 is not recommended for production use. Keycloak has tools available > to migrate data to another database. > > > Keycloak Adapter > ------------------------ > A Keycloak adapter is a bit of software that is attached to an > application deployment. This software knows how to talk to the Keycloak > auth server. It handles the OAuth protocol from the client side. 
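The client-side half of that OAuth authorization-code exchange can be sketched roughly as follows. This is a hypothetical helper, not the actual Keycloak adapter API; the URLs, client id, and method names are illustrative only, and the token exchange step is omitted:

```java
// Hypothetical sketch of what an adapter does on the client side of the
// OAuth 2 authorization-code flow: redirect an unauthenticated user to the
// auth server, then pull the one-time code off the callback request.
public class AdapterFlowSketch {

    // Build the redirect that sends the user to the auth server's login page.
    // (Real code would URL-encode the parameter values.)
    public static String loginRedirect(String authServer, String clientId, String redirectUri) {
        return authServer + "/auth?response_type=code"
             + "&client_id=" + clientId
             + "&redirect_uri=" + redirectUri;
    }

    // Extract the one-time "code" parameter from the callback query string;
    // the adapter would then exchange it for tokens (exchange omitted here).
    public static String codeFromCallback(String query) {
        for (String param : query.split("&")) {
            if (param.startsWith("code=")) {
                return param.substring("code=".length());
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(loginRedirect("http://localhost:8080/auth",
                                         "web-console",
                                         "http://localhost:9990/console"));
        System.out.println(codeFromCallback("state=xyz&code=abc123"));
    }
}
```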
> > In the case of the WildFly Web Console, the adapter will be a pure > Undertow, non-servlet adapter. The reason for using a pure Undertow > adapter instead of the current Keycloak WildFly adapter is that the > latter adapter relies on the Servlet API, which is forbidden on a domain > controller. The proof of concept mentioned above contains the code > needed for a pure Undertow adapter. This code will likely be migrated > into the Keycloak project. > > > Keycloak Adapter Configuration > ------------------------------------------- > A Keycloak adapter configuration is a json or DMR representation of an > application's client-side Keycloak configuration. It is used by the > adapter to find data such as public keys and the location of the auth > server. (From the POC, see master->Applications->web-console->Installation) > > In the case of WildFly Web Console, we actually have two application > endpoints that need to be configured and protected by Keycloak. These > are the GWT-based UI and the http management endpoint that accepts DMR > operations. The Keycloak configuration for these applications will live > in the DMR management tree under SuperUser access. This restrictive > access only applies to the Keycloak adapter configuration. Any RBAC > role can be assigned to a user of the WildFly Web Console via the > Keycloak Admin Console. > > Note that the proof of concept still uses json files for the adapter > configuration. The real implementation will need to store the > configuration in the DMR management tree so that it can be maintained by > CLI or the WildFly Web Console. > > > Questions for further discussion > -------------------------------------------- > 1. WildFly ships with pre-defined RBAC roles. [3] Should these roles > be available at the realm level or only to the WildFly Web Console? > Could/should other consoles make use of these roles? 
The direction we are moving towards with the wildfly-elytron work is that role assignment / mapping is something that happens at the point a call reaches a secured resource, and it will be in the context of that secured resource. As an example, a user could call one deployment and be assigned one set of roles; that same user could call a different deployment and have a completely different set of roles. For simplicity we will allow a 1:1 mapping from what was loaded from the identity store to roles, but that will not always be the case. > 2. On first login, you are required to change the admin password. What > other initial setup should be required? Change realm public key? > Change client secret? Others? This is something that would be required to happen at the command line; a connection from a web browser could not be trusted to perform this. > 3. In the POC, the Keycloak Auth Server WAR is extracted into the > standalone/deployments directory. Are there better options? Should it > even be deployed by default? As I say above, this goes against the current architecture of separating management from apps. > 4. By default, what Login Options should be enabled for the master > realm? Currently, these options are social login, user registration, > forget password, remember me, verify email, direct grant API, and > require SSL. > > 5. Should Keycloak audit log be enabled by default? If so, what should > be the expiration value? > > 6. What should the initial password policy be? (length, mixed case, > special chars, etc.) The password policy is warn-only for weak passwords; administrators can override it and choose whatever they want. 
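A warn-only policy of that kind might look roughly like this. This is a hypothetical sketch, not Keycloak's actual policy API; the class, rule set, and messages are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a warn-only password policy: weak passwords produce warnings
// but are never rejected, so an administrator can still choose whatever
// password they want.
public class WarnOnlyPasswordPolicy {

    // Returns the list of warnings for a candidate password.
    // The list is informational only; the password is always accepted.
    public static List<String> warnings(String password) {
        List<String> w = new ArrayList<>();
        if (password.length() < 8) {
            w.add("shorter than 8 characters");
        }
        if (password.equals(password.toLowerCase())) {
            w.add("no upper-case character");
        }
        if (!password.matches(".*\\d.*")) {
            w.add("no digit");
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(warnings("admin"));      // three warnings, still accepted
        System.out.println(warnings("S3cretPass")); // no warnings
    }
}
```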
> > [1] http://keycloak.jboss.org/ > http://docs.jboss.org/keycloak/docs/1.0-beta-1/userguide/html_single/index.html > [2] https://github.com/hal/core > [3] > http://planet.jboss.org/post/role_based_access_control_in_wildfly_8_tech_tip_120 > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From rachmato at redhat.com Tue Jun 3 15:23:05 2014 From: rachmato at redhat.com (Richard Achmatowicz) Date: Tue, 03 Jun 2014 15:23:05 -0400 Subject: [wildfly-dev] Service assumptions and the web profile Message-ID: <538E2099.8010207@redhat.com> Hi A general question on which services we can assume are available in a server configuration and which we cannot. When installing services, we often need to add in service dependencies. If a dependency is marked as required and it does not exist, the service will not start correctly. So, when setting up a service and its dependencies, if possible, I would like to know which dependencies are guaranteed to be available and which I may need to optionally check for. The OPTIONAL flag for dependencies was meant to address this, but it is now deprecated as it doesn't work so well when you are unlucky enough to have your dependency start after your dependent service. The web profile is intended to be a slimmed-down version of the full profile, and in the case of the EJB subsystem, the spec says that it need not implement certain EJB feature subsets, among which are asynchronous method invocations, the timer service and remote invocations. However, our web profile EJB subsystem includes all of these. It is conceivable that an admin would want to create a slimmed-down version of the EJB subsystem and remove some of these services. All three of these can be easily removed by deleting configuration elements. What to do here? 
- assume that all subsystems and services defined in the shipped web profile will be present and no dependency checking is required? - assume that a certain minimal subset of subsystems and services defined in the shipped web profile will be present and that an admin may "turn off" some services, for example the features not part of EJB Lite and so some form of dependency checking is required? And which services can we assume are present? In the case of the EJB subsystem, I would expect that dependencies like the Remoting system endpoint can be assumed to be present, but the optional features above and beyond EJB Lite may not be. But this is all pretty much ad hoc. Any thoughts? From david.lloyd at redhat.com Tue Jun 3 15:43:28 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Tue, 03 Jun 2014 14:43:28 -0500 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: <538E2099.8010207@redhat.com> References: <538E2099.8010207@redhat.com> Message-ID: <538E2560.4000303@redhat.com> I think everyone is probably familiar with the danger of optional dependencies, and the reason they've been deprecated, but I'll reiterate it just to make sure nobody has lingering questions about it. When a service A has an optional dependency on service B, whether or not that dependency is filled depends not on the simple presence of B, but also on whether B is installed before A. This is a necessary consequence of our algorithm, which in turn is what gives us the performance that we have (which is tied directly to the algorithmic complexity of the MSC service installation process). So given that we have to externally control the order of installation, such that we know B will come before A, there's actually no reason to even have optional dependencies, since if you already know that B is missing, you can simply opt to exclude the dependency in the first place. 
Optional dependencies merely become a minor convenience, saving you from having to do one "if" statement, at this point. Yet their mere presence has resulted in 100% misuse. So, there is a different preferred strategy. Here's how it is supposed to work, conceptually. Imagine I have a subsystem X, which optionally consumes subsystem Y. The correct time to detect the presence or absence of Y is related to the lifecycle of the management model data. When X is added to the model, and that transaction completes, we know that we have X with no Y. All the services produced on behalf of X would then statically be aware of this information, and would elide any dependency on Y. One obvious question with this scheme is: what happens when Y is added, after the fact? The answer to this question depends on X. X must, in the same transaction, detect the addition of Y and decide what to do. Several actions are possible, depending on how X works and how its dependency on Y works. Options include automatically removing and rebuilding services with the new dependency, retroactively modifying X in some way so as to update it without stopping it, or simply ignoring the addition of Y, using a "reload-required" or similar state to indicate to the user that the running system is inconsistent with the stored model. I know we don't really have any APIs to directly facilitate these concepts, but this is a part of what the management SPI redesign is all about. In the new model, one will be able to express optional dependencies at a resource level rather than a service level. On 06/03/2014 02:23 PM, Richard Achmatowicz wrote: > Hi > > A general question on which services we can assume are available in a > sever configuration and which we cannot. > > When installing services, we often need to add in service dependencies. > If a dependency is marked as required and it does not exist, the service > will not start correctly. 
So, when setting up a service and its > dependencies, if possible, I would like to know which dependencies are > guaranteed to be available and which I may need to optionally check for. > The OPTIONAL flag for dependencies was meant to address this but it now > deprecated as it doesn't work so well when you are unlucky enough to > have your dependency start after your dependent service. > > The web profile is intended to be a slimmed down version of the full > profile, and in the case of the EJB subsystem, the spec says that it > need not implement certain EJB feature subsets, among which are > asynchronous method invocations, timer service and remote invocations. > However, our web profile EJB subsystem includes all of these. It is > conceivable that an admin would want to create a slimmed down version of > the EJB subsystem and remove some of these services. All three of these > can be easily removed by deleting configuration elements. > > What to do here? > - assume that all subsystems and services defined in the shipped web > profile will be present and no dependency checking is required? > - assume that a certain minimal subset of subsystems and services > defined in the shipped web profile will be present and that an admin may > "turn off" some services, for example the features not part of EJB Lite > and so some form of dependency checking is required? And which services > can we assume are present? > > In the case of the EJB subsystem, I would expect that dependencies like > the Remoting system endpoint can be assumed to be present, but the > optional features above and beyond EJB Lite may not be. But this is all > pretty much ad hoc. > > Any thoughts? 
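The install-order sensitivity described above can be illustrated with a toy container. These are hypothetical classes, not the real jboss-msc API; the point is only that an "optional" dependency is wired if and only if the dependency is already installed when the dependent service is installed:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Toy service container (NOT the MSC API) showing why optional-dependency
// resolution depends on installation order, not mere presence.
class ToyContainer {
    private final Set<String> installed = new LinkedHashSet<>();
    private final Map<String, Boolean> optionalResolved = new HashMap<>();

    // Install 'name'; its optional dependency is wired only when 'optionalDep'
    // is already present at installation time. Later additions of the
    // dependency are never noticed, mirroring the behavior described above.
    void install(String name, String optionalDep) {
        installed.add(name);
        if (optionalDep != null) {
            optionalResolved.put(name, installed.contains(optionalDep));
        }
    }

    boolean sawDependency(String name) {
        return optionalResolved.getOrDefault(name, false);
    }
}

public class OptionalDepDemo {
    // Install A (which optionally depends on B) with B either before or after.
    public static boolean demo(boolean bFirst) {
        ToyContainer c = new ToyContainer();
        if (bFirst) c.install("B", null);
        c.install("A", "B");
        if (!bFirst) c.install("B", null); // too late: A never re-links
        return c.sawDependency("A");
    }

    public static void main(String[] args) {
        System.out.println("B first: A sees B = " + demo(true));
        System.out.println("A first: A sees B = " + demo(false));
    }
}
```

Since the installer must already know whether B is present to get the order right, the optional flag saves nothing over simply omitting the dependency, which is the point made above.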
-- - DML From jason.greene at redhat.com Tue Jun 3 15:51:58 2014 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 3 Jun 2014 14:51:58 -0500 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: <538E2099.8010207@redhat.com> References: <538E2099.8010207@redhat.com> Message-ID: Yes just to be clear: OPTIONAL SHOULD NEVER BE USED FOR CROSS SUBSYSTEM DEPENDENCIES! It's very hard to use OPTIONAL correctly, so avoid it like the plague (this is why it's deprecated). There are other strategies that work in many cases. For example you can use PASSIVE services. However, the thing we are really missing is cross-subsystem negotiation (capabilities). This is actually a trivial thing to implement, Kabir had a prototype at one point, and we should introduce it in 9. The classic problem example is jacorb and tx. They require circular configuration. The general notion is that you have a set of detyped flags that convey various configuration-relevant info like: tx subsystem: SUPPORTS_TRANSACTIONS SUPPORTS_JTS corba subsystem: SUPPORTS_CORBA These can be assembled quickly after parsing but before services are assembled. Then subsystem service assembly can begin in parallel, as today. So for example, corba can see that JTS is enabled, and install the proper interceptors. TX can see Corba and create the proper service dependency. On Jun 3, 2014, at 2:23 PM, Richard Achmatowicz wrote: > Hi > > A general question on which services we can assume are available in a > sever configuration and which we cannot. > > When installing services, we often need to add in service dependencies. > If a dependency is marked as required and it does not exist, the service > will not start correctly. So, when setting up a service and its > dependencies, if possible, I would like to know which dependencies are > guaranteed to be available and which I may need to optionally check for. 
> The OPTIONAL flag for dependencies was meant to address this but it now > deprecated as it doesn't work so well when you are unlucky enough to > have your dependency start after your dependent service. > > The web profile is intended to be a slimmed down version of the full > profile, and in the case of the EJB subsystem, the spec says that it > need not implement certain EJB feature subsets, among which are > asynchronous method invocations, timer service and remote invocations. > However, our web profile EJB subsystem includes all of these. It is > conceivable that an admin would want to create a slimmed down version of > the EJB subsystem and remove some of these services. All three of these > can be easily removed by deleting configuration elements. > > What to do here? > - assume that all subsystems and services defined in the shipped web > profile will be present and no dependency checking is required? > - assume that a certain minimal subset of subsystems and services > defined in the shipped web profile will be present and that an admin may > "turn off" some services, for example the features not part of EJB Lite > and so some form of dependency checking is required? And which services > can we assume are present? > > In the case of the EJB subsystem, I would expect that dependencies like > the Remoting system endpoint can be assumed to be present, but the > optional features above and beyond EJB Lite may not be. But this is all > pretty much ad hoc. > > Any thoughts? > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Jason T. 
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From brian.stansberry at redhat.com Tue Jun 3 16:01:32 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 03 Jun 2014 15:01:32 -0500 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: <538E2560.4000303@redhat.com> References: <538E2099.8010207@redhat.com> <538E2560.4000303@redhat.com> Message-ID: <538E299C.2080509@redhat.com> On 6/3/14, 2:43 PM, David M. Lloyd wrote: > I think everyone is probably familiar with the danger of optional > dependencies, and the reason they've been deprecated, but I'll reiterate > it just to make sure nobody has lingering questions about it. > > When a service A has an optional dependency on service B, whether or not > that dependency is filled depends not on the simple presence of B, but > also on whether B is installed before A. This is a necessary > consequence of our algorithm, which in turn is what gives us the > performance that we have (which is tied directly to the algorithmic > complexity of the MSC service installation process). > > So given that we have to externally control the order of installation, > such that we know B will come before A, there's actually no reason to > even have optional dependencies, since if you already know that B is > missing, you can simply opt to exclude the dependency in the first > place. Optional dependencies merely become a minor convenience, saving > you from having to do one "if" statement, at this point. A bit more than that though, as it's really if (readModelFromOtherSubsystemToSeeIfFooIsSet(...)) > Yet their mere > presence has resulted in 100% misuse. > > So, there is a different preferred strategy. Here's how it is supposed > to work, conceptually. > > Imagine I have a subsystem X, which optionally consumes subsystem Y. > The correct time to detect the presence or absence of Y is related to > the lifecycle of the management model data. 
> When X is added to the > model, and that transaction completes, we know that we have X with no Y. This should be done in the OperationStepHandler that adds the service, in Stage.RUNTIME. In Stage.RUNTIME you know that all changes that will be made to the model during the current transaction are present, even during boot when we do a lot of work concurrently. > All the services produced on behalf of X would then statically be > aware of this information, and would elide any dependency on Y. > > One obvious question with this scheme is: what happens when Y is added, > after the fact? The answer to this question depends on X. X must, in > the same transaction, detect the addition of Y and decide what to do. > Several actions are possible, depending on how X works and how its > dependency on Y works. Options include automatically removing and > rebuilding services with the new dependency, retroactively modifying X > in some way so as to update it without stopping it, or simply ignoring > the addition of Y, using a "reload-required" or similar state to > indicate to the user that the running system is inconsistent with the > stored model. > > I know we don't really have any APIs to directly facilitate these > concepts, but this is a part of what the management SPI redesign is all > about. In the new model, one will be able to express optional > dependencies at a resource level rather than a service level. Jeff Mesnil -- I'm curious how useful the internal notification stuff you've added in the existing code will be for this use case. Prior to that there was nothing at all that X could count on to become aware of the later addition of Y. > > On 06/03/2014 02:23 PM, Richard Achmatowicz wrote: >> Hi >> >> A general question on which services we can assume are available in a >> sever configuration and which we cannot. >> >> When installing services, we often need to add in service dependencies. 
>> If a dependency is marked as required and it does not exist, the service >> will not start correctly. So, when setting up a service and its >> dependencies, if possible, I would like to know which dependencies are >> guaranteed to be available and which I may need to optionally check for. >> The OPTIONAL flag for dependencies was meant to address this but it now >> deprecated as it doesn't work so well when you are unlucky enough to >> have your dependency start after your dependent service. >> >> The web profile is intended to be a slimmed down version of the full >> profile, and in the case of the EJB subsystem, the spec says that it >> need not implement certain EJB feature subsets, among which are >> asynchronous method invocations, timer service and remote invocations. >> However, our web profile EJB subsystem includes all of these. It is >> conceivable that an admin would want to create a slimmed down version of >> the EJB subsystem and remove some of these services. All three of these >> can be easily removed by deleting configuration elements. >> >> What to do here? >> - assume that all subsystems and services defined in the shipped web >> profile will be present and no dependency checking is required? >> - assume that a certain minimal subset of subsystems and services >> defined in the shipped web profile will be present and that an admin may >> "turn off" some services, for example the features not part of EJB Lite >> and so some form of dependency checking is required? And which services >> can we assume are present? >> >> In the case of the EJB subsystem, I would expect that dependencies like >> the Remoting system endpoint can be assumed to be present, but the >> optional features above and beyond EJB Lite may not be. But this is all >> pretty much ad hoc. >> >> Any thoughts? 
> > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Tue Jun 3 16:06:27 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 03 Jun 2014 15:06:27 -0500 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: References: <538E2099.8010207@redhat.com> Message-ID: <538E2AC3.5090302@redhat.com> On 6/3/14, 2:51 PM, Jason Greene wrote: > Yes just to be clear: > OPTIONAL SHOULD NEVER BE USED FOR CROSS SUBSYSTEM DEPENDENCIES! > > It's very hard to use OPTIONAL correctly, so avoid it like the plague (this is why it's deprecated). > > There are other strategies that work in many cases. For example you can use PASSIVE services. > > However, the thing we are really missing is cross-subsystem negotiation (capabilities). This is actually a trivial thing to implement, Kabir had a prototype at one point, and we should introduce it in 9. > > The classic problem example is jacorb and tx. They require circular configuration. > > The general notion is that you have a set of detyped flags that convey various configuration relevant info like: > > tx subsystem: SUPPORTS_TRANSACTIONS > SUPPORTS_JTS > > corba subsytem: SUPPORTS_CORBA > > These can be assembled in quickly after parsing but before services are assembled. Then subsystem service assembly can begin in parallel, as today. So for example, corba can see that JTS is enabled, and install the proper interceptors. TX can see Corba and create the proper service dependency. > To expand a bit on what you said, the way parallel boot works is that all the MODEL stage work is done in parallel. All of it completes before the RUNTIME work starts. So this data can (and logically should) be populated during the MODEL stage. The service assembly work that needs it is done in RUNTIME, and can count on the data being available. 
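The two-stage boot described above can be sketched roughly as follows. This is hypothetical code, not the WildFly kernel API: capability flags are published concurrently during a MODEL stage, and only after a barrier does the RUNTIME stage read them to decide service wiring:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of parallel boot: every subsystem's MODEL-stage work runs in
// parallel and publishes capability flags; RUNTIME-stage work starts only
// after all MODEL work completes, so the flags are guaranteed present.
public class CapabilityBootDemo {
    static final Map<String, Boolean> capabilities = new ConcurrentHashMap<>();

    public static boolean boot() {
        // MODEL stage: subsystems register what they support, concurrently.
        Thread tx = new Thread(() -> capabilities.put("SUPPORTS_JTS", true));
        Thread corba = new Thread(() -> capabilities.put("SUPPORTS_CORBA", true));
        tx.start();
        corba.start();
        try {
            tx.join();     // barrier: MODEL stage fully completes
            corba.join();  // before any RUNTIME work begins
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        // RUNTIME stage: corba can now rely on the tx flag being present
        // and decide whether to install JTS interceptors.
        return capabilities.getOrDefault("SUPPORTS_JTS", false);
    }

    public static void main(String[] args) {
        System.out.println("install JTS interceptors = " + boot());
    }
}
```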
> On Jun 3, 2014, at 2:23 PM, Richard Achmatowicz wrote: > >> Hi >> >> A general question on which services we can assume are available in a >> sever configuration and which we cannot. >> >> When installing services, we often need to add in service dependencies. >> If a dependency is marked as required and it does not exist, the service >> will not start correctly. So, when setting up a service and its >> dependencies, if possible, I would like to know which dependencies are >> guaranteed to be available and which I may need to optionally check for. >> The OPTIONAL flag for dependencies was meant to address this but it now >> deprecated as it doesn't work so well when you are unlucky enough to >> have your dependency start after your dependent service. >> >> The web profile is intended to be a slimmed down version of the full >> profile, and in the case of the EJB subsystem, the spec says that it >> need not implement certain EJB feature subsets, among which are >> asynchronous method invocations, timer service and remote invocations. >> However, our web profile EJB subsystem includes all of these. It is >> conceivable that an admin would want to create a slimmed down version of >> the EJB subsystem and remove some of these services. All three of these >> can be easily removed by deleting configuration elements. >> >> What to do here? >> - assume that all subsystems and services defined in the shipped web >> profile will be present and no dependency checking is required? >> - assume that a certain minimal subset of subsystems and services >> defined in the shipped web profile will be present and that an admin may >> "turn off" some services, for example the features not part of EJB Lite >> and so some form of dependency checking is required? And which services >> can we assume are present? 
>> >> In the case of the EJB subsystem, I would expect that dependencies like >> the Remoting system endpoint can be assumed to be present, but the >> optional features above and beyond EJB Lite may not be. But this is all >> pretty much ad hoc. >> >> Any thoughts? >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- > Jason T. Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Tue Jun 3 16:15:14 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 03 Jun 2014 15:15:14 -0500 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: References: <538E2099.8010207@redhat.com> Message-ID: <538E2CD2.3090407@redhat.com> On 6/3/14, 2:51 PM, Jason Greene wrote: > Yes just to be clear: > OPTIONAL SHOULD NEVER BE USED FOR CROSS SUBSYSTEM DEPENDENCIES! > > It's very hard to use OPTIONAL correctly, so avoid it like the plague (this is why it's deprecated). > > There are other strategies that work in many cases. For example you can use PASSIVE services. > > However, the thing we are really missing is cross-subsystem negotiation (capabilities). This is actually a trivial thing to implement, Kabir had a prototype at one point, and we should introduce it in 9. > > The classic problem example is jacorb and tx. They require circular configuration. 
> > The general notion is that you have a set of detyped flags that convey various configuration relevant info like: > > tx subsystem: SUPPORTS_TRANSACTIONS > SUPPORTS_JTS > > corba subsytem: SUPPORTS_CORBA > > These can be assembled in quickly after parsing but before services are assembled. Then subsystem service assembly can begin in parallel, as today. So for example, corba can see that JTS is enabled, and install the proper interceptors. TX can see Corba and create the proper service dependency. > Perhaps we should do something more sophisticated than a simple string. The string is fine for providing information that something is present, but I don't like the way things work once that info is present. Too much stuff where subsystem X is depending on details of subsystem Y to wire things up. I'd like to see this stuff converted into proper APIs exposed by each subsystem. If tx and corba associated an impl of a relevant API with those SUPPORTS... keys, that's more useful. > On Jun 3, 2014, at 2:23 PM, Richard Achmatowicz wrote: > >> Hi >> >> A general question on which services we can assume are available in a >> sever configuration and which we cannot. >> >> When installing services, we often need to add in service dependencies. >> If a dependency is marked as required and it does not exist, the service >> will not start correctly. So, when setting up a service and its >> dependencies, if possible, I would like to know which dependencies are >> guaranteed to be available and which I may need to optionally check for. >> The OPTIONAL flag for dependencies was meant to address this but it now >> deprecated as it doesn't work so well when you are unlucky enough to >> have your dependency start after your dependent service. 
>> >> The web profile is intended to be a slimmed down version of the full >> profile, and in the case of the EJB subsystem, the spec says that it >> need not implement certain EJB feature subsets, among which are >> asynchronous method invocations, timer service and remote invocations. >> However, our web profile EJB subsystem includes all of these. It is >> conceivable that an admin would want to create a slimmed down version of >> the EJB subsystem and remove some of these services. All three of these >> can be easily removed by deleting configuration elements. >> >> What to do here? >> - assume that all subsystems and services defined in the shipped web >> profile will be present and no dependency checking is required? >> - assume that a certain minimal subset of subsystems and services >> defined in the shipped web profile will be present and that an admin may >> "turn off" some services, for example the features not part of EJB Lite >> and so some form of dependency checking is required? And which services >> can we assume are present? >> >> In the case of the EJB subsystem, I would expect that dependencies like >> the Remoting system endpoint can be assumed to be present, but the >> optional features above and beyond EJB Lite may not be. But this is all >> pretty much ad hoc. >> >> Any thoughts? >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- > Jason T. 
Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From ssilvert at redhat.com Tue Jun 3 16:19:59 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Tue, 03 Jun 2014 16:19:59 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E1314.2030101@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: <538E2DEF.8060400@redhat.com> On 6/3/2014 2:25 PM, Darran Lofthouse wrote: > On 03/06/14 18:46, Stan Silvert wrote: >> Relation to the Elytron and other WildFly 9 changes >> ------------------------------------------------------------------------ >> Keycloak is expected to use Elytron at a low level. Nothing at the >> Keycloak integration level should be affected by the Elytron project. >> >> However, there are many other expected changes to security that may >> effect how Keycloak is integrated. It is likely that the initial >> integration of Keycloak will happen before these aforementioned >> changes. This will be an advantage as the unit tests for Keycloak >> integration can help to validate these changes. > One important change we will need to get in first is the splitting of > the contexts that server the management requests, the existing > /management context needs to remain supporting the existing > authentication mechanisms with cross origin restrictions left on. This will be fine. There is presently no reliance on CORS. > >> Default Authentication Mechanism >> ------------------------------------------------ >> Keycloak is a very new technology. Given that security is so vital, we >> need time for Keycloak to mature. 
When Keycloak is first integrated, it >> will not be the default authentication/authorization mechanism for the >> WildFly Web Console. However, selecting Keycloak for authentication >> should be as simple as executing one or two CLI commands. >> >> We can switch to Keycloak as the default whenever we all believe that >> both Keycloak itself and its integration into WildFly are ready for >> prime time. Hopefully, that will just be a matter of months after first >> integration. > We also need to have SSL out of the box, or as soon as possible after > the problem is solved. +1 > Even then, how much does it make sense for each app > server installation to have its own SSO infrastructure? It's important to at least have it available as an option to turn on. In production, the SSO infrastructure wouldn't be live on every instance. Also, Keycloak is much more than just SSO infrastructure. Other features like user management, password management, auditing, skinning, and the nice UI make it an excellent choice for applications that don't require SSO. Who wants to keep coding all that stuff by hand? > >> Initial Integration >> ------------------------ >> The initial integration for most of Keycloak will only be available on >> standalone. However, on a domain controller, the WildFly Web Console >> will still be able to use Keycloak for authentication and >> authorization. In this case, the domain controller must be able to >> reach a Keycloak Authentication Server somewhere on the network. >> >> >> Keycloak Authentication Server and Admin Console >> ----------------------------------------------------------------------- >> The Keycloak Authentication Server is responsible for authenticating and >> authorizing users. The Keycloak Admin Console is an AngularJS UI that >> administrators use to manage users, roles, sessions, passwords, assigned >> applications, etc. >> >> Both the auth server and admin console are served from the same WAR.
It >> should be possible to deploy this without using a WAR or servlets, but >> that is not planned for the initial WildFly integration. Because of >> this current limitation, the auth server and admin console will not be >> present in a domain controller. > This is going against the current design of AS7/WildFly exposing > management related operations over the management interface and leaving > the web container to be purely about a user's deployments. The auth server and admin console don't necessarily need to be deployed as a WAR. It's an AngularJS app, so we could make it work exactly the same way the web console does. There is also a middle ground where we don't expose the fact that it's a WAR. I think JON does something like that? This is a big discussion we will need to have. > >> Keycloak Database >> -------------------------- >> The Keycloak database contains all the server-side configuration data >> such as the data manipulated by the Keycloak Admin Console. By default, >> it is an H2 database that lives in the standalone/data directory. It is >> created when the auth server is first deployed. >> >> This database will be initialized with a single "admin" user who has all >> rights within the Keycloak Admin Console and within the WildFly Web >> Console. On first login, the admin user must change his password. >> >> By default, both consoles will be in the same master realm so that users >> can potentially do single sign-on and move freely between them. >> >> H2 is not recommended for production use. Keycloak has tools available >> to migrate data to another database. >> >> >> Keycloak Adapter >> ------------------------ >> A Keycloak adapter is a bit of software that is attached to an >> application deployment. This software knows how to talk to the Keycloak >> auth server. It handles the OAuth protocol from the client side. >> >> In the case of the WildFly Web Console, the adapter will be a pure >> Undertow, non-servlet adapter.
The reason for using a pure Undertow >> adapter instead of the current Keycloak WildFly adapter is that the >> latter adapter relies on the Servlet API, which is forbidden on a domain >> controller. The proof of concept mentioned above contains the code >> needed for a pure Undertow adapter. This code will likely be migrated >> into the Keycloak project. >> >> >> Keycloak Adapter Configuration >> ------------------------------------------- >> A Keycloak adapter configuration is a json or DMR representation of an >> application's client-side Keycloak configuration. It is used by the >> adapter to find data such as public keys and the location of the auth >> server. (From the POC, see master->Applications->web-console->Installation) >> >> In the case of WildFly Web Console, we actually have two application >> endpoints that need to be configured and protected by Keycloak. These >> are the GWT-based UI and the http management endpoint that accepts DMR >> operations. The Keycloak configuration for these applications will live >> in the DMR management tree under SuperUser access. This restrictive >> access only applies to the Keycloak adapter configuration. Any RBAC >> role can be assigned to a user of the WildFly Web Console via the >> Keycloak Admin Console. >> >> Note that the proof of concept still uses json files for the adapter >> configuration. The real implementation will need to store the >> configuration in the DMR management tree so that it can be maintained by >> CLI or the WildFly Web Console. >> >> >> Questions for further discussion >> -------------------------------------------- >> 1. WildFly ships with pre-defined RBAC roles. [3] Should these roles >> be available at the realm level or only to the WildFly Web Console? >> Could/should other consoles make use of these roles? 
> The direction we are moving towards with the wildfly-elytron work is > that role assignment / mapping is something that happens at the point a > call reaches a secured resource and will be in the context of that > secure resource. As an example a user could call one deployment and be > assigned one set of roles, that same user could call a different > deployment and have a completely different set of roles - for simplicity > we will allow a 1:1 mapping from what was loaded from the identity store > to roles but that will not always be the case. > >> 2. On first login, you are required to change the admin password. What >> other initial setup should be required? Change realm public key? >> Change client secret? Others? > This is something that would be required to happen at the command line, > a connection from a web browser could not be trusted to perform this. > >> 3. In the POC, the Keycloak Auth Server WAR is extracted into the >> standalone/deployments directory. Are there better options? Should it >> even be deployed by default? > As I say above this goes against the current architecture of separating > management from apps. > >> 4. By default, what Login Options should be enabled for the master >> realm? Currently, these options are social login, user registration, >> forgot password, remember me, verify email, direct grant API, and >> require SSL. >> >> 5. Should Keycloak audit log be enabled by default? If so, what should >> be the expiration value? >> >> 6. What should the initial password policy be? (length, mixed case, >> special chars, etc.) > Password policy is warn only for weak passwords, administrators can > override and choose whatever they want.
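The kind of length/mixed-case/special-character policy raised in question 6 above, combined with the "warn only" behaviour Darran describes, can be sketched as follows. This is an illustrative model only, not Keycloak's actual policy implementation; the function and rule names are made up:

```python
import re

def policy_warnings(password, min_length=8):
    """Return a list of warnings for a weak password. A warn-only policy
    reports these to the administrator but still accepts the password."""
    warnings = []
    if len(password) < min_length:
        warnings.append("shorter than %d characters" % min_length)
    # Mixed case means at least one lowercase and one uppercase letter.
    if not re.search(r"[a-z]", password) or not re.search(r"[A-Z]", password):
        warnings.append("no mixed case")
    # Any non-alphanumeric character counts as a special character here.
    if not re.search(r"[^a-zA-Z0-9]", password):
        warnings.append("no special characters")
    return warnings

print(policy_warnings("admin"))        # weak: several warnings
print(policy_warnings("S3cur3!Pass"))  # passes all three checks
```

The point of the warn-only design is that the checks and the decision are decoupled: an administrator can override the warnings, as described above.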
> >> [1] http://keycloak.jboss.org/ >> http://docs.jboss.org/keycloak/docs/1.0-beta-1/userguide/html_single/index.html >> [2] https://github.com/hal/core >> [3] >> http://planet.jboss.org/post/role_based_access_control_in_wildfly_8_tech_tip_120 >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From bburke at redhat.com Tue Jun 3 16:27:25 2014 From: bburke at redhat.com (Bill Burke) Date: Tue, 03 Jun 2014 16:27:25 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E1314.2030101@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: <538E2FAD.9020902@redhat.com> On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >> Both the auth server and admin console are served from the same WAR. It >> should be possible to deploy this without using a WAR or servlets, but >> that is not planned for the initial WildFly integration. Because of >> this current limitation, the auth server and admin console will not be >> present in a domain controller. > > This is going against the current design of AS7/WildFly exposing > management related operations over the management interface and leaving > the web container to be purely about a users deployments. Keycloak uses Resteasy. We could write an adapter for whatever HTTP engine the mgmt interface is using. Unfortunately, we also need a storage mechanism JPA or Mongo. We could write a file-based back-end if needed. >> Questions for further discussion >> -------------------------------------------- >> 1. WildFly ships with pre-defined RBAC roles. [3] Should these roles >> be available at the realm level or only to the WildFly Web Console? >> Could/should other consoles make use of these roles? 
> > The direction we are moving towards with the wildfly-elytron work is > that role assignment / mapping is something that happens at the point a > call reaches a secured resource and will be in the context of that > secure resource. As an example a user could call one deployment and be > assigned one set of roles, that same user could call a different > deployment and have a completely different set of roles - for simplicity > we will allow a 1:1 mapping from what was loaded from the identity store > to roles but that will not always be the case. Keycloak works similarly, albeit with remote invocations. Clients get a unique token with each application they do SSO with. For nested invocations, i.e. if you log into Application A and it needs to invoke on Application B, things can be configured so that the token has all role mappings that are needed. Each application in the nested call chain looks inside the token for the role mappings it is interested in. If there was an adequate SPI, then Keycloak would fit in with this idea of each deployment call having its own set of role mappings. * Keycloak supports realm-level and per-app-level roles. * Keycloak has the concept of scope mappings. Scopes are the roles the application is allowed to ask for. So the token is only populated with permissions the application is allowed to ask for. So, if a user has "admin" role, the token will not get populated with the "admin" role mapping unless the application has that scope. * Keycloak has the concept of a "composite role". It's kind of like a combination of a role group and a role. BTW, I think cross-component calls need to be able to inherit the "identity store" from the calling component. IMO, it would be very rare (even weird) if cross-component calls each used their own "identity store". Currently, it's even more weird (and wrong) that each time you cross a component layer (deployment) reauthentication happens with the identity store. > > > >> 2.
On first login, you are required to change the admin password. What >> other initial setup should be required? Change realm public key? >> Change client secret? Others? > > This is something that would be required to happen at the command line, > a connection from a web browser could not be trusted to perform this. > What if Keycloak out of the box only allowed connections from localhost? That is, it would block all other incoming traffic and only allow connections from 127.0.0.1. Admins would have to remove this restriction. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From brian.stansberry at redhat.com Tue Jun 3 16:33:52 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 03 Jun 2014 15:33:52 -0500 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: References: <537D51A9.7090803@redhat.com> Message-ID: <538E3130.4060905@redhat.com> Hi Sebastian, On 6/1/14, 1:21 PM, Sebastian Łaskawiec wrote: > Hi Brian > > Thanks for the clarification and sorry for the late response. > > I created a Feature Request to expose the MBean server through the HTTP > management interface: https://issues.jboss.org/browse/WFLY-3426 > Thanks. > It would be great to have the MBean server exposed via the Wildfly HTTP > Management interface, but I know several teams which would like to have > such functionality in JBoss AS 7. This is why I started looking at > Darran's port of the JMX console > (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased > it, detached it from the Wildfly parent and pushed it to my branch > (https://github.com/altanis/wildfly/commits/jmx-console-ported). The > same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. > > In my opinion it would be great to have this console available publicly. > Is it possible to make the WAR file available through JBoss Nexus > (perhaps the thirdparty-releases repository)?
If it is, I'd squash all > commits and push only the jmx-console code into a new github repository (to > make it separate from Wildfly). > What maven Group were you wanting to use? That jmx-console-ported branch has org.wildfly in the pom. > Best regards > Sebastian > > > > 2014-05-22 3:23 GMT+02:00 Brian Stansberry >: > > I agree that if we exposed the mbean server over HTTP that it should be > via a context on our HTTP management interface. Either that or expose > mbeans as part of our standard management resource tree. That would make > integration in the web console much more practical. > > I don't see us ever bringing back the AS5-style jmx-console.war that > runs on port 8080 as part of the WildFly distribution. That would > introduce a requirement for EE into our management infrastructure, and > we won't do that. Management is part of WildFly core, and WildFly core > does not require EE. If the Servlet-based jmx-console.war code linked > from WFLY-1197 gets further developed, I see it as a community effort > for people who want to install that on their own, not as something we'd > distribute as part of WildFly itself. > > On 5/21/14, 7:37 AM, Sebastian Łaskawiec wrote: > > Hi > > > > One of our projects is based on JBoss 5.1 and we are considering > > migrating it to Wildfly. One of our problems is the Web based JMX > Console... > > We have a pretty complicated production environment and a Web based JMX > > console with basic Auth delegated to LDAP is the simplest > solution for us. > > > > I noticed that there was a ticket opened for porting the legacy JMX > Console: > > https://issues.jboss.org/browse/WFLY-1197. > > However I think it would be a much better idea to have this > > functionality in the Web Administration console. In my opinion it > would be > > great to have it under "Runtime" in the "Status" submenu. > > > > What do you think about this idea?
> > > > Best Regards > > -- > > Sebastian Łaskawiec > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Sebastian Łaskawiec -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From Anil.Saldhana at redhat.com Tue Jun 3 16:37:59 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Tue, 03 Jun 2014 15:37:59 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E2FAD.9020902@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2FAD.9020902@redhat.com> Message-ID: <538E3227.4080908@redhat.com> On 06/03/2014 03:27 PM, Bill Burke wrote: > > On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >>> Both the auth server and admin console are served from the same WAR. It >>> should be possible to deploy this without using a WAR or servlets, but >>> that is not planned for the initial WildFly integration. Because of >>> this current limitation, the auth server and admin console will not be >>> present in a domain controller. >> This is going against the current design of AS7/WildFly exposing >> management related operations over the management interface and leaving >> the web container to be purely about a users deployments. > Keycloak uses Resteasy. We could write an adapter for whatever HTTP > engine the mgmt interface is using. Unfortunately, we also need a > storage mechanism, JPA or Mongo. We could write a file-based back-end > if needed. PicketLink IDM default storage is file based. Any opportunity to map KeyCloak storage to the IDM API? Last time, Bill told me that he is not very happy with the IDM API.
> > >>> Questions for further discussion >>> -------------------------------------------- >>> 1. WildFly ships with pre-defined RBAC roles. [3] Should these roles >>> be available at the realm level or only to the WildFly Web Console? >>> Could/should other consoles make use of these roles? >> The direction we are moving towards with the wildfly-elytron work is >> that role assignment / mapping is something that happens at the point a >> call reaches a secured resource and will be in the context of that >> secure resource. As an example a user could call one deployment and be >> assigned one set of roles, that same user could call a different >> deployment and have a completely different set of roles - for simplicity >> we will allow a 1:1 mapping from what was loaded from the identity store >> to roles but that will not be always the case. > Keycloak works similarly, albeit with remote invocations. Clients get a > unique token with each application they do SSO with. For nested > invocations, i.e. If you log into Application A and it needs to invoke > on Application B, things can be configured so that the token has all > role mappings that are needed. Each application in the nested call > chain looks inside the token for the role mappings it is interested in. > > If there was an adequate SPI, then Keycloak would fit in with this idea > of each deployment call would have its own set of role mappings. > > > * Keycloak supports realm-level and per-app-level roles. > > * Keycloak has the concept of scope mappings. Scopes are the roles the > application is allowed to ask for. So the token is only populated with > permissions the application is allowed to ask for. So, if a user has > "admin" role, the token will not get populated with the "admin" role > mapping unless the application has that scope. > > * Keycloak has the concept of a "composite role". Its kinda like a > combination of a role group and a role. 
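The scope-mapping and composite-role behaviour quoted above reduces to simple set operations: the token issued for an application carries the user's (composite-expanded) roles intersected with the scope that application is allowed to ask for. A rough illustrative model, not Keycloak's implementation (all names and data are made up):

```python
def expand_composites(roles, composites):
    """Expand composite roles: a composite role also grants the roles it contains."""
    out = set()
    for r in roles:
        out.add(r)
        out.update(composites.get(r, ()))
    return out

def build_token_roles(user_roles, app_scope, composites=None):
    """Roles placed in the token for one application: the user's expanded
    roles intersected with the scope the application may ask for."""
    effective = expand_composites(user_roles, composites or {})
    return effective & set(app_scope)

# Hypothetical data: the user has "admin", but this app's scope excludes
# it, so the token issued for this app never contains "admin".
user = {"admin", "viewer"}
print(build_token_roles(user, app_scope={"viewer", "editor"}))  # {'viewer'}
```

This matches the description above: even a user with the "admin" role gets a token without it unless the application's scope includes "admin", and each application in a nested call chain reads only the mappings it is interested in.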
> > BTW, I think cross-component calls need to be able to inherit the > "identity store" from the calling component. IMO, it would be very > rare (even weird) if cross-component calls each used their own "identity > store". > > Currently, Its even more weird (and wrong) that each time you cross a > component layer (deployment) reauthentication happens with the identity > store. > > > > >>> 2. On first login, you are required to change the admin password. What >>> other initial setup should be required? Change realm public key? >>> Change client secret? Others? >> This is something that would be required to happen at the command line, >> a connection from a web browser could not be trusted to perform this. >> > What if Keycloak out of the box only allowed connections from localhost? > That it would block all other incoming traffic and only allow > connections from 127.0.0.1? Admins would have to remove this restriction. > > From ssilvert at redhat.com Tue Jun 3 16:43:48 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Tue, 03 Jun 2014 16:43:48 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E2FAD.9020902@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2FAD.9020902@redhat.com> Message-ID: <538E3384.80507@redhat.com> On 6/3/2014 4:27 PM, Bill Burke wrote: > > On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >>> Both the auth server and admin console are served from the same WAR. It >>> should be possible to deploy this without using a WAR or servlets, but >>> that is not planned for the initial WildFly integration. Because of >>> this current limitation, the auth server and admin console will not be >>> present in a domain controller. >> This is going against the current design of AS7/WildFly exposing >> management related operations over the management interface and leaving >> the web container to be purely about a users deployments. > Keycloak uses Resteasy. 
We could write an adapter for whatever HTTP > engine the mgmt interface is using. Unfortunately, we also need a > storage mechanism JPA or Mongo. We could write a file-based back-end > if needed. Most of the config data could be stored in the management model. You would still need a general storage mechanism for user data, but that doesn't go against the current design because that is what we have currently. I'm interested in opinions about how important it would be to do all that. From bburke at redhat.com Tue Jun 3 16:49:16 2014 From: bburke at redhat.com (Bill Burke) Date: Tue, 03 Jun 2014 16:49:16 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E3384.80507@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2FAD.9020902@redhat.com> <538E3384.80507@redhat.com> Message-ID: <538E34CC.1050802@redhat.com> On 6/3/2014 4:43 PM, Stan Silvert wrote: > On 6/3/2014 4:27 PM, Bill Burke wrote: >> >> On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >>>> Both the auth server and admin console are served from the same WAR. It >>>> should be possible to deploy this without using a WAR or servlets, but >>>> that is not planned for the initial WildFly integration. Because of >>>> this current limitation, the auth server and admin console will not be >>>> present in a domain controller. >>> This is going against the current design of AS7/WildFly exposing >>> management related operations over the management interface and leaving >>> the web container to be purely about a users deployments. >> Keycloak uses Resteasy. We could write an adapter for whatever HTTP >> engine the mgmt interface is using. Unfortunately, we also need a >> storage mechanism JPA or Mongo. We could write a file-based back-end >> if needed. > Most of the config data could be stored in the management model. 
You > would still need a general storage mechanism for user data, but that > doesn't go against the current design because that is what we have > currently. > > I'm interested in opinions about how important it would be to do all that. Yeah, we could probably write a management model backend too. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From bburke at redhat.com Tue Jun 3 16:50:57 2014 From: bburke at redhat.com (Bill Burke) Date: Tue, 03 Jun 2014 16:50:57 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E3227.4080908@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2FAD.9020902@redhat.com> <538E3227.4080908@redhat.com> Message-ID: <538E3531.7030409@redhat.com> On 6/3/2014 4:37 PM, Anil Saldhana wrote: > On 06/03/2014 03:27 PM, Bill Burke wrote: >> >> On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >>>> Both the auth server and admin console are served from the same WAR. It >>>> should be possible to deploy this without using a WAR or servlets, but >>>> that is not planned for the initial WildFly integration. Because of >>>> this current limitation, the auth server and admin console will not be >>>> present in a domain controller. >>> This is going against the current design of AS7/WildFly exposing >>> management related operations over the management interface and leaving >>> the web container to be purely about a users deployments. >> Keycloak uses Resteasy. We could write an adapter for whatever HTTP >> engine the mgmt interface is using. Unfortunately, we also need a >> storage mechanism JPA or Mongo. We could write a file-based back-end >> if needed. > PicketLink IDM default storage is file based. Any opportunity to map > KeyCloak > storage to the IDM API? Last time, Bill told me that he is not very > happy with > the IDM API. Keycloak storage has in the past been mapped to the PL IDM API. That code still exists but is not up to date. 
We *do* use PL IDM API for mapping user-data only (not role mappings) to LDAP/AD storage. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From david.lloyd at redhat.com Tue Jun 3 22:21:23 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Tue, 03 Jun 2014 21:21:23 -0500 Subject: [wildfly-dev] New security sub-project: WildFly Elytron Message-ID: <538E82A3.1060104@redhat.com> WildFly Elytron [1] is a new WildFly sub-project which will completely replace the combination of PicketBox and JAAS as the WildFly client and server security mechanism. An "elytron" (/ˈɛlɪtrɒn/, plural "elytra") is the hard, protective casing over a wing of certain flying insects (e.g. beetles). Here is a high-level project summary: WildFly Elytron does presently, or will, satisfy the following goals: - Establish and clearly define terminology around WildFly's security concepts - Provide support for secure server-side authentication mechanisms (i.e. eliminating the historical "send the password everywhere" style of authentication and forwarding) supporting HTTP [2], SASL [3] (including SASL+GSSAPI [4]), and TLS [5] connection types, as well as supporting other authentication protocols in the future without change (such as RADIUS [6], GSS [7], EAP [8]) - Provide a simple means to support multiple security associations per security context (one per authentication system, including local and remote application servers, remote databases, remote LDAP, etc.) - Provide support for password credential types using the standard JCE archetypal API structure (including but not limited to plain, UNIX DES/MD5/SHA crypt types, bcrypt, mechanism-specific pre-hashed passwords, etc.) - Provide SPIs to support all of the above, such that consumers such as Undertow, JBoss SASL, HornetQ etc. can use them directly with a minimum of integration overhead - Provide SPIs to support and maintain security contexts - Integrate seamlessly with PicketLink IDM and Keycloak projects -
Provide SPIs to integrate with IDM systems (such as PicketLink) as well as simple/local user stores (such as KeyStores or plain files, and possibly also simple JDBC and/or LDAP backends as well) - Provide SPIs to support name rewriting and realm selection based on arbitrary, pluggable criteria - Provide a Remoting-based connection-bound authentication service to establish or forward authentication between systems - Provide SPIs to allow all Remoting-based protocols to reuse/share security contexts (EJB, JNDI, etc.) - Integrate seamlessly with Kerberos authentication schemes for all authentication mechanisms (including inbound and outbound identity propagation for all currently supported protocols) - Provide improved integration with EE standards (JACC and JASPIC) The following are presently non- or anti-goals: - Any provision to support JAAS Subject as a security context (due to performance and correctness concerns) - Any provision to support JAAS LoginContext (due to tight integration with Subject) - Any provision to maintain API compatibility with PicketBox (this is not presently an established requirement and thus would add undue implementation complexity, if it is indeed even possible) - Replicate Kerberos-style ticket-based credential forwarding (just use Kerberos in this case) You may note that this is in contrast with a previous post to the AS 7 list [9] in which I advocated simply unifying on Subject. Subsequent research uncovered a number of performance and implementation weaknesses in JAAS that have since convinced the security team that we should no longer be relying on it. Most of the discussion on this project happens in the #wildfly-dev+ (note the plus sign) channel on FreeNode IRC. At some point in the near-ish future I will hopefully also have some (open-source) presentation materials about the architecture. Questions and comments welcome; feel free to peruse the code and comment in GitHub as well.
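One of the goals above, supporting hashed password credential types so that plaintext passwords need not be "sent everywhere", can be illustrated with a small generic sketch. This uses plain PBKDF2 from the Python standard library purely as an illustration; it is not Elytron's JCE-style API, and the function names are invented:

```python
import hashlib
import hmac
import os

def store_password(password, iterations=100_000):
    """Hash a password for storage; only salt, iteration count and digest
    are kept -- the plaintext is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return {"salt": salt, "iterations": iterations, "hash": digest}

def verify_password(password, record):
    """Verify a candidate password against the stored pre-hashed credential,
    using a constant-time comparison to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), record["salt"], record["iterations"])
    return hmac.compare_digest(candidate, record["hash"])

rec = store_password("s3cret")
print(verify_password("s3cret", rec))  # True
print(verify_password("wrong", rec))   # False
```

The design point is the same as in the goals list: the verifier holds only a derived credential, and mechanisms can negotiate proofs against it rather than shipping the password itself around.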
References/links: [1] https://github.com/wildfly-security/wildfly-elytron [2] http://tools.ietf.org/html/rfc2616 [3] http://tools.ietf.org/html/rfc4422 [4] http://tools.ietf.org/html/rfc4752 [5] http://tools.ietf.org/html/rfc5246 [6] http://tools.ietf.org/html/rfc2865 and http://tools.ietf.org/html/rfc2866 [7] http://tools.ietf.org/html/rfc2743 and related [8] http://tools.ietf.org/html/rfc3748 [9] http://lists.jboss.org/pipermail/jboss-as7-dev/2013-February/007730.html -- - DML From darran.lofthouse at jboss.com Wed Jun 4 03:37:01 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 04 Jun 2014 08:37:01 +0100 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E2FAD.9020902@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2FAD.9020902@redhat.com> Message-ID: <538ECC9D.8090201@jboss.com> On 03/06/14 21:27, Bill Burke wrote: > BTW, I think cross-component calls need to be able to inherit the > "identity store" from the calling component. IMO, it would be very > rare (even weird) if cross-component calls each used their own "identity > store". Yes, that is something we have been discussing in the wildfly-elytron discussions: when a call crosses from one container to the next we already have an authenticated identity; what we need to do is re-map the roles so that the role mapping is in the context of the second component. > Currently, Its even more weird (and wrong) that each time you cross a > component layer (deployment) reauthentication happens with the identity > store. As far as I am concerned this is just a side effect of JAAS; as I say above, we will need to re-analyse the role mapping on the crossing of containers so that each container sees the correct role mapping, but not this re-authentication.
This in itself however is moving into the territory of other design discussions we will be bringing to this list - I just wanted to confirm that we are moving away from this assumption of a principal and credential that gets authenticated on each container crossing. > >> >>> 2. On first login, you are required to change the admin password. What >>> other initial setup should be required? Change realm public key? >>> Change client secret? Others? >> >> This is something that would be required to happen at the command line, >> a connection from a web browser could not be trusted to perform this. >> > > What if Keycloak out of the box only allowed connections from localhost? > That it would block all other incoming traffic and only allow > connections from 127.0.0.1? Admins would have to remove this restriction. In early AS7 discussions that was deemed insufficient due to privilege escalation of clients running on the same box; this is the whole reason the CLI has a local authentication mechanism to verify the user of the CLI actually has access to the AS installation. > > From rory.odonnell at oracle.com Wed Jun 4 03:37:59 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Wed, 04 Jun 2014 08:37:59 +0100 Subject: [wildfly-dev] Early Access builds for JDK 9 b15, JDK 8u20 b16 are available on java.net Message-ID: <538ECCD7.1090204@oracle.com> Hi Guys, Early Access builds for JDK 9 b15, JDK 8u20 b16 are available on java.net. As we enter the later phases of development for JDK 8u20, please log any show stoppers as soon as possible. JDK 7u60 is available for download [0]. Rgds, Rory [0] http://www.oracle.com/technetwork/java/javase/downloads/index.html -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed...
From darran.lofthouse at jboss.com Wed Jun 4 04:01:20 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 04 Jun 2014 09:01:20 +0100 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E2DEF.8060400@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2DEF.8060400@redhat.com> Message-ID: <538ED250.4060608@jboss.com> On 03/06/14 21:19, Stan Silvert wrote: > On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >> On 03/06/14 18:46, Stan Silvert wrote: >>> Relation to the Elytron and other WildFly 9 changes >>> ------------------------------------------------------------------------ >>> Keycloak is expected to use Elytron at a low level. Nothing at the >>> Keycloak integration level should be affected by the Elytron project. >>> >>> However, there are many other expected changes to security that may >>> affect how Keycloak is integrated. It is likely that the initial >>> integration of Keycloak will happen before these aforementioned >>> changes. This will be an advantage as the unit tests for Keycloak >>> integration can help to validate these changes. >> One important change we will need to get in first is the splitting of >> the contexts that serve the management requests; the existing >> /management context needs to remain supporting the existing >> authentication mechanisms with cross origin restrictions left on. > This will be fine. There is presently no reliance on CORS. >> >>> Default Authentication Mechanism >>> ------------------------------------------------ >>> Keycloak is a very new technology. Given that security is so vital, we >>> need time for Keycloak to mature. When Keycloak is first integrated, it >>> will not be the default authentication/authorization mechanism for the >>> WildFly Web Console.
However, selecting Keycloak for authentication >>> should be as simple as executing one or two CLI commands. >>> >>> We can switch to Keycloak as the default whenever we all believe that >>> both Keycloak itself and its integration into WildFly are ready for >>> prime time. Hopefully, that will just be a matter of months after first >>> integration. >> We also need to have the SSL out-of-the-box problem solved, or as soon >> as possible after. > +1 >> Even then how much does it make sense for each app >> server installation to have its own SSO infrastructure? > It's important to at least have it available as an option to turn on. That part I am fine with; our biggest concern is that the WildFly distribution needs to be fully functional out of the box, which means for KeyCloak to be used by default we need everything in that single instance. On the other hand, documenting the steps you need to go through, including how to set up your SSO infrastructure and cross-reference it from the WildFly configuration, is fine. Even installing it as a war on a WildFly installation in this context is fine, as by then you have reached a point where the purpose of that installation is to host KeyCloak. If I am reading the results from your POC correctly this may be the best way to go anyway to get a standalone and domain mode solution available simultaneously - packaging and distribution options for a complete solution could be a second step. One point while I think of it - we will need the native management interface to be secured by the same identity store, and to make sure the existing http mechanisms can use the same store; if that has not already been considered it may be a higher priority. > In production, the SSO infrastructure wouldn't be live on every instance. > > Also, Keycloak is much more than just SSO infrastructure.
Other > features like user management, password management, auditing, skinning, > and the nice UI make it an excellent choice for applications that don't > require SSO. Who wants to keep coding all that stuff by hand? Auditing I am deliberately ignoring, other than to say that it is going to be a big topic in itself ;-) We already have two auditing solutions in WildFly, one purely for management, the other for apps - the app auditing is tied very closely to the JAAS integration so we know something will happen in that area. From the perspective of wildfly-elytron we haven't reviewed auditing yet as it should not be driving the security solution. >>> Initial Integration >>> ------------------------ >>> The initial integration for most of Keycloak will only be available on >>> standalone. However, on a domain controller, the WildFly Web Console >>> will still be able to use Keycloak for authentication and >>> authorization. In this case, the domain controller must be able to >>> reach a Keycloak Authentication Server somewhere on the network. >>> >>> >>> Keycloak Authentication Server and Admin Console >>> ----------------------------------------------------------------------- >>> The Keycloak Authentication Server is responsible for authenticating and >>> authorizing users. The Keycloak Admin Console is an AngularJS UI that >>> administrators use to manage users, roles, sessions, passwords, assigned >>> applications, etc. >>> >>> Both the auth server and admin console are served from the same WAR. It >>> should be possible to deploy this without using a WAR or servlets, but >>> that is not planned for the initial WildFly integration. Because of >>> this current limitation, the auth server and admin console will not be >>> present in a domain controller. >> This is going against the current design of AS7/WildFly exposing >> management related operations over the management interface and leaving >> the web container to be purely about a user's deployments.
> The auth server and admin console don't necessarily need to be deployed > as a WAR. It's an AngularJS app, so we could make it work exactly the > same way the web console does. There is also a middle ground where we > don't expose the fact that it's a WAR. I think JON does something like > that? > > This is a big discussion we will need to have. +1 As I say above, it may be better to first reach the point where a WildFly instance can be configured to use an existing KeyCloak installation in standalone and domain mode (and with the native interface and standard http mechanisms) and then address how to bring the rest of KeyCloak in as a second step. Finding a way to bring it all in would be a pre-requisite for an out of the box solution; we also have other items to bring up again soon, such as continuing with out of the box authentication not dependent on SSL (although this has its own issues if content in the management model is sensitive). But as I understand some of the demand for this, the first major problem is that there is no way for users to even enable this form of SSO, so getting the first step enabled would be a major step forward. One point to clarify (not saying anyone is saying this, but just to be clear) - I don't see us reaching a point where we say KeyCloak is exclusively the only authentication approach we will support for management; we have legacy client support requirements and end users will also have their own set of preferred solutions.
From darran.lofthouse at jboss.com Wed Jun 4 05:23:49 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 04 Jun 2014 10:23:49 +0100 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: <538E82A3.1060104@redhat.com> References: <538E82A3.1060104@redhat.com> Message-ID: <538EE5A5.7090603@jboss.com> In the interest of holding this in a central location I have created the following article: - https://community.jboss.org/wiki/WildFlyElytron-ProjectSummary Added the source and Jira links, and also references for JBoss SASL which, although it will evolve for WildFly Elytron, will remain an independent project. Regards, Darran Lofthouse. On 04/06/14 03:21, David M. Lloyd wrote: > - Establish and clearly define terminology around WildFly's security > concepts > - Provide support for secure server-side authentication mechanisms (i.e. > eliminating the historical "send the password everywhere" style of > authentication and forwarding) supporting HTTP [2], SASL [3] (including > SASL+GSSAPI [4]), and TLS [5] connection types, as well as supporting > other authentication protocols in the future without change (such as > RADIUS [6], GSS [7], EAP [8]) > - Provide a simple means to support multiple security associations per > security context (one per authentication system, including local and > remote application servers, remote databases, remote LDAP, etc.) > - Provide support for password credential types using the standard JCE > archetypal API structure (including but not limited to plain, UNIX > DES/MD5/SHA crypt types, bcrypt, mechanism-specific pre-hashed > passwords, etc.) > - Provide SPIs to support all of the above, such that consumers such as > Undertow, JBoss SASL, HornetQ etc. can use them directly with a minimum > of integration overhead > - Provide SPIs to support and maintain security contexts > - Integrate seamlessly with PicketLink IDM and Keycloak projects > -
Provide SPIs to integrate with IDM systems (such as PicketLink) as > well as simple/local user stores (such as KeyStores or plain files, and > possibly also simple JDBC and/or LDAP backends as well) > - Provide SPIs to support name rewriting and realm selection based on > arbitrary, pluggable criteria > - Provide a Remoting-based connection-bound authentication service to > establish or forward authentication between systems > - Provide SPIs to allow all Remoting-based protocols to reuse/share > security contexts (EJB, JNDI, etc.) > - Integrate seamlessly with Kerberos authentication schemes for all > authentication mechanisms (including inbound and outbound identity > propagation for all currently supported protocols) > - Provide improved integration with EE standards (JACC and JASPIC) > > The following are presently non- or anti-goals: > > - Any provision to support JAAS Subject as a security context (due to > performance and correctness concerns) [*] > - Any provision to support JAAS LoginContext (due to tight integration > with Subject) > - Any provision to maintain API compatibility with PicketBox (this is > not presently an established requirement and thus would add undue > implementation complexity, if it is indeed even possible) > - Replicate Kerberos-style ticket-based credential forwarding (just use > Kerberos in this case) > > [*] You may note that this is in contrast with a previous post to the AS 7 > list [9] in which I advocated simply unifying on Subject. Subsequent > research uncovered a number of performance and implementation weaknesses > in JAAS that have since convinced the security team that we should no > longer be relying on it. > > Most of the discussion on this project happens in the #wildfly-dev+ > (note the plus sign) channel on FreeNode IRC. At some point in the > near-ish future I will hopefully also have some (open-source) > presentation materials about the architecture.
> > Questions and comments welcome; feel free to peruse the code and comment > in GitHub as well. > > References/links: > > [1]https://github.com/wildfly-security/wildfly-elytron > [2]http://tools.ietf.org/html/rfc2616 > [3]http://tools.ietf.org/html/rfc4422 > [4]http://tools.ietf.org/html/rfc4752 > [5]http://tools.ietf.org/html/rfc5246 > [6]http://tools.ietf.org/html/rfc2865 and > http://tools.ietf.org/html/rfc2866 > [7]http://tools.ietf.org/html/rfc2743 and related > [8]http://tools.ietf.org/html/rfc3748 > [9]http://lists.jboss.org/pipermail/jboss-as7-dev/2013-February/007730.html From jmesnil at redhat.com Wed Jun 4 08:39:30 2014 From: jmesnil at redhat.com (Jeff Mesnil) Date: Wed, 4 Jun 2014 14:39:30 +0200 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: <538E299C.2080509@redhat.com> References: <538E2099.8010207@redhat.com> <538E2560.4000303@redhat.com> <538E299C.2080509@redhat.com> Message-ID: <2BB1FF03-76D4-4C58-B925-63FD2DCA9EE1@redhat.com> On 3 Jun 2014, at 22:01, Brian Stansberry wrote: >> One obvious question with this scheme is: what happens when Y is added, >> after the fact? The answer to this question depends on X. X must, in >> the same transaction, detect the addition of Y and decide what to do. >> Several actions are possible, depending on how X works and how its >> dependency on Y works. Options include automatically removing and >> rebuilding services with the new dependency, retroactively modifying X >> in some way so as to update it without stopping it, or simply ignoring >> the addition of Y, using a "reload-required" or similar state to >> indicate to the user that the running system is inconsistent with the >> stored model. >> >> I know we don't really have any APIs to directly facilitate these >> concepts, but this is a part of what the management SPI redesign is all >> about. In the new model, one will be able to express optional >> dependencies at a resource level rather than a service level. 
> > Jeff Mesnil -- I'm curious how useful the internal notification stuff > you've added in the existing code will be for this use case. Prior to > that there was nothing at all that X could count on to become aware of > the later addition of Y. Notifications would work at the resource level. Notifications for added/removed resources are emitted [1] at the end of the step that adds/removes a resource in the MODEL stage. So a resource X could register a notification listener on resource Y's path address. When the Y resource is added, the listener will be notified, so X learns of the addition of Y and can act accordingly. If I understand the example correctly, when a resource X is added, it could check whether the resource Y is already there and use it if that's the case. Otherwise, it would register a notification listener and postpone this execution until the resource Y is added (with no guarantee it ever will be). [1] https://github.com/jmesnil/wildfly/commits/WFLY-266_WFLY-3159_notification_support_and_jmx#diff-889ac0c285b120937da9477f6d61ab1dR689 -- Jeff Mesnil JBoss, a division of Red Hat http://jmesnil.net/ From sebastian.laskawiec at gmail.com Wed Jun 4 09:53:20 2014 From: sebastian.laskawiec at gmail.com (Sebastian Łaskawiec) Date: Wed, 4 Jun 2014 15:53:20 +0200 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: <538E3130.4060905@redhat.com> References: <537D51A9.7090803@redhat.com> <538E3130.4060905@redhat.com> Message-ID: Hi Brian For the group id I thought about org.jboss, org.jboss.as, or org.wildfly; for the artifact id, wildfly-jmx-console or jboss-jmx-console; and for the version, starting from scratch at 1.0.0-SNAPSHOT. My preferences are org.jboss as group id and jboss-jmx-console as artifact id. What do you think, is it ok?
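For illustration, a consumer's pom entry for the preferred coordinates could look like this. This is only a sketch of the proposal: the groupId, artifactId, and version are the ones named above, while the `war` type is an assumption based on the artifact being a WAR file.

```xml
<!-- Hypothetical dependency declaration for the proposed jmx-console WAR;
     coordinates follow the preference stated above, version is illustrative. -->
<dependency>
    <groupId>org.jboss</groupId>
    <artifactId>jboss-jmx-console</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <type>war</type>
</dependency>
```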
Best regards Sebastian 2014-06-03 22:33 GMT+02:00 Brian Stansberry : > Hi Sebastian, > > > On 6/1/14, 1:21 PM, Sebastian ?askawiec wrote: > >> Hi Brian >> >> Thanks for clarification and sorry for late response. >> >> I created Feature Request to add expose MBean server through HTTP >> management interface: https://issues.jboss.org/browse/WFLY-3426 >> >> > Thanks. > > > It would be great to have MBean server exposed via Wildfly HTTP >> Management interface, but I know several teams which would like to have >> such functionality in JBoss AS 7. This is why I started looking at >> Darran's port to JMX console >> (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased >> it, detached from Wildfly parent and pushed to my branch >> (https://github.com/altanis/wildfly/commits/jmx-console-ported). The >> same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. >> >> In my opinion it would be great to have this console available publicly. >> Is it possible to make the WAR file available through JBoss Nexus >> (perhaps thirdparty-releases repository)? If it is, I'd squash all >> commits and push only jmx-console code into new github repository (to >> make it separate from Wildfly). >> >> > What maven Group were you wanting to use? That jmx-console-ported branch > has org.wildfly in the pom. > > Best regards >> Sebastian >> >> >> >> 2014-05-22 3:23 GMT+02:00 Brian Stansberry > >: >> >> >> I agree that if we exposed the mbean server over HTTP that it should >> be >> via a context on our HTTP management interface. Either that or expose >> mbeans as part of our standard management resource tree. That would >> make >> integration in the web console much more practical. >> >> I don't see us ever bringing back the AS5-style jmx-console.war that >> runs on port 8080 as part of the WildFly distribution. That would >> introduce a requirement for EE into our management infrastructure, and >> we won't do that. 
Management is part of WildFly core, and WildFly core >> does not require EE. If the Servlet-based jmx-console.war code linked >> from WFLY-1197 gets further developed, I see it as a community effort >> for people who want to install that on their own, not as something >> we'd >> distribute as part of WildFly itself. >> >> On 5/21/14, 7:37 AM, Sebastian ?askawiec wrote: >> > Hi >> > >> > One of our projects is based on JBoss 5.1 and we are considering >> > migrating it to Wildfly. One of our problems is Web based JMX >> Console... >> > We have pretty complicated production environment and Web based JMX >> > console with basic Auth delegated to LDAP is the simplest >> solution for us. >> > >> > I noticed that there was a ticket opened for porting legacy JMX >> Console: >> > https://issues.jboss.org/browse/WFLY-1197. >> > However I think it would be much better idea to to have this >> > functionality in Web Administraction console. In my opinion it >> would be >> > great to have it under "Runtime" in "Status" submenu. >> > >> > What do you think about this idea? >> > >> > Best Regards >> > -- >> > Sebastian ?askawiec >> > >> > >> > _______________________________________________ >> > wildfly-dev mailing list >> > wildfly-dev at lists.jboss.org >> >> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > >> >> >> -- >> Brian Stansberry >> Senior Principal Software Engineer >> JBoss by Red Hat >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> >> >> >> -- >> Sebastian ?askawiec >> > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > -- Sebastian ?askawiec -------------- next part -------------- An HTML attachment was scrubbed... 
From kabir.khan at jboss.com Wed Jun 4 09:57:53 2014 From: kabir.khan at jboss.com (Kabir Khan) Date: Wed, 4 Jun 2014 14:57:53 +0100 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: <2BB1FF03-76D4-4C58-B925-63FD2DCA9EE1@redhat.com> References: <538E2099.8010207@redhat.com> <538E2560.4000303@redhat.com> <538E299C.2080509@redhat.com> <2BB1FF03-76D4-4C58-B925-63FD2DCA9EE1@redhat.com> Message-ID: Another issue would be if service A has an optional dependency (as described, not using the actual optional dependency) on service B. We're describing allowing A to react if B shows up. Should we also react if B goes away? On 4 Jun 2014, at 13:39, Jeff Mesnil wrote: > > On 3 Jun 2014, at 22:01, Brian Stansberry wrote: > >>> One obvious question with this scheme is: what happens when Y is added, >>> after the fact? The answer to this question depends on X. X must, in >>> the same transaction, detect the addition of Y and decide what to do. >>> Several actions are possible, depending on how X works and how its >>> dependency on Y works. Options include automatically removing and >>> rebuilding services with the new dependency, retroactively modifying X >>> in some way so as to update it without stopping it, or simply ignoring >>> the addition of Y, using a "reload-required" or similar state to >>> indicate to the user that the running system is inconsistent with the >>> stored model. >>> >>> I know we don't really have any APIs to directly facilitate these >>> concepts, but this is a part of what the management SPI redesign is all >>> about. In the new model, one will be able to express optional >>> dependencies at a resource level rather than a service level. >> >> Jeff Mesnil -- I'm curious how useful the internal notification stuff >> you've added in the existing code will be for this use case.
Prior to >> that there was nothing at all that X could count on to become aware of >> the later addition Y. > > Notifications would work a the resource level. > Notifications for added/removed resources are emitted[1] at the end of the step that adds/removes a resource in the MODEL stage. > > So a resource X could register a notification listener on the resource Y's path address. > When the Y resource is added, the listener will be notified and X would be notified of the addition of Y and act accordingly. > > If I understand the example correctly, when a resource X is added, it could check whether the resource Y is already there and use it if that?s the case. > Otherwise, it would register a notification listener and postpone this execution until the resource Y is added (with no guarantee it ever will). > > [1] https://github.com/jmesnil/wildfly/commits/WFLY-266_WFLY-3159_notification_support_and_jmx#diff-889ac0c285b120937da9477f6d61ab1dR689 > -- > Jeff Mesnil > JBoss, a division of Red Hat > http://jmesnil.net/ > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From Anil.Saldhana at redhat.com Wed Jun 4 10:03:15 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Wed, 04 Jun 2014 09:03:15 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538ED250.4060608@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2DEF.8060400@redhat.com> <538ED250.4060608@jboss.com> Message-ID: <538F2723.8020100@redhat.com> On 06/04/2014 03:01 AM, Darran Lofthouse wrote: > > On 03/06/14 21:19, Stan Silvert wrote: >> On 6/3/2014 2:25 PM, Darran Lofthouse wrote: >>> On 03/06/14 18:46, Stan Silvert wrote: >>>> Relation to the Elytron and other WildFly 9 changes >>>> ------------------------------------------------------------------------ >>>> Keycloak is expected to use Elytron at a low level. 
Nothing at the >>>> Keycloak integration level should be affected by the Elytron project. >>>> >>>> However, there are many other expected changes to security that may >>>> effect how Keycloak is integrated. It is likely that the initial >>>> integration of Keycloak will happen before these aforementioned >>>> changes. This will be an advantage as the unit tests for Keycloak >>>> integration can help to validate these changes. >>> One important change we will need to get in first is the splitting of >>> the contexts that server the management requests, the existing >>> /management context needs to remain supporting the existing >>> authentication mechanisms with cross origin restrictions left on. >> This will be fine. There is presently no reliance on CORS. >>>> Default Authentication Mechanism >>>> ------------------------------------------------ >>>> Keycloak is a very new technology. Given that security is so vital, we >>>> need time for Keycloak to mature. When Keycloak is first integrated, it >>>> will not be the default authentication/authorization mechanism for the >>>> WildFly Web Console. However, selecting Keycloak for authentication >>>> should be as simple as executing one or two CLI commands. >>>> >>>> We can switch to Keycloak as the default whenever we all believe that >>>> both Keycloak itself and its integration into WildFly are ready for >>>> prime time. Hopefully, that will just be a matter of months after first >>>> integration. >>> We also need to have the SSL out of the box or as soon as possible after >>> problem solved. >> +1 >>> Even then how much does it make sense for each app >>> server installation to have it's own SSO infrastructure? >> It's important to at least have it available as an option to turn on. > That part I am fine with, out biggest concern is the WildFly > distribution needs to be fully functional out of the box which means for > KeyCloak to be used by default we need everything in that single instance. 
> > On the other hand, here are the steps you need to go through including > how to set up your SSO infrastructure and cross reference it from the > WildFly configuration is fine. Even installing it as a war on a WildFly > installation in this context is fine as by then you have reached a point > where the purpose of that installation is to host KeyCloak. > > If I am reading your results from the POC correctly this may be the best > way to go anyway to get a standalone and domain mode solution available > simultaneously - packaging and distribution options for a complete > solution could be a second step. > > One point while I think of it - we will need the native management > interface to be secured by the same identity store and make sure the > existing http mechanisms can use the same store, if that has not already > been considered that may be a higher priority. > >> In production, the SSO infrastructure wouldn't be live on every instance. >> >> Also, Keycloak is much more than just SSO infrastructure. Other >> features like user management, password management, auditing, skinning, >> and the nice UI make it an excellent choice for applications that don't >> require SSO. Who wants to keep coding all that stuff by hand? > Auditing I am deliberately ignoring other than to say that is going to > be a big topic in itself ;-) We already have two auditing solutions in > WildFly one purely for management, the other for apps - the app auditing > is tied very closely to the JAAS integration so we know something will > happen in that area. From the perspective of wildfly-elytron we haven't > reviewed auditing yet as it should not be driving the security solution. The App auditing is not tied to JAAS. It is done in the EJB and Web security integration. I am tired of people just equating what we have to JAAS. JAAS is an implementation detail. 
> >>>> Initial Integration >>>> ------------------------ >>>> The initial integration for most of Keycloak will only be available on >>>> standalone. However, on a domain controller, the WildFly Web Console >>>> will still be able to use Keycloak for authentication and >>>> authorization. In this case, the domain controller must be able to >>>> reach a Keycloak Authentication Server somewhere on the network. >>>> >>>> >>>> Keycloak Authentication Server and Admin Console >>>> ----------------------------------------------------------------------- >>>> The Keycloak Authentication Server is responsible for authenticating and >>>> authorizing users. The Keycloak Admin Console is an AngularJS UI that >>>> administrators use to manage users, roles, sessions, passwords, assigned >>>> applications, etc. >>>> >>>> Both the auth server and admin console are served from the same WAR. It >>>> should be possible to deploy this without using a WAR or servlets, but >>>> that is not planned for the initial WildFly integration. Because of >>>> this current limitation, the auth server and admin console will not be >>>> present in a domain controller. >>> This is going against the current design of AS7/WildFly exposing >>> management related operations over the management interface and leaving >>> the web container to be purely about a users deployments. >> The auth server and admin console don't necessarily need to be deployed >> as a WAR. It's an AngularJS app, so we could make it work exactly the >> same way the web console does. There is also a middle ground where >> don't expose the fact that it's a WAR. I think JON does something like >> that? >> >> This is a big discussion we will need to have. 
> +1 As I say above it may be better to first reach the point where a > WildFly instance can be configured to use an existing KeyCloak > installation in standalone and domain mode (and with the native > interface and standard http mechanisms) and then address the how to > bring the rest of KeyCloak in as a second step. > > Finding a way to bring it all in would be a pre-requisite for an out of > the box solution, we also have other items to bring up again soon such > as continuing with out of the box authentication not dependent on SSL > (although this has it's own issues if content in the management model is > sensitive). > > But as I understand some of the demand for this the first major problem > is there is no way for users to even enable this form of SSO so getting > the first step enabled would be a major step forward. > > One point to clarify (not saying anyone is saying this but just to be > clear) - I don't see us reaching a point where we say KeyCloak is > exclusively the only authentication approach we will support for > management, we have legacy client support requirements and end users > will also have their own set of preferred solution. > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From tomaz.cerar at gmail.com Wed Jun 4 10:08:58 2014 From: tomaz.cerar at gmail.com (Tomaž Cerar) Date: Wed, 4 Jun 2014 16:08:58 +0200 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: References: <537D51A9.7090803@redhat.com> <538E3130.4060905@redhat.com> Message-ID: In any case it cannot be org.jboss.*; it can be org.wildfly. Looking through the rebased code, it is still a WAR application that depends on a servlet container being present. Taking that into consideration, this cannot be part of our main codebase/distribution, but having it as an external add-on project sounds fine.
In this case I would go for org.wildfly.jmx-console as the groupId, and an artifactId based on the logical part of the artifact inside the project, probably just jmx-console. BTW, your rebased project still imports Java EE 6 dependencies; given WildFly is EE 7 now, it would be wise to upgrade them. -- tomaz On Wed, Jun 4, 2014 at 3:53 PM, Sebastian Łaskawiec < sebastian.laskawiec at gmail.com> wrote: > Hi Brian > > For the group id I thought about org.jboss, org.jboss.as, or > org.wildfly; for the artifact id, wildfly-jmx-console or > jboss-jmx-console; and for the version, starting from scratch at > 1.0.0-SNAPSHOT. > > My preferences are org.jboss as group id and jboss-jmx-console as > artifact id. What do you think, is it ok? > > Best regards > Sebastian > > > > 2014-06-03 22:33 GMT+02:00 Brian Stansberry : > > Hi Sebastian, >> >> >> On 6/1/14, 1:21 PM, Sebastian Łaskawiec wrote: >> >>> Hi Brian >>> >>> Thanks for clarification and sorry for late response. >>> >>> I created Feature Request to add expose MBean server through HTTP >>> management interface: https://issues.jboss.org/browse/WFLY-3426 >>> >>> >> Thanks. >> >> >> It would be great to have MBean server exposed via Wildfly HTTP >>> Management interface, but I know several teams which would like to have >>> such functionality in JBoss AS 7. This is why I started looking at >>> Darran's port to JMX console >>> (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased >>> it, detached from Wildfly parent and pushed to my branch >>> (https://github.com/altanis/wildfly/commits/jmx-console-ported). The >>> same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. >>> >>> In my opinion it would be great to have this console available publicly. >>> Is it possible to make the WAR file available through JBoss Nexus >>> (perhaps thirdparty-releases repository)?
If it is, I'd squash all >>> commits and push only jmx-console code into new github repository (to >>> make it separate from Wildfly). >>> >>> >> What maven Group were you wanting to use? That jmx-console-ported branch >> has org.wildfly in the pom. >> >> Best regards >>> Sebastian >>> >>> >>> >>> 2014-05-22 3:23 GMT+02:00 Brian Stansberry >> >: >>> >>> >>> I agree that if we exposed the mbean server over HTTP that it should >>> be >>> via a context on our HTTP management interface. Either that or expose >>> mbeans as part of our standard management resource tree. That would >>> make >>> integration in the web console much more practical. >>> >>> I don't see us ever bringing back the AS5-style jmx-console.war that >>> runs on port 8080 as part of the WildFly distribution. That would >>> introduce a requirement for EE into our management infrastructure, >>> and >>> we won't do that. Management is part of WildFly core, and WildFly >>> core >>> does not require EE. If the Servlet-based jmx-console.war code linked >>> from WFLY-1197 gets further developed, I see it as a community effort >>> for people who want to install that on their own, not as something >>> we'd >>> distribute as part of WildFly itself. >>> >>> On 5/21/14, 7:37 AM, Sebastian ?askawiec wrote: >>> > Hi >>> > >>> > One of our projects is based on JBoss 5.1 and we are considering >>> > migrating it to Wildfly. One of our problems is Web based JMX >>> Console... >>> > We have pretty complicated production environment and Web based >>> JMX >>> > console with basic Auth delegated to LDAP is the simplest >>> solution for us. >>> > >>> > I noticed that there was a ticket opened for porting legacy JMX >>> Console: >>> > https://issues.jboss.org/browse/WFLY-1197. >>> > However I think it would be much better idea to to have this >>> > functionality in Web Administraction console. In my opinion it >>> would be >>> > great to have it under "Runtime" in "Status" submenu. 
>>> > >>> > What do you think about this idea? >>> > >>> > Best Regards >>> > -- >>> > Sebastian ?askawiec >>> > >>> > >>> > _______________________________________________ >>> > wildfly-dev mailing list >>> > wildfly-dev at lists.jboss.org >>> >>> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> > >>> >>> >>> -- >>> Brian Stansberry >>> Senior Principal Software Engineer >>> JBoss by Red Hat >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >>> >>> >>> >>> -- >>> Sebastian ?askawiec >>> >> >> >> -- >> Brian Stansberry >> Senior Principal Software Engineer >> JBoss by Red Hat >> > > > > -- > Sebastian ?askawiec > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140604/7c0ddd6f/attachment-0001.html From darran.lofthouse at jboss.com Wed Jun 4 10:46:09 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 04 Jun 2014 15:46:09 +0100 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538F2723.8020100@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2DEF.8060400@redhat.com> <538ED250.4060608@jboss.com> <538F2723.8020100@redhat.com> Message-ID: <538F3131.40704@jboss.com> On 04/06/14 15:03, Anil Saldhana wrote: > On 06/04/2014 03:01 AM, Darran Lofthouse wrote: >> >> On 03/06/14 21:19, Stan Silvert wrote: >>> Also, Keycloak is much more than just SSO infrastructure. Other >>> features like user management, password management, auditing, skinning, >>> and the nice UI make it an excellent choice for applications that don't >>> require SSO. Who wants to keep coding all that stuff by hand? 
>> Auditing I am deliberately ignoring other than to say that is going to >> be a big topic in itself ;-) We already have two auditing solutions in >> WildFly one purely for management, the other for apps - the app auditing >> is tied very closely to the JAAS integration so we know something will >> happen in that area. From the perspective of wildfly-elytron we haven't >> reviewed auditing yet as it should not be driving the security solution. > The App auditing is not tied to JAAS. It is done in the EJB and Web security > integration. I am tired of people just equating what we have to JAAS. JAAS > is an implementation detail. Sorry, you are quite right; what I meant to say was that the current app audit logging is in the container-to-PicketBox integration points, which is an area that will be revisited in the wildfly-elytron efforts. From Anil.Saldhana at redhat.com Wed Jun 4 10:50:20 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Wed, 04 Jun 2014 09:50:20 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538F3131.40704@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538E2DEF.8060400@redhat.com> <538ED250.4060608@jboss.com> <538F2723.8020100@redhat.com> <538F3131.40704@jboss.com> Message-ID: <538F322C.30908@redhat.com> On 06/04/2014 09:46 AM, Darran Lofthouse wrote: > On 04/06/14 15:03, Anil Saldhana wrote: >> On 06/04/2014 03:01 AM, Darran Lofthouse wrote: >>> On 03/06/14 21:19, Stan Silvert wrote: >>>> Also, Keycloak is much more than just SSO infrastructure. Other >>>> features like user management, password management, auditing, skinning, >>>> and the nice UI make it an excellent choice for applications that don't >>>> require SSO. Who wants to keep coding all that stuff by hand?
>>> Auditing I am deliberately ignoring other than to say that is going to >>> be a big topic in itself ;-) We already have two auditing solutions in >>> WildFly one purely for management, the other for apps - the app auditing >>> is tied very closely to the JAAS integration so we know something will >>> happen in that area. From the perspective of wildfly-elytron we haven't >>> reviewed auditing yet as it should not be driving the security solution. >> The App auditing is not tied to JAAS. It is done in the EJB and Web security >> integration. I am tired of people just equating what we have to JAAS. JAAS >> is an implementation detail. > Sorry you are quite right, what I mean to say was the current app audit > logging is in the container to PicketBox integration points which is an > area that will be re-visited in the wildfly-elytron efforts. No apologies necessary. :-) I think PicketBox has to be respected for all it does and not completely tie it to JAAS, since that is just an implementation detail. I am hoping all the shortcomings we have in PBox to be rectified in WildFly Elytron. From smarlow at redhat.com Wed Jun 4 10:55:01 2014 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 04 Jun 2014 10:55:01 -0400 Subject: [wildfly-dev] Hibernate 5 in WildFly 9... Message-ID: <538F3345.20409@redhat.com> Hi, Just a heads up for others, that I started making local changes to integrate the Hibernate ORM master (5.0) branch in Jipijapa and WildFly master (9.0). If others would like to contribute on this effort, I can share links to my integration branches. So far, I have been keeping the 1-1 bidirectional Hibernate dependencies in mind for this work. Currently, this is done by placing the latest Hibernate ORM jars in the org.hibernate:main module. Placeholder modules are also available for earlier Hibernate releases (org.hibernate:3 + org.hibernate:4.1). When we introduce Hibernate 5 as the default in WildFly 9, we will also have a legacy org.hibernate:4.3 module. 
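The module layout Scott describes can be made concrete with a persistence.xml fragment. This is only a sketch: the persistence-unit name is made up, and the jboss.as.jpa.providerModule hint is the mechanism described in the JPA reference guide Scott links below, used here with the placeholder slot names mentioned in this mail.

```xml
<!-- Hypothetical persistence unit; the jboss.as.jpa.providerModule hint
     selects which org.hibernate module slot the deployment uses instead
     of the default org.hibernate:main described above. -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="example-pu">
    <properties>
      <!-- e.g. one of the placeholder slots: org.hibernate:3 or org.hibernate:4.1 -->
      <property name="jboss.as.jpa.providerModule" value="org.hibernate:4.1"/>
    </properties>
  </persistence-unit>
</persistence>
```

The logical-name idea discussed later in this mail would presumably let users write something shorter than "org.hibernate:4.1" here.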
Applications can also bundle persistence providers themselves; they just need to include the Jipijapa integration classes as well (see https://docs.jboss.org/author/display/WFLY8/JPA+Reference+Guide#JPAReferenceGuide-PackagingpersistenceproviderswithyourapplicationandwhichJipijapaartifactstoinclude). We could also look at doing something similar to Hibernate ORM's org.hibernate.boot.registry.selector.spi.StrategySelector (if we can find something to group on). In the past, we talked about adding logical names for each (future) persistence provider module name, so that users could use a shorter (logical) name in their application configuration (e.g. persistence.xml hints). It would also be awesome to have a configuration setting for specifying the default persistence provider based on a logical name that could be specified in standalone.xml. Question: Is anyone against removing Hibernate 3.x support from WildFly 9? Scott From jmesnil at redhat.com Wed Jun 4 11:09:10 2014 From: jmesnil at redhat.com (Jeff Mesnil) Date: Wed, 4 Jun 2014 17:09:10 +0200 Subject: [wildfly-dev] Service assumptions and the web profile In-Reply-To: References: <538E2099.8010207@redhat.com> <538E2560.4000303@redhat.com> <538E299C.2080509@redhat.com> <2BB1FF03-76D4-4C58-B925-63FD2DCA9EE1@redhat.com> Message-ID: <774EAFE2-51AE-43ED-B792-B267987B278F@redhat.com> On 4 Jun 2014, at 15:57, Kabir Khan wrote: > Another issue would be if service A has an optional dependency (as described, not using the actual optional dependency) on service B. > We're describing allowing A to react if B shows up. Should we also react if B goes away? That is no different than the current case. A would have to react if B goes away whether B is added before A or after (by using a hypothetical notification listener).
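The "react when B shows up or goes away" pattern under discussion can be illustrated with a toy registry and listener. This is purely illustrative; it is not the actual MSC or WildFly API, and every class and method name below is invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy service registry that notifies listeners as services come and go.
class ServiceRegistry {
    interface Listener {
        void serviceAdded(String name);
        void serviceRemoved(String name);
    }

    private final Map<String, Object> services = new HashMap<>();
    private final List<Listener> listeners = new ArrayList<>();

    void addListener(Listener l) { listeners.add(l); }

    void add(String name, Object service) {
        services.put(name, service);
        for (Listener l : listeners) l.serviceAdded(name);
    }

    void remove(String name) {
        if (services.remove(name) != null) {
            for (Listener l : listeners) l.serviceRemoved(name);
        }
    }
}

// Service A installs its dependency only once B appears, and tears it
// down again if B is removed -- regardless of installation order.
class ServiceA implements ServiceRegistry.Listener {
    boolean dependencyInstalled;

    @Override
    public void serviceAdded(String name) {
        if ("B".equals(name)) dependencyInstalled = true;
    }

    @Override
    public void serviceRemoved(String name) {
        if ("B".equals(name)) dependencyInstalled = false;
    }
}
```

As Jeff notes next, the same reaction is needed in both directions: the listener fires whether B arrives after A or is removed while A is up.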
>From my experience on the messaging subsystem, I have a resource A that needs to know whether another subsystem resource B exists before adding a dependency on its service (a typical example is the messaging subsystem's http-connector, which depends on the existence of the http-upgrade handler of the undertow subsystem). If B is removed, its corresponding service will be stopped, and the service that I started when A was added will be transitively stopped. jeff -- Jeff Mesnil JBoss, a division of Red Hat http://jmesnil.net/ From david.lloyd at redhat.com Wed Jun 4 12:07:33 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 04 Jun 2014 11:07:33 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API Message-ID: <538F4445.9090604@redhat.com> The JDK's cryptography/security architecture includes facilities for handling many kinds of cryptographic key materials, but it does not include one to handle text passwords. Text passwords are handled in a very wide variety of formats and used in a variety of ways, especially when you add challenge/response algorithms and legacy systems into the mix. Pursuant to that, there is a new API inside of WildFly Elytron for the purpose of handling passwords and translating them between various useful formats. At present this API is designed to be similar to and consistent with the JDK key handling APIs. So I'll dive right in to examples of usage, based on the use cases that have been identified so far:

Example: Importing and verifying a passwd file password
-------------------------------------------------------

PasswordFactory pf = PasswordFactory.getInstance("crypt");

// Get a Password for a crypt string
PasswordSpec spec = new CryptStringPasswordSpec(passwdChars);
Password password = pf.generatePassword(spec);

// Now we can verify it
if (! pf.verify(password, "mygu3ss".toCharArray())) {
    throw new AuthenticationException("Wrong password");
}

Example: Importing and exporting a clear password
-------------------------------------------------

PasswordFactory pf = PasswordFactory.getInstance("clear");

// Import
PasswordSpec spec = new ClearPasswordSpec("p4ssw0rd".toCharArray());
Password password = pf.generatePassword(spec);

// Verify
boolean ok = pf.verify(password, "p4ssw0rd".toCharArray());

// Is it clear?
boolean isClear = pf.convertibleToKeySpec(password, ClearPasswordSpec.class);
assert password instanceof TwoWayPassword;
assert ! (password instanceof OneWayPassword);

// Export again
ClearPasswordSpec clearSpec = pf.getKeySpec(password, ClearPasswordSpec.class);
System.out.printf("The password is: %s%n", new String(clearSpec.getEncodedPassword()));

Example: Encrypting a new password
----------------------------------

PasswordFactory pf = PasswordFactory.getInstance("sha1crypt");

// API not yet established but will be similar to this possibly:
???? parameters = new ???SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray());
Password encrypted = pf.generatePassword(parameters);
assert encrypted instanceof SHA1CryptPassword;

If anyone has other use cases they feel need to be covered, or questions or comments about the API, speak up. -- - DML From jason.greene at redhat.com Wed Jun 4 13:23:04 2014 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 4 Jun 2014 12:23:04 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538E1314.2030101@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: >> Both the auth server and admin console are served from the same WAR. It >> should be possible to deploy this without using a WAR or servlets, but >> that is not planned for the initial WildFly integration.
Because of >> this current limitation, the auth server and admin console will not be >> present in a domain controller. > > This is going against the current design of AS7/WildFly exposing > management related operations over the management interface and leaving > the web container to be purely about a user's deployments.

Sorry for my delayed reply. I hadn't had a chance to read the full thread.

My understanding of the original and still current goal of Keycloak is to be more of an appliance, and also largely independent of WildFly.

>From that perspective, I don't think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It's fine to have Keycloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user.

So a typical topology, based on the factors I am aware of, would look like this:

               +------+     Auth       +----------+
               |      +---------------->          |
               |  DC  |                | Keycloak |
          +----+      +----+           |          |
          |    +------+    |           +----------+
          |                |
      +---v--+          +--v---+
      |      |          |      |
      |  HC  |          |  HC  |
    +-+      +-+      +-+      +-+
    | +--+---+ |      | +--+---+ |
    |    |     |      |    |     |
   +v-+ +v-+ +-v+    +v-+ +v-+ +-v+
   |S1| |S2| |S3|    |S4| |S5| |S6|
   +--+ +--+ +--+    +--+ +--+ +--+

Each box represents a different JVM running potentially on separate hardware.
So from the architecture, the key element we need is for the DC (and standalone server) to come pre-bundled with a client that can talk to the Keycloak black box (whether it be WildFly or a fat jar or whatever). I assume this mostly amounts to OAuth communication.

Now, as to why I don't think embedding as it is makes a lot of sense: it wouldn't really be a tightly integrated component, but rather two distinct systems duct-taped together. We would have:

1. Multiple distinct management consoles
2. Multiple distinct management APIs
3. Multiple distinct management protocols
4. Multiple distinct CLI/tools

There are of course ways to paper over this and shove them together, but you end up with leaky abstractions. Let's say the CLI could issue REST operations against Keycloak as well. That's great, but it means things like the compensating transaction model don't let you mix management changes with Keycloak changes.

Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process, might not want to put up with us requesting similarly conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From jperkins at redhat.com Wed Jun 4 13:31:31 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 04 Jun 2014 10:31:31 -0700 Subject: [wildfly-dev] EAR META-INF Visibility Message-ID: <538F57F3.6000405@redhat.com> Currently the EAR/META-INF directory is not visible to its sub-modules. Should we expose the EAR/META-INF directory to the class path? I've seen questions a few times when a user wants to use their own copy of log4j in an EAR and they place the log4j.xxx configuration file in the EAR/META-INF directory. Since the log4j.jar doesn't have the directory on its class path, the configuration file cannot be read. Anyone have thoughts or opinions on this? -- James R. Perkins JBoss by Red Hat From jason.greene at redhat.com Wed Jun 4 13:36:35 2014 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 4 Jun 2014 12:36:35 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: On Jun 4, 2014, at 12:23 PM, Jason Greene wrote: > > On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: > >>> Both the auth server and admin console are served from the same WAR. It >>> should be possible to deploy this without using a WAR or servlets, but >>> that is not planned for the initial WildFly integration. Because of >>> this current limitation, the auth server and admin console will not be >>> present in a domain controller. >> >> This is going against the current design of AS7/WildFly exposing >> management related operations over the management interface and leaving >> the web container to be purely about a users deployments. > > Sorry for my delayed reply. I hadn't had a chance to read the full thread. > > My understanding of the original and still current goal of Keycloak is to be more of an appliance, and also largely independent of WildFly. > > From that perspective, I don't think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It's fine to have Keycloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user. > > So a typical topology, based on the factors I am aware of would look like this: > > ?????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????? > ???????????????+------+?????Auth???????+----------+??? > ???????????????|??????+---------------->??????????|???
> ???????????????|??DC??|????????????????|?Keycloak?|??? > ??????????+----+??????+----+???????????|??????????|??? > ??????????|????+------+????|???????????+----------+??? > ??????????|????????????????|?????????????????????????? > ??????+---v--+??????????+--v---+?????????????????????? > ??????|??????|??????????|??????|?????????????????????? > ??????|??HC??|??????????|??HC??|?????????????????????? > ????+-+??????+-+??????+-+??????+-+???????????????????? > ????|?+--+---+?|??????|?+--+---+?|???????????????????? > ????|????|?????|??????|????|?????|???????????????????? > ???+v-+?+v-+?+-v+????+v-+?+v-+?+-v+??????????????????? > ???|S1|?|S2|?|S3|????|S4|?|S5|?|S6|??????????????????? > ???+--+?+--+?+--+????+--+?+--+?+--+

Actually it should look like this, if you factor in deployments doing auth as well:

               +------+     Auth       +----------+
               |      +---------------->          |
               |  DC  |                | Keycloak |
          +----+      +----+           |          |
          |    +------+    |           +-----^----+
          |                |                 |
      +---v--+          +--v---+             |
      |      |          |      |             |
      |  HC  |          |  HC  |             | Application Auth
    +-+      +-+      +-+      +-+           |
    | +--+---+ |      | +--+---+ |           |
    |    |     |      |    |     |           |
   +v-+ +v-+ +-v+    +v-+ +v-+ +-v+          |
   |S1| |S2| |S3|    |S4| |S5| |S6|----------+
   +--+ +--+ +--+    +--+ +--+ +--+

-- Jason T.
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From arjan.tijms at gmail.com Wed Jun 4 13:44:20 2014 From: arjan.tijms at gmail.com (arjan tijms) Date: Wed, 4 Jun 2014 19:44:20 +0200 Subject: [wildfly-dev] EAR META-INF Visibility In-Reply-To: <538F57F3.6000405@redhat.com> References: <538F57F3.6000405@redhat.com> Message-ID: On Wed, Jun 4, 2014 at 7:31 PM, James R. Perkins wrote: > Currently the EAR/META-INF directory is not visible to it's sub-modules. > Should we expose the EAR/META-INF directory to the class path? > > I've seen questions a few times when a user wants to use their own copy > of log4j in an EAR and they place the log4j.xxx configuration file in > the EAR/META-INF directory. Since the log4j.jar doesn't have the > directory on it's class path the conflagration file can not be read. > > Anyone have thoughts or opinions on this? I think the EAR's root is a very logical place to put configuration files like that. There really is no good other location. Picking a random EJB module and putting it there is weird, and putting it in every individual WEB module is weird too. If anything, especially with the upcoming configuration JSR in mind, I think the Java EE spec should just demand that EAR/META-INF is on the class path. So I'd love JBoss/WildFly to have this as a proprietary option, but eventually it should be handled by the spec too. Kind regards, Arjan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140604/dfda5b01/attachment.html From jason.greene at redhat.com Wed Jun 4 13:48:07 2014 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 4 Jun 2014 12:48:07 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: <64E74D85-C04E-4225-8DB5-891FC48CF71E@redhat.com> On Jun 4, 2014, at 12:23 PM, Jason Greene wrote: > There is of course ways to paper over this and shove them together but you end up with leaky abstractions. Like lets say the CLI could issue REST operations against Keycloak as well. Thats great but that means things like the compensating transaction model don?t let you mix management changes with keycloak changes. > > Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. Other problems I forgot to mention (oops sorry!) 1. Lifecycle robustness problems - Management is not supposed to affect applications, so if the user takes down or moves the DC it would take down application auth as well - Bad! 2. Chicken-egg problems in standalone mode - subsystems aren?t started when the server is in admin-only mode. (Although this one is solvable) 3. Lots of additional work. -- Jason T. 
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From brian.stansberry at redhat.com Wed Jun 4 14:25:33 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 04 Jun 2014 13:25:33 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <64E74D85-C04E-4225-8DB5-891FC48CF71E@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <64E74D85-C04E-4225-8DB5-891FC48CF71E@redhat.com> Message-ID: <538F649D.1050809@redhat.com> On 6/4/14, 12:48 PM, Jason Greene wrote: > > On Jun 4, 2014, at 12:23 PM, Jason Greene wrote: > >> There is of course ways to paper over this and shove them together but you end up with leaky abstractions. Like lets say the CLI could issue REST operations against Keycloak as well. Thats great but that means things like the compensating transaction model don?t let you mix management changes with keycloak changes. >> >> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. > > Other problems I forgot to mention (oops sorry!) > > 1. Lifecycle robustness problems - Management is not supposed to affect applications, so if the user takes down or moves the DC it would take down application auth as well - Bad! > 2. Chicken-egg problems in standalone mode - subsystems aren?t started when the server is in admin-only mode. (Although this one is solvable) Subsystem services aren't started only because operation handlers by default don't do runtime stuff in admin-only mode. But a subsystem author could certainly have their handlers do stuff in admin-only if it was appropriate. 
Your comment however makes me realize a flaw in how I'd been seeing this, where the Keycloak server could simply be an application running on one of the servers in the domain. But servers are under the control of an HC, and an HC in admin-only will not launch servers. So there's a chicken-egg issue there. > 3. Lots of additional work. > > -- > Jason T. Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From bburke at redhat.com Wed Jun 4 15:32:18 2014 From: bburke at redhat.com (Bill Burke) Date: Wed, 04 Jun 2014 15:32:18 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> Message-ID: <538F7442.2070306@redhat.com> On 6/4/2014 1:23 PM, Jason Greene wrote: > > On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: > >>> Both the auth server and admin console are served from the same WAR. It >>> should be possible to deploy this without using a WAR or servlets, but >>> that is not planned for the initial WildFly integration. Because of >>> this current limitation, the auth server and admin console will not be >>> present in a domain controller. >> >> This is going against the current design of AS7/WildFly exposing >> management related operations over the management interface and leaving >> the web container to be purely about a users deployments. > > Sorry for my delayed reply. I hadn?t had a chance to read the full thread. > > My understanding of the original and still current goal of key cloak is to be more of an appliance, and also largely independent of WildFly. > > From that perspective, I don?t think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). 
It?s fine to have KeyCloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user. > > So a typical topology, based on the factors I am aware of would look like this: > > ?????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????? > ???????????????+------+?????Auth???????+----------+??? > ???????????????|??????+---------------->??????????|??? > ???????????????|??DC??|????????????????|?Keycloak?|??? > ??????????+----+??????+----+???????????|??????????|??? > ??????????|????+------+????|???????????+----------+??? > ??????????|????????????????|?????????????????????????? > ??????+---v--+??????????+--v---+?????????????????????? > ??????|??????|??????????|??????|?????????????????????? > ??????|??HC??|??????????|??HC??|?????????????????????? > ????+-+??????+-+??????+-+??????+-+???????????????????? > ????|?+--+---+?|??????|?+--+---+?|???????????????????? > ????|????|?????|??????|????|?????|???????????????????? > ???+v-+?+v-+?+-v+????+v-+?+v-+?+-v+??????????????????? > ???|S1|?|S2|?|S3|????|S4|?|S5|?|S6|??????????????????? > ???+--+?+--+?+--+????+--+?+--+?+--+??????????????????? > ?????????????????????????????????????????????????????? > > Each box represents a different JVM running potentially on separate hardware. > > So from the architecture the key element we need is for the DC (and standalone server) to come pre bundled with a client that can talk to the Keycloak blackbox (whether it be WildFly or fat jar or whatever). I assume this mostly amounts to OAUTH communication. > > Now as to why I don?t think embedding as it is makes a lot of sense, is because it wouldn?t really be a tightly integrated component, but rather two distinct systems duct taped together. We would have: > > 1. Multiple distinct management consoles > 2. Multiple distinct management APIs > 3. Multiple distinct management protocols > 4. 
Multiple distinct CLI/tools > > There is of course ways to paper over this and shove them together but you end up with leaky abstractions. Like lets say the CLI could issue REST operations against Keycloak as well. Thats great but that means things like the compensating transaction model don't let you mix management changes with keycloak changes. > > Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. >

Jason,

I think we should first get Keycloak to secure WildFly in standalone mode or with a domain controller. In both cases the WildFly console should be securable by Keycloak. I'm betting that a lot of these issues will flesh out and become much clearer on how to solve.

Regardless of the WildFly team vetoing the inclusion of Keycloak, it is a very important use case for us to be able to be embedded, to secure WildFly, and to manage security for WildFly.

We have already learned a lot by being embedded with Aerogear UPS as their security console and solution. For example, Keycloak now has pluggable themes/skins for its entire UI: admin console, login pages, etc. This has allowed Keycloak to be branded as an Aerogear subsystem and it looks like one product. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From rodakr at gmx.ch Wed Jun 4 15:40:49 2014 From: rodakr at gmx.ch (Radoslaw Rodak) Date: Wed, 4 Jun 2014 21:40:49 +0200 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: <538E82A3.1060104@redhat.com> References: <538E82A3.1060104@redhat.com> Message-ID: > The following are presently non- or anti-goals: > > ?
Any provision to support JAAS Subject as a security context (due to > performance and correctness concerns)? > ? Any provision to support JAAS LoginContext (due to tight integration > with Subject) > ? Any provision to maintain API compatibility with PicketBox (this is > not presently an established requirement and thus would add undue > implementation complexity, if it is indeed even possible) > ? Replicate Kerberos-style ticket-based credential forwarding (just use > Kerberos in this case) > > ? You may note that this is in contrast with a previous post to the AS 7 > list [9] in which I advocated simply unifying on Subject. Subsequent > research uncovered a number of performance and implementation weaknesses > in JAAS that have since convinced the security team that we should no > longer be relying on it. Is there any hope to have in Elytron a way to integrate third-party products that support user identity propagation with JAAS, like CORBA or IBM MQ, with WildFly? From david.lloyd at redhat.com Wed Jun 4 16:34:17 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 04 Jun 2014 15:34:17 -0500 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: References: <538E82A3.1060104@redhat.com> Message-ID: <538F82C9.40607@redhat.com> On 06/04/2014 02:40 PM, Radoslaw Rodak wrote: >> The following are presently non- or anti-goals: >> >> ? Any provision to support JAAS Subject as a security context (due to >> performance and correctness concerns)? >> ? Any provision to support JAAS LoginContext (due to tight integration >> with Subject) >> ? Any provision to maintain API compatibility with PicketBox (this is >> not presently an established requirement and thus would add undue >> implementation complexity, if it is indeed even possible) >> ? Replicate Kerberos-style ticket-based credential forwarding (just use >> Kerberos in this case) >> >> ?
You may note that this is in contrast with a previous post to the AS 7 >> list [9] in which I advocated simply unifying on Subject. Subsequent >> research uncovered a number of performance and implementation weaknesses >> in JAAS that have since convinced the security team that we should no >> longer be relying on it. > > > Is there any hope to have in Elytron a way to be able to integrate third part products supporting user identity propagation with JAAS like Corba, IBM MQ ? with Wildfly? Yes, however it may not be possible using one single integration methodology. Experience has shown that every vendor uses JAAS in different ways, so we would have to approach each item on a case-by-case basis. -- - DML From jason.greene at redhat.com Wed Jun 4 17:05:55 2014 From: jason.greene at redhat.com (Jason Greene) Date: Wed, 4 Jun 2014 16:05:55 -0500 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <538F7442.2070306@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538F7442.2070306@redhat.com> Message-ID: On Jun 4, 2014, at 2:32 PM, Bill Burke wrote: > > > On 6/4/2014 1:23 PM, Jason Greene wrote: >> >> On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: >> >>>> Both the auth server and admin console are served from the same WAR. It >>>> should be possible to deploy this without using a WAR or servlets, but >>>> that is not planned for the initial WildFly integration. Because of >>>> this current limitation, the auth server and admin console will not be >>>> present in a domain controller. >>> >>> This is going against the current design of AS7/WildFly exposing >>> management related operations over the management interface and leaving >>> the web container to be purely about a users deployments. >> >> Sorry for my delayed reply. I hadn?t had a chance to read the full thread. >> >> My understanding of the original and still current goal of key cloak is to be more of an appliance, and also largely independent of WildFly. 
>>
>> From that perspective, I don't think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It's fine to have Keycloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user.
>>
>> So a typical topology, based on the factors I am aware of, would look like this:
>>
>>                +------+     Auth       +----------+
>>                |      +---------------->          |
>>                |  DC  |                | Keycloak |
>>           +----+      +----+           |          |
>>           |    +------+    |           +----------+
>>           |                |
>>       +---v--+          +--v---+
>>       |      |          |      |
>>       |  HC  |          |  HC  |
>>     +-+      +-+      +-+      +-+
>>     | +--+---+ |      | +--+---+ |
>>     |    |     |      |    |     |
>>    +v-+ +v-+ +-v+    +v-+ +v-+ +-v+
>>    |S1| |S2| |S3|    |S4| |S5| |S6|
>>    +--+ +--+ +--+    +--+ +--+ +--+
>>
>> Each box represents a different JVM, running potentially on separate hardware.
>>
>> So from the architecture, the key element we need is for the DC (and standalone server) to come pre-bundled with a client that can talk to the Keycloak black box (whether it be WildFly or a fat jar or whatever). I assume this mostly amounts to OAuth communication.
>>
>> Now, as to why I don't think embedding as it is makes a lot of sense: it wouldn't really be a tightly integrated component, but rather two distinct systems duct-taped together.
We would have:
>>
>> 1. Multiple distinct management consoles
>> 2. Multiple distinct management APIs
>> 3. Multiple distinct management protocols
>> 4. Multiple distinct CLI/tools
>>
>> There are of course ways to paper over this and shove them together, but you end up with leaky abstractions. Let's say the CLI could issue REST operations against Keycloak as well. That's great, but it means things like the compensating transaction model don't let you mix management changes with Keycloak changes.
>>
>> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process, might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve.
>
> Jason,
>
> I think we should first get Keycloak to secure WildFly in standalone mode or with a domain controller. In both cases the WildFly console should be securable by Keycloak. I'm betting that a lot of these issues will flesh out and become much clearer on how to solve.

Certainly agree there.

> Regardless of the WildFly team vetoing the inclusion of Keycloak, it is a very important use case for us to be able to be embedded and to secure WildFly and to manage security for WildFly.
>
> We have already learned a lot by being embedded with Aerogear UPS as their security console and solution. For example, Keycloak now has pluggable themes/skins for its entire UI: admin console, login pages, etc. This has allowed Keycloak to be branded as an Aerogear subsystem and it looks like one product.

I don't think anyone has vetoed anything. I have just highlighted the challenges. They aren't insurmountable, but they would require some effort to solve.
We could for example have management operation wrappers which trigger the appropriate actions in Keycloak, and this could solve the CLI problems I mentioned, and allow for the admin console to do cross-system interactions. Some of the other issues I don't have a clear idea on, but some thinking might come up with something.

> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
> _______________________________________________
> wildfly-dev mailing list
> wildfly-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/wildfly-dev

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat

From arun.gupta at gmail.com Wed Jun 4 17:08:12 2014
From: arun.gupta at gmail.com (Arun Gupta)
Date: Wed, 4 Jun 2014 17:08:12 -0400
Subject: [wildfly-dev] Syntax error when applying patch and exiting jboss-cli
Message-ID:

[disconnected /] patch apply ../wildfly-8.1.0.Final-update/wildfly-8.1.0.Final.patch
{
    "outcome" : "success",
    "result" : {}
}
[disconnected /]
[disconnected /]
[disconnected /]
[disconnected /] exit

logging.configuration already set in JAVA_OPTS
./bin/jboss-cli.sh: line 81: syntax error near unexpected token `fi'
./bin/jboss-cli.sh: line 81: `fi'

Known issue ?

Arun
--
http://blog.arungupta.me
http://twitter.com/arungupta

From stuart.w.douglas at gmail.com Wed Jun 4 17:09:47 2014
From: stuart.w.douglas at gmail.com (Stuart Douglas)
Date: Wed, 04 Jun 2014 16:09:47 -0500
Subject: [wildfly-dev] Syntax error when applying patch and exiting jboss-cli
In-Reply-To: References: Message-ID: <538F8B1B.4060109@gmail.com>

I think this is a known issue that occurs if you apply the patch when you are in a [disconnected] state.
Stuart

Arun Gupta wrote:
> [disconnected /] patch apply ../wildfly-8.1.0.Final-update/wildfly-8.1.0.Final.patch
> {
>     "outcome" : "success",
>     "result" : {}
> }
> [disconnected /]
> [disconnected /]
> [disconnected /]
> [disconnected /] exit
>
> logging.configuration already set in JAVA_OPTS
> ./bin/jboss-cli.sh: line 81: syntax error near unexpected token `fi'
> ./bin/jboss-cli.sh: line 81: `fi'
>
> Known issue ?
>
> Arun

From jason.greene at redhat.com Wed Jun 4 17:14:38 2014
From: jason.greene at redhat.com (Jason Greene)
Date: Wed, 4 Jun 2014 16:14:38 -0500
Subject: [wildfly-dev] Syntax error when applying patch and exiting jboss-cli
In-Reply-To: References: Message-ID: <10B4D68C-8D6C-4DAF-AAB1-CE040599F369@redhat.com>

On Jun 4, 2014, at 4:08 PM, Arun Gupta wrote:
> [disconnected /] patch apply ../wildfly-8.1.0.Final-update/wildfly-8.1.0.Final.patch
> {
>     "outcome" : "success",
>     "result" : {}
> }
> [disconnected /]
> [disconnected /]
> [disconnected /]
> [disconnected /] exit
>
> logging.configuration already set in JAVA_OPTS
> ./bin/jboss-cli.sh: line 81: syntax error near unexpected token `fi'
> ./bin/jboss-cli.sh: line 81: `fi'
>
> Known issue ?

Yes, it's IMO a design mistake with offline patching. The big issue is that the patch tool patches itself live instead of isolating itself or staging. So this issue is caused by the CLI shell script getting updated before it's fully parsed. The good news is that it's harmless, as long as you quit before running any non-patch commands (as mentioned in the instructions).

We need to fix this in 9.

--
Jason T.
Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat

From arun.gupta at gmail.com Wed Jun 4 17:18:39 2014
From: arun.gupta at gmail.com (Arun Gupta)
Date: Wed, 4 Jun 2014 17:18:39 -0400
Subject: [wildfly-dev] Syntax error when applying patch and exiting jboss-cli
In-Reply-To: <10B4D68C-8D6C-4DAF-AAB1-CE040599F369@redhat.com>
References: <10B4D68C-8D6C-4DAF-AAB1-CE040599F369@redhat.com>
Message-ID:

An issue filed ?
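[Editor's note: Jason's explanation above — the patch tool rewrites files it is itself running from, so jboss-cli.sh changes underneath a shell that is still parsing it — suggests the usual staging fix: run from a copy, then let the live files be replaced. A minimal sketch of that idea; the class and file names are illustrative, not the actual patch-tool layout:]

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the staging idea: a tool that may overwrite its own files first
// copies itself to a staging directory and runs from there, so in-place
// updates cannot corrupt the copy that is currently executing.
public class StagedRun {

    /** Copies 'tool' into a fresh staging directory and returns the copy's path. */
    static Path stage(Path tool) throws IOException {
        Path dir = Files.createTempDirectory("staged-tool-");
        Path copy = dir.resolve(tool.getFileName());
        Files.copy(tool, copy, StandardCopyOption.REPLACE_EXISTING);
        return copy;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a script like jboss-cli.sh.
        Path original = Files.createTempFile("jboss-cli-", ".sh");
        Files.writeString(original, "echo v1\n");

        Path staged = stage(original);

        // The "patch" can now rewrite the original while the staged copy
        // keeps the old, fully consistent content.
        Files.writeString(original, "echo v2\n");

        System.out.print(Files.readString(staged));   // echo v1
        System.out.print(Files.readString(original)); // echo v2
    }
}
```

With this layout, the running copy never observes a half-written script, which is exactly the failure mode in Arun's transcript.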
On Wed, Jun 4, 2014 at 5:14 PM, Jason Greene wrote:
>
> On Jun 4, 2014, at 4:08 PM, Arun Gupta wrote:
>> [disconnected /] patch apply ../wildfly-8.1.0.Final-update/wildfly-8.1.0.Final.patch
>> {
>>     "outcome" : "success",
>>     "result" : {}
>> }
>> [disconnected /]
>> [disconnected /]
>> [disconnected /]
>> [disconnected /] exit
>>
>> logging.configuration already set in JAVA_OPTS
>> ./bin/jboss-cli.sh: line 81: syntax error near unexpected token `fi'
>> ./bin/jboss-cli.sh: line 81: `fi'
>>
>> Known issue ?
>
> Yes, it's IMO a design mistake with offline patching. The big issue is that the patch tool patches itself live instead of isolating itself or staging. So this issue is caused by the CLI shell script getting updated before it's fully parsed. The good news is that it's harmless, as long as you quit before running any non-patch commands (as mentioned in the instructions).
>
> We need to fix this in 9.
>
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat

--
http://blog.arungupta.me
http://twitter.com/arungupta

From emuckenh at redhat.com Thu Jun 5 04:15:04 2014
From: emuckenh at redhat.com (Emanuel Muckenhuber)
Date: Thu, 05 Jun 2014 10:15:04 +0200
Subject: [wildfly-dev] Syntax error when applying patch and exiting jboss-cli
In-Reply-To: References: <10B4D68C-8D6C-4DAF-AAB1-CE040599F369@redhat.com>
Message-ID: <53902708.2040209@redhat.com>

No, there is no issue filed for this particular one yet; I am going to create one.

Thanks,
Emanuel

On 04/06/14 23:18, Arun Gupta wrote:
> An issue filed ?
>
> On Wed, Jun 4, 2014 at 5:14 PM, Jason Greene wrote:
>>
>> On Jun 4, 2014, at 4:08 PM, Arun Gupta wrote:
>>> [disconnected /] patch apply ../wildfly-8.1.0.Final-update/wildfly-8.1.0.Final.patch
>>> {
>>>     "outcome" : "success",
>>>     "result" : {}
>>> }
>>> [disconnected /]
>>> [disconnected /]
>>> [disconnected /]
>>> [disconnected /] exit
>>>
>>> logging.configuration already set in JAVA_OPTS
>>> ./bin/jboss-cli.sh: line 81: syntax error near unexpected token `fi'
>>> ./bin/jboss-cli.sh: line 81: `fi'
>>>
>>> Known issue ?
>>
>> Yes, it's IMO a design mistake with offline patching. The big issue is that the patch tool patches itself live instead of isolating itself or staging. So this issue is caused by the CLI shell script getting updated before it's fully parsed. The good news is that it's harmless, as long as you quit before running any non-patch commands (as mentioned in the instructions).
>>
>> We need to fix this in 9.
>>
>> --
>> Jason T. Greene
>> WildFly Lead / JBoss EAP Platform Architect
>> JBoss, a division of Red Hat

From darran.lofthouse at jboss.com Thu Jun 5 04:45:24 2014
From: darran.lofthouse at jboss.com (Darran Lofthouse)
Date: Thu, 05 Jun 2014 09:45:24 +0100
Subject: [wildfly-dev] Keycloak SSO in WildFly 9
In-Reply-To: References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538F7442.2070306@redhat.com>
Message-ID: <53902E24.7050005@jboss.com>

On 04/06/14 22:05, Jason Greene wrote:
>
> On Jun 4, 2014, at 2:32 PM, Bill Burke wrote:
>>
>> On 6/4/2014 1:23 PM, Jason Greene wrote:
>>>
>>> On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote:
>>>
>>>>> Both the auth server and admin console are served from the same WAR. It should be possible to deploy this without using a WAR or servlets, but that is not planned for the initial WildFly integration. Because of this current limitation, the auth server and admin console will not be present in a domain controller.
>>>>
>>>> This is going against the current design of AS7/WildFly exposing management related operations over the management interface and leaving the web container to be purely about a user's deployments.
>>>
>>> Sorry for my delayed reply. I hadn't had a chance to read the full thread.
>>>
>>> My understanding of the original and still current goal of Keycloak is to be more of an appliance, and also largely independent of WildFly.
>>>
>>> From that perspective, I don't think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It's fine to have Keycloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user.
>>>
>>> So a typical topology, based on the factors I am aware of, would look like this:
>>>
>>>                +------+     Auth       +----------+
>>>                |      +---------------->          |
>>>                |  DC  |                | Keycloak |
>>>           +----+      +----+           |          |
>>>           |    +------+    |           +----------+
>>>           |                |
>>>       +---v--+          +--v---+
>>>       |      |          |      |
>>>       |  HC  |          |  HC  |
>>>     +-+      +-+      +-+      +-+
>>>     | +--+---+ |      | +--+---+ |
>>>     |    |     |      |    |     |
>>>    +v-+ +v-+ +-v+    +v-+ +v-+ +-v+
>>>    |S1| |S2| |S3|    |S4| |S5| |S6|
>>>    +--+ +--+ +--+    +--+ +--+ +--+
>>>
>>> Each box represents a different JVM, running potentially on separate hardware.
>>>
>>> So from the architecture, the key element we need is for the DC (and standalone server) to come pre-bundled with a client that can talk to the Keycloak black box (whether it be WildFly or a fat jar or whatever). I assume this mostly amounts to OAuth communication.
>>>
>>> Now, as to why I don't think embedding as it is makes a lot of sense: it wouldn't really be a tightly integrated component, but rather two distinct systems duct-taped together. We would have:
>>>
>>> 1. Multiple distinct management consoles
>>> 2. Multiple distinct management APIs
>>> 3. Multiple distinct management protocols
>>> 4. Multiple distinct CLI/tools
>>>
>>> There are of course ways to paper over this and shove them together, but you end up with leaky abstractions. Let's say the CLI could issue REST operations against Keycloak as well. That's great, but it means things like the compensating transaction model don't let you mix management changes with Keycloak changes.
>>>
>>> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process, might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve.
>>
>> Jason,
>>
>> I think we should first get Keycloak to secure WildFly in standalone mode or with a domain controller. In both cases the WildFly console should be securable by Keycloak. I'm betting that a lot of these issues will flesh out and become much clearer on how to solve.
>
> Certainly agree there.

+1 This is what I was trying to say in a reply to Stan earlier: getting to the point where we can enable Keycloak-based authentication for the HTTP management interface, in standalone mode and in domain mode, sounds like the ideal starting point.
For one, in itself it is a complete deliverable task that provides a complete set of functionality, and it completely removes any obstacle for those who wish to use Keycloak instead of the standard HTTP mechanisms. As a second task we can then review how a default bundling with Keycloak could be provided, either enabled by default or enableable - but hopefully you can see from some of the messages here that providing the complete solution has a lot of issues that need to be resolved.

>
>> Regardless of the WildFly team vetoing the inclusion of Keycloak, it is a very important use case for us to be able to be embedded and to secure WildFly and to manage security for WildFly.
>>
>> We have already learned a lot by being embedded with Aerogear UPS as their security console and solution. For example, Keycloak now has pluggable themes/skins for its entire UI: admin console, login pages, etc. This has allowed Keycloak to be branded as an Aerogear subsystem and it looks like one product.
>
> I don't think anyone has vetoed anything. I have just highlighted the challenges. They aren't insurmountable, but they would require some effort to solve. We could for example have management operation wrappers which trigger the appropriate actions in Keycloak, and this could solve the CLI problems I mentioned, and allow for the admin console to do cross-system interactions. Some of the other issues I don't have a clear idea on, but some thinking might come up with something.

Please don't feel like anything is being vetoed - if we were vetoing anything we would be coming back with lines like "project Elytron is well underway", "you are going to be interfacing with existing implementations that we know are changing", "discussing Keycloak today is a time drain", etc.
Personally I want to see Keycloak in for authentication as soon as possible. It is going to be representative of the approaches we must be able to support with the wildfly-elytron work, and as Stan says, having a testable existing implementation to compare against will provide us a lot of benefits in this area.

But for the complete solution I think we have a lot more issues to solve. Application server development has progressed a long way since we effectively just had a standalone mode server - everything we do we now need to consider in both standalone mode and domain mode. We have also had a lot of input from the security response team, and the current design constraints we operate in for our out-of-the-box offering are based on a lot of discussion with them as well as with other interested parties focussed on the developer experience.

One other aspect I experience when it comes to security is that if you take the simple problem first and solve it, adding a solution for the complex problem becomes much harder.

And then finally, let's say we add a full standalone solution to the WildFly codebase today and leave domain mode to be handled second: we risk reaching a point, if domain mode is not ready, where either Jason has to release an app server with domain mode behaving differently to standalone mode, or the release has to be held up.

So my preference here is that we identify the task that we can deliver in its entirety, and look at getting authentication working for both standalone and domain mode, and then look at the default inclusion as a second step. This will give us something that can be documented, used, demoed, blogged about, etc. The second stage would then be removing some of the manual installation tasks a user would need to perform, but in the first stage we would have reached the major milestone of Keycloak being usable for authentication when managing WildFly.
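[Editor's note: the first-step goal discussed above — Keycloak-based authentication for the HTTP management interface — boils down to the management endpoint accepting a bearer token and delegating its validation to the external auth server. A deliberately generic sketch of that shape; none of these names are actual WildFly or Keycloak APIs, and the verifier stands in for whatever token check an adapter would provide:]

```java
import java.util.Optional;
import java.util.function.Function;

// Generic sketch: an HTTP management endpoint delegates bearer-token
// validation to a pluggable verifier (e.g. an SSO adapter). Hypothetical
// names throughout; this is the integration shape, not a real API.
public class TokenGate {

    /** Maps a raw token to the authenticated user name, if the token is valid. */
    interface TokenVerifier extends Function<String, Optional<String>> {}

    private final TokenVerifier verifier;

    TokenGate(TokenVerifier verifier) { this.verifier = verifier; }

    /**
     * Returns the authenticated user for an Authorization header value, or
     * empty if the header is missing, malformed, or the token is rejected.
     */
    Optional<String> authenticate(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            return Optional.empty();
        }
        String token = authorizationHeader.substring("Bearer ".length());
        return verifier.apply(token);
    }

    public static void main(String[] args) {
        // Stub verifier standing in for a real adapter's token check.
        TokenGate gate = new TokenGate(t -> t.equals("good-token")
                ? Optional.of("admin") : Optional.empty());

        System.out.println(gate.authenticate("Bearer good-token")); // Optional[admin]
        System.out.println(gate.authenticate("Bearer bad-token"));  // Optional.empty
        System.out.println(gate.authenticate(null));                // Optional.empty
    }
}
```

The point of the pluggable verifier is the one Darran makes: the same gate works whether the tokens come from Keycloak or from whatever the wildfly-elytron work eventually supports.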
>
>> --
>> Bill Burke
>> JBoss, a division of Red Hat
>> http://bill.burkecentral.com
>> _______________________________________________
>> wildfly-dev mailing list
>> wildfly-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>
> --
> Jason T. Greene
> WildFly Lead / JBoss EAP Platform Architect
> JBoss, a division of Red Hat
>
> _______________________________________________
> wildfly-dev mailing list
> wildfly-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/wildfly-dev

From darran.lofthouse at jboss.com Thu Jun 5 04:50:16 2014
From: darran.lofthouse at jboss.com (Darran Lofthouse)
Date: Thu, 05 Jun 2014 09:50:16 +0100
Subject: [wildfly-dev] New security sub-project: WildFly Elytron
In-Reply-To: <538F82C9.40607@redhat.com>
References: <538E82A3.1060104@redhat.com> <538F82C9.40607@redhat.com>
Message-ID: <53902F48.4060708@jboss.com>

+1 Recently, looking at how different JDBC driver vendors and different JDK vendors interpret the use of JAAS for Kerberos propagation, there are a lot of different interpretations of the same spec / APIs!!

On 04/06/14 21:34, David M. Lloyd wrote:
> On 06/04/2014 02:40 PM, Radoslaw Rodak wrote:
>>> The following are presently non- or anti-goals:
>>>
>>> • Any provision to support JAAS Subject as a security context (due to performance and correctness concerns)
>>> • Any provision to support JAAS LoginContext (due to tight integration with Subject)
>>> • Any provision to maintain API compatibility with PicketBox (this is not presently an established requirement and thus would add undue implementation complexity, if it is indeed even possible)
>>> • Replicate Kerberos-style ticket-based credential forwarding (just use Kerberos in this case)
>>>
>>> You may note that this is in contrast with a previous post to the AS 7 list [9] in which I advocated simply unifying on Subject.
>>> Subsequent research uncovered a number of performance and implementation weaknesses in JAAS that have since convinced the security team that we should no longer be relying on it.
>>
>> Is there any hope of having in Elytron a way to integrate third-party products that support user identity propagation with JAAS (such as Corba or IBM MQ) with WildFly?
>
> Yes, however it may not be possible using one single integration methodology. Experience has shown that every vendor uses JAAS in different ways, so we would have to approach each item on a case-by-case basis.

From arjan.tijms at gmail.com Thu Jun 5 05:50:46 2014
From: arjan.tijms at gmail.com (arjan tijms)
Date: Thu, 5 Jun 2014 11:50:46 +0200
Subject: [wildfly-dev] New security sub-project: WildFly Elytron
In-Reply-To: <53902F48.4060708@jboss.com>
References: <538E82A3.1060104@redhat.com> <538F82C9.40607@redhat.com> <53902F48.4060708@jboss.com>
Message-ID:

Hi,

On Thu, Jun 5, 2014 at 10:50 AM, Darran Lofthouse <darran.lofthouse at jboss.com> wrote:
> +1 Recently, looking at how different JDBC driver vendors and different JDK vendors interpret the use of JAAS for Kerberos propagation, there are a lot of different interpretations of the same spec / APIs!!

JAAS, and especially JAAS in Java EE, is not the universal standard you may think it is. Some parts are interpreted differently, but other parts are just not specified. How to store a username and roles in the "bag of principals" that the Subject is, is particularly notorious. I wrote a post about that subject (no pun) here:
http://arjan-tijms.blogspot.com/2014/02/jaas-in-java-ee-is-not-universal.html

I wonder, by the way, if any of the work done for this WildFly Elytron project (and previous work done for Picketbox/link) could possibly be used as feedback on how to improve the security APIs in Java EE itself. Has this ever been considered?

Kind regards,
Arjan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140605/ce90a4f1/attachment.html

From darran.lofthouse at jboss.com Thu Jun 5 06:04:38 2014
From: darran.lofthouse at jboss.com (Darran Lofthouse)
Date: Thu, 05 Jun 2014 11:04:38 +0100
Subject: [wildfly-dev] New security sub-project: WildFly Elytron
In-Reply-To: References: <538E82A3.1060104@redhat.com> <538F82C9.40607@redhat.com> <53902F48.4060708@jboss.com>
Message-ID: <539040B6.6010601@jboss.com>

On 05/06/14 10:50, arjan tijms wrote:
> Hi,
>
> On Thu, Jun 5, 2014 at 10:50 AM, Darran Lofthouse wrote:
>
> +1 Recently, looking at how different JDBC driver vendors and different JDK vendors interpret the use of JAAS for Kerberos propagation, there are a lot of different interpretations of the same spec / APIs!!
>
> JAAS, and especially JAAS in Java EE, is not the universal standard you may think it is.

We have certainly come to that conclusion as well ;-)

My view on JAAS is that it is actually a client-side API that pre-dated J2EE. The J2EE specs left security decisions down to the vendors, and as at the time only simple security solutions were in demand (validate a plain-text username and password), JAAS was quickly adopted, as this was something it could do. It is the demand for more complex solutions that has started to show the limitations of how much can be achieved with it.

> Some parts are interpreted differently, but other parts are just not specified. How to store a username and roles in the "bag of principals" that the Subject is, is particularly notorious. I wrote a post about that subject (no pun) here:
> http://arjan-tijms.blogspot.com/2014/02/jaas-in-java-ee-is-not-universal.html
>
> I wonder, by the way, if any of the work done for this WildFly Elytron project (and previous work done for Picketbox/link) could possibly be used as feedback on how to improve the security APIs in Java EE itself. Has this ever been considered?
>
> Kind regards,
> Arjan

From arun.gupta at gmail.com Thu Jun 5 06:54:06 2014
From: arun.gupta at gmail.com (Arun Gupta)
Date: Thu, 5 Jun 2014 06:54:06 -0400
Subject: [wildfly-dev] Patching from previous versions ?
Message-ID:

When 8.2 becomes available, will there be a patch available that will allow upgrading from 8.1 only, or from 8.0 as well ?

And similarly for future versions, will the patch only be available from the previous version, or from all previous major/minor versions ?

Arun
--
http://blog.arungupta.me
http://twitter.com/arungupta

From brian.stansberry at redhat.com Thu Jun 5 10:17:21 2014
From: brian.stansberry at redhat.com (Brian Stansberry)
Date: Thu, 05 Jun 2014 09:17:21 -0500
Subject: [wildfly-dev] Patching from previous versions ?
In-Reply-To: References: Message-ID: <53907BF1.8010003@redhat.com>

On 6/5/14, 5:54 AM, Arun Gupta wrote:
> When 8.2 becomes available, will there be a patch available that will allow upgrading from 8.1 only, or from 8.0 as well ?
>
> And similarly for future versions, will the patch only be available from the previous version, or from all previous major/minor versions ?

My 2 cents. I don't see us supporting updating across major versions. Even if by some chance we could, I doubt we'd want to set that precedent.

As for updating from 8.0 to 8.2 in one update file, that's technically possible, but I'm not sure it's worth the effort. An update for 8.0->8.2 would essentially consist of the 8.0->8.1 update and then the 8.1->8.2 update packaged in the same file. That would be a really big file.
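[Editor's note: Arjan's "bag of principals" point above — that JAAS never specifies how a container should store roles in a Subject — can be made concrete with a small sketch. Both principal conventions below are hypothetical stand-ins for the vendor-specific ones his post surveys; portable code ends up probing for each convention it knows about:]

```java
import java.security.Principal;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.security.auth.Subject;

// Sketch of the "bag of principals" problem: two containers can store the
// same roles in a Subject using different, equally spec-legal layouts, so
// code reading the Subject must special-case every convention it supports.
public class SubjectRoles {

    // Hypothetical convention A: one principal per role, of a dedicated type.
    static final class RolePrincipal implements Principal {
        private final String name;
        RolePrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Hypothetical convention B: all roles packed into a single principal
    // whose name is a comma-separated list.
    static final class RoleListPrincipal implements Principal {
        private final String roles;
        RoleListPrincipal(String roles) { this.roles = roles; }
        public String getName() { return roles; }
    }

    /** Extracts role names, probing each known convention in turn. */
    static Set<String> rolesOf(Subject subject) {
        Set<String> roles = new LinkedHashSet<>();
        for (Principal p : subject.getPrincipals()) {
            if (p instanceof RolePrincipal) {
                roles.add(p.getName());
            } else if (p instanceof RoleListPrincipal) {
                for (String r : p.getName().split(",")) roles.add(r.trim());
            }
        }
        return roles;
    }

    public static void main(String[] args) {
        Subject a = new Subject();
        a.getPrincipals().add(new RolePrincipal("admin"));
        a.getPrincipals().add(new RolePrincipal("user"));

        Subject b = new Subject();
        b.getPrincipals().add(new RoleListPrincipal("admin, user"));

        // Same logical identity, two physical layouts.
        System.out.println(rolesOf(a));
        System.out.println(rolesOf(b));
    }
}
```

Every extra container layout means another branch in `rolesOf`, which is exactly why the thread concludes JAAS is not a portable integration point.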
> Arun > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From ssilvert at redhat.com Thu Jun 5 14:01:33 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Thu, 05 Jun 2014 14:01:33 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <53902E24.7050005@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538F7442.2070306@redhat.com> <53902E24.7050005@jboss.com> Message-ID: <5390B07D.2060102@redhat.com> I'm back from PTO now. Thanks to everyone for the excellent feedback. It sounds like the one thing we have broad agreement on is that we should at least ship the Keycloak adapters with WildFly. That way, the Web Console and other client apps can use Keycloak as their auth server if they want. I like Darran's suggestion to go ahead and integrate the adapters as a first task. It should keep me busy for awhile. We can keep thinking about Keycloak auth server integration in the mean time. So now with a narrower focus, I still have one problem. What are the requirements of a domain controller using Keycloak? More specifically, is it a requirement to be able to log into Web Console when the DC is the only thing running? I ask because if Web Console on DC is secured with Keycloak and it can't reach the Keycloak auth server then you can't log in. Maybe we already have this problem? Is it ever the case that the Web Console authenticates against an LDAP server? If so then you have the same problem if it can't reach the LDAP server. Stan On 6/5/2014 4:45 AM, Darran Lofthouse wrote: > > On 04/06/14 22:05, Jason Greene wrote: >> On Jun 4, 2014, at 2:32 PM, Bill Burke wrote: >> >>> >>> On 6/4/2014 1:23 PM, Jason Greene wrote: >>>> On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: >>>> >>>>>> Both the auth server and admin console are served from the same WAR. It >>>>>> should be possible to deploy this without using a WAR or servlets, but >>>>>> that is not planned for the initial WildFly integration. 
Because of >>>>>> this current limitation, the auth server and admin console will not be >>>>>> present in a domain controller. >>>>> This is going against the current design of AS7/WildFly exposing >>>>> management related operations over the management interface and leaving >>>>> the web container to be purely about a user's deployments. >>>> Sorry for my delayed reply. I hadn't had a chance to read the full thread. >>>> >>>> My understanding of the original and still current goal of Keycloak is to be more of an appliance, and also largely independent of WildFly. >>>> >>>> From that perspective, I don't think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It's fine to have KeyCloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user. >>>> >>>> So a typical topology, based on the factors I am aware of, would look like this:
>>>>
>>>>
>>>>                +------+     Auth       +----------+
>>>>                |      +---------------->          |
>>>>                |  DC  |                | Keycloak |
>>>>           +----+      +----+           |          |
>>>>           |    +------+    |           +----------+
>>>>           |                |
>>>>       +---v--+          +--v---+
>>>>       |      |          |      |
>>>>       |  HC  |          |  HC  |
>>>>     +-+      +-+      +-+      +-+
>>>>     | +--+---+ |      | +--+---+ |
>>>>     |    |     |      |    |     |
>>>>    +v-+ +v-+ +-v+    +v-+ +v-+ +-v+
>>>>    |S1| |S2| |S3|    |S4| |S5| |S6|
>>>>    +--+ +--+ +--+    +--+ +--+ +--+
>>>> >>>> Each box represents a different JVM, running potentially on separate hardware. >>>> >>>> So from the architecture, the key element we need is for the DC (and standalone server) to come pre-bundled with a client that can talk to the Keycloak black box (whether it be WildFly or a fat jar or whatever). I assume this mostly amounts to OAuth communication. >>>> >>>> Now, as to why I don't think embedding as it is makes a lot of sense: it wouldn't really be a tightly integrated component, but rather two distinct systems duct-taped together. We would have: >>>> >>>> 1. Multiple distinct management consoles >>>> 2. Multiple distinct management APIs >>>> 3. Multiple distinct management protocols >>>> 4. Multiple distinct CLI/tools >>>> >>>> There are of course ways to paper over this and shove them together, but you end up with leaky abstractions. Let's say the CLI could issue REST operations against Keycloak as well. That's great, but it means things like the compensating transaction model don't let you mix management changes with Keycloak changes. >>>> >>>> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process, might not want to put up with us requesting similarly conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. >>>> >>> Jason, >>> >>> I think we should first get Keycloak to secure WildFly in standalone >>> mode or with a domain controller. In both cases the WildFly console >>> should be securable by Keycloak. I'm betting that a lot of these issues >>> will flesh out and become much clearer on how to solve. >> Certainly agree there.
> +1 This is what I was trying to say in a reply to Stan earlier: getting > to the point where we can enable Keycloak-based authentication for the > HTTP management interface in standalone mode and in domain mode sounds > like the ideal starting point. > > For one, in itself it is a complete, deliverable task that provides a > complete set of functionality, and it completely removes any obstacle > for those that wish to use KeyCloak instead of the standard HTTP > mechanisms. > > As a second task we can then review how a default bundling with KeyCloak > could be provided, either enabled by default or enableable - but > hopefully you can see from some of the messages here that providing the > complete solution has a lot of issues that need to be resolved. > >>> Regardless of the WildFly team vetoing the inclusion of Keycloak, it >>> is a very important use case for us to be able to be embedded, to >>> secure WildFly, and to manage security for WildFly. >>> >>> We have already learned a lot by being embedded with Aerogear UPS as >>> their security console and solution. For example, Keycloak now has >>> pluggable themes/skins for its entire UI: admin console, >>> login pages, etc. This has allowed Keycloak to be branded as an >>> Aerogear subsystem, and it looks like one product. >> I don't think anyone has vetoed anything. I have just highlighted the challenges. They aren't insurmountable, but they would require some effort to solve. We could, for example, have management operation wrappers which trigger the appropriate actions in Keycloak; this could solve the CLI problems I mentioned and allow the admin console to do cross-system interactions. Some of the other issues I don't have a clear idea on, but some thinking might come up with something.
> Please don't feel like anything is being vetoed - if we were vetoing > anything we would be coming back with lines like project elytron is well > underway, you are going to be interfacing with existing implementations > that we know are changing, discussing KeyCloak today is a time drain, > etc. > > Personally I want to see KeyCloak in for authentication as soon as > possible; it is going to be representative of the approaches we must be > able to support with the wildfly-elytron work, and as Stan says, having a > testable existing implementation to compare against will provide us a > lot of benefits in this area. > > But for the complete solution I think we have a lot more issues to > solve; application server development has progressed a long way > since we effectively just had a standalone mode server - everything we > do we now need to consider in both standalone mode and domain mode. We > have also had a lot of input from the security response team, and the > current design constraints we operate in for our out of the box offering > are based on a lot of discussion with them as well as other interested > parties focussed on the developer experience. > > One other aspect I experience when it comes to security is that if you take > the simple problem first and solve that, adding a solution for the > complex problem becomes much harder. And then finally, let's say we add a > full standalone solution to the WildFly codebase today and leave domain > mode to be handled second: we risk reaching a point, if domain mode is > not ready, where either Jason has to release an app server with domain > mode behaving differently to standalone mode or the release has to be > held up. > > So my preference here is we identify the task that we can deliver in > its entirety and look at getting authentication working for both > standalone and domain mode, and then look at the default inclusion as a > second step.
This will give us something that can be documented, used, > demoed, blogged about, etc. The second stage would then be removing > some of the manual installation tasks a user would need to perform, but > in the first stage we would have reached the major milestone of KeyCloak > being usable for authentication when managing WildFly. > >> >>> -- >>> Bill Burke >>> JBoss, a division of Red Hat >>> http://bill.burkecentral.com >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> -- >> Jason T. Greene >> WildFly Lead / JBoss EAP Platform Architect >> JBoss, a division of Red Hat >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From darran.lofthouse at jboss.com Thu Jun 5 14:15:06 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Thu, 05 Jun 2014 19:15:06 +0100 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <5390B07D.2060102@redhat.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538F7442.2070306@redhat.com> <53902E24.7050005@jboss.com> <5390B07D.2060102@redhat.com> Message-ID: <5390B3AA.8020302@jboss.com> In that case, yes, they do need to be able to log into the admin console on a DC that is not running any servers; as you say, if the LDAP server is down they cannot connect (except by using local auth, if enabled) - but in this case, is there anything stopping them installing Keycloak on a completely independent WildFly installation? i.e. the security infrastructure is independent from the app server installation it is securing. Regards, Darran Lofthouse.
On 05/06/14 19:01, Stan Silvert wrote: > I'm back from PTO now. Thanks to everyone for the excellent feedback. > > It sounds like the one thing we have broad agreement on is that we > should at least ship the Keycloak adapters with WildFly. That way, the > Web Console and other client apps can use Keycloak as their auth server > if they want. > > I like Darran's suggestion to go ahead and integrate the adapters as a > first task. It should keep me busy for awhile. We can keep thinking > about Keycloak auth server integration in the mean time. > > So now with a narrower focus, I still have one problem. What are the > requirements of a domain controller using Keycloak? More specifically, > is it a requirement to be able to log into Web Console when the DC is > the only thing running? > > I ask because if Web Console on DC is secured with Keycloak and it can't > reach the Keycloak auth server then you can't log in. Maybe we already > have this problem? Is it ever the case that the Web Console > authenticates against an LDAP server? If so then you have the same > problem if it can't reach the LDAP server. > > Stan > > > On 6/5/2014 4:45 AM, Darran Lofthouse wrote: >> >> On 04/06/14 22:05, Jason Greene wrote: >>> On Jun 4, 2014, at 2:32 PM, Bill Burke wrote: >>> >>>> >>>> On 6/4/2014 1:23 PM, Jason Greene wrote: >>>>> On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: >>>>> >>>>>>> Both the auth server and admin console are served from the same WAR. It >>>>>>> should be possible to deploy this without using a WAR or servlets, but >>>>>>> that is not planned for the initial WildFly integration. Because of >>>>>>> this current limitation, the auth server and admin console will not be >>>>>>> present in a domain controller. >>>>>> This is going against the current design of AS7/WildFly exposing >>>>>> management related operations over the management interface and leaving >>>>>> the web container to be purely about a users deployments. >>>>> Sorry for my delayed reply. 
I hadn?t had a chance to read the full thread. >>>>> >>>>> My understanding of the original and still current goal of key cloak is to be more of an appliance, and also largely independent of WildFly. >>>>> >>>>> From that perspective, I don?t think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It?s fine to have KeyCloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user. >>>>> >>>>> So a typical topology, based on the factors I am aware of would look like this: >>>>> >>>>> ?????????????????????????????????????????????????????? >>>>> ?????????????????????????????????????????????????????? >>>>> ???????????????+------+?????Auth???????+----------+??? >>>>> ???????????????|??????+---------------->??????????|??? >>>>> ???????????????|??DC??|????????????????|?Keycloak?|??? >>>>> ??????????+----+??????+----+???????????|??????????|??? >>>>> ??????????|????+------+????|???????????+----------+??? >>>>> ??????????|????????????????|?????????????????????????? >>>>> ??????+---v--+??????????+--v---+?????????????????????? >>>>> ??????|??????|??????????|??????|?????????????????????? >>>>> ??????|??HC??|??????????|??HC??|?????????????????????? >>>>> ????+-+??????+-+??????+-+??????+-+???????????????????? >>>>> ????|?+--+---+?|??????|?+--+---+?|???????????????????? >>>>> ????|????|?????|??????|????|?????|???????????????????? >>>>> ???+v-+?+v-+?+-v+????+v-+?+v-+?+-v+??????????????????? >>>>> ???|S1|?|S2|?|S3|????|S4|?|S5|?|S6|??????????????????? >>>>> ???+--+?+--+?+--+????+--+?+--+?+--+??????????????????? >>>>> ?????????????????????????????????????????????????????? >>>>> >>>>> Each box represents a different JVM running potentially on separate hardware. 
>>>>> >>>>> So from the architecture the key element we need is for the DC (and standalone server) to come pre bundled with a client that can talk to the Keycloak blackbox (whether it be WildFly or fat jar or whatever). I assume this mostly amounts to OAUTH communication. >>>>> >>>>> Now as to why I don?t think embedding as it is makes a lot of sense, is because it wouldn?t really be a tightly integrated component, but rather two distinct systems duct taped together. We would have: >>>>> >>>>> 1. Multiple distinct management consoles >>>>> 2. Multiple distinct management APIs >>>>> 3. Multiple distinct management protocols >>>>> 4. Multiple distinct CLI/tools >>>>> >>>>> There is of course ways to paper over this and shove them together but you end up with leaky abstractions. Like lets say the CLI could issue REST operations against Keycloak as well. Thats great but that means things like the compensating transaction model don?t let you mix management changes with keycloak changes. >>>>> >>>>> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. >>>>> >>>> Jason, >>>> >>>> I think we should first get Keycloak to secure Wildfly in standalone >>>> mode or with a domain controller. In both cases the Wildfly console >>>> should be securable by Keycloak. I'm betting that a lot of these issues >>>> will flesh out and become much clearer on how to solve. >>> Certainly agree there. >> +1 This is what I was trying to say in a reply to Stan earlier, getting >> to the point where we can enable keycloak based authentication for the >> http management interface in standalone mode and in domain mode sounds >> like the ideal starting point. 
>> >> For one in itself it is a complete deliverable task that provides a >> complete set of functionality and it completely removes any obstacle >> from those that wish to use KeyCloak instead of the standard HTTP >> mechanisms. >> >> As a second task we can then review how a default bundling with KeyCloak >> could be provided either enabled by default or enableable - but >> hopefully you can see from some of the messages here providing the >> complete solution has a lot of issues that need to be resolved. >> >>>> Irregardless of the Wildfly team vetoing the inclusion of keycloak, it >>>> is a very important use case for us to be able to be embbeded and to >>>> secure Wildfly and to manage security for Wildfly. >>>> >>>> We have already learned a lot by being embedded with Aerogear UPS as >>>> their security console and solution. For example, keycloak now has >>>> pluggable themes/skins themes/skins for its entire UI: admin console, >>>> login pages, etc. This has allowed Keycloak to be branded as an >>>> Aerogear subsystem and it looks like one product. >>> I don?t think anyone has veto?d anything. I have just highlighted the challenges. They aren?t insurmountable but they would require some effort to solve. We could for example have management operation wrappers which trigger the appropriate actions in key cloak, and this could solve the CLI problems I mentioned, and allow for the admin console to do cross system interactions. Some of the other issues I don?t have a clear idea on, but some thinking might come up with something. >> Please don't feel like anything is bein veto'd - if we were vetoing >> anything we would be coming back with lines like project elytron is well >> underway, you are going to be interfacing with existing implementations >> that we know are changing, discussing KeyCloak today is a time drain >> etc.... 
>> >> Personally I want to see KeyCloak in for authentication as soon as >> possible, it is going to be representative of the approaches we must be >> able to support with the wildfly-elytron work and as Stan says having a >> testable existing implementation to compare against will provide us a >> lot of benefits in this area. >> >> But for the complete solution I think we have a lot more issues to >> solve, the application server development has progressed a long way >> since we effectively just had a standalone mode server - everything we >> do we now need to consider both standalone mode and domain mode. We >> have also had a lot of input from the security response team and the >> current design constraints we operate in for our out of the box offering >> is based on a lot of discussion with them as well as other interested >> parties focussed on the developer experience. >> >> One other aspect I experience when it comes to security is if you take >> the simple problem first and solve that adding a solution for the >> complex problem becomes much harder. And then finally lets say we add a >> full standalone solution to the WildFly codebase today and leave domain >> mode to be handled second, we risk reaching a point if domain mode is >> not ready that either Jason has to release an app server with domain >> mode behaving differently to standalone mode or the release has to be >> held up. >> >> So my preference here is we identify the task that we can deliver in >> it's entirety and look at getting authentication working for both >> standalone and domain mode and then look at the default inclusion as a >> second step. This will give use something that can be documented, used, >> demoed, blogged about etc... The second stage would then be removing >> some of the manual installation tasks a user would need to perform but >> in the first stage we would have reached the major milestone of KeyCloak >> being usable for authentication when managing WildFly. 
>> >>> >>>> -- >>>> Bill Burke >>>> JBoss, a division of Red Hat >>>> http://bill.burkecentral.com >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> -- >>> Jason T. Greene >>> WildFly Lead / JBoss EAP Platform Architect >>> JBoss, a division of Red Hat >>> >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From ssilvert at redhat.com Thu Jun 5 14:21:07 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Thu, 05 Jun 2014 14:21:07 -0400 Subject: [wildfly-dev] Keycloak SSO in WildFly 9 In-Reply-To: <5390B3AA.8020302@jboss.com> References: <538E09F6.6080701@redhat.com> <538E1314.2030101@jboss.com> <538F7442.2070306@redhat.com> <53902E24.7050005@jboss.com> <5390B07D.2060102@redhat.com> <5390B3AA.8020302@jboss.com> Message-ID: <5390B513.1030907@redhat.com> On 6/5/2014 2:15 PM, Darran Lofthouse wrote: > i.e. the security infrastructure is independent from the app server > installation it is securing. I think that answers my question. Sounds like this is a non-issue. > > Regards, > Darran Lofthouse. > > > On 05/06/14 19:01, Stan Silvert wrote: >> I'm back from PTO now. Thanks to everyone for the excellent feedback. >> >> It sounds like the one thing we have broad agreement on is that we >> should at least ship the Keycloak adapters with WildFly. That way, the >> Web Console and other client apps can use Keycloak as their auth server >> if they want. 
>> >> I like Darran's suggestion to go ahead and integrate the adapters as a >> first task. It should keep me busy for awhile. We can keep thinking >> about Keycloak auth server integration in the mean time. >> >> So now with a narrower focus, I still have one problem. What are the >> requirements of a domain controller using Keycloak? More specifically, >> is it a requirement to be able to log into Web Console when the DC is >> the only thing running? >> >> I ask because if Web Console on DC is secured with Keycloak and it can't >> reach the Keycloak auth server then you can't log in. Maybe we already >> have this problem? Is it ever the case that the Web Console >> authenticates against an LDAP server? If so then you have the same >> problem if it can't reach the LDAP server. >> >> Stan >> >> >> On 6/5/2014 4:45 AM, Darran Lofthouse wrote: >>> On 04/06/14 22:05, Jason Greene wrote: >>>> On Jun 4, 2014, at 2:32 PM, Bill Burke wrote: >>>> >>>>> On 6/4/2014 1:23 PM, Jason Greene wrote: >>>>>> On Jun 3, 2014, at 1:25 PM, Darran Lofthouse wrote: >>>>>> >>>>>>>> Both the auth server and admin console are served from the same WAR. It >>>>>>>> should be possible to deploy this without using a WAR or servlets, but >>>>>>>> that is not planned for the initial WildFly integration. Because of >>>>>>>> this current limitation, the auth server and admin console will not be >>>>>>>> present in a domain controller. >>>>>>> This is going against the current design of AS7/WildFly exposing >>>>>>> management related operations over the management interface and leaving >>>>>>> the web container to be purely about a users deployments. >>>>>> Sorry for my delayed reply. I hadn?t had a chance to read the full thread. >>>>>> >>>>>> My understanding of the original and still current goal of key cloak is to be more of an appliance, and also largely independent of WildFly. 
>>>>>> >>>>>> From that perspective, I don?t think embedding Keycloak solely to be in the same VM makes a lot of sense (more details as to why follow). It?s fine to have KeyCloak running on a WildFly instance (either as a subsystem or a deployment), but to me this seems to be a bit more of a black box to the user. >>>>>> >>>>>> So a typical topology, based on the factors I am aware of would look like this: >>>>>> >>>>>> ?????????????????????????????????????????????????????? >>>>>> ?????????????????????????????????????????????????????? >>>>>> ???????????????+------+?????Auth???????+----------+??? >>>>>> ???????????????|??????+---------------->??????????|??? >>>>>> ???????????????|??DC??|????????????????|?Keycloak?|??? >>>>>> ??????????+----+??????+----+???????????|??????????|??? >>>>>> ??????????|????+------+????|???????????+----------+??? >>>>>> ??????????|????????????????|?????????????????????????? >>>>>> ??????+---v--+??????????+--v---+?????????????????????? >>>>>> ??????|??????|??????????|??????|?????????????????????? >>>>>> ??????|??HC??|??????????|??HC??|?????????????????????? >>>>>> ????+-+??????+-+??????+-+??????+-+???????????????????? >>>>>> ????|?+--+---+?|??????|?+--+---+?|???????????????????? >>>>>> ????|????|?????|??????|????|?????|???????????????????? >>>>>> ???+v-+?+v-+?+-v+????+v-+?+v-+?+-v+??????????????????? >>>>>> ???|S1|?|S2|?|S3|????|S4|?|S5|?|S6|??????????????????? >>>>>> ???+--+?+--+?+--+????+--+?+--+?+--+??????????????????? >>>>>> ?????????????????????????????????????????????????????? >>>>>> >>>>>> Each box represents a different JVM running potentially on separate hardware. >>>>>> >>>>>> So from the architecture the key element we need is for the DC (and standalone server) to come pre bundled with a client that can talk to the Keycloak blackbox (whether it be WildFly or fat jar or whatever). I assume this mostly amounts to OAUTH communication. 
>>>>>> >>>>>> Now as to why I don?t think embedding as it is makes a lot of sense, is because it wouldn?t really be a tightly integrated component, but rather two distinct systems duct taped together. We would have: >>>>>> >>>>>> 1. Multiple distinct management consoles >>>>>> 2. Multiple distinct management APIs >>>>>> 3. Multiple distinct management protocols >>>>>> 4. Multiple distinct CLI/tools >>>>>> >>>>>> There is of course ways to paper over this and shove them together but you end up with leaky abstractions. Like lets say the CLI could issue REST operations against Keycloak as well. Thats great but that means things like the compensating transaction model don?t let you mix management changes with keycloak changes. >>>>>> >>>>>> Another issue is that WildFly has some pretty strict backwards compatibility contracts with regards to management that stem from EAP. Keycloak, at this stage of the process might not want to put up with us requesting similar conservative governance. It might be better for us to limit the API dependencies to best enable the project to continue to evolve. >>>>>> >>>>> Jason, >>>>> >>>>> I think we should first get Keycloak to secure Wildfly in standalone >>>>> mode or with a domain controller. In both cases the Wildfly console >>>>> should be securable by Keycloak. I'm betting that a lot of these issues >>>>> will flesh out and become much clearer on how to solve. >>>> Certainly agree there. >>> +1 This is what I was trying to say in a reply to Stan earlier, getting >>> to the point where we can enable keycloak based authentication for the >>> http management interface in standalone mode and in domain mode sounds >>> like the ideal starting point. >>> >>> For one in itself it is a complete deliverable task that provides a >>> complete set of functionality and it completely removes any obstacle >>> from those that wish to use KeyCloak instead of the standard HTTP >>> mechanisms. 
>>> >>> As a second task we can then review how a default bundling with KeyCloak >>> could be provided either enabled by default or enableable - but >>> hopefully you can see from some of the messages here providing the >>> complete solution has a lot of issues that need to be resolved. >>> >>>>> Irregardless of the Wildfly team vetoing the inclusion of keycloak, it >>>>> is a very important use case for us to be able to be embbeded and to >>>>> secure Wildfly and to manage security for Wildfly. >>>>> >>>>> We have already learned a lot by being embedded with Aerogear UPS as >>>>> their security console and solution. For example, keycloak now has >>>>> pluggable themes/skins themes/skins for its entire UI: admin console, >>>>> login pages, etc. This has allowed Keycloak to be branded as an >>>>> Aerogear subsystem and it looks like one product. >>>> I don?t think anyone has veto?d anything. I have just highlighted the challenges. They aren?t insurmountable but they would require some effort to solve. We could for example have management operation wrappers which trigger the appropriate actions in key cloak, and this could solve the CLI problems I mentioned, and allow for the admin console to do cross system interactions. Some of the other issues I don?t have a clear idea on, but some thinking might come up with something. >>> Please don't feel like anything is bein veto'd - if we were vetoing >>> anything we would be coming back with lines like project elytron is well >>> underway, you are going to be interfacing with existing implementations >>> that we know are changing, discussing KeyCloak today is a time drain >>> etc.... >>> >>> Personally I want to see KeyCloak in for authentication as soon as >>> possible, it is going to be representative of the approaches we must be >>> able to support with the wildfly-elytron work and as Stan says having a >>> testable existing implementation to compare against will provide us a >>> lot of benefits in this area. 
>>> >>> But for the complete solution I think we have a lot more issues to >>> solve, the application server development has progressed a long way >>> since we effectively just had a standalone mode server - everything we >>> do we now need to consider both standalone mode and domain mode. We >>> have also had a lot of input from the security response team and the >>> current design constraints we operate in for our out of the box offering >>> is based on a lot of discussion with them as well as other interested >>> parties focussed on the developer experience. >>> >>> One other aspect I experience when it comes to security is if you take >>> the simple problem first and solve that adding a solution for the >>> complex problem becomes much harder. And then finally lets say we add a >>> full standalone solution to the WildFly codebase today and leave domain >>> mode to be handled second, we risk reaching a point if domain mode is >>> not ready that either Jason has to release an app server with domain >>> mode behaving differently to standalone mode or the release has to be >>> held up. >>> >>> So my preference here is we identify the task that we can deliver in >>> it's entirety and look at getting authentication working for both >>> standalone and domain mode and then look at the default inclusion as a >>> second step. This will give use something that can be documented, used, >>> demoed, blogged about etc... The second stage would then be removing >>> some of the manual installation tasks a user would need to perform but >>> in the first stage we would have reached the major milestone of KeyCloak >>> being usable for authentication when managing WildFly. >>> >>>>> -- >>>>> Bill Burke >>>>> JBoss, a division of Red Hat >>>>> http://bill.burkecentral.com >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> -- >>>> Jason T. 
Greene >>>> WildFly Lead / JBoss EAP Platform Architect >>>> JBoss, a division of Red Hat >>>> >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From arun.gupta at gmail.com Thu Jun 5 15:38:34 2014 From: arun.gupta at gmail.com (Arun Gupta) Date: Thu, 5 Jun 2014 15:38:34 -0400 Subject: [wildfly-dev] Patching from previous versions ? In-Reply-To: <53907BF1.8010003@redhat.com> References: <53907BF1.8010003@redhat.com> Message-ID: On Thu, Jun 5, 2014 at 10:17 AM, Brian Stansberry wrote: > On 6/5/14, 5:54 AM, Arun Gupta wrote: >> When 8.2 becomes available, will there be a patch available that will >> allow to upgrade from 8.1 only or 8.0 as well ? >> >> And similarly for future versions, will the patch only be available >> from the previous version or all previous major/minor versions ? >> > > My 2 cents. > > I don't see us supporting updating across major versions. Even if by > some chance we could, I doubt we'd want to set that precedent. So update will be only for minor versions ? > > As for updating from 8.0 to 8.2 in one update file, that's technically > possible, but I'm not sure it's worth the effort. An update for 8.0->8.2 > would essentially consist of the 8.0->8.1 update and then the 8.1->8.2 > update packaged in the same file. That would be a really big file.
I think asking the users to update from 8.0 -> 8.1 and then to 8.2 is reasonable. But is that the current thought process? Arun > >> Arun >> > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- http://blog.arungupta.me http://twitter.com/arungupta From brian.stansberry at redhat.com Thu Jun 5 15:45:35 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 05 Jun 2014 14:45:35 -0500 Subject: [wildfly-dev] Patching from previous versions ? In-Reply-To: References: <53907BF1.8010003@redhat.com> Message-ID: <5390C8DF.8020703@redhat.com> On 6/5/14, 2:38 PM, Arun Gupta wrote: > On Thu, Jun 5, 2014 at 10:17 AM, Brian Stansberry > wrote: >> On 6/5/14, 5:54 AM, Arun Gupta wrote: >>> When 8.2 becomes available, will there be a patch available that will >>> allow to upgrade from 8.1 only or 8.0 as well ? >>> >>> And similarly for future versions, will the patch only be available >>> from the previous version or all previous major/minor versions ? >>> >> >> My 2 cents. >> >> I don't see us supporting updating across major versions. Even if by >> some chance we could, I doubt we'd want to set that precedent. > > So update will be only for minor versions ? > >> >> As for updating from 8.0 to 8.2 in one update file, that's technically >> possible, but I'm not sure it's worth the effort. An update for 8.0->8.2 >> would essentially consist of the 8.0->8.1 update and then the 8.1->8.2 >> update packaged in the same file. That would be a really big file. > > I think asking the users to update from 8.0 -> 8.1 and then to 8.2 is > reasonable. But is that the current thought process ? > AFAIK this thread *is* the current thought process. :) So you and I have stated opinions, which is a good start. In the end, the component lead (Emanuel Muckenhuber) and Jason will decide.
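For reference, the in-place update flow being discussed here is driven by the CLI patch command mentioned in the 8.1.0.Final announcement; a session might look roughly like the following (the paths are illustrative, and exact flags may vary by version):

    $ bin/jboss-cli.sh --connect
    [standalone@localhost:9990 /] patch apply /path/to/wildfly-8.1.0.Final.patch.zip
    [standalone@localhost:9990 /] patch info
    [standalone@localhost:9990 /] patch rollback --patch-id=<id> --reset-configuration=false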
I'm highly confident that updates will only be for minor versions though. At least for now. Maybe someday that will change. > Arun > >> >>> Arun >>> >> >> >> -- >> Brian Stansberry >> Senior Principal Software Engineer >> JBoss by Red Hat >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From jason.greene at redhat.com Thu Jun 5 17:41:03 2014 From: jason.greene at redhat.com (Jason Greene) Date: Thu, 5 Jun 2014 16:41:03 -0500 Subject: [wildfly-dev] Patching from previous versions ? In-Reply-To: <5390C8DF.8020703@redhat.com> References: <53907BF1.8010003@redhat.com> <5390C8DF.8020703@redhat.com> Message-ID: <4ECD86A2-14B5-4BB5-9EA5-8067B51A994F@redhat.com> On Jun 5, 2014, at 2:45 PM, Brian Stansberry wrote: >> I think asking the users to update from 8.0 -> 8.1 and then to 8.2 is >> reasonable. But is that the current thought process ? >> > > AFAIK this thread *is* the current thought process. :) So you and I have > stated opinions, which is a good start. In the end, the component lead > (Emanuel Muckenhuber) and Jason will decide. Emanuel wrote a facility that would allow for us to ship both together, but I think that would require that the future class diff algorithm be very efficient. I think it's best to assume that we won't. > > I'm highly confident that updates will only be for minor versions > though. At least for now. Maybe someday that will change. Yes, I think the big issue is that we don't want never-ending patch build-up (even though you can expunge old patches), so it makes the most sense to start fresh at major intervals. I think we should leave the door open to the idea for 9 though, even if it is just a tiny crack that is open :) -- Jason T.
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From tomaz.cerar at gmail.com Fri Jun 6 05:09:32 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Fri, 6 Jun 2014 11:09:32 +0200 Subject: [wildfly-dev] Maven upgrade coming Message-ID: Hey guys, Just a bit of a heads-up: we are moving the build to require Maven 3.2.1. This will bring us the much-awaited feature of exclusion of transitive dependencies. For more, see http://maven.apache.org/docs/3.2.1/release-notes.html If you don't use the Maven provided in the source tree, you should upgrade, as you won't be able to build any more as soon as the PR is merged. -- tomaz -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140606/1735303e/attachment.html From emuckenh at redhat.com Fri Jun 6 05:19:52 2014 From: emuckenh at redhat.com (Emanuel Muckenhuber) Date: Fri, 06 Jun 2014 11:19:52 +0200 Subject: [wildfly-dev] Patching from previous versions ? In-Reply-To: <4ECD86A2-14B5-4BB5-9EA5-8067B51A994F@redhat.com> References: <53907BF1.8010003@redhat.com> <5390C8DF.8020703@redhat.com> <4ECD86A2-14B5-4BB5-9EA5-8067B51A994F@redhat.com> Message-ID: <539187B8.5070306@redhat.com> On 05/06/14 23:41, Jason Greene wrote: > > On Jun 5, 2014, at 2:45 PM, Brian Stansberry wrote: > >>> I think asking the users to update from 8.0 -> 8.1 and then to 8.2 is >>> reasonable. But is that the current thought process ? >>> >> >> AFAIK this thread *is* the current thought process. :) So you and I have >> stated opinions, which is a good start. In the end, the component lead >> (Emanuel Muckenhuber) and Jason will decide. > > Emanuel wrote a facility that would allow for us to ship both together, but I think that would require that the future class diff algorithm be very efficient. I think it's best to assume that we won't. > >> >> I'm highly confident that updates will only be for minor versions >> though. At least for now.
Maybe someday that will change. > > Yes, I think the big issue is that we don't want never-ending patch build-up (even though you can expunge old patches), so it makes the most sense to start fresh at major intervals. I think we should leave the door open to the idea for 9 though, even if it is just a tiny crack that is open :) > Yeah, I think we do have more flexibility in what we do for WFLY 9, since we were quite limited in what we were able to do with 8. Emanuel From arjan.tijms at gmail.com Fri Jun 6 12:11:42 2014 From: arjan.tijms at gmail.com (arjan tijms) Date: Fri, 6 Jun 2014 18:11:42 +0200 Subject: [wildfly-dev] getRequestURI returns welcome file instead of original request Message-ID: Hi, I noticed there's a difference in behaviour between JBossWeb/Tomcat and Undertow with respect to welcome files. Given a request to / and a welcome file set to /index, JBossWeb will return "/" when HttpServletRequest#getRequestURI is called, and "/index" when HttpServletRequest#getServletPath is called. Undertow will return "/index" in both cases. It's clear what happens by looking at ServletInitialHandler#handleRequest, which does a full rewrite for welcome files:

exchange.setRelativePath(exchange.getRelativePath() + info.getRewriteLocation());
exchange.setRequestURI(exchange.getRequestURI() + info.getRewriteLocation());
exchange.setRequestPath(exchange.getRequestPath() + info.getRewriteLocation());

The Servlet spec (10.10) does seem to justify this somewhat by saying the following: "The container may send the request to the welcome resource with a forward, a redirect, or a container specific mechanism that is indistinguishable from a direct request." However, the JavaDoc for HttpServletRequest#getRequestURI doesn't seem to allow this. In any case, it's a nasty difference that breaks various things. Wonder what the general opinion is about this. Was it a conscious decision to do a full rewrite in Undertow, or was it something that slipped through?
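For context, the welcome-file mapping described above is declared in web.xml along these lines (an illustrative fragment, not taken from any particular application):

    <web-app>
        <welcome-file-list>
            <welcome-file>index</welcome-file>
        </welcome-file-list>
    </web-app>

With such a mapping, a request to "/" is served by "/index"; the question is what getRequestURI() should then report.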
Kind regards, Arjan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140606/53e98e5c/attachment.html From stuart.w.douglas at gmail.com Fri Jun 6 13:28:34 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Fri, 06 Jun 2014 12:28:34 -0500 Subject: [wildfly-dev] getRequestURI returns welcome file instead of original request In-Reply-To: References: Message-ID: <5391FA42.1000408@gmail.com> > > "The container may send the request to the welcome resource with a > forward, a redirect, or a container specific mechanism that is > indistinguishable from a direct request." It was because of this that I decided to do a full rewrite, which comes under the 'indistinguishable from a direct request' part. > > However, the JavaDoc for HttpServletRequest#getRequestURI doesn't seem > to allow this. What makes you say that? > > At any length, it's a nasty difference that breaks various things. > > Wonder what the general opinion is about this. Was it a conscious > decision to do a full rewrite in Undertow, or was it something that > slipped through? In general I have been trying to match JBossWeb's functionality for under-specified behaviour such as this. If you file a JIRA I will look at changing it in 1.1.
Stuart > > Kind regards, > Arjan > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From arjan.tijms at gmail.com Fri Jun 6 15:04:52 2014 From: arjan.tijms at gmail.com (arjan tijms) Date: Fri, 6 Jun 2014 21:04:52 +0200 Subject: [wildfly-dev] getRequestURI returns welcome file instead of original request In-Reply-To: <5391FA42.1000408@gmail.com> References: <5391FA42.1000408@gmail.com> Message-ID: Hi, On Fri, Jun 6, 2014 at 7:28 PM, Stuart Douglas wrote: > >> "The container may send the request to the welcome resource with a >> forward, a redirect, or a container specific mechanism that is >> indistinguishable from a direct request." >> > > It was because of this that I decided to do a full rewrite, which comes > under the 'indistinguishable from a direct request' part. I can understand that. It's a troublesome section and it makes it very hard to support welcome pages in a universal way (to have an application that directly runs on multiple containers). A forward in the Servlet spec is really a different kind of request with respect to filters etc. > However, the JavaDoc for HttpServletRequest#getRequestURI doesn't seem >> to allow this. >> > > What makes you say that? It's more of an interpretation, but the JavaDoc seems to say that it returns the request as it was done by the client. This information is important, since if internal rewrites take place you need this information to render (postback) links that don't expose the name of the rewritten resource. getServletPath() seems to leave a little bit of room to allow it to point to a rewritten resource, as it talks more about the Servlet that is invoked. But I realize that my interpretation is subjective. > In general I have been trying to match jboss web's functionality for under >> specified behaviour such as this. >> > > If you file a JIRA I will look at changing it in 1.1. > Sure! 
Thanks in advance. Btw, after some fiddling I found that with a handler containing the following code the JBossWeb behavior can be restored:

public class RequestURIHandler implements HttpHandler {

    private HttpHandler next;

    public RequestURIHandler(HttpHandler next) {
        this.next = next;
    }

    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        String requestURI = exchange.getRequestURI();
        next.handleRequest(exchange);
        exchange.setRequestURI(requestURI);
    }
}

Which I then register as an initial handler:

public class UndertowHandlerExtension implements ServletExtension {
    @Override
    public void handleDeployment(final DeploymentInfo deploymentInfo, final ServletContext servletContext) {
        deploymentInfo.addInitialHandlerChainWrapper(handler -> new RequestURIHandler(handler));
    }
}

This seems to work. Is this the correct way? Kind regards, Arjan -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140606/1ce642b6/attachment-0001.html From sebastian.laskawiec at gmail.com Sun Jun 8 03:34:21 2014 From: sebastian.laskawiec at gmail.com (=?UTF-8?Q?Sebastian_=C5=81askawiec?=) Date: Sun, 8 Jun 2014 09:34:21 +0200 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: References: <537D51A9.7090803@redhat.com> <538E3130.4060905@redhat.com> Message-ID: Hi Tomaz Thanks for the hints! I created a separate repository with the proper group id. I also replaced the JBoss logo with the WildFly one and corrected the code packages. Everything can be found here: https://github.com/altanis/wildfly-jmx-console Is it possible to release this WAR file into some publicly available repository? Best regards Sebastian 2014-06-04 16:08 GMT+02:00 Tomaž Cerar : > In any case it cannot be org.jboss.* > it can be org.wildfly. > > Looking through the rebased code it is still a war application depending on > servlet container to be present.
> Taking that into consideration, this cannot be part of our main > codebase/distribution, but having it as an external add-on project sounds fine. > > In this case I would go for org.wildfly.jmx-console as the groupId and > the artifact id based on the logical part of the artifact inside the project, > probably just jmx-console. > > Btw, your rebased project still imports Java EE 6 dependencies; given > WildFly is EE 7 now, it would be wise to upgrade that. > > -- > tomaz > > > On Wed, Jun 4, 2014 at 3:53 PM, Sebastian Łaskawiec < > sebastian.laskawiec at gmail.com> wrote: > >> Hi Brian >> >> I thought about: >> >> - *org.jboss* >> - org.jboss.as >> - org.wildfly >> >> artifact id: >> >> - wildfly-jmx-console >> - *jboss-jmx-console* >> >> and finally version: >> >> - start from scratch at 1.0.0-SNAPSHOT >> >> My preferences are org.jboss as the group id and jboss-jmx-console as the >> artifact id. What do you think, is it ok? >> >> Best regards >> Sebastian >> >> >> >> 2014-06-03 22:33 GMT+02:00 Brian Stansberry >> : >> >> Hi Sebastian, >>> >>> >>> On 6/1/14, 1:21 PM, Sebastian Łaskawiec wrote: >>> >>>> Hi Brian >>>> >>>> Thanks for the clarification and sorry for the late response. >>>> >>>> I created a Feature Request to expose the MBean server through the HTTP >>>> management interface: https://issues.jboss.org/browse/WFLY-3426 >>>> >>>> >>> Thanks. >>> >>> >>> It would be great to have the MBean server exposed via the Wildfly HTTP >>>> Management interface, but I know several teams which would like to have >>>> such functionality in JBoss AS 7. This is why I started looking at >>>> Darran's port to JMX console >>>> (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased >>>> it, detached from the Wildfly parent and pushed to my branch >>>> (https://github.com/altanis/wildfly/commits/jmx-console-ported). The >>>> same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. >>>> >>>> In my opinion it would be great to have this console available publicly.
>>>> Is it possible to make the WAR file available through JBoss Nexus >>>> (perhaps thirdparty-releases repository)? If it is, I'd squash all >>>> commits and push only jmx-console code into new github repository (to >>>> make it separate from Wildfly). >>>> >>>> >>> What Maven group were you wanting to use? That jmx-console-ported branch >>> has org.wildfly in the pom. >>> >>> Best regards >>>> Sebastian >>>> >>>> >>>> >>>> 2014-05-22 3:23 GMT+02:00 Brian Stansberry >>> >: >>>> >>>> >>>> I agree that if we exposed the mbean server over HTTP that it >>>> should be >>>> via a context on our HTTP management interface. Either that or >>>> expose >>>> mbeans as part of our standard management resource tree. That would >>>> make >>>> integration in the web console much more practical. >>>> >>>> I don't see us ever bringing back the AS5-style jmx-console.war that >>>> runs on port 8080 as part of the WildFly distribution. That would >>>> introduce a requirement for EE into our management infrastructure, >>>> and >>>> we won't do that. Management is part of WildFly core, and WildFly >>>> core >>>> does not require EE. If the Servlet-based jmx-console.war code >>>> linked >>>> from WFLY-1197 gets further developed, I see it as a community >>>> effort >>>> for people who want to install that on their own, not as something >>>> we'd >>>> distribute as part of WildFly itself. >>>> >>>> On 5/21/14, 7:37 AM, Sebastian Łaskawiec wrote: >>>> > Hi >>>> > >>>> > One of our projects is based on JBoss 5.1 and we are considering >>>> > migrating it to Wildfly. One of our problems is Web based JMX >>>> Console... >>>> > We have a pretty complicated production environment and a Web based >>>> JMX >>>> > console with basic Auth delegated to LDAP is the simplest >>>> solution for us. >>>> > >>>> > I noticed that there was a ticket opened for porting legacy JMX >>>> Console: >>>> > https://issues.jboss.org/browse/WFLY-1197.
>>>> > However I think it would be a much better idea to have this >>>> > functionality in the Web Administration console. In my opinion it >>>> would be >>>> > great to have it under "Runtime" in the "Status" submenu. >>>> > >>>> > What do you think about this idea? >>>> > >>>> > Best Regards >>>> > -- >>>> > Sebastian Łaskawiec >>>> > >>>> > >>>> > _______________________________________________ >>>> > wildfly-dev mailing list >>>> > wildfly-dev at lists.jboss.org >>>> >>>> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> > >>>> >>>> >>>> -- >>>> Brian Stansberry >>>> Senior Principal Software Engineer >>>> JBoss by Red Hat >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>>> >>>> >>>> >>>> -- >>>> Sebastian Łaskawiec >>>> >>> >>> >>> -- >>> Brian Stansberry >>> Senior Principal Software Engineer >>> JBoss by Red Hat >>> >> >> >> >> -- >> Sebastian Łaskawiec >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > -- Sebastian Łaskawiec -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140608/a2e55fc3/attachment.html From rory.odonnell at oracle.com Mon Jun 9 08:03:15 2014 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Mon, 09 Jun 2014 05:03:15 -0700 Subject: [wildfly-dev] Early Access builds for JDK 9 b16, JDK 8u20 b17 are available on java.net Message-ID: <5395A283.8000307@oracle.com> Hi Guys, Early Access builds for JDK 9 b16 and JDK 8u20 b17 are available on java.net. As we enter the later phases of development for JDK 8u20, please log any showstoppers as soon as possible.
Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140609/9ca601be/attachment.html From jperkins at redhat.com Mon Jun 9 13:37:26 2014 From: jperkins at redhat.com (James R. Perkins) Date: Mon, 09 Jun 2014 10:37:26 -0700 Subject: [wildfly-dev] WildFly Bootstrap(ish) Message-ID: <5395F0D6.7030902@redhat.com> For the wildfly-maven-plugin I've written a simple class to launch a process that starts WildFly. It also has a thin wrapper around the deployment builder to ease the deployment process. I've heard we've been asked a few times about possibly creating a Gradle plugin. As I understand it you can't use a maven plugin with Gradle. I'm considering creating a separate bootstrap(ish) type of project to simply launch WildFly from Java. Would anyone else find this useful? Or does anyone have any objections to this? -- James R. Perkins JBoss by Red Hat From sdouglas at redhat.com Mon Jun 9 18:38:55 2014 From: sdouglas at redhat.com (Stuart Douglas) Date: Mon, 09 Jun 2014 17:38:55 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) Message-ID: <5396377F.80003@redhat.com> Server suspend and resume is a feature that allows a running server to gracefully finish off all running requests. The most common use case for this is graceful shutdown, where you would like a server to complete all running requests, reject any new ones, and then shut down; however, there are also plenty of other valid use cases (e.g. suspend the server, modify a data source or some other config, then resume). User View: From the user's point of view two new operations will be added to the server: suspend(timeout) resume() A runtime-only attribute suspend-state (is this a good name?) will also be added, that can take one of three possible values, RUNNING, SUSPENDING, SUSPENDED.
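A hypothetical CLI interaction with these proposed operations might look like the following (the operation and attribute names are the ones proposed above; the syntax and output are purely illustrative):

    [standalone@localhost:9990 /] :suspend(timeout=60)
    [standalone@localhost:9990 /] :read-attribute(name=suspend-state)
    {
        "outcome" => "success",
        "result" => "SUSPENDING"
    }
    [standalone@localhost:9990 /] :resume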
A timeout attribute will also be added to the shutdown operation. If this is present then the server will first be suspended, and the server will not shut down until either the suspend is successful or the timeout occurs. If no timeout parameter is passed to the operation then a normal non-graceful shutdown will take place. In domain mode these operations will be added to both individual servers and complete server groups. Implementation Details Suspend/resume operates on entry points to the server. Any request that is currently running must not be affected by the suspend state; however, any new request should be rejected. In general subsystems will track the number of outstanding requests, and when this hits zero they are considered suspended. We will introduce the notion of a global SuspendController that manages the server's suspend state. All subsystems that wish to do a graceful shutdown register callback handlers with this controller. When the suspend() operation is invoked the controller will invoke all these callbacks, letting the subsystem know that the server is suspending, and providing the subsystem with a SuspendContext object that the subsystem can then use to notify the controller that the suspend is complete. What the subsystem does when it receives a suspend command, and when it considers itself suspended, will vary, but in the common case it will immediately start rejecting external requests (e.g. Undertow will start responding with a 503 to all new requests). The subsystem will also track the number of outstanding requests, and when this hits zero then the subsystem will notify the controller that it has successfully suspended. Some subsystems will obviously want to do other actions on suspend, e.g. clustering will likely want to fail over, mod_cluster will notify the load balancer that the node is no longer available etc. In some cases we may want to make this configurable to an extent (e.g.
Undertow could be configured to allow requests with an existing session, and not consider itself timed out until all sessions have either timed out or been invalidated, although this will obviously take a while). If anyone has any feedback let me know. In terms of implementation my basic plan is to get the core functionality and the Undertow implementation into Wildfly, and then work with subsystem authors to implement subsystem-specific functionality once the core is in place. Stuart From stuart.w.douglas at gmail.com Mon Jun 9 18:45:38 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 09 Jun 2014 17:45:38 -0500 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <5395F0D6.7030902@redhat.com> References: <5395F0D6.7030902@redhat.com> Message-ID: <53963912.40303@gmail.com> We talked about this in Brno, as this was something the tools team wanted. I think what they were after was some bootstrap API that basically gave them the command line arguments they needed to launch the server, although I can't remember the full details. Stuart James R. Perkins wrote: > For the wildfly-maven-plugin I've written a simple class to launch a > process that starts WildFly. It also has a thin wrapper around the > deployment builder to ease the deployment process. > > I've heard we've been asked a few times about possibly creating a Gradle > plugin. As I understand it you can't use a maven plugin with Gradle. I'm > considering creating a separate bootstrap(ish) type of project to simple > launch WildFly from Java. Would anyone else find this useful? Or does > anyone have any objections to this? > From jperkins at redhat.com Mon Jun 9 18:48:41 2014 From: jperkins at redhat.com (James R.
Perkins) Date: Mon, 09 Jun 2014 15:48:41 -0700 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <53963912.40303@gmail.com> References: <5395F0D6.7030902@redhat.com> <53963912.40303@gmail.com> Message-ID: <539639C9.8060903@redhat.com> That would be easy enough to do. I created a quick project locally to just build a Server object to start, stop and check the state. Using some kind of builder to create the command or command parameters would be quite easy. On 06/09/2014 03:45 PM, Stuart Douglas wrote: > We talked about this in Brno, as this was something the tools team > wanted. I think what they were after was some bootstrap API that > basically gave them the command line arguments they needed to launch > the server, although I can't remember the full details. > > Stuart > > James R. Perkins wrote: >> For the wildfly-maven-plugin I've written a simple class to launch a >> process that starts WildFly. It also has a thin wrapper around the >> deployment builder to ease the deployment process. >> >> I've heard we've been asked a few times about possibly creating a Gradle >> plugin. As I understand it you can't use a maven plugin with Gradle. I'm >> considering creating a separate bootstrap(ish) type of project to simple >> launch WildFly from Java. Would anyone else find this useful? Or does >> anyone have any objections to this? >> -- James R. Perkins JBoss by Red Hat From qutpeter at gmail.com Mon Jun 9 18:58:05 2014 From: qutpeter at gmail.com (Peter Cai) Date: Tue, 10 Jun 2014 08:58:05 +1000 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <5395F0D6.7030902@redhat.com> References: <5395F0D6.7030902@redhat.com> Message-ID: Hi James, I believe that's where the core distribution of Wildfly comes in --- to allow interested users to boot/extend wildfly as any type of server, not merely an EE container. I do find this useful. In my previous project, we built a piece of software to distribute faxes to email.
This software runs in different IDCs across Australia, where faxes are terminated from the telecom network, and instances of it need to be managed and to synchronize provisioning data from a central node. If this piece of software had been equipped with domain management features like Wildfly provides, it would have made our lives much easier. Regards, On Tue, Jun 10, 2014 at 3:37 AM, James R. Perkins wrote: > For the wildfly-maven-plugin I've written a simple class to launch a > process that starts WildFly. It also has a thin wrapper around the > deployment builder to ease the deployment process. > > I've heard we've been asked a few times about possibly creating a Gradle > plugin. As I understand it you can't use a maven plugin with Gradle. I'm > considering creating a separate bootstrap(ish) type of project to simple > launch WildFly from Java. Would anyone else find this useful? Or does > anyone have any objections to this? > > -- > James R. Perkins > JBoss by Red Hat > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/eea0ca82/attachment.html From anmiller at redhat.com Mon Jun 9 19:04:16 2014 From: anmiller at redhat.com (Andrig Miller) Date: Mon, 9 Jun 2014 19:04:16 -0400 (EDT) Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396377F.80003@redhat.com> References: <5396377F.80003@redhat.com> Message-ID: <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> What I am bringing up is more subsystem-specific, but it might be valuable to think about. In the case of a timeout during graceful shutdown, what behavior would we consider correct in terms of an in-flight transaction?
Should it be a forced rollback, so that when the server is started back up, the transaction manager will not find in the log a transaction to be recovered? Or should it be considered the same as a crashed state, where transactions should be recoverable, and the recovery manager would try to recover the transaction? I would lean towards the first, as this would be considered graceful by the administrator, and having a transaction be in a state where it would be recovered on a restart doesn't seem graceful to me. Andy ----- Original Message ----- > From: "Stuart Douglas" > To: "Wildfly Dev mailing list" > Sent: Monday, June 9, 2014 4:38:55 PM > Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) > > Server suspend and resume is a feature that allows a running server > to > gracefully finish of all running requests. The most common use case > for > this is graceful shutdown, where you would like a server to complete > all > running requests, reject any new ones, and then shut down, however > there > are also plenty of other valid use cases (e.g. suspend the server, > modify a data source or some other config, then resume). > > User View: > > For the users point of view two new operations will be added to the > server: > > suspend(timeout) > resume() > > A runtime only attribute suspend-state (is this a good name?) will > also > be added, that can take one of three possible values, RUNNING, > SUSPENDING, SUSPENDED.
> > Implementation Details > > Suspend/resume operates on entry points to the server. Any request > that > is currently running must not be affected by the suspend state; > however, > any new request should be rejected. In general subsystems will track > the > number of outstanding requests, and when this hits zero they are > considered suspended. > > We will introduce the notion of a global SuspendController that > manages > the server's suspend state. All subsystems that wish to do a graceful > shutdown register callback handlers with this controller. > > When the suspend() operation is invoked the controller will invoke > all > these callbacks, letting the subsystem know that the server is > suspending, > and providing the subsystem with a SuspendContext object that the > subsystem can then use to notify the controller that the suspend is > complete. > > What the subsystem does when it receives a suspend command, and when > it > considers itself suspended, will vary, but in the common case it will > immediately start rejecting external requests (e.g. Undertow will > start > responding with a 503 to all new requests). The subsystem will also > track the number of outstanding requests, and when this hits zero > then > the subsystem will notify the controller that it has successfully > suspended. > Some subsystems will obviously want to do other actions on suspend; > e.g. > clustering will likely want to fail over, mod_cluster will notify the > load balancer that the node is no longer available, etc. In some cases > we > may want to make this configurable to an extent (e.g. Undertow could > be > configured to allow requests with an existing session, and not > consider > itself timed out until all sessions have either timed out or been > invalidated, although this will obviously take a while). > > If anyone has any feedback let me know.
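For illustration, the controller/callback contract described in the proposal could take roughly the following shape. Only SuspendController, SuspendContext, suspend(), resume(), and the three states come from the proposal itself; every other name and the counter-based completion tracking are guesses at one possible design, not the actual WildFly implementation.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed global controller: subsystems register a callback,
// and each one reports back via its SuspendContext once its in-flight
// request count reaches zero. Error handling, timeouts, and thread-safety
// details are deliberately simplified.
public class SuspendController {

    public enum State { RUNNING, SUSPENDING, SUSPENDED }

    public interface Callback {
        // The subsystem starts rejecting new work (e.g. answering 503),
        // then calls context.complete() when no requests remain in flight.
        void suspend(SuspendContext context);
    }

    public final class SuspendContext {
        public void complete() {
            if (pending.decrementAndGet() == 0) {
                state = State.SUSPENDED;
            }
        }
    }

    private final List<Callback> callbacks = new CopyOnWriteArrayList<>();
    private final AtomicInteger pending = new AtomicInteger();
    private volatile State state = State.RUNNING;

    public void register(Callback callback) {
        callbacks.add(callback);
    }

    public void suspend() {
        if (callbacks.isEmpty()) {
            state = State.SUSPENDED;
            return;
        }
        state = State.SUSPENDING;
        pending.set(callbacks.size());
        for (Callback c : callbacks) {
            c.suspend(new SuspendContext());
        }
    }

    public void resume() {
        // A real implementation would also tell each subsystem to start
        // accepting requests again.
        state = State.RUNNING;
    }

    public State getState() {
        return state;
    }
}
```

The sketch ignores the suspend timeout and double-completion; it only shows why the state machine needs the intermediate SUSPENDING value while callbacks drain.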
In terms of implementation my > basic plan is to get the core functionality and the Undertow > implementation into Wildfly, and then work with subsystem authors to > implement subsystem specific functionality once the core is in place. > > Stuart > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From jperkins at redhat.com Mon Jun 9 19:05:01 2014 From: jperkins at redhat.com (James R. Perkins) Date: Mon, 09 Jun 2014 16:05:01 -0700 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: References: <5395F0D6.7030902@redhat.com> Message-ID: <53963D9D.4050705@redhat.com> Hello Peter, The core distribution would be a little different. The idea with this is that it would essentially launch and manage a process. It would likely only be useful for plugins. The core distribution would be a stripped down version of WildFly. You'd still have to have some kind of script or way to start the server. On 06/09/2014 03:58 PM, Peter Cai wrote: > Hi James, > I believe that's where the core distribution of Wildfly comes in --- > to allow interested users to boot/extend wildfly as any type of > server, not merely EE container. > I do find this useful. In my previous project, we build a software to > distrbute fax to email. This software is running in different IDC > across Australia, where faxes are terminated from telcom network, and > instances of this software need to be managed and synchronized > provision data from central node. If this piece of software has been > equipped with Domain Management features like Wildfly provides, it > would have make our lives much easier. > Regards, > > > On Tue, Jun 10, 2014 at 3:37 AM, James R. Perkins > wrote: > > For the wildfly-maven-plugin I've written a simple class to launch a > process that starts WildFly.
It also has a thin wrapper around the > deployment builder to ease the deployment process. > > I've heard we've been asked a few times about possibly creating a > Gradle > plugin. As I understand it you can't use a maven plugin with > Gradle. I'm > considering creating a separate bootstrap(ish) type of project to > simple > launch WildFly from Java. Would anyone else find this useful? Or does > anyone have any objections to this? > > -- > James R. Perkins > JBoss by Red Hat > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -- James R. Perkins JBoss by Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140609/98863d61/attachment.html From qutpeter at gmail.com Mon Jun 9 19:21:51 2014 From: qutpeter at gmail.com (Peter Cai) Date: Tue, 10 Jun 2014 09:21:51 +1000 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <53963D9D.4050705@redhat.com> References: <5395F0D6.7030902@redhat.com> <53963D9D.4050705@redhat.com> Message-ID: Hi James, Probably I got you wrong; I left out important context: the wildfly-maven-plugin. When you said "It would likely only be useful for plugins", do you mean Maven plugins? Regards, On Tue, Jun 10, 2014 at 9:05 AM, James R. Perkins wrote: > Hello Peter, > The core distribution would be a little different. The idea with this is > that it would essentially launch and manage a process. It would likely only > be useful for plugins. > > The core distribution would be a stripped down version of WildFly. You'd > still have to have some kind of script or way to start the server. > > > On 06/09/2014 03:58 PM, Peter Cai wrote: > > Hi James, > I believe that's where the core distribution of Wildfly comes in --- to > allow interested users to boot/extend wildfly as any type of server, not > merely EE container.
> > I do find this useful. In my previous project, we build a software to > distrbute fax to email. This software is running in different IDC across > Australia, where faxes are terminated from telcom network, and instances of > this software need to be managed and synchronized provision data from > central node. If this piece of software has been equipped with Domain > Management features like Wildfly provides, it would have make our lives > much easier. > > Regards, > > > On Tue, Jun 10, 2014 at 3:37 AM, James R. Perkins > wrote: > >> For the wildfly-maven-plugin I've written a simple class to launch a >> process that starts WildFly. It also has a thin wrapper around the >> deployment builder to ease the deployment process. >> >> I've heard we've been asked a few times about possibly creating a Gradle >> plugin. As I understand it you can't use a maven plugin with Gradle. I'm >> considering creating a separate bootstrap(ish) type of project to simple >> launch WildFly from Java. Would anyone else find this useful? Or does >> anyone have any objections to this? >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > -- > James R. Perkins > JBoss by Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/0f15df43/attachment-0001.html From jperkins at redhat.com Mon Jun 9 19:24:39 2014 From: jperkins at redhat.com (James R. Perkins) Date: Mon, 09 Jun 2014 16:24:39 -0700 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: References: <5395F0D6.7030902@redhat.com> <53963D9D.4050705@redhat.com> Message-ID: <53964237.6060404@redhat.com> On 06/09/2014 04:21 PM, Peter Cai wrote: > Hi James, > Probably I got you wrong, I left out important context ----- > wildfly-maven-plugin. 
> When you said "It would likely only be useful for plugins", do you > means maven plugins? Yeah, but it could be any plugin. Like a Gradle plugin or a Forge plugin. As Stuart had mentioned too the JBoss Tools team may want to use it for creating the launch command and/or parameters. > Regards, > > > On Tue, Jun 10, 2014 at 9:05 AM, James R. Perkins > wrote: > > Hello Peter, > The core distribution would be a little different. The idea with > this is that it would essentially launch and manage a process. It > would likely only be useful for plugins. > > The core distribution would be a stripped down version of WildFly. > You'd still have to have some kind of script or way to start the > server. > > > On 06/09/2014 03:58 PM, Peter Cai wrote: >> Hi James, >> I believe that's where the core distribution of Wildfly comes in >> --- to allow interested users to boot/extend wildfly as any type >> of server, not merely EE container. >> I do find this useful. In my previous project, we build a >> software to distrbute fax to email. This software is running in >> different IDC across Australia, where faxes are terminated from >> telcom network, and instances of this software need to be managed >> and synchronized provision data from central node. If this piece >> of software has been equipped with Domain Management features >> like Wildfly provides, it would have make our lives much easier. >> Regards, >> >> >> On Tue, Jun 10, 2014 at 3:37 AM, James R. Perkins >> > wrote: >> >> For the wildfly-maven-plugin I've written a simple class to >> launch a >> process that starts WildFly. It also has a thin wrapper >> around the >> deployment builder to ease the deployment process. >> >> I've heard we've been asked a few times about possibly >> creating a Gradle >> plugin. As I understand it you can't use a maven plugin with >> Gradle. I'm >> considering creating a separate bootstrap(ish) type of >> project to simple >> launch WildFly from Java. Would anyone else find this useful? 
>> Or does >> anyone have any objections to this? >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> > > -- > James R. Perkins > JBoss by Red Hat > > -- James R. Perkins JBoss by Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140609/5d828fea/attachment.html From qutpeter at gmail.com Mon Jun 9 19:28:17 2014 From: qutpeter at gmail.com (Peter Cai) Date: Tue, 10 Jun 2014 09:28:17 +1000 Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <53964237.6060404@redhat.com> References: <5395F0D6.7030902@redhat.com> <53963D9D.4050705@redhat.com> <53964237.6060404@redhat.com> Message-ID: Thanks James. On Tue, Jun 10, 2014 at 9:24 AM, James R. Perkins wrote: > > On 06/09/2014 04:21 PM, Peter Cai wrote: > > Hi James, > Probably I got you wrong, I left out important context ----- > wildfly-maven-plugin. > > When you said "It would likely only be useful for plugins", do you means > maven plugins? > > Yeah, but it could be any plugin. Like a Gradle plugin or a Forge plugin. > As Stuart had mentioned too the JBoss Tools team may want to use it for > creating the launch command and/or parameters. > > > > Regards, > > > On Tue, Jun 10, 2014 at 9:05 AM, James R. Perkins > wrote: > >> Hello Peter, >> The core distribution would be a little different. The idea with this is >> that it would essentially launch and manage a process. It would likely only >> be useful for plugins. >> >> The core distribution would be a stripped down version of WildFly. You'd >> still have to have some kind of script or way to start the server. 
>> >> >> On 06/09/2014 03:58 PM, Peter Cai wrote: >> >> Hi James, >> I believe that's where the core distribution of Wildfly comes in --- to >> allow interested users to boot/extend wildfly as any type of server, not >> merely EE container. >> >> I do find this useful. In my previous project, we build a software to >> distrbute fax to email. This software is running in different IDC across >> Australia, where faxes are terminated from telcom network, and instances of >> this software need to be managed and synchronized provision data from >> central node. If this piece of software has been equipped with Domain >> Management features like Wildfly provides, it would have make our lives >> much easier. >> >> Regards, >> >> >> On Tue, Jun 10, 2014 at 3:37 AM, James R. Perkins >> wrote: >> >>> For the wildfly-maven-plugin I've written a simple class to launch a >>> process that starts WildFly. It also has a thin wrapper around the >>> deployment builder to ease the deployment process. >>> >>> I've heard we've been asked a few times about possibly creating a Gradle >>> plugin. As I understand it you can't use a maven plugin with Gradle. I'm >>> considering creating a separate bootstrap(ish) type of project to simple >>> launch WildFly from Java. Would anyone else find this useful? Or does >>> anyone have any objections to this? >>> >>> -- >>> James R. Perkins >>> JBoss by Red Hat >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> > > -- > James R. Perkins > JBoss by Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/005486ed/attachment.html From stuart.w.douglas at gmail.com Mon Jun 9 19:48:37 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 9 Jun 2014 18:48:37 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> References: <5396377F.80003@redhat.com> <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> Message-ID: <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> Something I forgot to mention is that we will need a switch to turn this off, as there is a small but noticeable cost with tracking in flight requests. > On 9 Jun 2014, at 18:04, Andrig Miller wrote: > > What I am bringing up is more subsystem specific, but it might be valuable to think about. In case of the time out of the graceful shutdown, what behavior would we consider correct in terms of an inflight transaction? It waits for the transaction to finish before shutting down. Stuart > > Should it be a forced rollback, so that when the server is started back up, the transaction manager will not find in the log a transaction to be recovered? > > Or, should it be considered the same as a crashed state, where transactions should be recoverable, and the recover manager wouuld try to recover the transaction? > > I would lean towards the first, as this would be considered graceful by the administrator, and having a transaction be in a state where it would be recovered on a restart, doesn't seem graceful to me. > > Andy > > ----- Original Message ----- >> From: "Stuart Douglas" >> To: "Wildfly Dev mailing list" >> Sent: Monday, June 9, 2014 4:38:55 PM >> Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) >> >> Server suspend and resume is a feature that allows a running server >> to >> gracefully finish of all running requests. 
The most common use case >> for >> this is graceful shutdown, where you would like a server to complete >> all >> running requests, reject any new ones, and then shut down, however >> there >> are also plenty of other valid use cases (e.g. suspend the server, >> modify a data source or some other config, then resume). >> >> User View: >> >> For the users point of view two new operations will be added to the >> server: >> >> suspend(timeout) >> resume() >> >> A runtime only attribute suspend-state (is this a good name?) will >> also >> be added, that can take one of three possible values, RUNNING, >> SUSPENDING, SUSPENDED. >> >> A timeout attribute will also be added to the shutdown operation. If >> this is present then the server will first be suspended, and the >> server >> will not shut down until either the suspend is successful or the >> timeout >> occurs. If no timeout parameter is passed to the operation then a >> normal >> non-graceful shutdown will take place. >> >> In domain mode these operations will be added to both individual >> server >> and a complete server group. >> >> Implementation Details >> >> Suspend/resume operates on entry points to the server. Any request >> that >> is currently running must not be affected by the suspend state, >> however >> any new request should be rejected. In general subsystems will track >> the >> number of outstanding requests, and when this hits zero they are >> considered suspended. >> >> We will introduce the notion of a global SuspendController, that >> manages >> the servers suspend state. All subsystems that wish to do a graceful >> shutdown register callback handlers with this controller. >> >> When the suspend() operation is invoked the controller will invoke >> all >> these callbacks, letting the subsystem know that the server is >> suspend, >> and providing the subsystem with a SuspendContext object that the >> subsystem can then use to notify the controller that the suspend is >> complete. 
>> >> What the subsystem does when it receives a suspend command, and when >> it >> considers itself suspended will vary, but in the common case it will >> immediatly start rejecting external requests (e.g. Undertow will >> start >> responding with a 503 to all new requests). The subsystem will also >> track the number of outstanding requests, and when this hits zero >> then >> the subsystem will notify the controller that is has successfully >> suspended. >> Some subsystems will obviously want to do other actions on suspend, >> e.g. >> clustering will likely want to fail over, mod_cluster will notify the >> load balancer that the node is no longer available etc. In some cases >> we >> may want to make this configurable to an extent (e.g. Undertow could >> be >> configured to allow requests with an existing session, and not >> consider >> itself timed out until all sessions have either timed out or been >> invalidated, although this will obviously take a while). >> >> If anyone has any feedback let me know. In terms of implementation my >> basic plan is to get the core functionality and the Undertow >> implementation into Wildfly, and then work with subsystem authors to >> implement subsystem specific functionality once the core is in place. >> >> Stuart >> >> >> >> >> >> >> >> The >> >> A timeout attribute will also be added to the shutdown command, >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jgreene at redhat.com Mon Jun 9 22:01:09 2014 From: jgreene at redhat.com (Jason T. 
Greene) Date: Mon, 9 Jun 2014 22:01:09 -0400 (EDT) Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> References: <5396377F.80003@redhat.com> <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> Message-ID: IIRC the behavior for a tx timeout is a rollback, but we should check that. > On Jun 9, 2014, at 6:50 PM, Stuart Douglas wrote: > > Something I forgot to mention is that we will need a switch to turn this off, as there is a small but noticeable cost with tracking in flight requests. > > >> On 9 Jun 2014, at 18:04, Andrig Miller wrote: >> >> What I am bringing up is more subsystem specific, but it might be valuable to think about. In case of the time out of the graceful shutdown, what behavior would we consider correct in terms of an inflight transaction? > > It waits for the transaction to finish before shutting down. > > Stuart > >> >> Should it be a forced rollback, so that when the server is started back up, the transaction manager will not find in the log a transaction to be recovered? >> >> Or, should it be considered the same as a crashed state, where transactions should be recoverable, and the recover manager wouuld try to recover the transaction? >> >> I would lean towards the first, as this would be considered graceful by the administrator, and having a transaction be in a state where it would be recovered on a restart, doesn't seem graceful to me. >> >> Andy >> >> ----- Original Message ----- >>> From: "Stuart Douglas" >>> To: "Wildfly Dev mailing list" >>> Sent: Monday, June 9, 2014 4:38:55 PM >>> Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) >>> >>> Server suspend and resume is a feature that allows a running server >>> to >>> gracefully finish of all running requests. 
The most common use case >>> for >>> this is graceful shutdown, where you would like a server to complete >>> all >>> running requests, reject any new ones, and then shut down, however >>> there >>> are also plenty of other valid use cases (e.g. suspend the server, >>> modify a data source or some other config, then resume). >>> >>> User View: >>> >>> For the users point of view two new operations will be added to the >>> server: >>> >>> suspend(timeout) >>> resume() >>> >>> A runtime only attribute suspend-state (is this a good name?) will >>> also >>> be added, that can take one of three possible values, RUNNING, >>> SUSPENDING, SUSPENDED. >>> >>> A timeout attribute will also be added to the shutdown operation. If >>> this is present then the server will first be suspended, and the >>> server >>> will not shut down until either the suspend is successful or the >>> timeout >>> occurs. If no timeout parameter is passed to the operation then a >>> normal >>> non-graceful shutdown will take place. >>> >>> In domain mode these operations will be added to both individual >>> server >>> and a complete server group. >>> >>> Implementation Details >>> >>> Suspend/resume operates on entry points to the server. Any request >>> that >>> is currently running must not be affected by the suspend state, >>> however >>> any new request should be rejected. In general subsystems will track >>> the >>> number of outstanding requests, and when this hits zero they are >>> considered suspended. >>> >>> We will introduce the notion of a global SuspendController, that >>> manages >>> the servers suspend state. All subsystems that wish to do a graceful >>> shutdown register callback handlers with this controller. 
>>> >>> When the suspend() operation is invoked the controller will invoke >>> all >>> these callbacks, letting the subsystem know that the server is >>> suspend, >>> and providing the subsystem with a SuspendContext object that the >>> subsystem can then use to notify the controller that the suspend is >>> complete. >>> >>> What the subsystem does when it receives a suspend command, and when >>> it >>> considers itself suspended will vary, but in the common case it will >>> immediatly start rejecting external requests (e.g. Undertow will >>> start >>> responding with a 503 to all new requests). The subsystem will also >>> track the number of outstanding requests, and when this hits zero >>> then >>> the subsystem will notify the controller that is has successfully >>> suspended. >>> Some subsystems will obviously want to do other actions on suspend, >>> e.g. >>> clustering will likely want to fail over, mod_cluster will notify the >>> load balancer that the node is no longer available etc. In some cases >>> we >>> may want to make this configurable to an extent (e.g. Undertow could >>> be >>> configured to allow requests with an existing session, and not >>> consider >>> itself timed out until all sessions have either timed out or been >>> invalidated, although this will obviously take a while). >>> >>> If anyone has any feedback let me know. In terms of implementation my >>> basic plan is to get the core functionality and the Undertow >>> implementation into Wildfly, and then work with subsystem authors to >>> implement subsystem specific functionality once the core is in place. 
>>> >>> Stuart >>> >>> >>> >>> >>> >>> >>> >>> The >>> >>> A timeout attribute will also be added to the shutdown command, >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jgreene at redhat.com Mon Jun 9 22:09:04 2014 From: jgreene at redhat.com (Jason T. Greene) Date: Mon, 9 Jun 2014 22:09:04 -0400 (EDT) Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: References: <5395F0D6.7030902@redhat.com> Message-ID: <7CCAEE6E-C6D9-4CBF-BD6A-440E7BD16583@redhat.com> We made a big step towards what you describe with the wildfly core distribution in 8. It gives you management, modularity, a service container, and an http server (primarily for servicing http management requests) Sent from my iPhone > On Jun 9, 2014, at 5:59 PM, Peter Cai wrote: > > Hi James, > I believe that's where the core distribution of Wildfly comes in --- to allow interested users to boot/extend wildfly as any type of server, not merely EE container. > > I do find this useful. In my previous project, we build a software to distrbute fax to email. This software is running in different IDC across Australia, where faxes are terminated from telcom network, and instances of this software need to be managed and synchronized provision data from central node. If this piece of software has been equipped with Domain Management features like Wildfly provides, it would have make our lives much easier. > > Regards, > > >> On Tue, Jun 10, 2014 at 3:37 AM, James R. 
Perkins wrote: >> For the wildfly-maven-plugin I've written a simple class to launch a >> process that starts WildFly. It also has a thin wrapper around the >> deployment builder to ease the deployment process. >> >> I've heard we've been asked a few times about possibly creating a Gradle >> plugin. As I understand it you can't use a maven plugin with Gradle. I'm >> considering creating a separate bootstrap(ish) type of project to simple >> launch WildFly from Java. Would anyone else find this useful? Or does >> anyone have any objections to this? >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140609/dffe3495/attachment.html From smcgowan at redhat.com Mon Jun 9 22:11:03 2014 From: smcgowan at redhat.com (Shelly McGowan) Date: Mon, 9 Jun 2014 22:11:03 -0400 (EDT) Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <539639C9.8060903@redhat.com> References: <5395F0D6.7030902@redhat.com> <53963912.40303@gmail.com> <539639C9.8060903@redhat.com> Message-ID: <1178736717.23591000.1402366263225.JavaMail.zimbra@redhat.com> Launcher API JIRA: https://issues.jboss.org/browse/WFLY-2427 contains previous discussion and input from tools team. Shelly ----- Original Message ----- From: "James R. Perkins" To: "Stuart Douglas" Cc: wildfly-dev at lists.jboss.org Sent: Monday, June 9, 2014 6:48:41 PM Subject: Re: [wildfly-dev] WildFly Bootstrap(ish) That would be easy enough to do. I created a quick project locally to just build a Server object to start, stop and check the state. 
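A launcher along the lines being discussed could be sketched as below. All class and method names here are hypothetical (this is not the wildfly-maven-plugin API or any existing WildFly launcher); the command line mirrors how standalone.sh invokes jboss-modules, but omits real-world details such as logging configuration.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical builder that assembles the command line for launching a
// standalone WildFly process, roughly:
//   java -Djboss.home.dir=<home> -jar jboss-modules.jar -mp modules org.jboss.as.standalone [args]
public class WildFlyCommandBuilder {

    private final File wildflyHome;
    private final List<String> serverArgs = new ArrayList<>();

    public WildFlyCommandBuilder(File wildflyHome) {
        this.wildflyHome = wildflyHome;
    }

    public WildFlyCommandBuilder addServerArg(String arg) {
        serverArgs.add(arg);
        return this;
    }

    // Pure function: builds the full command line without starting anything,
    // which is the piece a tools/IDE integration would want on its own.
    public List<String> build() {
        List<String> cmd = new ArrayList<>();
        cmd.add(new File(System.getProperty("java.home"), "bin/java").getAbsolutePath());
        cmd.add("-Djboss.home.dir=" + wildflyHome.getAbsolutePath());
        cmd.add("-jar");
        cmd.add(new File(wildflyHome, "jboss-modules.jar").getAbsolutePath());
        cmd.add("-mp");
        cmd.add(new File(wildflyHome, "modules").getAbsolutePath());
        cmd.add("org.jboss.as.standalone");
        cmd.addAll(serverArgs);
        return cmd;
    }

    public Process start() throws IOException {
        return new ProcessBuilder(build()).inheritIO().start();
    }
}
```

Separating build() from start() is what makes the same code reusable by both a plugin (which launches the process) and tooling (which only needs the arguments).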
Using some kind of builder to create the command or command parameters would be quite easy. On 06/09/2014 03:45 PM, Stuart Douglas wrote: > We talked about this in Brno, as this was something the tools team > wanted. I think what they were after was some bootstrap API that > basically gave them the command line arguments they needed to launch > the server, although I can't remember the full details. > > Stuart > > James R. Perkins wrote: >> For the wildfly-maven-plugin I've written a simple class to launch a >> process that starts WildFly. It also has a thin wrapper around the >> deployment builder to ease the deployment process. >> >> I've heard we've been asked a few times about possibly creating a Gradle >> plugin. As I understand it you can't use a maven plugin with Gradle. I'm >> considering creating a separate bootstrap(ish) type of project to simple >> launch WildFly from Java. Would anyone else find this useful? Or does >> anyone have any objections to this? >> -- James R. Perkins JBoss by Red Hat _______________________________________________ wildfly-dev mailing list wildfly-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/wildfly-dev From jperkins at redhat.com Mon Jun 9 22:13:59 2014 From: jperkins at redhat.com (James R. Perkins) Date: Mon, 9 Jun 2014 22:13:59 -0400 (EDT) Subject: [wildfly-dev] WildFly Bootstrap(ish) In-Reply-To: <1178736717.23591000.1402366263225.JavaMail.zimbra@redhat.com> References: <5395F0D6.7030902@redhat.com> <53963912.40303@gmail.com> <539639C9.8060903@redhat.com> <1178736717.23591000.1402366263225.JavaMail.zimbra@redhat.com> Message-ID: <1841227492.19405922.1402366439697.JavaMail.zimbra@zmail16.collab.prod.int.phx2.redhat.com> Excellent. I'll have a look at what's going on there. -- James R. 
Perkins JBoss by Red Hat On Jun 9, 2014 7:11 PM, Shelly McGowan wrote: > Launcher API JIRA: > https://issues.jboss.org/browse/WFLY-2427 > > contains previous discussion and input from tools team. > > Shelly From smarlow at redhat.com Mon Jun 9 22:35:31 2014 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 09 Jun 2014 22:35:31 -0400 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396377F.80003@redhat.com> References: <5396377F.80003@redhat.com> Message-ID: <53966EF3.7070108@redhat.com> On 06/09/2014 06:38 PM, Stuart Douglas wrote: > Server suspend and resume is a feature that allows a running server to > gracefully finish of all running requests. The most common use case for > this is graceful shutdown, where you would like a server to complete all > running requests, reject any new ones, and then shut down, however there > are also plenty of other valid use cases (e.g. suspend the server, > modify a data source or some other config, then resume). > > User View: > > For the users point of view two new operations will be added to the server: > > suspend(timeout) > resume() > > A runtime only attribute suspend-state (is this a good name?) will also > be added, that can take one of three possible values, RUNNING, > SUSPENDING, SUSPENDED. The SuspendController "state" might be a shorter attribute name and just as meaningful. When are we in the RUNNING state? Is that simply the pre-state for SUSPENDING? > > A timeout attribute will also be added to the shutdown operation. If > this is present then the server will first be suspended, and the server > will not shut down until either the suspend is successful or the timeout > occurs. If no timeout parameter is passed to the operation then a normal > non-graceful shutdown will take place. Will non-graceful shutdown wait for non-daemon threads or terminate immediately (i.e. call System.exit())?
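The distinction behind that last question is standard JVM shutdown behavior: returning from main() leaves the JVM running until every non-daemon thread finishes, whereas System.exit() tears the process down immediately. A small standalone illustration (not WildFly code):

```java
public class ShutdownBehavior {

    // Simulates one in-flight request running on a non-daemon worker thread.
    static String handleRequest() throws InterruptedException {
        final StringBuilder result = new StringBuilder();
        Thread inFlight = new Thread(() -> {
            try {
                Thread.sleep(200); // work still in progress at shutdown time
                result.append("request completed");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        inFlight.setDaemon(false); // non-daemon: the JVM will not exit while it runs
        inFlight.start();
        inFlight.join();
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        // Returning from main() with the non-daemon worker still alive lets
        // it finish. Calling System.exit(0) here instead would terminate the
        // JVM immediately, killing the request mid-flight.
        System.out.println(handleRequest());
    }
}
```

So a "non-graceful" shutdown that merely stops accepting work still gives non-daemon threads a chance to drain, while one that calls System.exit() does not.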
> > In domain mode these operations will be added to both individual server > and a complete server group. > > Implementation Details > > Suspend/resume operates on entry points to the server. Any request that > is currently running must not be affected by the suspend state, however > any new request should be rejected. In general subsystems will track the > number of outstanding requests, and when this hits zero they are > considered suspended. > > We will introduce the notion of a global SuspendController, that manages > the server's suspend state. All subsystems that wish to do a graceful > shutdown register callback handlers with this controller. > > When the suspend() operation is invoked the controller will invoke all > these callbacks, letting the subsystem know that the server is suspending, > and providing the subsystem with a SuspendContext object that the > subsystem can then use to notify the controller that the suspend is > complete. > > What the subsystem does when it receives a suspend command, and when it > considers itself suspended will vary, but in the common case it will > immediately start rejecting external requests (e.g. Undertow will start > responding with a 503 to all new requests). The subsystem will also > track the number of outstanding requests, and when this hits zero then > the subsystem will notify the controller that it has successfully > suspended. > Some subsystems will obviously want to do other actions on suspend, e.g. > clustering will likely want to fail over, mod_cluster will notify the > load balancer that the node is no longer available etc. In some cases we > may want to make this configurable to an extent (e.g. Undertow could be > configured to allow requests with an existing session, and not consider > itself timed out until all sessions have either timed out or been > invalidated, although this will obviously take a while). > > If anyone has any feedback let me know.
In terms of implementation my > basic plan is to get the core functionality and the Undertow > implementation into WildFly, and then work with subsystem authors to > implement subsystem specific functionality once the core is in place. > > Stuart > > > > > > > > The > > A timeout attribute will also be added to the shutdown command, > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From stuart.w.douglas at gmail.com Mon Jun 9 22:40:11 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 09 Jun 2014 21:40:11 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53966EF3.7070108@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> Message-ID: <5396700B.9030003@gmail.com> Scott Marlow wrote: > On 06/09/2014 06:38 PM, Stuart Douglas wrote: >> Server suspend and resume is a feature that allows a running server to >> gracefully finish off all running requests. The most common use case for >> this is graceful shutdown, where you would like a server to complete all >> running requests, reject any new ones, and then shut down, however there >> are also plenty of other valid use cases (e.g. suspend the server, >> modify a data source or some other config, then resume). >> >> User View: >> >> For the user's point of view two new operations will be added to the server: >> >> suspend(timeout) >> resume() >> >> A runtime only attribute suspend-state (is this a good name?) will also >> be added, that can take one of three possible values, RUNNING, >> SUSPENDING, SUSPENDED. > > The SuspendController "state" might be a shorter attribute name and just > as meaningful. This will be in the global server namespace (i.e. from the CLI :read-attribute(name="suspend-state")). I think the name 'state' is just too generic; which kind of state are we talking about?
> > When are we in the RUNNING state? Is that simply the pre-state for > SUSPENDING? 99.99% of the time. Basically servers are always running unless they have been explicitly suspended, and then they go from suspending to suspended. Note that if resume is called at any time the server goes to RUNNING again immediately, as when subsystems are notified they should be able to begin accepting requests again straight away. We also have admin only mode, which is a kinda similar concept, so we need to make sure we document the differences. > >> A timeout attribute will also be added to the shutdown operation. If >> this is present then the server will first be suspended, and the server >> will not shut down until either the suspend is successful or the timeout >> occurs. If no timeout parameter is passed to the operation then a normal >> non-graceful shutdown will take place. > > Will non-graceful shutdown wait for non-daemon threads or terminate > immediately (call System.exit())? It will execute the same way it does today (all services will shut down and then the server will exit). Stuart > >> In domain mode these operations will be added to both individual server >> and a complete server group. >> >> Implementation Details >> >> Suspend/resume operates on entry points to the server. Any request that >> is currently running must not be affected by the suspend state, however >> any new request should be rejected. In general subsystems will track the >> number of outstanding requests, and when this hits zero they are >> considered suspended. >> >> We will introduce the notion of a global SuspendController, that manages >> the server's suspend state. All subsystems that wish to do a graceful >> shutdown register callback handlers with this controller.
>> >> When the suspend() operation is invoked the controller will invoke all >> these callbacks, letting the subsystem know that the server is suspend, >> and providing the subsystem with a SuspendContext object that the >> subsystem can then use to notify the controller that the suspend is >> complete. >> >> What the subsystem does when it receives a suspend command, and when it >> considers itself suspended will vary, but in the common case it will >> immediatly start rejecting external requests (e.g. Undertow will start >> responding with a 503 to all new requests). The subsystem will also >> track the number of outstanding requests, and when this hits zero then >> the subsystem will notify the controller that is has successfully >> suspended. >> Some subsystems will obviously want to do other actions on suspend, e.g. >> clustering will likely want to fail over, mod_cluster will notify the >> load balancer that the node is no longer available etc. In some cases we >> may want to make this configurable to an extent (e.g. Undertow could be >> configured to allow requests with an existing session, and not consider >> itself timed out until all sessions have either timed out or been >> invalidated, although this will obviously take a while). >> >> If anyone has any feedback let me know. In terms of implementation my >> basic plan is to get the core functionality and the Undertow >> implementation into Wildfly, and then work with subsystem authors to >> implement subsystem specific functionality once the core is in place. 
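The controller/callback interaction described in the proposal can be sketched roughly as follows. The names `SuspendController` and `SuspendContext` come from the mail itself; every signature, the `SuspendState`/`SuspendCallback` types and the `ToySubsystem` class are guesses for illustration, not WildFly's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// RUNNING -> SUSPENDING -> SUSPENDED, with resume() returning to RUNNING.
enum SuspendState { RUNNING, SUSPENDING, SUSPENDED }

interface SuspendContext {
    void suspended(); // subsystem calls this once its in-flight count hits zero
}

interface SuspendCallback {
    void suspend(SuspendContext context);
}

// Global controller: notifies registered subsystems and flips to SUSPENDED
// once every subsystem has reported back.
class SuspendController {
    private final List<SuspendCallback> callbacks = new ArrayList<>();
    private final AtomicInteger pending = new AtomicInteger();
    private volatile SuspendState state = SuspendState.RUNNING;

    void register(SuspendCallback callback) { callbacks.add(callback); }

    SuspendState getState() { return state; }

    void suspend() {
        state = SuspendState.SUSPENDING;
        pending.set(callbacks.size());
        for (SuspendCallback callback : callbacks) {
            callback.suspend(() -> {
                if (pending.decrementAndGet() == 0) {
                    state = SuspendState.SUSPENDED;
                }
            });
        }
    }

    void resume() { state = SuspendState.RUNNING; }
}

// Toy "subsystem": rejects new work while suspending (not modelled here) and
// reports back once its outstanding request count drains to zero.
class ToySubsystem implements SuspendCallback {
    private int outstanding;
    private SuspendContext context;

    ToySubsystem(int outstanding) { this.outstanding = outstanding; }

    public void suspend(SuspendContext context) {
        this.context = context;
        if (outstanding == 0) context.suspended();
    }

    void requestFinished() {
        if (--outstanding == 0 && context != null) context.suspended();
    }
}

class SuspendDemo {
    public static void main(String[] args) {
        SuspendController controller = new SuspendController();
        ToySubsystem web = new ToySubsystem(2);   // two requests in flight
        controller.register(web);
        controller.suspend();
        System.out.println(controller.getState()); // SUSPENDING
        web.requestFinished();
        web.requestFinished();
        System.out.println(controller.getState()); // SUSPENDED
    }
}
```

A real controller would also need to handle the degenerate case of zero registered subsystems and be thread-safe across concurrent suspend/resume calls; both are omitted here.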
>> >> Stuart >> >> >> >> >> >> >> >> The >> >> A timeout attribute will also be added to the shutdown command, >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jai.forums2013 at gmail.com Tue Jun 10 00:46:53 2014 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Tue, 10 Jun 2014 10:16:53 +0530 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396377F.80003@redhat.com> References: <5396377F.80003@redhat.com> Message-ID: <53968DBD.4080507@gmail.com> This is more of a subsystem specific question - How are internal operations (i.e. something that doesn't exactly have an entry point /into/ the server) handled when the server is in a suspended state or when it is suspending. One such example is, an EJB application which might have scheduled timer tasks associated with it. Are such timer tasks supposed to continue to run even when server is suspended or when it is suspending? Or is the subsystem expected to shut those down too? -Jaikiran On Tuesday 10 June 2014 04:08 AM, Stuart Douglas wrote: > Server suspend and resume is a feature that allows a running server to > gracefully finish of all running requests. The most common use case for > this is graceful shutdown, where you would like a server to complete all > running requests, reject any new ones, and then shut down, however there > are also plenty of other valid use cases (e.g. suspend the server, > modify a data source or some other config, then resume). > > User View: > > For the users point of view two new operations will be added to the server: > > suspend(timeout) > resume() > > A runtime only attribute suspend-state (is this a good name?) 
will also > be added, that can take one of three possible values, RUNNING, > SUSPENDING, SUSPENDED. > > A timeout attribute will also be added to the shutdown operation. If > this is present then the server will first be suspended, and the server > will not shut down until either the suspend is successful or the timeout > occurs. If no timeout parameter is passed to the operation then a normal > non-graceful shutdown will take place. > > In domain mode these operations will be added to both individual server > and a complete server group. > > Implementation Details > > Suspend/resume operates on entry points to the server. Any request that > is currently running must not be affected by the suspend state, however > any new request should be rejected. In general subsystems will track the > number of outstanding requests, and when this hits zero they are > considered suspended. > > We will introduce the notion of a global SuspendController, that manages > the servers suspend state. All subsystems that wish to do a graceful > shutdown register callback handlers with this controller. > > When the suspend() operation is invoked the controller will invoke all > these callbacks, letting the subsystem know that the server is suspend, > and providing the subsystem with a SuspendContext object that the > subsystem can then use to notify the controller that the suspend is > complete. > > What the subsystem does when it receives a suspend command, and when it > considers itself suspended will vary, but in the common case it will > immediatly start rejecting external requests (e.g. Undertow will start > responding with a 503 to all new requests). The subsystem will also > track the number of outstanding requests, and when this hits zero then > the subsystem will notify the controller that is has successfully > suspended. > Some subsystems will obviously want to do other actions on suspend, e.g. 
> clustering will likely want to fail over, mod_cluster will notify the > load balancer that the node is no longer available etc. In some cases we > may want to make this configurable to an extent (e.g. Undertow could be > configured to allow requests with an existing session, and not consider > itself timed out until all sessions have either timed out or been > invalidated, although this will obviously take a while). > > If anyone has any feedback let me know. In terms of implementation my > basic plan is to get the core functionality and the Undertow > implementation into Wildfly, and then work with subsystem authors to > implement subsystem specific functionality once the core is in place. > > Stuart > > > > > > > > The > > A timeout attribute will also be added to the shutdown command, > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/3a75eeb1/attachment-0001.html From jai.forums2013 at gmail.com Tue Jun 10 01:02:02 2014 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Tue, 10 Jun 2014 10:32:02 +0530 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53968DBD.4080507@gmail.com> References: <5396377F.80003@redhat.com> <53968DBD.4080507@gmail.com> Message-ID: <5396914A.5050508@gmail.com> One other question - When a server is put into suspend mode, is it going to trigger undeployment of certain deployed deployments? And would that be considered the expected behaviour? More specifically, when the admin triggers a suspend, are the subsystems allowed to trigger certain operations which might stop the services that back the currently deployed deployments or are they expected to keep those services in a started/UP state? 
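For the timer question above, one plausible approach is for the timer subsystem to check the suspend state at the timer's entry point, so scheduled tasks simply stop firing while suspended and pick up again on resume. A hypothetical sketch — `SuspendableTimerService` is an invented name, not the EJB timer service's API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Toy timer service that skips firing while the server is suspended.
class SuspendableTimerService {
    private final AtomicBoolean suspended = new AtomicBoolean(false);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    final AtomicInteger firings = new AtomicInteger(); // visible for the demo

    void schedule(long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            if (!suspended.get()) {   // entry-point check: no new work while suspended
                firings.incrementAndGet();
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    void suspend() { suspended.set(true); }
    void resume() { suspended.set(false); }
    void shutdown() { scheduler.shutdownNow(); }
}

class TimerDemo {
    public static void main(String[] args) throws InterruptedException {
        SuspendableTimerService timers = new SuspendableTimerService();
        timers.suspend();          // suspended before anything fires
        timers.schedule(10);
        Thread.sleep(100);
        System.out.println("while suspended: " + timers.firings.get()); // 0
        timers.resume();
        Thread.sleep(150);
        System.out.println("after resume fired: " + (timers.firings.get() > 0));
        timers.shutdown();
    }
}
```

A fuller version would also count an in-progress timer execution as an outstanding request, so the subsystem only reports itself suspended once running tasks finish.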
-Jaikiran On Tuesday 10 June 2014 10:16 AM, Jaikiran Pai wrote: > This is more of a subsystem specific question - How are internal > operations (i.e. something that doesn't exactly have an entry point > /into/ the server) handled when the server is in a suspended state or > when it is suspending. One such example is, an EJB application which > might have scheduled timer tasks associated with it. Are such timer > tasks supposed to continue to run even when server is suspended or > when it is suspending? Or is the subsystem expected to shut those down > too? > > -Jaikiran > On Tuesday 10 June 2014 04:08 AM, Stuart Douglas wrote: >> Server suspend and resume is a feature that allows a running server to >> gracefully finish of all running requests. The most common use case for >> this is graceful shutdown, where you would like a server to complete all >> running requests, reject any new ones, and then shut down, however there >> are also plenty of other valid use cases (e.g. suspend the server, >> modify a data source or some other config, then resume). >> >> User View: >> >> For the users point of view two new operations will be added to the server: >> >> suspend(timeout) >> resume() >> >> A runtime only attribute suspend-state (is this a good name?) will also >> be added, that can take one of three possible values, RUNNING, >> SUSPENDING, SUSPENDED. >> >> A timeout attribute will also be added to the shutdown operation. If >> this is present then the server will first be suspended, and the server >> will not shut down until either the suspend is successful or the timeout >> occurs. If no timeout parameter is passed to the operation then a normal >> non-graceful shutdown will take place. >> >> In domain mode these operations will be added to both individual server >> and a complete server group. >> >> Implementation Details >> >> Suspend/resume operates on entry points to the server. 
Any request that >> is currently running must not be affected by the suspend state, however >> any new request should be rejected. In general subsystems will track the >> number of outstanding requests, and when this hits zero they are >> considered suspended. >> >> We will introduce the notion of a global SuspendController, that manages >> the servers suspend state. All subsystems that wish to do a graceful >> shutdown register callback handlers with this controller. >> >> When the suspend() operation is invoked the controller will invoke all >> these callbacks, letting the subsystem know that the server is suspend, >> and providing the subsystem with a SuspendContext object that the >> subsystem can then use to notify the controller that the suspend is >> complete. >> >> What the subsystem does when it receives a suspend command, and when it >> considers itself suspended will vary, but in the common case it will >> immediatly start rejecting external requests (e.g. Undertow will start >> responding with a 503 to all new requests). The subsystem will also >> track the number of outstanding requests, and when this hits zero then >> the subsystem will notify the controller that is has successfully >> suspended. >> Some subsystems will obviously want to do other actions on suspend, e.g. >> clustering will likely want to fail over, mod_cluster will notify the >> load balancer that the node is no longer available etc. In some cases we >> may want to make this configurable to an extent (e.g. Undertow could be >> configured to allow requests with an existing session, and not consider >> itself timed out until all sessions have either timed out or been >> invalidated, although this will obviously take a while). >> >> If anyone has any feedback let me know. 
In terms of implementation my >> basic plan is to get the core functionality and the Undertow >> implementation into Wildfly, and then work with subsystem authors to >> implement subsystem specific functionality once the core is in place. >> >> Stuart >> >> >> >> >> >> >> >> The >> >> A timeout attribute will also be added to the shutdown command, >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/c4c89c80/attachment.html From mmusgrov at redhat.com Tue Jun 10 05:51:02 2014 From: mmusgrov at redhat.com (Michael Musgrove) Date: Tue, 10 Jun 2014 10:51:02 +0100 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: References: <5396377F.80003@redhat.com> <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> Message-ID: <5396D506.1030402@redhat.com> I agree with Stuart, it should wait for the transaction to finish before shutting down. And yes (with caveats), Jason, when the timeout is reached our transaction reaper will abort the transaction. However, if the transaction was started with a timeout value of 0 it will never abort. Also, if the suspend happens when there are prepared transactions then it's too late to cancel the transaction and they will be recovered when the system is resumed. 
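The reaper behaviour described here — the transaction is aborted when its timeout is reached, except that a timeout of 0 means "never abort" — can be modelled as a background scheduler. This is a toy illustration of the stated rule, not Narayana's implementation; `ToyReaper` and `ToyTransaction` are invented names:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// A transaction that the reaper may roll back when its timeout elapses.
class ToyTransaction {
    final AtomicBoolean rolledBack = new AtomicBoolean(false);
}

// Toy reaper: schedules a rollback for each transaction with a non-zero
// timeout; a timeout of 0 is never scheduled, so it is never aborted.
class ToyReaper {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    ToyTransaction begin(long timeoutMillis) {
        ToyTransaction tx = new ToyTransaction();
        if (timeoutMillis > 0) {
            scheduler.schedule(() -> tx.rolledBack.set(true),
                    timeoutMillis, TimeUnit.MILLISECONDS);
        }
        return tx;
    }

    void shutdown() { scheduler.shutdownNow(); }
}

class ReaperDemo {
    public static void main(String[] args) throws InterruptedException {
        ToyReaper reaper = new ToyReaper();
        ToyTransaction tx = reaper.begin(50);   // 50 ms timeout
        Thread.sleep(200);
        System.out.println("rolled back: " + tx.rolledBack.get());
        reaper.shutdown();
    }
}
```

The other caveat in the mail — transactions that have already prepared are past the point of cancellation and are recovered on resume instead — is a separate code path not modelled here.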
Note also that suspending before an in-flight transaction has prepared is probably safe since the resource will either: - rollback the branch if all connections to the db are closed (when the system suspends); or - rollback the branch if the XAResource timeout (set via the XAResource.setTransactionTimeout()) value is reached [And since it was never prepared we have no log record for it so we would not do anything on resume] Mike > IIRC the behavior for a tx timeout is a rollback, but we should check that. > >> On Jun 9, 2014, at 6:50 PM, Stuart Douglas wrote: >> >> Something I forgot to mention is that we will need a switch to turn this off, as there is a small but noticeable cost with tracking in flight requests. >> >> >>> On 9 Jun 2014, at 18:04, Andrig Miller wrote: >>> >>> What I am bringing up is more subsystem specific, but it might be valuable to think about. In case of the time out of the graceful shutdown, what behavior would we consider correct in terms of an inflight transaction? >> It waits for the transaction to finish before shutting down. >> >> Stuart >> >>> Should it be a forced rollback, so that when the server is started back up, the transaction manager will not find in the log a transaction to be recovered? >>> >>> Or, should it be considered the same as a crashed state, where transactions should be recoverable, and the recover manager wouuld try to recover the transaction? >>> >>> I would lean towards the first, as this would be considered graceful by the administrator, and having a transaction be in a state where it would be recovered on a restart, doesn't seem graceful to me. 
>>> >>> Andy >>> >>> ----- Original Message ----- >>>> From: "Stuart Douglas" >>>> To: "Wildfly Dev mailing list" >>>> Sent: Monday, June 9, 2014 4:38:55 PM >>>> Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) >>>> >>>> Server suspend and resume is a feature that allows a running server >>>> to >>>> gracefully finish of all running requests. The most common use case >>>> for >>>> this is graceful shutdown, where you would like a server to complete >>>> all >>>> running requests, reject any new ones, and then shut down, however >>>> there >>>> are also plenty of other valid use cases (e.g. suspend the server, >>>> modify a data source or some other config, then resume). >>>> >>>> User View: >>>> >>>> For the users point of view two new operations will be added to the >>>> server: >>>> >>>> suspend(timeout) >>>> resume() >>>> >>>> A runtime only attribute suspend-state (is this a good name?) will >>>> also >>>> be added, that can take one of three possible values, RUNNING, >>>> SUSPENDING, SUSPENDED. >>>> >>>> A timeout attribute will also be added to the shutdown operation. If >>>> this is present then the server will first be suspended, and the >>>> server >>>> will not shut down until either the suspend is successful or the >>>> timeout >>>> occurs. If no timeout parameter is passed to the operation then a >>>> normal >>>> non-graceful shutdown will take place. >>>> >>>> In domain mode these operations will be added to both individual >>>> server >>>> and a complete server group. >>>> >>>> Implementation Details >>>> >>>> Suspend/resume operates on entry points to the server. Any request >>>> that >>>> is currently running must not be affected by the suspend state, >>>> however >>>> any new request should be rejected. In general subsystems will track >>>> the >>>> number of outstanding requests, and when this hits zero they are >>>> considered suspended. 
>>>> >>>> We will introduce the notion of a global SuspendController, that >>>> manages >>>> the servers suspend state. All subsystems that wish to do a graceful >>>> shutdown register callback handlers with this controller. >>>> >>>> When the suspend() operation is invoked the controller will invoke >>>> all >>>> these callbacks, letting the subsystem know that the server is >>>> suspend, >>>> and providing the subsystem with a SuspendContext object that the >>>> subsystem can then use to notify the controller that the suspend is >>>> complete. >>>> >>>> What the subsystem does when it receives a suspend command, and when >>>> it >>>> considers itself suspended will vary, but in the common case it will >>>> immediatly start rejecting external requests (e.g. Undertow will >>>> start >>>> responding with a 503 to all new requests). The subsystem will also >>>> track the number of outstanding requests, and when this hits zero >>>> then >>>> the subsystem will notify the controller that is has successfully >>>> suspended. >>>> Some subsystems will obviously want to do other actions on suspend, >>>> e.g. >>>> clustering will likely want to fail over, mod_cluster will notify the >>>> load balancer that the node is no longer available etc. In some cases >>>> we >>>> may want to make this configurable to an extent (e.g. Undertow could >>>> be >>>> configured to allow requests with an existing session, and not >>>> consider >>>> itself timed out until all sessions have either timed out or been >>>> invalidated, although this will obviously take a while). >>>> >>>> If anyone has any feedback let me know. In terms of implementation my >>>> basic plan is to get the core functionality and the Undertow >>>> implementation into Wildfly, and then work with subsystem authors to >>>> implement subsystem specific functionality once the core is in place. 
>>>> >>>> Stuart >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> The >>>> >>>> A timeout attribute will also be added to the shutdown command, >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Michael Musgrove Transactions Team e: mmusgrov at redhat.com t: +44 191 243 0870 Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson (US), Michael O'Neill(Ireland) From stuart.w.douglas at gmail.com Tue Jun 10 08:20:42 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 07:20:42 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53968DBD.4080507@gmail.com> References: <5396377F.80003@redhat.com> <53968DBD.4080507@gmail.com> Message-ID: <65DBB87C-48FD-41E7-A60D-634A1E11DD78@gmail.com> They still have an entry point, the subsystem tracks outstanding timers and will not execute new ones while suspended. Stuart Sent from my iPhone > On 9 Jun 2014, at 23:46, Jaikiran Pai wrote: > > This is more of a subsystem specific question - How are internal operations (i.e. something that doesn't exactly have an entry point into the server) handled when the server is in a suspended state or when it is suspending. 
One such example is, an EJB application which might have scheduled timer tasks associated with it. Are such timer tasks supposed to continue to run even when server is suspended or when it is suspending? Or is the subsystem expected to shut those down too? > > -Jaikiran >> On Tuesday 10 June 2014 04:08 AM, Stuart Douglas wrote: >> Server suspend and resume is a feature that allows a running server to >> gracefully finish of all running requests. The most common use case for >> this is graceful shutdown, where you would like a server to complete all >> running requests, reject any new ones, and then shut down, however there >> are also plenty of other valid use cases (e.g. suspend the server, >> modify a data source or some other config, then resume). >> >> User View: >> >> For the users point of view two new operations will be added to the server: >> >> suspend(timeout) >> resume() >> >> A runtime only attribute suspend-state (is this a good name?) will also >> be added, that can take one of three possible values, RUNNING, >> SUSPENDING, SUSPENDED. >> >> A timeout attribute will also be added to the shutdown operation. If >> this is present then the server will first be suspended, and the server >> will not shut down until either the suspend is successful or the timeout >> occurs. If no timeout parameter is passed to the operation then a normal >> non-graceful shutdown will take place. >> >> In domain mode these operations will be added to both individual server >> and a complete server group. >> >> Implementation Details >> >> Suspend/resume operates on entry points to the server. Any request that >> is currently running must not be affected by the suspend state, however >> any new request should be rejected. In general subsystems will track the >> number of outstanding requests, and when this hits zero they are >> considered suspended. >> >> We will introduce the notion of a global SuspendController, that manages >> the servers suspend state. 
All subsystems that wish to do a graceful >> shutdown register callback handlers with this controller. >> >> When the suspend() operation is invoked the controller will invoke all >> these callbacks, letting the subsystem know that the server is suspend, >> and providing the subsystem with a SuspendContext object that the >> subsystem can then use to notify the controller that the suspend is >> complete. >> >> What the subsystem does when it receives a suspend command, and when it >> considers itself suspended will vary, but in the common case it will >> immediatly start rejecting external requests (e.g. Undertow will start >> responding with a 503 to all new requests). The subsystem will also >> track the number of outstanding requests, and when this hits zero then >> the subsystem will notify the controller that is has successfully >> suspended. >> Some subsystems will obviously want to do other actions on suspend, e.g. >> clustering will likely want to fail over, mod_cluster will notify the >> load balancer that the node is no longer available etc. In some cases we >> may want to make this configurable to an extent (e.g. Undertow could be >> configured to allow requests with an existing session, and not consider >> itself timed out until all sessions have either timed out or been >> invalidated, although this will obviously take a while). >> >> If anyone has any feedback let me know. In terms of implementation my >> basic plan is to get the core functionality and the Undertow >> implementation into Wildfly, and then work with subsystem authors to >> implement subsystem specific functionality once the core is in place. 
>> >> Stuart >> >> >> >> >> >> >> >> The >> >> A timeout attribute will also be added to the shutdown command, >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/a59255e7/attachment-0001.html From stuart.w.douglas at gmail.com Tue Jun 10 08:22:02 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 07:22:02 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396914A.5050508@gmail.com> References: <5396377F.80003@redhat.com> <53968DBD.4080507@gmail.com> <5396914A.5050508@gmail.com> Message-ID: <6F93F4E5-C77A-4495-813E-A550E1C74C36@gmail.com> Nothing is undeployed and all services remain up. If the server is resumed it should be able to begin processing requests immediately. Stuart Sent from my iPhone > On 10 Jun 2014, at 0:02, Jaikiran Pai wrote: > > One other question - When a server is put into suspend mode, is it going to trigger undeployment of certain deployed deployments? And would that be considered the expected behaviour? More specifically, when the admin triggers a suspend, are the subsystems allowed to trigger certain operations which might stop the services that back the currently deployed deployments or are they expected to keep those services in a started/UP state? > > -Jaikiran >> On Tuesday 10 June 2014 10:16 AM, Jaikiran Pai wrote: >> This is more of a subsystem specific question - How are internal operations (i.e.
something that doesn't exactly have an entry point into the server) handled when the server is in a suspended state or when it is suspending. One such example is, an EJB application which might have scheduled timer tasks associated with it. Are such timer tasks supposed to continue to run even when server is suspended or when it is suspending? Or is the subsystem expected to shut those down too? >> >> -Jaikiran >>> On Tuesday 10 June 2014 04:08 AM, Stuart Douglas wrote: >>> Server suspend and resume is a feature that allows a running server to >>> gracefully finish of all running requests. The most common use case for >>> this is graceful shutdown, where you would like a server to complete all >>> running requests, reject any new ones, and then shut down, however there >>> are also plenty of other valid use cases (e.g. suspend the server, >>> modify a data source or some other config, then resume). >>> >>> User View: >>> >>> For the users point of view two new operations will be added to the server: >>> >>> suspend(timeout) >>> resume() >>> >>> A runtime only attribute suspend-state (is this a good name?) will also >>> be added, that can take one of three possible values, RUNNING, >>> SUSPENDING, SUSPENDED. >>> >>> A timeout attribute will also be added to the shutdown operation. If >>> this is present then the server will first be suspended, and the server >>> will not shut down until either the suspend is successful or the timeout >>> occurs. If no timeout parameter is passed to the operation then a normal >>> non-graceful shutdown will take place. >>> >>> In domain mode these operations will be added to both individual server >>> and a complete server group. >>> >>> Implementation Details >>> >>> Suspend/resume operates on entry points to the server. Any request that >>> is currently running must not be affected by the suspend state, however >>> any new request should be rejected. 
In general subsystems will track the >>> number of outstanding requests, and when this hits zero they are >>> considered suspended. >>> >>> We will introduce the notion of a global SuspendController that manages >>> the server's suspend state. All subsystems that wish to do a graceful >>> shutdown register callback handlers with this controller. >>> >>> When the suspend() operation is invoked the controller will invoke all >>> these callbacks, letting the subsystem know that the server is suspending, >>> and providing the subsystem with a SuspendContext object that the >>> subsystem can then use to notify the controller that the suspend is >>> complete. >>> >>> What the subsystem does when it receives a suspend command, and when it >>> considers itself suspended, will vary, but in the common case it will >>> immediately start rejecting external requests (e.g. Undertow will start >>> responding with a 503 to all new requests). The subsystem will also >>> track the number of outstanding requests, and when this hits zero then >>> the subsystem will notify the controller that it has successfully >>> suspended. >>> Some subsystems will obviously want to do other actions on suspend, e.g. >>> clustering will likely want to fail over, mod_cluster will notify the >>> load balancer that the node is no longer available, etc. In some cases we >>> may want to make this configurable to an extent (e.g. Undertow could be >>> configured to allow requests with an existing session, and not consider >>> itself suspended until all sessions have either timed out or been >>> invalidated, although this will obviously take a while). >>> >>> If anyone has any feedback let me know. In terms of implementation my >>> basic plan is to get the core functionality and the Undertow >>> implementation into Wildfly, and then work with subsystem authors to >>> implement subsystem-specific functionality once the core is in place.
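The controller/callback design sketched in the proposal above could look roughly like the following. Note that SuspendController, ServerActivity and SuspendContext here are hypothetical shapes inferred from the description, not the eventual WildFly classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the SuspendController/callback flow described in
// the proposal. All names are illustrative assumptions, not the real API.
class SuspendController {
    enum State { RUNNING, SUSPENDING, SUSPENDED }

    // A subsystem entry point that can be asked to suspend/resume.
    interface ServerActivity {
        void suspend(SuspendContext context);
        void resume();
    }

    // Handed to each subsystem so it can report when its in-flight work is done.
    interface SuspendContext {
        void suspended();
    }

    private final List<ServerActivity> activities = new ArrayList<>();
    private final AtomicInteger outstanding = new AtomicInteger();
    private volatile State state = State.RUNNING;

    void register(ServerActivity activity) {
        activities.add(activity);
    }

    void suspend() {
        state = State.SUSPENDING;
        outstanding.set(activities.size());
        SuspendContext context = () -> {
            // The last subsystem to finish flips the server to SUSPENDED.
            if (outstanding.decrementAndGet() == 0) {
                state = State.SUSPENDED;
            }
        };
        for (ServerActivity activity : activities) {
            activity.suspend(context);
        }
        if (activities.isEmpty()) {
            state = State.SUSPENDED;
        }
    }

    void resume() {
        // Subsystems must be able to accept requests again straight away.
        state = State.RUNNING;
        for (ServerActivity activity : activities) {
            activity.resume();
        }
    }

    State getState() {
        return state;
    }
}
```

A subsystem with no in-flight work can call suspended() synchronously from its suspend callback, in which case the whole server transitions to SUSPENDED before suspend() returns.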
>>> >>> Stuart >>> >>> >>> >>> >>> >>> >>> >>> The >>> >>> A timeout attribute will also be added to the shutdown command, >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/b39c255f/attachment.html From smarlow at redhat.com Tue Jun 10 08:29:36 2014 From: smarlow at redhat.com (Scott Marlow) Date: Tue, 10 Jun 2014 08:29:36 -0400 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396700B.9030003@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> Message-ID: <5396FA30.1000706@redhat.com> On 06/09/2014 10:40 PM, Stuart Douglas wrote: > > > Scott Marlow wrote: >> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>> Server suspend and resume is a feature that allows a running server to >>> gracefully finish of all running requests. The most common use case for >>> this is graceful shutdown, where you would like a server to complete all >>> running requests, reject any new ones, and then shut down, however there >>> are also plenty of other valid use cases (e.g. suspend the server, >>> modify a data source or some other config, then resume). >>> >>> User View: >>> >>> For the users point of view two new operations will be added to the >>> server: >>> >>> suspend(timeout) >>> resume() >>> >>> A runtime only attribute suspend-state (is this a good name?) will also >>> be added, that can take one of three possible values, RUNNING, >>> SUSPENDING, SUSPENDED. 
>> >> The SuspendController "state" might be a shorter attribute name and just >> as meaningful. > > This will be in the global server namespace (i.e. from the CLI > :read-attribute(name="suspend-state")). > > I think the name 'state' is just too generic; which kind of state are we > talking about? I was thinking suspend-state was an attribute of SuspendController. Thanks for the explanation. > >> >> When are we in the RUNNING state? Is that simply the pre-state for >> SUSPENDING? > > 99.99% of the time. Basically servers are always running unless they > have been explicitly suspended, and then they go from suspending to > suspended. Note that if resume is called at any time the server goes to > RUNNING again immediately, as when subsystems are notified they should > be able to begin accepting requests again straight away. > > We also have admin-only mode, which is a kinda similar concept, so we > need to make sure we document the differences. > >> >>> A timeout attribute will also be added to the shutdown operation. If >>> this is present then the server will first be suspended, and the server >>> will not shut down until either the suspend is successful or the timeout >>> occurs. If no timeout parameter is passed to the operation then a normal >>> non-graceful shutdown will take place. >> >> Will non-graceful shutdown wait for non-daemon threads or terminate >> immediately (call System.exit())? > > It will execute the same way it does today (all services will shut down > and then the server will exit). > > Stuart > >> >>> In domain mode these operations will be added to both individual server >>> and a complete server group. >>> >>> Implementation Details >>> >>> Suspend/resume operates on entry points to the server. Any request that >>> is currently running must not be affected by the suspend state, however >>> any new request should be rejected.
In general subsystems will track the >>> number of outstanding requests, and when this hits zero they are >>> considered suspended. >>> >>> We will introduce the notion of a global SuspendController, that manages >>> the servers suspend state. All subsystems that wish to do a graceful >>> shutdown register callback handlers with this controller. >>> >>> When the suspend() operation is invoked the controller will invoke all >>> these callbacks, letting the subsystem know that the server is suspend, >>> and providing the subsystem with a SuspendContext object that the >>> subsystem can then use to notify the controller that the suspend is >>> complete. >>> >>> What the subsystem does when it receives a suspend command, and when it >>> considers itself suspended will vary, but in the common case it will >>> immediatly start rejecting external requests (e.g. Undertow will start >>> responding with a 503 to all new requests). The subsystem will also >>> track the number of outstanding requests, and when this hits zero then >>> the subsystem will notify the controller that is has successfully >>> suspended. >>> Some subsystems will obviously want to do other actions on suspend, e.g. >>> clustering will likely want to fail over, mod_cluster will notify the >>> load balancer that the node is no longer available etc. In some cases we >>> may want to make this configurable to an extent (e.g. Undertow could be >>> configured to allow requests with an existing session, and not consider >>> itself timed out until all sessions have either timed out or been >>> invalidated, although this will obviously take a while). >>> >>> If anyone has any feedback let me know. In terms of implementation my >>> basic plan is to get the core functionality and the Undertow >>> implementation into Wildfly, and then work with subsystem authors to >>> implement subsystem specific functionality once the core is in place. 
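The per-subsystem bookkeeping described in the quoted proposal (reject new work while suspending, count outstanding requests, report back when the count hits zero) can be sketched as a small gate. This is an illustrative assumption of how a subsystem might wire it up, not actual Undertow or WildFly code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative request-tracking gate for a single subsystem entry point.
// A caller that gets 'false' from tryBegin() should reject the request
// (e.g. an HTTP front end would answer 503, as in the proposal).
class RequestCountingGate {
    private final AtomicLong active = new AtomicLong();
    private volatile boolean suspended = false;
    private volatile Runnable suspendCallback;

    boolean tryBegin() {
        if (suspended) {
            return false;
        }
        active.incrementAndGet();
        if (suspended) {
            // Lost a race with suspend(); undo the count and reject.
            end();
            return false;
        }
        return true;
    }

    void end() {
        if (active.decrementAndGet() == 0 && suspended) {
            fireSuspended();
        }
    }

    void suspend(Runnable callback) {
        suspendCallback = callback;
        suspended = true;
        if (active.get() == 0) {
            fireSuspended();
        }
    }

    void resume() {
        suspended = false;
    }

    // NB: a production version would have to guard against this firing twice.
    private void fireSuspended() {
        Runnable callback = suspendCallback;
        if (callback != null) {
            callback.run();
        }
    }
}
```

The double check in tryBegin() is what keeps an in-flight request from slipping in between the suspend flag being set and the outstanding count being read; it also illustrates the "small but noticeable cost" of tracking in-flight requests that Stuart mentions later in the thread.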
>>> >>> Stuart >>> >>> >>> >>> >>> >>> >>> >>> The >>> >>> A timeout attribute will also be added to the shutdown command, >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev From anmiller at redhat.com Tue Jun 10 08:50:52 2014 From: anmiller at redhat.com (Andrig Miller) Date: Tue, 10 Jun 2014 08:50:52 -0400 (EDT) Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396D506.1030402@redhat.com> References: <5396377F.80003@redhat.com> <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> <5396D506.1030402@redhat.com> Message-ID: <31464906.7003.1402404649117.JavaMail.andrig@worklaptop.miller.org> ----- Original Message ----- > From: "Michael Musgrove" > To: "Jason T. Greene" , "Stuart Douglas" > Cc: "Wildfly Dev mailing list" , "Stuart Douglas" > Sent: Tuesday, June 10, 2014 3:51:02 AM > Subject: Re: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) > > I agree with Stuart, it should wait for the transaction to finish > before > shutting down. > > And yes (with caveats), Jason, when the timeout is reached our > transaction reaper will abort the transaction. However, if the > transaction was started with a timeout value of 0 it will never > abort. Also, if the suspend happens when there are prepared > transactions then it's too late to cancel the transaction and they > will be recovered when the system is resumed. > If the transaction will never abort, can we force a rollback? This could lead to a never ending "graceful" shutdown. 
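Mike's point above about a zero timeout can be boiled down to a toy check. This illustrates only the rule "timeout 0 means the reaper never aborts"; it is not the actual transaction reaper implementation:

```java
// Toy version of the abort decision a transaction reaper makes.
// A timeout of 0 conventionally means "no timeout", so such a
// transaction is never aborted by the reaper - which is why a
// graceful shutdown that waits on it needs its own timeout.
class ToyReaper {
    static boolean shouldAbort(long startedMillis, int timeoutSeconds, long nowMillis) {
        if (timeoutSeconds == 0) {
            return false; // never reaped, no matter how old
        }
        return nowMillis - startedMillis >= timeoutSeconds * 1000L;
    }
}
```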
Andy > Note also that suspending before an in-flight transaction has > prepared is probably safe since the resource will either: > > - rollback the branch if all connections to the db are closed (when > the system suspends); or > - rollback the branch if the XAResource timeout (set via the > XAResource.setTransactionTimeout()) value is reached > > [And since it was never prepared we have no log record for it so we > would not do anything on resume] > > Mike > > > IIRC the behavior for a tx timeout is a rollback, but we should > > check that. > > > >> On Jun 9, 2014, at 6:50 PM, Stuart Douglas > >> wrote: > >> > >> Something I forgot to mention is that we will need a switch to > >> turn this off, as there is a small but noticeable cost with > >> tracking in-flight requests. > >> > >> > >>> On 9 Jun 2014, at 18:04, Andrig Miller > >>> wrote: > >>> > >>> What I am bringing up is more subsystem specific, but it might be > >>> valuable to think about. In case of the time out of the > >>> graceful shutdown, what behavior would we consider correct in > >>> terms of an in-flight transaction? > >> It waits for the transaction to finish before shutting down. > >> > >> Stuart > >> > >>> Should it be a forced rollback, so that when the server is > >>> started back up, the transaction manager will not find in the > >>> log a transaction to be recovered? > >>> > >>> Or, should it be considered the same as a crashed state, where > >>> transactions should be recoverable, and the recovery manager > >>> would try to recover the transaction? > >>> > >>> I would lean towards the first, as this would be considered > >>> graceful by the administrator, and having a transaction be in a > >>> state where it would be recovered on a restart doesn't seem > >>> graceful to me.
> >>> > >>> Andy > >>> > >>> ----- Original Message ----- > >>>> From: "Stuart Douglas" > >>>> To: "Wildfly Dev mailing list" > >>>> Sent: Monday, June 9, 2014 4:38:55 PM > >>>> Subject: [wildfly-dev] Design Proposal: Server suspend/resume > >>>> (AKA Graceful Shutdown) > >>>> > >>>> Server suspend and resume is a feature that allows a running > >>>> server > >>>> to > >>>> gracefully finish of all running requests. The most common use > >>>> case > >>>> for > >>>> this is graceful shutdown, where you would like a server to > >>>> complete > >>>> all > >>>> running requests, reject any new ones, and then shut down, > >>>> however > >>>> there > >>>> are also plenty of other valid use cases (e.g. suspend the > >>>> server, > >>>> modify a data source or some other config, then resume). > >>>> > >>>> User View: > >>>> > >>>> For the users point of view two new operations will be added to > >>>> the > >>>> server: > >>>> > >>>> suspend(timeout) > >>>> resume() > >>>> > >>>> A runtime only attribute suspend-state (is this a good name?) > >>>> will > >>>> also > >>>> be added, that can take one of three possible values, RUNNING, > >>>> SUSPENDING, SUSPENDED. > >>>> > >>>> A timeout attribute will also be added to the shutdown > >>>> operation. If > >>>> this is present then the server will first be suspended, and the > >>>> server > >>>> will not shut down until either the suspend is successful or the > >>>> timeout > >>>> occurs. If no timeout parameter is passed to the operation then > >>>> a > >>>> normal > >>>> non-graceful shutdown will take place. > >>>> > >>>> In domain mode these operations will be added to both individual > >>>> server > >>>> and a complete server group. > >>>> > >>>> Implementation Details > >>>> > >>>> Suspend/resume operates on entry points to the server. Any > >>>> request > >>>> that > >>>> is currently running must not be affected by the suspend state, > >>>> however > >>>> any new request should be rejected. 
In general subsystems will > >>>> track > >>>> the > >>>> number of outstanding requests, and when this hits zero they are > >>>> considered suspended. > >>>> > >>>> We will introduce the notion of a global SuspendController, that > >>>> manages > >>>> the servers suspend state. All subsystems that wish to do a > >>>> graceful > >>>> shutdown register callback handlers with this controller. > >>>> > >>>> When the suspend() operation is invoked the controller will > >>>> invoke > >>>> all > >>>> these callbacks, letting the subsystem know that the server is > >>>> suspend, > >>>> and providing the subsystem with a SuspendContext object that > >>>> the > >>>> subsystem can then use to notify the controller that the suspend > >>>> is > >>>> complete. > >>>> > >>>> What the subsystem does when it receives a suspend command, and > >>>> when > >>>> it > >>>> considers itself suspended will vary, but in the common case it > >>>> will > >>>> immediatly start rejecting external requests (e.g. Undertow will > >>>> start > >>>> responding with a 503 to all new requests). The subsystem will > >>>> also > >>>> track the number of outstanding requests, and when this hits > >>>> zero > >>>> then > >>>> the subsystem will notify the controller that is has > >>>> successfully > >>>> suspended. > >>>> Some subsystems will obviously want to do other actions on > >>>> suspend, > >>>> e.g. > >>>> clustering will likely want to fail over, mod_cluster will > >>>> notify the > >>>> load balancer that the node is no longer available etc. In some > >>>> cases > >>>> we > >>>> may want to make this configurable to an extent (e.g. Undertow > >>>> could > >>>> be > >>>> configured to allow requests with an existing session, and not > >>>> consider > >>>> itself timed out until all sessions have either timed out or > >>>> been > >>>> invalidated, although this will obviously take a while). > >>>> > >>>> If anyone has any feedback let me know. 
In terms of > >>>> implementation my > >>>> basic plan is to get the core functionality and the Undertow > >>>> implementation into Wildfly, and then work with subsystem > >>>> authors to > >>>> implement subsystem specific functionality once the core is in > >>>> place. > >>>> > >>>> Stuart > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> The > >>>> > >>>> A timeout attribute will also be added to the shutdown command, > >>>> _______________________________________________ > >>>> wildfly-dev mailing list > >>>> wildfly-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev > >>> _______________________________________________ > >>> wildfly-dev mailing list > >>> wildfly-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev > >> _______________________________________________ > >> wildfly-dev mailing list > >> wildfly-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > -- > Michael Musgrove > Transactions Team > e: mmusgrov at redhat.com > t: +44 191 243 0870 > > Registered in England and Wales under Company Registration No. 
> 03798903 > Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson > (US), Michael O'Neill (Ireland) > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From emartins at redhat.com Tue Jun 10 09:07:27 2014 From: emartins at redhat.com (Eduardo Martins) Date: Tue, 10 Jun 2014 14:07:27 +0100 Subject: [wildfly-dev] WildFly 9 Naming Rework (Design+Impl Discussion) Message-ID: <0EB75068-0A57-45EA-8E8D-17C61F5FD5FA@redhat.com>

Over the last year I've been gathering the pain points in our current JNDI and @Resource injection related code. The complaints I've noted (meetings, user forum, mailing list, etc.) are mostly:

- too much code needed to do simple things, such as binding a JNDI entry, and very low code reuse
- Naming related APIs that are not only easy to misuse, i.e. very error prone, but also promote multiple ways to do the same thing
- not as slim or performant as it could and should be

Also, new functionality is needed/desired, most relevantly:

- the ability to use Naming subsystem configuration to add bindings to the scoped EE namespaces java:comp, java:module and java:app
- access to bindings in the scoped EE namespaces even without EE components in context, for instance Persistence Units targeting the default datasource at java:comp/DefaultDatasource

With all of the above in mind, I started reworking Naming/EE for WFLY 9, and that work is ready to be presented and reviewed. I created a Wiki page to document the design and APIs, which should later evolve into the definitive guide for WildFly subsystem developers wrt JNDI and @Resource.
Check it out at https://docs.jboss.org/author/display/WFLY9/WildFly+9+JNDI+Implementation

A fully working PoC, which passes our testsuites, is already available at https://github.com/emmartins/wildfly/tree/wfly9-naming-rework-v3

Possible further design/impl enhancements:

- Is there really a good reason not to merge all the global naming stores into a single "java:" one, and simplify (a lot) the logic that computes which store a jndi name is relative to?

  java:
  java:jboss
  java:jboss/exported
  java:global
  shared java:comp
  shared java:module
  shared java:app

  Since there is now a complete java: namespace always present, we could avoid multiple binds of the same resource unless asked for by spec, or with remote access in mind, e.g. java:jboss/ORB and java:comp/ORB.

- Don't manage binds made from Context#bind() (JNDI API); the module/app binder would be responsible for both binding and unbinding, as expected elsewhere when using the standard JNDI API. Besides simplifying our writable naming store logic, this would make 3rd party libs usable in WildFly without modifications or exposing special APIs. Note that this applies to global namespaces only; the scoped java:app, java:module and java:comp namespaces are read only when accessed through the JNDI API.

- Remove the unofficial(?) policy that defines jndi names as relative to java:, and use only the EE (xml & annotations) standard policy, which defines that all of these are relative to java:comp/env.

-E

PS: the shared PoC is not complete wrt new API usage; it just includes show cases for each feature. -------------- next part -------------- An HTML attachment was scrubbed...
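As an aside, the "compute which store a jndi name is relative to" logic mentioned above reduces to a longest-prefix match over the listed namespaces. The class below is a purely illustrative sketch of that idea, not code from the PoC:

```java
// Illustrative resolver: maps a jndi name to the namespace (store) it
// belongs to. Prefixes are ordered so that the most specific one wins.
class JavaNamespaceResolver {
    private static final String[] PREFIXES = {
        "java:jboss/exported", "java:jboss", "java:global",
        "java:comp", "java:module", "java:app", "java:"
    };

    static String namespaceOf(String jndiName) {
        for (String prefix : PREFIXES) {
            if (jndiName.equals(prefix)
                    || jndiName.startsWith(prefix + "/")
                    || (prefix.endsWith(":") && jndiName.startsWith(prefix))) {
                return prefix;
            }
        }
        // No java: prefix: under the EE standard policy such names are
        // resolved relative to java:comp/env.
        return null;
    }
}
```

With a single merged "java:" store, such a prefix table would be the only routing logic needed, which is the simplification the proposal is after.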
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140610/057ad3b8/attachment.html From sdouglas at redhat.com Tue Jun 10 09:13:04 2014 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 10 Jun 2014 08:13:04 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <31464906.7003.1402404649117.JavaMail.andrig@worklaptop.miller.org> References: <5396377F.80003@redhat.com> <22131731.5850.1402355055126.JavaMail.andrig@worklaptop.miller.org> <786BAC10-7BCE-47B8-BD87-968FE3FA4830@gmail.com> <5396D506.1030402@redhat.com> <31464906.7003.1402404649117.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53970460.1070900@redhat.com> > > If the transaction will never abort, can we force a rollback? This could lead to a never ending "graceful" shutdown. > That is why we have a timeout, once the timeout is done the server will shutdown anyway. We should probably have some kind of post-timeout callback that gets invoked, so the TX subsystem could potentially take some action. I'm not sure what the best action would be, the subsystem specific details will be up to the team that maintains the subsystem. Stuart > Andy > >> Note also that suspending before an in-flight transaction has >> prepared is probably safe since the resource will either: >> >> - rollback the branch if all connections to the db are closed (when >> the system suspends); or >> - rollback the branch if the XAResource timeout (set via the >> XAResource.setTransactionTimeout()) value is reached >> >> [And since it was never prepared we have no log record for it so we >> would not do anything on resume] >> >> Mike >> >>> IIRC the behavior for a tx timeout is a rollback, but we should >>> check that. >>> >>>> On Jun 9, 2014, at 6:50 PM, Stuart Douglas >>>> wrote: >>>> >>>> Something I forgot to mention is that we will need a switch to >>>> turn this off, as there is a small but noticeable cost with >>>> tracking in flight requests. 
>>>> >>>> >>>>> On 9 Jun 2014, at 18:04, Andrig Miller >>>>> wrote: >>>>> >>>>> What I am bringing up is more subsystem specific, but it might be >>>>> valuable to think about. In case of the time out of the >>>>> graceful shutdown, what behavior would we consider correct in >>>>> terms of an inflight transaction? >>>> It waits for the transaction to finish before shutting down. >>>> >>>> Stuart >>>> >>>>> Should it be a forced rollback, so that when the server is >>>>> started back up, the transaction manager will not find in the >>>>> log a transaction to be recovered? >>>>> >>>>> Or, should it be considered the same as a crashed state, where >>>>> transactions should be recoverable, and the recover manager >>>>> wouuld try to recover the transaction? >>>>> >>>>> I would lean towards the first, as this would be considered >>>>> graceful by the administrator, and having a transaction be in a >>>>> state where it would be recovered on a restart, doesn't seem >>>>> graceful to me. >>>>> >>>>> Andy >>>>> >>>>> ----- Original Message ----- >>>>>> From: "Stuart Douglas" >>>>>> To: "Wildfly Dev mailing list" >>>>>> Sent: Monday, June 9, 2014 4:38:55 PM >>>>>> Subject: [wildfly-dev] Design Proposal: Server suspend/resume >>>>>> (AKA Graceful Shutdown) >>>>>> >>>>>> Server suspend and resume is a feature that allows a running >>>>>> server >>>>>> to >>>>>> gracefully finish of all running requests. The most common use >>>>>> case >>>>>> for >>>>>> this is graceful shutdown, where you would like a server to >>>>>> complete >>>>>> all >>>>>> running requests, reject any new ones, and then shut down, >>>>>> however >>>>>> there >>>>>> are also plenty of other valid use cases (e.g. suspend the >>>>>> server, >>>>>> modify a data source or some other config, then resume). 
>>>>>> >>>>>> User View: >>>>>> >>>>>> For the users point of view two new operations will be added to >>>>>> the >>>>>> server: >>>>>> >>>>>> suspend(timeout) >>>>>> resume() >>>>>> >>>>>> A runtime only attribute suspend-state (is this a good name?) >>>>>> will >>>>>> also >>>>>> be added, that can take one of three possible values, RUNNING, >>>>>> SUSPENDING, SUSPENDED. >>>>>> >>>>>> A timeout attribute will also be added to the shutdown >>>>>> operation. If >>>>>> this is present then the server will first be suspended, and the >>>>>> server >>>>>> will not shut down until either the suspend is successful or the >>>>>> timeout >>>>>> occurs. If no timeout parameter is passed to the operation then >>>>>> a >>>>>> normal >>>>>> non-graceful shutdown will take place. >>>>>> >>>>>> In domain mode these operations will be added to both individual >>>>>> server >>>>>> and a complete server group. >>>>>> >>>>>> Implementation Details >>>>>> >>>>>> Suspend/resume operates on entry points to the server. Any >>>>>> request >>>>>> that >>>>>> is currently running must not be affected by the suspend state, >>>>>> however >>>>>> any new request should be rejected. In general subsystems will >>>>>> track >>>>>> the >>>>>> number of outstanding requests, and when this hits zero they are >>>>>> considered suspended. >>>>>> >>>>>> We will introduce the notion of a global SuspendController, that >>>>>> manages >>>>>> the servers suspend state. All subsystems that wish to do a >>>>>> graceful >>>>>> shutdown register callback handlers with this controller. >>>>>> >>>>>> When the suspend() operation is invoked the controller will >>>>>> invoke >>>>>> all >>>>>> these callbacks, letting the subsystem know that the server is >>>>>> suspend, >>>>>> and providing the subsystem with a SuspendContext object that >>>>>> the >>>>>> subsystem can then use to notify the controller that the suspend >>>>>> is >>>>>> complete. 
>>>>>> >>>>>> What the subsystem does when it receives a suspend command, and >>>>>> when >>>>>> it >>>>>> considers itself suspended will vary, but in the common case it >>>>>> will >>>>>> immediatly start rejecting external requests (e.g. Undertow will >>>>>> start >>>>>> responding with a 503 to all new requests). The subsystem will >>>>>> also >>>>>> track the number of outstanding requests, and when this hits >>>>>> zero >>>>>> then >>>>>> the subsystem will notify the controller that is has >>>>>> successfully >>>>>> suspended. >>>>>> Some subsystems will obviously want to do other actions on >>>>>> suspend, >>>>>> e.g. >>>>>> clustering will likely want to fail over, mod_cluster will >>>>>> notify the >>>>>> load balancer that the node is no longer available etc. In some >>>>>> cases >>>>>> we >>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>> could >>>>>> be >>>>>> configured to allow requests with an existing session, and not >>>>>> consider >>>>>> itself timed out until all sessions have either timed out or >>>>>> been >>>>>> invalidated, although this will obviously take a while). >>>>>> >>>>>> If anyone has any feedback let me know. In terms of >>>>>> implementation my >>>>>> basic plan is to get the core functionality and the Undertow >>>>>> implementation into Wildfly, and then work with subsystem >>>>>> authors to >>>>>> implement subsystem specific functionality once the core is in >>>>>> place. 
>>>>>> >>>>>> Stuart >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> The >>>>>> >>>>>> A timeout attribute will also be added to the shutdown command, >>>>>> _______________________________________________ >>>>>> wildfly-dev mailing list >>>>>> wildfly-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> -- >> Michael Musgrove >> Transactions Team >> e: mmusgrov at redhat.com >> t: +44 191 243 0870 >> >> Registered in England and Wales under Company Registration No. >> 03798903 >> Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson >> (US), Michael O'Neill(Ireland) >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> From stuart.w.douglas at gmail.com Tue Jun 10 09:26:42 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 08:26:42 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396377F.80003@redhat.com> References: <5396377F.80003@redhat.com> Message-ID: <53970792.8090903@gmail.com> Note that none of this really has anything to do with services, suspending a server should not involve any services changing state, so that when a server is resumed it can start handling requests immediately. 
Stuart Stuart Douglas wrote: > Server suspend and resume is a feature that allows a running server to > gracefully finish of all running requests. The most common use case for > this is graceful shutdown, where you would like a server to complete all > running requests, reject any new ones, and then shut down, however there > are also plenty of other valid use cases (e.g. suspend the server, > modify a data source or some other config, then resume). > > User View: > > For the users point of view two new operations will be added to the server: > > suspend(timeout) > resume() > > A runtime only attribute suspend-state (is this a good name?) will also > be added, that can take one of three possible values, RUNNING, > SUSPENDING, SUSPENDED. > > A timeout attribute will also be added to the shutdown operation. If > this is present then the server will first be suspended, and the server > will not shut down until either the suspend is successful or the timeout > occurs. If no timeout parameter is passed to the operation then a normal > non-graceful shutdown will take place. > > In domain mode these operations will be added to both individual server > and a complete server group. > > Implementation Details > > Suspend/resume operates on entry points to the server. Any request that > is currently running must not be affected by the suspend state, however > any new request should be rejected. In general subsystems will track the > number of outstanding requests, and when this hits zero they are > considered suspended. > > We will introduce the notion of a global SuspendController, that manages > the servers suspend state. All subsystems that wish to do a graceful > shutdown register callback handlers with this controller. 
> > When the suspend() operation is invoked the controller will invoke all > these callbacks, letting the subsystem know that the server is suspend, > and providing the subsystem with a SuspendContext object that the > subsystem can then use to notify the controller that the suspend is > complete. > > What the subsystem does when it receives a suspend command, and when it > considers itself suspended will vary, but in the common case it will > immediatly start rejecting external requests (e.g. Undertow will start > responding with a 503 to all new requests). The subsystem will also > track the number of outstanding requests, and when this hits zero then > the subsystem will notify the controller that is has successfully > suspended. > Some subsystems will obviously want to do other actions on suspend, e.g. > clustering will likely want to fail over, mod_cluster will notify the > load balancer that the node is no longer available etc. In some cases we > may want to make this configurable to an extent (e.g. Undertow could be > configured to allow requests with an existing session, and not consider > itself timed out until all sessions have either timed out or been > invalidated, although this will obviously take a while). > > If anyone has any feedback let me know. In terms of implementation my > basic plan is to get the core functionality and the Undertow > implementation into Wildfly, and then work with subsystem authors to > implement subsystem specific functionality once the core is in place. 
> > Stuart > > > > > > > > The > > A timeout attribute will also be added to the shutdown command, > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From dandread at redhat.com Tue Jun 10 10:40:52 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Tue, 10 Jun 2014 16:40:52 +0200 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396700B.9030003@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> Message-ID: <539718F4.2090606@redhat.com> Why not extend the states of the existing 'server-state' attribute to: (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html On 10/06/2014 04:40, Stuart Douglas wrote: > > > Scott Marlow wrote: >> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>> Server suspend and resume is a feature that allows a running server to >>> gracefully finish of all running requests. The most common use case for >>> this is graceful shutdown, where you would like a server to complete all >>> running requests, reject any new ones, and then shut down, however there >>> are also plenty of other valid use cases (e.g. suspend the server, >>> modify a data source or some other config, then resume). >>> >>> User View: >>> >>> For the users point of view two new operations will be added to the server: >>> >>> suspend(timeout) >>> resume() >>> >>> A runtime only attribute suspend-state (is this a good name?) will also >>> be added, that can take one of three possible values, RUNNING, >>> SUSPENDING, SUSPENDED. >> >> The SuspendController "state" might be a shorter attribute name and just >> as meaningful. > > This will be in the global server namespace (i.e. from the CLI > :read-attribute(name="suspend-state"). 
>
> I think the name 'state' is just too generic; which kind of state are we talking about?
>
>> When are we in the RUNNING state? Is that simply the pre-state for SUSPENDING?
>
> 99.99% of the time. Basically servers are always running unless they have been explicitly suspended, and then they go from suspending to suspended. Note that if resume is called at any time the server goes to RUNNING again immediately, as when subsystems are notified they should be able to begin accepting requests again straight away.
>
> We also have admin-only mode, which is a kinda similar concept, so we need to make sure we document the differences.
>
>>> A timeout attribute will also be added to the shutdown operation. If this is present then the server will first be suspended, and the server will not shut down until either the suspend is successful or the timeout occurs. If no timeout parameter is passed to the operation then a normal non-graceful shutdown will take place.
>>
>> Will non-graceful shutdown wait for non-daemon threads or terminate immediately (call System.exit())?
>
> It will execute the same way it does today (all services will shut down and then the server will exit).
>
> Stuart
>
>>> In domain mode these operations will be added to both individual servers and complete server groups.
>>>
>>> Implementation Details
>>>
>>> Suspend/resume operates on entry points to the server. Any request that is currently running must not be affected by the suspend state, however any new request should be rejected. In general subsystems will track the number of outstanding requests, and when this hits zero they are considered suspended.
>>>
>>> We will introduce the notion of a global SuspendController, that manages the servers suspend state. All subsystems that wish to do a graceful shutdown register callback handlers with this controller.
>>> >>> When the suspend() operation is invoked the controller will invoke all >>> these callbacks, letting the subsystem know that the server is suspend, >>> and providing the subsystem with a SuspendContext object that the >>> subsystem can then use to notify the controller that the suspend is >>> complete. >>> >>> What the subsystem does when it receives a suspend command, and when it >>> considers itself suspended will vary, but in the common case it will >>> immediatly start rejecting external requests (e.g. Undertow will start >>> responding with a 503 to all new requests). The subsystem will also >>> track the number of outstanding requests, and when this hits zero then >>> the subsystem will notify the controller that is has successfully >>> suspended. >>> Some subsystems will obviously want to do other actions on suspend, e.g. >>> clustering will likely want to fail over, mod_cluster will notify the >>> load balancer that the node is no longer available etc. In some cases we >>> may want to make this configurable to an extent (e.g. Undertow could be >>> configured to allow requests with an existing session, and not consider >>> itself timed out until all sessions have either timed out or been >>> invalidated, although this will obviously take a while). >>> >>> If anyone has any feedback let me know. In terms of implementation my >>> basic plan is to get the core functionality and the Undertow >>> implementation into Wildfly, and then work with subsystem authors to >>> implement subsystem specific functionality once the core is in place. 
>>> >>> Stuart >>> >>> >>> >>> >>> >>> >>> >>> The >>> >>> A timeout attribute will also be added to the shutdown command, >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev >

From sdouglas at redhat.com Tue Jun 10 11:13:32 2014 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 10 Jun 2014 10:13:32 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning Message-ID: <5397209C.9090400@redhat.com>

This design proposal covers the interrelated tasks of splitting up the build, and also creating a build/provisioning system that will make it easy for end users to consume Wildfly. Apologies for the length, but it is a complex topic. The first part explains what we are trying to achieve; the second part covers how we are planning to actually implement it.

The Wildfly code base is over a million lines of Java and has a test suite that generally takes close to two hours to run in its entirety. This makes the project very unwieldy, and the large size and slow test suite make development painful. To deal with this issue we are going to split the Wildfly code base into smaller discrete repositories.
The planned split is as follows:

- Core: just the WF core
- Arquillian: the arquillian adaptors
- Servlet: a WF distribution with just Undertow, and some basic EE functionality such as naming
- EE: all the core EE related functionality, EJBs, messaging etc.
- Clustering: the core clustering functionality
- Console: the management console
- Dist: brings all the pieces together, and allows us to run all tests against a full server

Note that this list is in no way final, and is open to debate. We will most likely want to split up the EE component at some point, possibly along some kind of web profile/full profile type split.

Each of these repos will build a feature pack, which will contain the following:

- Feature specification / description
- Core version requirements (e.g. WF10)
- Dependency info on other features (e.g. RESTEasy X requires CDI 1.1)
- module.xml files for all required modules that are not provided by other features
- References to maven GAVs for jars (possibly a level of indirection here; module.xml may just contain the group and artifact, and the version may be in a version.properties file to allow it to be easily overridden)
- Default configuration snippets; subsystem snippets are packaged in the subsystem jars, and templates that combine them into config files are part of the feature pack
- Misc files (e.g. xsds) with an indication of where on the path to place them

Note that a feature pack is not a complete server: it cannot simply be extracted and run, it first needs to be assembled into a server by the provisioning tool. The feature packs also just contain references to the maven GAV of required jars; they do not have the actual jars in the pack (which should make them very lightweight).

Feature packs will be assembled by the WF build tool, which is just a maven plugin that will replace our existing hacky collection of ant scripts.
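The group/artifact-to-version indirection described above can be sketched simply. This is a hypothetical illustration, not the actual build tool: it assumes a module.xml-style reference carries only "group:artifact", the default version comes from a parsed version.properties, and a per-server override map (the "upgrade an individual component" case) takes precedence.

```java
import java.util.Map;
import java.util.Properties;

// Hypothetical resolver for the "group:artifact" -> version indirection:
// defaults come from version.properties, user overrides win over them.
public class VersionResolver {

    private final Properties defaults;            // parsed version.properties
    private final Map<String, String> overrides;  // overrides from the provisioning descriptor

    public VersionResolver(Properties defaults, Map<String, String> overrides) {
        this.defaults = defaults;
        this.overrides = overrides;
    }

    /** Resolve "group:artifact" to a full "group:artifact:version" GAV. */
    public String resolve(String groupArtifact) {
        String version = overrides.getOrDefault(groupArtifact, defaults.getProperty(groupArtifact));
        if (version == null) {
            throw new IllegalArgumentException("No version known for " + groupArtifact);
        }
        return groupArtifact + ":" + version;
    }
}
```

The design point is that the feature pack ships only the defaults file, so a community user can swap one artifact's version without touching any module.xml.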
Actual server instances will be assembled by the provisioning tool, which will be implemented as a library with several different front ends, including a maven plugin and a CLI (possibly integrated into our existing CLI). In general the provisioning tool will be able to provision three different types of servers:

- A traditional server with all jar files in the distribution
- A server that uses maven coordinates in module.xml files, with all artifacts downloaded as part of the provisioning process
- As above, but with artifacts being lazily loaded as needed (not recommended for production, but I think this may be useful from a developer point of view)

The provisioning tool will work from an XML descriptor that describes the server that is to be built. In general this information will include:

- GAV of the feature packs to use
- Filtering information if not all features from a pack are required (e.g. just give me JAX-RS from the EE pack; in this case the only modules/subsystems installed from the pack will be the modules and subsystems that JAX-RS requires)
- Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), which will allow community users to easily upgrade individual components
- Configuration changes that are required (e.g. some way to add a datasource to the assembled server). The actual form this will take still needs to be decided. Note that this needs to work on both a user level (a user adding a datasource) and a feature pack level (e.g. the JON feature pack adding a required data source)
- GAV of deployments to install in the server. This should allow a server complete with deployments and the necessary config to be assembled and be immediately ready to be put into service.

Note that if you just want a full WF install you should be able to provision it with a single line in the provisioning file, by specifying the dist feature pack.
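The filtering step above ("just give me JAX-RS from the EE pack") amounts to a transitive closure over the pack's module dependency graph. A minimal sketch, with invented module names and an assumed map-based graph representation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of feature filtering: install only the selected
// subsystems plus their transitive module requirements.
public class FeatureFilter {

    private final Map<String, Set<String>> dependencies; // module -> direct deps

    public FeatureFilter(Map<String, Set<String>> dependencies) {
        this.dependencies = dependencies;
    }

    /** The selected modules plus everything they transitively require. */
    public Set<String> requiredModules(Set<String> selected) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>(selected);
        while (!work.isEmpty()) {
            String module = work.pop();
            if (result.add(module)) {
                // First time we see this module: queue its direct dependencies.
                work.addAll(dependencies.getOrDefault(module, Set.of()));
            }
        }
        return result;
    }
}
```

Anything in the pack outside that closure (other subsystems, their modules, their config snippets) would simply not be installed.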
We will still provide our traditional download, which will be built by the provisioning tool as part of our build process.

The provisioning tool will also be able to upgrade servers, which basically consists of provisioning a new modules directory. Rollback is provided by provisioning from an earlier version of the provisioning file. When a server is provisioned the tool will make a backup copy of the file used, so it should always be possible to examine the provisioning file that was used to build the current server config. Note that when an update is performed on an existing server, config will not be updated, unless the update adds an additional config file, in which case the new config file will be generated (however existing config will not be touched).

Note that as a result of this split we will need to do much more frequent releases of the individual feature packs, to allow the most recent code to be integrated into dist.

Implementation Plan

The above changes are obviously a big job, and will not happen overnight. They are also highly likely to conflict with other changes, so maintaining a long-running branch that gets rebased is not a practical option. Instead the plan is to perform the split in incremental changes. The basic steps are listed below, some of which can be performed in parallel.

1) Using the initial implementation of my build plugin (in my wildfly-build-plugin branch) we split up the server along the lines above. The code will all stay in the same repo; however, the plugin will be used to build all the individual pieces, which are then assembled as part of the final build process. Note that the plugin in its current form does both the build and provision steps, and the pack format it produces is far from the final pack format that we will want to use.

2) Split up the test suite into modules based on the features that they test.
This will result in several smaller modules in place of a single large one, which should also be a usability improvement, as individual tests will be faster to run, and run times for all tests in a module should be more manageable.

3) Split the core into its own module.

4) Split everything else into its own module. As part of this step we need to make sure we still have the ability to run all tests against the full server, as well as against the cut-down feature pack version of the server.

5) Focus on the build and provisioning tool, to implement all the features above, and to finalize the WF pack format.

I think that just about covers it. There are still lots of nitty-gritty details that need to be worked out, however I think this covers all the main aspects of the design. We are planning on starting work on this basically immediately, as we want to get this implemented as early in the WF9 cycle as possible.

Stuart

From stuart.w.douglas at gmail.com Tue Jun 10 11:17:52 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 10:17:52 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <539718F4.2090606@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> Message-ID: <539721A0.30307@gmail.com>

They are actually orthogonal, a server can be in both RESTART_REQUIRED and any one of the suspend states.

RESTART_REQUIRED is very much tied to services and the management model, while suspend/resume is a runtime-only thing that should not touch the state of services.
Stuart Dimitris Andreadis wrote: > Why not extend the states of the existing 'server-state' attribute to: > > (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) > > http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html > > On 10/06/2014 04:40, Stuart Douglas wrote: >> >> Scott Marlow wrote: >>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>> Server suspend and resume is a feature that allows a running server to >>>> gracefully finish of all running requests. The most common use case for >>>> this is graceful shutdown, where you would like a server to complete all >>>> running requests, reject any new ones, and then shut down, however there >>>> are also plenty of other valid use cases (e.g. suspend the server, >>>> modify a data source or some other config, then resume). >>>> >>>> User View: >>>> >>>> For the users point of view two new operations will be added to the server: >>>> >>>> suspend(timeout) >>>> resume() >>>> >>>> A runtime only attribute suspend-state (is this a good name?) will also >>>> be added, that can take one of three possible values, RUNNING, >>>> SUSPENDING, SUSPENDED. >>> The SuspendController "state" might be a shorter attribute name and just >>> as meaningful. >> This will be in the global server namespace (i.e. from the CLI >> :read-attribute(name="suspend-state"). >> >> I think the name 'state' is just two generic, which kind of state are we >> talking about? >> >>> When are we in the RUNNING state? Is that simply the pre-state for >>> SUSPENDING? >> 99.99% of the time. Basically servers are always running unless they are >> have been explicitly suspended, and then they go from suspending to >> suspended. Note that if resume is called at any time the server goes to >> RUNNING again immediately, as when subsystems are notified they should >> be able to begin accepting requests again straight away. 
>> >> We also have admin only mode, which is a kinda similar concept, so we >> need to make sure we document the differences. >> >>>> A timeout attribute will also be added to the shutdown operation. If >>>> this is present then the server will first be suspended, and the server >>>> will not shut down until either the suspend is successful or the timeout >>>> occurs. If no timeout parameter is passed to the operation then a normal >>>> non-graceful shutdown will take place. >>> Will non-graceful shutdown wait for non-daemon threads or terminate >>> immediately (call System.exit()). >> It will execute the same way it does today (all services will shut down >> and then the server will exit). >> >> Stuart >> >>>> In domain mode these operations will be added to both individual server >>>> and a complete server group. >>>> >>>> Implementation Details >>>> >>>> Suspend/resume operates on entry points to the server. Any request that >>>> is currently running must not be affected by the suspend state, however >>>> any new request should be rejected. In general subsystems will track the >>>> number of outstanding requests, and when this hits zero they are >>>> considered suspended. >>>> >>>> We will introduce the notion of a global SuspendController, that manages >>>> the servers suspend state. All subsystems that wish to do a graceful >>>> shutdown register callback handlers with this controller. >>>> >>>> When the suspend() operation is invoked the controller will invoke all >>>> these callbacks, letting the subsystem know that the server is suspend, >>>> and providing the subsystem with a SuspendContext object that the >>>> subsystem can then use to notify the controller that the suspend is >>>> complete. >>>> >>>> What the subsystem does when it receives a suspend command, and when it >>>> considers itself suspended will vary, but in the common case it will >>>> immediatly start rejecting external requests (e.g. 
Undertow will start >>>> responding with a 503 to all new requests). The subsystem will also >>>> track the number of outstanding requests, and when this hits zero then >>>> the subsystem will notify the controller that is has successfully >>>> suspended. >>>> Some subsystems will obviously want to do other actions on suspend, e.g. >>>> clustering will likely want to fail over, mod_cluster will notify the >>>> load balancer that the node is no longer available etc. In some cases we >>>> may want to make this configurable to an extent (e.g. Undertow could be >>>> configured to allow requests with an existing session, and not consider >>>> itself timed out until all sessions have either timed out or been >>>> invalidated, although this will obviously take a while). >>>> >>>> If anyone has any feedback let me know. In terms of implementation my >>>> basic plan is to get the core functionality and the Undertow >>>> implementation into Wildfly, and then work with subsystem authors to >>>> implement subsystem specific functionality once the core is in place. 
>>>> >>>> Stuart >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> The >>>> >>>> A timeout attribute will also be added to the shutdown command, >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From dandread at redhat.com Tue Jun 10 11:32:46 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Tue, 10 Jun 2014 17:32:46 +0200 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <539721A0.30307@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> Message-ID: <5397251E.5040805@redhat.com> Isn't RESTART_REQUIRED also orthogonal to RUNNING? On 10/06/2014 17:17, Stuart Douglas wrote: > They are actually orthogonal, a server can be in both RESTART_REQUIRED and any one of the > suspend states. > > RESTART_REQUIRED is very much tied to services and the management model, while > suspend/resume is a runtime only thing that should not touch the state of services. 
> > > Stuart > > Dimitris Andreadis wrote: >> Why not extend the states of the existing 'server-state' attribute to: >> >> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >> >> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >> >> On 10/06/2014 04:40, Stuart Douglas wrote: >>> >>> Scott Marlow wrote: >>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>> Server suspend and resume is a feature that allows a running server to >>>>> gracefully finish of all running requests. The most common use case for >>>>> this is graceful shutdown, where you would like a server to complete all >>>>> running requests, reject any new ones, and then shut down, however there >>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>> modify a data source or some other config, then resume). >>>>> >>>>> User View: >>>>> >>>>> For the users point of view two new operations will be added to the server: >>>>> >>>>> suspend(timeout) >>>>> resume() >>>>> >>>>> A runtime only attribute suspend-state (is this a good name?) will also >>>>> be added, that can take one of three possible values, RUNNING, >>>>> SUSPENDING, SUSPENDED. >>>> The SuspendController "state" might be a shorter attribute name and just >>>> as meaningful. >>> This will be in the global server namespace (i.e. from the CLI >>> :read-attribute(name="suspend-state"). >>> >>> I think the name 'state' is just two generic, which kind of state are we >>> talking about? >>> >>>> When are we in the RUNNING state? Is that simply the pre-state for >>>> SUSPENDING? >>> 99.99% of the time. Basically servers are always running unless they are >>> have been explicitly suspended, and then they go from suspending to >>> suspended. Note that if resume is called at any time the server goes to >>> RUNNING again immediately, as when subsystems are notified they should >>> be able to begin accepting requests again straight away. 
>>> >>> We also have admin only mode, which is a kinda similar concept, so we >>> need to make sure we document the differences. >>> >>>>> A timeout attribute will also be added to the shutdown operation. If >>>>> this is present then the server will first be suspended, and the server >>>>> will not shut down until either the suspend is successful or the timeout >>>>> occurs. If no timeout parameter is passed to the operation then a normal >>>>> non-graceful shutdown will take place. >>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>> immediately (call System.exit()). >>> It will execute the same way it does today (all services will shut down >>> and then the server will exit). >>> >>> Stuart >>> >>>>> In domain mode these operations will be added to both individual server >>>>> and a complete server group. >>>>> >>>>> Implementation Details >>>>> >>>>> Suspend/resume operates on entry points to the server. Any request that >>>>> is currently running must not be affected by the suspend state, however >>>>> any new request should be rejected. In general subsystems will track the >>>>> number of outstanding requests, and when this hits zero they are >>>>> considered suspended. >>>>> >>>>> We will introduce the notion of a global SuspendController, that manages >>>>> the servers suspend state. All subsystems that wish to do a graceful >>>>> shutdown register callback handlers with this controller. >>>>> >>>>> When the suspend() operation is invoked the controller will invoke all >>>>> these callbacks, letting the subsystem know that the server is suspend, >>>>> and providing the subsystem with a SuspendContext object that the >>>>> subsystem can then use to notify the controller that the suspend is >>>>> complete. >>>>> >>>>> What the subsystem does when it receives a suspend command, and when it >>>>> considers itself suspended will vary, but in the common case it will >>>>> immediatly start rejecting external requests (e.g. 
Undertow will start >>>>> responding with a 503 to all new requests). The subsystem will also >>>>> track the number of outstanding requests, and when this hits zero then >>>>> the subsystem will notify the controller that is has successfully >>>>> suspended. >>>>> Some subsystems will obviously want to do other actions on suspend, e.g. >>>>> clustering will likely want to fail over, mod_cluster will notify the >>>>> load balancer that the node is no longer available etc. In some cases we >>>>> may want to make this configurable to an extent (e.g. Undertow could be >>>>> configured to allow requests with an existing session, and not consider >>>>> itself timed out until all sessions have either timed out or been >>>>> invalidated, although this will obviously take a while). >>>>> >>>>> If anyone has any feedback let me know. In terms of implementation my >>>>> basic plan is to get the core functionality and the Undertow >>>>> implementation into Wildfly, and then work with subsystem authors to >>>>> implement subsystem specific functionality once the core is in place. 
>>>>> >>>>> Stuart >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> The >>>>> >>>>> A timeout attribute will also be added to the shutdown command, >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev

From stuart.w.douglas at gmail.com Tue Jun 10 11:40:18 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 10:40:18 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5397251E.5040805@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> Message-ID: <539726E2.2030300@gmail.com>

I don't think so, I think RESTART_REQUIRED means running, but I need to restart to apply management changes (I think that attribute can also be RELOAD_REQUIRED; I think the description may be a bit out of date).

To accurately reflect all the possible states you would need something like:

RUNNING
PAUSING
PAUSED
RESTART_REQUIRED
PAUSING_RESTART_REQUIRED
PAUSED_RESTART_REQUIRED
RELOAD_REQUIRED
PAUSING_RELOAD_REQUIRED
PAUSED_RELOAD_REQUIRED

Which does not seem great, and may introduce compatibility problems for clients that are not expecting these new values.
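The alternative discussed in the thread — keeping the suspend state orthogonal to the reload/restart state, close to Dimitris's boolean suggestion but preserving the RELOAD vs RESTART distinction — can be sketched as two independent attributes. All names here are illustrative, not WildFly API:

```java
// Hypothetical sketch: two orthogonal attributes instead of one
// combinatorial enum. The 3 x 3 product covers the same nine combined
// states listed in the email, but clients reading either attribute
// never encounter unexpected new values.
public class ServerStatus {

    public enum SuspendState { RUNNING, SUSPENDING, SUSPENDED }
    public enum ConfigState { OK, RELOAD_REQUIRED, RESTART_REQUIRED }

    private volatile SuspendState suspendState = SuspendState.RUNNING;
    private volatile ConfigState configState = ConfigState.OK;

    public SuspendState suspendState() { return suspendState; }
    public ConfigState configState() { return configState; }

    // The two dimensions change independently of each other.
    public void suspending()      { suspendState = SuspendState.SUSPENDING; }
    public void suspended()       { suspendState = SuspendState.SUSPENDED; }
    public void resumed()         { suspendState = SuspendState.RUNNING; }
    public void reloadRequired()  { configState = ConfigState.RELOAD_REQUIRED; }
    public void restartRequired() { configState = ConfigState.RESTART_REQUIRED; }
}
```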
Stuart Dimitris Andreadis wrote: > Isn't RESTART_REQUIRED also orthogonal to RUNNING? > > On 10/06/2014 17:17, Stuart Douglas wrote: >> They are actually orthogonal, a server can be in both RESTART_REQUIRED >> and any one of the >> suspend states. >> >> RESTART_REQUIRED is very much tied to services and the management >> model, while >> suspend/resume is a runtime only thing that should not touch the state >> of services. >> >> >> Stuart >> >> Dimitris Andreadis wrote: >>> Why not extend the states of the existing 'server-state' attribute to: >>> >>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>> >>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>> >>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>> >>>> Scott Marlow wrote: >>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>> Server suspend and resume is a feature that allows a running >>>>>> server to >>>>>> gracefully finish of all running requests. The most common use >>>>>> case for >>>>>> this is graceful shutdown, where you would like a server to >>>>>> complete all >>>>>> running requests, reject any new ones, and then shut down, however >>>>>> there >>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>> modify a data source or some other config, then resume). >>>>>> >>>>>> User View: >>>>>> >>>>>> For the users point of view two new operations will be added to >>>>>> the server: >>>>>> >>>>>> suspend(timeout) >>>>>> resume() >>>>>> >>>>>> A runtime only attribute suspend-state (is this a good name?) will >>>>>> also >>>>>> be added, that can take one of three possible values, RUNNING, >>>>>> SUSPENDING, SUSPENDED. >>>>> The SuspendController "state" might be a shorter attribute name and >>>>> just >>>>> as meaningful. >>>> This will be in the global server namespace (i.e. from the CLI >>>> :read-attribute(name="suspend-state"). >>>> >>>> I think the name 'state' is just two generic, which kind of state >>>> are we >>>> talking about? 
>>>> >>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>> SUSPENDING? >>>> 99.99% of the time. Basically servers are always running unless they >>>> are >>>> have been explicitly suspended, and then they go from suspending to >>>> suspended. Note that if resume is called at any time the server goes to >>>> RUNNING again immediately, as when subsystems are notified they should >>>> be able to begin accepting requests again straight away. >>>> >>>> We also have admin only mode, which is a kinda similar concept, so we >>>> need to make sure we document the differences. >>>> >>>>>> A timeout attribute will also be added to the shutdown operation. If >>>>>> this is present then the server will first be suspended, and the >>>>>> server >>>>>> will not shut down until either the suspend is successful or the >>>>>> timeout >>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>> normal >>>>>> non-graceful shutdown will take place. >>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>> immediately (call System.exit()). >>>> It will execute the same way it does today (all services will shut down >>>> and then the server will exit). >>>> >>>> Stuart >>>> >>>>>> In domain mode these operations will be added to both individual >>>>>> server >>>>>> and a complete server group. >>>>>> >>>>>> Implementation Details >>>>>> >>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>> that >>>>>> is currently running must not be affected by the suspend state, >>>>>> however >>>>>> any new request should be rejected. In general subsystems will >>>>>> track the >>>>>> number of outstanding requests, and when this hits zero they are >>>>>> considered suspended. >>>>>> >>>>>> We will introduce the notion of a global SuspendController, that >>>>>> manages >>>>>> the servers suspend state. All subsystems that wish to do a graceful >>>>>> shutdown register callback handlers with this controller. 
>>>>>> >>>>>> When the suspend() operation is invoked the controller will invoke >>>>>> all >>>>>> these callbacks, letting the subsystem know that the server is >>>>>> suspend, >>>>>> and providing the subsystem with a SuspendContext object that the >>>>>> subsystem can then use to notify the controller that the suspend is >>>>>> complete. >>>>>> >>>>>> What the subsystem does when it receives a suspend command, and >>>>>> when it >>>>>> considers itself suspended will vary, but in the common case it will >>>>>> immediatly start rejecting external requests (e.g. Undertow will >>>>>> start >>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>> track the number of outstanding requests, and when this hits zero >>>>>> then >>>>>> the subsystem will notify the controller that is has successfully >>>>>> suspended. >>>>>> Some subsystems will obviously want to do other actions on >>>>>> suspend, e.g. >>>>>> clustering will likely want to fail over, mod_cluster will notify the >>>>>> load balancer that the node is no longer available etc. In some >>>>>> cases we >>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>> could be >>>>>> configured to allow requests with an existing session, and not >>>>>> consider >>>>>> itself timed out until all sessions have either timed out or been >>>>>> invalidated, although this will obviously take a while). >>>>>> >>>>>> If anyone has any feedback let me know. In terms of implementation my >>>>>> basic plan is to get the core functionality and the Undertow >>>>>> implementation into Wildfly, and then work with subsystem authors to >>>>>> implement subsystem specific functionality once the core is in place. 
>>>>>> >>>>>> Stuart >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> The >>>>>> >>>>>> A timeout attribute will also be added to the shutdown command, >>>>>> _______________________________________________ >>>>>> wildfly-dev mailing list >>>>>> wildfly-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>> >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev From dandread at redhat.com Tue Jun 10 11:47:43 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Tue, 10 Jun 2014 17:47:43 +0200 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <539726E2.2030300@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> Message-ID: <5397289F.8030404@redhat.com> It seems to me RESTART_REQUIRED (or RELOAD_REQUIRED) should be a boolean on its own to simplify the state diagram. On 10/06/2014 17:40, Stuart Douglas wrote: > I don't think so, I think RESTART_REQUIRED means running, but I need to restart to apply > management changes (I think that attribute can also be RELOAD_REQUIRED, I think the > description may be a bit out of date). 
> > To accurately reflect all the possible states you would need something like: > > RUNNING > PAUSING, > PAUSED, > RESTART_REQUIRED > PAUSING_RESTART_REQUIRED > PAUSED_RESTART_REQUIRED > RELOAD_REQUIRED > PAUSING_RELOAD_REQUIRED > PAUSED_RELOAD_REQUIRED > > Which does not seem great, and may introduce compatibility problems for clients that are not > expecting these new values. > > Stuart > > > > Dimitris Andreadis wrote: >> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >> >> On 10/06/2014 17:17, Stuart Douglas wrote: >>> They are actually orthogonal, a server can be in both RESTART_REQUIRED >>> and any one of the >>> suspend states. >>> >>> RESTART_REQUIRED is very much tied to services and the management >>> model, while >>> suspend/resume is a runtime only thing that should not touch the state >>> of services. >>> >>> >>> Stuart >>> >>> Dimitris Andreadis wrote: >>>> Why not extend the states of the existing 'server-state' attribute to: >>>> >>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>>> >>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>> >>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>> >>>>> Scott Marlow wrote: >>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>> Server suspend and resume is a feature that allows a running >>>>>>> server to >>>>>>> gracefully finish of all running requests. The most common use >>>>>>> case for >>>>>>> this is graceful shutdown, where you would like a server to >>>>>>> complete all >>>>>>> running requests, reject any new ones, and then shut down, however >>>>>>> there >>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>> modify a data source or some other config, then resume). >>>>>>> >>>>>>> User View: >>>>>>> >>>>>>> For the users point of view two new operations will be added to >>>>>>> the server: >>>>>>> >>>>>>> suspend(timeout) >>>>>>> resume() >>>>>>> >>>>>>> A runtime only attribute suspend-state (is this a good name?) 
will >>>>>>> also >>>>>>> be added, that can take one of three possible values, RUNNING, >>>>>>> SUSPENDING, SUSPENDED. >>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>> just >>>>>> as meaningful. >>>>> This will be in the global server namespace (i.e. from the CLI >>>>> :read-attribute(name="suspend-state"). >>>>> >>>>> I think the name 'state' is just two generic, which kind of state >>>>> are we >>>>> talking about? >>>>> >>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>> SUSPENDING? >>>>> 99.99% of the time. Basically servers are always running unless they >>>>> are >>>>> have been explicitly suspended, and then they go from suspending to >>>>> suspended. Note that if resume is called at any time the server goes to >>>>> RUNNING again immediately, as when subsystems are notified they should >>>>> be able to begin accepting requests again straight away. >>>>> >>>>> We also have admin only mode, which is a kinda similar concept, so we >>>>> need to make sure we document the differences. >>>>> >>>>>>> A timeout attribute will also be added to the shutdown operation. If >>>>>>> this is present then the server will first be suspended, and the >>>>>>> server >>>>>>> will not shut down until either the suspend is successful or the >>>>>>> timeout >>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>> normal >>>>>>> non-graceful shutdown will take place. >>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>> immediately (call System.exit()). >>>>> It will execute the same way it does today (all services will shut down >>>>> and then the server will exit). >>>>> >>>>> Stuart >>>>> >>>>>>> In domain mode these operations will be added to both individual >>>>>>> server >>>>>>> and a complete server group. >>>>>>> >>>>>>> Implementation Details >>>>>>> >>>>>>> Suspend/resume operates on entry points to the server. 
Any request >>>>>>> that >>>>>>> is currently running must not be affected by the suspend state, >>>>>>> however >>>>>>> any new request should be rejected. In general subsystems will >>>>>>> track the >>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>> considered suspended. >>>>>>> >>>>>>> We will introduce the notion of a global SuspendController, that >>>>>>> manages >>>>>>> the servers suspend state. All subsystems that wish to do a graceful >>>>>>> shutdown register callback handlers with this controller. >>>>>>> >>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>> all >>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>> suspend, >>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>> complete. >>>>>>> >>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>> when it >>>>>>> considers itself suspended will vary, but in the common case it will >>>>>>> immediatly start rejecting external requests (e.g. Undertow will >>>>>>> start >>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>> then >>>>>>> the subsystem will notify the controller that is has successfully >>>>>>> suspended. >>>>>>> Some subsystems will obviously want to do other actions on >>>>>>> suspend, e.g. >>>>>>> clustering will likely want to fail over, mod_cluster will notify the >>>>>>> load balancer that the node is no longer available etc. In some >>>>>>> cases we >>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>> could be >>>>>>> configured to allow requests with an existing session, and not >>>>>>> consider >>>>>>> itself timed out until all sessions have either timed out or been >>>>>>> invalidated, although this will obviously take a while). 
>>>>>>> >>>>>>> If anyone has any feedback let me know. In terms of implementation my >>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>> implementation into Wildfly, and then work with subsystem authors to >>>>>>> implement subsystem specific functionality once the core is in place. >>>>>>> >>>>>>> Stuart >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> The >>>>>>> >>>>>>> A timeout attribute will also be added to the shutdown command, >>>>>>> _______________________________________________ >>>>>>> wildfly-dev mailing list >>>>>>> wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>> >>>>>> _______________________________________________ >>>>>> wildfly-dev mailing list >>>>>> wildfly-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev From stuart.w.douglas at gmail.com Tue Jun 10 11:50:08 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 10:50:08 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5397289F.8030404@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> <5397289F.8030404@redhat.com> Message-ID: <53972930.9030809@gmail.com> We can't really change that now, as it is part of our existing API. 
Stuart Dimitris Andreadis wrote: > It seems to me RESTART_REQUIRED (or RELOAD_REQUIRED) should be a boolean > on its own to simplify the state diagram. > > On 10/06/2014 17:40, Stuart Douglas wrote: >> I don't think so, I think RESTART_REQUIRED means running, but I need >> to restart to apply >> management changes (I think that attribute can also be >> RELOAD_REQUIRED, I think the >> description may be a bit out of date). >> >> To accurately reflect all the possible states you would need something >> like: >> >> RUNNING >> PAUSING, >> PAUSED, >> RESTART_REQUIRED >> PAUSING_RESTART_REQUIRED >> PAUSED_RESTART_REQUIRED >> RELOAD_REQUIRED >> PAUSING_RELOAD_REQUIRED >> PAUSED_RELOAD_REQUIRED >> >> Which does not seem great, and may introduce compatibility problems >> for clients that are not >> expecting these new values. >> >> Stuart >> >> >> >> Dimitris Andreadis wrote: >>> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >>> >>> On 10/06/2014 17:17, Stuart Douglas wrote: >>>> They are actually orthogonal, a server can be in both RESTART_REQUIRED >>>> and any one of the >>>> suspend states. >>>> >>>> RESTART_REQUIRED is very much tied to services and the management >>>> model, while >>>> suspend/resume is a runtime only thing that should not touch the state >>>> of services. >>>> >>>> >>>> Stuart >>>> >>>> Dimitris Andreadis wrote: >>>>> Why not extend the states of the existing 'server-state' attribute to: >>>>> >>>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>>>> >>>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>>> >>>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>>> >>>>>> Scott Marlow wrote: >>>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>>> Server suspend and resume is a feature that allows a running >>>>>>>> server to >>>>>>>> gracefully finish of all running requests. 
The most common use >>>>>>>> case for >>>>>>>> this is graceful shutdown, where you would like a server to >>>>>>>> complete all >>>>>>>> running requests, reject any new ones, and then shut down, however >>>>>>>> there >>>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>>> modify a data source or some other config, then resume). >>>>>>>> >>>>>>>> User View: >>>>>>>> >>>>>>>> For the users point of view two new operations will be added to >>>>>>>> the server: >>>>>>>> >>>>>>>> suspend(timeout) >>>>>>>> resume() >>>>>>>> >>>>>>>> A runtime only attribute suspend-state (is this a good name?) will >>>>>>>> also >>>>>>>> be added, that can take one of three possible values, RUNNING, >>>>>>>> SUSPENDING, SUSPENDED. >>>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>>> just >>>>>>> as meaningful. >>>>>> This will be in the global server namespace (i.e. from the CLI >>>>>> :read-attribute(name="suspend-state"). >>>>>> >>>>>> I think the name 'state' is just two generic, which kind of state >>>>>> are we >>>>>> talking about? >>>>>> >>>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>>> SUSPENDING? >>>>>> 99.99% of the time. Basically servers are always running unless they >>>>>> are >>>>>> have been explicitly suspended, and then they go from suspending to >>>>>> suspended. Note that if resume is called at any time the server >>>>>> goes to >>>>>> RUNNING again immediately, as when subsystems are notified they >>>>>> should >>>>>> be able to begin accepting requests again straight away. >>>>>> >>>>>> We also have admin only mode, which is a kinda similar concept, so we >>>>>> need to make sure we document the differences. >>>>>> >>>>>>>> A timeout attribute will also be added to the shutdown >>>>>>>> operation. 
If >>>>>>>> this is present then the server will first be suspended, and the >>>>>>>> server >>>>>>>> will not shut down until either the suspend is successful or the >>>>>>>> timeout >>>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>>> normal >>>>>>>> non-graceful shutdown will take place. >>>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>>> immediately (call System.exit()). >>>>>> It will execute the same way it does today (all services will shut >>>>>> down >>>>>> and then the server will exit). >>>>>> >>>>>> Stuart >>>>>> >>>>>>>> In domain mode these operations will be added to both individual >>>>>>>> server >>>>>>>> and a complete server group. >>>>>>>> >>>>>>>> Implementation Details >>>>>>>> >>>>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>>>> that >>>>>>>> is currently running must not be affected by the suspend state, >>>>>>>> however >>>>>>>> any new request should be rejected. In general subsystems will >>>>>>>> track the >>>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>>> considered suspended. >>>>>>>> >>>>>>>> We will introduce the notion of a global SuspendController, that >>>>>>>> manages >>>>>>>> the servers suspend state. All subsystems that wish to do a >>>>>>>> graceful >>>>>>>> shutdown register callback handlers with this controller. >>>>>>>> >>>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>>> all >>>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>>> suspend, >>>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>>> complete. 
>>>>>>>> >>>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>>> when it >>>>>>>> considers itself suspended will vary, but in the common case it >>>>>>>> will >>>>>>>> immediatly start rejecting external requests (e.g. Undertow will >>>>>>>> start >>>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>>> then >>>>>>>> the subsystem will notify the controller that is has successfully >>>>>>>> suspended. >>>>>>>> Some subsystems will obviously want to do other actions on >>>>>>>> suspend, e.g. >>>>>>>> clustering will likely want to fail over, mod_cluster will >>>>>>>> notify the >>>>>>>> load balancer that the node is no longer available etc. In some >>>>>>>> cases we >>>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>>> could be >>>>>>>> configured to allow requests with an existing session, and not >>>>>>>> consider >>>>>>>> itself timed out until all sessions have either timed out or been >>>>>>>> invalidated, although this will obviously take a while). >>>>>>>> >>>>>>>> If anyone has any feedback let me know. In terms of >>>>>>>> implementation my >>>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>>> implementation into Wildfly, and then work with subsystem >>>>>>>> authors to >>>>>>>> implement subsystem specific functionality once the core is in >>>>>>>> place. 
>>>>>>>> >>>>>>>> Stuart >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> The >>>>>>>> >>>>>>>> A timeout attribute will also be added to the shutdown command, >>>>>>>> _______________________________________________ >>>>>>>> wildfly-dev mailing list >>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> wildfly-dev mailing list >>>>>>> wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>> _______________________________________________ >>>>>> wildfly-dev mailing list >>>>>> wildfly-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>> >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev From dandread at redhat.com Tue Jun 10 12:21:01 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Tue, 10 Jun 2014 18:21:01 +0200 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53972930.9030809@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> <5397289F.8030404@redhat.com> <53972930.9030809@gmail.com> Message-ID: <5397306D.4060705@redhat.com> Sure. Which justifies trying to avoid those issues in the first place ;) On 10/06/2014 17:50, Stuart Douglas wrote: > We can't really change that now, as it is part of our existing API. > > Stuart > > Dimitris Andreadis wrote: >> It seems to me RESTART_REQUIRED (or RELOAD_REQUIRED) should be a boolean >> on its own to simplify the state diagram. 
>> >> On 10/06/2014 17:40, Stuart Douglas wrote: >>> I don't think so, I think RESTART_REQUIRED means running, but I need >>> to restart to apply >>> management changes (I think that attribute can also be >>> RELOAD_REQUIRED, I think the >>> description may be a bit out of date). >>> >>> To accurately reflect all the possible states you would need something >>> like: >>> >>> RUNNING >>> PAUSING, >>> PAUSED, >>> RESTART_REQUIRED >>> PAUSING_RESTART_REQUIRED >>> PAUSED_RESTART_REQUIRED >>> RELOAD_REQUIRED >>> PAUSING_RELOAD_REQUIRED >>> PAUSED_RELOAD_REQUIRED >>> >>> Which does not seem great, and may introduce compatibility problems >>> for clients that are not >>> expecting these new values. >>> >>> Stuart >>> >>> >>> >>> Dimitris Andreadis wrote: >>>> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >>>> >>>> On 10/06/2014 17:17, Stuart Douglas wrote: >>>>> They are actually orthogonal, a server can be in both RESTART_REQUIRED >>>>> and any one of the >>>>> suspend states. >>>>> >>>>> RESTART_REQUIRED is very much tied to services and the management >>>>> model, while >>>>> suspend/resume is a runtime only thing that should not touch the state >>>>> of services. >>>>> >>>>> >>>>> Stuart >>>>> >>>>> Dimitris Andreadis wrote: >>>>>> Why not extend the states of the existing 'server-state' attribute to: >>>>>> >>>>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>>>>> >>>>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>>>> >>>>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>>>> >>>>>>> Scott Marlow wrote: >>>>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>>>> Server suspend and resume is a feature that allows a running >>>>>>>>> server to >>>>>>>>> gracefully finish of all running requests. 
The most common use >>>>>>>>> case for >>>>>>>>> this is graceful shutdown, where you would like a server to >>>>>>>>> complete all >>>>>>>>> running requests, reject any new ones, and then shut down, however >>>>>>>>> there >>>>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>>>> modify a data source or some other config, then resume). >>>>>>>>> >>>>>>>>> User View: >>>>>>>>> >>>>>>>>> For the users point of view two new operations will be added to >>>>>>>>> the server: >>>>>>>>> >>>>>>>>> suspend(timeout) >>>>>>>>> resume() >>>>>>>>> >>>>>>>>> A runtime only attribute suspend-state (is this a good name?) will >>>>>>>>> also >>>>>>>>> be added, that can take one of three possible values, RUNNING, >>>>>>>>> SUSPENDING, SUSPENDED. >>>>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>>>> just >>>>>>>> as meaningful. >>>>>>> This will be in the global server namespace (i.e. from the CLI >>>>>>> :read-attribute(name="suspend-state"). >>>>>>> >>>>>>> I think the name 'state' is just two generic, which kind of state >>>>>>> are we >>>>>>> talking about? >>>>>>> >>>>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>>>> SUSPENDING? >>>>>>> 99.99% of the time. Basically servers are always running unless they >>>>>>> are >>>>>>> have been explicitly suspended, and then they go from suspending to >>>>>>> suspended. Note that if resume is called at any time the server >>>>>>> goes to >>>>>>> RUNNING again immediately, as when subsystems are notified they >>>>>>> should >>>>>>> be able to begin accepting requests again straight away. >>>>>>> >>>>>>> We also have admin only mode, which is a kinda similar concept, so we >>>>>>> need to make sure we document the differences. >>>>>>> >>>>>>>>> A timeout attribute will also be added to the shutdown >>>>>>>>> operation. 
If >>>>>>>>> this is present then the server will first be suspended, and the >>>>>>>>> server >>>>>>>>> will not shut down until either the suspend is successful or the >>>>>>>>> timeout >>>>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>>>> normal >>>>>>>>> non-graceful shutdown will take place. >>>>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>>>> immediately (call System.exit()). >>>>>>> It will execute the same way it does today (all services will shut >>>>>>> down >>>>>>> and then the server will exit). >>>>>>> >>>>>>> Stuart >>>>>>> >>>>>>>>> In domain mode these operations will be added to both individual >>>>>>>>> server >>>>>>>>> and a complete server group. >>>>>>>>> >>>>>>>>> Implementation Details >>>>>>>>> >>>>>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>>>>> that >>>>>>>>> is currently running must not be affected by the suspend state, >>>>>>>>> however >>>>>>>>> any new request should be rejected. In general subsystems will >>>>>>>>> track the >>>>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>>>> considered suspended. >>>>>>>>> >>>>>>>>> We will introduce the notion of a global SuspendController, that >>>>>>>>> manages >>>>>>>>> the servers suspend state. All subsystems that wish to do a >>>>>>>>> graceful >>>>>>>>> shutdown register callback handlers with this controller. >>>>>>>>> >>>>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>>>> all >>>>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>>>> suspend, >>>>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>>>> complete. 
>>>>>>>>> >>>>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>>>> when it >>>>>>>>> considers itself suspended will vary, but in the common case it >>>>>>>>> will >>>>>>>>> immediatly start rejecting external requests (e.g. Undertow will >>>>>>>>> start >>>>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>>>> then >>>>>>>>> the subsystem will notify the controller that is has successfully >>>>>>>>> suspended. >>>>>>>>> Some subsystems will obviously want to do other actions on >>>>>>>>> suspend, e.g. >>>>>>>>> clustering will likely want to fail over, mod_cluster will >>>>>>>>> notify the >>>>>>>>> load balancer that the node is no longer available etc. In some >>>>>>>>> cases we >>>>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>>>> could be >>>>>>>>> configured to allow requests with an existing session, and not >>>>>>>>> consider >>>>>>>>> itself timed out until all sessions have either timed out or been >>>>>>>>> invalidated, although this will obviously take a while). >>>>>>>>> >>>>>>>>> If anyone has any feedback let me know. In terms of >>>>>>>>> implementation my >>>>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>>>> implementation into Wildfly, and then work with subsystem >>>>>>>>> authors to >>>>>>>>> implement subsystem specific functionality once the core is in >>>>>>>>> place. 
>>>>>>>>> >>>>>>>>> Stuart >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> The >>>>>>>>> >>>>>>>>> A timeout attribute will also be added to the shutdown command, >>>>>>>>> _______________________________________________ >>>>>>>>> wildfly-dev mailing list >>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> wildfly-dev mailing list >>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>> _______________________________________________ >>>>>>> wildfly-dev mailing list >>>>>>> wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>> >>>>>> _______________________________________________ >>>>>> wildfly-dev mailing list >>>>>> wildfly-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev From jperkins at redhat.com Tue Jun 10 12:42:58 2014 From: jperkins at redhat.com (James R. Perkins) Date: Tue, 10 Jun 2014 09:42:58 -0700 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5397209C.9090400@redhat.com> References: <5397209C.9090400@redhat.com> Message-ID: <53973592.3050700@redhat.com> On 06/10/2014 08:13 AM, Stuart Douglas wrote: > This design proposal covers the interrelated tasks of splitting up the > build, and also creating a build/provisioning system that will make it > easy for end users to consume Wildfly. Apologies for the length, but it > is a complex topic. The first part explains what we are trying to > achieve, the second part covers how we are planning to actually > implement it. > > The Wildfly code base is over a million lines of java and has a test > suite that generally takes close to two hours to run in its entirety. > This makes the project very unwieldy, and the large size and slow test > suite makes development painful. 
> > To deal with this issue we are going to split the Wildfly code base into > smaller discrete repositories. The planned split is as follows: > > - Core: just the WF core > - Arquillian: the arquillian adaptors > - Servlet: a WF distribution with just Undertow, and some basic EE > functionality such as naming > - EE: All the core EE related functionality, EJB's, messaging etc > - Clustering: The core clustering functionality > - Console: The management console > - Dist: brings all the pieces together, and allows us to run all tests > against a full server > > Note that this list is in no way final, and is open to debate. We will > most likely want to split up the EE component at some point, possibly > along some kind of web profile/full profile type split. > > Each of these repos will build a feature pack, which will contain the > following: > > - Feature specification / description > - Core version requirements (e.g. WF10) > - Dependency info on other features (e.g. RestEASY X requires CDI 1.1) > - module.xml files for all required modules that are not provided by > other features > - References to maven GAV's for jars (possibly a level of indirection > here, module.xml may just contain the group and artifact, and the > version may be in a version.properties file to allow it to be easily > overridden) > - Default configuration snippet, subsystem snippets are packaged in the > subsystem jars, templates that combine them into config files are part > of the feature pack. > - Misc files (e.g. xsds) with indication of where on path to place them > > Note that a feature pack is not a complete server, it cannot simply be > extracted and run, it first needs to be assembled into a server by the > provisioning tool. The feature packs also just contain references to the > maven GAV of required jars, they do not have the actual jars in the pack > (which should make them very lightweight). 
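To make the pack contents listed above concrete, a feature pack descriptor might look something like the following. This is purely illustrative — every element name here is invented for the sketch, and the proposal explicitly leaves the final pack format open:

```xml
<feature-pack name="ee" version="9.0.0">
  <!-- Core version requirement -->
  <requires core="WF10"/>
  <!-- Dependency info on other features -->
  <dependencies>
    <feature name="cdi" version="1.1"/>
  </dependencies>
  <!-- module.xml files reference maven GAVs rather than bundling jars;
       versions could live in a version.properties for easy overriding -->
  <modules>
    <module name="org.jboss.resteasy.resteasy-jaxrs"
            artifact="org.jboss.resteasy:resteasy-jaxrs"/>
  </modules>
  <!-- Subsystem config snippets ship in the subsystem jars; the templates
       that combine them into config files are part of the pack -->
  <config-templates>
    <template path="configuration/standalone.xml"/>
  </config-templates>
  <!-- Misc files (e.g. xsds) with an indication of where to place them -->
  <misc-files>
    <file src="schema/jboss-ee.xsd" target="docs/schema"/>
  </misc-files>
</feature-pack>
```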
> > Feature packs will be assembled by the WF build tool, which is just a > maven plugin that will replace our existing hacky collection of ant > scripts. > > Actual server instances will be assembled by the provisioning tool, > which will be implemented as a library with several different front > ends, including a maven plugin and a CLI (possibly integrated into our > existing CLI). In general the provisioning tool will be able to > provision three different types of servers: > > - A traditional server with all jar files in the distribution > - A server that uses maven coordinates in module.xml files, with all > artifacts downloaded as part of the provisioning process > - As above, but with artifacts being lazily loaded as needed (not > recommended for production, but I think this may be useful from a > developer point of view) > > The provisioning tool will work from an XML descriptor that describes > the server that is to be built. In general this information will include: > > - GAV of the feature packs to use > - Filtering information if not all features from a pack are required > (e.g. just give me JAX-RS from the EE pack. In this case the only > modules/subsystems installed from the pack will be modules and subsystems > that JAX-RS requires). > - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), > which will allow community users to easily upgrade individual components. > - Configuration changes that are required (e.g. some way to add a > datasource to the assembled server). The actual form this will take > still needs to be decided. Note that this needs to work on both a user > level (a user adding a datasource) and a feature pack level (e.g. the > JON feature pack adding a required data source). > - GAV of deployments to install in the server. This should allow a > server complete with deployments and the necessary config to be > assembled and be immediately ready to be put into service. 
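Similarly, the XML descriptor the provisioning tool consumes might carry the items just listed — feature pack GAVs, filters, version overrides, and deployments. Again, the element names are hypothetical sketches of the proposal, not a defined format:

```xml
<server-provisioning>
  <!-- GAV of the feature packs to use -->
  <feature-pack groupId="org.wildfly" artifactId="wildfly-ee" version="9.0.0.Final">
    <!-- Filtering: just JAX-RS plus whatever it requires -->
    <include feature="jaxrs"/>
  </feature-pack>
  <!-- Version overrides for individual components -->
  <version-overrides>
    <artifact groupId="org.jboss.resteasy" artifactId="resteasy-jaxrs"
              version="3.0.10.Final"/>
  </version-overrides>
  <!-- Deployments to install, so the server comes up ready for service
       (com.example:shop-app is an invented placeholder) -->
  <deployments>
    <deployment groupId="com.example" artifactId="shop-app" version="1.2"/>
  </deployments>
</server-provisioning>
```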
> > Note that if you just want a full WF install you should be able to > provision it with a single line in the provisioning file, by specifying > the dist feature pack. We will still provide our traditional download, > which will be built by the provisioning tool as part of our build process. > > The provisioning tool will also be able to upgrade servers, which > basically consists of provisioning a new modules directory. Rollback is > provided by provisioning from an earlier version of the provisioning file. > When a server is provisioned the tool will make a backup copy of the > file used, so it should always be possible to examine the provisioning > file that was used to build the current server config. > > Note that when an update is performed on an existing server, config will > not be updated, unless the update adds an additional config file, in > which case the new config file will be generated (however existing > config will not be touched). > > Note that as a result of this split we will need to do much more > frequent releases of the individual feature packs, to allow the most > recent code to be integrated into dist. > > Implementation Plan > > The above changes are obviously a big job, and will not happen > overnight. They are also highly likely to conflict with other changes, > so maintaining a long running branch that gets rebased is not a > practical option. Instead the plan is to perform the split in > incremental changes. The basic steps are listed below, some of which can > be performed in parallel. > > 1) Using the initial implementation of my build plugin (in my > wildfly-build-plugin branch) we split up the server along the lines > above. The code will all stay in the same repo, however the plugin will > be used to build all the individual pieces, which are then assembled as > part of the final build process. 
Note that the plugin in its current > form does both the build and provision step, and the pack format it > produces is far from the final pack format that we will want to use. I think the plugin should be a separate project so it's not tied to the same release cycle. There's already a groupId of org.wildfly.plugins where the wildfly-maven-plugin exists. Maybe it should use the same groupId. This would also allow other projects to use the plugin sooner and start to assemble their own runtime. It might help determine some issues quicker as well. > > 2) Split up the test suite into modules based on the features that they > test. This will result in several smaller modules in place of a single > large one, which should also be a usability improvement as individual > tests will be faster to run, and run times for all tests in a module > should be more manageable. > > 3) Split the core into its own module. > > 4) Split everything else into its own module. As part of this step we > need to make sure we still have the ability to run all tests against the > full server, as well as against the cut down feature pack version of the > server. > > 5) Focus on the build and provisioning tool, to implement all the > features above, and to finalize the WF pack format. > > I think that just about covers it. There are still lots of nitty gritty > details that need to be worked out, however I think this covers all the > main aspects of the design. We are planning on starting work on this > basically immediately, as we want to get this implemented as early in > the WF9 cycle as possible. > > Stuart Overall I think this plan is great. I personally can't wait until we at least have a true core server to use. FWIW IBM uses the term "feature pack" for WebSphere Application Server extras. Though they tend to be huge and not easy to apply.
> > > > > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- James R. Perkins JBoss by Red Hat From stuart.w.douglas at gmail.com Tue Jun 10 12:49:51 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 10 Jun 2014 11:49:51 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <53973592.3050700@redhat.com> References: <5397209C.9090400@redhat.com> <53973592.3050700@redhat.com> Message-ID: <5397372F.2020205@gmail.com> > I think the plugin should be a separate project to it's not tied to the > same release cycle. There's already a groupId of org.wildfly.plugins > where the wildfly-maven-plugin exists. Maybe it should use the same groupId. > > This would also allow other projects to use the plugin sooner and start > to assemble their own runtime. It might help determine some issues > quicker as well. I agree, although initially I want to keep it in the WF code base, just so we do not need to do a new release every day while it is evolving rapidly. > Overall I think this plan is great. I personally can't wait until we at > least have a true core server to use. > > FWIW IBM uses the term "feature pack" for WebSphere Application Server > extras. Though they tend to be huge and not easy to apply. If anyone has any better names I would love to hear them. Stuart >> >> >> >> >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > From jgreene at redhat.com Tue Jun 10 12:57:44 2014 From: jgreene at redhat.com (Jason T. 
Greene) Date: Tue, 10 Jun 2014 12:57:44 -0400 (EDT) Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <539726E2.2030300@gmail.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> Message-ID: You could potentially eliminate the PAUSING_*_REQUIRED states, since you typically wouldn't want to recommend restart in them. > On Jun 10, 2014, at 10:41 AM, Stuart Douglas wrote: > > I don't think so, I think RESTART_REQUIRED means running, but I need to > restart to apply management changes (I think that attribute can also be > RELOAD_REQUIRED; I think the description may be a bit out of date). > > To accurately reflect all the possible states you would need something like: > > RUNNING, > PAUSING, > PAUSED, > RESTART_REQUIRED, > PAUSING_RESTART_REQUIRED, > PAUSED_RESTART_REQUIRED, > RELOAD_REQUIRED, > PAUSING_RELOAD_REQUIRED, > PAUSED_RELOAD_REQUIRED > > Which does not seem great, and may introduce compatibility problems for > clients that are not expecting these new values. > > Stuart > > > > Dimitris Andreadis wrote: >> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >> >>> On 10/06/2014 17:17, Stuart Douglas wrote: >>> They are actually orthogonal, a server can be in both RESTART_REQUIRED >>> and any one of the >>> suspend states. >>> >>> RESTART_REQUIRED is very much tied to services and the management >>> model, while >>> suspend/resume is a runtime-only thing that should not touch the state >>> of services.
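The orthogonality Stuart describes can be sketched as two independent attributes instead of a cross-product of enum constants. This is only an illustration; the names below are invented and are not the actual WildFly management API.

```java
// Sketch: model the suspend state and the reload/restart requirement as two
// separate attributes, so combinations like PAUSING_RESTART_REQUIRED never
// need to exist as dedicated constants. Names are hypothetical.
public class ServerStatus {
    public enum SuspendState { RUNNING, SUSPENDING, SUSPENDED }
    public enum ProcessState { RUNNING, RELOAD_REQUIRED, RESTART_REQUIRED }

    private SuspendState suspendState = SuspendState.RUNNING;
    private ProcessState processState = ProcessState.RUNNING;

    public void suspendRequested() { suspendState = SuspendState.SUSPENDING; }
    public void suspended()        { suspendState = SuspendState.SUSPENDED; }
    public void resumed()          { suspendState = SuspendState.RUNNING; }
    public void restartRequired()  { processState = ProcessState.RESTART_REQUIRED; }

    public SuspendState suspendState() { return suspendState; }
    public ProcessState processState() { return processState; }
}
```

Any of the three suspend states can then coexist with any of the three process states without new compatibility-breaking enum values.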
>>> >>> >>> Stuart >>> >>> Dimitris Andreadis wrote: >>>> Why not extend the states of the existing 'server-state' attribute to: >>>> >>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED, RUNNING) >>>> >>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>> >>>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>> >>>>> Scott Marlow wrote: >>>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>> Server suspend and resume is a feature that allows a running >>>>>>> server to >>>>>>> gracefully finish off all running requests. The most common use >>>>>>> case for >>>>>>> this is graceful shutdown, where you would like a server to >>>>>>> complete all >>>>>>> running requests, reject any new ones, and then shut down, however >>>>>>> there >>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>> modify a data source or some other config, then resume). >>>>>>> >>>>>>> User View: >>>>>>> >>>>>>> From the user's point of view two new operations will be added to >>>>>>> the server: >>>>>>> >>>>>>> suspend(timeout) >>>>>>> resume() >>>>>>> >>>>>>> A runtime-only attribute suspend-state (is this a good name?) will >>>>>>> also >>>>>>> be added, that can take one of three possible values: RUNNING, >>>>>>> SUSPENDING, SUSPENDED. >>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>> just >>>>>> as meaningful. >>>>> This will be in the global server namespace (i.e. from the CLI >>>>> :read-attribute(name="suspend-state")). >>>>> >>>>> I think the name 'state' is just too generic; which kind of state >>>>> are we >>>>> talking about? >>>>> >>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>> SUSPENDING? >>>>> 99.99% of the time. Basically servers are always running unless they >>>>> have been explicitly suspended, and then they go from suspending to >>>>> suspended.
Note that if resume is called at any time the server goes to >>>>> RUNNING again immediately, as when subsystems are notified they should >>>>> be able to begin accepting requests again straight away. >>>>> >>>>> We also have admin only mode, which is a kinda similar concept, so we >>>>> need to make sure we document the differences. >>>>> >>>>>>> A timeout attribute will also be added to the shutdown operation. If >>>>>>> this is present then the server will first be suspended, and the >>>>>>> server >>>>>>> will not shut down until either the suspend is successful or the >>>>>>> timeout >>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>> normal >>>>>>> non-graceful shutdown will take place. >>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>> immediately (call System.exit()). >>>>> It will execute the same way it does today (all services will shut down >>>>> and then the server will exit). >>>>> >>>>> Stuart >>>>> >>>>>>> In domain mode these operations will be added to both individual >>>>>>> server >>>>>>> and a complete server group. >>>>>>> >>>>>>> Implementation Details >>>>>>> >>>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>>> that >>>>>>> is currently running must not be affected by the suspend state, >>>>>>> however >>>>>>> any new request should be rejected. In general subsystems will >>>>>>> track the >>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>> considered suspended. >>>>>>> >>>>>>> We will introduce the notion of a global SuspendController, that >>>>>>> manages >>>>>>> the servers suspend state. All subsystems that wish to do a graceful >>>>>>> shutdown register callback handlers with this controller. 
>>>>>>> >>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>> all >>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>> suspending, >>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>> complete. >>>>>>> >>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>> when it >>>>>>> considers itself suspended will vary, but in the common case it will >>>>>>> immediately start rejecting external requests (e.g. Undertow will >>>>>>> start >>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>> then >>>>>>> the subsystem will notify the controller that it has successfully >>>>>>> suspended. >>>>>>> Some subsystems will obviously want to do other actions on >>>>>>> suspend, e.g. >>>>>>> clustering will likely want to fail over, mod_cluster will notify the >>>>>>> load balancer that the node is no longer available, etc. In some >>>>>>> cases we >>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>> could be >>>>>>> configured to allow requests with an existing session, and not >>>>>>> consider >>>>>>> itself suspended until all sessions have either timed out or been >>>>>>> invalidated, although this will obviously take a while). >>>>>>> >>>>>>> If anyone has any feedback let me know. In terms of implementation my >>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>> implementation into Wildfly, and then work with subsystem authors to >>>>>>> implement subsystem-specific functionality once the core is in place.
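The controller/callback scheme described above can be sketched roughly as follows. Interface and method names here are guesses for illustration, not the real WildFly API, and the concurrency handling is deliberately simplified.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a SuspendController: subsystems register, are asked to suspend,
// and call back when their outstanding request count drains to zero.
// All names are hypothetical.
public class SuspendController {
    public interface Suspendable {
        void suspend(Runnable completionCallback); // invoke callback when drained
    }

    private final List<Suspendable> subsystems = new ArrayList<>();
    private final AtomicInteger pending = new AtomicInteger();
    private volatile boolean suspended;

    public void register(Suspendable s) { subsystems.add(s); }

    public void suspend() {
        pending.set(subsystems.size());
        if (subsystems.isEmpty()) { suspended = true; return; }
        for (Suspendable s : subsystems) {
            s.suspend(() -> {
                if (pending.decrementAndGet() == 0) suspended = true;
            });
        }
    }

    public boolean isSuspended() { return suspended; }

    // Example subsystem: rejects new work once suspending (e.g. a 503 in
    // Undertow's case) and reports back when the last request completes.
    public static class CountingSubsystem implements Suspendable {
        private final AtomicInteger active = new AtomicInteger();
        private volatile Runnable callback;

        public boolean tryBegin() {          // false -> reject the request
            if (callback != null) return false;
            active.incrementAndGet();
            return true;
        }
        public void end() {
            if (active.decrementAndGet() == 0 && callback != null) callback.run();
        }
        @Override public void suspend(Runnable completionCallback) {
            callback = completionCallback;
            if (active.get() == 0) completionCallback.run();
        }
    }
}
```

The real implementation would need to handle the race between a request starting and the suspend flag being set; this sketch only shows the counting/callback shape.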
>>>>>>> >>>>>>> Stuart >>>>>>> _______________________________________________ >>>>>>> wildfly-dev mailing list >>>>>>> wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev From smarlow at redhat.com Tue Jun 10 15:17:35 2014 From: smarlow at redhat.com (Scott Marlow) Date: Tue, 10 Jun 2014 15:17:35 -0400 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5397209C.9090400@redhat.com> References: <5397209C.9090400@redhat.com> Message-ID: <539759CF.1050902@redhat.com> On 06/10/2014 11:13 AM, Stuart Douglas wrote: > This design proposal covers the interrelated tasks of splitting up the > build, and also creating a build/provisioning system that will make it > easy for end users to consume Wildfly. Apologies for the length, but it > is a complex topic. The first part explains what we are trying to > achieve, the second part covers how we are planning to actually > implement it. > > The Wildfly code base is over a million lines of java and has a test > suite that generally takes close to two hours to run in its entirety.
> This makes the project very unwieldy, and the large size and slow test > suite make development painful. > > To deal with this issue we are going to split the Wildfly code base into > smaller discrete repositories. The planned split is as follows: > > - Core: just the WF core > - Arquillian: the arquillian adaptors > - Servlet: a WF distribution with just Undertow, and some basic EE > functionality such as naming > - EE: All the core EE related functionality, EJBs, messaging etc > - Clustering: The core clustering functionality > - Console: The management console > - Dist: brings all the pieces together, and allows us to run all tests > against a full server Any concerns about circular dependencies that could impact the build? For example, EE depends on Clustering and Clustering depends on EE. Adding separate system-level interfaces for each module might help, so that Clustering doesn't depend directly on the EE module and the EE module doesn't depend on Clustering. > > Note that this list is in no way final, and is open to debate. We will > most likely want to split up the EE component at some point, possibly > along some kind of web profile/full profile type split. > > Each of these repos will build a feature pack, which will contain the > following: > > - Feature specification / description > - Core version requirements (e.g. WF10) > - Dependency info on other features (e.g. Resteasy X requires CDI 1.1) > - module.xml files for all required modules that are not provided by > other features > - References to maven GAVs for jars (possibly a level of indirection > here, module.xml may just contain the group and artifact, and the > version may be in a version.properties file to allow it to be easily > overridden) > - Default configuration snippet; subsystem snippets are packaged in the > subsystem jars, templates that combine them into config files are part > of the feature pack. > - Misc files (e.g.
xsds) with an indication of where on the path to place them > > Note that a feature pack is not a complete server, it cannot simply be > extracted and run, it first needs to be assembled into a server by the > provisioning tool. The feature packs also just contain references to the > maven GAV of required jars, they do not have the actual jars in the pack > (which should make them very lightweight). > > Feature packs will be assembled by the WF build tool, which is just a > maven plugin that will replace our existing hacky collection of ant > scripts. > > Actual server instances will be assembled by the provisioning tool, > which will be implemented as a library with several different front > ends, including a maven plugin and a CLI (possibly integrated into our > existing CLI). In general the provisioning tool will be able to > provision three different types of servers: > > - A traditional server with all jar files in the distribution > - A server that uses maven coordinates in module.xml files, with all > artifacts downloaded as part of the provisioning process > - As above, but with artifacts being lazily loaded as needed (not > recommended for production, but I think this may be useful from a > developer point of view) > > The provisioning tool will work from an XML descriptor that describes > the server that is to be built. In general this information will include: > > - GAV of the feature packs to use > - Filtering information if not all features from a pack are required > (e.g. just give me JAX-RS from the EE pack. In this case the only > modules/subsystems installed from the pack will be the modules and subsystems > that JAX-RS requires). > - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), > which will allow community users to easily upgrade individual components. > - Configuration changes that are required (e.g. some way to add a > datasource to the assembled server). The actual form this will take > still needs to be decided.
Note that this needs to work on both a user > level (a user adding a datasource) and a feature pack level (e.g. the > JON feature pack adding a required data source). > - GAV of deployments to install in the server. This should allow a > server complete with deployments and the necessary config to be > assembled and be immediately ready to be put into service. > > Note that if you just want a full WF install you should be able to > provision it with a single line in the provisioning file, by specifying > the dist feature pack. We will still provide our traditional download, > which will be built by the provisioning tool as part of our build process. > > The provisioning tool will also be able to upgrade servers, which > basically consists of provisioning a new modules directory. Rollback is > provided by provisioning from an earlier version of the provisioning file. > When a server is provisioned the tool will make a backup copy of the > file used, so it should always be possible to examine the provisioning > file that was used to build the current server config. > > Note that when an update is performed on an existing server, the config will > not be updated, unless the update adds an additional config file, in > which case the new config file will be generated (however existing > config will not be touched). > > Note that as a result of this split we will need to do much more > frequent releases of the individual feature packs, to allow the most > recent code to be integrated into dist. > > Implementation Plan > > The above changes are obviously a big job, and will not happen > overnight. They are also highly likely to conflict with other changes, > so maintaining a long-running branch that gets rebased is not a > practical option. Instead the plan is to perform the split in > incremental changes. The basic steps are listed below, some of which can > be performed in parallel.
> > 1) Using the initial implementation of my build plugin (in my > wildfly-build-plugin branch) we split up the server along the lines > above. The code will all stay in the same repo, however the plugin will > be used to build all the individual pieces, which are then assembled as > part of the final build process. Note that the plugin in its current > form does both the build and provision step, and the pack format it > produces is far from the final pack format that we will want to use. > > 2) Split up the test suite into modules based on the features that they > test. This will result in several smaller modules in place of a single > large one, which should also be a usability improvement as individual > tests will be faster to run, and run times for all tests in a module > should be more manageable. > > 3) Split the core into its own module. > > 4) Split everything else into its own module. As part of this step we > need to make sure we still have the ability to run all tests against the > full server, as well as against the cut down feature pack version of the > server. > > 5) Focus on the build and provisioning tool, to implement all the > features above, and to finalize the WF pack format. > > I think that just about covers it. There are still lots of nitty gritty > details that need to be worked out, however I think this covers all the > main aspects of the design. We are planning on starting work on this > basically immediately, as we want to get this implemented as early in > the WF9 cycle as possible.
> > Stuart > > > > > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From tomaz.cerar at gmail.com Tue Jun 10 15:55:58 2014 From: tomaz.cerar at gmail.com (Tomaž Cerar) Date: Tue, 10 Jun 2014 21:55:58 +0200 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <539759CF.1050902@redhat.com> References: <5397209C.9090400@redhat.com> <539759CF.1050902@redhat.com> Message-ID: On Tue, Jun 10, 2014 at 9:17 PM, Scott Marlow wrote: > Any concerns about circular dependencies that could impact the build? > For example, EE depends on Clustering and Clustering depends on EE. > EE does *not* depend on clustering, and if it does, it is a bug that needs to be fixed. But in any case your question is valid and I don't think we will address this in phase one, as most "features" just build on top of one another; there is no mix and match yet. But when we do have scenarios like this it should be quite easily addressed: just resolve the complete graph and if we get a loop, we fail the build. -- tomaz From sdouglas at redhat.com Tue Jun 10 16:21:00 2014 From: sdouglas at redhat.com (Stuart Douglas) Date: Tue, 10 Jun 2014 15:21:00 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <539759CF.1050902@redhat.com> References: <5397209C.9090400@redhat.com> <539759CF.1050902@redhat.com> Message-ID: <539768AC.7000701@redhat.com> > > Any concerns about circular dependencies that could impact the build? > For example, EE depends on Clustering and Clustering depends on EE. As part of the split we are going to have to address some issues with inter-module dependencies.
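Tomaž's suggested loop check ("resolve the complete graph and if we get a loop, we fail the build") can be sketched as a depth-first search over the feature-pack dependency graph. Class and method names are invented for illustration only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of cycle detection over feature-pack dependencies:
// a node on the current DFS path seen twice means a loop -> fail the build.
public class FeaturePackGraph {
    private final Map<String, List<String>> deps = new HashMap<>();

    public void addDependency(String pack, String requires) {
        deps.computeIfAbsent(pack, k -> new ArrayList<>()).add(requires);
    }

    public boolean hasCycle() {
        Set<String> visiting = new HashSet<>(), done = new HashSet<>();
        for (String pack : deps.keySet()) {
            if (visit(pack, visiting, done)) return true;
        }
        return false;
    }

    private boolean visit(String pack, Set<String> visiting, Set<String> done) {
        if (done.contains(pack)) return false;          // already fully resolved
        if (!visiting.add(pack)) return true;           // on current path: loop
        for (String dep : deps.getOrDefault(pack, List.of())) {
            if (visit(dep, visiting, done)) return true;
        }
        visiting.remove(pack);
        done.add(pack);
        return false;
    }
}
```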
Circular references in general should not be too much of a problem, I think the main issue will be modules depending on things that they should not, and we will have to do some refactoring to fix. Stuart > > Adding separate system level interfaces for each module might help, so > that Clustering doesn't depend directly on the EE module and the EE > module doesn't depend on Clustering. > >> >> Note that this list is in no way final, and is open to debate. We will >> most likely want to split up the EE component at some point, possibly >> along some kind of web profile/full profile type split. >> >> Each of these repos will build a feature pack, which will contain the >> following: >> >> - Feature specification / description >> - Core version requirements (e.g. WF10) >> - Dependency info on other features (e.g. RestEASY X requires CDI 1.1) >> - module.xml files for all required modules that are not provided by >> other features >> - References to maven GAV's for jars (possibly a level of indirection >> here, module.xml may just contain the group and artifact, and the >> version may be in a version.properties file to allow it to be easily >> overridden) >> - Default configuration snippet, subsystem snippets are packaged in the >> subsystem jars, templates that combine them into config files are part >> of the feature pack. >> - Misc files (e.g. xsds) with indication of where on path to place them >> >> Note that a feature pack is not a complete server, it cannot simply be >> extracted and run, it first needs to be assembled into a server by the >> provisioning tool. The feature packs also just contain references to the >> maven GAV of required jars, they do not have the actual jars in the pack >> (which should make them very lightweight). >> >> Feature packs will be assembled by the WF build tool, which is just a >> maven plugin that will replace our existing hacky collection of ant >> scripts. 
>> >> Actual server instances will be assembled by the provisioning tool, >> which will be implemented as a library with several different front >> ends, including a maven plugin and a CLI (possibly integrated into our >> existing CLI). In general the provisioning tool will be able to >> provision three different types of servers: >> >> - A traditional server with all jar files in the distribution >> - A server that uses maven coordinates in module.xml files, with all >> artifacts downloaded as part of the provisioning process >> - As above, but with artifacts being lazily loaded as needed (not >> recommended for production, but I think this may be useful from a >> developer point of view) >> >> The provisioning tool will work from an XML descriptor that describes >> the server that is to be built. In general this information will include: >> >> - GAV of the feature packs to use >> - Filtering information if not all features from a pack are required >> (e.g. just give me JAX-RS from the EE pack. In this case the only >> modules/subsystems installed from the pack will be the modules and subsystems >> that JAX-RS requires). >> - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), >> which will allow community users to easily upgrade individual components. >> - Configuration changes that are required (e.g. some way to add a >> datasource to the assembled server). The actual form this will take >> still needs to be decided. Note that this needs to work on both a user >> level (a user adding a datasource) and a feature pack level (e.g. the >> JON feature pack adding a required data source). >> - GAV of deployments to install in the server. This should allow a >> server complete with deployments and the necessary config to be >> assembled and be immediately ready to be put into service. >> >> Note that if you just want a full WF install you should be able to >> provision it with a single line in the provisioning file, by specifying >> the dist feature pack.
We will still provide our traditional download, >> which will be built by the provisioning tool as part of our build >> process. >> >> The provisioning tool will also be able to upgrade servers, which >> basically consists of provisioning a new modules directory. Rollback is >> provided by provisioning from an earlier version of the provisioning file. >> When a server is provisioned the tool will make a backup copy of the >> file used, so it should always be possible to examine the provisioning >> file that was used to build the current server config. >> >> Note that when an update is performed on an existing server, the config will >> not be updated, unless the update adds an additional config file, in >> which case the new config file will be generated (however existing >> config will not be touched). >> >> Note that as a result of this split we will need to do much more >> frequent releases of the individual feature packs, to allow the most >> recent code to be integrated into dist. >> >> Implementation Plan >> >> The above changes are obviously a big job, and will not happen >> overnight. They are also highly likely to conflict with other changes, >> so maintaining a long-running branch that gets rebased is not a >> practical option. Instead the plan is to perform the split in >> incremental changes. The basic steps are listed below, some of which can >> be performed in parallel. >> >> 1) Using the initial implementation of my build plugin (in my >> wildfly-build-plugin branch) we split up the server along the lines >> above. The code will all stay in the same repo, however the plugin will >> be used to build all the individual pieces, which are then assembled as >> part of the final build process. Note that the plugin in its current >> form does both the build and provision step, and the pack format it >> produces is far from the final pack format that we will want to use. >> >> 2) Split up the test suite into modules based on the features that they >> test.
This will result in several smaller modules in place of a single >> large one, which should also be a usability improvement as individual >> tests will be faster to run, and run times for all tests in a module >> should be more manageable. >> >> 3) Split the core into its own module. >> >> 4) Split everything else into its own module. As part of this step we >> need to make sure we still have the ability to run all tests against the >> full server, as well as against the cut down feature pack version of the >> server. >> >> 5) Focus on the build and provisioning tool, to implement all the >> features above, and to finalize the WF pack format. >> >> I think that just about covers it. There are still lots of nitty gritty >> details that need to be worked out, however I think this covers all the >> main aspects of the design. We are planning on starting work on this >> basically immediately, as we want to get this implemented as early in >> the WF9 cycle as possible. >> >> Stuart >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > From jperkins at redhat.com Tue Jun 10 19:33:09 2014 From: jperkins at redhat.com (James R. Perkins) Date: Tue, 10 Jun 2014 16:33:09 -0700 Subject: [wildfly-dev] Design Proposal: Log Viewer Message-ID: <539795B5.5030009@redhat.com> While there wasn't a huge talk about this at the Brno meeting, I know Heiko brought it up as part of the extended metrics. It's on some future product road maps as well. I figured I might as well bring it up here and get opinions on it. This design proposal covers capturing log messages. The "viewer" will likely be an operation that returns an object list of log record details. The design of how a GUI view would look/work is beyond the scope of this proposal. There is currently an operation to view a log file. This has several limitations.
The file must be defined as a known file handler. There is also no way to filter results, e.g. errors only. If per-deployment logging is used, those log messages are not viewable as the files are not accessible. For the list of requirements I'm going to be lazy and just give the link to the wiki page https://community.jboss.org/wiki/LogViewerDesign. Implementation: 1) There will be a new resource on the logging subsystem resource that can be enabled or disabled, currently called log-collector. Probably some attributes, but I'm not sure what will need to be configurable at this point. This will likely act like a handler and be assignable only to loggers and not the async-handler. 2) If a deployment uses per-deployment logging then a separate log-collector will need to be added to the deployment's log context. 3) Logging profiles will also have their own log-collector. 4) The messages should be written asynchronously and to a file in some kind of formatted structure. The structure will likely be JSON. 5) An operation to query the messages will need to be created. This operation should allow the results to be filtered on various fields as well as limit the data set returned and allow for a starting position. 6) All operations associated with viewing the log should use RBAC to control the access. 7) Audit logs will eventually need to be viewable and queryable. This might be separate from the logging subsystem as it is now, but it will need to be done. There are things, like how long or how many records we should keep, that need to be determined. This could possibly be configurable via attributes on the resource. This is about all I've got at this point. I'd appreciate any feedback. -- James R.
Perkins JBoss by Red Hat From smarlow at redhat.com Tue Jun 10 21:53:22 2014 From: smarlow at redhat.com (Scott Marlow) Date: Tue, 10 Jun 2014 21:53:22 -0400 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <539795B5.5030009@redhat.com> References: <539795B5.5030009@redhat.com> Message-ID: <5397B692.2060209@redhat.com> Any concern about the number of users viewing the server logs at the same time and the impact that could have on a system under load? For example, if a bunch of users arrive at work around the same time and they are all curious about how things went last night. They all could make a request to show the last 1000 lines of the server.log file (which could peg the CPU). You might ask why a large number of users have access to view the logs but the problem is still worth considering. On 06/10/2014 07:33 PM, James R. Perkins wrote: > While there wasn't a huge talk about this at the Brno meeting I know > Heiko brought it up as part of the extended metrics. It's on some future > product road maps as well too. I figured I might as well bring it up > here and get opinions on it. > > This design proposal covers how capturing log messages. The "viewer" > will likely be an operation that returns an object list of log record > details. The design of how a GUI view would look/work is beyond the > scope of this proposal. > > There is currently an operation to view a log file. This has several > limitations. The file must be defined as a known file handler. There is > also no way to filter results, e.g. errors only. If per-deployment > logging is used, those log messages are not viewable as the files are > not accessible. > > For the list of requirements I'm going to be lazy and just give the link > the wiki page https://community.jboss.org/wiki/LogViewerDesign. > > Implementation: > > 1) There will be a new resource on the logging subsystem resource that > can be enabled or disabled, currently called log-collector. 
Probably > some attributes, but I'm not sure what will need to be configurable at > this point. This will likely act like a handler and be assignable only > to loggers and not the async-handler. > > 2) If a deployment uses per-deployment logging then a separate > log-collector will need to be added to the deployments log context > > 3) Logging profiles will also have their own log-collector. > > 4) The messages should be written asynchronously and to a file in some > kind of formatted structure. The structure will likely be JSON. > > 5) An operation to query the messages will need to be create. This > operation should allow the results to be filtered on various fields as > well as limit the data set returned and allow for a starting position. > > 6) All operations associated with view the log should use RBAC to > control the access. > > 7) Audit logs will eventually need to be viewable and queryable. This > might be separate from the logging subsystem as it is now, but it will > need to be done. > > > There are things like how long or how many records should we keep that > needs to be determined. This could possibly be configurable via > attributes on the resource. > > This is about all I've got at this point. I'd appreciate any feedback. > From hbraun at redhat.com Wed Jun 11 03:26:21 2014 From: hbraun at redhat.com (Heiko Braun) Date: Wed, 11 Jun 2014 09:26:21 +0200 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <539795B5.5030009@redhat.com> References: <539795B5.5030009@redhat.com> Message-ID: <207B138F-815C-48CA-B3DC-1222B67083A7@redhat.com> I am not sure I fully understand the design proposal. Is it correct to say that it basically consists of two things? a) format for structured log data b) an additional logging service that adapts unstructured (default) log messages to a) /Heiko On 11 Jun 2014, at 01:33, James R.
Perkins wrote: > While there wasn't a huge talk about this at the Brno meeting I know > Heiko brought it up as part of the extended metrics. It's on some future > product road maps as well too. I figured I might as well bring it up > here and get opinions on it. > > This design proposal covers how capturing log messages. The "viewer" > will likely be an operation that returns an object list of log record > details. The design of how a GUI view would look/work is beyond the > scope of this proposal. > > There is currently an operation to view a log file. This has several > limitations. The file must be defined as a known file handler. There is > also no way to filter results, e.g. errors only. If per-deployment > logging is used, those log messages are not viewable as the files are > not accessible. > > For the list of requirements I'm going to be lazy and just give the link > the wiki page https://community.jboss.org/wiki/LogViewerDesign. > > Implementation: > > 1) There will be a new resource on the logging subsystem resource that > can be enabled or disabled, currently called log-collector. Probably > some attributes, but I'm not sure what will need to be configurable at > this point. This will likely act like a handler and be assignable only > to loggers and not the async-handler. > > 2) If a deployment uses per-deployment logging then a separate > log-collector will need to be added to the deployments log context > > 3) Logging profiles will also have their own log-collector. > > 4) The messages should be written asynchronously and to a file in some > kind of formatted structure. The structure will likely be JSON. > > 5) An operation to query the messages will need to be create. This > operation should allow the results to be filtered on various fields as > well as limit the data set returned and allow for a starting position. > > 6) All operations associated with view the log should use RBAC to > control the access. 
> > 7) Audit logs will eventually need to be viewable and queryable. This > might be separate from the logging subsystem as it is now, but it will > need to be done. > > > There are things like how long or how many records should we keep that > needs to be determined. This could possibly be configurable via > attributes on the resource. > > This is about all I've got at this point. I'd appreciate any feedback. > > -- > James R. Perkins > JBoss by Red Hat > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From kabir.khan at jboss.com Wed Jun 11 04:45:34 2014 From: kabir.khan at jboss.com (Kabir Khan) Date: Wed, 11 Jun 2014 09:45:34 +0100 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <539795B5.5030009@redhat.com> References: <539795B5.5030009@redhat.com> Message-ID: <096F609B-7F8D-4537-AAF4-DFA1019322F2@jboss.com> On 11 Jun 2014, at 00:33, James R. Perkins wrote: > While there wasn't a huge talk about this at the Brno meeting I know > Heiko brought it up as part of the extended metrics. It's on some future > product road maps as well too. I figured I might as well bring it up > here and get opinions on it. > > This design proposal covers how capturing log messages. The "viewer" > will likely be an operation that returns an object list of log record > details. The design of how a GUI view would look/work is beyond the > scope of this proposal. > > There is currently an operation to view a log file. This has several > limitations. The file must be defined as a known file handler. There is > also no way to filter results, e.g. errors only. If per-deployment > logging is used, those log messages are not viewable as the files are > not accessible. > > For the list of requirements I'm going to be lazy and just give the link > the wiki page https://community.jboss.org/wiki/LogViewerDesign. 
> > Implementation: > > 1) There will be a new resource on the logging subsystem resource that > can be enabled or disabled, currently called log-collector. Probably > some attributes, but I'm not sure what will need to be configurable at > this point. This will likely act like a handler and be assignable only > to loggers and not the async-handler. > > 2) If a deployment uses per-deployment logging then a separate > log-collector will need to be added to the deployments log context > > 3) Logging profiles will also have their own log-collector. > > 4) The messages should be written asynchronously and to a file in some > kind of formatted structure. The structure will likely be JSON. > > 5) An operation to query the messages will need to be create. This > operation should allow the results to be filtered on various fields as > well as limit the data set returned and allow for a starting position. > > 6) All operations associated with view the log should use RBAC to > control the access. > > 7) Audit logs will eventually need to be viewable and queryable. This > might be separate from the logging subsystem as it is now, but it will > need to be done. Something similar to what you say for the logging subsystem could probably be done to configure and write out formatted data, so there might be an opportunity for some code sharing. However, audit logging has extra security concerns. This should only be possible to view for AUDITOR or SUPERUSER. Another, possibly more important, issue is that while we do have a file handler for audit logging, I expect that anybody who takes security seriously will configure audit logging to use the syslog handler. Then writing a copy of the audit log to a local file becomes a security risk. 
I see you mention the metrics stuff Heiko talked about in your introduction, so if the intent is to use that to write the audit log records, that might be better (although I am not familiar with how security will be handled there). > > > There are things like how long or how many records should we keep that > needs to be determined. This could possibly be configurable via > attributes on the resource. > > This is about all I've got at this point. I'd appreciate any feedback. > > -- > James R. Perkins > JBoss by Red Hat > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From qutpeter at gmail.com Wed Jun 11 06:31:46 2014 From: qutpeter at gmail.com (Peter Cai) Date: Wed, 11 Jun 2014 20:31:49 +1000 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5397209C.9090400@redhat.com> References: <5397209C.9090400@redhat.com> Message-ID: Hi Stuart, Many good points. I have some questions in this regard. 1, Could you please clarify the difference between feature packs, modules, and subsystems? If I understand correctly, subsystems and modules are to feature packs what ingredients are to recipes. 2, My gut feeling is that the feature packs you describe share the same concept as Karaf features. For more info, please refer to http://karaf.apache.org/manual/latest/users-guide/provisioning.html. Is it possible for Wildfly to be built on top of an OSGi container in the future? Regards, Peter C On Wed, Jun 11, 2014 at 1:13 AM, Stuart Douglas wrote: > This design proposal covers the interrelated tasks of splitting up the > build, and also creating a build/provisioning system that will make it > easy for end users to consume Wildfly. Apologies for the length, but it > is a complex topic. The first part explains what we are trying to > achieve, the second part covers how we are planning to actually > implement it.
> > The Wildfly code base is over a million lines of java and has a test > suite that generally takes close to two hours to run in its entirety. > This makes the project very unwieldy, and the large size and slow test > suite make development painful. > > To deal with this issue we are going to split the Wildfly code base into > smaller discrete repositories. The planned split is as follows: > > - Core: just the WF core > - Arquillian: the arquillian adaptors > - Servlet: a WF distribution with just Undertow, and some basic EE > functionality such as naming > - EE: All the core EE related functionality, EJB's, messaging etc > - Clustering: The core clustering functionality > - Console: The management console > - Dist: brings all the pieces together, and allows us to run all tests > against a full server > > Note that this list is in no way final, and is open to debate. We will > most likely want to split up the EE component at some point, possibly > along some kind of web profile/full profile type split. > > Each of these repos will build a feature pack, which will contain the > following: > > - Feature specification / description > - Core version requirements (e.g. WF10) > - Dependency info on other features (e.g. RestEASY X requires CDI 1.1) > - module.xml files for all required modules that are not provided by > other features > - References to maven GAV's for jars (possibly a level of indirection > here, module.xml may just contain the group and artifact, and the > version may be in a version.properties file to allow it to be easily > overridden) > - Default configuration snippet, subsystem snippets are packaged in the > subsystem jars, templates that combine them into config files are part > of the feature pack. > - Misc files (e.g. xsds) with indication of where on path to place them > > Note that a feature pack is not a complete server, it cannot simply be > extracted and run, it first needs to be assembled into a server by the > provisioning tool.
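To make the maven-GAV indirection in the quoted proposal concrete, a feature-pack module.xml might look roughly like the sketch below. The ${group:artifact} expression syntax and module names are purely illustrative assumptions; the proposal deliberately leaves the exact format open:

```xml
<!-- Hypothetical feature-pack module.xml: the jar is referenced by its
     maven group/artifact rather than bundled, with the version resolved
     from an overridable version.properties at provisioning time. -->
<module xmlns="urn:jboss:module:1.3" name="org.jboss.resteasy.resteasy-jaxrs">
    <resources>
        <!-- maven coordinates instead of a jar file in the pack -->
        <artifact name="${org.jboss.resteasy:resteasy-jaxrs}"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="org.jboss.logging"/>
    </dependencies>
</module>
```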
The feature packs also just contain references to the > maven GAV of required jars, they do not have the actual jars in the pack > (which should make them very lightweight). > > Feature packs will be assembled by the WF build tool, which is just a > maven plugin that will replace our existing hacky collection of ant > scripts. > > Actual server instances will be assembled by the provisioning tool, > which will be implemented as a library with several different front > ends, including a maven plugin and a CLI (possibly integrated into our > existing CLI). In general the provisioning tool will be able to > provision three different types of servers: > > - A traditional server with all jar files in the distribution > - A server that uses maven coordinates in module.xml files, with all > artifacts downloaded as part of the provisioning process > - As above, but with artifacts being lazily loaded as needed (not > recommended for production, but I think this may be useful from a > developer point of view) > > The provisioning tool will work from an XML descriptor that describes > the server that is to be built. In general this information will include: > > - GAV of the feature packs to use > - Filtering information if not all features from a pack are required > (e.g. just give me JAX-RS from the EE pack. In this case the only > modules/subsystems installed from the pack will be modules and subsystems > that JAX-RS requires). > - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), > which will allow community users to easily upgrade individual components. > - Configuration changes that are required (e.g. some way to add a > datasource to the assembled server). The actual form this will take > still needs to be decided. Note that this needs to work on both a user > level (a user adding a datasource) and a feature pack level (e.g. the > JON feature pack adding a required data source). > - GAV of deployments to install in the server.
This should allow a > server complete with deployments and the necessary config to be > assembled and be immediately ready to be put into service. > > Note that if you just want a full WF install you should be able to > provision it with a single line in the provisioning file, by specifying > the dist feature pack. We will still provide our traditional download, > which will be built by the provisioning tool as part of our build process. > > The provisioning tool will also be able to upgrade servers, which > basically consists of provisioning a new modules directory. Rollback is > provided by provisioning from an earlier version of the provisioning file. > When a server is provisioned the tool will make a backup copy of the > file used, so it should always be possible to examine the provisioning > file that was used to build the current server config. > > Note that when an update is performed on an existing server, config will > not be updated, unless the update adds an additional config file, in > which case the new config file will be generated (however existing > config will not be touched). > > Note that as a result of this split we will need to do much more > frequent releases of the individual feature packs, to allow the most > recent code to be integrated into dist. > > Implementation Plan > > The above changes are obviously a big job, and will not happen > overnight. They are also highly likely to conflict with other changes, > so maintaining a long running branch that gets rebased is not a > practical option. Instead the plan is to perform the split in > incremental changes. The basic steps are listed below, some of which can > be performed in parallel. > > 1) Using the initial implementation of my build plugin (in my > wildfly-build-plugin branch) we split up the server along the lines > above.
The code will all stay in the same repo, however the plugin will > be used to build all the individual pieces, which are then assembled as > part of the final build process. Note that the plugin in its current > form does both the build and provision step, and the pack format it > produces is far from the final pack format that we will want to use. > > 2) Split up the test suite into modules based on the features that they > test. This will result in several smaller modules in place of a single > large one, which should also be a usability improvement as individual > tests will be faster to run, and run times for all tests in a module > should be more manageable. > > 3) Split the core into its own module. > > 4) Split everything else into its own module. As part of this step we > need to make sure we still have the ability to run all tests against the > full server, as well as against the cut down feature pack version of the > server. > > 5) Focus on the build and provisioning tool, to implement all the > features above, and to finalize the WF pack format. > > I think that just about covers it. There are still lots of nitty gritty > details that need to be worked out, however I think this covers all the > main aspects of the design. We are planning on starting work on this > basically immediately, as we want to get this implemented as early in > the WF9 cycle as possible. > > Stuart > > > > > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140611/bad1e764/attachment-0001.html From bburke at redhat.com Wed Jun 11 08:05:58 2014 From: bburke at redhat.com (Bill Burke) Date: Wed, 11 Jun 2014 08:05:58 -0400 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: <538E82A3.1060104@redhat.com> References: <538E82A3.1060104@redhat.com> Message-ID: <53984626.5090201@redhat.com> Is there a mail list for this? Or are discussions here? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From david.lloyd at redhat.com Wed Jun 11 08:32:34 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 11 Jun 2014 07:32:34 -0500 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: <53984626.5090201@redhat.com> References: <538E82A3.1060104@redhat.com> <53984626.5090201@redhat.com> Message-ID: <53984C62.3060407@redhat.com> On 06/11/2014 07:05 AM, Bill Burke wrote: > Is there a mail list for this? Or are discussions here? Discussions are here. -- - DML From ssilvert at redhat.com Wed Jun 11 09:24:42 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Wed, 11 Jun 2014 09:24:42 -0400 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <539795B5.5030009@redhat.com> References: <539795B5.5030009@redhat.com> Message-ID: <5398589A.3070406@redhat.com> On 6/10/2014 7:33 PM, James R. Perkins wrote: > This design proposal covers how capturing log messages. The "viewer" > will likely be an operation that returns an object list of log record > details. The design of how a GUI view would look/work is beyond the > scope of this proposal. The problem I see here is that we are designing something based on what we think a log viewer might want. Personally, I've yet to see a log viewer that works better than a text editor. I sure hope we don't get bogged down writing our own viewer. IMHO, what James has given us already is pretty close to what we need. 
If you look at the latest CLI GUI you will find that it now includes the ability to download logs and, if desired, load them into your own log viewer. Works pretty well. https://community.jboss.org/wiki/AGUIForTheCommandLineInterface#log-download Beyond having that same functionality in the web console, what else does a user need? > The file must be defined as a known file handler. IMHO, solving this would indeed be a good enhancement. > There is > also no way to filter results, e.g. errors only. IMHO, solving this is NOT a good enhancement. Filtering should be left to the user's log viewer on the client side. > If per-deployment > logging is used, those log messages are not viewable as the files are > not accessible. IMHO, solving this would be a good enhancement. Give the app developer a way to have his log downloaded from a management operation. IMHO, everything below adds lots of complexity and potentially slows down the server. Simple downloading of logs is what the user really wants. > > For the list of requirements I'm going to be lazy and just give the link > the wiki page https://community.jboss.org/wiki/LogViewerDesign. > > Implementation: > > 1) There will be a new resource on the logging subsystem resource that > can be enabled or disabled, currently called log-collector. Probably > some attributes, but I'm not sure what will need to be configurable at > this point. This will likely act like a handler and be assignable only > to loggers and not the async-handler. > > 2) If a deployment uses per-deployment logging then a separate > log-collector will need to be added to the deployments log context > > 3) Logging profiles will also have their own log-collector. > > 4) The messages should be written asynchronously and to a file in some > kind of formatted structure. The structure will likely be JSON. > > 5) An operation to query the messages will need to be create. 
This > operation should allow the results to be filtered on various fields as > well as limit the data set returned and allow for a starting position. > > 6) All operations associated with view the log should use RBAC to > control the access. > > 7) Audit logs will eventually need to be viewable and queryable. This > might be separate from the logging subsystem as it is now, but it will > need to be done. > > > There are things like how long or how many records should we keep that > needs to be determined. This could possibly be configurable via > attributes on the resource. > > This is about all I've got at this point. I'd appreciate any feedback. > From darran.lofthouse at jboss.com Wed Jun 11 10:07:53 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 11 Jun 2014 15:07:53 +0100 Subject: [wildfly-dev] New security sub-project: WildFly Elytron In-Reply-To: <53984C62.3060407@redhat.com> References: <538E82A3.1060104@redhat.com> <53984626.5090201@redhat.com> <53984C62.3060407@redhat.com> Message-ID: <539862B9.70002@jboss.com> On 11/06/14 13:32, David M. Lloyd wrote: > On 06/11/2014 07:05 AM, Bill Burke wrote: >> Is there a mail list for this? Or are discussions here? > > Discussions are here. We took the decision that the kind of discussions we have need to be visible to everyone that works on WildFly in one way another so starting a new list and asking everyone to subscribe did not make sense. > > From david.lloyd at redhat.com Wed Jun 11 10:30:17 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 11 Jun 2014 09:30:17 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <538F4445.9090604@redhat.com> References: <538F4445.9090604@redhat.com> Message-ID: <539867F9.6020103@redhat.com> On 06/04/2014 11:07 AM, David M. Lloyd wrote: [...] 
> Example: Encrypting a new password
> ----------------------------------
>
> PasswordFactory pf = PasswordFactory.getInstance("sha1crypt");
> // API not yet established but will be similar to this possibly:
> ???? parameters = new ???SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray());
> Password encrypted = pf.generatePassword(parameters);
> assert encrypted instanceof SHA1CryptPassword;

I have a concrete specification for this example now:

PasswordFactory pf = PasswordFactory.getInstance("sha-256-crypt");
// use a 64-byte random salt; most algorithms support flexible sizes
byte[] salt = new byte[64];
ThreadLocalRandom.current().nextBytes(salt);
// iteration count is 4096, can generally be more (or less)
AlgorithmParameterSpec aps = new HashedPasswordAlgorithmSpec(4096, salt);
char[] chars = "p4ssw0rd".toCharArray();
PasswordSpec spec = new EncryptablePasswordSpec(chars, aps);
Password pw = pf.generatePassword(spec);
assert pw.getAlgorithm().equals("sha-256-crypt");
assert pw instanceof UnixSHACryptPassword;
assert pf.verifyPassword(pw, chars);

--
- DML

From jperkins at redhat.com Wed Jun 11 10:57:49 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 11 Jun 2014 07:57:49 -0700 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <207B138F-815C-48CA-B3DC-1222B67083A7@redhat.com> References: <539795B5.5030009@redhat.com> <207B138F-815C-48CA-B3DC-1222B67083A7@redhat.com> Message-ID: <53986E6D.5040108@redhat.com> On 06/11/2014 12:26 AM, Heiko Braun wrote: > I am not sure I fully understand the design proposal. Is it correct to say that it basically consists of two things? > > a) format for structured log data > b) an additional logging service that adapts unstructured (default) log messages to a) For A, yes. It will just use a formatter that will likely be set on a file handler. For B, kind of. Essentially a service that will read and optionally filter log messages and return a collection of results.
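A rough sketch of what such a filter-and-return service could do, covering the query semantics from the proposal (filter on a field, starting position, result limit). The record shape, field names, and method names here are all hypothetical illustrations, not the proposed management API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical query over collected log records: filter by level,
// then apply a starting position and cap the returned data set.
class LogQuery {

    static final class Record {
        final String level;
        final String message;

        Record(String level, String message) {
            this.level = level;
            this.message = message;
        }
    }

    // Return at most 'limit' records matching 'level' (null = no filter),
    // starting at 'offset' within the filtered results.
    static List<Record> query(List<Record> records, String level, int offset, int limit) {
        List<Record> result = new ArrayList<>();
        int skipped = 0;
        for (Record r : records) {
            if (level != null && !r.level.equals(level)) {
                continue; // field filter, e.g. errors only
            }
            if (skipped++ < offset) {
                continue; // starting position into the filtered set
            }
            if (result.size() == limit) {
                break; // limit the size of the returned data set
            }
            result.add(r);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Record> records = new ArrayList<>();
        records.add(new Record("INFO", "server started"));
        records.add(new Record("ERROR", "datasource missing"));
        records.add(new Record("ERROR", "deployment failed"));
        System.out.println(query(records, "ERROR", 0, 10).size()); // prints 2
    }
}
```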
From the console standpoint it would just be an operation you would execute to get results to display. Of course since this is mainly for the web console any input you have to make it easier for you guys would be great :) > > > /Heiko > > On 11 Jun 2014, at 01:33, James R. Perkins wrote: > >> While there wasn't a huge talk about this at the Brno meeting I know >> Heiko brought it up as part of the extended metrics. It's on some future >> product road maps as well too. I figured I might as well bring it up >> here and get opinions on it. >> >> This design proposal covers how capturing log messages. The "viewer" >> will likely be an operation that returns an object list of log record >> details. The design of how a GUI view would look/work is beyond the >> scope of this proposal. >> >> There is currently an operation to view a log file. This has several >> limitations. The file must be defined as a known file handler. There is >> also no way to filter results, e.g. errors only. If per-deployment >> logging is used, those log messages are not viewable as the files are >> not accessible. >> >> For the list of requirements I'm going to be lazy and just give the link >> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >> >> Implementation: >> >> 1) There will be a new resource on the logging subsystem resource that >> can be enabled or disabled, currently called log-collector. Probably >> some attributes, but I'm not sure what will need to be configurable at >> this point. This will likely act like a handler and be assignable only >> to loggers and not the async-handler. >> >> 2) If a deployment uses per-deployment logging then a separate >> log-collector will need to be added to the deployments log context >> >> 3) Logging profiles will also have their own log-collector. >> >> 4) The messages should be written asynchronously and to a file in some >> kind of formatted structure. The structure will likely be JSON. 
>> >> 5) An operation to query the messages will need to be create. This >> operation should allow the results to be filtered on various fields as >> well as limit the data set returned and allow for a starting position. >> >> 6) All operations associated with view the log should use RBAC to >> control the access. >> >> 7) Audit logs will eventually need to be viewable and queryable. This >> might be separate from the logging subsystem as it is now, but it will >> need to be done. >> >> >> There are things like how long or how many records should we keep that >> needs to be determined. This could possibly be configurable via >> attributes on the resource. >> >> This is about all I've got at this point. I'd appreciate any feedback. >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev -- James R. Perkins JBoss by Red Hat From jperkins at redhat.com Wed Jun 11 10:58:52 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 11 Jun 2014 07:58:52 -0700 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <096F609B-7F8D-4537-AAF4-DFA1019322F2@jboss.com> References: <539795B5.5030009@redhat.com> <096F609B-7F8D-4537-AAF4-DFA1019322F2@jboss.com> Message-ID: <53986EAC.4020100@redhat.com> On 06/11/2014 01:45 AM, Kabir Khan wrote: > On 11 Jun 2014, at 00:33, James R. Perkins wrote: > >> While there wasn't a huge talk about this at the Brno meeting I know >> Heiko brought it up as part of the extended metrics. It's on some future >> product road maps as well too. I figured I might as well bring it up >> here and get opinions on it. >> >> This design proposal covers how capturing log messages. The "viewer" >> will likely be an operation that returns an object list of log record >> details. The design of how a GUI view would look/work is beyond the >> scope of this proposal. 
>> >> There is currently an operation to view a log file. This has several >> limitations. The file must be defined as a known file handler. There is >> also no way to filter results, e.g. errors only. If per-deployment >> logging is used, those log messages are not viewable as the files are >> not accessible. >> >> For the list of requirements I'm going to be lazy and just give the link >> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >> >> Implementation: >> >> 1) There will be a new resource on the logging subsystem resource that >> can be enabled or disabled, currently called log-collector. Probably >> some attributes, but I'm not sure what will need to be configurable at >> this point. This will likely act like a handler and be assignable only >> to loggers and not the async-handler. >> >> 2) If a deployment uses per-deployment logging then a separate >> log-collector will need to be added to the deployments log context >> >> 3) Logging profiles will also have their own log-collector. >> >> 4) The messages should be written asynchronously and to a file in some >> kind of formatted structure. The structure will likely be JSON. >> >> 5) An operation to query the messages will need to be create. This >> operation should allow the results to be filtered on various fields as >> well as limit the data set returned and allow for a starting position. >> >> 6) All operations associated with view the log should use RBAC to >> control the access. >> >> 7) Audit logs will eventually need to be viewable and queryable. This >> might be separate from the logging subsystem as it is now, but it will >> need to be done. > Something similar to what you say for the logging subsystem could probably be done to configure and write out formatted data, so there might be an opportunity for some code sharing. However, audit logging has extra security concerns. This should only be possible to view for AUDITOR or SUPERUSER. 
Another, possibly more important, issue is that while we do have a file handler for audit logging, I expect that anybody who takes security seriously will configure audit logging to use the syslog handler. Then writing a copy of the audit log to a local file becomes a security risk. I see you mention the metrics stuff Heiko talked about in your introduction, so if the intent is to use that to write the audit log records, that might be better (although I am not familiar with how security will be handled there) Okay perfect. I initially didn't even want audit logs shown, but PM thinks they should be there and accessible. > >> >> There are things like how long or how many records should we keep that >> needs to be determined. This could possibly be configurable via >> attributes on the resource. >> >> This is about all I've got at this point. I'd appreciate any feedback. >> >> -- >> James R. Perkins >> JBoss by Red Hat >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev -- James R. Perkins JBoss by Red Hat From stuart.w.douglas at gmail.com Wed Jun 11 10:59:10 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Wed, 11 Jun 2014 09:59:10 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: References: <5397209C.9090400@redhat.com> Message-ID: <53986EBE.4080509@gmail.com> Peter Cai wrote: > Hi Stuart, > Many good points. > > I have some questions in this regard. > 1, Could you please clarify the difference between feature packs, > module, and subsystems? > If I understand correctly, that subsystems, and module to feature packs > is like what ingredients to recipes. A feature pack contains subsystems and other modules. > > 2, To my gut feel, the feature packs you described shares the same > concept of feature of Karaf. 
For more info, please refer to > http://karaf.apache.org/manual/latest/users-guide/provisioning.html. Is > it possible for Wildfly to be built on top of an OSGi container in the future? > It is kinda like Karaf, but we are not based on OSGi, and have no plans to move. Stuart > Regards, > Peter C > > > On Wed, Jun 11, 2014 at 1:13 AM, Stuart Douglas > wrote: > > This design proposal covers the interrelated tasks of splitting up the > build, and also creating a build/provisioning system that will make it > easy for end users to consume Wildfly. Apologies for the length, but it > is a complex topic. The first part explains what we are trying to > achieve, the second part covers how we are planning to actually > implement it. > > The Wildfly code base is over a million lines of Java and has a test > suite that generally takes close to two hours to run in its entirety. > This makes the project very unwieldy, and the large size and slow test > suite makes development painful. > > To deal with this issue we are going to split the Wildfly code base into > smaller discrete repositories. The planned split is as follows: > > - Core: just the WF core > - Arquillian: the arquillian adaptors > - Servlet: a WF distribution with just Undertow, and some basic EE > functionality such as naming > - EE: All the core EE related functionality, EJB's, messaging etc > - Clustering: The core clustering functionality > - Console: The management console > - Dist: brings all the pieces together, and allows us to run all tests > against a full server > > Note that this list is in no way final, and is open to debate. We will > most likely want to split up the EE component at some point, possibly > along some kind of web profile/full profile type split. > > Each of these repos will build a feature pack, which will contain the > following: > > - Feature specification / description > - Core version requirements (e.g. WF10) > - Dependency info on other features (e.g.
RestEASY X requires CDI 1.1) > - module.xml files for all required modules that are not provided by > other features > - References to maven GAV's for jars (possibly a level of indirection > here, module.xml may just contain the group and artifact, and the > version may be in a version.properties file to allow it to be easily > overridden) > - Default configuration snippet, subsystem snippets are packaged in the > subsystem jars, templates that combine them into config files are part > of the feature pack. > - Misc files (e.g. xsds) with indication of where on path to place them > > Note that a feature pack is not a complete server, it cannot simply be > extracted and run, it first needs to be assembled into a server by the > provisioning tool. The feature packs also just contain references to the > maven GAV of required jars, they do not have the actual jars in the pack > (which should make them very lightweight). > > Feature packs will be assembled by the WF build tool, which is just a > maven plugin that will replace our existing hacky collection of ant > scripts. > > Actual server instances will be assembled by the provisioning tool, > which will be implemented as a library with several different front > ends, including a maven plugin and a CLI (possibly integrated into our > existing CLI). In general the provisioning tool will be able to > provision three different types of servers: > > - A traditional server with all jar files in the distribution > - A server that uses maven coordinates in module.xml files, with all > artifacts downloaded as part of the provisioning process > - As above, but with artifacts being lazily loaded as needed (not > recommended for production, but I think this may be useful from a > developer point of view) > > The provisioning tool will work from an XML descriptor that describes > the server that is to be built. 
In general this information will > include: > > - GAV of the feature packs to use > - Filtering information if not all features from a pack are required > (e.g. just give me JAX-RS from the EE pack. In this case the only > modules/subsystems installed from the pack will be the modules and subsystems > that JAX-RS requires). > - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), > which will allow community users to easily upgrade individual > components. > - Configuration changes that are required (e.g. some way to add a > datasource to the assembled server). The actual form this will take > still needs to be decided. Note that this needs to work on both a user > level (a user adding a datasource) and a feature pack level (e.g. the > JON feature pack adding a required data source). > - GAV of deployments to install in the server. This should allow a > server complete with deployments and the necessary config to be > assembled and be immediately ready to be put into service. > > Note that if you just want a full WF install you should be able to > provision it with a single line in the provisioning file, by specifying > the dist feature pack. We will still provide our traditional download, > which will be built by the provisioning tool as part of our build > process. > > The provisioning tool will also be able to upgrade servers, which > basically consists of provisioning a new modules directory. Rollback is > provided by provisioning from an earlier version of the provisioning file. > When a server is provisioned the tool will make a backup copy of the > file used, so it should always be possible to examine the provisioning > file that was used to build the current server config. > > Note that when an update is performed on an existing server, config will > not be updated, unless the update adds an additional config file, in > which case the new config file will be generated (however existing > config will not be touched).
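The descriptor format is explicitly still undecided. Purely to make the bullet list above concrete, a hypothetical descriptor covering feature-pack selection, filtering, a version override, and a deployment might look something like this — every element and attribute name here is invented for illustration and is not a proposed format:

```xml
<server-provisioning>
    <!-- hypothetical: provision only JAX-RS from the EE feature pack -->
    <feature-pack groupId="org.wildfly" artifactId="wildfly-ee-feature-pack" version="9.0.0.Final">
        <include feature="jaxrs"/>
        <!-- hypothetical version override, e.g. Resteasy 3.0.10 instead of 3.0.8 -->
        <version-override groupId="org.jboss.resteasy" artifactId="resteasy-jaxrs" version="3.0.10.Final"/>
    </feature-pack>
    <!-- hypothetical: a deployment to install into the assembled server -->
    <deployment groupId="com.example" artifactId="my-app" version="1.0"/>
</server-provisioning>
```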
> > Note that as a result of this split we will need to do much more > frequent releases of the individual feature packs, to allow the most > recent code to be integrated into dist. > > Implementation Plan > > The above changes are obviously a big job, and will not happen > overnight. They are also highly likely to conflict with other changes, > so maintaining a long running branch that gets rebased is not a > practical option. Instead the plan is to perform the split in > incremental changes. The basic steps are listed below, some of which can > be performed in parallel. > > 1) Using the initial implementation of my build plugin (in my > wildfly-build-plugin branch) we split up the server along the lines > above. The code will all stay in the same repo, however the plugin will > be used to build all the individual pieces, which are then assembled as > part of the final build process. Note that the plugin in its current > form does both the build and provision step, and the pack format it > produces is far from the final pack format that we will want to use. > > 2) Split up the test suite into modules based on the features that they > test. This will result in several smaller modules in place of a single > large one, which should also be a usability improvement as individual > tests will be faster to run, and run times for all tests in a module > should be more manageable. > > 3) Split the core into its own module. > > 4) Split everything else into its own module. As part of this step we > need to make sure we still have the ability to run all tests against the > full server, as well as against the cut down feature pack version of the > server. > > 5) Focus on the build and provisioning tool, to implement all the > features above, and to finalize the WF pack format. > > I think that just about covers it. There are still lots of nitty gritty > details that need to be worked out, however I think this covers all the > main aspects of the design.
We are planning on starting work on this > basically immediately, as we want to get this implemented as early in > the WF9 cycle as possible. > > Stuart > > > > > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jperkins at redhat.com Wed Jun 11 11:01:40 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 11 Jun 2014 08:01:40 -0700 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <5397B692.2060209@redhat.com> References: <539795B5.5030009@redhat.com> <5397B692.2060209@redhat.com> Message-ID: <53986F54.60502@redhat.com> On 06/10/2014 06:53 PM, Scott Marlow wrote: > Any concern about the number of users viewing the server logs at the > same time and the impact that could have on a system under load? For > example, if a bunch of users arrive at work around the same time and > they are all curious about how things went last night. They all could > make a request to show the last 1000 lines of the server.log file (which > could peg the CPU). You might ask why a large number of users have > access to view the logs but the problem is still worth considering. Actually it's not something I've thought of. Though I suppose this could be an issue with any operation that returns large results. > > > On 06/10/2014 07:33 PM, James R. Perkins wrote: >> While there wasn't a huge talk about this at the Brno meeting I know >> Heiko brought it up as part of the extended metrics. It's on some future >> product road maps as well too. I figured I might as well bring it up >> here and get opinions on it. >> >> This design proposal covers how capturing log messages. The "viewer" >> will likely be an operation that returns an object list of log record >> details. 
The design of how a GUI view would look/work is beyond the >> scope of this proposal. >> >> There is currently an operation to view a log file. This has several >> limitations. The file must be defined as a known file handler. There is >> also no way to filter results, e.g. errors only. If per-deployment >> logging is used, those log messages are not viewable as the files are >> not accessible. >> >> For the list of requirements I'm going to be lazy and just give the link >> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >> >> Implementation: >> >> 1) There will be a new resource on the logging subsystem resource that >> can be enabled or disabled, currently called log-collector. Probably >> some attributes, but I'm not sure what will need to be configurable at >> this point. This will likely act like a handler and be assignable only >> to loggers and not the async-handler. >> >> 2) If a deployment uses per-deployment logging then a separate >> log-collector will need to be added to the deployments log context >> >> 3) Logging profiles will also have their own log-collector. >> >> 4) The messages should be written asynchronously and to a file in some >> kind of formatted structure. The structure will likely be JSON. >> >> 5) An operation to query the messages will need to be create. This >> operation should allow the results to be filtered on various fields as >> well as limit the data set returned and allow for a starting position. >> >> 6) All operations associated with view the log should use RBAC to >> control the access. >> >> 7) Audit logs will eventually need to be viewable and queryable. This >> might be separate from the logging subsystem as it is now, but it will >> need to be done. >> >> >> There are things like how long or how many records should we keep that >> needs to be determined. This could possibly be configurable via >> attributes on the resource. >> >> This is about all I've got at this point. I'd appreciate any feedback. 
>> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- James R. Perkins JBoss by Red Hat From jperkins at redhat.com Wed Jun 11 11:14:21 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 11 Jun 2014 08:14:21 -0700 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <5398589A.3070406@redhat.com> References: <539795B5.5030009@redhat.com> <5398589A.3070406@redhat.com> Message-ID: <5398724D.1050003@redhat.com> On 06/11/2014 06:24 AM, Stan Silvert wrote: > On 6/10/2014 7:33 PM, James R. Perkins wrote: >> This design proposal covers how capturing log messages. The "viewer" >> will likely be an operation that returns an object list of log record >> details. The design of how a GUI view would look/work is beyond the >> scope of this proposal. > The problem I see here is that we are designing something based on what > we think a log viewer might want. > > Personally, I've yet to see a log viewer that works better than a text > editor. I sure hope we don't get bogged down writing our own viewer. Though I haven't used it logstash (http://logstash.net/) looks quite nice. I'm not sure we need to fully mimic it, but it seems like we want a lot of the functionality from it. > > IMHO, what James has given us already is pretty close to what we need. > If you look at the latest CLI GUI you will find that it now includes the > ability to download logs and, if desired, load them into your own log > viewer. Works pretty well. > https://community.jboss.org/wiki/AGUIForTheCommandLineInterface#log-download > > Beyond having that same functionality in the web console, what else does > a user need? The problem is it's pretty limited. It doesn't support log files that may have been created via per-deployment logging where the deployment includes it's own logging configuration. It has to make assumptions that the line terminator is either LF or CRLF. 
The file has to be located in the jboss.server.log.dir directory as well. If you don't store your log files in that directory, you can't read them. >> The file must be defined as a known file handler. > IMHO, solving this would indeed be a good enhancement. This part is already done. Only files defined as a file-handler, periodic-rotating-file-handler or size-rotating-file-handler can be used. The problem is if you need extra functionality and you use a custom-handler you can't read the log file. >> There is >> also no way to filter results, e.g. errors only. > IMHO, solving this is NOT a good enhancement. Filtering should be left > to the user's log viewer on the client side. This part is a tough one. I do agree filter can be done on the client side, but that could result in large result sets being returned just to get a few messages. >> If per-deployment >> logging is used, those log messages are not viewable as the files are >> not accessible. > IMHO, solving this would be a good enhancement. Give the app developer > a way to have his log downloaded from a management operation. The question is how? We can maybe make some guesses if they use a JBoss Log Manager configuration. Though it's more common they use log4j with some specific handlers they have created. I can't think of a good way to determine which files should be allowed or not. Also opening up the allowed files to be outside the jboss.server.log.dir opens up security concerns. I mean I could just create a file handler with a path of say ${user.home}/.ssh/id_rsa and read the private key of whatever server it's running on :) > > IMHO, everything below adds lots of complexity and potentially slows > down the server. Simple downloading of logs is what the user really wants. >> For the list of requirements I'm going to be lazy and just give the link >> the wiki page https://community.jboss.org/wiki/LogViewerDesign. 
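The jboss.server.log.dir restriction James defends above is usually enforced with a canonical-path check, which is exactly what blocks the `${user.home}/.ssh/id_rsa` trick. A minimal sketch of such a guard — the class and method names are ours, not WildFly's actual implementation:

```java
import java.io.File;
import java.io.IOException;

public class LogPathGuard {
    // Resolve a requested file name against the log directory and reject
    // anything (e.g. "../.." sequences or symlinks) that escapes it after
    // canonicalization.
    public static File resolve(File logDir, String requested) throws IOException {
        File candidate = new File(logDir, requested).getCanonicalFile();
        String root = logDir.getCanonicalPath();
        if (!candidate.getPath().startsWith(root + File.separator)) {
            throw new SecurityException("Path escapes log directory: " + requested);
        }
        return candidate;
    }
}
```

Because getCanonicalFile resolves both `..` segments and symlinks before the prefix check, a handler configured to point outside the directory fails here rather than at read time.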
>> >> Implementation: >> >> 1) There will be a new resource on the logging subsystem resource that >> can be enabled or disabled, currently called log-collector. Probably >> some attributes, but I'm not sure what will need to be configurable at >> this point. This will likely act like a handler and be assignable only >> to loggers and not the async-handler. >> >> 2) If a deployment uses per-deployment logging then a separate >> log-collector will need to be added to the deployments log context >> >> 3) Logging profiles will also have their own log-collector. >> >> 4) The messages should be written asynchronously and to a file in some >> kind of formatted structure. The structure will likely be JSON. >> >> 5) An operation to query the messages will need to be create. This >> operation should allow the results to be filtered on various fields as >> well as limit the data set returned and allow for a starting position. >> >> 6) All operations associated with view the log should use RBAC to >> control the access. >> >> 7) Audit logs will eventually need to be viewable and queryable. This >> might be separate from the logging subsystem as it is now, but it will >> need to be done. >> >> >> There are things like how long or how many records should we keep that >> needs to be determined. This could possibly be configurable via >> attributes on the resource. >> >> This is about all I've got at this point. I'd appreciate any feedback. >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- James R. 
Perkins JBoss by Red Hat From smarlow at redhat.com Wed Jun 11 11:31:16 2014 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 11 Jun 2014 11:31:16 -0400 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <53986F54.60502@redhat.com> References: <539795B5.5030009@redhat.com> <5397B692.2060209@redhat.com> <53986F54.60502@redhat.com> Message-ID: <53987644.1080304@redhat.com> On 06/11/2014 11:01 AM, James R. Perkins wrote: > > On 06/10/2014 06:53 PM, Scott Marlow wrote: >> Any concern about the number of users viewing the server logs at the >> same time and the impact that could have on a system under load? For >> example, if a bunch of users arrive at work around the same time and >> they are all curious about how things went last night. They all could >> make a request to show the last 1000 lines of the server.log file (which >> could peg the CPU). You might ask why a large number of users have >> access to view the logs but the problem is still worth considering. > Actually it's not something I've thought of. Though I suppose this could > be an issue with any operation that returns large results. Would be good to have feedback on how many users are likely to concurrently view logs. I suspect the count will be higher than we might expect (depending on which users have access for a particular deployment). One possible solution could be a http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html that is configured for the maximum number of users allowed to view logs concurrently. For example, if the number of Semaphore permits is configured for five and eighty users are trying to view logs at the same time, logs will be returned for five users at a time (until all users have received their logs or a timeout occurs). There are probably other ways to deal with this as well. >> >> >> On 06/10/2014 07:33 PM, James R. 
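The permit idea can be sketched directly with java.util.concurrent.Semaphore and a timed tryAcquire. This is an illustration of the throttling pattern only — the class and method names are invented, not a WildFly API:

```java
import java.util.List;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical throttle: a fixed number of permits bounds how many
// log-view operations may run concurrently; extra callers wait up to a
// timeout and then fail fast instead of pegging the CPU.
public class LogViewThrottle {
    private final Semaphore permits;
    private final long timeoutSeconds;

    public LogViewThrottle(int maxConcurrentViewers, long timeoutSeconds) {
        // fair = true: waiting admins are served in arrival order
        this.permits = new Semaphore(maxConcurrentViewers, true);
        this.timeoutSeconds = timeoutSeconds;
    }

    public List<String> readLog(LogReader reader) {
        boolean acquired;
        try {
            acquired = permits.tryAcquire(timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while waiting for a log-view permit", e);
        }
        if (!acquired) {
            throw new IllegalStateException("Too many concurrent log viewers; try again later");
        }
        try {
            return reader.read();
        } finally {
            permits.release();
        }
    }

    public interface LogReader {
        List<String> read();
    }
}
```

With five permits and eighty simultaneous viewers, at most five reads are in flight at once and the rest either wait their turn or time out, matching the behavior described above.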
Perkins wrote: >>> While there wasn't a huge talk about this at the Brno meeting I know >>> Heiko brought it up as part of the extended metrics. It's on some future >>> product road maps as well too. I figured I might as well bring it up >>> here and get opinions on it. >>> >>> This design proposal covers how capturing log messages. The "viewer" >>> will likely be an operation that returns an object list of log record >>> details. The design of how a GUI view would look/work is beyond the >>> scope of this proposal. >>> >>> There is currently an operation to view a log file. This has several >>> limitations. The file must be defined as a known file handler. There is >>> also no way to filter results, e.g. errors only. If per-deployment >>> logging is used, those log messages are not viewable as the files are >>> not accessible. >>> >>> For the list of requirements I'm going to be lazy and just give the link >>> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >>> >>> Implementation: >>> >>> 1) There will be a new resource on the logging subsystem resource that >>> can be enabled or disabled, currently called log-collector. Probably >>> some attributes, but I'm not sure what will need to be configurable at >>> this point. This will likely act like a handler and be assignable only >>> to loggers and not the async-handler. >>> >>> 2) If a deployment uses per-deployment logging then a separate >>> log-collector will need to be added to the deployments log context >>> >>> 3) Logging profiles will also have their own log-collector. >>> >>> 4) The messages should be written asynchronously and to a file in some >>> kind of formatted structure. The structure will likely be JSON. >>> >>> 5) An operation to query the messages will need to be create. This >>> operation should allow the results to be filtered on various fields as >>> well as limit the data set returned and allow for a starting position. 
>>> >>> 6) All operations associated with viewing the log should use RBAC to >>> control the access. >>> >>> 7) Audit logs will eventually need to be viewable and queryable. This >>> might be separate from the logging subsystem as it is now, but it will >>> need to be done. >>> >>> >>> There are things like how long or how many records should we keep that >>> need to be determined. This could possibly be configurable via >>> attributes on the resource. >>> >>> This is about all I've got at this point. I'd appreciate any feedback. >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > From Anil.Saldhana at redhat.com Wed Jun 11 11:33:41 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Wed, 11 Jun 2014 10:33:41 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <539867F9.6020103@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> Message-ID: <539876D5.40906@redhat.com> On 06/11/2014 09:30 AM, David M. Lloyd wrote: > On 06/04/2014 11:07 AM, David M. Lloyd wrote: > [...] >> Example: Encrypting a new password >> ---------------------------------- >> >> PasswordFactory pf = PasswordFactory.getInstance("sha1crypt"); >> // API not yet established but will be similar to this possibly: >> parameters = new >> SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray()); >> Password encrypted = pf.generatePassword(parameters); >> assert encrypted instanceof SHA1CryptPassword; > I have a concrete specification for this example now: > > PasswordFactory pf = PasswordFactory.getInstance("sha-256-crypt"); > // use a 64-byte random salt; most algorithms support flexible sizes > byte[] salt = new byte[64]; > ThreadLocalRandom.current().nextBytes(salt); > // iteration count is 4096, can generally be more (or less) > AlgorithmParameterSpec aps = > new HashedPasswordAlgorithmSpec(4096, salt); > char[] chars = "p4ssw0rd".toCharArray(); > PasswordSpec spec = new EncryptablePasswordSpec(chars, aps); > Password pw = pf.generatePassword(spec); > assert pw.getAlgorithm().equals("sha-256-crypt"); > assert pw instanceof UnixSHACryptPassword; > assert pf.verifyPassword(pw, chars); > - Best is to make the salt and iteration count configurable. - Opportunities to inject a custom random generator. The following may be important: - Reading/writing the masked password to a file. I will think further on the use cases we have seen over the years and report back. From stuart.w.douglas at gmail.com Wed Jun 11 11:43:48 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Wed, 11 Jun 2014 10:43:48 -0500 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <53987644.1080304@redhat.com> References: <539795B5.5030009@redhat.com> <5397B692.2060209@redhat.com> <53986F54.60502@redhat.com> <53987644.1080304@redhat.com> Message-ID: <53987934.3020906@gmail.com> I don't think we should be worrying about that. Management operations happen under a global lock, and it is already possible to perform operations that return a lot of content (e.g. reading the whole resource tree). There would need to be a *lot* of admins and a very underpowered server to make this a problem, and even then the solution is 'don't do that'. Stuart Scott Marlow wrote: > On 06/11/2014 11:01 AM, James R.
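One side note on the salt generation in the sketch quoted above: ThreadLocalRandom is fast but not a cryptographically strong generator, so for password salts java.security.SecureRandom is the usual choice. Taking the generator as a parameter also covers Anil's point about injecting a custom random source. A small illustrative helper — not part of the proposed Elytron API:

```java
import java.security.SecureRandom;

public class Salts {
    // Generate a random salt using a caller-supplied generator, so tests or
    // hardware-backed generators can be injected instead of the default.
    public static byte[] newSalt(SecureRandom random, int length) {
        byte[] salt = new byte[length];
        random.nextBytes(salt);
        return salt;
    }
}
```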
Perkins wrote: >> On 06/10/2014 06:53 PM, Scott Marlow wrote: >>> Any concern about the number of users viewing the server logs at the >>> same time and the impact that could have on a system under load? For >>> example, if a bunch of users arrive at work around the same time and >>> they are all curious about how things went last night. They all could >>> make a request to show the last 1000 lines of the server.log file (which >>> could peg the CPU). You might ask why a large number of users have >>> access to view the logs but the problem is still worth considering. >> Actually it's not something I've thought of. Though I suppose this could >> be an issue with any operation that returns large results. > > Would be good to have feedback on how many users are likely to > concurrently view logs. I suspect the count will be higher than we > might expect (depending on which users have access for a particular > deployment). > > One possible solution could be a > http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html > that is configured for the maximum number of users allowed to view logs > concurrently. For example, if the number of Semaphore permits is > configured for five and eighty users are trying to view logs at the same > time, logs will be returned for five users at a time (until all users > have received their logs or a timeout occurs). > > There are probably other ways to deal with this as well. > >>> >>> On 06/10/2014 07:33 PM, James R. Perkins wrote: >>>> While there wasn't a huge talk about this at the Brno meeting I know >>>> Heiko brought it up as part of the extended metrics. It's on some future >>>> product road maps as well too. I figured I might as well bring it up >>>> here and get opinions on it. >>>> >>>> This design proposal covers how capturing log messages. The "viewer" >>>> will likely be an operation that returns an object list of log record >>>> details. 
The design of how a GUI view would look/work is beyond the >>>> scope of this proposal. >>>> >>>> There is currently an operation to view a log file. This has several >>>> limitations. The file must be defined as a known file handler. There is >>>> also no way to filter results, e.g. errors only. If per-deployment >>>> logging is used, those log messages are not viewable as the files are >>>> not accessible. >>>> >>>> For the list of requirements I'm going to be lazy and just give the link >>>> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >>>> >>>> Implementation: >>>> >>>> 1) There will be a new resource on the logging subsystem resource that >>>> can be enabled or disabled, currently called log-collector. Probably >>>> some attributes, but I'm not sure what will need to be configurable at >>>> this point. This will likely act like a handler and be assignable only >>>> to loggers and not the async-handler. >>>> >>>> 2) If a deployment uses per-deployment logging then a separate >>>> log-collector will need to be added to the deployments log context >>>> >>>> 3) Logging profiles will also have their own log-collector. >>>> >>>> 4) The messages should be written asynchronously and to a file in some >>>> kind of formatted structure. The structure will likely be JSON. >>>> >>>> 5) An operation to query the messages will need to be create. This >>>> operation should allow the results to be filtered on various fields as >>>> well as limit the data set returned and allow for a starting position. >>>> >>>> 6) All operations associated with view the log should use RBAC to >>>> control the access. >>>> >>>> 7) Audit logs will eventually need to be viewable and queryable. This >>>> might be separate from the logging subsystem as it is now, but it will >>>> need to be done. >>>> >>>> >>>> There are things like how long or how many records should we keep that >>>> needs to be determined. 
This could possibly be configurable via >>>> attributes on the resource. >>>> >>>> This is about all I've got at this point. I'd appreciate any feedback. >>>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From bburke at redhat.com Wed Jun 11 11:44:03 2014 From: bburke at redhat.com (Bill Burke) Date: Wed, 11 Jun 2014 11:44:03 -0400 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <539876D5.40906@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> <539876D5.40906@redhat.com> Message-ID: <53987943.4010604@redhat.com> On 6/11/2014 11:33 AM, Anil Saldhana wrote: > On 06/11/2014 09:30 AM, David M. Lloyd wrote: >> On 06/04/2014 11:07 AM, David M. Lloyd wrote: >> [...] >>> Example: Encrypting a new password >>> ---------------------------------- >>> >>> PasswordFactory pf = PasswordFactory.getInstance("sha1crypt"); >>> // API not yet established but will be similar to this possibly: >>> parameters = new >>> SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray()); >>> Password encrypted = pf.generatePassword(parameters); >>> assert encrypted instanceof SHA1CryptPassword; >> I have a concrete specification for this example now: >> >> PasswordFactory pf = PasswordFactory.getInstance("sha-256-crypt"); >> // use a 64-byte random salt; most algorithms support flexible sizes >> byte[] salt = new byte[64]; >> ThreadLocalRandom.current().nextBytes(salt); >> // iteration count is 4096, can generally be more (or less) >> AlgorithmParameterSpec aps = >> new HashedPasswordAlgorithmSpec(4096, salt); >> char[] chars = "p4ssw0rd".toCharArray(); >> PasswordSpec spec = new EncryptablePasswordSpec(chars, aps); >> Password pw = pf.generatePassword(spec); >> assert pw.getAlgorithm().equals("sha-256-crypt"); >> assert pw instanceof UnixSHACryptPassword; >> assert pf.verifyPassword(pw, chars); >> > - Best is to make the salt and iteration count configurable. +1. 5000 iterations is actually a *huge* performance hit, but unfortunately way lower than what I've seen recommended. (I've seen as high as 100,000 based on today's hardware.) In Keycloak we store the iteration count along with the password so that the admin can change the default iteration count in the future. We recalculate the hash on a successful login if the default count and user count are different. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com From stuart.w.douglas at gmail.com Wed Jun 11 11:57:13 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Wed, 11 Jun 2014 10:57:13 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5397209C.9090400@redhat.com> References: <5397209C.9090400@redhat.com> Message-ID: <53987C59.7010403@gmail.com> Something that I did not cover was how to actually do the split in terms of preserving history. We have a few options: 1) Just copy the files into a clean repo.
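Bill's Keycloak scheme above — store the iteration count (and salt) alongside the hash, verify against the stored count, and re-hash after a successful login when the default moves — can be sketched with the JDK's built-in PBKDF2. This is an illustration of the pattern only, using invented class names, not the Elytron API or Keycloak's actual code:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;

public class StoredPassword {
    final int iterations;   // stored with the hash so the default can change later
    final byte[] salt;
    final byte[] hash;

    StoredPassword(int iterations, byte[] salt, byte[] hash) {
        this.iterations = iterations;
        this.salt = salt;
        this.hash = hash;
    }

    static StoredPassword create(char[] password, int iterations) {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        return new StoredPassword(iterations, salt, hash(password, salt, iterations));
    }

    static byte[] hash(char[] password, byte[] salt, int iterations) {
        try {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return f.generateSecret(new PBEKeySpec(password, salt, iterations, 256)).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Verify against the *stored* iteration count, not the current default.
    boolean verify(char[] password) {
        return Arrays.equals(hash, hash(password, salt, iterations));
    }

    // After a successful login, callers can check this and re-hash the
    // (now known) plaintext with the new default iteration count.
    boolean needsRehash(int currentDefaultIterations) {
        return iterations != currentDefaultIterations;
    }
}
```

The key design point is that raising the default never invalidates existing credentials: old hashes keep verifying with their recorded count, and each account silently upgrades the next time its owner logs in.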
There is no history in the repo, but you could always check the existing wildfly repo if you really need it. 2) Copy the complete WF repo and then delete the parts that are not going to be part of the new repo. This leaves complete history, but means that the check outs will be larger than they should be. 3) Use git-filter-branch to create a new repo with just the history of the relevant files. We still have a small checkout size, but the history is still in the repo. I think we should go with option 3. Stuart Stuart Douglas wrote: > This design proposal covers the inter related tasks of splitting up the > build, and also creating a build/provisioning system that will make it > easy for end users to consume Wildfly. Apologies for the length, but it > is a complex topic. The first part explains what we are trying to > achieve, the second part covers how we are planning to actually > implement it. > > The Wildfly code base is over a million lines of java and has a test > suite that generally takes close to two hours to run in its entirety. > This makes the project very unwieldily, and the large size and slow test > suite makes development painful. > > To deal with this issue we are going to split the Wildfly code base into > smaller discrete repositories. The planned split is as follows: > > - Core: just the WF core > - Arquillian: the arquillian adaptors > - Servlet: a WF distribution with just Undertow, and some basic EE > functionality such as naming > - EE: All the core EE related functionality, EJB's, messaging etc > - Clustering: The core clustering functionality > - Console: The management console > - Dist: brings all the pieces together, and allows us to run all tests > against a full server > > Note that this list is in no way final, and is open to debate. We will > most likely want to split up the EE component at some point, possibly > along some kind of web profile/full profile type split. 
> > Each of these repos will build a feature pack, which will contain the > following: > > - Feature specification / description > - Core version requirements (e.g. WF10) > - Dependency info on other features (e.g. RestEASY X requires CDI 1.1) > - module.xml files for all required modules that are not provided by > other features > - References to maven GAVs for jars (possibly a level of indirection > here, module.xml may just contain the group and artifact, and the > version may be in a version.properties file to allow it to be easily > overridden) > - Default configuration snippet, subsystem snippets are packaged in the > subsystem jars, templates that combine them into config files are part > of the feature pack. > - Misc files (e.g. xsds) with indication of where on the path to place them > > Note that a feature pack is not a complete server, it cannot simply be > extracted and run, it first needs to be assembled into a server by the > provisioning tool. The feature packs also just contain references to the > maven GAV of required jars, they do not have the actual jars in the pack > (which should make them very lightweight). > > Feature packs will be assembled by the WF build tool, which is just a > maven plugin that will replace our existing hacky collection of ant > scripts. > > Actual server instances will be assembled by the provisioning tool, > which will be implemented as a library with several different front > ends, including a maven plugin and a CLI (possibly integrated into our > existing CLI).
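The group/artifact indirection described above, with versions kept in a version.properties file and overridable by the user, could be resolved roughly as follows. This is an illustrative sketch only; the class and method names are mine, not the actual WF build tool API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: resolves "group:artifact" references from module.xml files to
// full maven GAVs. Defaults would come from the feature pack's
// version.properties; overrides from the user's provisioning descriptor.
public class VersionResolver {

    private final Map<String, String> versions = new HashMap<>();

    public VersionResolver(Map<String, String> defaults, Map<String, String> overrides) {
        versions.putAll(defaults);
        // user-supplied overrides win, allowing individual component upgrades
        versions.putAll(overrides);
    }

    /** Turns a "group:artifact" reference into a full "group:artifact:version" GAV. */
    public String resolve(String groupArtifact) {
        String version = versions.get(groupArtifact);
        if (version == null) {
            throw new IllegalArgumentException("No version defined for " + groupArtifact);
        }
        return groupArtifact + ":" + version;
    }
}
```

For example, a provisioning descriptor asking for Resteasy 3.0.10 instead of the pack's default 3.0.8 would simply supply an override entry for that group:artifact pair.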
In general the provisioning tool will be able to > provision three different types of servers: > > - A traditional server with all jar files in the distribution > - A server that uses maven coordinates in module.xml files, with all > artifacts downloaded as part of the provisioning process > - As above, but with artifacts being lazily loaded as needed (not > recommended for production, but I think this may be useful from a > developer point of view) > > The provisioning tool will work from an XML descriptor that describes > the server that is to be built. In general this information will include: > > - GAV of the feature packs to use > - Filtering information if not all features from a pack are required > (e.g. just give me JAX-RS from the EE pack. In this case the only > modules/subsystems installed from the pack will be the modules and > subsystems that JAX-RS requires). > - Version overrides (e.g. give me Resteasy 3.0.10 instead of 3.0.8), > which will allow community users to easily upgrade individual components. > - Configuration changes that are required (e.g. some way to add a > datasource to the assembled server). The actual form this will take > still needs to be decided. Note that this needs to work on both a user > level (a user adding a datasource) and a feature pack level (e.g. the > JON feature pack adding a required data source). > - GAV of deployments to install in the server. This should allow a > server complete with deployments and the necessary config to be > assembled and be immediately ready to be put into service. > > Note that if you just want a full WF install you should be able to > provision it with a single line in the provisioning file, by specifying > the dist feature pack. We will still provide our traditional download, > which will be built by the provisioning tool as part of our build process. > > The provisioning tool will also be able to upgrade servers, which > basically consists of provisioning a new modules directory.
Rollback is > provided by provisioning from an earlier version of the provisioning file. > When a server is provisioned the tool will make a backup copy of the > file used, so it should always be possible to examine the provisioning > file that was used to build the current server config. > > Note that when an update is performed on an existing server, config will > not be updated, unless the update adds an additional config file, in > which case the new config file will be generated (however existing > config will not be touched). > > Note that as a result of this split we will need to do much more > frequent releases of the individual feature packs, to allow the most > recent code to be integrated into dist. > > Implementation Plan > > The above changes are obviously a big job, and will not happen > overnight. They are also highly likely to conflict with other changes, > so maintaining a long running branch that gets rebased is not a > practical option. Instead the plan is to perform the split in > incremental changes. The basic steps are listed below, some of which can > be performed in parallel. > > 1) Using the initial implementation of my build plugin (in my > wildfly-build-plugin branch) we split up the server along the lines > above. The code will all stay in the same repo, however the plugin will > be used to build all the individual pieces, which are then assembled as > part of the final build process. Note that the plugin in its current > form does both the build and provision step, and the pack format it > produces is far from the final pack format that we will want to use. > > 2) Split up the test suite into modules based on the features that they > test. This will result in several smaller modules in place of a single > large one, which should also be a usability improvement as individual > tests will be faster to run, and run times for all tests in a module > should be more manageable. > > 3) Split the core into its own module.
> > 4) Split everything else into its own module. As part of this step we > need to make sure we still have the ability to run all tests against the > full server, as well as against the cut down feature pack version of the > server. > > 5) Focus on the build and provisioning tool, to implement all the > features above, and to finalize the WF pack format. > > I think that just about covers it. There are still lots of nitty gritty > details that need to be worked out, however I think this covers all the > main aspects of the design. We are planning on starting work on this > basically immediately, as we want to get this implemented as early in > the WF9 cycle as possible. > > Stuart > > > > > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From brian.stansberry at redhat.com Wed Jun 11 12:07:20 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 11 Jun 2014 11:07:20 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5396377F.80003@redhat.com> References: <5396377F.80003@redhat.com> Message-ID: <53987EB8.4090903@redhat.com> On 6/9/14, 5:38 PM, Stuart Douglas wrote: > Server suspend and resume is a feature that allows a running server to > gracefully finish off all running requests. The most common use case for > this is graceful shutdown, where you would like a server to complete all > running requests, reject any new ones, and then shut down, however there > are also plenty of other valid use cases (e.g. suspend the server, > modify a data source or some other config, then resume). > > User View: > > From the user's point of view, two new operations will be added to the server: > > suspend(timeout) > resume() > > A runtime only attribute suspend-state (is this a good name?) will also > be added, that can take one of three possible values, RUNNING, > SUSPENDING, SUSPENDED.
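The three values proposed for the suspend-state attribute form a small state machine. A sketch of the legal transitions, based only on the behavior the proposal describes (the enum and method names are illustrative, not the actual WildFly API):

```java
// Sketch of the proposed suspend-state values and their legal transitions:
//   RUNNING    -> SUSPENDING  (suspend() invoked)
//   SUSPENDING -> SUSPENDED   (all subsystems report zero outstanding requests)
//   any state  -> RUNNING     (resume() invoked)
public enum SuspendState {
    RUNNING, SUSPENDING, SUSPENDED;

    public boolean canTransitionTo(SuspendState next) {
        switch (next) {
            case RUNNING:    return true;               // resume() is always allowed
            case SUSPENDING: return this == RUNNING;    // only from a running server
            case SUSPENDED:  return this == SUSPENDING; // only once draining completes
            default:         return false;
        }
    }
}
```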
> > A timeout attribute will also be added to the shutdown operation. If > this is present then the server will first be suspended, and the server > will not shut down until either the suspend is successful or the timeout > occurs. If no timeout parameter is passed to the operation then a normal > non-graceful shutdown will take place. > > In domain mode these operations will be added to both individual servers > and complete server groups. > > Implementation Details > > Suspend/resume operates on entry points to the server. Any request that > is currently running must not be affected by the suspend state, however > any new request should be rejected. In general subsystems will track the > number of outstanding requests, and when this hits zero they are > considered suspended. > > We will introduce the notion of a global SuspendController, that manages > the server's suspend state. All subsystems that wish to do a graceful > shutdown register callback handlers with this controller. > > When the suspend() operation is invoked the controller will invoke all > these callbacks, letting the subsystem know that the server is suspending, > and providing the subsystem with a SuspendContext object that the > subsystem can then use to notify the controller that the suspend is > complete. > > What the subsystem does when it receives a suspend command, and when it > considers itself suspended will vary, but in the common case it will > immediately start rejecting external requests (e.g. Undertow will start > responding with a 503 to all new requests). I think there will need to be some mechanism for coordination between subsystems here. For example, I doubt mod_cluster will want Undertow deciding to start sending 503s before it gets a chance to get the LB sorted. > The subsystem will also > track the number of outstanding requests, and when this hits zero then > the subsystem will notify the controller that it has successfully > suspended.
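The callback scheme Stuart describes (subsystems register with a global controller, are handed a SuspendContext, and report back once their request count drains to zero) could look roughly like this. The names follow the proposal's wording but are otherwise guesses, and a real implementation would be asynchronous and thread-safe in ways this sketch glosses over:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a global controller that notifies registered subsystems of a
// suspend request and considers the server suspended once every subsystem
// has reported completion.
public class SuspendController {

    public interface SuspendContext {
        void suspended(); // a subsystem calls this once its request count hits zero
    }

    public interface SuspendCallback {
        void suspend(SuspendContext context);
        void resume();
    }

    private final List<SuspendCallback> callbacks = new CopyOnWriteArrayList<>();
    private final AtomicInteger pending = new AtomicInteger();
    private volatile boolean suspended;

    public void register(SuspendCallback callback) {
        callbacks.add(callback);
    }

    public void suspend() {
        pending.set(callbacks.size());
        if (callbacks.isEmpty()) {
            suspended = true;
            return;
        }
        for (SuspendCallback cb : callbacks) {
            // each subsystem starts rejecting new work, then reports back
            cb.suspend(() -> {
                if (pending.decrementAndGet() == 0) {
                    suspended = true; // all subsystems have drained
                }
            });
        }
    }

    public void resume() {
        suspended = false;
        callbacks.forEach(SuspendCallback::resume);
    }

    public boolean isSuspended() {
        return suspended;
    }
}
```

In this shape a subsystem such as a web server would call `suspended()` only after its in-flight request counter reaches zero, which matches the "track outstanding requests" behavior described above.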
> Some subsystems will obviously want to do other actions on suspend, e.g. > clustering will likely want to fail over, mod_cluster will notify the > load balancer that the node is no longer available etc. In some cases we > may want to make this configurable to an extent (e.g. Undertow could be > configured to allow requests with an existing session, and not consider > itself timed out until all sessions have either timed out or been > invalidated, although this will obviously take a while). > > If anyone has any feedback let me know. In terms of implementation my > basic plan is to get the core functionality and the Undertow > implementation into Wildfly, and then work with subsystem authors to > implement subsystem specific functionality once the core is in place. > > Stuart > > > > > > > > The > > A timeout attribute will also be added to the shutdown command, > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From smarlow at redhat.com Wed Jun 11 12:08:09 2014 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 11 Jun 2014 12:08:09 -0400 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <53987934.3020906@gmail.com> References: <539795B5.5030009@redhat.com> <5397B692.2060209@redhat.com> <53986F54.60502@redhat.com> <53987644.1080304@redhat.com> <53987934.3020906@gmail.com> Message-ID: <53987EE9.2060801@redhat.com> On 06/11/2014 11:43 AM, Stuart Douglas wrote: > I don't think we should be worrying about that. Management operations > happen under a global lock, and it is already possible to perform > operations that return a lot of content (e.g. reading the whole resource > tree). If we already have a single mutually exclusive, global lock in use for operations like "viewing logs", I'm less worried. 
> > There would need to be a *lot* of admins and a very under powered server > to make this a problem, and even then the solution is 'don't do that'. I've seen this "lot of admins" situation before with log viewing, which is why I brought it up. > > Stuart > > Scott Marlow wrote: >> On 06/11/2014 11:01 AM, James R. Perkins wrote: >>> On 06/10/2014 06:53 PM, Scott Marlow wrote: >>>> Any concern about the number of users viewing the server logs at the >>>> same time and the impact that could have on a system under load? For >>>> example, if a bunch of users arrive at work around the same time and >>>> they are all curious about how things went last night. They all could >>>> make a request to show the last 1000 lines of the server.log file >>>> (which >>>> could peg the CPU). You might ask why a large number of users have >>>> access to view the logs but the problem is still worth considering. >>> Actually it's not something I've thought of. Though I suppose this could >>> be an issue with any operation that returns large results. >> >> Would be good to have feedback on how many users are likely to >> concurrently view logs. I suspect the count will be higher than we >> might expect (depending on which users have access for a particular >> deployment). >> >> One possible solution could be a >> http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html >> >> that is configured for the maximum number of users allowed to view logs >> concurrently. For example, if the number of Semaphore permits is >> configured for five and eighty users are trying to view logs at the same >> time, logs will be returned for five users at a time (until all users >> have received their logs or a timeout occurs). >> >> There are probably other ways to deal with this as well. >> >>>> >>>> On 06/10/2014 07:33 PM, James R. Perkins wrote: >>>>> While there wasn't a huge talk about this at the Brno meeting I know >>>>> Heiko brought it up as part of the extended metrics. 
It's on some >>>>> future >>>>> product road maps as well too. I figured I might as well bring it up >>>>> here and get opinions on it. >>>>> >>>>> This design proposal covers how capturing log messages. The "viewer" >>>>> will likely be an operation that returns an object list of log record >>>>> details. The design of how a GUI view would look/work is beyond the >>>>> scope of this proposal. >>>>> >>>>> There is currently an operation to view a log file. This has several >>>>> limitations. The file must be defined as a known file handler. >>>>> There is >>>>> also no way to filter results, e.g. errors only. If per-deployment >>>>> logging is used, those log messages are not viewable as the files are >>>>> not accessible. >>>>> >>>>> For the list of requirements I'm going to be lazy and just give the >>>>> link >>>>> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >>>>> >>>>> Implementation: >>>>> >>>>> 1) There will be a new resource on the logging subsystem resource that >>>>> can be enabled or disabled, currently called log-collector. Probably >>>>> some attributes, but I'm not sure what will need to be configurable at >>>>> this point. This will likely act like a handler and be assignable only >>>>> to loggers and not the async-handler. >>>>> >>>>> 2) If a deployment uses per-deployment logging then a separate >>>>> log-collector will need to be added to the deployments log context >>>>> >>>>> 3) Logging profiles will also have their own log-collector. >>>>> >>>>> 4) The messages should be written asynchronously and to a file in some >>>>> kind of formatted structure. The structure will likely be JSON. >>>>> >>>>> 5) An operation to query the messages will need to be create. This >>>>> operation should allow the results to be filtered on various fields as >>>>> well as limit the data set returned and allow for a starting position. >>>>> >>>>> 6) All operations associated with view the log should use RBAC to >>>>> control the access. 
>>>>> >>>>> 7) Audit logs will eventually need to be viewable and queryable. This >>>>> might be separate from the logging subsystem as it is now, but it will >>>>> need to be done. >>>>> >>>>> >>>>> There are things like how long or how many records should we keep that >>>>> need to be determined. This could possibly be configurable via >>>>> attributes on the resource. >>>>> >>>>> This is about all I've got at this point. I'd appreciate any feedback. >>>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev From tomaz.cerar at gmail.com Wed Jun 11 12:08:53 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Wed, 11 Jun 2014 09:08:53 -0700 Subject: [wildfly-dev] Design Proposal: Build split and provisioning Message-ID: <55136749173123130@unknownmsgid> I already have some work done for 3)... Sent from my Phone From: Stuart Douglas Sent: 11.6.2014 17:58 To: Stuart Douglas Cc: Wildfly Dev mailing list Subject: Re: [wildfly-dev] Design Proposal: Build split and provisioning
_______________________________________________ wildfly-dev mailing list wildfly-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/wildfly-dev From david.lloyd at redhat.com Wed Jun 11 12:14:12 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 11 Jun 2014 11:14:12 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <53987943.4010604@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> <539876D5.40906@redhat.com> <53987943.4010604@redhat.com> Message-ID: <53988054.4090208@redhat.com> On 06/11/2014 10:44 AM, Bill Burke wrote: > > > On 6/11/2014 11:33 AM, Anil Saldhana wrote: >> On 06/11/2014 09:30 AM, David M. Lloyd wrote: >>> On 06/04/2014 11:07 AM, David M. Lloyd wrote: >>> [...] >>>> Example: Encrypting a new password >>>> ---------------------------------- >>>> >>>> PasswordFactory pf = PasswordFactory.getInstance("sha1crypt"); >>>> // API not yet established but will be similar to this possibly: >>>> SHA1CryptPasswordParameterSpec
parameters = new >>>> SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray()); >>>> Password encrypted = pf.generatePassword(parameters); >>>> assert encrypted instanceof SHA1CryptPassword; >>> I have a concrete specification for this example now: >>> >>> PasswordFactory pf = PasswordFactory.getInstance("sha-256-crypt"); >>> // use a 64-byte random salt; most algorithms support flexible sizes >>> byte[] salt = new byte[64]; >>> ThreadLocalRandom.current().nextBytes(salt); >>> // iteration count is 4096, can generally be more (or less) >>> AlgorithmParameterSpec aps = >>> new HashedPasswordAlgorithmSpec(4096, salt); >>> char[] chars = "p4ssw0rd".toCharArray(); >>> PasswordSpec spec = new EncryptablePasswordSpec(chars, aps); >>> Password pw = pf.generatePassword(spec); >>> assert pw.getAlgorithm().equals("sha-256-crypt"); >>> assert pw instanceof UnixSHACryptPassword; >>> assert pf.verifyPassword(pw, chars); >>> >> - Best is to make the salt and iteration count configurable. > > +1 > > 5000 iterations is actually a *huge* performance hit, but unfortunately > way lower than what I've seen recommended. (I've seen as high as > 100,000 based on today's hardware). Yeah the point of having the algorithm parameter spec is to allow these things to be specified. Iteration count is recommended to be pretty high these days, unfortunately, but with this kind of parameter spec, it is completely configurable so if there's some reason to use a lower count (or a higher one), you can certainly do it. > In Keycloak we store the iteration count along with the password so that > the admin can change the default iteration count in the future. We > recalculate the hash on a successful login if the default count and user > count are different. Yeah the newer SASL SCRAM mechanisms (and other challenge-response mechanisms like Digest-MD5 and, I believe, HTTP's digest) also have some support for caching pre-hashed passwords to help performance.
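The Keycloak approach Bill describes (store the iteration count alongside the hash, and rehash on a successful login whenever the stored count differs from the current default) can be sketched with the JDK's standard PBKDF2 support. This is an illustration only, not the Elytron or Keycloak code; the class and method names are invented:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch: a stored credential that carries its own salt and iteration
// count, so the default count can be raised later without a bulk migration.
public class StoredPassword {
    final byte[] salt;
    final int iterations; // stored with the hash so the default can change later
    final byte[] hash;

    StoredPassword(byte[] salt, int iterations, byte[] hash) {
        this.salt = salt;
        this.iterations = iterations;
        this.hash = hash;
    }

    static StoredPassword create(char[] password, int iterations) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // salts want a CSPRNG
        return new StoredPassword(salt, iterations, pbkdf2(password, salt, iterations));
    }

    static byte[] pbkdf2(char[] password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    // Returns null on a failed check; on success, returns either this entry
    // unchanged or a rehashed replacement when the default count has changed.
    StoredPassword verifyAndMaybeUpgrade(char[] password, int defaultIterations) throws Exception {
        if (!MessageDigest.isEqual(hash, pbkdf2(password, salt, iterations))) {
            return null; // wrong password
        }
        return iterations == defaultIterations ? this : create(password, defaultIterations);
    }
}
```

`MessageDigest.isEqual` is used rather than `Arrays.equals` to keep the comparison time-constant; the 256-bit key length and 16-byte salt here are illustrative choices, not recommendations from the thread.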
While on the one hand, this means that the hash is essentially sufficient to authenticate, on the other hand the server can always periodically regenerate the hash with a different salt, which causes the previous hashed password to essentially become invalid without actually requiring a password change. -- - DML From brian.stansberry at redhat.com Wed Jun 11 12:21:33 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 11 Jun 2014 11:21:33 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5397306D.4060705@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> <5397289F.8030404@redhat.com> <53972930.9030809@gmail.com> <5397306D.4060705@redhat.com> Message-ID: <5398820D.6060901@redhat.com> I do think these are orthogonal and should not be combined. The existing attribute is fundamentally about how the state of the runtime services relates to the persistent configuration. STARTING == out of sync due to still getting in sync during start RUNNING == in sync RELOAD_REQUIRED == out of sync, needs a reload to get in sync RESTART_REQUIRED == out of sync, needs a full process restart to get in sync There are two problems though with the existing attribute that exposes this: 1) It's named "server-state" on a server and "host-state" on a Host Controller. Really crappy name; way too broad. That's fixable by creating a new attribute and making the old one an alias for compatibility purposes. 2) The RUNNING state is really poorly named. This could perhaps be fixed by coming up with a new name and translating it back to "RUNNING" in the handlers for the legacy "server-state" and "host-state" attributes. On 6/10/14, 11:21 AM, Dimitris Andreadis wrote: > Sure.
Which justifies trying to avoid those issues in the first place ;) > > On 10/06/2014 17:50, Stuart Douglas wrote: >> We can't really change that now, as it is part of our existing API. >> >> Stuart >> >> Dimitris Andreadis wrote: >>> It seems to me RESTART_REQUIRED (or RELOAD_REQUIRED) should be a boolean >>> on its own to simplify the state diagram. >>> >>> On 10/06/2014 17:40, Stuart Douglas wrote: >>>> I don't think so, I think RESTART_REQUIRED means running, but I need >>>> to restart to apply >>>> management changes (I think that attribute can also be >>>> RELOAD_REQUIRED, I think the >>>> description may be a bit out of date). >>>> >>>> To accurately reflect all the possible states you would need something >>>> like: >>>> >>>> RUNNING >>>> PAUSING, >>>> PAUSED, >>>> RESTART_REQUIRED >>>> PAUSING_RESTART_REQUIRED >>>> PAUSED_RESTART_REQUIRED >>>> RELOAD_REQUIRED >>>> PAUSING_RELOAD_REQUIRED >>>> PAUSED_RELOAD_REQUIRED >>>> >>>> Which does not seem great, and may introduce compatibility problems >>>> for clients that are not >>>> expecting these new values. >>>> >>>> Stuart >>>> >>>> >>>> >>>> Dimitris Andreadis wrote: >>>>> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >>>>> >>>>> On 10/06/2014 17:17, Stuart Douglas wrote: >>>>>> They are actually orthogonal, a server can be in both RESTART_REQUIRED >>>>>> and any one of the >>>>>> suspend states. >>>>>> >>>>>> RESTART_REQUIRED is very much tied to services and the management >>>>>> model, while >>>>>> suspend/resume is a runtime only thing that should not touch the state >>>>>> of services. 
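Brian's point that the two dimensions are orthogonal suggests modeling them as two independent attributes rather than one product enum, which is exactly what avoids the nine-value explosion Stuart lists. A sketch, with names of my own choosing:

```java
// Sketch: two orthogonal dimensions of server state kept as separate
// attributes, instead of one combined enum that would need every cross
// product (PAUSED_RESTART_REQUIRED, PAUSING_RELOAD_REQUIRED, ...).
public class ServerStatus {

    enum SuspendState { RUNNING, SUSPENDING, SUSPENDED }

    enum ConfigState { STARTING, IN_SYNC, RELOAD_REQUIRED, RESTART_REQUIRED }

    private SuspendState suspendState = SuspendState.RUNNING;
    private ConfigState configState = ConfigState.IN_SYNC;

    // the two attributes change independently of each other
    void configChanged(ConfigState required) { this.configState = required; }
    void suspending() { this.suspendState = SuspendState.SUSPENDING; }
    void suspended()  { this.suspendState = SuspendState.SUSPENDED; }
    void resume()     { this.suspendState = SuspendState.RUNNING; }

    SuspendState suspendState() { return suspendState; }
    ConfigState configState()   { return configState; }
}
```

With this shape a server can be both suspended and restart-required at the same time without either attribute needing to know about the other, and existing clients reading the legacy attribute are unaffected by new suspend states.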
>>>>>> >>>>>> Stuart >>>>>> >>>>>> Dimitris Andreadis wrote: >>>>>>> Why not extend the states of the existing 'server-state' attribute to: >>>>>>> >>>>>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>>>>>> >>>>>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>>>>> >>>>>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>>>>> >>>>>>>> Scott Marlow wrote: >>>>>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>>>>> Server suspend and resume is a feature that allows a running >>>>>>>>>> server to >>>>>>>>>> gracefully finish off all running requests. The most common use >>>>>>>>>> case for >>>>>>>>>> this is graceful shutdown, where you would like a server to >>>>>>>>>> complete all >>>>>>>>>> running requests, reject any new ones, and then shut down, however >>>>>>>>>> there >>>>>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>>>>> modify a data source or some other config, then resume). >>>>>>>>>> >>>>>>>>>> User View: >>>>>>>>>> >>>>>>>>>> From the user's point of view two new operations will be added to >>>>>>>>>> the server: >>>>>>>>>> >>>>>>>>>> suspend(timeout) >>>>>>>>>> resume() >>>>>>>>>> >>>>>>>>>> A runtime only attribute suspend-state (is this a good name?) will >>>>>>>>>> also >>>>>>>>>> be added, that can take one of three possible values, RUNNING, >>>>>>>>>> SUSPENDING, SUSPENDED. >>>>>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>>>>> just >>>>>>>>> as meaningful. >>>>>>>> This will be in the global server namespace (i.e. from the CLI >>>>>>>> :read-attribute(name="suspend-state")). >>>>>>>> >>>>>>>> I think the name 'state' is just too generic; which kind of state >>>>>>>> are we >>>>>>>> talking about? >>>>>>>> >>>>>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>>>>> SUSPENDING? >>>>>>>> 99.99% of the time.
Basically servers are always running unless they >>>>>>>> have been explicitly suspended, and then they go from suspending to >>>>>>>> suspended. Note that if resume is called at any time the server >>>>>>>> goes to >>>>>>>> RUNNING again immediately, as when subsystems are notified they >>>>>>>> should >>>>>>>> be able to begin accepting requests again straight away. >>>>>>>> >>>>>>>> We also have admin only mode, which is a kinda similar concept, so we >>>>>>>> need to make sure we document the differences. >>>>>>>> >>>>>>>>>> A timeout attribute will also be added to the shutdown >>>>>>>>>> operation. If >>>>>>>>>> this is present then the server will first be suspended, and the >>>>>>>>>> server >>>>>>>>>> will not shut down until either the suspend is successful or the >>>>>>>>>> timeout >>>>>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>>>>> normal >>>>>>>>>> non-graceful shutdown will take place. >>>>>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>>>>> immediately (call System.exit())? >>>>>>>> It will execute the same way it does today (all services will shut >>>>>>>> down >>>>>>>> and then the server will exit). >>>>>>>> >>>>>>>> Stuart >>>>>>>> >>>>>>>>>> In domain mode these operations will be added to both individual >>>>>>>>>> servers >>>>>>>>>> and complete server groups. >>>>>>>>>> >>>>>>>>>> Implementation Details >>>>>>>>>> >>>>>>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>>>>>> that >>>>>>>>>> is currently running must not be affected by the suspend state, >>>>>>>>>> however >>>>>>>>>> any new request should be rejected. In general subsystems will >>>>>>>>>> track the >>>>>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>>>>> considered suspended. >>>>>>>>>> >>>>>>>>>> We will introduce the notion of a global SuspendController, that >>>>>>>>>> manages >>>>>>>>>> the server's suspend state.
All subsystems that wish to do a >>>>>>>>>> graceful >>>>>>>>>> shutdown register callback handlers with this controller. >>>>>>>>>> >>>>>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>>>>> all >>>>>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>>>>> suspending, >>>>>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>>>>> complete. >>>>>>>>>> >>>>>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>>>>> when it >>>>>>>>>> considers itself suspended will vary, but in the common case it >>>>>>>>>> will >>>>>>>>>> immediately start rejecting external requests (e.g. Undertow will >>>>>>>>>> start >>>>>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>>>>> then >>>>>>>>>> the subsystem will notify the controller that it has successfully >>>>>>>>>> suspended. >>>>>>>>>> Some subsystems will obviously want to do other actions on >>>>>>>>>> suspend, e.g. >>>>>>>>>> clustering will likely want to fail over, mod_cluster will >>>>>>>>>> notify the >>>>>>>>>> load balancer that the node is no longer available etc. In some >>>>>>>>>> cases we >>>>>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>>>>> could be >>>>>>>>>> configured to allow requests with an existing session, and not >>>>>>>>>> consider >>>>>>>>>> itself suspended until all sessions have either timed out or been >>>>>>>>>> invalidated, although this will obviously take a while). >>>>>>>>>> >>>>>>>>>> If anyone has any feedback let me know.
In terms of >>>>>>>>>> implementation my >>>>>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>>>>> implementation into Wildfly, and then work with subsystem >>>>>>>>>> authors to >>>>>>>>>> implement subsystem specific functionality once the core is in >>>>>>>>>> place. >>>>>>>>>> >>>>>>>>>> Stuart >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The >>>>>>>>>> >>>>>>>>>> A timeout attribute will also be added to the shutdown command, >>>>>>>>>> _______________________________________________ >>>>>>>>>> wildfly-dev mailing list >>>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> wildfly-dev mailing list >>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>> _______________________________________________ >>>>>>>> wildfly-dev mailing list >>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> wildfly-dev mailing list >>>>>>> wildfly-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Wed Jun 11 12:47:02 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 11 Jun 2014 11:47:02 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <5398820D.6060901@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> 
<5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> <5397289F.8030404@redhat.com> <53972930.9030809@gmail.com> <5397306D.4060705@redhat.com> <5398820D.6060901@redhat.com> Message-ID: <53988806.7090301@redhat.com> The STARTING state in the existing attribute makes me think an equivalent thing is needed for this concept. STARTING in the existing attribute means the runtime services are possibly out of sync due to boot. Doesn't a similar problem exist with RUNNING, SUSPENDING, SUSPENDED? It's about how the server is reacting to external requests. There's some state during boot/reload when the server is not reacting normally to external requests. Perhaps that's just another condition where the server is SUSPENDED. This leads to whether this whole mechanism can be used to provide "Graceful Startup". We have problems with this now; endpoints accepting requests before everything is fully ready, leading to things like 404s because a deployment isn't installed yet. On 6/11/14, 11:21 AM, Brian Stansberry wrote: > I do think these are orthogonal and should not be combined. > > The existing attribute is fundamentally about how the state of the > runtime services relates to the persistent configuration. > > STARTING == out of sync due to still getting in sync during start > RUNNING == in sync > RELOAD_REQUIRED = out of sync, needs a reload to get in sync > RESTART_REQUIRED = out of sync, needs a full process restart to get in sync > > There are two problems though with the existing attribute that exposes this: > > 1) It's named "server-state" on a server and "host-state" on a Host > Controller. Really crappy name; way too broad. > > That's fixable by creating a new attribute and making the old one an > alias for compatibility purposes. > > 2) The RUNNING state is really poorly named. > > That could perhaps be fixed by coming up with a new name and translating > it back to "RUNNING" in the handlers for the legacy "server-state" and > "host-state" attributes.
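The request-tracking state machine discussed in this thread (RUNNING, SUSPENDING, SUSPENDED, with new requests rejected and in-flight requests drained) can be sketched in a few lines. This is purely an illustrative sketch: the class and method names below (SuspendController, requestStarted, requestEnded) are invented for the example and are not the actual WildFly API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the suspend-state idea from the thread; not the
// real WildFly SuspendController API.
public class SuspendController {
    public enum State { RUNNING, SUSPENDING, SUSPENDED }

    private volatile State state = State.RUNNING;
    private final AtomicInteger activeRequests = new AtomicInteger();

    /** Entry points call this for each new request; returns false if rejected. */
    public synchronized boolean requestStarted() {
        if (state != State.RUNNING) {
            return false; // e.g. Undertow would answer 503 here
        }
        activeRequests.incrementAndGet();
        return true;
    }

    /** Entry points call this when a request completes. */
    public synchronized void requestEnded() {
        if (activeRequests.decrementAndGet() == 0 && state == State.SUSPENDING) {
            state = State.SUSPENDED; // last in-flight request has drained
        }
    }

    /** Stop accepting new requests; SUSPENDED once in-flight work drains. */
    public synchronized void suspend() {
        state = activeRequests.get() == 0 ? State.SUSPENDED : State.SUSPENDING;
    }

    /** Resume accepting requests again immediately. */
    public synchronized void resume() {
        state = State.RUNNING;
    }

    public State getState() {
        return state;
    }
}
```

Note how resume() moves straight back to RUNNING from any state, matching Stuart's point that subsystems should begin accepting requests again right away.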
> > [...] >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Wed Jun 11 13:10:33 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Wed, 11 Jun 2014 12:10:33 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5397372F.2020205@gmail.com> References: <5397209C.9090400@redhat.com> <53973592.3050700@redhat.com> <5397372F.2020205@gmail.com> Message-ID: <53988D89.5020601@redhat.com> On
6/10/14, 11:49 AM, Stuart Douglas wrote: > >> >> FWIW IBM uses the term "feature pack" for WebSphere Application Server >> extras. Though they tend to be huge and not easy to apply. > > If anyone has any better names I would love to hear them. > I don't consider the fact that someone else uses the same term for a similar thing to be a negative. Unless that thing isn't really similar at all. -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From jperkins at redhat.com Wed Jun 11 13:30:11 2014 From: jperkins at redhat.com (James R. Perkins) Date: Wed, 11 Jun 2014 10:30:11 -0700 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <53988D89.5020601@redhat.com> References: <5397209C.9090400@redhat.com> <53973592.3050700@redhat.com> <5397372F.2020205@gmail.com> <53988D89.5020601@redhat.com> Message-ID: <53989223.5010209@redhat.com> On 06/11/2014 10:10 AM, Brian Stansberry wrote: > On 6/10/14, 11:49 AM, Stuart Douglas wrote: >>> FWIW IBM uses the term "feature pack" for WebSphere Application Server >>> extras. Though they tend to be huge and not easy to apply. >> If anyone has any better names I would love to hear them. >> > I don't consider the fact that someone else uses the same term for a > similar thing to be a negative. Unless that thing isn't really similar > at all. Agreed, I just wanted to point it out. I was mainly a bit concerned with how converted devs might look at it. I don't know what most users think, but I know when I saw WAS needed a feature pack to do what I was doing I would just skip it unless I really needed it. The WAS feature packs were usually huge and I never liked the way they applied. That said, using the same name is also a positive for converts, as they know what it means. > > -- James R.
Perkins JBoss by Red Hat From florian.pirchner at gmail.com Thu Jun 12 01:38:44 2014 From: florian.pirchner at gmail.com (Florian Pirchner) Date: Thu, 12 Jun 2014 07:38:44 +0200 Subject: [wildfly-dev] Subsystems Message-ID: <53993CE4.1060509@gmail.com> Hi, I got a question. Are subsystems in WildFly 8 based on the OSGi subsystem specification? It seems that OSGi was removed from the kernel and can be added as an addon; right? Thanks, Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140612/6f35216e/attachment.html From qutpeter at gmail.com Thu Jun 12 02:51:05 2014 From: qutpeter at gmail.com (Peter Cai) Date: Thu, 12 Jun 2014 16:51:05 +1000 Subject: [wildfly-dev] Subsystems In-Reply-To: <53993CE4.1060509@gmail.com> References: <53993CE4.1060509@gmail.com> Message-ID: Wildfly is based on jboss-modules, rather than OSGi. Regards, On Thu, Jun 12, 2014 at 3:38 PM, Florian Pirchner < florian.pirchner at gmail.com> wrote: > Hi, > > I got a question. Are subsystems in WildFly 8 based on the OSGi subsystem > specification? > > It seems that OSGi was removed from the kernel and can be added as an addon; > right? > > Thanks, Florian > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140612/89e0c9b0/attachment.html From ehugonne at redhat.com Thu Jun 12 06:33:26 2014 From: ehugonne at redhat.com (Emmanuel Hugonnet) Date: Thu, 12 Jun 2014 12:33:26 +0200 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <53987C59.7010403@gmail.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> Message-ID: <539981F6.904@redhat.com> Hi, Concerning tests and the integration tests in the test suite, will the splitting require them to be run twice or more? Let's say we test CDI integration with JAX-RS, will it be tested in the JAX-RS feature pack, or in JAX-RS and JavaEE and the complete dist? Cheers, Emmanuel Le 11/06/2014 17:57, Stuart Douglas a écrit : > Something that I did not cover was how to actually do the split in terms > of preserving history. We have a few options: > > 1) Just copy the files into a clean repo. There is no history in the > repo, but you could always check the existing wildfly repo if you really > need it. > > 2) Copy the complete WF repo and then delete the parts that are not > going to be part of the new repo. This leaves complete history, but > means that the checkouts will be larger than they should be. > > 3) Use git-filter-branch to create a new repo with just the history of > the relevant files. We still have a small checkout size, but the history > is still in the repo. > > I think we should go with option 3. > > Stuart > > Stuart Douglas wrote: >> This design proposal covers the interrelated tasks of splitting up the >> build, and also creating a build/provisioning system that will make it >> easy for end users to consume Wildfly. Apologies for the length, but it >> is a complex topic. The first part explains what we are trying to >> achieve, the second part covers how we are planning to actually >> implement it.
>> >> The Wildfly code base is over a million lines of Java and has a test >> suite that generally takes close to two hours to run in its entirety. >> This makes the project very unwieldy, and the large size and slow test >> suite makes development painful. >> >> To deal with this issue we are going to split the Wildfly code base into >> smaller discrete repositories. The planned split is as follows: >> >> - Core: just the WF core >> - Arquillian: the arquillian adaptors >> - Servlet: a WF distribution with just Undertow, and some basic EE >> functionality such as naming >> - EE: All the core EE related functionality, EJBs, messaging etc >> - Clustering: The core clustering functionality >> - Console: The management console >> - Dist: brings all the pieces together, and allows us to run all tests >> against a full server >> >> Note that this list is in no way final, and is open to debate. We will >> most likely want to split up the EE component at some point, possibly >> along some kind of web profile/full profile type split. >> >> Each of these repos will build a feature pack, which will contain the >> following: >> >> - Feature specification / description >> - Core version requirements (e.g. WF10) >> - Dependency info on other features (e.g. RestEASY X requires CDI 1.1) >> - module.xml files for all required modules that are not provided by >> other features >> - References to Maven GAVs for jars (possibly a level of indirection >> here, module.xml may just contain the group and artifact, and the >> version may be in a version.properties file to allow it to be easily >> overridden) >> - Default configuration snippet, subsystem snippets are packaged in the >> subsystem jars, templates that combine them into config files are part >> of the feature pack.
>> xsds) with indication of where on path to place them >> >> Note that a feature pack is not a complete server, it cannot simply be >> extracted and run, it first needs to be assembled into a server by the >> provisioning tool. The feature packs also just contain references to the >> Maven GAV of required jars, they do not have the actual jars in the pack >> (which should make them very lightweight). >> >> Feature packs will be assembled by the WF build tool, which is just a >> maven plugin that will replace our existing hacky collection of ant >> scripts. >> >> Actual server instances will be assembled by the provisioning tool, >> which will be implemented as a library with several different front >> ends, including a maven plugin and a CLI (possibly integrated into our >> existing CLI). In general the provisioning tool will be able to >> provision three different types of servers: >> >> - A traditional server with all jar files in the distribution >> - A server that uses maven coordinates in module.xml files, with all >> artifacts downloaded as part of the provisioning process >> - As above, but with artifacts being lazily loaded as needed (not >> recommended for production, but I think this may be useful from a >> developer point of view) >> >> The provisioning tool will work from an XML descriptor that describes >> the server that is to be built. In general this information will include: >> >> - GAV of the feature packs to use >> - Filtering information if not all features from a pack are required >> (e.g. just give me JAX-RS from the EE pack. In this case the only >> modules/subsystems installed from the pack will be the modules and subsystems >> that JAX-RS requires). >> - Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), >> which will allow community users to easily upgrade individual components. >> - Configuration changes that are required (e.g. some way to add a >> datasource to the assembled server).
The actual form this will take >> still needs to be decided. Note that this needs to work on both a user >> level (a user adding a datasource) and a feature pack level (e.g. the >> JON feature pack adding a required data source). >> - GAV of deployments to install in the server. This should allow a >> server complete with deployments and the necessary config to be >> assembled and be immediately ready to be put into service. >> >> Note that if you just want a full WF install you should be able to >> provision it with a single line in the provisioning file, by specifying >> the dist feature pack. We will still provide our traditional download, >> which will be built by the provisioning tool as part of our build process. >> >> The provisioning tool will also be able to upgrade servers, which >> basically consists of provisioning a new modules directory. Rollback is >> provided by provisioning from an earlier version of the provisioning file. >> When a server is provisioned the tool will make a backup copy of the >> file used, so it should always be possible to examine the provisioning >> file that was used to build the current server config. >> >> Note that when an update is performed on an existing server, the config will >> not be updated, unless the update adds an additional config file, in >> which case the new config file will be generated (however existing >> config will not be touched). >> >> Note that as a result of this split we will need to do much more >> frequent releases of the individual feature packs, to allow the most >> recent code to be integrated into dist. >> >> Implementation Plan >> >> The above changes are obviously a big job, and will not happen >> overnight. They are also highly likely to conflict with other changes, >> so maintaining a long running branch that gets rebased is not a >> practical option. Instead the plan is to perform the split in >> incremental changes. The basic steps are listed below, some of which can >> be performed in parallel.
>> 1) Using the initial implementation of my build plugin (in my >> wildfly-build-plugin branch) we split up the server along the lines >> above. The code will all stay in the same repo, however the plugin will >> be used to build all the individual pieces, which are then assembled as >> part of the final build process. Note that the plugin in its current >> form does both the build and provision step, and the pack format it >> produces is far from the final pack format that we will want to use. >> >> 2) Split up the test suite into modules based on the features that they >> test. This will result in several smaller modules in place of a single >> large one, which should also be a usability improvement as individual >> tests will be faster to run, and run times for all tests in a module >> should be more manageable. >> >> 3) Split the core into its own module. >> >> 4) Split everything else into its own module. As part of this step we >> need to make sure we still have the ability to run all tests against the >> full server, as well as against the cut down feature pack version of the >> server. >> >> 5) Focus on the build and provisioning tool, to implement all the >> features above, and to finalize the WF pack format. >> >> I think that just about covers it. There are still lots of nitty-gritty >> details that need to be worked out, however I think this covers all the >> main aspects of the design. We are planning on starting work on this >> basically immediately, as we want to get this implemented as early in >> the WF9 cycle as possible.
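To make the XML-descriptor idea in the proposal above concrete, a provisioning file might look something like the sketch below. This is purely illustrative: the actual format was explicitly still undecided at this point, and every element and attribute name here is invented for the example.

```xml
<!-- Hypothetical provisioning descriptor sketch; not a real schema.
     Covers the pieces the proposal lists: feature-pack GAVs, feature
     filtering, version overrides, and deployments to install. -->
<server-provisioning>
  <feature-packs>
    <!-- Pull in only JAX-RS (and whatever it requires) from the EE pack -->
    <feature-pack groupId="org.wildfly" artifactId="wildfly-ee" version="9.0.0.Final">
      <filter include="jaxrs"/>
    </feature-pack>
  </feature-packs>
  <version-overrides>
    <!-- Community user upgrading an individual component -->
    <artifact groupId="org.jboss.resteasy" artifactId="resteasy-jaxrs" version="3.0.10.Final"/>
  </version-overrides>
  <deployments>
    <!-- Server assembled complete with its application -->
    <deployment groupId="com.example" artifactId="my-app" version="1.0"/>
  </deployments>
</server-provisioning>
```

A full WF install would then, as the proposal says, be a descriptor containing a single feature-pack element referencing the dist pack.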
>> Stuart >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 538 bytes Desc: OpenPGP digital signature Url : http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140612/dcf87b98/attachment.bin From tomaz.cerar at gmail.com Thu Jun 12 06:45:29 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Thu, 12 Jun 2014 12:45:29 +0200 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <539981F6.904@redhat.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> <539981F6.904@redhat.com> Message-ID: In short no. But in practice, it is a complicated question. I am currently working on splitting the testsuite into core & full distro. So some tests will only be part of core and others of the full distro. When we went through the tests we saw that some tests overlap and would need to be run in both testsuites, but luckily they are a minority. The bigger problem we have right now is to make sure tests that belong to core properly work there, as not everything is available. There will be some work needed to fix such tests but in the end they will be just in one place. To address your question more specifically, the idea is that one feature is only tested in its appropriate distro/feature and not duplicated, unless that new distro/feature adds some stuff that modifies default behavior, where testing of some features should be re-done. Even now we have tests that only test specific features and ones that test integrations of a bunch of features. We just need to be smart about how we do testing.
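Option 3 from Stuart's earlier mail (using git-filter-branch to carve out a new repo that keeps only the history of the relevant files) can be tried out on a throwaway repository. The commands below are standard git; the repository layout (a `core` directory to keep, a `testsuite` directory to drop) is invented purely for the demo — for the real split you would start from a clone of the wildfly repo.

```shell
#!/bin/sh
# Demo of the git-filter-branch split on a made-up repo: after the rewrite,
# history contains only the files that were kept.
set -e
demo=$(mktemp -d)
git init -q "$demo"
cd "$demo"
git config user.email dev@example.com
git config user.name dev
mkdir core testsuite
echo kernel > core/kernel.txt
echo tests > testsuite/tests.txt
git add .
git commit -qm "initial import"
# Rewrite every commit, removing the paths that are not being split out
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --prune-empty --index-filter \
  'git rm -r --cached -q --ignore-unmatch testsuite' HEAD
# List what survived in the rewritten history
git ls-tree -r --name-only HEAD
```

Commits whose only content was in the removed paths are dropped entirely by `--prune-empty`, which is what keeps the resulting repo (and its checkout) small.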
-- tomaz On Thu, Jun 12, 2014 at 12:33 PM, Emmanuel Hugonnet wrote: > Hi, > Concerning tests and the integration tests in the test suite, will the > splitting requires them to be run twice or more ? > Let's say we test CDI integration with JAX-RS, will it be tested in the > feature pack JAR-RS, or in JAX-RS and JavaEE and complete dist ? > Cheers, > Emmanuel > > Le 11/06/2014 17:57, Stuart Douglas a ?crit : > > Something that I did not cover was how to actually do the split it terms > > of preserving history. We have a few options: > > > > 1) Just copy the files into a clean repo. There is no history in the > > repo, but you could always check the existing wildfly repo if you really > > need it. > > > > 2) Copy the complete WF repo and then delete the parts that are not > > going to be part of the new repo. This leaves complete history, but > > means that the check outs will be larger than they should be. > > > > 3) Use git-filter-branch to create a new repo with just the history of > > the relevant files. We still have a small checkout size, but the history > > is still in the repo. > > > > I think we should go with option 3. > > > > Stuart > > > > Stuart Douglas wrote: > >> This design proposal covers the inter related tasks of splitting up the > >> build, and also creating a build/provisioning system that will make it > >> easy for end users to consume Wildfly. Apologies for the length, but it > >> is a complex topic. The first part explains what we are trying to > >> achieve, the second part covers how we are planning to actually > >> implement it. > >> > >> The Wildfly code base is over a million lines of java and has a test > >> suite that generally takes close to two hours to run in its entirety. > >> This makes the project very unwieldily, and the large size and slow test > >> suite makes development painful. > >> > >> To deal with this issue we are going to split the Wildfly code base into > >> smaller discrete repositories. 
The planned split is as follows: > >> > >> - Core: just the WF core > >> - Arquillian: the arquillian adaptors > >> - Servlet: a WF distribution with just Undertow, and some basic EE > >> functionality such as naming > >> - EE: all the core EE-related functionality, EJBs, messaging, etc. > >> - Clustering: the core clustering functionality > >> - Console: the management console > >> - Dist: brings all the pieces together, and allows us to run all tests > >> against a full server > >> > >> Note that this list is in no way final, and is open to debate. We will > >> most likely want to split up the EE component at some point, possibly > >> along some kind of web profile/full profile type split. > >> > >> Each of these repos will build a feature pack, which will contain the > >> following: > >> > >> - Feature specification / description > >> - Core version requirements (e.g. WF10) > >> - Dependency info on other features (e.g. RESTEasy X requires CDI 1.1) > >> - module.xml files for all required modules that are not provided by > >> other features > >> - References to maven GAVs for jars (possibly a level of indirection > >> here; module.xml may just contain the group and artifact, and the > >> version may be in a version.properties file to allow it to be easily > >> overridden) > >> - Default configuration snippets; subsystem snippets are packaged in the > >> subsystem jars, and templates that combine them into config files are part > >> of the feature pack > >> - Misc files (e.g. xsds) with an indication of where on the path to place them > >> > >> Note that a feature pack is not a complete server: it cannot simply be > >> extracted and run; it first needs to be assembled into a server by the > >> provisioning tool. The feature packs also just contain references to the > >> maven GAV of required jars; they do not have the actual jars in the pack > >> (which should make them very lightweight).
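The maven GAV indirection mentioned in the list above could look something like this sketch of a module.xml fragment. This is purely illustrative: the element names, module name, and the idea of resolving the version from version.properties are assumptions about the proposed format, not the actual jboss-modules schema.

```xml
<!-- Hypothetical sketch: a module that references a maven GAV instead of
     shipping the jar in the feature pack. The version is deliberately
     absent here; it would be resolved from version.properties at
     provisioning time so it can easily be overridden. -->
<module xmlns="urn:jboss:module:1.3" name="org.jboss.resteasy.resteasy-jaxrs">
    <resources>
        <!-- group:artifact only; version supplied by version.properties -->
        <maven-artifact name="org.jboss.resteasy:resteasy-jaxrs"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.ws.rs.api"/>
    </dependencies>
</module>
```

A version override in the provisioning descriptor would then only need to change the single property for that artifact, rather than rewriting the module.xml.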
> >> > >> Feature packs will be assembled by the WF build tool, which is just a > >> maven plugin that will replace our existing hacky collection of ant > >> scripts. > >> > >> Actual server instances will be assembled by the provisioning tool, > >> which will be implemented as a library with several different front > >> ends, including a maven plugin and a CLI (possibly integrated into our > >> existing CLI). In general the provisioning tool will be able to > >> provision three different types of servers: > >> > >> - A traditional server with all jar files in the distribution > >> - A server that uses maven coordinates in module.xml files, with all > >> artifacts downloaded as part of the provisioning process > >> - As above, but with artifacts being lazily loaded as needed (not > >> recommended for production, but I think this may be useful from a > >> developer point of view) > >> > >> The provisioning tool will work from an XML descriptor that describes > >> the server that is to be built. In general this information will > include: > >> - GAV of the feature packs to use > >> - Filtering information if not all features from a pack are required > >> (e.g. just give me JAX-RS from the EE pack. In this case the only > >> modules/subsystems installed from the pack will be the modules and subsystems > >> that JAX-RS requires). > >> - Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), > >> which will allow community users to easily upgrade individual components. > >> - Configuration changes that are required (e.g. some way to add a > >> datasource to the assembled server). The actual form this will take > >> still needs to be decided. Note that this needs to work on both a user > >> level (a user adding a datasource) and a feature pack level (e.g. the > >> JON feature pack adding a required data source). > >> - GAV of deployments to install in the server.
This should allow a > >> server complete with deployments and the necessary config to be > >> assembled and be immediately ready to be put into service. > >> > >> Note that if you just want a full WF install you should be able to > >> provision it with a single line in the provisioning file, by specifying > >> the dist feature pack. We will still provide our traditional download, > >> which will be built by the provisioning tool as part of our build process. > >> > >> The provisioning tool will also be able to upgrade servers, which > >> basically consists of provisioning a new modules directory. Rollback is > >> provided by provisioning from an earlier version of the provisioning file. > >> When a server is provisioned the tool will make a backup copy of the > >> file used, so it should always be possible to examine the provisioning > >> file that was used to build the current server config. > >> > >> Note that when an update is performed on an existing server, the config will > >> not be updated, unless the update adds an additional config file, in > >> which case the new config file will be generated (however existing > >> config will not be touched). > >> > >> Note that as a result of this split we will need to do much more > >> frequent releases of the individual feature packs, to allow the most > >> recent code to be integrated into dist. > >> > >> Implementation Plan > >> > >> The above changes are obviously a big job, and will not happen > >> overnight. They are also highly likely to conflict with other changes, > >> so maintaining a long-running branch that gets rebased is not a > >> practical option. Instead the plan is to perform the split in > >> incremental changes. The basic steps are listed below, some of which can > >> be performed in parallel. > >> > >> 1) Using the initial implementation of my build plugin (in my > >> wildfly-build-plugin branch) we split up the server along the lines > >> above.
The code will all stay in the same repo, however the plugin will > >> be used to build all the individual pieces, which are then assembled as > >> part of the final build process. Note that the plugin in its current > >> form does both the build and provision step, and the pack format it > >> produces is far from the final pack format that we will want to use. > >> > >> 2) Split up the test suite into modules based on the features that they > >> test. This will result in several smaller modules in place of a single > >> large one, which should also be a usability improvement as individual > >> tests will be faster to run, and run times for all tests in a module > >> should be more manageable. > >> > >> 3) Split the core into its own module. > >> > >> 4) Split everything else into its own module. As part of this step we > >> need to make sure we still have the ability to run all tests against the > >> full server, as well as against the cut-down feature pack version of the > >> server. > >> > >> 5) Focus on the build and provisioning tool, to implement all the > >> features above, and to finalize the WF pack format. > >> > >> I think that just about covers it. There are still lots of nitty-gritty > >> details that need to be worked out, however I think this covers all the > >> main aspects of the design. We are planning on starting work on this > >> basically immediately, as we want to get this implemented as early in > >> the WF9 cycle as possible.
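The git-filter-branch approach (option 3 in the quoted message) can be sketched roughly as follows. Repository layout, directory names, and file names here are purely illustrative; the script builds a throwaway repo just to demonstrate the rewrite.

```shell
# Illustrative only: extract the history of one subtree (here core/)
# into a new repository with git-filter-branch, per option 3 above.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # newer git prefers filter-repo and warns

work=$(mktemp -d)
cd "$work"

# Stand-in for the existing wildfly repo with mixed content
git init -q wildfly
cd wildfly
git config user.email dev@example.com
git config user.name dev
mkdir -p core clustering
echo 'core code' > core/Main.java
echo 'clustering code' > clustering/Cache.java
git add . && git commit -qm 'initial layout'

# Clone, then rewrite history so only core/ remains, promoted to the
# repository root: the checkout is small, but commits touching core/
# are preserved in the new repo's history.
cd ..
git clone -q wildfly wildfly-core
cd wildfly-core
git filter-branch -f --subdirectory-filter core -- --all
```

After the rewrite, the working tree of `wildfly-core` contains only what lived under `core/`, with its commit history intact.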
> >> > >> Stuart > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> _______________________________________________ > >> wildfly-dev mailing list > >> wildfly-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From stuart.w.douglas at gmail.com Thu Jun 12 09:19:56 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 12 Jun 2014 08:19:56 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <539981F6.904@redhat.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> <539981F6.904@redhat.com> Message-ID: <5399A8FC.6010802@gmail.com> Basically yes. This still needs to have some of the details worked out, but in general tests are going to be split into modules based on the features they require, and these modules will become part of the relevant feature packs, so these tests will be run when building the feature pack. These tests will also be published as a test-jar artifact, and our dist repo that builds a complete server will also have the ability to run all the tests. This means that if you have a web test it will be run once when building the web feature, and again when integrating a new version of that feature into the main distro. Stuart Emmanuel Hugonnet wrote: > Hi, > Concerning tests and the integration tests in the test suite, will the splitting require them to be run twice or more ?
> Let's say we test CDI integration with JAX-RS, will it be tested in the feature pack JAX-RS, or in JAX-RS and JavaEE and complete dist ? > Cheers, > Emmanuel > > Le 11/06/2014 17:57, Stuart Douglas a écrit : >> Something that I did not cover was how to actually do the split in terms >> of preserving history. We have a few options: >> >> 1) Just copy the files into a clean repo. There is no history in the >> repo, but you could always check the existing wildfly repo if you really >> need it. >> >> 2) Copy the complete WF repo and then delete the parts that are not >> going to be part of the new repo. This leaves complete history, but >> means that the checkouts will be larger than they should be. >> >> 3) Use git-filter-branch to create a new repo with just the history of >> the relevant files. We still have a small checkout size, but the history >> is still in the repo. >> >> I think we should go with option 3. >> >> Stuart >> >> Stuart Douglas wrote: >>> This design proposal covers the interrelated tasks of splitting up the >>> build, and also creating a build/provisioning system that will make it >>> easy for end users to consume Wildfly. Apologies for the length, but it >>> is a complex topic. The first part explains what we are trying to >>> achieve, the second part covers how we are planning to actually >>> implement it. >>> >>> The Wildfly code base is over a million lines of Java and has a test >>> suite that generally takes close to two hours to run in its entirety. >>> This makes the project very unwieldy, and the large size and slow test >>> suite make development painful. >>> >>> To deal with this issue we are going to split the Wildfly code base into >>> smaller discrete repositories.
The planned split is as follows: >>> >>> - Core: just the WF core >>> - Arquillian: the arquillian adaptors >>> - Servlet: a WF distribution with just Undertow, and some basic EE >>> functionality such as naming >>> - EE: all the core EE-related functionality, EJBs, messaging, etc. >>> - Clustering: the core clustering functionality >>> - Console: the management console >>> - Dist: brings all the pieces together, and allows us to run all tests >>> against a full server >>> >>> Note that this list is in no way final, and is open to debate. We will >>> most likely want to split up the EE component at some point, possibly >>> along some kind of web profile/full profile type split. >>> >>> Each of these repos will build a feature pack, which will contain the >>> following: >>> >>> - Feature specification / description >>> - Core version requirements (e.g. WF10) >>> - Dependency info on other features (e.g. RESTEasy X requires CDI 1.1) >>> - module.xml files for all required modules that are not provided by >>> other features >>> - References to maven GAVs for jars (possibly a level of indirection >>> here; module.xml may just contain the group and artifact, and the >>> version may be in a version.properties file to allow it to be easily >>> overridden) >>> - Default configuration snippets; subsystem snippets are packaged in the >>> subsystem jars, and templates that combine them into config files are part >>> of the feature pack >>> - Misc files (e.g. xsds) with an indication of where on the path to place them >>> >>> Note that a feature pack is not a complete server: it cannot simply be >>> extracted and run; it first needs to be assembled into a server by the >>> provisioning tool. The feature packs also just contain references to the >>> maven GAV of required jars; they do not have the actual jars in the pack >>> (which should make them very lightweight).
>>> >>> Feature packs will be assembled by the WF build tool, which is just a >>> maven plugin that will replace our existing hacky collection of ant >>> scripts. >>> >>> Actual server instances will be assembled by the provisioning tool, >>> which will be implemented as a library with several different front >>> ends, including a maven plugin and a CLI (possibly integrated into our >>> existing CLI). In general the provisioning tool will be able to >>> provision three different types of servers: >>> >>> - A traditional server with all jar files in the distribution >>> - A server that uses maven coordinates in module.xml files, with all >>> artifacts downloaded as part of the provisioning process >>> - As above, but with artifacts being lazily loaded as needed (not >>> recommended for production, but I think this may be useful from a >>> developer point of view) >>> >>> The provisioning tool will work from an XML descriptor that describes >>> the server that is to be built. In general this information will include: >>> >>> - GAV of the feature packs to use >>> - Filtering information if not all features from a pack are required >>> (e.g. just give me JAX-RS from the EE pack. In this case the only >>> modules/subsystems installed from the pack will be the modules and subsystems >>> that JAX-RS requires). >>> - Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), >>> which will allow community users to easily upgrade individual components. >>> - Configuration changes that are required (e.g. some way to add a >>> datasource to the assembled server). The actual form this will take >>> still needs to be decided. Note that this needs to work on both a user >>> level (a user adding a datasource) and a feature pack level (e.g. the >>> JON feature pack adding a required data source). >>> - GAV of deployments to install in the server.
This should allow a >>> server complete with deployments and the necessary config to be >>> assembled and be immediately ready to be put into service. >>> >>> Note that if you just want a full WF install you should be able to >>> provision it with a single line in the provisioning file, by specifying >>> the dist feature pack. We will still provide our traditional download, >>> which will be built by the provisioning tool as part of our build process. >>> >>> The provisioning tool will also be able to upgrade servers, which >>> basically consists of provisioning a new modules directory. Rollback is >>> provided by provisioning from an earlier version of the provisioning file. >>> When a server is provisioned the tool will make a backup copy of the >>> file used, so it should always be possible to examine the provisioning >>> file that was used to build the current server config. >>> >>> Note that when an update is performed on an existing server, the config will >>> not be updated, unless the update adds an additional config file, in >>> which case the new config file will be generated (however existing >>> config will not be touched). >>> >>> Note that as a result of this split we will need to do much more >>> frequent releases of the individual feature packs, to allow the most >>> recent code to be integrated into dist. >>> >>> Implementation Plan >>> >>> The above changes are obviously a big job, and will not happen >>> overnight. They are also highly likely to conflict with other changes, >>> so maintaining a long-running branch that gets rebased is not a >>> practical option. Instead the plan is to perform the split in >>> incremental changes. The basic steps are listed below, some of which can >>> be performed in parallel. >>> >>> 1) Using the initial implementation of my build plugin (in my >>> wildfly-build-plugin branch) we split up the server along the lines >>> above.
The code will all stay in the same repo, however the plugin will >>> be used to build all the individual pieces, which are then assembled as >>> part of the final build process. Note that the plugin in its current >>> form does both the build and provision step, and the pack format it >>> produces is far from the final pack format that we will want to use. >>> >>> 2) Split up the test suite into modules based on the features that they >>> test. This will result in several smaller modules in place of a single >>> large one, which should also be a usability improvement as individual >>> tests will be faster to run, and run times for all tests in a module >>> should be more manageable. >>> >>> 3) Split the core into its own module. >>> >>> 4) Split everything else into its own module. As part of this step we >>> need to make sure we still have the ability to run all tests against the >>> full server, as well as against the cut-down feature pack version of the >>> server. >>> >>> 5) Focus on the build and provisioning tool, to implement all the >>> features above, and to finalize the WF pack format. >>> >>> I think that just about covers it. There are still lots of nitty-gritty >>> details that need to be worked out, however I think this covers all the >>> main aspects of the design. We are planning on starting work on this >>> basically immediately, as we want to get this implemented as early in >>> the WF9 cycle as possible. >>> >>> Stuart >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > From jgreene at redhat.com Thu Jun 12 10:32:21 2014 From: jgreene at redhat.com (Jason T.
Greene) Date: Thu, 12 Jun 2014 10:32:21 -0400 (EDT) Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <53986EBE.4080509@gmail.com> References: <5397209C.9090400@redhat.com> <53986EBE.4080509@gmail.com> Message-ID: > On Jun 11, 2014, at 9:59 AM, Stuart Douglas wrote: > > It is kinda like Karaf, but we are not based on OSGi, and have no plans > to move. The problem with using OSGi for the base modularity in WildFly is that Java EE class-loading rules do not map well to it, so you ultimately end up with two classloading models. It also mandates its own service model and brings in a vast amount of complexity that our server internals didn't really benefit from. Finally, we felt the performance cost of its dependency resolution algorithm was way too high (we wanted deterministic O(1) resolution). That said, we did expect interest in building OSGi applications on a full application server, so our original plan with AS7 was to build a lightweight, flexible modular class loader that would be mappable to all known class loading models, including OSGi and Java EE. That became JBoss Modules. AS7.x did ship with OSGi support based on JBoss Modules, however it had very little uptake with our users and customers. Instead there was more interest in just using our modularity layer directly. Based on this, we decided to split off the OSGi layer into a separate optional project, and focus more energy on other areas which were in more demand. This isn't set in stone though. Ultimately WildFly is driven by what the community wants and, of course, code contributions are the best vehicle.
-Jason From stuart.w.douglas at gmail.com Thu Jun 12 10:45:00 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 12 Jun 2014 09:45:00 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53988806.7090301@redhat.com> References: <5396377F.80003@redhat.com> <53966EF3.7070108@redhat.com> <5396700B.9030003@gmail.com> <539718F4.2090606@redhat.com> <539721A0.30307@gmail.com> <5397251E.5040805@redhat.com> <539726E2.2030300@gmail.com> <5397289F.8030404@redhat.com> <53972930.9030809@gmail.com> <5397306D.4060705@redhat.com> <5398820D.6060901@redhat.com> <53988806.7090301@redhat.com> Message-ID: <5399BCEC.7050701@gmail.com> Brian Stansberry wrote: > The STARTING state in the existing attribute makes me think an > equivalent thing is needed for this concept. This is a good idea; we could do this by just having a serverStarted() notification that gets sent to all subsystems. This will allow them to basically start in a paused state, and only allow access after the server is up. Stuart > > STARTING in the existing attribute means the runtime services are possibly out of > sync due to boot. > > Doesn't a similar problem exist with RUNNING, SUSPENDING, SUSPENDED? > It's about how the server is reacting to external requests. There's some > state during boot/reload when the server is not reacting normally to > external requests. > > Perhaps that's just another condition where the server is SUSPENDED. > > This leads to whether this whole mechanism can be used to provide > "Graceful Startup". We have problems with this now; endpoints accepting > requests before everything is fully ready, leading to things like 404s > because a deployment isn't installed yet. > > On 6/11/14, 11:21 AM, Brian Stansberry wrote: >> I do think these are orthogonal and should not be combined. >> >> The existing attribute is fundamentally about how the state of the >> runtime services relates to the persistent configuration.
>> >> STARTING == out of sync due to still getting in sync during start >> RUNNING == in sync >> RELOAD_REQUIRED == out of sync, needs a reload to get in sync >> RESTART_REQUIRED == out of sync, needs a full process restart to get in sync >> >> There are two problems though with the existing attribute that exposes this: >> >> 1) It's named "server-state" on a server and "host-state" on a Host >> Controller. Really crappy name; way too broad. >> >> That's fixable by creating a new attribute and making the old one an >> alias for compatibility purposes. >> >> 2) The RUNNING state is really poorly named. >> >> That could perhaps be fixed by coming up with a new name and translating >> it back to "RUNNING" in the handlers for the legacy "server-state" and >> "host-state" attributes. >> >> >> On 6/10/14, 11:21 AM, Dimitris Andreadis wrote: >>> Sure. Which justifies trying to avoid those issues in the first place ;) >>> >>> On 10/06/2014 17:50, Stuart Douglas wrote: >>>> We can't really change that now, as it is part of our existing API. >>>> >>>> Stuart >>>> >>>> Dimitris Andreadis wrote: >>>>> It seems to me RESTART_REQUIRED (or RELOAD_REQUIRED) should be a boolean >>>>> on its own to simplify the state diagram. >>>>> >>>>> On 10/06/2014 17:40, Stuart Douglas wrote: >>>>>> I don't think so. I think RESTART_REQUIRED means running, but I need >>>>>> to restart to apply >>>>>> management changes (I think that attribute can also be >>>>>> RELOAD_REQUIRED; the >>>>>> description may be a bit out of date). >>>>>> >>>>>> To accurately reflect all the possible states you would need something >>>>>> like: >>>>>> >>>>>> RUNNING >>>>>> PAUSING >>>>>> PAUSED >>>>>> RESTART_REQUIRED >>>>>> PAUSING_RESTART_REQUIRED >>>>>> PAUSED_RESTART_REQUIRED >>>>>> RELOAD_REQUIRED >>>>>> PAUSING_RELOAD_REQUIRED >>>>>> PAUSED_RELOAD_REQUIRED >>>>>> >>>>>> Which does not seem great, and may introduce compatibility problems >>>>>> for clients that are not >>>>>> expecting these new values.
>>>>>> >>>>>> Stuart >>>>>> >>>>>> >>>>>> >>>>>> Dimitris Andreadis wrote: >>>>>>> Isn't RESTART_REQUIRED also orthogonal to RUNNING? >>>>>>> >>>>>>> On 10/06/2014 17:17, Stuart Douglas wrote: >>>>>>>> They are actually orthogonal; a server can be in both RESTART_REQUIRED >>>>>>>> and any one of the >>>>>>>> suspend states. >>>>>>>> >>>>>>>> RESTART_REQUIRED is very much tied to services and the management >>>>>>>> model, while >>>>>>>> suspend/resume is a runtime-only thing that should not touch the state >>>>>>>> of services. >>>>>>>> >>>>>>>> >>>>>>>> Stuart >>>>>>>> >>>>>>>> Dimitris Andreadis wrote: >>>>>>>>> Why not extend the states of the existing 'server-state' attribute to: >>>>>>>>> >>>>>>>>> (STARTING, RUNNING, SUSPENDING, SUSPENDED, RESTART_REQUIRED RUNNING) >>>>>>>>> >>>>>>>>> http://wildscribe.github.io/Wildfly/8.0.0.Final/index.html >>>>>>>>> >>>>>>>>> On 10/06/2014 04:40, Stuart Douglas wrote: >>>>>>>>>> Scott Marlow wrote: >>>>>>>>>>> On 06/09/2014 06:38 PM, Stuart Douglas wrote: >>>>>>>>>>>> Server suspend and resume is a feature that allows a running >>>>>>>>>>>> server to >>>>>>>>>>>> gracefully finish off all running requests. The most common use >>>>>>>>>>>> case for >>>>>>>>>>>> this is graceful shutdown, where you would like a server to >>>>>>>>>>>> complete all >>>>>>>>>>>> running requests, reject any new ones, and then shut down; however >>>>>>>>>>>> there >>>>>>>>>>>> are also plenty of other valid use cases (e.g. suspend the server, >>>>>>>>>>>> modify a data source or some other config, then resume). >>>>>>>>>>>> >>>>>>>>>>>> User View: >>>>>>>>>>>> >>>>>>>>>>>> From the user's point of view two new operations will be added to >>>>>>>>>>>> the server: >>>>>>>>>>>> >>>>>>>>>>>> suspend(timeout) >>>>>>>>>>>> resume() >>>>>>>>>>>> >>>>>>>>>>>> A runtime-only attribute suspend-state (is this a good name?) will >>>>>>>>>>>> also >>>>>>>>>>>> be added, that can take one of three possible values: RUNNING, >>>>>>>>>>>> SUSPENDING, SUSPENDED.
>>>>>>>>>>> The SuspendController "state" might be a shorter attribute name and >>>>>>>>>>> just >>>>>>>>>>> as meaningful. >>>>>>>>>> This will be in the global server namespace (i.e. from the CLI >>>>>>>>>> :read-attribute(name="suspend-state")). >>>>>>>>>> >>>>>>>>>> I think the name 'state' is just too generic; which kind of state >>>>>>>>>> are we >>>>>>>>>> talking about? >>>>>>>>>> >>>>>>>>>>> When are we in the RUNNING state? Is that simply the pre-state for >>>>>>>>>>> SUSPENDING? >>>>>>>>>> 99.99% of the time. Basically servers are always running unless they >>>>>>>>>> have been explicitly suspended, and then they go from suspending to >>>>>>>>>> suspended. Note that if resume is called at any time the server >>>>>>>>>> goes to >>>>>>>>>> RUNNING again immediately, as when subsystems are notified they >>>>>>>>>> should >>>>>>>>>> be able to begin accepting requests again straight away. >>>>>>>>>> >>>>>>>>>> We also have admin-only mode, which is a kinda similar concept, so we >>>>>>>>>> need to make sure we document the differences. >>>>>>>>>> >>>>>>>>>>>> A timeout attribute will also be added to the shutdown >>>>>>>>>>>> operation. If >>>>>>>>>>>> this is present then the server will first be suspended, and the >>>>>>>>>>>> server >>>>>>>>>>>> will not shut down until either the suspend is successful or the >>>>>>>>>>>> timeout >>>>>>>>>>>> occurs. If no timeout parameter is passed to the operation then a >>>>>>>>>>>> normal >>>>>>>>>>>> non-graceful shutdown will take place. >>>>>>>>>>> Will non-graceful shutdown wait for non-daemon threads or terminate >>>>>>>>>>> immediately (call System.exit())? >>>>>>>>>> It will execute the same way it does today (all services will shut >>>>>>>>>> down >>>>>>>>>> and then the server will exit). >>>>>>>>>> >>>>>>>>>> Stuart >>>>>>>>>> >>>>>>>>>>>> In domain mode these operations will be added to both individual >>>>>>>>>>>> servers >>>>>>>>>>>> and a complete server group.
>>>>>>>>>>>> >>>>>>>>>>>> Implementation Details >>>>>>>>>>>> >>>>>>>>>>>> Suspend/resume operates on entry points to the server. Any request >>>>>>>>>>>> that >>>>>>>>>>>> is currently running must not be affected by the suspend state; >>>>>>>>>>>> however, >>>>>>>>>>>> any new request should be rejected. In general subsystems will >>>>>>>>>>>> track the >>>>>>>>>>>> number of outstanding requests, and when this hits zero they are >>>>>>>>>>>> considered suspended. >>>>>>>>>>>> >>>>>>>>>>>> We will introduce the notion of a global SuspendController that >>>>>>>>>>>> manages >>>>>>>>>>>> the server's suspend state. All subsystems that wish to do a >>>>>>>>>>>> graceful >>>>>>>>>>>> shutdown register callback handlers with this controller. >>>>>>>>>>>> >>>>>>>>>>>> When the suspend() operation is invoked the controller will invoke >>>>>>>>>>>> all >>>>>>>>>>>> these callbacks, letting the subsystem know that the server is >>>>>>>>>>>> suspending, >>>>>>>>>>>> and providing the subsystem with a SuspendContext object that the >>>>>>>>>>>> subsystem can then use to notify the controller that the suspend is >>>>>>>>>>>> complete. >>>>>>>>>>>> >>>>>>>>>>>> What the subsystem does when it receives a suspend command, and >>>>>>>>>>>> when it >>>>>>>>>>>> considers itself suspended, will vary, but in the common case it >>>>>>>>>>>> will >>>>>>>>>>>> immediately start rejecting external requests (e.g. Undertow will >>>>>>>>>>>> start >>>>>>>>>>>> responding with a 503 to all new requests). The subsystem will also >>>>>>>>>>>> track the number of outstanding requests, and when this hits zero >>>>>>>>>>>> then >>>>>>>>>>>> the subsystem will notify the controller that it has successfully >>>>>>>>>>>> suspended. >>>>>>>>>>>> Some subsystems will obviously want to do other actions on >>>>>>>>>>>> suspend, e.g. >>>>>>>>>>>> clustering will likely want to fail over, mod_cluster will >>>>>>>>>>>> notify the >>>>>>>>>>>> load balancer that the node is no longer available, etc.
In some >>>>>>>>>>>> cases we >>>>>>>>>>>> may want to make this configurable to an extent (e.g. Undertow >>>>>>>>>>>> could be >>>>>>>>>>>> configured to allow requests with an existing session, and not >>>>>>>>>>>> consider >>>>>>>>>>>> itself suspended until all sessions have either timed out or been >>>>>>>>>>>> invalidated, although this will obviously take a while). >>>>>>>>>>>> >>>>>>>>>>>> If anyone has any feedback let me know. In terms of >>>>>>>>>>>> implementation my >>>>>>>>>>>> basic plan is to get the core functionality and the Undertow >>>>>>>>>>>> implementation into Wildfly, and then work with subsystem >>>>>>>>>>>> authors to >>>>>>>>>>>> implement subsystem-specific functionality once the core is in >>>>>>>>>>>> place. >>>>>>>>>>>> >>>>>>>>>>>> Stuart >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> wildfly-dev mailing list >>>>>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> wildfly-dev mailing list >>>>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>>>> _______________________________________________ >>>>>>>>>> wildfly-dev mailing list >>>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> wildfly-dev mailing list >>>>>>>>> wildfly-dev at lists.jboss.org >>>>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >
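The SuspendController pattern described in the thread above (subsystems register callbacks, track their outstanding requests, and report back once drained) can be sketched roughly as follows. All class and method names here are illustrative, not the actual WildFly API.

```java
// Rough sketch of the proposed SuspendController pattern; names are
// illustrative, not the actual WildFly API. Subsystems register a
// callback, are told to stop accepting new work on suspend(), and
// report back once their in-flight request count reaches zero.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class SuspendDemo {

    enum State { RUNNING, SUSPENDING, SUSPENDED }

    // What a subsystem (e.g. Undertow) would implement
    interface ServerActivity {
        void suspend(Runnable completed); // invoke completed.run() once drained
        void resume();
    }

    static class SuspendController {
        private final List<ServerActivity> activities = new ArrayList<>();
        private final AtomicInteger pending = new AtomicInteger();
        private volatile State state = State.RUNNING;

        void register(ServerActivity activity) {
            activities.add(activity);
        }

        void suspend() {
            state = State.SUSPENDING;
            pending.set(activities.size());
            // Tell every registered subsystem to reject new requests and
            // report back when its outstanding request count hits zero
            for (ServerActivity activity : activities) {
                activity.suspend(() -> {
                    if (pending.decrementAndGet() == 0) {
                        state = State.SUSPENDED;
                    }
                });
            }
        }

        void resume() {
            // Resume takes effect immediately; subsystems start accepting
            // requests again straight away
            for (ServerActivity activity : activities) {
                activity.resume();
            }
            state = State.RUNNING;
        }

        State state() {
            return state;
        }
    }

    public static void main(String[] args) {
        SuspendController controller = new SuspendController();
        // Toy subsystem with no requests in flight: it drains immediately
        controller.register(new ServerActivity() {
            public void suspend(Runnable completed) { completed.run(); }
            public void resume() { }
        });
        controller.suspend();
        System.out.println(controller.state()); // SUSPENDED
        controller.resume();
        System.out.println(controller.state()); // RUNNING
    }
}
```

A real subsystem would only invoke the completion callback once its outstanding-request counter drops to zero, so suspension is asynchronous in general; the toy subsystem here drains immediately to keep the sketch short.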
From darran.lofthouse at jboss.com Thu Jun 12 10:45:58 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Thu, 12 Jun 2014 15:45:58 +0100 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <53987C59.7010403@gmail.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> Message-ID: <5399BD26.70202@jboss.com> Please also consider the requirements we need to follow when maintaining the products based on previous AS7/WildFly releases. Regards, Darran Lofthouse. On 11/06/14 16:57, Stuart Douglas wrote: > Something that I did not cover was how to actually do the split in terms > of preserving history. We have a few options: > > 1) Just copy the files into a clean repo. There is no history in the > repo, but you could always check the existing wildfly repo if you really > need it. > > 2) Copy the complete WF repo and then delete the parts that are not > going to be part of the new repo. This leaves complete history, but > means that the checkouts will be larger than they should be. > > 3) Use git-filter-branch to create a new repo with just the history of > the relevant files. We still have a small checkout size, but the history > is still in the repo. > > I think we should go with option 3. > > Stuart > > Stuart Douglas wrote: >> This design proposal covers the interrelated tasks of splitting up the >> build, and also creating a build/provisioning system that will make it >> easy for end users to consume WildFly. Apologies for the length, but it >> is a complex topic. The first part explains what we are trying to >> achieve; the second part covers how we are planning to actually >> implement it. >> >> The WildFly code base is over a million lines of Java and has a test >> suite that generally takes close to two hours to run in its entirety. >> This makes the project very unwieldy, and the large size and slow test >> suite make development painful.
>> >> To deal with this issue we are going to split the WildFly code base into >> smaller discrete repositories. The planned split is as follows: >> >> - Core: just the WF core >> - Arquillian: the Arquillian adaptors >> - Servlet: a WF distribution with just Undertow, and some basic EE >> functionality such as naming >> - EE: all the core EE-related functionality, EJBs, messaging, etc. >> - Clustering: the core clustering functionality >> - Console: the management console >> - Dist: brings all the pieces together, and allows us to run all tests >> against a full server >> >> Note that this list is in no way final, and is open to debate. We will >> most likely want to split up the EE component at some point, possibly >> along some kind of web profile/full profile type split. >> >> Each of these repos will build a feature pack, which will contain the >> following: >> >> - Feature specification / description >> - Core version requirements (e.g. WF10) >> - Dependency info on other features (e.g. RESTEasy X requires CDI 1.1) >> - module.xml files for all required modules that are not provided by >> other features >> - References to Maven GAVs for jars (possibly a level of indirection >> here; module.xml may just contain the group and artifact, and the >> version may be in a version.properties file to allow it to be easily >> overridden) >> - Default configuration snippets: subsystem snippets are packaged in the >> subsystem jars; templates that combine them into config files are part >> of the feature pack >> - Misc files (e.g. xsds) with an indication of where on the path to place them >> >> Note that a feature pack is not a complete server: it cannot simply be >> extracted and run; it first needs to be assembled into a server by the >> provisioning tool. The feature packs also just contain references to the >> Maven GAVs of required jars; they do not have the actual jars in the pack >> (which should make them very lightweight).
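The "Maven GAV instead of a bundled jar" indirection can be pictured as a module.xml along these lines. This is a hypothetical sketch: the module name, namespace version, and element names are illustrative, and the proposal explicitly leaves open whether the version lives in the file or in a version.properties overlay.

```xml
<!-- Hypothetical sketch of a module.xml that references a Maven artifact
     rather than shipping the jar; the provisioning tool (or a lazy loader)
     would resolve the coordinates at assembly time. -->
<module xmlns="urn:jboss:module:1.5" name="org.jboss.resteasy.resteasy-jaxrs">
    <resources>
        <artifact name="org.jboss.resteasy:resteasy-jaxrs:3.0.8.Final"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.ws.rs.api"/>
    </dependencies>
</module>
```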
>> >> Feature packs will be assembled by the WF build tool, which is just a >> Maven plugin that will replace our existing hacky collection of Ant >> scripts. >> >> Actual server instances will be assembled by the provisioning tool, >> which will be implemented as a library with several different front >> ends, including a Maven plugin and a CLI (possibly integrated into our >> existing CLI). In general the provisioning tool will be able to >> provision three different types of servers: >> >> - A traditional server with all jar files in the distribution >> - A server that uses Maven coordinates in module.xml files, with all >> artifacts downloaded as part of the provisioning process >> - As above, but with artifacts being lazily loaded as needed (not >> recommended for production, but I think this may be useful from a >> developer point of view) >> >> The provisioning tool will work from an XML descriptor that describes >> the server that is to be built. In general this information will include: >> >> - GAV of the feature packs to use >> - Filtering information if not all features from a pack are required >> (e.g. just give me JAX-RS from the EE pack; in this case the only >> modules/subsystems installed from the pack will be the modules and subsystems >> that JAX-RS requires) >> - Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), >> which will allow community users to easily upgrade individual components >> - Configuration changes that are required (e.g. some way to add a >> datasource to the assembled server). The actual form this will take >> still needs to be decided. Note that this needs to work on both a user >> level (a user adding a datasource) and a feature pack level (e.g. the >> JON feature pack adding a required data source) >> - GAV of deployments to install in the server. This should allow a >> server complete with deployments and the necessary config to be >> assembled and be immediately ready to be put into service.
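Pulling those items together, a provisioning descriptor might look something like the following. Every element name here is invented for illustration; the email states the actual format is still to be decided.

```xml
<!-- Purely hypothetical sketch of the provisioning descriptor described
     above; element and attribute names are invented for the example. -->
<server-provisioning xmlns="urn:example:provisioning:1.0">
    <!-- Feature packs to pull in, identified by Maven GAV -->
    <feature-pack groupId="org.wildfly" artifactId="wildfly-ee-feature-pack"
                  version="9.0.0.Final">
        <!-- Filtering: only JAX-RS (plus whatever it requires) from the EE pack -->
        <include feature="jaxrs"/>
        <!-- Version override: swap in a newer RESTEasy -->
        <version-override artifact="org.jboss.resteasy:resteasy-jaxrs"
                          version="3.0.10.Final"/>
    </feature-pack>
    <!-- Required config changes, e.g. a datasource for the assembled server -->
    <config-change script="add-datasource.cli"/>
    <!-- Deployments to install, so the server is immediately ready for service -->
    <deployment groupId="com.example" artifactId="shop-app" version="1.2"/>
</server-provisioning>
```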
>> >> Note that if you just want a full WF install you should be able to >> provision it with a single line in the provisioning file, by specifying >> the dist feature pack. We will still provide our traditional download, >> which will be built by the provisioning tool as part of our build process. >> >> The provisioning tool will also be able to upgrade servers, which >> basically consists of provisioning a new modules directory. Rollback is >> provided by provisioning from an earlier version of the provisioning file. >> When a server is provisioned the tool will make a backup copy of the >> file used, so it should always be possible to examine the provisioning >> file that was used to build the current server config. >> >> Note that when an update is performed on an existing server, config will >> not be updated, unless the update adds an additional config file, in >> which case the new config file will be generated (however, existing >> config will not be touched). >> >> Note that as a result of this split we will need to do much more >> frequent releases of the individual feature packs, to allow the most >> recent code to be integrated into dist. >> >> Implementation Plan >> >> The above changes are obviously a big job, and will not happen >> overnight. They are also highly likely to conflict with other changes, >> so maintaining a long-running branch that gets rebased is not a >> practical option. Instead the plan is to perform the split in >> incremental changes. The basic steps are listed below, some of which can >> be performed in parallel. >> >> 1) Using the initial implementation of my build plugin (in my >> wildfly-build-plugin branch) we split up the server along the lines >> above. The code will all stay in the same repo; however, the plugin will >> be used to build all the individual pieces, which are then assembled as >> part of the final build process.
Note that the plugin in its current >> form does both the build and provision step, and the pack format it >> produces is far from the final pack format that we will want to use. >> >> 2) Split up the test suite into modules based on the features that they >> test. This will result in several smaller modules in place of a single >> large one, which should also be a usability improvement, as individual >> tests will be faster to run, and run times for all tests in a module >> should be more manageable. >> >> 3) Split the core into its own module. >> >> 4) Split everything else into its own module. As part of this step we >> need to make sure we still have the ability to run all tests against the >> full server, as well as against the cut-down feature pack version of the >> server. >> >> 5) Focus on the build and provisioning tool, to implement all the >> features above, and to finalize the WF pack format. >> >> I think that just about covers it. There are still lots of nitty-gritty >> details that need to be worked out; however, I think this covers all the >> main aspects of the design. We are planning on starting work on this >> basically immediately, as we want to get this implemented as early in >> the WF9 cycle as possible.
>> >> Stuart >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From stuart.w.douglas at gmail.com Thu Jun 12 10:48:01 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 12 Jun 2014 09:48:01 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5399BD26.70202@jboss.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> <5399BD26.70202@jboss.com> Message-ID: <5399BDA1.5070505@gmail.com> I am not sure if either of these choices will make much difference? In general our directory layout will be the same, so it should just be possible to cherry-pick patches from WF. Stuart Darran Lofthouse wrote: > Please also consider the requirements we need to follow when maintaining > the products based on previous AS7/WildFly releases. > > Regards, > Darran Lofthouse. > > > On 11/06/14 16:57, Stuart Douglas wrote: >> Something that I did not cover was how to actually do the split in terms >> of preserving history. We have a few options: >> >> 1) Just copy the files into a clean repo. There is no history in the >> repo, but you could always check the existing wildfly repo if you really >> need it. >> >> 2) Copy the complete WF repo and then delete the parts that are not >> going to be part of the new repo. This leaves complete history, but >> means that the checkouts will be larger than they should be. >> >> 3) Use git-filter-branch to create a new repo with just the history of >> the relevant files. We still have a small checkout size, but the history >> is still in the repo. >> >> I think we should go with option 3.
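Option 3 can be tried end-to-end in a throwaway repo. The sketch below uses `git filter-branch --subdirectory-filter`; the repo layout and file names are invented for the demo, and on current git versions `git filter-repo` is the recommended replacement for the same job.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q split-demo && cd split-demo
git config user.email dev@example.com && git config user.name Dev

# A toy layout standing in for the WildFly tree: core/ plus everything else.
mkdir core servlet
echo 'class Main {}' > core/Main.java
echo 'class Web {}'  > servlet/Web.java
git add . && git commit -qm 'initial layout'
echo '// fix' >> core/Main.java
git commit -qam 'core fix'

# Option 3: rewrite history so only core/ (and the commits touching it) remain.
export FILTER_BRANCH_SQUELCH_WARNING=1
git filter-branch -f --subdirectory-filter core -- --all

git log --oneline   # both commits survive, now rooted at core's contents
git ls-files        # only core's files remain, promoted to the repo root
```

Cherry-picking between the filtered repo and the original then works as usual, since the relative paths under the new root are preserved.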
>> >> Stuart >> >> Stuart Douglas wrote: >>> [design proposal trimmed; quoted in full earlier in the thread] >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev From darran.lofthouse at jboss.com Thu Jun 12 10:51:32 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Thu, 12 Jun 2014 15:51:32 +0100 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5399BDA1.5070505@gmail.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> <5399BD26.70202@jboss.com> <5399BDA1.5070505@gmail.com> Message-ID: <5399BE74.2080203@jboss.com> On 12/06/14 15:48, Stuart Douglas wrote: > I am not sure if either of these choices will make much difference? In > general our directory layout will be the same, so it should just be > possible to cherry-pick patches from WF.
That is probably true, but I think this is an important enough point that before changes are made we know that it will be possible, rather than just that it should be possible. > Stuart > > Darran Lofthouse wrote: >> Please also consider the requirements we need to follow when maintaining >> the products based on previous AS7/WildFly releases. >> >> Regards, >> Darran Lofthouse. >> >> >> On 11/06/14 16:57, Stuart Douglas wrote: >>> [history-split options and design proposal trimmed; quoted in full earlier in the thread] > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From stuart.w.douglas at gmail.com Thu Jun 12 10:56:31 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 12 Jun 2014 09:56:31 -0500 Subject: [wildfly-dev] Design Proposal: Build split and provisioning In-Reply-To: <5399BE74.2080203@jboss.com> References: <5397209C.9090400@redhat.com> <53987C59.7010403@gmail.com> <5399BD26.70202@jboss.com> <5399BDA1.5070505@gmail.com> <5399BE74.2080203@jboss.com> Message-ID: <5399BF9F.60603@gmail.com> Just double-checked this and it works fine. Basically I just created a core distro using the script in my wildfly-build-plugin branch, and then initialized a new git repo with the result. I then added a new commit, and tested cherry-picking it onto an older code base, and it worked as expected. Stuart Darran Lofthouse wrote: > On 12/06/14 15:48, Stuart Douglas wrote: >> I am not sure if either of these choices will make much difference? In >> general our directory layout will be the same, so it should just be >> possible to cherry-pick patches from WF. > > That is probably true, but I think this is an important enough point that > before changes are made we know that it will be possible, rather than > just that it should be possible. > >> Stuart >> >> Darran Lofthouse wrote: >>> Please also consider the requirements we need to follow when maintaining >>> the products based on previous AS7/WildFly releases. >>> >>> Regards, >>> Darran Lofthouse.
>>> >>> >>> On 11/06/14 16:57, Stuart Douglas wrote: >>>> Something that I did not cover was how to actually do the split it >>>> terms >>>> of preserving history. We have a few options: >>>> >>>> 1) Just copy the files into a clean repo. There is no history in the >>>> repo, but you could always check the existing wildfly repo if you >>>> really >>>> need it. >>>> >>>> 2) Copy the complete WF repo and then delete the parts that are not >>>> going to be part of the new repo. This leaves complete history, but >>>> means that the check outs will be larger than they should be. >>>> >>>> 3) Use git-filter-branch to create a new repo with just the history of >>>> the relevant files. We still have a small checkout size, but the >>>> history >>>> is still in the repo. >>>> >>>> I think we should go with option 3. >>>> >>>> Stuart >>>> >>>> Stuart Douglas wrote: >>>>> This design proposal covers the inter related tasks of splitting up >>>>> the >>>>> build, and also creating a build/provisioning system that will make it >>>>> easy for end users to consume Wildfly. Apologies for the length, >>>>> but it >>>>> is a complex topic. The first part explains what we are trying to >>>>> achieve, the second part covers how we are planning to actually >>>>> implement it. >>>>> >>>>> The Wildfly code base is over a million lines of java and has a test >>>>> suite that generally takes close to two hours to run in its entirety. >>>>> This makes the project very unwieldily, and the large size and slow >>>>> test >>>>> suite makes development painful. >>>>> >>>>> To deal with this issue we are going to split the Wildfly code base >>>>> into >>>>> smaller discrete repositories. 
The planned split is as follows: >>>>> >>>>> - Core: just the WF core >>>>> - Arquillian: the arquillian adaptors >>>>> - Servlet: a WF distribution with just Undertow, and some basic EE >>>>> functionality such as naming >>>>> - EE: All the core EE related functionality, EJBs, messaging, etc. >>>>> - Clustering: The core clustering functionality >>>>> - Console: The management console >>>>> - Dist: brings all the pieces together, and allows us to run all tests >>>>> against a full server >>>>> >>>>> Note that this list is in no way final, and is open to debate. We will >>>>> most likely want to split up the EE component at some point, possibly >>>>> along some kind of web profile/full profile type split. >>>>> >>>>> Each of these repos will build a feature pack, which will contain the >>>>> following: >>>>> >>>>> - Feature specification / description >>>>> - Core version requirements (e.g. WF10) >>>>> - Dependency info on other features (e.g. RESTEasy X requires CDI 1.1) >>>>> - module.xml files for all required modules that are not provided by >>>>> other features >>>>> - References to maven GAVs for jars (possibly a level of indirection >>>>> here, module.xml may just contain the group and artifact, and the >>>>> version may be in a version.properties file to allow it to be easily >>>>> overridden) >>>>> - Default configuration snippet, subsystem snippets are packaged in >>>>> the >>>>> subsystem jars, templates that combine them into config files are part >>>>> of the feature pack. >>>>> - Misc files (e.g. xsds) with an indication of where on the path to place >>>>> them >>>>> >>>>> Note that a feature pack is not a complete server, it cannot simply be >>>>> extracted and run, it first needs to be assembled into a server by the >>>>> provisioning tool. The feature packs also just contain references to >>>>> the >>>>> maven GAV of required jars, they do not have the actual jars in the >>>>> pack >>>>> (which should make them very lightweight).
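The maven-GAV indirection described in the list above might look roughly like this; the `${group:artifact}` expression and the exact file layout are guesses for illustration, not a settled format:

```xml
<!-- Hypothetical module.xml in a feature pack: group and artifact only,
     with no hard-coded version -->
<module xmlns="urn:jboss:module:1.3" name="org.jboss.resteasy.resteasy-jaxrs">
    <resources>
        <artifact name="${org.jboss.resteasy:resteasy-jaxrs}"/>
    </resources>
</module>
```

The version would then live in a version.properties file inside the feature pack (e.g. a line like `org.jboss.resteasy:resteasy-jaxrs=3.0.8.Final`), which is what would make per-artifact version overrides cheap at provisioning time.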
>>>>> >>>>> Feature packs will be assembled by the WF build tool, which is just a >>>>> maven plugin that will replace our existing hacky collection of ant >>>>> scripts. >>>>> >>>>> Actual server instances will be assembled by the provisioning tool, >>>>> which will be implemented as a library with several different front >>>>> ends, including a maven plugin and a CLI (possibly integrated into our >>>>> existing CLI). In general the provisioning tool will be able to >>>>> provision three different types of servers: >>>>> >>>>> - A traditional server with all jar files in the distribution >>>>> - A server that uses maven coordinates in module.xml files, with all >>>>> artifacts downloaded as part of the provisioning process >>>>> - As above, but with artifacts being lazily loaded as needed (not >>>>> recommended for production, but I think this may be useful from a >>>>> developer point of view) >>>>> >>>>> The provisioning tool will work from an XML descriptor that describes >>>>> the server that is to be built. In general this information will >>>>> include: >>>>> >>>>> - GAV of the feature packs to use >>>>> - Filtering information if not all features from a pack are required >>>>> (e.g. just give me JAX-RS from the EE pack. In this case the only >>>>> modules/subsystems installed from the pack will be the modules and >>>>> subsystems >>>>> that JAX-RS requires). >>>>> - Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8), >>>>> which will allow community users to easily upgrade individual >>>>> components. >>>>> - Configuration changes that are required (e.g. some way to add a >>>>> datasource to the assembled server). The actual form this will take >>>>> still needs to be decided. Note that this needs to work on both a user >>>>> level (a user adding a datasource) and a feature pack level (e.g. the >>>>> JON feature pack adding a required data source). >>>>> - GAV of deployments to install in the server.
This should allow a >>>>> server complete with deployments and the necessary config to be >>>>> assembled and be immediately ready to be put into service. >>>>> >>>>> Note that if you just want a full WF install you should be able to >>>>> provision it with a single line in the provisioning file, by >>>>> specifying >>>>> the dist feature pack. We will still provide our traditional download, >>>>> which will be built by the provisioning tool as part of our build >>>>> process. >>>>> >>>>> The provisioning tool will also be able to upgrade servers, which >>>>> basically consists of provisioning a new modules directory. >>>>> Rollback is >>>>> provided by provisioning from an earlier version of the provisioning file. >>>>> When a server is provisioned the tool will make a backup copy of the >>>>> file used, so it should always be possible to examine the provisioning >>>>> file that was used to build the current server config. >>>>> >>>>> Note that when an update is performed on an existing server, config >>>>> will >>>>> not be updated, unless the update adds an additional config file, in >>>>> which case the new config file will be generated (however existing >>>>> config will not be touched). >>>>> >>>>> Note that as a result of this split we will need to do much more >>>>> frequent releases of the individual feature packs, to allow the most >>>>> recent code to be integrated into dist. >>>>> >>>>> Implementation Plan >>>>> >>>>> The above changes are obviously a big job, and will not happen >>>>> overnight. They are also highly likely to conflict with other changes, >>>>> so maintaining a long running branch that gets rebased is not a >>>>> practical option. Instead the plan is to perform the split in >>>>> incremental changes. The basic steps are listed below, some of which >>>>> can >>>>> be performed in parallel.
>>>>> >>>>> 1) Using the initial implementation of my build plugin (in my >>>>> wildfly-build-plugin branch) we split up the server along the lines >>>>> above. The code will all stay in the same repo, however the plugin >>>>> will >>>>> be used to build all the individual pieces, which are then >>>>> assembled as >>>>> part of the final build process. Note that the plugin in its current >>>>> form does both the build and provision step, and the pack format it >>>>> produces is far from the final pack format that we will want to use. >>>>> >>>>> 2) Split up the test suite into modules based on the features that >>>>> they >>>>> test. This will result in several smaller modules in place of a single >>>>> large one, which should also be a usability improvement as individual >>>>> tests will be faster to run, and run times for all tests in a >>>>> module >>>>> should be more manageable. >>>>> >>>>> 3) Split the core into its own module. >>>>> >>>>> 4) Split everything else into its own module. As part of this step we >>>>> need to make sure we still have the ability to run all tests against >>>>> the >>>>> full server, as well as against the cut down feature pack version of >>>>> the >>>>> server. >>>>> >>>>> 5) Focus on the build and provisioning tool, to implement all the >>>>> features above, and to finalize the WF pack format. >>>>> >>>>> I think that just about covers it. There are still lots of nitty >>>>> gritty >>>>> details that need to be worked out, however I think this covers all >>>>> the >>>>> main aspects of the design. We are planning on starting work on this >>>>> basically immediately, as we want to get this implemented as early in >>>>> the WF9 cycle as possible.
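As a sketch only (every element name below is invented; the proposal explicitly leaves the actual form undecided), a provisioning descriptor covering the bullet points above might look like:

```xml
<server-provisioning xmlns="urn:wildfly:provisioning:1.0">
    <!-- GAV of a feature pack to use -->
    <feature-pack groupId="org.wildfly" artifactId="wildfly-ee-feature-pack"
                  version="9.0.0.Final">
        <!-- filtering: only JAX-RS plus the modules/subsystems it requires -->
        <include feature="jaxrs"/>
        <!-- version override, e.g. a community user upgrading one component -->
        <version-override artifact="org.jboss.resteasy:resteasy-jaxrs"
                          version="3.0.10.Final"/>
    </feature-pack>
    <!-- deployment to install so the server is immediately ready for service -->
    <deployment groupId="com.example" artifactId="shop-app" version="1.2"/>
</server-provisioning>
```

The full-install case would then be the one-liner mentioned above: a single feature-pack element referencing the dist pack, with no filtering or overrides.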
>>>>> >>>>> Stuart From Anil.Saldhana at redhat.com Thu Jun 12 11:55:10 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Thu, 12 Jun 2014 10:55:10 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <53988054.4090208@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> <539876D5.40906@redhat.com> <53987943.4010604@redhat.com> <53988054.4090208@redhat.com> Message-ID: <5399CD5E.4040804@redhat.com> I also want to highlight the difference between PBE and PBKDF2 (http://en.wikipedia.org/wiki/PBKDF2). Developers keep pushing for PBKDF2 which is essentially a one way process. You cannot get the password back. In the case of an application server, there is a need to get access to the configured database password to talk to a database or another EIS system. So it is a two way process. Not all databases can do a hashed/digest mechanism. I hope we can document this in Elytron documentation somewhere. Similarly, bcrypt (http://en.wikipedia.org/wiki/Bcrypt) is mentioned a lot. It again is a one way process. Also below.... On 06/11/2014 11:14 AM, David M. Lloyd wrote: > On 06/11/2014 10:44 AM, Bill Burke wrote: >> >> On 6/11/2014 11:33 AM, Anil Saldhana wrote: >>> On 06/11/2014 09:30 AM, David M. Lloyd wrote: >>>> On 06/04/2014 11:07 AM, David M. Lloyd wrote: >>>> [...]
>>>>> Example: Encrypting a new password >>>>> ---------------------------------- >>>>> >>>>> PasswordFactory pf = PasswordFactory.getInstance("sha1crypt"); >>>>> // API not yet established but will be similar to this possibly: >>>>> SHA1CryptPasswordParameterSpec parameters = new >>>>> SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray()); >>>>> Password encrypted = pf.generatePassword(parameters); >>>>> assert encrypted instanceof SHA1CryptPassword; >>>> I have a concrete specification for this example now: >>>> >>>> PasswordFactory pf = PasswordFactory.getInstance("sha-256-crypt"); >>>> // use a 64-byte random salt; most algorithms support flexible sizes >>>> byte[] salt = new byte[64]; >>>> ThreadLocalRandom.current().nextBytes(salt); >>>> // iteration count is 4096, can generally be more (or less) >>>> AlgorithmParameterSpec aps = >>>> new HashedPasswordAlgorithmSpec(4096, salt); >>>> char[] chars = "p4ssw0rd".toCharArray(); >>>> PasswordSpec spec = new EncryptablePasswordSpec(chars, aps); >>>> Password pw = pf.generatePassword(spec); >>>> assert pw.getAlgorithm().equals("sha-256-crypt"); >>>> assert pw instanceof UnixSHACryptPassword; >>>> assert pf.verifyPassword(pw, chars); >>>> >>> - Best is to make the salt and iteration count configurable. >> +1 >> >> 5000 iterations is actually a *huge* performance hit, but unfortunately >> way lower than what I've seen recommended. (I've seen as high as >> 100,000 based on today's hardware). > Yeah the point of having the algorithm parameter spec is to allow these > things to be specified. Iteration count is recommended to be pretty > high these days, unfortunately, but with this kind of parameter spec, it > is completely configurable so if there's some reason to use a lower > count (or a higher one), you can certainly do it. > >> In Keycloak we store the iteration count along with the password so that >> the admin can change the default iteration count in the future.
We >> recalculate the hash on a successful login if the default count and user >> count are different. > Yeah the newer SASL SCRAM mechanisms (and other challenge-response > mechanisms like Digest-MD5 and, I believe, HTTP's digest) also have some > support for caching pre-hashed passwords to help performance. While on > the one hand, this means that the hash is essentially sufficient to > authenticate, on the other hand the server can always periodically > regenerate the hash with a different salt, which causes the previous > hashed password to essentially become invalid without actually requiring > a password change. > Agree. From Elytron perspective, it is important to provide the configuration to handle all potential use cases. From david.lloyd at redhat.com Thu Jun 12 12:08:31 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Thu, 12 Jun 2014 11:08:31 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <5399CD5E.4040804@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> <539876D5.40906@redhat.com> <53987943.4010604@redhat.com> <53988054.4090208@redhat.com> <5399CD5E.4040804@redhat.com> Message-ID: <5399D07F.40600@redhat.com> On 06/12/2014 10:55 AM, Anil Saldhana wrote: > I also want to highlight the difference between PBE and PBKDF2 > (http://en.wikipedia.org/wiki/PBKDF2). > Developers keep pushing for PBKDF2 which is essentially a one way > process. You cannot get the password back. > In the case of an application server, there is a need to get access to > the configured database password to talk to > a database or another EIS system. So it is a two way process. Not all > databases can do a hashed/digest mechanism. > > I hope we can document this in Elytron documentation somewhere. The Password SPI in fact has OneWayPassword and TwoWayPassword sub-interfaces. 
At present, the only TwoWayPassword implementation we have is "clear", which, as the name says, is a clear password (and thus is trivially "reversible"). We recently were discussing that there seem to be very few (if any) good, reliable two-way password strategies (which do not involve a keystore, which is *not* the same thing). I've deliberately been referring to non-clear TwoWayPassword schemes as "obfuscation" rather than "encryption" since few (if any) two-way algorithms will actually make the password "secure" in the event of theft. More likely this is for the "accidental printout" kind of case. That said, if anyone knows of any good two-way password obfuscation algorithms they think should be supported, please comment here and/or open an issue at https://issues.jboss.org/browse/ELY describing the algorithm (preferably with a link to a specification if possible). -- - DML From Anil.Saldhana at redhat.com Thu Jun 12 13:08:37 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Thu, 12 Jun 2014 12:08:37 -0500 Subject: [wildfly-dev] On the WildFly Elytron PasswordFactory API In-Reply-To: <5399D07F.40600@redhat.com> References: <538F4445.9090604@redhat.com> <539867F9.6020103@redhat.com> <539876D5.40906@redhat.com> <53987943.4010604@redhat.com> <53988054.4090208@redhat.com> <5399CD5E.4040804@redhat.com> <5399D07F.40600@redhat.com> Message-ID: <5399DE95.3020800@redhat.com> On 06/12/2014 11:08 AM, David M. Lloyd wrote: > On 06/12/2014 10:55 AM, Anil Saldhana wrote: >> I also want to highlight the difference between PBE and PBKDF2 >> (http://en.wikipedia.org/wiki/PBKDF2). >> Developers keep pushing for PBKDF2 which is essentially a one way >> process. You cannot get the password back. >> In the case of an application server, there is a need to get access to >> the configured database password to talk to >> a database or another EIS system. So it is a two way process. Not all >> databases can do a hashed/digest mechanism. 
>> >> I hope we can document this in Elytron documentation somewhere. > The Password SPI in fact has OneWayPassword and TwoWayPassword > sub-interfaces. > > At present, the only TwoWayPassword implementation we have is "clear", > which, as the name says, is a clear password (and thus is trivially > "reversible"). We recently were discussing that there seem to be very > few (if any) good, reliable two-way password strategies (which do not > involve a keystore, which is *not* the same thing). > > I've deliberately been referring to non-clear TwoWayPassword schemes as > "obfuscation" rather than "encryption" since few (if any) two-way > algorithms will actually make the password "secure" in the event of > theft. More likely this is for the "accidental printout" kind of case. You are using the right term, David. I use obfuscation or masking for the two way password feature. I remember around 2007, Jason and I had this minor argument with a JBoss author who kept insisting on using the word "encryption" for the masking. Unfortunately PBE is the only available mechanism to do the two way password without the low-user-experience usage of a keystore or other certificate mechanism. > > That said, if anyone knows of any good two-way password obfuscation > algorithms they think should be supported, please comment here and/or > open an issue at https://issues.jboss.org/browse/ELY describing the > algorithm (preferably with a link to a specification if possible). > I have seen a lot of usage and demand for this open source project - jasypt. http://www.jasypt.org/ I have been planning on using it in PicketLink (http://www.picketlink.org) to get away from all the PBE based mechanisms we have to mask passwords in configuration files. Maybe Elytron can use this library as a dependency.
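To make the one-way point in this thread concrete, here is a small sketch using only the JDK's built-in PBKDF2 support (this is plain javax.crypto, not Elytron API; the class name and parameter choices are illustrative):

```java
import java.security.MessageDigest;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Demo {

    // Derive a one-way verifier from the password. There is no way to
    // recover the password from the returned bytes.
    static byte[] derive(char[] password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    // Verification re-derives with the stored salt/iteration count and compares;
    // MessageDigest.isEqual does a constant-time comparison.
    static boolean verify(char[] candidate, byte[] salt, int iterations, byte[] stored)
            throws Exception {
        return MessageDigest.isEqual(derive(candidate, salt, iterations), stored);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16]; // use a random salt in practice
        byte[] stored = derive("p4ssw0rd".toCharArray(), salt, 4096);
        System.out.println(verify("p4ssw0rd".toCharArray(), salt, 4096, stored));
        System.out.println(verify("wrong".toCharArray(), salt, 4096, stored));
    }
}
```

This is exactly why PBKDF2 cannot serve the outbound-credential case Anil describes: the server must be able to reverse whatever masking it applies to a database password, so that use case needs a two-way scheme.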
From brian.stansberry at redhat.com Thu Jun 12 13:41:27 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 12 Jun 2014 12:41:27 -0500 Subject: [wildfly-dev] Design Proposal: Log Viewer In-Reply-To: <53987EE9.2060801@redhat.com> References: <539795B5.5030009@redhat.com> <5397B692.2060209@redhat.com> <53986F54.60502@redhat.com> <53987644.1080304@redhat.com> <53987934.3020906@gmail.com> <53987EE9.2060801@redhat.com> Message-ID: <5399E647.5050809@redhat.com> On 6/11/14, 11:08 AM, Scott Marlow wrote: > On 06/11/2014 11:43 AM, Stuart Douglas wrote: >> I don't think we should be worrying about that. Management operations >> happen under a global lock, and it is already possible to perform >> operations that return a lot of content (e.g. reading the whole resource >> tree). > > If we already have a single mutually exclusive, global lock in use for > operations like "viewing logs", I'm less worried. > The mutually exclusive management lock would not typically be used for something like this, which is a read. It could be misused for that, but IMO it's a misuse. >> >> There would need to be a *lot* of admins and a very under powered server >> to make this a problem, and even then the solution is 'don't do that'. > > I've seen this "lot of admins" situation before with log viewing, which > is why I brought it up. > >> >> Stuart >> >> Scott Marlow wrote: >>> On 06/11/2014 11:01 AM, James R. Perkins wrote: >>>> On 06/10/2014 06:53 PM, Scott Marlow wrote: >>>>> Any concern about the number of users viewing the server logs at the >>>>> same time and the impact that could have on a system under load? For >>>>> example, if a bunch of users arrive at work around the same time and >>>>> they are all curious about how things went last night. They all could >>>>> make a request to show the last 1000 lines of the server.log file >>>>> (which >>>>> could peg the CPU). 
You might ask why a large number of users have >>>>> access to view the logs but the problem is still worth considering. >>>> Actually it's not something I've thought of. Though I suppose this could >>>> be an issue with any operation that returns large results. >>> >>> Would be good to have feedback on how many users are likely to >>> concurrently view logs. I suspect the count will be higher than we >>> might expect (depending on which users have access for a particular >>> deployment). >>> >>> One possible solution could be a >>> http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html >>> >>> that is configured for the maximum number of users allowed to view logs >>> concurrently. For example, if the number of Semaphore permits is >>> configured for five and eighty users are trying to view logs at the same >>> time, logs will be returned for five users at a time (until all users >>> have received their logs or a timeout occurs). >>> >>> There are probably other ways to deal with this as well. >>> >>>>> >>>>> On 06/10/2014 07:33 PM, James R. Perkins wrote: >>>>>> While there wasn't a huge talk about this at the Brno meeting I know >>>>>> Heiko brought it up as part of the extended metrics. It's on some >>>>>> future >>>>>> product road maps as well. I figured I might as well bring it up >>>>>> here and get opinions on it. >>>>>> >>>>>> This design proposal covers how log messages are captured. The "viewer" >>>>>> will likely be an operation that returns an object list of log record >>>>>> details. The design of how a GUI view would look/work is beyond the >>>>>> scope of this proposal. >>>>>> >>>>>> There is currently an operation to view a log file. This has several >>>>>> limitations. The file must be defined as a known file handler. >>>>>> There is >>>>>> also no way to filter results, e.g. errors only. If per-deployment >>>>>> logging is used, those log messages are not viewable as the files are >>>>>> not accessible.
>>>>>> >>>>>> For the list of requirements I'm going to be lazy and just give the >>>>>> link to >>>>>> the wiki page https://community.jboss.org/wiki/LogViewerDesign. >>>>>> >>>>>> Implementation: >>>>>> >>>>>> 1) There will be a new resource on the logging subsystem resource that >>>>>> can be enabled or disabled, currently called log-collector. Probably >>>>>> some attributes, but I'm not sure what will need to be configurable at >>>>>> this point. This will likely act like a handler and be assignable only >>>>>> to loggers and not the async-handler. >>>>>> >>>>>> 2) If a deployment uses per-deployment logging then a separate >>>>>> log-collector will need to be added to the deployment's log context >>>>>> >>>>>> 3) Logging profiles will also have their own log-collector. >>>>>> >>>>>> 4) The messages should be written asynchronously and to a file in some >>>>>> kind of formatted structure. The structure will likely be JSON. >>>>>> >>>>>> 5) An operation to query the messages will need to be created. This >>>>>> operation should allow the results to be filtered on various fields as >>>>>> well as limit the data set returned and allow for a starting position. >>>>>> >>>>>> 6) All operations associated with viewing the log should use RBAC to >>>>>> control the access. >>>>>> >>>>>> 7) Audit logs will eventually need to be viewable and queryable. This >>>>>> might be separate from the logging subsystem as it is now, but it will >>>>>> need to be done. >>>>>> >>>>>> >>>>>> There are things like how long or how many records we should keep that >>>>>> need to be determined. This could possibly be configurable via >>>>>> attributes on the resource. >>>>>> >>>>>> This is about all I've got at this point. I'd appreciate any feedback.
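Scott's Semaphore suggestion earlier in the thread can be sketched in a few lines; all names here are illustrative, and the timeout on tryAcquire covers the "or a timeout occurs" case he mentions:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of bounding concurrent log-view operations with a Semaphore.
// Class and method names are hypothetical, not WildFly API.
class LogViewGate {
    private final Semaphore permits;

    LogViewGate(int maxConcurrentViewers) {
        // fair = true: waiting admins are served in FIFO order
        this.permits = new Semaphore(maxConcurrentViewers, true);
    }

    // Runs the (expensive) log read while holding a permit, or fails if no
    // permit frees up within the timeout.
    String viewLog(Supplier<String> readLogTail) throws InterruptedException {
        if (!permits.tryAcquire(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Too many concurrent log viewers; try again later");
        }
        try {
            return readLogTail.get();
        } finally {
            permits.release();
        }
    }
}
```

With five permits and eighty simultaneous requests, at most five reads run at once and the rest queue, which is exactly the behavior described above.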
-- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From stuart.w.douglas at gmail.com Thu Jun 12 14:16:19 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 12 Jun 2014 13:16:19 -0500 Subject: [wildfly-dev] Design Proposal: Server suspend/resume (AKA Graceful Shutdown) In-Reply-To: <53987EB8.4090903@redhat.com> References: <5396377F.80003@redhat.com> <53987EB8.4090903@redhat.com> Message-ID: <5399EE73.5080006@gmail.com> >> What the subsystem does when it receives a suspend command, and when it >> considers itself suspended will vary, but in the common case it will >> immediately start rejecting external requests (e.g. Undertow will start >> responding with a 503 to all new requests). > > I think there will need to be some mechanism for coordination between > subsystems here. For example, I doubt mod_cluster will want Undertow > deciding to start sending 503s before it gets a chance to get the LB sorted. > Maybe this needs to be a two-step process. Basically preSuspend() gets called for each container, and they notify when this is done (which would be mod_cluster notifying the load balancer), then the actual suspend takes place. I don't think we need anything any more complex.
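The two-step process described here can be sketched with a pair of latches; the interface and class names are hypothetical illustrations, not the actual WildFly suspend API:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Hypothetical two-phase suspend SPI, after the discussion above.
interface Suspendable {
    void preSuspend(Runnable done); // phase 1: e.g. mod_cluster drains the load balancer
    void suspend(Runnable done);    // phase 2: e.g. Undertow starts returning 503
}

class SuspendController {
    // Run phase 1 on every container and wait for all callbacks before
    // starting phase 2, so no 503s are sent until the LB is sorted.
    static void suspendAll(List<Suspendable> containers) throws InterruptedException {
        CountDownLatch prepared = new CountDownLatch(containers.size());
        containers.forEach(c -> c.preSuspend(prepared::countDown));
        prepared.await();
        CountDownLatch suspended = new CountDownLatch(containers.size());
        containers.forEach(c -> c.suspend(suspended::countDown));
        suspended.await();
    }
}
```

The callback style matters because preparation (such as waiting for the load balancer to acknowledge the drain) is naturally asynchronous.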
Stuart From smarlow at redhat.com Fri Jun 13 14:52:10 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 13 Jun 2014 14:52:10 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... Message-ID: <539B485A.2040507@redhat.com> Hi, In PersistenceUnitServiceHandler [1], we are creating the persistence unit service (PersistenceUnitServiceImpl [2]) and specifying Attachments.NEXT_PHASE_DEPS for the service that we are creating. What happens when the service [2] is started asynchronously? Will the next deployment phase start as soon as the async service is started? Or will the next deployment phase start as soon as the PersistenceUnitServiceImpl.start method returns? I suspect that it is as soon as the async start method returns, which means that the JPA ordering is wrong (with respect to allowing the persistence provider to rewrite entity classes completely before the POST_MODULE phase starts for the deployment.) This came up with an EclipseLink issue [3] with weaving not working in an EAR that contains a war (with JSF managed beans) and a jar that has a persistence unit/entities. The managed bean classes (indirectly) reference the entity classes, which causes them to be loaded during the POST_MODULE phase. We are reaching the POST_MODULE phase sooner than I expected. Scott [1] https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/processor/PersistenceUnitServiceHandler.java#L424 [2] https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/service/PersistenceUnitServiceImpl.java#L192 [3] https://community.jboss.org/message/878100#878100 From smarlow at redhat.com Fri Jun 13 17:15:46 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 13 Jun 2014 17:15:46 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns...
In-Reply-To: <539B485A.2040507@redhat.com> References: <539B485A.2040507@redhat.com> Message-ID: <539B6A02.40607@redhat.com> If we don't have anything built in already, we probably could introduce another service that synchronously waits for the async activity to completely start and is marked with NEXT_PHASE_DEPS so that the next deployment phase doesn't start until the background task has completed. Something else? Scott On 06/13/2014 02:52 PM, Scott Marlow wrote: > Hi, > > In PersistenceUnitServiceHandler [1], we are creating the persistence > unit service (PersistenceUnitServiceImpl [2]) and specifying > Attachments.NEXT_PHASE_DEPS for the service that are creating. > > What happens when the service [2] is started asynchronously? Will the > next deployment phase start as soon as the async service is started? Or > will the next deployment phase as soon as the > PersistenceUnitServiceImpl.start method returns? I suspect that it is > as soon as the async start method returns, which means that the JPA > ordering is wrong (with respect to allowing the persistence provider to > rewrite entity classes completely before the POST_MODULE phase starts > for the deployment.) > > This came up with an EclipseLink issue [3] with weaving not working in > an EAR that contains war (with JSF managed beans) and a jar that has a > persistence unit/entities. The managed bean classes (indirectly) > reference the entity classes, which causes them to be loaded during the > POST_MODULE phase. We are reaching the POST_MODULE phase sooner than I > expected. 
> > > Scott > > > [1] > https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/processor/PersistenceUnitServiceHandler.java#L424 > > [2] > https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/service/PersistenceUnitServiceImpl.java#L192 > > [3] https://community.jboss.org/message/878100#878100 > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From smarlow at redhat.com Fri Jun 13 20:02:44 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 13 Jun 2014 20:02:44 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... In-Reply-To: <539B6A02.40607@redhat.com> References: <539B485A.2040507@redhat.com> <539B6A02.40607@redhat.com> Message-ID: <539B9124.4030506@redhat.com> On 06/13/2014 05:15 PM, Scott Marlow wrote: > If we don't have anything built in already, we probably could introduce > another service that synchronously waits for the async activity to > completely start and is marked with NEXT_PHASE_DEPS so that the next > deployment phase doesn't start until the background task has completed. > > Something else? If the above idea is just about making the persistence unit service synchronous, there are probably easier ways to do that (perhaps adding a sync/async deployment hint for the persistence unit). Will also need to revisit the single core deadlock case https://issues.jboss.org/browse/AS7-4786. > > Scott > > On 06/13/2014 02:52 PM, Scott Marlow wrote: >> Hi, >> >> In PersistenceUnitServiceHandler [1], we are creating the persistence >> unit service (PersistenceUnitServiceImpl [2]) and specifying >> Attachments.NEXT_PHASE_DEPS for the service that are creating. >> >> What happens when the service [2] is started asynchronously? 
Will the >> next deployment phase start as soon as the async service is started? Or >> will the next deployment phase as soon as the >> PersistenceUnitServiceImpl.start method returns? I suspect that it is >> as soon as the async start method returns, which means that the JPA >> ordering is wrong (with respect to allowing the persistence provider to >> rewrite entity classes completely before the POST_MODULE phase starts >> for the deployment.) >> >> This came up with an EclipseLink issue [3] with weaving not working in >> an EAR that contains war (with JSF managed beans) and a jar that has a >> persistence unit/entities. The managed bean classes (indirectly) >> reference the entity classes, which causes them to be loaded during the >> POST_MODULE phase. We are reaching the POST_MODULE phase sooner than I >> expected. >> >> >> Scott >> >> >> [1] >> https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/processor/PersistenceUnitServiceHandler.java#L424 >> >> [2] >> https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/service/PersistenceUnitServiceImpl.java#L192 >> >> [3] https://community.jboss.org/message/878100#878100 >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From stuart.w.douglas at gmail.com Sat Jun 14 10:40:40 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Sat, 14 Jun 2014 09:40:40 -0500 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... 
In-Reply-To: <539B485A.2040507@redhat.com> References: <539B485A.2040507@redhat.com> Message-ID: <539C5EE8.6060802@gmail.com> Scott Marlow wrote: > Hi, > > In PersistenceUnitServiceHandler [1], we are creating the persistence > unit service (PersistenceUnitServiceImpl [2]) and specifying > Attachments.NEXT_PHASE_DEPS for the service that are creating. > > What happens when the service [2] is started asynchronously? Will the > next deployment phase start as soon as the async service is started? Or > will the next deployment phase as soon as the > PersistenceUnitServiceImpl.start method returns? I suspect that it is > as soon as the async start method returns, which means that the JPA > ordering is wrong (with respect to allowing the persistence provider to > rewrite entity classes completely before the POST_MODULE phase starts > for the deployment.) It should not matter if it is an async start or not, the phase should not advance until the service is actually started. This just uses normal MSC service deps, and given that we use async start a fair bit it would surprise me if it was at fault here. Stuart > > This came up with an EclipseLink issue [3] with weaving not working in > an EAR that contains war (with JSF managed beans) and a jar that has a > persistence unit/entities. The managed bean classes (indirectly) > reference the entity classes, which causes them to be loaded during the > POST_MODULE phase. We are reaching the POST_MODULE phase sooner than I > expected.
> > > Scott > > > [1] > https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/processor/PersistenceUnitServiceHandler.java#L424 > > [2] > https://github.com/wildfly/wildfly/blob/master/jpa/src/main/java/org/jboss/as/jpa/service/PersistenceUnitServiceImpl.java#L192 > > [3] https://community.jboss.org/message/878100#878100 > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From a.walker at base2services.com Sun Jun 15 19:57:25 2014 From: a.walker at base2services.com (Aaron Walker) Date: Mon, 16 Jun 2014 09:57:25 +1000 Subject: [wildfly-dev] JMX Console over Web Admin Console In-Reply-To: References: <537D51A9.7090803@redhat.com> <538E3130.4060905@redhat.com> Message-ID: <01B7E2CB-D0F0-4E66-B373-C3029D849F23@base2services.com> My 2 cents: have you looked at using this: http://www.jolokia.org/agent/war.html? It exposes JMX via HTTP and supports other agents, including a JVM agent. It also has a number of clients. Unless you really need the old web UI provided by the jmx-console, I would struggle to see the value in porting it. -Aaron On 8 Jun 2014, at 5:34 pm, Sebastian Łaskawiec wrote: > Hi Tomaz > > Thanks for the hints! > I created a separate repository with a proper group id. I also replaced the JBoss logo with Wildfly and corrected the code packages. Everything may be found here: https://github.com/altanis/wildfly-jmx-console > > Is it possible to release this war file into some publicly available repository? > > Best regards > Sebastian > > > 2014-06-04 16:08 GMT+02:00 Tomaž Cerar : > In any case it cannot be org.jboss.* > it can be org.wildfly. > > Looking through the rebased code, it is still a war application that depends on a servlet container being present. > Taking that into consideration, this cannot be part of our main codebase/distribution, but having it as an external add-on project sounds fine.
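Whether it is the ported jmx-console, Jolokia, or a future HTTP management endpoint, each of these is ultimately a front end over the JVM's MBeanServer. For orientation, a minimal self-contained query of the platform MBean server, using only the standard JMX API (no WildFly code involved):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Minimal illustration of what any JMX console front end does under the
// hood: look up and read MBeans from the local platform MBeanServer.
public class MBeanQuery {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Every JVM registers the java.lang:* platform MBeans.
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        System.out.println("registered: " + server.isRegistered(runtime));
        Object uptime = server.getAttribute(runtime, "Uptime");
        System.out.println("Uptime attribute type: " + uptime.getClass().getSimpleName());
    }
}
```

A console such as Jolokia adds an HTTP layer on top of exactly this kind of lookup; the thread's debate is only about where that HTTP layer should live (standalone war vs. the management interface).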
> > In this case i would go for org.wildfly.jmx-console as groupId and artifact id based on logical part of artifact inside the project. > probably just jmx-console. > > btw, your rebased project still imports java ee6 dependencies, given wildfly is ee7 now it would be wise to upgrade that. > > -- > tomaz > > > On Wed, Jun 4, 2014 at 3:53 PM, Sebastian ?askawiec wrote: > Hi Brian > > I thought about: > org.jboss > > > org.jboss.as > org.wildfly > ,artifact id: > wildfly-jmx-console > jboss-jmx-console > and finally version: > start from the scratch 1.0.0-SNAPSHOT > My preferences are - org.jboss as group id and jboss-jmx-console as artifact id. What do you think, is it ok? > > Best regards > Sebastian > > > > > > > 2014-06-03 22:33 GMT+02:00 Brian Stansberry : > > Hi Sebastian, > > > On 6/1/14, 1:21 PM, Sebastian ?askawiec wrote: > Hi Brian > > Thanks for clarification and sorry for late response. > > I created Feature Request to add expose MBean server through HTTP > management interface: https://issues.jboss.org/browse/WFLY-3426 > > > Thanks. > > > It would be great to have MBean server exposed via Wildfly HTTP > Management interface, but I know several teams which would like to have > such functionality in JBoss AS 7. This is why I started looking at > Darran's port to JMX console > (https://github.com/dandreadis/wildfly/commits/jmx-console). I rebased > it, detached from Wildfly parent and pushed to my branch > (https://github.com/altanis/wildfly/commits/jmx-console-ported). The > same WAR file seems to work correctly on JBoss AS 7 as well as Wildfly. > > In my opinion it would be great to have this console available publicly. > Is it possible to make the WAR file available through JBoss Nexus > (perhaps thirdparty-releases repository)? If it is, I'd squash all > commits and push only jmx-console code into new github repository (to > make it separate from Wildfly). > > > What maven Group were you wanting to use? 
That jmx-console-ported branch has org.wildfly in the pom. > > Best regards > Sebastian > > > > 2014-05-22 3:23 GMT+02:00 Brian Stansberry >: > > > I agree that if we exposed the mbean server over HTTP that it should be > via a context on our HTTP management interface. Either that or expose > mbeans as part of our standard management resource tree. That would make > integration in the web console much more practical. > > I don't see us ever bringing back the AS5-style jmx-console.war that > runs on port 8080 as part of the WildFly distribution. That would > introduce a requirement for EE into our management infrastructure, and > we won't do that. Management is part of WildFly core, and WildFly core > does not require EE. If the Servlet-based jmx-console.war code linked > from WFLY-1197 gets further developed, I see it as a community effort > for people who want to install that on their own, not as something we'd > distribute as part of WildFly itself. > > On 5/21/14, 7:37 AM, Sebastian ?askawiec wrote: > > Hi > > > > One of our projects is based on JBoss 5.1 and we are considering > > migrating it to Wildfly. One of our problems is Web based JMX > Console... > > We have pretty complicated production environment and Web based JMX > > console with basic Auth delegated to LDAP is the simplest > solution for us. > > > > I noticed that there was a ticket opened for porting legacy JMX > Console: > > https://issues.jboss.org/browse/WFLY-1197. > > However I think it would be much better idea to to have this > > functionality in Web Administraction console. In my opinion it > would be > > great to have it under "Runtime" in "Status" submenu. > > > > What do you think about this idea? 
> > > > Best Regards > > -- > > Sebastian Łaskawiec > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Sebastian Łaskawiec > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > > > > -- > Sebastian Łaskawiec > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -- > Sebastian Łaskawiec > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140616/18246a87/attachment-0001.html From qutpeter at gmail.com Mon Jun 16 08:23:18 2014 From: qutpeter at gmail.com (Peter Cai) Date: Mon, 16 Jun 2014 22:23:18 +1000 Subject: [wildfly-dev] Is wildfly subsystem maven archetype available Message-ID: Hi, Is the Wildfly subsystem maven archetype available in the redhat maven repo? Regards, Peter C -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140616/2a667c0d/attachment.html From tomaz.cerar at gmail.com Mon Jun 16 08:39:16 2014 From: tomaz.cerar at gmail.com (Tomaž Cerar) Date: Mon, 16 Jun 2014 14:39:16 +0200 Subject: [wildfly-dev] Is wildfly subsystem maven archetype available In-Reply-To: References: Message-ID: Yes it will be. I just forgot a bit about it. Let me get on this right away.
-- tomaz On Mon, Jun 16, 2014 at 2:23 PM, Peter Cai wrote: > Hi, > > Is Wildfly subsystem maven archetype available in redhat maven repo? > > Regards, > Peter C > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140616/edf672c2/attachment.html From qutpeter at gmail.com Mon Jun 16 08:42:10 2014 From: qutpeter at gmail.com (Peter Cai) Date: Mon, 16 Jun 2014 22:42:10 +1000 Subject: [wildfly-dev] Is wildfly subsystem maven archetype available In-Reply-To: References: Message-ID: Awesome. Looking forward to seeing it. Peter C On Mon, Jun 16, 2014 at 10:39 PM, Toma? Cerar wrote: > Yes it will be. > I just forgot bit about it. > > Let me get on this right away. > > -- > tomaz > > > On Mon, Jun 16, 2014 at 2:23 PM, Peter Cai wrote: > >> Hi, >> >> Is Wildfly subsystem maven archetype available in redhat maven repo? >> >> Regards, >> Peter C >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140616/526dc890/attachment.html From tomaz.cerar at gmail.com Mon Jun 16 11:08:26 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Mon, 16 Jun 2014 17:08:26 +0200 Subject: [wildfly-dev] Is wildfly subsystem maven archetype available In-Reply-To: References: Message-ID: Hey, I have released 8.0.0.Final version of WildFly subsystem archetype. Docs are somewhat updated https://docs.jboss.org/author/display/WFLY8/Example+subsystem or at least enough to get you started. 
Sources are available at https://github.com/wildfly/archetypes together with an extra subsystem example. I will be working on improving the archetype and docs with a few new things in the coming days. But what is there now should be enough to get you started. Let me know if you find any problems. -- tomaz On Mon, Jun 16, 2014 at 2:42 PM, Peter Cai wrote: > Awesome. > Looking forward to seeing it. > > Peter C > > > On Mon, Jun 16, 2014 at 10:39 PM, Tomaž Cerar > wrote: > >> Yes it will be. >> I just forgot a bit about it. >> >> Let me get on this right away. >> >> -- >> tomaz >> >> >> On Mon, Jun 16, 2014 at 2:23 PM, Peter Cai wrote: >> >>> Hi, >>> >>> Is the Wildfly subsystem maven archetype available in the redhat maven repo? >>> >>> Regards, >>> Peter C >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140616/5b1fb825/attachment.html From sdouglas at redhat.com Mon Jun 16 13:38:31 2014 From: sdouglas at redhat.com (Stuart Douglas) Date: Mon, 16 Jun 2014 11:38:31 -0600 Subject: [wildfly-dev] First patch for the build split has been merged Message-ID: <539F2B97.9000308@redhat.com> Hi all, The first patch of many for the build split has been merged. This introduces a few changes, the most obvious of which is that the server that is produced in the build dir is now a 'thin' server, that uses jars directly from the local maven directory, and the full traditional server is now built in the 'dist' directory.
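For anyone unfamiliar with what "uses jars directly from the local maven directory" implies: the thin server locates artifacts via the standard Maven repository layout. A small illustrative helper (an assumption for explanation only, not WildFly's actual resolution code) showing how coordinates map to a path under ~/.m2/repository:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the standard Maven repository layout a 'thin' server can rely on:
// groupId dots become directories, then artifactId/version/artifactId-version.jar.
// Illustrative helper only; not WildFly's resolution code.
public class MavenRepoLayout {
    static Path jarPath(Path repoRoot, String groupId, String artifactId, String version) {
        return repoRoot
                .resolve(groupId.replace('.', '/'))
                .resolve(artifactId)
                .resolve(version)
                .resolve(artifactId + "-" + version + ".jar");
    }

    public static void main(String[] args) {
        Path m2 = Paths.get(System.getProperty("user.home"), ".m2", "repository");
        // The version used here is only an example coordinate.
        System.out.println(jarPath(m2, "org.jboss.msc", "jboss-msc", "1.2.2.Final"));
    }
}
```

Because the layout is deterministic, a thin distribution only needs the coordinates to find each module's jar, which is what keeps the build output small.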
Stuart From brian.stansberry at redhat.com Mon Jun 16 14:00:57 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 16 Jun 2014 13:00:57 -0500 Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: <539F2B97.9000308@redhat.com> References: <539F2B97.9000308@redhat.com> Message-ID: <539F30D9.5080200@redhat.com> If you have an open PR that updates one of the management config schemas that ends up in docs/schema, you'll likely need to rebase and fix conflicts, as the files have been moved to the subsystems. On 6/16/14, 12:38 PM, Stuart Douglas wrote: > Hi all, > > The first patch of many for the build split has been merged. This > introduces a few changes, the most obvious of which is that the server > that is produced in the build dir is now a 'thin' server, that uses jars > directly from the local maven directory, and the full traditional server > is now built in the 'dist' directory. > > > Stuart > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From qutpeter at gmail.com Tue Jun 17 06:53:00 2014 From: qutpeter at gmail.com (Peter Cai) Date: Tue, 17 Jun 2014 20:53:00 +1000 Subject: [wildfly-dev] Is wildfly subsystem maven archetype available In-Reply-To: References: Message-ID: Thanks Tomaz, up and kicking. On Tue, Jun 17, 2014 at 1:08 AM, Tomaž Cerar wrote: > Hey, > > I have released the 8.0.0.Final version of the WildFly subsystem archetype. > > Docs are somewhat updated > https://docs.jboss.org/author/display/WFLY8/Example+subsystem > or at least enough to get you started. > > Sources are available at https://github.com/wildfly/archetypes together > with an extra subsystem example. > > I will be working on improving the archetype and docs with a few new things that > can be used in the coming days.
> > But what is there now it should be enough to get you started. > > Let me know if you find any problems. > > -- > tomaz > > > > On Mon, Jun 16, 2014 at 2:42 PM, Peter Cai wrote: > >> Awesome. >> Looking forward to seeing it. >> >> Peter C >> >> >> On Mon, Jun 16, 2014 at 10:39 PM, Toma? Cerar >> wrote: >> >>> Yes it will be. >>> I just forgot bit about it. >>> >>> Let me get on this right away. >>> >>> -- >>> tomaz >>> >>> >>> On Mon, Jun 16, 2014 at 2:23 PM, Peter Cai wrote: >>> >>>> Hi, >>>> >>>> Is Wildfly subsystem maven archetype available in redhat maven repo? >>>> >>>> Regards, >>>> Peter C >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140617/8c8c7bd4/attachment.html From kdejan at gmail.com Tue Jun 17 07:27:43 2014 From: kdejan at gmail.com (Dejan Kitic) Date: Tue, 17 Jun 2014 12:27:43 +0100 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem Message-ID: Hi guys, I am trying to figure out if it's possible to make HornetQ RemoteConnectionFactory available within subsystem using something like: final CastingInjector connFactInjector = new CastingInjector(connFactInjector, ConnectionFactory.class); and then doing something like: ... .addDependency(ConnectionFactoryService.SERVICE_NAME, connFactInjector) within SubsystemAdd performRuntime. Above is just thinking on the subject, the actual problem would be what to put in the addDependency call for service name, and even how to specify that I need to wait for the RemoteConnectionFactory to become available. Might be completely off with this, so any help is appreciated. Regards, Dejan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140617/e3a4c3b2/attachment-0001.html From jmesnil at redhat.com Tue Jun 17 09:16:48 2014 From: jmesnil at redhat.com (Jeff Mesnil) Date: Tue, 17 Jun 2014 15:16:48 +0200 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem In-Reply-To: References: Message-ID: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> On 17 Jun 2014, at 13:27, Dejan Kitic wrote: > Hi guys, > > I am trying to figure out if it's possible to make HornetQ RemoteConnectionFactory available within subsystem using something like: > > final CastingInjector connFactInjector = new CastingInjector(connFactInjector, > ConnectionFactory.class); > and then doing something like: > > ... > .addDependency(ConnectionFactoryService.SERVICE_NAME, connFactInjector) > > > within SubsystemAdd performRuntime. > > Above is just thinking on the subject, the actual problem would be what to put in the addDependency call for service name, and even how to specify that I need to wait for the RemoteConnectionFactory to become available. Unfortunately, that will not work. You could create a dependency on the connection factory name but the service does not return the JMS ConnectionFactory as its value (long story short, HornetQ creates the object internally and does not expose it from its management API). An alternative would be to depend on the JNDI binding of the remote connection factory instead. jeff -- Jeff Mesnil JBoss, a division of Red Hat http://jmesnil.net/ From kdejan at gmail.com Tue Jun 17 09:23:57 2014 From: kdejan at gmail.com (Dejan Kitic) Date: Tue, 17 Jun 2014 14:23:57 +0100 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem In-Reply-To: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> References: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> Message-ID: Hi Jeff, Big thanks for your answer. Alternative with JNDI binding sounds good - think that might just do the trick for me here. 
Dejan On Tue, Jun 17, 2014 at 2:16 PM, Jeff Mesnil wrote: > > On 17 Jun 2014, at 13:27, Dejan Kitic wrote: > > > Hi guys, > > > > I am trying to figure out if it's possible to make HornetQ > RemoteConnectionFactory available within subsystem using something like: > > > > final CastingInjector connFactInjector = new > CastingInjector(connFactInjector, > > ConnectionFactory.class); > > and then doing something like: > > > > ... > > .addDependency(ConnectionFactoryService.SERVICE_NAME, connFactInjector) > > > > > > within SubsystemAdd performRuntime. > > > > Above is just thinking on the subject, the actual problem would be what > to put in the addDependency call for service name, and even how to specify > that I need to wait for the RemoteConnectionFactory to become available. > > Unfortunately, that will not work. > > You could create a dependency on the connection factory name but the > service does not return the JMS ConnectionFactory as its value (long story > short, HornetQ creates the object internally and does not expose it from > its management API). > An alternative would be to depend on the JNDI binding of the remote > connection factory instead. > > jeff > > > -- > Jeff Mesnil > JBoss, a division of Red Hat > http://jmesnil.net/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140617/592a875a/attachment.html From tomaz.cerar at gmail.com Tue Jun 17 09:46:08 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Tue, 17 Jun 2014 15:46:08 +0200 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem In-Reply-To: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> References: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> Message-ID: Jeff, this is not about mgmt api exposure if I understood question correctly. It is question on level of msc services. 
Code inside services cannot access JNDI at all, so injecting service as dependency is only solution. -- tomaz On Tue, Jun 17, 2014 at 3:16 PM, Jeff Mesnil wrote: > > On 17 Jun 2014, at 13:27, Dejan Kitic wrote: > > > Hi guys, > > > > I am trying to figure out if it's possible to make HornetQ > RemoteConnectionFactory available within subsystem using something like: > > > > final CastingInjector connFactInjector = new > CastingInjector(connFactInjector, > > ConnectionFactory.class); > > and then doing something like: > > > > ... > > .addDependency(ConnectionFactoryService.SERVICE_NAME, connFactInjector) > > > > > > within SubsystemAdd performRuntime. > > > > Above is just thinking on the subject, the actual problem would be what > to put in the addDependency call for service name, and even how to specify > that I need to wait for the RemoteConnectionFactory to become available. > > Unfortunately, that will not work. > > You could create a dependency on the connection factory name but the > service does not return the JMS ConnectionFactory as its value (long story > short, HornetQ creates the object internally and does not expose it from > its management API). > An alternative would be to depend on the JNDI binding of the remote > connection factory instead. > > jeff > > > -- > Jeff Mesnil > JBoss, a division of Red Hat > http://jmesnil.net/ > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140617/5ae36ff0/attachment.html From jmesnil at redhat.com Tue Jun 17 09:52:00 2014 From: jmesnil at redhat.com (Jeff Mesnil) Date: Tue, 17 Jun 2014 15:52:00 +0200 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem In-Reply-To: References: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> Message-ID: <56338B80-E00C-4683-9AE9-7236EB1EDF5E@redhat.com> On 17 Jun 2014, at 15:46, Toma? Cerar wrote: > Jeff, > this is not about mgmt api exposure if I understood question correctly. > It is question on level of msc services. > Code inside services cannot access JNDI at all, so injecting service as dependency is only solution. I may not have been clear enough. I advised to depend on the JNDI binding as a service using the ContextNames.bindInfoFor(name).getServiceName(). As far as I can tell, this is the only way to depend on the ConnectionFactory from inside the msc services. -- Jeff Mesnil JBoss, a division of Red Hat http://jmesnil.net/ From kdejan at gmail.com Tue Jun 17 09:52:58 2014 From: kdejan at gmail.com (Dejan Kitic) Date: Tue, 17 Jun 2014 14:52:58 +0100 Subject: [wildfly-dev] Making RemoteConnectionFactory available in subsystem In-Reply-To: References: <76790573-3F52-422D-B2F6-43256CD6DB75@redhat.com> Message-ID: Hi Tomaz, You did...It is on the level of msc services, I need to prevent my subsystem from starting up until I have RemoteConnectionFactory available. I can't do the lookup from within, so my thinking was to declare dependency to something in the messaging subsystem and use injectors to make it available to my code. I thought there was a way to do what Jeff proposed from within the msc service. Dejan On Tue, Jun 17, 2014 at 2:46 PM, Toma? Cerar wrote: > Jeff, > this is not about mgmt api exposure if I understood question correctly. > It is question on level of msc services. 
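The pattern Jeff and Tomaž are converging on, declaring a dependency on the binding's service name (the ContextNames.bindInfoFor(name).getServiceName() Jeff quotes) and receiving the value through an injector once the producer is up, can be sketched with a toy model. Everything below is plain Java that mimics the MSC injector idea; the class and method names are illustrative, not the real jboss-msc API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of the MSC pattern under discussion: a dependent service names
// its dependency and receives the dependency's value via an injector before
// it is allowed to start. Not the jboss-msc API; in WildFly the dependency
// name would come from ContextNames.bindInfoFor(name).getServiceName().
public class InjectorSketch {
    static class Injector<T> implements Consumer<T> {
        T value;
        public void accept(T v) { value = v; }
    }

    static final Map<String, Object> startedServices = new HashMap<>();

    // "Start" the dependent only once the named dependency's value exists;
    // injection happens before the dependent's start logic runs.
    static <T> void addDependency(String serviceName, Injector<T> target, Runnable onStart) {
        @SuppressWarnings("unchecked")
        T value = (T) startedServices.get(serviceName);
        if (value == null) {
            throw new IllegalStateException("dependency not up: " + serviceName);
        }
        target.accept(value);
        onStart.run();
    }

    public static void main(String[] args) {
        // Producer: stands in for the binder service of the remote
        // connection factory (the name here is purely illustrative).
        startedServices.put("naming.binding.RemoteConnectionFactory", "cf-instance");

        Injector<String> cfInjector = new Injector<>();
        addDependency("naming.binding.RemoteConnectionFactory", cfInjector,
                () -> System.out.println("subsystem started with " + cfInjector.value));
    }
}
```

The point of the model is Dejan's requirement: his subsystem's start is gated on the dependency being up, and the injected value is available to his code without any JNDI lookup from inside the service.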
> Code inside services cannot access JNDI at all, so injecting service as > dependency is only solution. > > -- > tomaz > > > On Tue, Jun 17, 2014 at 3:16 PM, Jeff Mesnil wrote: > >> >> On 17 Jun 2014, at 13:27, Dejan Kitic wrote: >> >> > Hi guys, >> > >> > I am trying to figure out if it's possible to make HornetQ >> RemoteConnectionFactory available within subsystem using something like: >> > >> > final CastingInjector connFactInjector = new >> CastingInjector(connFactInjector, >> > ConnectionFactory.class); >> > and then doing something like: >> > >> > ... >> > .addDependency(ConnectionFactoryService.SERVICE_NAME, connFactInjector) >> > >> > >> > within SubsystemAdd performRuntime. >> > >> > Above is just thinking on the subject, the actual problem would be what >> to put in the addDependency call for service name, and even how to specify >> that I need to wait for the RemoteConnectionFactory to become available. >> >> Unfortunately, that will not work. >> >> You could create a dependency on the connection factory name but the >> service does not return the JMS ConnectionFactory as its value (long story >> short, HornetQ creates the object internally and does not expose it from >> its management API). >> An alternative would be to depend on the JNDI binding of the remote >> connection factory instead. >> >> jeff >> >> >> -- >> Jeff Mesnil >> JBoss, a division of Red Hat >> http://jmesnil.net/ >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140617/6b017cd3/attachment.html From stuart.w.douglas at gmail.com Tue Jun 17 17:56:45 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 17 Jun 2014 15:56:45 -0600 Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: <539F2B97.9000308@redhat.com> References: <539F2B97.9000308@redhat.com> Message-ID: <53A0B99D.90904@gmail.com> I have created a wiki doc with details of the new build process. This document will be updated as work progresses. https://community.jboss.org/wiki/WildflyBuildProcess Stuart Stuart Douglas wrote: > Hi all, > > The first patch of many for the build split has been merged. This > introduces a few changes, the most obvious of which is that the server > that is produced in the build dir is now a 'thin' server, that uses jars > directly from the local maven directory, and the full traditional server > is now built in the 'dist' directory. > > > Stuart > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From stuart.w.douglas at gmail.com Tue Jun 17 20:25:49 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Tue, 17 Jun 2014 18:25:49 -0600 Subject: [wildfly-dev] Handling history with the build split Message-ID: <53A0DC8D.9080201@gmail.com> Hi all, So something we have been thinking about is how best to handle history when doing the split, there were three obvious options that we looked at: 1) Copy the repo for each split, and just delete what is no longer needed. This means that you have full history, however each repo is 130Mb+, so once you have 4 or more repos you are looking at a very large amount of data to checkout a full WF server, which will probably put off potential contributors. 
2) Clean break Copy the files into a core repo, and just have a clean break, which means that if you want to view history you will need to refer to the original WF repo, which is not great (I know I use history and git annotate a lot). This also means that it looks like the whole server was added by one person in a single commit, which is not great. 3) git filter-branch We use the filter-branch command to filter out all non-relevant history. This means most history is intact, however you do lose the full context of the commit if it modified subsystems that have been moved into separate repos. It looks like it should be possible to append the old commit SHA to the message on a new line, which will make it easy to look up the old commit if you need to see the full context. The interesting thing is that options 2) and 3) can also be used with a little-known command called 'git replace' (which is basically a replacement for grafts), to graft the full history over the top of the truncated/rewritten history. Basically if you care about the full history you will be able to run a script, and it will graft the complete WF history into your repo, so any command that works with history will show each commit in its entirety. I think we should use option 3) combined with 'git replace'. This means that the repo will be much smaller, and contain a rewritten version of history that should be enough for most people. If anyone needs a full history (e.g. when doing backporting) all that will be required is running a simple script to replace the fake history with the real thing. Comments?
Stuart From brian.stansberry at redhat.com Tue Jun 17 21:13:33 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Tue, 17 Jun 2014 20:13:33 -0500 Subject: [wildfly-dev] Handling history with the build split In-Reply-To: <53A0DC8D.9080201@gmail.com> References: <53A0DC8D.9080201@gmail.com> Message-ID: <53A0E7BD.2000707@redhat.com> One thing that occurred to me is the main repo will have the 8.x branch, which was split off pretty close to when you started this work. So switching to that branch makes it fairly easy to see the full context of a commit if it's lost with git-replace. On 6/17/14, 7:25 PM, Stuart Douglas wrote: > Hi all, > > So something we have been thinking about is how best to handle history > when doing the split, there were three obvious options that we looked at: > > 1) Copy the repo for each split, and just delete what is no longer needed. > > This means that you have full history, however each repo is 130Mb+, so > once you have 4 or more repos you are looking at a very large amount of > data to checkout a full WF server, which will probably put off potential > contributors. > > 2) Clean break > > Copy the files into a core repo, and just have a clean break, which > means that if you want to view history you will need to refer to the > original WF repo, which is not great (I know I use history and git > annotate a lot). This also means that it looks like the whole server was > added by one person in a single commit, which is not great. > > 3) git filter-branch > > We use the filer branch command to filter out all non-relevant history. > This means most history is intact, however you do loose the full context > of the commit if it modified subsystems that have been moved into > separate repos. > > It looks like it should be possible to append the old commit sha to the > message on a new line, which will make it easy to look up the old commit > if you need to see the full context. 
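Option 3 is easy to try on a scratch repository. The following self-contained demo (it assumes only that git is on the PATH, and never touches a real WildFly checkout) rewrites history so that a single subtree survives, which is the core of the filter-branch approach; 'git replace' could then graft the original history back on top for anyone who wants it:

```shell
#!/bin/sh
# Demo of rewriting history so only one subtree remains, as discussed above.
# Runs entirely in a throwaway repository under a temp directory.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1   # newer git pauses without this
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
mkdir core other
echo a > core/a.txt
echo b > other/b.txt
git add . && git commit -qm "initial layout"
echo a2 >> core/a.txt
git commit -qam "core change"
# Keep only the history relevant to core/; its contents become the new root.
git filter-branch -f --subdirectory-filter core -- --all > /dev/null 2>&1
git log --pretty=%s   # both commits touched core/, so both survive
ls                    # a.txt is now at the repository root
```

Commits that never touched the kept subtree are pruned, which is why the resulting repo is much smaller while annotate/log still work for the surviving files.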
> > > > The interesting thing is that options 2) and 3) can also be used with a > little known command called 'git replace' (which is basically a > replacement for grafts), to basically graft the full history over the > top of the truncated/rewritten history. Basically if you care about the > full history you will be able to run a script, and it will graft the > complete WF history into your repo, so any command that works with > history will show each commit in its entirety. > > I think we should use option 3) combined with 'git replace'. This means > that the repo will be much smaller, and contain a rewritten version of > history that should be enough for most people. If anyone needs a full > history (e.g. when doing backporting) all that will be required is > running a simple script to replace the fake history with the real thing. > > Comments? > > Stuart > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From jason.greene at redhat.com Tue Jun 17 23:33:23 2014 From: jason.greene at redhat.com (Jason Greene) Date: Tue, 17 Jun 2014 22:33:23 -0500 Subject: [wildfly-dev] Handling history with the build split In-Reply-To: <53A0E7BD.2000707@redhat.com> References: <53A0DC8D.9080201@gmail.com> <53A0E7BD.2000707@redhat.com> Message-ID: <02D24A42-59A9-4A03-AA52-E7DCDB801E8E@redhat.com> Yeah the main dist branch should probably be a preserved history for that reason. On Jun 17, 2014, at 8:13 PM, Brian Stansberry wrote: > One thing that occurred to me is the main repo will have the 8.x branch, > which was split off pretty close to when you started this work. So > switching to that branch makes it fairly easy to see the full context of > a commit if it's lost with git-replace. 
> > On 6/17/14, 7:25 PM, Stuart Douglas wrote: >> Hi all, >> >> So something we have been thinking about is how best to handle history >> when doing the split, there were three obvious options that we looked at: >> >> 1) Copy the repo for each split, and just delete what is no longer needed. >> >> This means that you have full history, however each repo is 130Mb+, so >> once you have 4 or more repos you are looking at a very large amount of >> data to checkout a full WF server, which will probably put off potential >> contributors. >> >> 2) Clean break >> >> Copy the files into a core repo, and just have a clean break, which >> means that if you want to view history you will need to refer to the >> original WF repo, which is not great (I know I use history and git >> annotate a lot). This also means that it looks like the whole server was >> added by one person in a single commit, which is not great. >> >> 3) git filter-branch >> >> We use the filer branch command to filter out all non-relevant history. >> This means most history is intact, however you do loose the full context >> of the commit if it modified subsystems that have been moved into >> separate repos. >> >> It looks like it should be possible to append the old commit sha to the >> message on a new line, which will make it easy to look up the old commit >> if you need to see the full context. >> >> >> >> The interesting thing is that options 2) and 3) can also be used with a >> little known command called 'git replace' (which is basically a >> replacement for grafts), to basically graft the full history over the >> top of the truncated/rewritten history. Basically if you care about the >> full history you will be able to run a script, and it will graft the >> complete WF history into your repo, so any command that works with >> history will show each commit in its entirety. >> >> I think we should use option 3) combined with 'git replace'. 
This means >> that the repo will be much smaller, and contain a rewritten version of >> history that should be enough for most people. If anyone needs a full >> history (e.g. when doing backporting) all that will be required is >> running a simple script to replace the fake history with the real thing. >> >> Comments? >> >> Stuart >> >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > > -- > Brian Stansberry > Senior Principal Software Engineer > JBoss by Red Hat > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From bgeorges at redhat.com Wed Jun 18 02:48:10 2014 From: bgeorges at redhat.com (Bruno Georges) Date: Wed, 18 Jun 2014 02:48:10 -0400 (EDT) Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: <53A0B99D.90904@gmail.com> References: <539F2B97.9000308@redhat.com> <53A0B99D.90904@gmail.com> Message-ID: Thank you Stuart. Great initiative. Bruno Sent from my iPhone > On 17 Jun, 2014, at 23:57, Stuart Douglas wrote: > > I have created a wiki doc with details of the new build process. This > document will be updated as work progresses. > > https://community.jboss.org/wiki/WildflyBuildProcess > > Stuart > > Stuart Douglas wrote: >> Hi all, >> >> The first patch of many for the build split has been merged. This >> introduces a few changes, the most obvious of which is that the server >> that is produced in the build dir is now a 'thin' server, that uses jars >> directly from the local maven directory, and the full traditional server >> is now built in the 'dist' directory. 
>> >> >> Stuart >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From darran.lofthouse at jboss.com Wed Jun 18 04:42:00 2014 From: darran.lofthouse at jboss.com (Darran Lofthouse) Date: Wed, 18 Jun 2014 09:42:00 +0100 Subject: [wildfly-dev] Handling history with the build split In-Reply-To: <53A0DC8D.9080201@gmail.com> References: <53A0DC8D.9080201@gmail.com> Message-ID: <53A150D8.7070008@jboss.com> Does the modified history contain enough information to do 'git annotate' - personally that is my main use of history to find out why on earth a line of code exists ;-) Having said that git is not the best at preserving history after refactoring and we have been able to cope with that in numerous places. Regards, Darran Lofthouse. On 18/06/14 01:25, Stuart Douglas wrote: > Hi all, > > So something we have been thinking about is how best to handle history > when doing the split, there were three obvious options that we looked at: > > 1) Copy the repo for each split, and just delete what is no longer needed. > > This means that you have full history, however each repo is 130Mb+, so > once you have 4 or more repos you are looking at a very large amount of > data to checkout a full WF server, which will probably put off potential > contributors. > > 2) Clean break > > Copy the files into a core repo, and just have a clean break, which > means that if you want to view history you will need to refer to the > original WF repo, which is not great (I know I use history and git > annotate a lot). This also means that it looks like the whole server was > added by one person in a single commit, which is not great. 
> > 3) git filter-branch > > We use the filter-branch command to filter out all non-relevant history. > This means most history is intact, however you do lose the full context > of the commit if it modified subsystems that have been moved into > separate repos. > > It looks like it should be possible to append the old commit sha to the > message on a new line, which will make it easy to look up the old commit > if you need to see the full context. > > > > The interesting thing is that options 2) and 3) can also be used with a > little known command called 'git replace' (which is basically a > replacement for grafts), to basically graft the full history over the > top of the truncated/rewritten history. Basically if you care about the > full history you will be able to run a script, and it will graft the > complete WF history into your repo, so any command that works with > history will show each commit in its entirety. > > I think we should use option 3) combined with 'git replace'. This means > that the repo will be much smaller, and contain a rewritten version of > history that should be enough for most people. If anyone needs a full > history (e.g. when doing backporting) all that will be required is > running a simple script to replace the fake history with the real thing. > > Comments?
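On the repo-size concern, a related trick for anyone who ends up cloning several of the split repos is `git clone --reference`, which lets later clones borrow objects from an existing one instead of copying them again. A runnable sketch with stand-in local paths (the real split repos would be remote URLs):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for one of the big shared repos.
git init -q upstream
(cd upstream \
  && git config user.email dev@example.com && git config user.name Dev \
  && echo x > file && git add file && git commit -qm "base")

# The first clone holds a complete object store.
git clone -q upstream first

# Later clones reference the first one's object store via an
# 'alternates' entry instead of duplicating the objects on disk.
git clone -q --reference "$work/first" upstream second
cat second/.git/objects/info/alternates
```

The usual caveat applies: the referenced repo must stay around (and keep its objects) for as long as the borrowing clone exists.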
> > Stuart > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From akostadi at redhat.com Wed Jun 18 05:07:03 2014 From: akostadi at redhat.com (Aleksandar Kostadinov) Date: Wed, 18 Jun 2014 12:07:03 +0300 Subject: [wildfly-dev] Handling history with the build split In-Reply-To: <53A0DC8D.9080201@gmail.com> References: <53A0DC8D.9080201@gmail.com> Message-ID: <53A156B7.6080304@redhat.com> Stuart Douglas wrote, On 06/18/2014 03:25 AM (EEST): > Hi all, > > So something we have been thinking about is how best to handle history > when doing the split, there were three obvious options that we looked at: > > 1) Copy the repo for each split, and just delete what is no longer needed. > > This means that you have full history, however each repo is 130Mb+, so > once you have 4 or more repos you are looking at a very large amount of > data to checkout a full WF server, which will probably put off potential > contributors. FYI wrt #1, git clone --reference can help reduce the size for people that clone more of the repos. I agree though that perhaps #3 would be simpler and would work for most people. From stuart.w.douglas at gmail.com Wed Jun 18 08:13:25 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Wed, 18 Jun 2014 06:13:25 -0600 Subject: [wildfly-dev] Handling history with the build split In-Reply-To: <53A150D8.7070008@jboss.com> References: <53A0DC8D.9080201@gmail.com> <53A150D8.7070008@jboss.com> Message-ID: <53A18265.6050309@gmail.com> Darran Lofthouse wrote: > Does the modified history contain enough information to do 'git > annotate' - personally that is my main use of history to find out why on > earth a line of code exists ;-) Yes, that will still exist.
What may be lost are modifications to other files that were done in the same commit but that are now part of a different repo (although you will still be able to just look them up in the old repo). Stuart > > Having said that git is not the best at preserving history after > refactoring and we have been able to cope with that in numerous places. > > Regards, > Darran Lofthouse. > > On 18/06/14 01:25, Stuart Douglas wrote: >> Hi all, >> >> So something we have been thinking about is how best to handle history >> when doing the split, there were three obvious options that we looked at: >> >> 1) Copy the repo for each split, and just delete what is no longer needed. >> >> This means that you have full history, however each repo is 130Mb+, so >> once you have 4 or more repos you are looking at a very large amount of >> data to checkout a full WF server, which will probably put off potential >> contributors. >> >> 2) Clean break >> >> Copy the files into a core repo, and just have a clean break, which >> means that if you want to view history you will need to refer to the >> original WF repo, which is not great (I know I use history and git >> annotate a lot). This also means that it looks like the whole server was >> added by one person in a single commit, which is not great. >> >> 3) git filter-branch >> >> We use the filter-branch command to filter out all non-relevant history. >> This means most history is intact, however you do lose the full context >> of the commit if it modified subsystems that have been moved into >> separate repos. >> >> It looks like it should be possible to append the old commit sha to the >> message on a new line, which will make it easy to look up the old commit >> if you need to see the full context.
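The "filter out the subtree and append the old sha to the message" idea can be sketched with filter-branch's `--msg-filter`. This is a toy illustration only; the directory names and the `Old-sha:` trailer are invented, and the real split tooling may differ:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q wf && cd wf
git config user.email dev@example.com && git config user.name Dev
mkdir core undertow
echo a > core/a && echo b > undertow/b
git add . && git commit -qm "initial layout"
echo a2 >> core/a && git add . && git commit -qm "core change"

# Keep only the core/ subtree, and record each original commit id in the
# rewritten message ($GIT_COMMIT is set by filter-branch for each commit).
export FILTER_BRANCH_SQUELCH_WARNING=1   # newer git warns before running filter-branch
git filter-branch -f --subdirectory-filter core \
    --msg-filter 'cat && echo && echo "Old-sha: $GIT_COMMIT"' HEAD

git log --format=%B | grep Old-sha
```

With a trailer like this in every rewritten message, `git log`/`git annotate` in the small repo always point back at the commit in the original repo that carries the full context.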
>> >> >> The interesting thing is that options 2) and 3) can also be used with a >> little known command called 'git replace' (which is basically a >> replacement for grafts), to basically graft the full history over the >> top of the truncated/rewritten history. Basically if you care about the >> full history you will be able to run a script, and it will graft the >> complete WF history into your repo, so any command that works with >> history will show each commit in its entirety. >> >> I think we should use option 3) combined with 'git replace'. This means >> that the repo will be much smaller, and contain a rewritten version of >> history that should be enough for most people. If anyone needs a full >> history (e.g. when doing backporting) all that will be required is >> running a simple script to replace the fake history with the real thing. >> >> Comments? >> >> Stuart >> >> >> >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From mmusgrov at redhat.com Wed Jun 18 10:12:31 2014 From: mmusgrov at redhat.com (Michael Musgrove) Date: Wed, 18 Jun 2014 15:12:31 +0100 Subject: [wildfly-dev] Is there a way to avoid multiple singleton masters Message-ID: <53A19E4F.2090902@redhat.com> I'd like to have an option of running our transaction recovery manager as an HA singleton. WFLY-68 implies that the master can run at most once (even in the presence of a network partition) but I don't see how we can guarantee that. If I hold the service start in a breakpoint then split the network and then allow another master on the other side of the partition to become master and then release the breakpoint we will briefly have two services running.
I know using breakpoints is invalid but surely there are timing windows where the same outcome could conceivably occur. Mike -- Michael Musgrove Transactions Team e: mmusgrov at redhat.com t: +44 191 243 0870 Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson (US), Michael O'Neill(Ireland) From paul.ferraro at redhat.com Wed Jun 18 10:52:18 2014 From: paul.ferraro at redhat.com (Paul Ferraro) Date: Wed, 18 Jun 2014 10:52:18 -0400 (EDT) Subject: [wildfly-dev] Is there a way to avoid multiple singleton masters In-Reply-To: <53A19E4F.2090902@redhat.com> References: <53A19E4F.2090902@redhat.com> Message-ID: <1312761307.345537.1403103138680.JavaMail.zimbra@redhat.com> WFLY-68 adds the ability to specify a quorum required for a singleton service to start, i.e. a partition must have at least Q members in order for a singleton master election to take place. In general, the quorum value should be int(N/2)+1, where N is the cluster size. e.g. If your cluster size is 3, it would be advisable to set the quorum size to 2. You can find an example of building a singleton service with a specified quorum here: https://github.com/wildfly/wildfly/blob/master/testsuite/integration/clust/src/test/java/org/jboss/as/test/clustering/cluster/singleton/service/MyServiceActivator.java This of course means that any partition that does not have at least Q members will not have a transaction recovery manager until the partition is healed. ----- Original Message ----- > From: "Michael Musgrove" > To: wildfly-dev at lists.jboss.org > Sent: Wednesday, June 18, 2014 10:12:31 AM > Subject: [wildfly-dev] Is there a way to avoid multiple singleton masters > > I'd like to have an option of running our transaction recovery manager > as an HA singleton. WFLY-68 implies that the master can run at most once > (even in the presence of a network partition) but I don't see how we can > guarantee that.
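The int(N/2)+1 rule Paul describes can be written as a tiny helper to make its property visible: with this quorum, two disjoint partitions of an N-node cluster can never both reach it, so at most one side can elect a master. (Illustrative only; in WildFly the quorum is configured on the singleton service, not computed by hand.)

```shell
# Smallest partition size that may elect a singleton master,
# using integer division: q = N/2 + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # -> 2
quorum 4   # -> 3
quorum 5   # -> 3
```

For N=4, q=3, so a 2/2 split leaves neither side with quorum; availability is traded for the at-most-one-master guarantee, which is the trade-off the rest of this thread is discussing.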
If I hold the service start in a breakpoint then split > the network and then allow another master on the other side of the > partition to become master and then release the breakpoint we will > briefly have two services running. I know using breakpoints is invalid > but surely there are timing windows where the same outcome could > conceivably occur. > > Mike > > -- > Michael Musgrove > Transactions Team > e: mmusgrov at redhat.com > t: +44 191 243 0870 > > Registered in England and Wales under Company Registration No. 03798903 > Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson (US), > Michael O'Neill(Ireland) > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From mmusgrov at redhat.com Wed Jun 18 11:10:47 2014 From: mmusgrov at redhat.com (Michael Musgrove) Date: Wed, 18 Jun 2014 16:10:47 +0100 Subject: [wildfly-dev] Is there a way to avoid multiple singleton masters In-Reply-To: <1312761307.345537.1403103138680.JavaMail.zimbra@redhat.com> References: <53A19E4F.2090902@redhat.com> <1312761307.345537.1403103138680.JavaMail.zimbra@redhat.com> Message-ID: <53A1ABF7.7040607@redhat.com> Thanks Paul. I would also like to know how we can guarantee at-most-once semantics given the scenario I described whereby two masters could be elected in the event of a network partition. And isn't there also a window where the network partitions with the old master running on the minority side and a new master is elected on the majority side before the old master is stopped - how do you handle that race? Mike > WFLY-68 adds the ability to specify a quorum required for a singleton service to start, i.e. a partition must have at least Q members in order for a singleton master election to take place. In general, the quorum value should be int(N/2)+1, where N is the cluster size. e.g.
If your cluster size is 3, it would be advisable to set the quorum size to 2. > You can find an example of building a singleton service with a specified quorum here: > https://github.com/wildfly/wildfly/blob/master/testsuite/integration/clust/src/test/java/org/jboss/as/test/clustering/cluster/singleton/service/MyServiceActivator.java > > This of course means that any partition that does not have at least Q members will not have a transaction recovery manager until the partition is healed. > > ----- Original Message ----- >> From: "Michael Musgrove" >> To: wildfly-dev at lists.jboss.org >> Sent: Wednesday, June 18, 2014 10:12:31 AM >> Subject: [wildfly-dev] Is there a way to avoid multiple singleton masters >> >> I'd like to have an option of running our transaction recovery manager >> as an HA singleton. WFLY-68 implies that the master can run at most once >> (even in the presence of a network partition) but I don't see how we can >> guarantee that. If I hold the service start in a breakpoint then split >> the network and then allow another master on the other side of the >> partition to become master and then release the breakpoint we will >> briefly have two services running. I know using breakpoints is invalid >> but surely there are timing windows where the same outcome could >> conceivably occur. >> >> Mike >> >> -- >> Michael Musgrove >> Transactions Team >> e: mmusgrov at redhat.com >> t: +44 191 243 0870 >> >> Registered in England and Wales under Company Registration No. 03798903 >> Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson (US), >> Michael O'Neill(Ireland) >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> -- Michael Musgrove Transactions Team e: mmusgrov at redhat.com t: +44 191 243 0870 Registered in England and Wales under Company Registration No.
03798903 Directors: Michael Cunningham (US), Charles Peters (US), Matt Parson (US), Michael O'Neill(Ireland) From frank.langelage at osnanet.de Wed Jun 18 18:51:27 2014 From: frank.langelage at osnanet.de (Frank Langelage) Date: Thu, 19 Jun 2014 00:51:27 +0200 Subject: [wildfly-dev] problem with JSF 2.2.7 Message-ID: <53A217EF.8010208@osnanet.de> After upgrade of current sources from github including patch to JSF 2.2.7 I get this: 19.06. 00:05:14,934 ERROR [io.undertow.request#handleFirstRequest] UT005023: Exception handling request to /web-maj2e-langfr-dev/Login.xhtml: java.lang.RuntimeException: java.lang.IllegalStateException at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:182) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.ServletFormAuthenticationMechanism.servePage(ServletFormAuthenticationMechanism.java:85) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.FormAuthenticationMechanism.sendChallenge(FormAuthenticationMechanism.java:158) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl$ChallengeSender.transition(SecurityContextImpl.java:330) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl$ChallengeSender.transition(SecurityContextImpl.java:349) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl$ChallengeSender.access$300(SecurityContextImpl.java:314) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl.sendChallenges(SecurityContextImpl.java:135) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:109) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:114) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at 
io.undertow.security.impl.SecurityContextImpl.authenticate(SecurityContextImpl.java:99) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:54) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:27) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:45) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:61) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:70) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at 
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:243) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:230) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:76) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:149) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.Connectors.executeRootHandler(Connectors.java:195) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:733) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_60] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_60] at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_60] Caused by: java.lang.IllegalStateException at com.sun.faces.context.FacesContextImpl.assertNotReleased(FacesContextImpl.java:705) [jsf-impl-2.2.7-jbossorg-1.jar:] at com.sun.faces.context.FacesContextImpl.getAttributes(FacesContextImpl.java:237) [jsf-impl-2.2.7-jbossorg-1.jar:] at org.richfaces.context.ExtendedPartialViewContext.setInstance(ExtendedPartialViewContext.java:55) [richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] at org.richfaces.context.ExtendedPartialViewContext.release(ExtendedPartialViewContext.java:64) 
[richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] at org.richfaces.context.ExtendedPartialViewContextImpl.release(ExtendedPartialViewContextImpl.java:473) [richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] at com.sun.faces.context.FacesContextImpl.release(FacesContextImpl.java:591) [jsf-impl-2.2.7-jbossorg-1.jar:] at javax.faces.webapp.FacesServlet.service(FacesServlet.java:665) [jboss-jsf-api_2.2_spec-2.2.7.jar:2.2.7] at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:61) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:232) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:175) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:159) [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] ... 
32 more When adding -Dversion.com.sun.faces=2.2.6-jbossorg-4 -Dversion.org.jboss.spec.javax.faces.jboss-jsf-api_2.2_spec=2.2.6 for build to get the prior JSF version, my web application works without problems. From qutpeter at gmail.com Thu Jun 19 00:30:31 2014 From: qutpeter at gmail.com (Peter Cai) Date: Thu, 19 Jun 2014 14:30:31 +1000 Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: <53A0B99D.90904@gmail.com> References: <539F2B97.9000308@redhat.com> <53A0B99D.90904@gmail.com> Message-ID: Hi Stuart, Any plan to support dynamic provisioning (i.e., one or a group of subsystems can be added via the CLI rather than configuring subsystems in the xml file before the server is started) on Wildfly? Regards, On Wed, Jun 18, 2014 at 7:56 AM, Stuart Douglas wrote: > I have created a wiki doc with details of the new build process. This > document will be updated as work progresses. > > https://community.jboss.org/wiki/WildflyBuildProcess > > Stuart > > Stuart Douglas wrote: > > Hi all, > > > > The first patch of many for the build split has been merged. This > > introduces a few changes, the most obvious of which is that the server > > that is produced in the build dir is now a 'thin' server, that uses jars > > directly from the local maven directory, and the full traditional server > > is now built in the 'dist' directory. > > > > > > Stuart > > > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev >
From stuart.w.douglas at gmail.com Thu Jun 19 09:10:12 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 19 Jun 2014 09:10:12 -0400 Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: References: <539F2B97.9000308@redhat.com> <53A0B99D.90904@gmail.com> Message-ID: <53A2E134.5070404@gmail.com> Yes, that is one of the goals of the build split. Eventually the existing maven plugin will be split into a provisioning tool and a build tool. Stuart Peter Cai wrote: > Hi Stuart, > Any plan to support dynamic provisioning (i.e., one or a group of > subsystems can be added via the CLI rather than configuring subsystems > in the xml file before the server is started) on Wildfly? > Regards, > > > On Wed, Jun 18, 2014 at 7:56 AM, Stuart Douglas > > wrote: > > I have created a wiki doc with details of the new build process. This > document will be updated as work progresses. > > https://community.jboss.org/wiki/WildflyBuildProcess > > Stuart > > Stuart Douglas wrote: > > Hi all, > > > > The first patch of many for the build split has been merged. This > > introduces a few changes, the most obvious of which is that the > server > > that is produced in the build dir is now a 'thin' server, that > uses jars > > directly from the local maven directory, and the full traditional > server > > is now built in the 'dist' directory.
> > > > Stuart > > > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > From brian.stansberry at redhat.com Thu Jun 19 10:29:44 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 19 Jun 2014 09:29:44 -0500 Subject: [wildfly-dev] First patch for the build split has been merged In-Reply-To: <53A2E134.5070404@gmail.com> References: <539F2B97.9000308@redhat.com> <53A0B99D.90904@gmail.com> <53A2E134.5070404@gmail.com> Message-ID: <53A2F3D8.5020304@redhat.com> Peter, even now you shouldn't have to edit xml to add subsystems -- the CLI can do that. The build split and associated tooling should make it much easier to build up a server with the exact set of subsystems you want, but you shouldn't be *forced* to edit xml even today. On 6/19/14, 8:10 AM, Stuart Douglas wrote: > Yes, that is one of the goals of the build split. Eventually the > existing maven plugin will be split into a provisioning tool and a build > tool. > > Stuart > > Peter Cai wrote: >> Hi Stuart, >> Any plan to support dynamic provisioning (i.e., one or a group of >> subsystems can be added via the CLI rather than configuring subsystems >> in the xml file before the server is started) on Wildfly? >> Regards, >> >> >> On Wed, Jun 18, 2014 at 7:56 AM, Stuart Douglas >> > wrote: >> >> I have created a wiki doc with details of the new build process. This >> document will be updated as work progresses. >> >> https://community.jboss.org/wiki/WildflyBuildProcess >> >> Stuart >> >> Stuart Douglas wrote: >> > Hi all, >> > >> > The first patch of many for the build split has been merged.
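Brian's point about adding subsystems from the CLI can be illustrated with a jboss-cli session. This is a sketch only: the jdr subsystem is used as an arbitrary example, and the exact extension/subsystem names (plus any required attributes) depend on what you are adding:

```shell
$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /extension=org.jboss.as.jdr:add
[standalone@localhost:9990 /] /subsystem=jdr:add
[standalone@localhost:9990 /] :reload
```

The CLI persists these operations into the running configuration, so the xml file is updated by the server rather than edited by hand.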
This >> > introduces a few changes, the most obvious of which is that the >> server >> > that is produced in the build dir is now a 'thin' server, that >> uses jars >> > directly from the local maven directory, and the full traditional >> server >> > is now built in the 'dist' directory. >> > >> > >> > Stuart >> > >> > >> > _______________________________________________ >> > wildfly-dev mailing list >> > wildfly-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From fjuma at redhat.com Thu Jun 19 11:21:29 2014 From: fjuma at redhat.com (Farah Juma) Date: Thu, 19 Jun 2014 11:21:29 -0400 (EDT) Subject: [wildfly-dev] problem with JSF 2.2.7 In-Reply-To: <53A217EF.8010208@osnanet.de> References: <53A217EF.8010208@osnanet.de> Message-ID: <1053355220.20409600.1403191289741.JavaMail.zimbra@redhat.com> Thanks for reporting this issue, Frank. This is a bug in RichFaces that was introduced by this recent Mojarra fix: https://java.net/jira/browse/JAVASERVERFACES-3203 The RichFaces ExtendedPartialViewContext class just needs to be updated accordingly. Please create a RichFaces JIRA issue to track this. Thanks! Farah ----- Original Message ----- > From: "Frank Langelage" > To: wildfly-dev at lists.jboss.org > Sent: Wednesday, June 18, 2014 6:51:27 PM > Subject: [wildfly-dev] problem with JSF 2.2.7 > > After upgrade of current sources from github including patch to JSF > 2.2.7 I get this: > 19.06. 
00:05:14,934 ERROR [io.undertow.request#handleFirstRequest] > UT005023: Exception handling request to > /web-maj2e-langfr-dev/Login.xhtml: java.lang.RuntimeException: > java.lang.IllegalStateException > at > io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:182) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.ServletFormAuthenticationMechanism.servePage(ServletFormAuthenticationMechanism.java:85) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.FormAuthenticationMechanism.sendChallenge(FormAuthenticationMechanism.java:158) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] at > io.undertow.security.impl.SecurityContextImpl$ChallengeSender.transition(SecurityContextImpl.java:330) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl$ChallengeSender.transition(SecurityContextImpl.java:349) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl$ChallengeSender.access$300(SecurityContextImpl.java:314) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl.sendChallenges(SecurityContextImpl.java:135) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:109) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:114) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.impl.SecurityContextImpl.authenticate(SecurityContextImpl.java:99) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:54) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > 
io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:27) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:45) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:61) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:70) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > 
[undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:243) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:230) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:76) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:149) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.Connectors.executeRootHandler(Connectors.java:195) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:733) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [rt.jar:1.7.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [rt.jar:1.7.0_60] > at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_60] > Caused by: java.lang.IllegalStateException > at > com.sun.faces.context.FacesContextImpl.assertNotReleased(FacesContextImpl.java:705) > [jsf-impl-2.2.7-jbossorg-1.jar:] > at > com.sun.faces.context.FacesContextImpl.getAttributes(FacesContextImpl.java:237) > [jsf-impl-2.2.7-jbossorg-1.jar:] > at > org.richfaces.context.ExtendedPartialViewContext.setInstance(ExtendedPartialViewContext.java:55) > [richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] > at > org.richfaces.context.ExtendedPartialViewContext.release(ExtendedPartialViewContext.java:64) > [richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] > at > org.richfaces.context.ExtendedPartialViewContextImpl.release(ExtendedPartialViewContextImpl.java:473) > [richfaces-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT] > at > 
com.sun.faces.context.FacesContextImpl.release(FacesContextImpl.java:591) > [jsf-impl-2.2.7-jbossorg-1.jar:] > at > javax.faces.webapp.FacesServlet.service(FacesServlet.java:665) > [jboss-jsf-api_2.2_spec-2.2.7.jar:2.2.7] > at > io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:82) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:61) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) > [undertow-core-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:232) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:175) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > at > io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:159) > [undertow-servlet-1.1.0.Beta2.jar:1.1.0.Beta2] > ... 32 more > > When adding > -Dversion.com.sun.faces=2.2.6-jbossorg-4 > -Dversion.org.jboss.spec.javax.faces.jboss-jsf-api_2.2_spec=2.2.6 > for build to get the the prior JSF-version, my web application works > without problems. 
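The `Caused by` chain above captures the whole problem: during `FacesContextImpl.release()`, the RichFaces `ExtendedPartialViewContext` calls back into `FacesContext.getAttributes()` after the context has already been marked released, so the `assertNotReleased()` check added by the Mojarra fix throws. The interaction can be reduced to a small self-contained sketch (hypothetical class and method names — not the actual Mojarra or RichFaces code):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of a context that forbids use after release,
// mirroring the assertNotReleased() behavior seen in the trace.
class MiniFacesContext {
    private boolean released;
    private final Map<Object, Object> attributes = new HashMap<>();

    Map<Object, Object> getAttributes() {
        if (released) {
            throw new IllegalStateException(); // what the Mojarra fix now enforces
        }
        return attributes;
    }

    boolean isReleased() { return released; }
    void release() { released = true; }
}

// Hypothetical stand-in for the RichFaces release logic: the buggy
// version touches the attribute map unconditionally during release;
// the fixed version checks the context state first.
class MiniPartialViewContext {
    static void releaseBuggy(MiniFacesContext ctx) {
        ctx.getAttributes().remove("pvc"); // throws if ctx is already released
    }

    static void releaseFixed(MiniFacesContext ctx) {
        if (!ctx.isReleased()) {           // guard avoids the IllegalStateException
            ctx.getAttributes().remove("pvc");
        }
    }
}
```

The RichFaces-side fix Farah describes would amount to something like the guard in `releaseFixed()` — not touching the attribute map once the context has been released — though the exact change belongs in the RichFaces JIRA.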
> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From smarlow at redhat.com Fri Jun 20 06:44:46 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 20 Jun 2014 06:44:46 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... In-Reply-To: <539C5EE8.6060802@gmail.com> References: <539B485A.2040507@redhat.com> <539C5EE8.6060802@gmail.com> Message-ID: <53A4109E.3000103@redhat.com> > > It should not matter if it is an async start of not, the phase should > not advance until the service is actually started. This just uses normal > MSC service deps, and given that we use async start a fair bit it would > surprise me if it was at fault here. > > Stuart > Thanks Stuart. Is there a way to get EAR subdeployments to run in the same phase? Seems like the .war subdeployment is reaching the POST_MODULE PHASE before the .jar subdeployment does. I looked at updated TRACE logging output on the forum thread [1]. This is for an ear deployment, that contains an EJB jar 'tcApp.jar' with entity classes and a WEB jar that contains managed beans that the JSFManagedBeanProcessor will load. The managed bean classes reference the entity classes. [2] shows that we are first starting the persistence unit (via PersistenceProvider.createContainerEntityManagerFactory) for the tcApp.jar subdeployment. However, the tcWeb.war subdeployment gets to a later phase (POST_MODULE) [3] before the PersistenceUnitServiceImpl has called StartContext.complete(). In the POST_MODULE [3] JSFManagedBeanProcessor loads the referenced managed beans, which loads the entity classes (see [4] which shows one of the entities being loaded). By the time that PersistenceUnitServiceImpl has called StartContext.complete() [5], several entity classes are already loaded. 
Scott [1] https://community.jboss.org/thread/241821 [2] 2014-06-19 14:13:54,873 TRACE [org.jboss.as.jpa] (ServerService Thread Pool -- 57) calling createContainerEntityManagerFactory for pu=EnterpriseApplication1.ear/tcApp.jar#MyPU with integration properties={eclipselink.archive.factory=org.jipijapa.eclipselink.JBossArchiveFactoryImpl, eclipselink.target-server=org.jipijapa.eclipselink.JBossAS7ServerPlatform, eclipselink.logging.logger=org.jipijapa.eclipselink.JBossLogger}, application properties={eclipselink.jpa.uppercase-column-names=true, eclipselink.cache.size.default=5000, eclipselink.logging.level=FINE, eclipselink.logging.logger=JavaLogger} [3] 2014-06-19 14:13:55,408 INFO [org.jboss.as.jsf] (MSC service thread 1-10) !!!JSFManagedBeanProcessor deployment phase=POST_MODULE, managedBean=tc.web.controller.settings.OrganizationConfigController, module classloader=ModuleClassLoader for Module "deployment.EnterpriseApplication1.ear.tcWeb.war:main" from Service Module Loader [4] 2014-06-19 14:13:55,434 TRACE [org.jboss.as.jpa] (MSC service thread 1-10) !!!JPADelegatingClassFileTransformer.transform tc/entities/type/Attribute [5] 2014-06-19 14:13:58,014 TRACE [org.jboss.as.jpa] (ServerService Thread Pool -- 57) !!!PersistenceUnitServiceImpl startcontext marked complete From stuart.w.douglas at gmail.com Fri Jun 20 09:25:16 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Fri, 20 Jun 2014 09:25:16 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... In-Reply-To: <53A4109E.3000103@redhat.com> References: <539B485A.2040507@redhat.com> <539C5EE8.6060802@gmail.com> <53A4109E.3000103@redhat.com> Message-ID: <53A4363C.7010603@gmail.com> >> > > Thanks Stuart. Is there a way to get EAR subdeployments to run in the > same phase? Seems like the .war subdeployment is reaching the > POST_MODULE PHASE before the .jar subdeployment does. 
So they do actually run in the same phase, kinda. With an ear first the top level deployment runs a phase, and then all sub deployments run the same phase in parallel. I think the problem here is that NEXT_PHASE_DEPS only applies to the current deployment. So all the modules will run FIRST_MODULE_USE, but only the jar will actually wait for the PU to be installed before running post module, the top level and war deployments will run POST_MODULE to completion (but nothing more will happen until the jar deployment has also run POST_MODULE). I think you need to make sure that every deployment/subdeployment ends up with the NEXT_PHASE_DEP set up. Stuart > > I looked at updated TRACE logging output on the forum thread [1]. This > is for an ear deployment, that contains an EJB jar 'tcApp.jar' with > entity classes and a WEB jar that contains managed beans that the > JSFManagedBeanProcessor will load. The managed bean classes reference > the entity classes. > > [2] shows that we are first starting the persistence unit (via > PersistenceProvider.createContainerEntityManagerFactory) for the > tcApp.jar subdeployment. However, the tcWeb.war subdeployment gets to a > later phase (POST_MODULE) [3] before the PersistenceUnitServiceImpl has > called StartContext.complete(). In the POST_MODULE [3] > JSFManagedBeanProcessor loads the referenced managed beans, which loads > the entity classes (see [4] which shows one of the entities being loaded). > > By the time that PersistenceUnitServiceImpl has called > StartContext.complete() [5], several entity classes are already loaded. 
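Stuart's point — that a NEXT_PHASE_DEP gates only the deployment it is attached to, so sibling subdeployments can reach POST_MODULE while the jar is still waiting on the persistence unit service — can be illustrated with a toy simulation (this is only a model of the scheduling behavior, not the real MSC/deployer code; the deployment and service names are taken from the trace above):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy simulation: phases are integers (0 = FIRST_MODULE_USE,
// 1 = POST_MODULE, ...). A "next-phase dep" blocks only the
// deployment it is attached to, as described above.
final class DupPhaseSim {
    final Map<String, Integer> phase = new HashMap<>();
    final Map<String, String> nextPhaseDep = new HashMap<>(); // deployment -> required service
    final Set<String> startedServices = new HashSet<>();

    void addDeployment(String name) { phase.put(name, 0); }

    // One scheduling pass: advance every deployment whose dep (if any) is up.
    void tick() {
        for (Map.Entry<String, Integer> e : phase.entrySet()) {
            String dep = nextPhaseDep.get(e.getKey());
            if (dep == null || startedServices.contains(dep)) {
                e.setValue(e.getValue() + 1);
            }
        }
    }
}
```

With the dep attached only to `tcApp.jar`, the war advances past it; attaching the same dep to every deployment/subdeployment (Stuart's suggestion) keeps them in lockstep until the PU service has started.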
> > > Scott > > > [1] https://community.jboss.org/thread/241821 > > [2] 2014-06-19 14:13:54,873 TRACE [org.jboss.as.jpa] (ServerService > Thread Pool -- 57) calling createContainerEntityManagerFactory for > pu=EnterpriseApplication1.ear/tcApp.jar#MyPU with integration > properties={eclipselink.archive.factory=org.jipijapa.eclipselink.JBossArchiveFactoryImpl, > eclipselink.target-server=org.jipijapa.eclipselink.JBossAS7ServerPlatform, > eclipselink.logging.logger=org.jipijapa.eclipselink.JBossLogger}, > application properties={eclipselink.jpa.uppercase-column-names=true, > eclipselink.cache.size.default=5000, eclipselink.logging.level=FINE, > eclipselink.logging.logger=JavaLogger} > > > [3] 2014-06-19 14:13:55,408 INFO [org.jboss.as.jsf] (MSC service thread > 1-10) !!!JSFManagedBeanProcessor deployment phase=POST_MODULE, > managedBean=tc.web.controller.settings.OrganizationConfigController, > module classloader=ModuleClassLoader for Module > "deployment.EnterpriseApplication1.ear.tcWeb.war:main" from Service > Module Loader > > [4] 2014-06-19 14:13:55,434 TRACE [org.jboss.as.jpa] (MSC service thread > 1-10) !!!JPADelegatingClassFileTransformer.transform > tc/entities/type/Attribute > > [5] 2014-06-19 14:13:58,014 TRACE [org.jboss.as.jpa] (ServerService > Thread Pool -- 57) !!!PersistenceUnitServiceImpl startcontext marked > complete > From smarlow at redhat.com Fri Jun 20 09:52:44 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 20 Jun 2014 09:52:44 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... In-Reply-To: <53A4363C.7010603@gmail.com> References: <539B485A.2040507@redhat.com> <539C5EE8.6060802@gmail.com> <53A4109E.3000103@redhat.com> <53A4363C.7010603@gmail.com> Message-ID: <53A43CAC.70508@redhat.com> On 06/20/2014 09:25 AM, Stuart Douglas wrote: > >>> >> >> Thanks Stuart. Is there a way to get EAR subdeployments to run in the >> same phase? 
Seems like the .war subdeployment is reaching the >> POST_MODULE PHASE before the .jar subdeployment does. > > So they do actually run in the same phase, kinda. With an ear first the > top level deployment runs a phase, and then all sub deployments run the > same phase in parallel. > > I think the problem here is that NEXT_PHASE_DEPS only applies to the > current deployment. So all the modules will run FIRST_MODULE_USE, but > only the jar will actually wait for the PU to be installed before > running post module, the top level and war deployments will run > POST_MODULE to completion (but nothing more will happen until the jar > deployment has also run POST_MODULE). > > I think you need to make sure that every deployment/subdeployment ends > up with the NEXT_PHASE_DEP set up. Ah, excellent solution! Thanks Stuart! > > Stuart > From smarlow at redhat.com Fri Jun 20 10:05:32 2014 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 20 Jun 2014 10:05:32 -0400 Subject: [wildfly-dev] does Attachments.NEXT_PHASE_DEPS wait for async services or does the next DUP phase start as soon as the service.start returns... In-Reply-To: <53A43CAC.70508@redhat.com> References: <539B485A.2040507@redhat.com> <539C5EE8.6060802@gmail.com> <53A4109E.3000103@redhat.com> <53A4363C.7010603@gmail.com> <53A43CAC.70508@redhat.com> Message-ID: <53A43FAC.60006@redhat.com> On 06/20/2014 09:52 AM, Scott Marlow wrote: > On 06/20/2014 09:25 AM, Stuart Douglas wrote: >> >>>> >>> >>> Thanks Stuart. Is there a way to get EAR subdeployments to run in the >>> same phase? Seems like the .war subdeployment is reaching the >>> POST_MODULE PHASE before the .jar subdeployment does. >> >> So they do actually run in the same phase, kinda. With an ear first the >> top level deployment runs a phase, and then all sub deployments run the >> same phase in parallel. >> >> I think the problem here is that NEXT_PHASE_DEPS only applies to the >> current deployment. 
So all the modules will run FIRST_MODULE_USE, but >> only the jar will actually wait for the PU to be installed before >> running post module, the top level and war deployments will run >> POST_MODULE to completion (but nothing more will happen until the jar >> deployment has also run POST_MODULE). >> >> I think you need to make sure that every deployment/subdeployment ends >> up with the NEXT_PHASE_DEP set up. > > Ah, excellent solution! Thanks Stuart! I created WFLY-3531 for this. > >> >> Stuart >> > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From manderse at redhat.com Sat Jun 21 18:10:14 2014 From: manderse at redhat.com (Max Rydahl Andersen) Date: Sun, 22 Jun 2014 00:10:14 +0200 Subject: [wildfly-dev] JBoss Modules support Message-ID: <32BD72FB-A509-498C-95F7-C3ED283301A2@redhat.com> Hey guys, Just wanted to let you know that Rob Stryker added support in our eclipse tooling for grokking the new multi-layered module+patching system. Thus unless you go changing this layout again we should be set for a while. But possibly even more interesting for you to know is that we now support setting up the class path for your project so that it will honor 'Dependencies' in your manifest.mf file. That means as long as you have a local wildfly install and use it as the target for your project we will pick up the binaries from that server based on module id. In short: you can start doing development against wildfly without having to configure any maven, ant, gradle etc.
It will just work :) /max http://about.me/maxandersen From brian.stansberry at redhat.com Mon Jun 23 14:20:07 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 23 Jun 2014 13:20:07 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements Message-ID: <53A86FD7.6020502@redhat.com> As we continue with our work splitting the WildFly code base into a core repo and then separate repos related to sets of features, we need to solidify the contracts between the various features and between features and the core. I've taken a crack at an initial design document on this: see [1]. We also need to do the practical work of identifying the various dependencies between our existing subsystems, see [2] for a start on that. I'd love to get feedback on this thread regarding the proposed design, as well as get direct edits on the [2] doc to flesh out the existing relationships. Short version: A capability is a set of functionality that becomes available when a user triggers it by including some configuration resource or attribute in the management model. We'll identify capabilities via two strings: the name of the providing subsystem and then a capability name. A null subsystem name means a core capability; a null capability name means the base capability of the subsystem. Capabilities will also declare the identifiers of any other capabilities they require. There are two use cases for this capability/requirement data: provisioning (hopefully) and runtime. Hopefully this information can be used by provisioning tooling when building up a configuration document for the server/domain it is provisioning. So instead of always including a stock configuration for a subsystem, allow the user to tailor it a bit by declaring what capabilities are required.
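The two-string identification scheme described above could be modeled as a small value type — a sketch for discussion only, not code from the design doc:

```java
import java.util.Objects;

// Hypothetical value type for the proposed two-part capability
// identifier. A null subsystem means a core capability; a null
// capability name means the subsystem's base capability.
final class CapabilityId {
    private final String subsystem;   // null => core capability
    private final String name;        // null => base capability of the subsystem

    CapabilityId(String subsystem, String name) {
        this.subsystem = subsystem;
        this.name = name;
    }

    boolean isCore() { return subsystem == null; }
    boolean isBase() { return name == null; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CapabilityId)) return false;
        CapabilityId other = (CapabilityId) o;
        return Objects.equals(subsystem, other.subsystem)
                && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() { return Objects.hash(subsystem, name); }

    @Override
    public String toString() {
        return (subsystem == null ? "<core>" : subsystem)
                + "/" + (name == null ? "<base>" : name);
    }
}
```

Under this model, `new CapabilityId("infinispan", null)` would denote the base capability of the infinispan subsystem, and `new CapabilityId(null, "path-manager")` a core capability (both names are made up for illustration).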
At runtime, when the configuration model is updated in such a way that a capability is now required, the OSH that handles that update will register the capability with the management layer in the MODEL stage. At the end of the MODEL stage the management layer will resolve all provided and required capabilities, failing the op if anything required is unavailable. Thereafter, in the RUNTIME stage an OSH that needs a capability from another subsystem or the core can request an object implementing the API for that capability from the OperationContext. I've thought a lot more about the runtime use case than I have about the provisioning use case. [1] https://community.jboss.org/docs/DOC-52712 [2] https://community.jboss.org/docs/DOC-52700 -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From david.lloyd at redhat.com Mon Jun 23 15:52:57 2014 From: david.lloyd at redhat.com (David M. Lloyd) Date: Mon, 23 Jun 2014 14:52:57 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <53A86FD7.6020502@redhat.com> References: <53A86FD7.6020502@redhat.com> Message-ID: <53A88599.1090003@redhat.com> On 06/23/2014 01:20 PM, Brian Stansberry wrote: > As we continue with our work splitting the WildFly code base into a core > repo and then separate repos related to sets of features that we need to > solidify the contracts between the various features and between features > and the core. > > I've taken a crack at an initial design document on this: see [1]. We > also need to do the practical work of identifying the various > dependencies between our existing subsystems, see [2] for a start on that. > > I'd love to get feedback on this thread regarding the proposed design, > as well as get direct edits on the [2] doc to flesh out the existing > relationships. Here is what jumps out at me at first. • I don't understand the reason to not allow optional dependencies on capabilities.
It would be of similar implementation complexity to the suggested permutation implementation, however it would avoid the problem of requiring 2ⁿ permutations for n optional dependencies. • I don't understand the purpose of binding capability identifiers to subsystem identifiers. It seems plausible to have a subsystem provide, for example, two capabilities now, but allow for a capability to be implemented separately in the future. A concrete example would be the way we ultimately moved Servlet from JBoss Web to Undertow. Ideally we'd only ever depend on the capability, making subsystems completely interchangeable. • I do think we should set up capabilities for things in the core model (outside subsystems), especially as we seem to always increase the number of things that can't be in subsystems (a trend I hope we can reverse in the future). • Can you define what having a capability be "provided by default" means? Or perhaps more aptly, what it means to *not* be provided by default? • What about uniqueness? Can/should we enforce that each capability is only ever satisfied by one subsystem? > [1] https://community.jboss.org/docs/DOC-52712 > > [2] https://community.jboss.org/docs/DOC-52700 > -- - DML From jason.greene at redhat.com Mon Jun 23 16:31:04 2014 From: jason.greene at redhat.com (Jason Greene) Date: Mon, 23 Jun 2014 15:31:04 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <53A88599.1090003@redhat.com> References: <53A86FD7.6020502@redhat.com> <53A88599.1090003@redhat.com> Message-ID: <7630471E-0B96-44E7-9C0F-2C92C5D9FFE7@redhat.com> On Jun 23, 2014, at 2:52 PM, David M. Lloyd wrote: > On 06/23/2014 01:20 PM, Brian Stansberry wrote: >> As we continue with our work splitting the WildFly code base into a core >> repo and then separate repos related to sets of features that we need to >> solidify the contracts between the various features and between features >> and the core.
>> >> I've taken a crack at an initial design document on this: see [1]. We >> also need to do the practical work of identifying the various >> dependencies between our existing subsystems, see [2] for a start on that. >> >> I'd love to get feedback on this thread regarding the proposed design, >> as well as get direct edits on the [2] doc to flesh out the existing >> relationships. > > Here is what jumps out at me at first. > > • I don't understand the reason to not allow optional dependencies on > capabilities. It would be of similar implementation complexity to the > suggested permutation implementation, however it would avoid the problem > of requiring 2ⁿ permutations for n optional dependencies. I had the same thought. > > • I don't understand the purpose of binding capability identifiers to > subsystem identifiers. It seems plausible to have a subsystem provide, > for example, two capabilities now, but allow for a capability to be > implemented separately in the future. A concrete example would be the > way we ultimately moved Servlet from JBoss Web to Undertow. Ideally > we'd only ever depend on the capability, making subsystems completely > interchangeable. I think Brian was trying to allow for thirdparty subsystems to potentially collaborate without a global registration (like we have with Phases). In practice I would imagine all of our out of the box subsystems would be using null. Alternatively we could just use some kind of ad-hoc string as a group name or something. -- Jason T.
Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From kabir.khan at jboss.com Mon Jun 23 16:38:11 2014 From: kabir.khan at jboss.com (Kabir Khan) Date: Mon, 23 Jun 2014 21:38:11 +0100 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <7630471E-0B96-44E7-9C0F-2C92C5D9FFE7@redhat.com> References: <53A86FD7.6020502@redhat.com> <53A88599.1090003@redhat.com> <7630471E-0B96-44E7-9C0F-2C92C5D9FFE7@redhat.com> Message-ID: <65B2411F-733A-4C3C-BFC7-19223046CABC@jboss.com> On 23 Jun 2014, at 21:31, Jason Greene wrote: > > On Jun 23, 2014, at 2:52 PM, David M. Lloyd wrote: > >> On 06/23/2014 01:20 PM, Brian Stansberry wrote: >>> As we continue with our work splitting the WildFly code base into a core >>> repo and then separate repos related to sets of features that we need to >>> solidify the contracts between the various features and between features >>> and the core. >>> >>> I've taken a crack at an initial design document on this: see [1]. We >>> also need to do the practical work of identifying the various >>> dependencies between our existing subsystems, see [2] for a start on that. >>> >>> I'd love to get feedback on this thread regarding the proposed design, >>> as well as get direct edits on the [2] doc to flesh out the existing >>> relationships. >> >> Here is what jumps out at me at first. >> >> • I don't understand the reason to not allow optional dependencies on >> capabilities. It would be of similar implementation complexity to the >> suggested permutation implementation, however it would avoid the problem >> of requiring 2ⁿ permutations for n optional dependencies. > > I had the same thought. I don't really understand what you two are getting at here. What would an optional requirement be? I can't really get my head around "I would like this to be there but I don't care if it isn't". > >> >> • I don't understand the purpose of binding capability identifiers to >> subsystem identifiers.
It seems plausible to have a subsystem provide, >> for example, two capabilities now, but allow for a capability to be >> implemented separately in the future. A concrete example would be the >> way we ultimately moved Servlet from JBoss Web to Undertow. Ideally >> we'd only ever depend on the capability, making subsystems completely >> interchangeable. > > I think Brian was trying to allow for thirdparty subsystems to potentially collaborate without a global registration (like we have with Phases). In practice I would imagine all of our out of the box subsystems would be using null. Alternatively we could just use some kind of ad-hoc string as a group name or something. > > -- > Jason T. Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jason.greene at redhat.com Mon Jun 23 16:43:51 2014 From: jason.greene at redhat.com (Jason Greene) Date: Mon, 23 Jun 2014 15:43:51 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <65B2411F-733A-4C3C-BFC7-19223046CABC@jboss.com> References: <53A86FD7.6020502@redhat.com> <53A88599.1090003@redhat.com> <7630471E-0B96-44E7-9C0F-2C92C5D9FFE7@redhat.com> <65B2411F-733A-4C3C-BFC7-19223046CABC@jboss.com> Message-ID: On Jun 23, 2014, at 3:38 PM, Kabir Khan wrote: > > On 23 Jun 2014, at 21:31, Jason Greene wrote: > >> >> On Jun 23, 2014, at 2:52 PM, David M. Lloyd wrote: >> >>> On 06/23/2014 01:20 PM, Brian Stansberry wrote: >>>> As we continue with our work splitting the WildFly code base into a core >>>> repo and then separate repos related to sets of features that we need to >>>> solidify the contracts between the various features and between features >>>> and the core. >>>> >>>> I've taken a crack at an initial design document on this: see [1]. 
We >>>> also need to do the practical work of identifying the various >>>> dependencies between our existing subsystems, see [2] for a start on that. >>>> >>>> I'd love to get feedback on this thread regarding the proposed design, >>>> as well as get direct edits on the [2] doc to flesh out the existing >>>> relationships. >>> >>> Here is what jumps out at me at first. >>> >>> • I don't understand the reason to not allow optional dependencies on >>> capabilities. It would be of similar implementation complexity to the >>> suggested permutation implementation, however it would avoid the problem >>> of requiring 2ⁿ permutations for n optional dependencies. >> >> I had the same thought. > I don't really understand what you two are getting at here. What would an optional requirement be? I can't really get my head around "I would like this to be there but I don't care if it isn't". A good example is Corba and JTS. Corba needs to be bootstrapped with a special interceptor if JTS is enabled. Corba's bootstrap could check some HAVE_JTS capability and if present add the interceptor. >> >>> >>> • I don't understand the purpose of binding capability identifiers to >>> subsystem identifiers. It seems plausible to have a subsystem provide, >>> for example, two capabilities now, but allow for a capability to be >>> implemented separately in the future. A concrete example would be the >>> way we ultimately moved Servlet from JBoss Web to Undertow. Ideally >>> we'd only ever depend on the capability, making subsystems completely >>> interchangeable. >> >> I think Brian was trying to allow for thirdparty subsystems to potentially collaborate without a global registration (like we have with Phases). In practice I would imagine all of our out of the box subsystems would be using null. Alternatively we could just use some kind of ad-hoc string as a group name or something. >> >> -- >> Jason T.
Greene >> WildFly Lead / JBoss EAP Platform Architect >> JBoss, a division of Red Hat >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From brian.stansberry at redhat.com Mon Jun 23 16:56:27 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 23 Jun 2014 15:56:27 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <53A88599.1090003@redhat.com> References: <53A86FD7.6020502@redhat.com> <53A88599.1090003@redhat.com> Message-ID: <53A8947B.6010300@redhat.com> On 6/23/14, 2:52 PM, David M. Lloyd wrote: > On 06/23/2014 01:20 PM, Brian Stansberry wrote: >> As we continue with our work splitting the WildFly code base into a core >> repo and then separate repos related to sets of features that we need to >> solidify the contracts between the various features and between features >> and the core. >> >> I've taken a crack at an initial design document on this: see [1]. We >> also need to do the practical work of identifying the various >> dependencies between our existing subsystems, see [2] for a start on that. >> >> I'd love to get feedback on this thread regarding the proposed design, >> as well as get direct edits on the [2] doc to flesh out the existing >> relationships. > > Here is what jumps out at me at first. > > • I don't understand the reason to not allow optional dependencies on > capabilities. It would be of similar implementation complexity to the > suggested permutation implementation, however it would avoid the problem > of requiring 2ⁿ permutations for n optional dependencies. > I knew that would draw some comment and was already backing off it a bit as I wrote various drafts. :) The main thing is, say the user declares they want capability A, which requires B and C.
Then they say they want capability C. Did they forget B or do they really not want it? In the runtime case, there's no problem. The user is actually providing a full configuration, and it's clear exactly what they need. In the provisioning case, it's a bit less clear what they want. But it's a valid requirement to say "you must declare B" if you want it. > • I don't understand the purpose of binding capability identifiers to > subsystem identifiers. It seems plausible to have a subsystem provide, > for example, two capabilities now, but allow for a capability to be > implemented separately in the future. A concrete example would be the > way we ultimately moved Servlet from JBoss Web to Undertow. Ideally > we'd only ever depend on the capability, making subsystems completely > interchangeable. > See your question about uniqueness. Associating these with subsystems provides a form of namespacing; without that we'd have to come up with something else. It's a valid point that we don't have to use association with subsystems to provide this though. Any other suggestions? > • I do think we should set up capabilities for things in the core model > (outside subsystems), especially as we seem to always increase the > number of things that can't be in subsystems (a trend I hope we can > reverse in the future). > Yes, I assume we will need to as well. > • Can you define what having a capability be "provided by default" > means? Or perhaps more aptly, what it means to *not* be provided by > default? > I'm not sure which of these you're asking about, or maybe something else. So I'll answer a couple things: "default. boolean indicating whether a capability is provided by default. Always true for a base capability (i.e. if a subsystem is present, its base capability is as well.) Usage would be for the provisioning use case described below, to reduce the amount of information a user would need to provide to get a typical configuration. TBD whether this makes sense."
Say an infinispan subsystem provides a base capability that is local caching only and also a clustered caching capability that requires jgroups. 98% of the time a user who wants infinispan wants clustered caching, so having it there by default saves a user who tells the provisioning tool they want the extension from also having to say they want that capability. The 2% user who doesn't want that capability would need to indicate that somehow in the spec they provide to the tool. "Subsystems are only required to register a base capability if it either depends on some other capability or exposes a runtime API. Otherwise, a default base capability will be registered automatically at the end of Stage.MODEL. (TBD: not certain if this default registration provides any value.)" Take a subsystem like jdr, which currently no one depends on. Does it need to declare that it provides the "jdr" capability? Or can that just be implicit? This really depends on whether 1) there is any value in providing a complete list of capabilities and 2) it's worthwhile not requiring a subsystem to explicitly declare a capability, but instead to just provide a default one. Re: 1) I was thinking about things like an installer/provisioning tool. We don't want users thinking in terms of "I want these 4 capabilities, and, oh, I also want these 3 subsystems too." Users should be able to represent everything in terms of capabilities. So my answer to 1) is "yes". Re: 2) if we decide capabilities are not bound to subsystems, then there's no choice; a subsystem will have to declare a capability. > - What about uniqueness? Can/should we enforce that each capability is > only ever satisfied by one subsystem? > In a given context (e.g. a server or a domain profile), yes.
>> [1] https://community.jboss.org/docs/DOC-52712 >> >> [2] https://community.jboss.org/docs/DOC-52700 >> > > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Mon Jun 23 17:03:11 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 23 Jun 2014 16:03:11 -0500 Subject: [wildfly-dev] Core and subsystem capabilities and requirements In-Reply-To: <53A8947B.6010300@redhat.com> References: <53A86FD7.6020502@redhat.com> <53A88599.1090003@redhat.com> <53A8947B.6010300@redhat.com> Message-ID: <53A8960F.7020108@redhat.com> On 6/23/14, 3:56 PM, Brian Stansberry wrote: > On 6/23/14, 2:52 PM, David M. Lloyd wrote: >> >> - I don't understand the reason to not allow optional dependencies on >> capabilities. It would be of similar implementation complexity to the >> suggested permutation implementation, however it would avoid the problem >> of requiring 2^n permutations for n optional dependencies. >> > > I knew that would draw some comment and was already backing off it a bit > as I wrote various drafts. :) > > The main thing is, say the user declares they want capability A, which > requires B and C. Then they say they want capability C. Did they forget > B or do they really not want it? > Meant to say *optionally* requires B and C. We may also need to deal with cases where B *or* C but not both is required. I think in that kind of situation though it would be better to have two capabilities with non-optional requirements. BTW, I think working out the EJB capabilities/requirements will be an excellent way to start, as it has so many. -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From ssilvert at redhat.com Mon Jun 23 21:02:08 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Mon, 23 Jun 2014 21:02:08 -0400 Subject: [wildfly-dev] Strange enforcer error Message-ID: <53A8CE10.7090307@redhat.com> I just rebased from master and I get the enforcer error below.
Any idea where the reference to org.wildfly:wildfly-server:8.1.0.Final comes from? mvn dependency:tree doesn't show it. I've grepped everything for 8.1.0.Final and don't see anything either. [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building WildFly: Server 9.0.0.Alpha1-SNAPSHOT [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- maven-enforcer-plugin:1.3.1:enforce (ban-bad-dependencies) @ wildfly-server --- [WARNING] Dependency convergence error for org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT paths to dependency are: +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT and +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT +-org.wildfly:wildfly-domain-http-interface:9.0.0.Alpha1-SNAPSHOT +-org.wildfly:wildfly-server:8.1.0.Final [WARNING] Rule 1: org.apache.maven.plugins.enforcer.DependencyConvergence failed with message: Failed while enforcing releasability the error(s) are [ Dependency convergence error for org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT paths to dependency are: +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT and +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT +-org.wildfly:wildfly-domain-http-interface:9.0.0.Alpha1-SNAPSHOT +-org.wildfly:wildfly-server:8.1.0.Final ] [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.744 s [INFO] Finished at: 2014-06-23T20:56:18-05:00 [INFO] Final Memory: 19M/429M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (ban-bad-dependencies) on project wildfly-server: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.
-> [Help 1] [ERROR] From arun.gupta at gmail.com Tue Jun 24 00:42:05 2014 From: arun.gupta at gmail.com (Arun Gupta) Date: Mon, 23 Jun 2014 21:42:05 -0700 Subject: [wildfly-dev] Migrating from JBoss AS 7.1 -> WildFly 8.1 Message-ID: FYI http://jdevelopment.nl/experiences-migrating-jboss-7-wildfly-81/ Thanks, Arun -- http://blog.arungupta.me http://twitter.com/arungupta From tomaz.cerar at gmail.com Tue Jun 24 16:04:47 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Tue, 24 Jun 2014 22:04:47 +0200 Subject: [wildfly-dev] Strange enforcer error In-Reply-To: <53A8CE10.7090307@redhat.com> References: <53A8CE10.7090307@redhat.com> Message-ID: Looks like a problem with your local rebase. You have a cyclic dependency from wildfly-server --> http-interface --> wildfly-server On Tue, Jun 24, 2014 at 3:02 AM, Stan Silvert wrote: > I just rebased from master and I get the enforcer error below. Any idea > where the reference to org.wildfly:wildfly-server:8.1.0.Final comes from? > > mvn dependency:tree doesn't show it. I've grepped everything for > 8.1.0.Final and don't see anything either.
> > [INFO] > [INFO] > ------------------------------------------------------------------------ > [INFO] Building WildFly: Server 9.0.0.Alpha1-SNAPSHOT > [INFO] > ------------------------------------------------------------------------ > [INFO] > [INFO] --- maven-enforcer-plugin:1.3.1:enforce (ban-bad-dependencies) @ > wildfly-server --- > [WARNING] > Dependency convergence error for > org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT paths to dependency are: > +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT > and > +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT > +-org.wildfly:wildfly-domain-http-interface:9.0.0.Alpha1-SNAPSHOT > +-org.wildfly:wildfly-server:8.1.0.Final > > [WARNING] Rule 1: > org.apache.maven.plugins.enforcer.DependencyConvergence failed with > message: > Failed while enforcing releasability the error(s) are [ > Dependency convergence error for > org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT paths to dependency are: > +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT > and > +-org.wildfly:wildfly-server:9.0.0.Alpha1-SNAPSHOT > +-org.wildfly:wildfly-domain-http-interface:9.0.0.Alpha1-SNAPSHOT > +-org.wildfly:wildfly-server:8.1.0.Final > ] > [INFO] > ------------------------------------------------------------------------ > [INFO] BUILD FAILURE > [INFO] > ------------------------------------------------------------------------ > [INFO] Total time: 1.744 s > [INFO] Finished at: 2014-06-23T20:56:18-05:00 > [INFO] Final Memory: 19M/429M > [INFO] > ------------------------------------------------------------------------ > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce > (ban-bad-dependencies) on project wildfly-server: Some Enforcer rules > have failed. Look above for specific messages explaining why the rule failed.
-> > [Help 1] > [ERROR] > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140624/931df9ec/attachment-0001.html From dandread at redhat.com Wed Jun 25 12:38:21 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Wed, 25 Jun 2014 18:38:21 +0200 Subject: [wildfly-dev] Migrating from JBoss AS 7.1 -> WildFly 8.1 In-Reply-To: References: Message-ID: <53AAFAFD.3050604@redhat.com> Great article. On 24/06/2014 06:42, Arun Gupta wrote: > FYI http://jdevelopment.nl/experiences-migrating-jboss-7-wildfly-81/ > > Thanks, > Arun > From dandread at redhat.com Thu Jun 26 05:13:13 2014 From: dandread at redhat.com (Dimitris Andreadis) Date: Thu, 26 Jun 2014 11:13:13 +0200 Subject: [wildfly-dev] Migrating from JBoss AS 7.1 -> WildFly 8.1 In-Reply-To: <53AAFAFD.3050604@redhat.com> References: <53AAFAFD.3050604@redhat.com> Message-ID: <53ABE429.4090808@redhat.com> How can we approach this in a more organized way? I think we need a central place (JIRA, Wiki, other) to record things we know are different going from AS7 to WF8+, along with workarounds. This can be used as reference for developers or for migration tooling. Also we need to track work for anything for which we can act pro-actively and offer compatibility options (e.g. Tomcat valves ported to Undertow). I assume all subsystem/component owners/leads need to be involved, especially in areas where we know beforehand there are going to be compatibility issues (e.g. Undertow). WDYT? /Dimitris On 25/06/2014 18:38, Dimitris Andreadis wrote: > Great article.
> > On 24/06/2014 06:42, Arun Gupta wrote: >> FYI http://jdevelopment.nl/experiences-migrating-jboss-7-wildfly-81/ >> >> Thanks, >> Arun >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From ewertz at redhat.com Thu Jun 26 06:41:09 2014 From: ewertz at redhat.com (Edward Wertz) Date: Thu, 26 Jun 2014 06:41:09 -0400 (EDT) Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <1734367221.34071197.1403779221377.JavaMail.zimbra@redhat.com> Message-ID: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. From my understanding, there are two variations of the problem. * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource' * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. I'm wondering if anyone can suggest a way to attack this problem?
There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem. Thanks, Joe Wertz From arun.gupta at gmail.com Thu Jun 26 07:44:02 2014 From: arun.gupta at gmail.com (Arun Gupta) Date: Thu, 26 Jun 2014 05:44:02 -0600 Subject: [wildfly-dev] Migrating from JBoss AS 7.1 -> WildFly 8.1 In-Reply-To: <53ABE429.4090808@redhat.com> References: <53AAFAFD.3050604@redhat.com> <53ABE429.4090808@redhat.com> Message-ID: +10 This will be a great resource for developers and migration tooling. And I can take help from the community members who are migrating apps to WildFly to contribute their experience there as well. Arun On Thu, Jun 26, 2014 at 3:13 AM, Dimitris Andreadis wrote: > How can we approach this in a more organized way? > > I think we need a central place (JIRA, Wiki, other) to record things we know are different > going from AS7 to WF8+, along with workarounds. This can be used as reference for developers > or for migration tooling. > > Also we need to track work for anything for which we can act pro-actively and offer > compatibility options (e.g. Tomcat valves ported to Undertow). > > I assume all subsystem/component owners/leads need to be involved, especially in areas where > we know beforehand there are going to be compatibility issues (e.g. Undertow). > > WDYT? > > /Dimitris > > On 25/06/2014 18:38, Dimitris Andreadis wrote: >> Great article.
>> >> On 24/06/2014 06:42, Arun Gupta wrote: >>> FYI http://jdevelopment.nl/experiences-migrating-jboss-7-wildfly-81/ >>> >>> Thanks, >>> Arun >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- http://blog.arungupta.me http://twitter.com/arungupta From smarlow at redhat.com Thu Jun 26 09:28:48 2014 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 26 Jun 2014 09:28:48 -0400 Subject: [wildfly-dev] Migrating from JBoss AS 7.1 -> WildFly 8.1 In-Reply-To: <53ABE429.4090808@redhat.com> References: <53AAFAFD.3050604@redhat.com> <53ABE429.4090808@redhat.com> Message-ID: <53AC2010.1030507@redhat.com> On 06/26/2014 05:13 AM, Dimitris Andreadis wrote: > How can we approach this in a more organized way? > > I think we need a central place (JIRA, Wiki, other) to record things we know are different > going from AS7 to WF8+, along with workarounds. This can be used as reference for developers > or for migration tooling. > > Also we need to track work for anything for which we can act pro-actively and offer > compatibility options (e.g. Tomcat valves ported to Undertow). > > I assume all subsystem/component owners/leads need to be involved, especially in areas where > we know beforehand there are going to be compatibility issues (e.g. Undertow). > > WDYT? For migrating to WildFly 8, we have https://docs.jboss.org/author/display/WFLY8/How+do+I+migrate+my+application+from+AS5+or+AS6+to+WildFly For migrating to WildFly 9, we have https://docs.jboss.org/author/display/WFLY9/How+do+I+migrate+my+application+from+AS5+or+AS6+to+WildFly > > /Dimitris > > On 25/06/2014 18:38, Dimitris Andreadis wrote: >> Great article.
>> >> On 24/06/2014 06:42, Arun Gupta wrote: >>> FYI http://jdevelopment.nl/experiences-migrating-jboss-7-wildfly-81/ >>> >>> Thanks, >>> Arun >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From brian.stansberry at redhat.com Thu Jun 26 11:31:08 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Thu, 26 Jun 2014 10:31:08 -0500 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> Message-ID: <53AC3CBC.6050403@redhat.com> Thanks, Joe, for looking into this. I'm curious what you've done so far with your 'ls --resolve-expressions' work. Did you use the existing ':resolve-expression(expression=___)' low level operation to process any expressions found in the :read-resource response? There are a few aspects of this I'd like to explore. One is the UX one. Is allowing 'resolve-expressions' in some contexts and not others a good UX? Will users understand that? I'm ambivalent about that, and am interested in others' opinions. If it can work for a server and for anything under /host=*, then I'm ambivalent. Any restriction at all is unintuitive, but once the user learns that there is a restriction, that's a pretty understandable one. If it only works for a patchwork of stuff under /host=* then I'm real negative about it. An area of concern is /host=*/server-config=*, where an expression might be irrelevant to the host, only resolving correctly on the server that is created using that server-config. That will need careful examination. A second one is how this data would be displayed with 'ls'. 
A separate additional column? Or replacing the current data? The answer to this might impact how it would be implemented server side. The third aspect is the technical issue of how to make any 'resolve-expressions' param or CLI argument available in certain contexts and not in others. That's very likely solvable on the server side; not sure how difficult it would be in the CLI high-level command. FYI, for others reading this, offline Joe pointed out there's a related JIRA for this: https://issues.jboss.org/browse/WFLY-1069. On 6/26/14, 5:41 AM, Edward Wertz wrote: > I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. > > From my understanding, there are two variations of the problem. > > * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource' > > * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' > > I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. > > The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. > > I'm wondering if anyone can suggest a way to attack this problem?
There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem. > > Thanks, > > Joe Wertz > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From spederse at redhat.com Thu Jun 26 12:24:48 2014 From: spederse at redhat.com (=?iso-8859-1?Q?St=E5le?= W Pedersen) Date: Thu, 26 Jun 2014 18:24:48 +0200 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <53AC3CBC.6050403@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> Message-ID: <20140626162448.GE4045@beistet> hi, this might be side stepping the topic a bit, but atm im working on rewriting the cli to work with the latest version of æsh. the new api in æsh has capabilities like specifying: - an option activator (the option will not be shown during completion until activator is "validated") - option validator (will be processed after the user has typed in a value) - option completer +++ the homepage is here; http://aeshell.github.io/ the docs are not quite up to date, but they should give an idea of how it works. the plan is to have designated completers for "browsing" the tree etc, activators, validators etc. so the commands should be fairly simple without much boilerplate. with this change it would also be easy to add custom commands to the cli if needed. ive just started and i cant dedicate 100% on this so i dont know when ill be done, the plan is to target wildfly9.
- if aleksey and brian approves :) here is the branch im working against atm: https://github.com/stalep/wildfly/tree/aesh_upgrade_take_two ståle On 26.06.14 10:31, Brian Stansberry wrote: >Thanks, Joe, for looking into this. > >I'm curious what you've done so far with your 'ls --resolve-expressions' >work. Did you use the existing ':resolve-expression(expression=___)' low >level operation to process any expressions found in the :read-resource >response? > >There are a few aspects of this I'd like to explore. > >One is the UX one. Is allowing 'resolve-expressions' in some contexts >and not others a good UX? Will users understand that? I'm ambivalent >about that, and am interested in others' opinions. > >If it can work for a server and for anything under /host=*, then I'm >ambivalent. Any restriction at all is unintuitive, but once the user >learns that there is a restriction, that's a pretty understandable one. >If it only works for a patchwork of stuff under /host=* then I'm real >negative about it. An area of concern is /host=*/server-config=*, where >an expression might be irrelevant to the host, only resolving correctly >on the server that is created using that server-config. That will need >careful examination. > >A second one is how this data would be displayed with 'ls'. A separate >additional column? Or replacing the current data? The answer to this >might impact how it would be implemented server side. > >The third aspect is the technical issue of how to make any >'resolve-expressions' param or CLI argument available in certain >contexts and not in others. That's very likely solvable on the server >side; not sure how difficult it would be in the CLI high-level command. > >FYI, for others reading this, offline Joe pointed out there's a related >JIRA for this: https://issues.jboss.org/browse/WFLY-1069.
> >On 6/26/14, 5:41 AM, Edward Wertz wrote: >> I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. >> >> From my understanding, there are two variations of the problem. >> >> * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource' >> >> * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' >> >> I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. >> >> The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. >> >> I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem.
>> >> Thanks, >> >> Joe Wertz >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > >-- >Brian Stansberry >Senior Principal Software Engineer >JBoss by Red Hat >_______________________________________________ >wildfly-dev mailing list >wildfly-dev at lists.jboss.org >https://lists.jboss.org/mailman/listinfo/wildfly-dev From frank.langelage at osnanet.de Thu Jun 26 17:24:49 2014 From: frank.langelage at osnanet.de (Frank Langelage) Date: Thu, 26 Jun 2014 23:24:49 +0200 Subject: [wildfly-dev] build on Solaris broken now Message-ID: <53AC8FA1.5070802@osnanet.de> for those who might want to build current 9.0.0-SNAPSHOT from github sources on Solaris: after the commit adding the check for / download of Maven if needed, the build currently fails with a syntax error in tools/maven/bin/mvn after downloading the latest version 3.2.2. Maven 3.2.2's bin/mvn contains incompatible syntax for Solaris /bin/sh and perhaps others: if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; then # # Apple JDKs # export JAVA_HOME=$(/usr/libexec/java_home) // here it fails fi ;; I created a JIRA for Apache Maven to change this line to export JAVA_HOME=/usr/libexec/java_home Doing this change after download / unzip in my local repo makes the build work again for me. Regards, Frank From tomaz.cerar at gmail.com Thu Jun 26 17:53:37 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Thu, 26 Jun 2014 23:53:37 +0200 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: <53AC8FA1.5070802@osnanet.de> References: <53AC8FA1.5070802@osnanet.de> Message-ID: Frank, can you send PR with a fix to download-maven.sh script? given that you have Solaris at hand it will probably be the easiest.
Thank you, tomaz On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage < frank.langelage at osnanet.de> wrote: > for those who might want to build current 9.0.0-SNAPSHOT from github > sources on Solaris: > after commit of checking / downloading maven if needed this currently > fails with a syntax error in tools/maven/bin/mvn after downloading the > latest version 3.2.2. > Maven 3.2.2 's bin/mvn contains incompatible syntax for Solaris /bin/sh > and perhaps others: > if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; then > # > # Apple JDKs > # > export JAVA_HOME=$(/usr/libexec/java_home) // here it fails > fi > ;; > > I created a JIRA for Apache Maven to change this line to > export JAVA_HOME=/usr/libexec/java_home > > Doing this change after download / unzip in my local repo make the build > working again for me. > > Regards, Frank > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140626/d0f6a6a8/attachment.html From stuart.w.douglas at gmail.com Thu Jun 26 22:14:24 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Thu, 26 Jun 2014 22:14:24 -0400 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: References: <53AC8FA1.5070802@osnanet.de> Message-ID: <53ACD380.1020809@gmail.com> So is this an issue with the download script or with the Maven version upgrade? Stuart Tomaž Cerar wrote: > Frank, > > can you send PR with a fix to download-maven.sh script? > given that you have solaris at hand it will be probably the easiest.
> > Thank you, > tomaz > > > On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage > > wrote: > > for those who might want to build current 9.0.0-SNAPSHOT from github > sources on Solaris: > after commit of checking / downloading maven if needed this > currently > fails with a syntax error in tools/maven/bin/mvn after > downloading the > latest version 3.2.2. > Maven 3.2.2 's bin/mvn contains incompatible syntax for Solaris > /bin/sh > and perhaps others: > if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; > then > # > # Apple JDKs > # > export JAVA_HOME=$(/usr/libexec/java_home) // here it > fails > fi > ;; > > I created a JIRA for Apache Maven to change this line to > export JAVA_HOME=/usr/libexec/java_home > > Doing this change after download / unzip in my local repo make > the build > working again for me. > > Regards, Frank > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From frank.langelage at osnanet.de Fri Jun 27 03:27:54 2014 From: frank.langelage at osnanet.de (Frank Langelage) Date: Fri, 27 Jun 2014 09:27:54 +0200 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: <53ACD380.1020809@gmail.com> References: <53AC8FA1.5070802@osnanet.de> <53ACD380.1020809@gmail.com> Message-ID: <53AD1CFA.4030405@osnanet.de> It's definitely a maven problem. Introduced in Apache Maven 3.2.2. So I did not understand Tomaz's request for a PR. Too fast reading? This would be a workaround in WF to fix a problem in Maven. http://jira.codehaus.org/browse/MNG-5658 We might go back to Maven 3.2.1 for now, waiting for 3.2.3 to become available with this problem hopefully fixed, if there are not other drawbacks from this downgrade.
Regards, Frank On 27.06.14 04:14, Stuart Douglas wrote: > So is this an issue with the download script or with the Maven version > upgrade? > > Stuart > > Tomaž Cerar wrote: >> Frank, >> >> can you send PR with a fix to download-maven.sh script? >> given that you have solaris at hand it will be probably the easiest. >> >> Thank you, >> tomaz >> >> >> On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage >> > wrote: >> >> for those who might want to build current 9.0.0-SNAPSHOT from github >> sources on Solaris: >> after commit of checking / downloading maven if needed this >> currently >> fails with a syntax error in tools/maven/bin/mvn after >> downloading the >> latest version 3.2.2. >> Maven 3.2.2 's bin/mvn contains incompatible syntax for Solaris >> /bin/sh >> and perhaps others: >> if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; >> then >> # >> # Apple JDKs >> # >> export JAVA_HOME=$(/usr/libexec/java_home) // here it >> fails >> fi >> ;; >> >> I created a JIRA for Apache Maven to change this line to >> export JAVA_HOME=/usr/libexec/java_home >> >> Doing this change after download / unzip in my local repo make >> the build >> working again for me. >> >> Regards, Frank >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> >> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > From rory.odonnell at oracle.com Fri Jun 27 03:38:30 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 27 Jun 2014 08:38:30 +0100 Subject: [wildfly-dev] Early Access builds for JDK 9 b19, JDK 8u20 b20 are available on java.net Message-ID: <53AD1F76.9070902@oracle.com> Hi Guys, Early Access builds for JDK 9 b19 and JDK 8u20 b20 are available on java.net.
As we enter the later phases of development for JDK 8u20 , please log any show stoppers as soon as possible. Rgds, Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/838050df/attachment.html From tomaz.cerar at gmail.com Fri Jun 27 04:59:49 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Fri, 27 Jun 2014 10:59:49 +0200 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: <53AD1CFA.4030405@osnanet.de> References: <53AC8FA1.5070802@osnanet.de> <53ACD380.1020809@gmail.com> <53AD1CFA.4030405@osnanet.de> Message-ID: Frank, I had in mind that download-maven.sh script could "patch" the bug in the maven executable after the download. Other option is that we just go back to 3.2.1 maven. as looking at http://maven.apache.org/docs/3.2.2/release-notes.html there are no real benefits that our build would need. -- tomaz On Fri, Jun 27, 2014 at 9:27 AM, Frank Langelage wrote: > It's definitely a maven problem. Introduced in Apache Maven 3.2.2. > So I not understood Tomaz request for a PR. Too fast reading? > This would be a work around in WF to fix a problem in Maven. > http://jira.codehaus.org/browse/MNG-5658 > > We might go back to Maven 3.2.1 for now waiting for 3.2.3 becoming > available with this problem hopefully fixed, if there a re not other > drawbacks from this downgrade. > > Regards, Frank > > > On 27.06.14 04:14, Stuart Douglas wrote: > >> So is this an issue with the download script or with the Maven version >> upgrade? >> >> Stuart >> >> Toma? Cerar wrote: >> >>> Frank, >>> >>> can you send PR with a fix to download-maven.sh script? >>> given that you have solaris at hand it will be probably the easiest. 
>>> >>> Thank you, >>> tomaz >>> >>> >>> On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage >>> > wrote: >>> >>> for those who might want to build current 9.0.0-SNAPSHOT from github >>> sources on Solaris: >>> after commit of checking / downloading maven if needed this currently >>> fails with a syntax error in tools/maven/bin/mvn after downloading >>> the >>> latest version 3.2.2. >>> Maven 3.2.2 's bin/mvn contains incompatible syntax for Solaris >>> /bin/sh >>> and perhaps others: >>> if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; >>> then >>> # >>> # Apple JDKs >>> # >>> export JAVA_HOME=$(/usr/libexec/java_home) // here it >>> fails >>> fi >>> ;; >>> >>> I created a JIRA fpr Apache Maven to change this line to >>> export JAVA_HOME=/usr/libexec/java_home >>> >>> Doing this change after download / unzip in my local repo make the >>> build >>> working again for me. >>> >>> Regards, Frank >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >>> >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/14ab59b9/attachment.html From ewertz at redhat.com Fri Jun 27 06:10:27 2014 From: ewertz at redhat.com (Edward Wertz) Date: Fri, 27 Jun 2014 06:10:27 -0400 (EDT) Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <20140626162448.GE4045@beistet> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> <20140626162448.GE4045@beistet> Message-ID: <24838160.7922.1403863823207.JavaMail.joe@dhcp-17-61.nay.redhat.com> Brian, Yes, the work so far searches the result that the existing 'ls' command receives for expressions and, if found, creates a ':resolve-expression(expression=___)' operation to resolve each one. On the domain side I can add an 'address' node to that command and target it towards a specific server. The existing ':resolve-expression(expression=___)' operation is already limited within the UX. Actually, I found out today, to a very strict extent, which could be part of the users' confusion right now. The operation is currently unavailable anywhere deeper than the server level. Users are making requests to have expressions resolved in other commands and operations, which would certainly be convenient, but the tree levels they're giving as examples are all forbidden to the existing ':resolve-expression' operation. Which could be why they either don't know it exists, forget to use it, or even think it wouldn't work correctly. For example, they execute ':read-resource' at 'socket-binding-group=full-sockets/socket-binding=remoting', see an expression, but would have to 'cd ../../../..' to get access to the ':resolve-expression' operation. Since it's restricted so much I imagine most people don't even know it exists. I'm not sure about the presentation. The user has to deliberately request the argument/param, so replacing the value might be fine, but having both would probably be nicer.
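Joe's scan-and-resolve approach (search the command's result for expressions, then resolve each one) can be mimicked from the outside. As an illustrative stand-in only — the real work happens inside the CLI's Java code, and the capture-file name here is hypothetical — a wrapper script could pull every `${...}` expression out of saved CLI output so each could be fed to `:resolve-expression`:

```shell
# Hypothetical post-processing sketch: extract each ${...} expression from a
# saved capture of CLI output ('cli-output.txt' is an assumed file name).
extract_expressions() {
  # -o prints only the matched text (GNU/BSD grep; not strictly POSIX)
  grep -o '\${[^}]*}' "$1" | sort -u
}
# Each extracted expression could then be wrapped in a resolve call, e.g.:
# for e in $(extract_expressions cli-output.txt); do
#   echo ":resolve-expression(expression=$e)"
# done
```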
Ståle, Thanks for the heads up. That option activator sounds like exactly what I want to do with the CLI side command arguments. I'll try to get a bit more familiar with æsh in the next few days. Joe ----- Original Message ----- > hi, this might be side stepping the topic a bit, but atm im working > on > rewriting the cli to work with the latest version of æsh. > the new api in æsh have capabilities like specifying: > - an option activator (the option will not be shown during completion > until activator is "validated") > - option validator (will be processed after user have typed in a > value) > - option completer > +++ > the homepage is here; http://aeshell.github.io/ the docs are not > quite > up to date, but they should give an idea of how it works. > > the plan is to have designated completers for "browsing" the tree > etc, > activators, validators etc. so the commands should be fairly simple > without much boilerplate. > > with this change it would also be easy to add custom commands to the > cli > if needed. > > ive just started and i cant dedicate 100% on this so i dont know when > ill be done, the plan is to target wildfly9. - if aleksey and brian > approves :) > > here is the branch im working against atm: > > https://github.com/stalep/wildfly/tree/aesh_upgrade_take_two > > ståle > > On 26.06.14 10:31, Brian Stansberry wrote: > >Thanks, Joe, for looking into this. > > > >I'm curious what you've done so far with your 'ls > >--resolve-expressions' > >work. Did you use the existing ':resolve-expression(expression=___)' > >low > >level operation to process any expressions found in the > >:read-resource > >response? > > > >There are a few aspects of this I'd like to explore. > > > >One is the UX one. Is allowing 'resolve-expressions' in some > >contexts > >and not others a good UX? Will users understand that? I'm ambivalent > >about that, and am interested in others' opinions.
> > > >If it can work for a server and for anything under /host=*, then I'm > >ambivalent. Any restriction at all is unintuitive, but once the user > >learns that there is a restriction, that's a pretty understandable > >one. > >If it only works for a patchwork of stuff under /host=* then I'm > >real > >negative about it. An area of concern is /host=*/server-config=*, > >where > >an expression might be irrelevant to the host, only resolving > >correctly > >on the server that is created using that server-config. That will > >need > >careful examination. > > > >A second one is how this data would be displayed with 'ls'. A > >separate > >additional column? Or replacing the current data? The answer to this > >might impact how it would be implemented server side. > > > >The third aspect is the technical issue of how to make any > >'resolve-expressions' param or CLI argument available in certain > >contexts and not in others. That's very likely solvable on the > >server > >side; not sure how difficult it would be in the CLI high-level > >command. > > > >FYI, for others reading this, offline Joe pointed out there's a > >related > >JIRA for this: https://issues.jboss.org/browse/WFLY-1069. > > > >On 6/26/14, 5:41 AM, Edward Wertz wrote: > >> I'm looking into whether it's possible to automatically resolve > >> expressions when executing operations and commands in the CLI. > >> > >>>From my understanding, there are two variations of the problem. > >> > >> * Operations are server-side processes that are accessed via ':' > >> in the CLI and, currently, the CLI presents the results > >> returned as-is to the users. ex: ':read-resource' > >> > >> * Commands are processes that get manipulated by the CLI before > >> getting presented to users. ex: 'ls' > >> > >> I've been experimenting with adding arguments to the CLI commands, > >> like 'ls --resolve-expressions', and gotten it working for the > >> standalone and domain side of things. 
However, I can't control > >> the scope of the argument, so it's available in situations that > >> cannot accurately resolve expressions like the 'profile=full' > >> section of the domain tree. The results wouldn't be reliable. > >> > >> The same problem would apply to adding parameters to the > >> server-side operations. The scope of the operations themselves > >> can be controlled, but not their parameters. An execution like > >> ':read-resource(recursive=true resolve-expressions=true)' can't > >> resolve expressions unless it's used against an actual server or > >> host, but the operation is available almost everywhere. Again, > >> the results wouldn't be reliable. > >> > >> I'm wondering if anyone can suggest a way to attack this problem? > >> There is already a ':resolve-expression(expression=___)' > >> operation, so users can somewhat laboriously get the runtime > >> values they want, but I can't figure out a way to integrate the > >> values into the existing framework successfully. Other than > >> creating entirely new operations and commands, like 'ls-resolve' > >> and ':read-resource-resolve', which seems like an unsustainable > >> way to solve the problem. 
> >> > >> Thanks, > >> > >> Joe Wertz > >> _______________________________________________ > >> wildfly-dev mailing list > >> wildfly-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > >> > > > > > >-- > >Brian Stansberry > >Senior Principal Software Engineer > >JBoss by Red Hat > >_______________________________________________ > >wildfly-dev mailing list > >wildfly-dev at lists.jboss.org > >https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From alexey.loubyansky at redhat.com Fri Jun 27 09:27:56 2014 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Fri, 27 Jun 2014 15:27:56 +0200 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <53AC3CBC.6050403@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> Message-ID: <53AD715C.8000505@redhat.com> On 06/26/2014 05:31 PM, Brian Stansberry wrote: > Thanks, Joe, for looking into this. > > I'm curious what you've done so far with your 'ls --resolve-expressions' > work. Did you use the existing ':resolve-expression(expression=___)' low > level operation to process any expressions found in the :read-resource > response? > > There are a few aspects of this I'd like to explore. > > One is the UX one. Is allowing 'resolve-expressions' in some contexts > and not others a good UX? Will users understand that? I'm ambivalent > about that, and am interested in others' opinions. > > If it can work for a server and for anything under /host=*, then I'm > ambivalent. Any restriction at all is unintuitive, but once the user > learns that there is a restriction, that's a pretty understandable one. > If it only works for a patchwork of stuff under /host=* then I'm real > negative about it. 
An area of concern is /host=*/server-config=*, where > an expression might be irrelevant to the host, only resolving correctly > on the server that is created using that server-config. That will need > careful examination. > > A second one is how this data would be displayed with 'ls'. A separate > additional column? Or replacing the current data? The answer to this > might impact how it would be implemented server side. Keep in mind that ls is just one example. There are other commands that will have to support this feature once it's implemented in one place. Another example is the read-attribute command. The ability to resolve expressions elsewhere will be a natural expectation then. So, it has to be thought of as a general feature that can be applied to various cli commands. IMO, the values returned should just be replaced with the resolved ones for display. Some commands support a --verbose argument, in which case additional info is displayed in columns; there we could include the original value. The output of the cli commands is in some cases parsed by scripts or other code, so keeping it simple will help there too. > The third aspect is the technical issue of how to make any > 'resolve-expressions' param or CLI argument available in certain > contexts and not in others. That's very likely solvable on the server > side; not sure how difficult it would be in the CLI high-level command. Current tab-completion supports dependencies of command arguments and their values on the current context (connection to the controller, standalone/domain mode, the presence of other arguments on the line and the values specified for them, etc.). Technically, there shouldn't be an issue. I am more concerned about how intuitive that will look for the user in various contexts. Alexey > FYI, for others reading this, offline Joe pointed out there's a related > JIRA for this: https://issues.jboss.org/browse/WFLY-1069.
> > On 6/26/14, 5:41 AM, Edward Wertz wrote: >> I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. >> >> >From my understanding, there are two variations of the problem. >> >> * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource' >> >> * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' >> >> I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. >> >> The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. >> >> I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem. 
>> >> Thanks, >> >> Joe Wertz >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > From frank.langelage at osnanet.de Fri Jun 27 09:29:44 2014 From: frank.langelage at osnanet.de (Frank Langelage) Date: Fri, 27 Jun 2014 15:29:44 +0200 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: References: <53AC8FA1.5070802@osnanet.de> <53ACD380.1020809@gmail.com> <53AD1CFA.4030405@osnanet.de> Message-ID: <53AD71C8.9030604@osnanet.de> Hi Tomaz, Okay, I modified download-maven.sh script to modify the faulty line in maven/bin/mvn. This now works for me. See pull-request #6447. Frank On 27.06.14 10:59, Toma? Cerar wrote: > Frank, > > I had in mind that download-maven.sh script could "patch" the bug in > the maven executable after the download. > > Other option is that we just go back to 3.2.1 maven. > as looking at http://maven.apache.org/docs/3.2.2/release-notes.html > there are no real benefits that our build would need. > > -- > tomaz > > > On Fri, Jun 27, 2014 at 9:27 AM, Frank Langelage > > wrote: > > It's definitely a maven problem. Introduced in Apache Maven 3.2.2. > So I not understood Tomaz request for a PR. Too fast reading? > This would be a work around in WF to fix a problem in Maven. > http://jira.codehaus.org/browse/MNG-5658 > > We might go back to Maven 3.2.1 for now waiting for 3.2.3 becoming > available with this problem hopefully fixed, if there a re not > other drawbacks from this downgrade. > > Regards, Frank > > > On 27.06.14 04:14, Stuart Douglas wrote: > > So is this an issue with the download script or with the Maven > version upgrade? > > Stuart > > Toma? Cerar wrote: > > Frank, > > can you send PR with a fix to download-maven.sh script? > given that you have solaris at hand it will be probably > the easiest. 
> > Thank you, > tomaz > > > On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage > > >> wrote: > > for those who might want to build current > 9.0.0-SNAPSHOT from github > sources on Solaris: > after commit of checking / downloading maven if needed > this currently > fails with a syntax error in tools/maven/bin/mvn after > downloading the > latest version 3.2.2. > Maven 3.2.2 's bin/mvn contains incompatible syntax > for Solaris /bin/sh > and perhaps others: > if [[ -z "$JAVA_HOME" && -x > /usr/libexec/java_home ]] ; > then > # > # Apple JDKs > # > export > JAVA_HOME=$(/usr/libexec/java_home) // here it > fails > fi > ;; > > I created a JIRA fpr Apache Maven to change this line to > export JAVA_HOME=/usr/libexec/java_home > > Doing this change after download / unzip in my local > repo make the build > working again for me. > > Regards, Frank > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/ef2238bd/attachment.html From tomaz.cerar at gmail.com Fri Jun 27 09:40:21 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Fri, 27 Jun 2014 15:40:21 +0200 Subject: [wildfly-dev] build on Solaris broken now In-Reply-To: <53AD71C8.9030604@osnanet.de> References: <53AC8FA1.5070802@osnanet.de> <53ACD380.1020809@gmail.com> <53AD1CFA.4030405@osnanet.de> <53AD71C8.9030604@osnanet.de> Message-ID: Hi Frank, that looks great! Thank you for the PR. -- tomaz On Fri, Jun 27, 2014 at 3:29 PM, Frank Langelage wrote: > Hi Tomaz, > > Okay, I modified download-maven.sh script to modify the faulty line in > maven/bin/mvn. 
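A patch of this kind — rewriting the one offending line in the unpacked Maven launcher after the download/unzip step — could boil down to a single sed invocation. The sketch below is illustrative only (it is not the actual contents of pull request #6447); the `tools/maven/bin/mvn` path is the one named earlier in the thread:

```shell
# Hypothetical sketch of patching the freshly unpacked Maven launcher so that
# Solaris /bin/sh can parse it. Replaces the bash-only '[[ ... ]]' test with
# two chained POSIX '[ ... ]' tests.
MVN="tools/maven/bin/mvn"
if [ -f "$MVN" ]; then
  sed 's|\[\[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]]|[ -z "$JAVA_HOME" ] \&\& [ -x /usr/libexec/java_home ]|' \
    "$MVN" > "$MVN.patched" && mv "$MVN.patched" "$MVN"
fi
```

Note the `\&` escapes in the replacement: an unescaped `&` in sed's replacement text would re-insert the whole matched pattern.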
> This now works for me. > See pull-request #6447. > > Frank > > > > On 27.06.14 10:59, Toma? Cerar wrote: > > Frank, > > I had in mind that download-maven.sh script could "patch" the bug in the > maven executable after the download. > > Other option is that we just go back to 3.2.1 maven. > as looking at http://maven.apache.org/docs/3.2.2/release-notes.html > there are no real benefits that our build would need. > > -- > tomaz > > > On Fri, Jun 27, 2014 at 9:27 AM, Frank Langelage < > frank.langelage at osnanet.de> wrote: > >> It's definitely a maven problem. Introduced in Apache Maven 3.2.2. >> So I not understood Tomaz request for a PR. Too fast reading? >> This would be a work around in WF to fix a problem in Maven. >> http://jira.codehaus.org/browse/MNG-5658 >> >> We might go back to Maven 3.2.1 for now waiting for 3.2.3 becoming >> available with this problem hopefully fixed, if there a re not other >> drawbacks from this downgrade. >> >> Regards, Frank >> >> >> On 27.06.14 04:14, Stuart Douglas wrote: >> >>> So is this an issue with the download script or with the Maven version >>> upgrade? >>> >>> Stuart >>> >>> Toma? Cerar wrote: >>> >>>> Frank, >>>> >>>> can you send PR with a fix to download-maven.sh script? >>>> given that you have solaris at hand it will be probably the easiest. >>>> >>>> Thank you, >>>> tomaz >>>> >>>> >>>> On Thu, Jun 26, 2014 at 11:24 PM, Frank Langelage >>>> > wrote: >>>> >>>> for those who might want to build current 9.0.0-SNAPSHOT from github >>>> sources on Solaris: >>>> after commit of checking / downloading maven if needed this >>>> currently >>>> fails with a syntax error in tools/maven/bin/mvn after downloading >>>> the >>>> latest version 3.2.2. 
>>>> Maven 3.2.2 's bin/mvn contains incompatible syntax for Solaris >>>> /bin/sh >>>> and perhaps others: >>>> if [[ -z "$JAVA_HOME" && -x /usr/libexec/java_home ]] ; >>>> then >>>> # >>>> # Apple JDKs >>>> # >>>> export JAVA_HOME=$(/usr/libexec/java_home) // here it >>>> fails >>>> fi >>>> ;; >>>> >>>> I created a JIRA fpr Apache Maven to change this line to >>>> export JAVA_HOME=/usr/libexec/java_home >>>> >>>> Doing this change after download / unzip in my local repo make the >>>> build >>>> working again for me. >>>> >>>> Regards, Frank >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>>> >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/ce38d343/attachment-0001.html From brian.stansberry at redhat.com Fri Jun 27 10:45:56 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Fri, 27 Jun 2014 09:45:56 -0500 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <53AD715C.8000505@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> <53AD715C.8000505@redhat.com> Message-ID: <53AD83A4.10004@redhat.com> On 6/27/14, 8:27 AM, Alexey Loubyansky wrote: > On 06/26/2014 05:31 PM, Brian Stansberry wrote: >> Thanks, Joe, for looking into this. >> >> I'm curious what you've done so far with your 'ls --resolve-expressions' >> work. Did you use the existing ':resolve-expression(expression=___)' low >> level operation to process any expressions found in the :read-resource >> response? >> >> There are a few aspects of this I'd like to explore. 
>> >> One is the UX one. Is allowing 'resolve-expressions' in some contexts >> and not others a good UX? Will users understand that? I'm ambivalent >> about that, and am interested in others' opinions. >> >> If it can work for a server and for anything under /host=*, then I'm >> ambivalent. Any restriction at all is unintuitive, but once the user >> learns that there is a restriction, that's a pretty understandable one. >> If it only works for a patchwork of stuff under /host=* then I'm real >> negative about it. An area of concern is /host=*/server-config=*, where >> an expression might be irrelevant to the host, only resolving correctly >> on the server that is created using that server-config. That will need >> careful examination. >> >> A second one is how this data would be displayed with 'ls'. A separate >> additional column? Or replacing the current data? The answer to this >> might impact how it would be implemented server side. > > Keep in mind that ls is an example. There are other commands that will > have to support this feature once it's implemented in one place. Another > example is read-attribute command. The ability to resolve expressions > elsewhere will be a natural expectation then. > So, it has to be thought of as a general features that can be applied to > various cli commands. > Good point. Joe, we'd need a clear understanding of all the commands that would be affected. > IMO, the values returned should just be replaced with the resolved ones > for display. Some commands support --verbose argument, in which case > additional info is displayed in columns, there we could include the > original value. > The output of the cli commands in some cases is parsed by scripts or > other code, so keeping it simple will help there too. > >> The third aspect is the technical issue of how to make any >> 'resolve-expressions' param or CLI argument available in certain >> contexts and not in others. 
That's very likely solvable on the server >> side; not sure how difficult it would be in the CLI high-level command. > > Current tab-completion supports dependencies of command arguments and > their values on the current context (connection to the controller, > standalone/domain mode, the presence of other arguments on the line and > the values specified for them, etc). Technically, there shouldn't be an > issue. Ok, good. > I am more concerned about how intuitive that will look like for the user > in various contexts. > Yes, I think the UX aspects are the more significant ones. > Alexey > >> FYI, for others reading this, offline Joe pointed out there's a related >> JIRA for this: https://issues.jboss.org/browse/WFLY-1069. >> >> On 6/26/14, 5:41 AM, Edward Wertz wrote: >>> I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. >>> >>> >From my understanding, there are two variations of the problem. >>> >>> * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource' >>> >>> * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' >>> >>> I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. >>> >>> The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. 
An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. >>> >>> I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem. >>> >>> Thanks, >>> >>> Joe Wertz >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> >> >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From brian.stansberry at redhat.com Fri Jun 27 11:00:40 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Fri, 27 Jun 2014 10:00:40 -0500 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <24838160.7922.1403863823207.JavaMail.joe@dhcp-17-61.nay.redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> <20140626162448.GE4045@beistet> <24838160.7922.1403863823207.JavaMail.joe@dhcp-17-61.nay.redhat.com> Message-ID: <53AD8718.5060208@redhat.com> On 6/27/14, 5:10 AM, Edward Wertz wrote: > Brian, > > Yes, the work so far searches the result that the existing 'ls' command receives for expressions and, if found, creates a ':resolve-expression(expression=___)' operation to resolve each one. 
On the domain side I can add an 'address' node to that command and target it towards a specific server. > > The existing ':resolve-expression(expression=___)' operation is already limited within the UX. Actually, I found out today, to a very strict extent. Which could be part of the user's confusion right now. The operation is currently unavailable anywhere deeper than the server level. Users are making requests to have expressions resolved in other commands and operations, which would certainly be convenient, but the tree level that they're giving as examples are all forbidden to the existing ':resolve-expression' operation. Which could be why they either don't know it exists, forget to use it, or even think it wouldn't work correctly. > Yes, it's limited. IIRC that operation was largely written to support our own tooling (e.g. CLI, web console) and we recognized that its use was not user-friendly. If we start adding something for general, non-expert use then the UX requirements are much higher. > For an example, they execute ':read-resource' at 'socket-binding-group=full-sockets/socket-binding=remoting', see an expression, but would have to 'cd ../../../..' to get access to the ':resolve-expression' operation. Since it's restricted so much I imagine most people don't even know it exists. > FYI, you don't have to cd to invoke an operation. You can provide an absolute or relative address as part of any low level operation command by providing the address before the ":" in the operation. Starting with a "/" makes it an absolute address. [standalone at localhost:9990 /] cd socket-binding-group=standard-sockets/socket-binding=http [standalone at localhost:9990 socket-binding=http] /:resolve-expression(expression=${foo:bar}) { "outcome" => "success", "result" => "bar" } I'm not saying the above makes this easy to use for the general user; it's just an FYI about how to invoke low level ops. 
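Brian's `${foo:bar}` example shows the `${property:default}` expression form: resolve the named property, falling back to the default after the colon. Purely to illustrate those semantics — this is not WildFly code; it resolves against environment variables rather than JVM system properties, and assumes the colon form with a shell-safe name (real property names like `jboss.http.port` contain dots and would need a different lookup):

```shell
# Toy resolver for ${NAME:default} expressions, using environment variables
# as a stand-in for JVM system properties.
resolve_expression() {
  body=${1#??}              # drop the leading '${'
  body=${body%?}            # drop the trailing '}'
  name=${body%%:*}          # property name before the colon
  default=${body#*:}        # default value after the colon
  eval "val=\${$name:-}"    # look the name up in the environment
  if [ -n "$val" ]; then printf '%s\n' "$val"; else printf '%s\n' "$default"; fi
}
# resolve_expression '${FOO:bar}' prints the value of $FOO, or "bar" if FOO is unset
```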
The thing that most makes resolve-expression an expert op is that the param is the expression itself, not an identifier for some resource/attribute whose value is the expression. (I'm not proposing changing that, BTW. Some other solution is likely better.) > I'm not sure about the presentation. The user has to deliberately request the argument/param, so replacing the value might be fine, but having both would probably be nicer. > > > > Ståle, Thanks for the heads up. That option activator sounds like exactly what I want to do with the CLI side command arguments. I'll try to get a bit more familiar with æsh in the next few days. > > Joe > > > ----- Original Message ----- >> hi, this might be side stepping the topic a bit, but atm im working >> on >> rewriting the cli to work with the latest version of æsh. >> the new api in æsh have capabilities like specifying: >> - an option activator (the option will not be shown during completion >> until activator is "validated") >> - option validator (will be processed after user have typed in a >> value) >> - option completer >> +++ >> the homepage is here; http://aeshell.github.io/ the docs are not >> quite >> up to date, but they should give an idea of how it works. >> >> the plan is to have designated completers for "browsing" the tree >> etc, >> activators, validators etc. so the commands should be fairly simple >> without much boilerplate. >> >> with this change it would also be easy to add custom commands to the >> cli >> if needed. >> >> ive just started and i cant dedicate 100% on this so i dont know when >> ill be done, the plan is to target wildfly9. - if aleksey and brian >> approves :) >> >> here is the branch im working against atm: >> >> https://github.com/stalep/wildfly/tree/aesh_upgrade_take_two >> >> ståle >> >> On 26.06.14 10:31, Brian Stansberry wrote: >>> Thanks, Joe, for looking into this. >>> >>> I'm curious what you've done so far with your 'ls >>> --resolve-expressions' >>> work.
Did you use the existing ':resolve-expression(expression=___)' >>> low >>> level operation to process any expressions found in the >>> :read-resource >>> response? >>> >>> There are a few aspects of this I'd like to explore. >>> >>> One is the UX one. Is allowing 'resolve-expressions' in some >>> contexts >>> and not others a good UX? Will users understand that? I'm ambivalent >>> about that, and am interested in others' opinions. >>> >>> If it can work for a server and for anything under /host=*, then I'm >>> ambivalent. Any restriction at all is unintuitive, but once the user >>> learns that there is a restriction, that's a pretty understandable >>> one. >>> If it only works for a patchwork of stuff under /host=* then I'm >>> real >>> negative about it. An area of concern is /host=*/server-config=*, >>> where >>> an expression might be irrelevant to the host, only resolving >>> correctly >>> on the server that is created using that server-config. That will >>> need >>> careful examination. >>> >>> A second one is how this data would be displayed with 'ls'. A >>> separate >>> additional column? Or replacing the current data? The answer to this >>> might impact how it would be implemented server side. >>> >>> The third aspect is the technical issue of how to make any >>> 'resolve-expressions' param or CLI argument available in certain >>> contexts and not in others. That's very likely solvable on the >>> server >>> side; not sure how difficult it would be in the CLI high-level >>> command. >>> >>> FYI, for others reading this, offline Joe pointed out there's a >>> related >>> JIRA for this: https://issues.jboss.org/browse/WFLY-1069. >>> >>> On 6/26/14, 5:41 AM, Edward Wertz wrote: >>>> I'm looking into whether it's possible to automatically resolve >>>> expressions when executing operations and commands in the CLI. >>>> >>>> >From my understanding, there are two variations of the problem. 
>>>> >>>> * Operations are server-side processes that are accessed via ':' >>>> in the CLI and, currently, the CLI presents the results >>>> returned as-is to the users. ex: ':read-resource' >>>> >>>> * Commands are processes that get manipulated by the CLI before >>>> getting presented to users. ex: 'ls' >>>> >>>> I've been experimenting with adding arguments to the CLI commands, >>>> like 'ls --resolve-expressions', and gotten it working for the >>>> standalone and domain side of things. However, I can't control >>>> the scope of the argument, so it's available in situations that >>>> cannot accurately resolve expressions like the 'profile=full' >>>> section of the domain tree. The results wouldn't be reliable. >>>> >>>> The same problem would apply to adding parameters to the >>>> server-side operations. The scope of the operations themselves >>>> can be controlled, but not their parameters. An execution like >>>> ':read-resource(recursive=true resolve-expressions=true)' can't >>>> resolve expressions unless it's used against an actual server or >>>> host, but the operation is available almost everywhere. Again, >>>> the results wouldn't be reliable. >>>> >>>> I'm wondering if anyone can suggest a way to attack this problem? >>>> There is already a ':resolve-expression(expression=___)' >>>> operation, so users can somewhat laboriously get the runtime >>>> values they want, but I can't figure out a way to integrate the >>>> values into the existing framework successfully. Other than >>>> creating entirely new operations and commands, like 'ls-resolve' >>>> and ':read-resource-resolve', which seems like an unsustainable >>>> way to solve the problem. 
>>>> >>>> Thanks, >>>> >>>> Joe Wertz >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> >>> >>> -- >>> Brian Stansberry >>> Senior Principal Software Engineer >>> JBoss by Red Hat >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From alexey.loubyansky at redhat.com Fri Jun 27 11:19:28 2014 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Fri, 27 Jun 2014 17:19:28 +0200 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <53AD8718.5060208@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> <20140626162448.GE4045@beistet> <24838160.7922.1403863823207.JavaMail.joe@dhcp-17-61.nay.redhat.com> <53AD8718.5060208@redhat.com> Message-ID: <53AD8B80.9030000@redhat.com> On 06/27/2014 05:00 PM, Brian Stansberry wrote: > On 6/27/14, 5:10 AM, Edward Wertz wrote: >> Brian, >> >> Yes, the work so far searches the result that the existing 'ls' command receives for expressions and, if found, creates a ':resolve-expression(expression=___)' operation to resolve each one. On the domain side I can add an 'address' node to that command and target it towards a specific server. >> >> The existing ':resolve-expression(expression=___)' operation is already limited within the UX. Actually, I found out today, to a very strict extent. Which could be part of the user's confusion right now. The operation is currently unavailable anywhere deeper than the server level. 
Users are making requests to have expressions resolved in other commands and operations, which would certainly be convenient, but the tree levels that they're giving as examples are all forbidden to the existing ':resolve-expression' operation. Which could be why they either don't know it exists, forget to use it, or even think it wouldn't work correctly. >> > > Yes, it's limited. IIRC that operation was largely written to support > our own tooling (e.g. CLI, web console) and we recognized that its use > was not user-friendly. If we start adding something for general, > non-expert use then the UX requirements are much higher. I think non-expert use is not a requirement for management operations. I think of operations as a low (protocol-like) level of doing things. The priority here is to provide comprehensive management capabilities. It's up to the tools to bring on the comfort. In the cli, this would mean developing commands that perform all the necessary low level operations for the user behind the scenes and displaying the results in a non-expert format. Alexey > >> For example, they execute ':read-resource' at 'socket-binding-group=full-sockets/socket-binding=remoting', see an expression, but would have to 'cd ../../../..' to get access to the ':resolve-expression' operation. Since it's restricted so much I imagine most people don't even know it exists. >> > > FYI, you don't have to cd to invoke an operation. You can provide an > absolute or relative address as part of any low level operation command > by providing the address before the ":" in the operation. Starting with > a "/" makes it an absolute address.
> > [standalone at localhost:9990 /] cd > socket-binding-group=standard-sockets/socket-binding=http > [standalone at localhost:9990 socket-binding=http] > /:resolve-expression(expression=${foo:bar}) > { > "outcome" => "success", > "result" => "bar" > } > > I'm not saying the above makes this easy to use for the general user; > it's just an FYI about how to invoke low level ops. The thing that most > makes resolve-expression an expert op is the param is the expression > itself, not an identifier for some resource/attribute whose value is the > expression. (I'm not proposing changing that, BTW. Some other solution > is likely better.) > >> I'm not sure about the presentation. The user has to deliberately request the argument/param, so replacing the value might be fine, but having both would probably be nicer. >> >> >> >> Ståle, Thanks for the heads up. That option activator sounds like exactly what I want to do with the CLI side command arguments. I'll try to get a bit more familiar with æsh in the next few days. >> >> Joe >> >> >> ----- Original Message ----- >>> hi, this might be side stepping the topic a bit, but atm im working >>> on >>> rewriting the cli to work with the latest version of æsh. >>> the new api in æsh have capabilities like specifying: >>> - an option activator (the option will not be shown during completion >>> until activator is "validated") >>> - option validator (will be processed after user have typed in a >>> value) >>> - option completer >>> +++ >>> the homepage is here; http://aeshell.github.io/ the docs are not >>> quite >>> up to date, but they should give an idea of how it works. >>> >>> the plan is to have designated completers for "browsing" the tree >>> etc, >>> activators, validators etc. so the commands should be fairly simple >>> without much boilerplate. >>> >>> with this change it would also be easy to add custom commands to the >>> cli >>> if needed.
>>> >>> ive just started and i cant dedicate 100% on this so i dont know when >>> ill be done, the plan is to target wildfly9. - if aleksey and brian >>> approves :) >>> >>> here is the branch im working against atm: >>> >>> https://github.com/stalep/wildfly/tree/aesh_upgrade_take_two >>> >>> ståle >>> >>> On 26.06.14 10:31, Brian Stansberry wrote: >>>> Thanks, Joe, for looking into this. >>>> >>>> I'm curious what you've done so far with your 'ls >>>> --resolve-expressions' >>>> work. Did you use the existing ':resolve-expression(expression=___)' >>>> low >>>> level operation to process any expressions found in the >>>> :read-resource >>>> response? >>>> >>>> There are a few aspects of this I'd like to explore. >>>> >>>> One is the UX one. Is allowing 'resolve-expressions' in some >>>> contexts >>>> and not others a good UX? Will users understand that? I'm ambivalent >>>> about that, and am interested in others' opinions. >>>> >>>> If it can work for a server and for anything under /host=*, then I'm >>>> ambivalent. Any restriction at all is unintuitive, but once the user >>>> learns that there is a restriction, that's a pretty understandable >>>> one. >>>> If it only works for a patchwork of stuff under /host=* then I'm >>>> real >>>> negative about it. An area of concern is /host=*/server-config=*, >>>> where >>>> an expression might be irrelevant to the host, only resolving >>>> correctly >>>> on the server that is created using that server-config. That will >>>> need >>>> careful examination. >>>> >>>> A second one is how this data would be displayed with 'ls'. A >>>> separate >>>> additional column? Or replacing the current data? The answer to this >>>> might impact how it would be implemented server side. >>>> >>>> The third aspect is the technical issue of how to make any >>>> 'resolve-expressions' param or CLI argument available in certain >>>> contexts and not in others.
That's very likely solvable on the >>>> server >>>> side; not sure how difficult it would be in the CLI high-level >>>> command. >>>> >>>> FYI, for others reading this, offline Joe pointed out there's a >>>> related >>>> JIRA for this: https://issues.jboss.org/browse/WFLY-1069. >>>> >>>> On 6/26/14, 5:41 AM, Edward Wertz wrote: >>>>> I'm looking into whether it's possible to automatically resolve >>>>> expressions when executing operations and commands in the CLI. >>>>> >>>>> >From my understanding, there are two variations of the problem. >>>>> >>>>> * Operations are server-side processes that are accessed via ':' >>>>> in the CLI and, currently, the CLI presents the results >>>>> returned as-is to the users. ex: ':read-resource' >>>>> >>>>> * Commands are processes that get manipulated by the CLI before >>>>> getting presented to users. ex: 'ls' >>>>> >>>>> I've been experimenting with adding arguments to the CLI commands, >>>>> like 'ls --resolve-expressions', and gotten it working for the >>>>> standalone and domain side of things. However, I can't control >>>>> the scope of the argument, so it's available in situations that >>>>> cannot accurately resolve expressions like the 'profile=full' >>>>> section of the domain tree. The results wouldn't be reliable. >>>>> >>>>> The same problem would apply to adding parameters to the >>>>> server-side operations. The scope of the operations themselves >>>>> can be controlled, but not their parameters. An execution like >>>>> ':read-resource(recursive=true resolve-expressions=true)' can't >>>>> resolve expressions unless it's used against an actual server or >>>>> host, but the operation is available almost everywhere. Again, >>>>> the results wouldn't be reliable. >>>>> >>>>> I'm wondering if anyone can suggest a way to attack this problem? 
>>>>> There is already a ':resolve-expression(expression=___)' >>>>> operation, so users can somewhat laboriously get the runtime >>>>> values they want, but I can't figure out a way to integrate the >>>>> values into the existing framework successfully. Other than >>>>> creating entirely new operations and commands, like 'ls-resolve' >>>>> and ':read-resource-resolve', which seems like an unsustainable >>>>> way to solve the problem. >>>>> >>>>> Thanks, >>>>> >>>>> Joe Wertz >>>>> _______________________________________________ >>>>> wildfly-dev mailing list >>>>> wildfly-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>>> >>>> >>>> >>>> -- >>>> Brian Stansberry >>>> Senior Principal Software Engineer >>>> JBoss by Red Hat >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> > > From alexey.loubyansky at redhat.com Fri Jun 27 11:26:53 2014 From: alexey.loubyansky at redhat.com (Alexey Loubyansky) Date: Fri, 27 Jun 2014 17:26:53 +0200 Subject: [wildfly-dev] Automatically resolving expressions in the CLI In-Reply-To: <53AD83A4.10004@redhat.com> References: <924955857.34071548.1403779269224.JavaMail.zimbra@redhat.com> <53AC3CBC.6050403@redhat.com> <53AD715C.8000505@redhat.com> <53AD83A4.10004@redhat.com> Message-ID: <53AD8D3D.1020301@redhat.com> On 06/27/2014 04:45 PM, Brian Stansberry wrote: > On 6/27/14, 8:27 AM, Alexey Loubyansky wrote: >> On 06/26/2014 05:31 PM, Brian Stansberry wrote: >>> Thanks, Joe, for looking into this. >>> >>> I'm curious what you've done so far with your 'ls --resolve-expressions' >>> work. 
Did you use the existing ':resolve-expression(expression=___)' low >>> level operation to process any expressions found in the :read-resource >>> response? >>> >>> There are a few aspects of this I'd like to explore. >>> >>> One is the UX one. Is allowing 'resolve-expressions' in some contexts >>> and not others a good UX? Will users understand that? I'm ambivalent >>> about that, and am interested in others' opinions. >>> >>> If it can work for a server and for anything under /host=*, then I'm >>> ambivalent. Any restriction at all is unintuitive, but once the user >>> learns that there is a restriction, that's a pretty understandable one. >>> If it only works for a patchwork of stuff under /host=* then I'm real >>> negative about it. An area of concern is /host=*/server-config=*, where >>> an expression might be irrelevant to the host, only resolving correctly >>> on the server that is created using that server-config. That will need >>> careful examination. >>> >>> A second one is how this data would be displayed with 'ls'. A separate >>> additional column? Or replacing the current data? The answer to this >>> might impact how it would be implemented server side. >> >> Keep in mind that ls is an example. There are other commands that will >> have to support this feature once it's implemented in one place. Another >> example is read-attribute command. The ability to resolve expressions >> elsewhere will be a natural expectation then. >> So, it has to be thought of as a general features that can be applied to >> various cli commands. >> > > Good point. Joe, we'd need a clear understanding of all the commands > that would be affected. At this point, it's ls, read-attribute and commands handled by GenericTypeOperationHandler (which means [xa-]data-source, jms-topic, -queue, -connection-factory, etc). The generic handler includes action read-resource (e.g. 
w/o other optional arguments 'data-source read-resource --name=ExampleDS'), which is basically a formatted result of :read-resource. In general, it could be applied to any command displaying an attribute value to the user. Alexey > >> IMO, the values returned should just be replaced with the resolved ones >> for display. Some commands support --verbose argument, in which case >> additional info is displayed in columns, there we could include the >> original value. >> The output of the cli commands in some cases is parsed by scripts or >> other code, so keeping it simple will help there too. >> >>> The third aspect is the technical issue of how to make any >>> 'resolve-expressions' param or CLI argument available in certain >>> contexts and not in others. That's very likely solvable on the server >>> side; not sure how difficult it would be in the CLI high-level command. >> >> Current tab-completion supports dependencies of command arguments and >> their values on the current context (connection to the controller, >> standalone/domain mode, the presence of other arguments on the line and >> the values specified for them, etc). Technically, there shouldn't be an >> issue. > > Ok, good. > >> I am more concerned about how intuitive that will look like for the user >> in various contexts. >> > > Yes, I think the UX aspects are the more significant ones. > >> Alexey >> >>> FYI, for others reading this, offline Joe pointed out there's a related >>> JIRA for this: https://issues.jboss.org/browse/WFLY-1069. >>> >>> On 6/26/14, 5:41 AM, Edward Wertz wrote: >>>> I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI. >>>> >>>> >From my understanding, there are two variations of the problem. >>>> >>>> * Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. 
ex: ':read-resource' >>>> >>>> * Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls' >>>> >>>> I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions like the 'profile=full' section of the domain tree. The results wouldn't be reliable. >>>> >>>> The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable. >>>> >>>> I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem. 
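Joe's 'ls --resolve-expressions' prototype, as described earlier in this thread, works by scanning the ':read-resource' result and issuing a ':resolve-expression' operation for each expression it finds. A hedged sketch of that client-side post-processing — `resolve` stands in for the CLI round-trip to ':resolve-expression', and all names here are illustrative:

```python
def resolve_tree(node, resolve):
    """Walk a :read-resource-style result; replace expression strings
    with resolved values. `resolve` is a stand-in for a server call."""
    if isinstance(node, dict):
        return {key: resolve_tree(value, resolve) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_tree(value, resolve) for value in node]
    if isinstance(node, str) and "${" in node:
        return resolve(node)  # one :resolve-expression call per expression found
    return node

read_resource_result = {
    "port": "${jboss.http.port:8080}",  # expression -> would be resolved
    "name": "http",                     # plain value -> left untouched
}
resolved = resolve_tree(read_resource_result, lambda expr: "8080")
print(resolved)  # {'port': '8080', 'name': 'http'}
```

The open question in the thread is not this mechanic but where such a `resolve` call is legal — it only gives reliable answers against an actual server or host, not e.g. under 'profile=full'.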
>>>> >>>> Thanks, >>>> >>>> Joe Wertz >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>>> >>> >>> >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> > > From stuart.w.douglas at gmail.com Fri Jun 27 12:19:00 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Fri, 27 Jun 2014 12:19:00 -0400 Subject: [wildfly-dev] Pending core split Message-ID: <53AD9974.8020108@gmail.com> Hi all, So I am moderately confident that we will be ready to split out Wildfly core into a separate repository early next week (I'm not saying that it will definitely happen in this time frame, just that it should be possible). Once this is ready to go I think the basic process will be: - Code freeze on Master - Create the core repo, push new rewritten core history - Release core 1.0.0.Beta1 - Create PR against core WF repo that deletes everything in core, and uses the core 1.0.0.Beta1 release - End of code freeze Stuart From anmiller at redhat.com Fri Jun 27 12:44:06 2014 From: anmiller at redhat.com (Andrig Miller) Date: Fri, 27 Jun 2014 12:44:06 -0400 (EDT) Subject: [wildfly-dev] CDI overhead In-Reply-To: <25460110.745.1403887227625.JavaMail.andrig@worklaptop.miller.org> Message-ID: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> I should have posted this some time ago, but just forgot. In my early testing of Wildfly 8, CDI adds quite a bit of overhead (12% reduction in throughput) for even an application that only uses servlets. The only way I could get that back was to remove the subsystem. In talking with Stuart at the time, he was looking at ways to make the overhead less. Is there anything on the docket for making this overhead go away for deployments that don't require CDI? 
If not, can we get something going in that direction. It would be great to not have to remove the CDI subsystem, but not have it impact performance for deployments that don't use it. Thanks. -- Andrig (Andy) Miller Global Platform Director for JBoss Middle-ware Red Hat, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/9dca6bd1/attachment.html From stuart.w.douglas at gmail.com Fri Jun 27 12:54:41 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Fri, 27 Jun 2014 12:54:41 -0400 Subject: [wildfly-dev] CDI overhead In-Reply-To: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53ADA1D1.5020603@gmail.com> You can remove it per-deployment using exclude-subsystem in jboss-deployment-structure.xml. Also there was a performance problem in the CDI proxies that was not fixed until 2.2. Stuart Andrig Miller wrote: > I should have posted this some time ago, but just forgot. > > In my early testing of Wildfly 8, CDI adds quite a bit of overhead (12% > reduction in throughput) for even an application that only uses > servlets. The only way I could get that back was to remove the > subsystem. In talking with Stuart at the time, he was looking at ways to > make the overhead less. > > Is there anything on the docket for making this overhead go away for > deployments that don't require CDI? If not, can we get something going > in that direction. It would be great to not have to remove the CDI > subsystem, but not have it impact performance for deployments that don't > use it. > > Thanks. > > -- > Andrig (Andy) Miller > Global Platform Director for JBoss Middle-ware > Red Hat, Inc. 
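The per-deployment exclusion Stuart mentions is a jboss-deployment-structure.xml descriptor packaged with the application. A minimal sketch, assuming the CDI subsystem's name in WildFly 8 is "weld" (check the subsystem names in your server's configuration):

```xml
<!-- META-INF/jboss-deployment-structure.xml (WEB-INF for a war).
     Excludes the Weld/CDI subsystem for this deployment only. -->
<jboss-deployment-structure>
    <deployment>
        <exclude-subsystems>
            <subsystem name="weld"/>
        </exclude-subsystems>
    </deployment>
</jboss-deployment-structure>
```

This avoids removing the subsystem from the server configuration, so other deployments on the same server can still use CDI.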
> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From tomaz.cerar at gmail.com Fri Jun 27 14:04:14 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Fri, 27 Jun 2014 20:04:14 +0200 Subject: [wildfly-dev] CDI overhead In-Reply-To: <53ADA1D1.5020603@gmail.com> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53ADA1D1.5020603@gmail.com> Message-ID: Andy, given that we have Weld 2.2 in the 8.x branch for the upcoming 8.2 release, and also in master, can you guys try a WildFly build that uses Weld 2.2, to see if that CDI proxies fix helped? -- tomaz On Fri, Jun 27, 2014 at 6:54 PM, Stuart Douglas wrote: > You can remove it per-deployment using exclude-subsystem in > jboss-deployment-structure.xml. > > Also there was a performance problem in the CDI proxies that was not > fixed until 2.2. > > Stuart > > > Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > > > In my early testing of Wildfly 8, CDI adds quite a bit of overhead (12% > > reduction in throughput) for even an application that only uses > > servlets. The only way I could get that back was to remove the > > subsystem. In talking with Stuart at the time, he was looking at ways to > > make the overhead less. > > > > Is there anything on the docket for making this overhead go away for > > deployments that don't require CDI? If not, can we get something going > > in that direction. It would be great to not have to remove the CDI > > subsystem, but not have it impact performance for deployments that don't > > use it. > > > > Thanks. > > > > -- > > Andrig (Andy) Miller > > Global Platform Director for JBoss Middle-ware > > Red Hat, Inc.
> > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/892287c4/attachment.html From anmiller at redhat.com Fri Jun 27 14:50:56 2014 From: anmiller at redhat.com (Andrig Miller) Date: Fri, 27 Jun 2014 14:50:56 -0400 (EDT) Subject: [wildfly-dev] CDI overhead In-Reply-To: <53ADA1D1.5020603@gmail.com> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53ADA1D1.5020603@gmail.com> Message-ID: <32528880.829.1403895050984.JavaMail.andrig@worklaptop.miller.org> Okay, very good. That's good to know. We will update our Wildfly deployment to do just that. Andy ----- Original Message ----- > From: "Stuart Douglas" > To: "Andrig Miller" > Cc: "wildfly-dev" > Sent: Friday, June 27, 2014 10:54:41 AM > Subject: Re: [wildfly-dev] CDI overhead > > You can remove it per-deployment using exclude-subsystem in > jboss-deployment-structure.xml. > > Also there was a performance problem in the CDI proxies that was not > fixed until 2.2. > > Stuart > > > Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > > > In my early testing of Wildfly 8, CDI adds quite a bit of overhead > > (12% > > reduction in throughput) for even an application that only uses > > servlets. The only way I could get that back was to remove the > > subsystem. In talking with Stuart at the time, he was looking at > > ways to > > make the overhead less. > > > > Is there anything on the docket for making this overhead go away > > for > > deployments that don't require CDI? 
If not, can we get something > > going > > in that direction. It would be great to not have to remove the CDI > > subsystem, but not have it impact performance for deployments that > > don't > > use it. > > > > Thanks. > > > > -- > > Andrig (Andy) Miller > > Global Platform Director for JBoss Middle-ware > > Red Hat, Inc. > > > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > From anmiller at redhat.com Fri Jun 27 14:51:16 2014 From: anmiller at redhat.com (Andrig Miller) Date: Fri, 27 Jun 2014 14:51:16 -0400 (EDT) Subject: [wildfly-dev] CDI overhead In-Reply-To: References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53ADA1D1.5020603@gmail.com> Message-ID: <5951675.833.1403895071908.JavaMail.andrig@worklaptop.miller.org> Yes, we will try that, thanks. Andy ----- Original Message ----- > From: "Tomaž Cerar" > To: "Stuart Douglas" > Cc: "Andrig Miller" , "wildfly-dev" > > Sent: Friday, June 27, 2014 12:04:14 PM > Subject: Re: [wildfly-dev] CDI overhead > Andy, > given that we have Weld 2.2 in 8.x branch for upcoming 8.2 release. > and also in master, can you guys try with wildfly build that uses > Weld 2.2, to see if that CDI proxies fix helped. > -- > tomaz > On Fri, Jun 27, 2014 at 6:54 PM, Stuart Douglas < > stuart.w.douglas at gmail.com > wrote: > > You can remove it per-deployment using exclude-subsystem in > > > jboss-deployment-structure.xml. > > > Also there was a performance problem in the CDI proxies that was > > not > > > fixed until 2.2. > > > Stuart > > > Andrig Miller wrote: > > > > I should have posted this some time ago, but just forgot. > > > > > > > > In my early testing of Wildfly 8, CDI adds quite a bit of > > > overhead > > > (12% > > > > reduction in throughput) for even an application that only uses > > > > servlets. The only way I could get that back was to remove the > > > > subsystem.
In talking with Stuart at the time, he was looking at > > > ways to > > > > make the overhead less. > > > > > > > > Is there anything on the docket for making this overhead go away > > > for > > > > deployments that don't require CDI? If not, can we get something > > > going > > > > in that direction. It would be great to not have to remove the > > > CDI > > > > subsystem, but not have it impact performance for deployments > > > that > > > don't > > > > use it. > > > > > > > > Thanks. > > > > > > > > -- > > > > Andrig (Andy) Miller > > > > Global Platform Director for JBoss Middle-ware > > > > Red Hat, Inc. > > > > > > > > _______________________________________________ > > > > wildfly-dev mailing list > > > > wildfly-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > _______________________________________________ > > > wildfly-dev mailing list > > > wildfly-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140627/8898a494/attachment.html From jharting at redhat.com Mon Jun 30 06:03:59 2014 From: jharting at redhat.com (Jozef Hartinger) Date: Mon, 30 Jun 2014 12:03:59 +0200 Subject: [wildfly-dev] CDI overhead In-Reply-To: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53B1360F.8000409@redhat.com> I guess that the Weld subsystem is enabled for your deployment because the deployment contains session beans. CDI is required to be enabled for such deployments since CDI 1.1 (even though CDI may not actually be used by your application). Alternatively to removing the Weld subsystems you can: 1) Suppress implicit bean archives - only archives with explicit beans.xml file will trigger CDI enablement. 
See http://weld.cdi-spec.org/documentation/#4 2) Enable CDI contexts for certain URL subset only: http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping Jozef On 06/27/2014 06:44 PM, Andrig Miller wrote: > I should have posted this some time ago, but just forgot. > > In my early testing of Wildfly 8, CDI adds quite a bit of overhead > (12% reduction in throughput) for even an application that only uses > servlets. The only way I could get that back was to remove the > subsystem. In talking with Stuart at the time, he was looking at ways > to make the overhead less. > > Is there anything on the docket for making this overhead go away for > deployments that don't require CDI? If not, can we get something > going in that direction. It would be great to not have to remove the > CDI subsystem, but not have it impact performance for deployments that > don't use it. > > Thanks. > > -- > Andrig (Andy) Miller > Global Platform Director for JBoss Middle-ware > Red Hat, Inc. > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -------------- next part -------------- An HTML attachment was scrubbed... 
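Jozef's option 1 (suppressing implicit bean archives, so only archives with an explicit beans.xml trigger CDI) can be configured on the Weld subsystem. A sketch under the assumption that the WildFly 8 weld subsystem schema exposes a `require-bean-descriptor` attribute — verify the attribute name and namespace version against your release's schema:

```xml
<!-- standalone.xml fragment (assumed attribute name): with
     require-bean-descriptor="true", only archives that ship an explicit
     beans.xml are treated as bean archives. -->
<subsystem xmlns="urn:jboss:domain:weld:2.0"
           require-bean-descriptor="true"/>
```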
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140630/34527842/attachment-0001.html From anmiller at redhat.com Mon Jun 30 10:53:59 2014 From: anmiller at redhat.com (Andrig Miller) Date: Mon, 30 Jun 2014 10:53:59 -0400 (EDT) Subject: [wildfly-dev] CDI overhead In-Reply-To: <53B1360F.8000409@redhat.com> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53B1360F.8000409@redhat.com> Message-ID: <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> ----- Original Message ----- > From: "Jozef Hartinger" > To: "Andrig Miller" , "wildfly-dev" > > Sent: Monday, June 30, 2014 4:03:59 AM > Subject: Re: [wildfly-dev] CDI overhead > I guess that the Weld subsystem is enabled for your deployment > because the deployment contains session beans. CDI is required to be > enabled for such deployments since CDI 1.1 (even though CDI may not > actually be used by your application). Why is it required, if it will never be used? Is that really what the spec says? If so, why in the world would we support that in the spec? That simply doesn't make any sense to me. Perhaps I'm missing something here. Andy > Alternatively to removing the Weld subsystems you can: > 1) Suppress implicit bean archives - only archives with explicit > beans.xml file will trigger CDI enablement. See > http://weld.cdi-spec.org/documentation/#4 > 2) Enable CDI contexts for certain URL subset only: > http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping > Jozef > On 06/27/2014 06:44 PM, Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > > In my early testing of Wildfly 8, CDI adds quite a bit of overhead > > (12% reduction in throughput) for even an application that only > > uses > > servlets. The only way I could get that back was to remove the > > subsystem. In talking with Stuart at the time, he was looking at > > ways to make the overhead less.
> > > Is there anything on the docket for making this overhead go away > > for > > deployments that don't require CDI? If not, can we get something > > going in that direction. It would be great to not have to remove > > the > > CDI subsystem, but not have it impact performance for deployments > > that don't use it. > > > Thanks. > > > -- > > > Andrig (Andy) Miller > > > Global Platform Director for JBoss Middle-ware > > > Red Hat, Inc. > > > _______________________________________________ > > > wildfly-dev mailing list wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140630/e1d3dcae/attachment.html From Anil.Saldhana at redhat.com Mon Jun 30 10:57:55 2014 From: Anil.Saldhana at redhat.com (Anil Saldhana) Date: Mon, 30 Jun 2014 09:57:55 -0500 Subject: [wildfly-dev] CDI overhead In-Reply-To: <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53B1360F.8000409@redhat.com> <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53B17AF3.9000306@redhat.com> On 06/30/2014 09:53 AM, Andrig Miller wrote: > > > ------------------------------------------------------------------------ > > *From: *"Jozef Hartinger" > *To: *"Andrig Miller" , "wildfly-dev" > > *Sent: *Monday, June 30, 2014 4:03:59 AM > *Subject: *Re: [wildfly-dev] CDI overhead > > I guess that the Weld subsystem is enabled for your deployment > because the deployment contains session beans. CDI is required to > be enabled for such deployments since CDI 1.1 (even though CDI may > not actually be used by your application). > > Why is it required, if it will never be used? Is that really what the > spec says? If so, why in the world would be support that in the > spec? That simply doesn't make any sense to me. 
Perhaps I'm missing > something here. > > Andy Can we not enable some flag at the deployment level to disable CDI scanning? > Alternatively to removing the Weld subsystems you can: > > 1) Suppress implicit bean archives - only archives with explicit > beans.xml file will trigger CDI enablement. See > http://weld.cdi-spec.org/documentation/#4 > > 2) Enable CDI contexts for certain URL subset only: > http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping > > Jozef > > On 06/27/2014 06:44 PM, Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > In my early testing of Wildfly 8, CDI adds quite a bit of > overhead (12% reduction in throughput) for even an application > that only uses servlets. The only way I could get that back > was to remove the subsystem. In talking with Stuart at the > time, he was looking at ways to make the overhead less. > > Is there anything on the docket for making this overhead go > away for deployments that don't require CDI? If not, can we > get something going in that direction. It would be great to > not have to remove the CDI subsystem, but not have it impact > performance for deployments that don't use it. > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140630/e00baf97/attachment.html From brian.stansberry at redhat.com Mon Jun 30 11:00:49 2014 From: brian.stansberry at redhat.com (Brian Stansberry) Date: Mon, 30 Jun 2014 10:00:49 -0500 Subject: [wildfly-dev] CDI overhead In-Reply-To: <53B17AF3.9000306@redhat.com> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53B1360F.8000409@redhat.com> <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> <53B17AF3.9000306@redhat.com> Message-ID: <53B17BA1.4030208@redhat.com> On 6/30/14, 9:57 AM, Anil Saldhana wrote: > On 06/30/2014 09:53 AM, Andrig Miller wrote: >> >> >> ------------------------------------------------------------------------ >> >> *From: *"Jozef Hartinger" >> *To: *"Andrig Miller" , "wildfly-dev" >> >> *Sent: *Monday, June 30, 2014 4:03:59 AM >> *Subject: *Re: [wildfly-dev] CDI overhead >> >> I guess that the Weld subsystem is enabled for your deployment >> because the deployment contains session beans. CDI is required to >> be enabled for such deployments since CDI 1.1 (even though CDI may >> not actually be used by your application). >> >> Why is it required, if it will never be used? Is that really what the >> spec says? If so, why in the world would be support that in the >> spec? That simply doesn't make any sense to me. Perhaps I'm missing >> something here. >> >> Andy > > Can we not enable some flag at the deployment level to disable CDI scanning? > That's what Stuart's suggestion does: "You can remove it per-deployment using exclude-subsystem in jboss-deployment-structure.xml." >> Alternatively to removing the Weld subsystems you can: >> >> 1) Suppress implicit bean archives - only archives with explicit >> beans.xml file will trigger CDI enablement. 
See >> http://weld.cdi-spec.org/documentation/#4 >> >> 2) Enable CDI contexts for certain URL subset only: >> http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping >> >> Jozef >> >> On 06/27/2014 06:44 PM, Andrig Miller wrote: >> >> I should have posted this some time ago, but just forgot. >> >> In my early testing of Wildfly 8, CDI adds quite a bit of >> overhead (12% reduction in throughput) for even an application >> that only uses servlets. The only way I could get that back >> was to remove the subsystem. In talking with Stuart at the >> time, he was looking at ways to make the overhead less. >> >> Is there anything on the docket for making this overhead go >> away for deployments that don't require CDI? If not, can we >> get something going in that direction. It would be great to >> not have to remove the CDI subsystem, but not have it impact >> performance for deployments that don't use it. >> >> Thanks. >> > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- Brian Stansberry Senior Principal Software Engineer JBoss by Red Hat From stuart.w.douglas at gmail.com Mon Jun 30 11:01:26 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 30 Jun 2014 11:01:26 -0400 Subject: [wildfly-dev] CDI overhead In-Reply-To: <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53B1360F.8000409@redhat.com> <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53B17BC6.2030303@gmail.com> Basically the intention is that every EE component can just use @Inject. Before EE7 there was not really any standard generally available injection mechanism, as CDI was not always present. 
Unfortunately because of the way the spec works it is not really possible to just look for CDI annotations and make a decision on whether to enable it or not, as in theory a deployment with no annotations could still look up and use the bean manager from JNDI. Hopefully the performance impact in 8.1 should be much less noticeable. Stuart Andrig Miller wrote: > > > ------------------------------------------------------------------------ > > *From: *"Jozef Hartinger" > *To: *"Andrig Miller" , "wildfly-dev" > > *Sent: *Monday, June 30, 2014 4:03:59 AM > *Subject: *Re: [wildfly-dev] CDI overhead > > I guess that the Weld subsystem is enabled for your deployment > because the deployment contains session beans. CDI is required to be > enabled for such deployments since CDI 1.1 (even though CDI may not > actually be used by your application). > > Why is it required, if it will never be used? Is that really what the > spec says? If so, why in the world would be support that in the spec? > That simply doesn't make any sense to me. Perhaps I'm missing something > here. > > Andy > > Alternatively to removing the Weld subsystems you can: > > 1) Suppress implicit bean archives - only archives with explicit > beans.xml file will trigger CDI enablement. See > http://weld.cdi-spec.org/documentation/#4 > > 2) Enable CDI contexts for certain URL subset only: > http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping > > Jozef > > On 06/27/2014 06:44 PM, Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > In my early testing of Wildfly 8, CDI adds quite a bit of > overhead (12% reduction in throughput) for even an application > that only uses servlets. The only way I could get that back was > to remove the subsystem. In talking with Stuart at the time, he > was looking at ways to make the overhead less. > > Is there anything on the docket for making this overhead go away > for deployments that don't require CDI?
If not, can we get > something going in that direction. It would be great to not have > to remove the CDI subsystem, but not have it impact performance > for deployments that don't use it. > > Thanks. > > -- > Andrig (Andy) Miller > Global Platform Director for JBoss Middle-ware > Red Hat, Inc. > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jharting at redhat.com Mon Jun 30 11:04:54 2014 From: jharting at redhat.com (Jozef Hartinger) Date: Mon, 30 Jun 2014 17:04:54 +0200 Subject: [wildfly-dev] CDI overhead In-Reply-To: <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> References: <25639789.755.1403887442476.JavaMail.andrig@worklaptop.miller.org> <53B1360F.8000409@redhat.com> <6704787.295.1404140038273.JavaMail.andrig@worklaptop.miller.org> Message-ID: <53B17C96.8020201@redhat.com> The spec says: "An implicit bean archive is any other archive which contains one or more bean classes with a bean defining annotation as defined in Section 2.5.1, "Bean defining annotations", or one or more session beans." This makes CDI enabled by default in EE 7. From the point of view of legacy applications it may make sense to only enable CDI when a session bean actually needs it; however, there is no sane way of figuring this out. Jozef On 06/30/2014 04:53 PM, Andrig Miller wrote: > > > ------------------------------------------------------------------------ > > *From: *"Jozef Hartinger" > *To: *"Andrig Miller" , "wildfly-dev" > > *Sent: *Monday, June 30, 2014 4:03:59 AM > *Subject: *Re: [wildfly-dev] CDI overhead > > I guess that the Weld subsystem is enabled for your deployment > because the deployment contains session beans.
CDI is required to > be enabled for such deployments since CDI 1.1 (even though CDI may > not actually be used by your application). > > Why is it required, if it will never be used? Is that really what the > spec says? If so, why in the world would be support that in the > spec? That simply doesn't make any sense to me. Perhaps I'm missing > something here. > > Andy > > Alternatively to removing the Weld subsystems you can: > > 1) Suppress implicit bean archives - only archives with explicit > beans.xml file will trigger CDI enablement. See > http://weld.cdi-spec.org/documentation/#4 > > 2) Enable CDI contexts for certain URL subset only: > http://docs.jboss.org/weld/reference/2.2.2.Final/en-US/html/configure.html#context.mapping > > Jozef > > On 06/27/2014 06:44 PM, Andrig Miller wrote: > > I should have posted this some time ago, but just forgot. > > In my early testing of Wildfly 8, CDI adds quite a bit of > overhead (12% reduction in throughput) for even an application > that only uses servlets. The only way I could get that back > was to remove the subsystem. In talking with Stuart at the > time, he was looking at ways to make the overhead less. > > Is there anything on the docket for making this overhead go > away for deployments that don't require CDI? If not, can we > get something going in that direction. It would be great to > not have to remove the CDI subsystem, but not have it impact > performance for deployments that don't use it. > > Thanks. > > -- > Andrig (Andy) Miller > Global Platform Director for JBoss Middle-ware > Red Hat, Inc. > > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
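For reference, the per-deployment exclusion mentioned earlier in this thread ("You can remove it per-deployment using exclude-subsystem in jboss-deployment-structure.xml") might look roughly like the following sketch against the WildFly 8 deployment-structure schema:

```xml
<!-- META-INF/jboss-deployment-structure.xml (WEB-INF/ for a war).
     Excludes the Weld subsystem for this one deployment only, so
     CDI processing is skipped without touching the server config. -->
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
    <deployment>
        <exclude-subsystems>
            <subsystem name="weld"/>
        </exclude-subsystems>
    </deployment>
</jboss-deployment-structure>
```

Unlike removing the subsystem from standalone.xml, this leaves CDI available to every other deployment on the server.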
URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140630/9677048d/attachment-0001.html From ssilvert at redhat.com Mon Jun 30 17:47:22 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Mon, 30 Jun 2014 17:47:22 -0400 Subject: [wildfly-dev] Pending core split In-Reply-To: <53AD9974.8020108@gmail.com> References: <53AD9974.8020108@gmail.com> Message-ID: <53B1DAEA.5090305@redhat.com> I'm starting to have doubts about this split. Right now I'm trying to integrate the Keycloak (client-side) adapter into build-core so that the web console can use Keycloak for authentication. The problem is that there is a huge web of dependencies that must be moved over from build to build-core. What exactly is the split trying to solve? Stan On 6/27/2014 12:19 PM, Stuart Douglas wrote: > Hi all, > > So I am moderately confident that we will be ready to split out Wildfly > core into a separate repository early next week (I'm not saying that it > will definitely happen in this time frame, just that it should be possible). 
> > Once this is ready to go I think the basic process will be: > > - Code freeze on Master > - Create the core repo, push new rewritten core history > - Release core 1.0.0.Beta1 > - Create PR against core WF repo that deletes everything in core, and > uses the core 1.0.0.Beta1 release > - End of code freeze > > Stuart > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From jason.greene at redhat.com Mon Jun 30 18:03:14 2014 From: jason.greene at redhat.com (Jason Greene) Date: Mon, 30 Jun 2014 17:03:14 -0500 Subject: [wildfly-dev] Pending core split In-Reply-To: <53B1DAEA.5090305@redhat.com> References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> Message-ID: <939FF6CA-DA9E-456A-9C28-F56428EF181A@redhat.com> The point of the core platform is to evolve the base manageable runtime independent of the traditional application server. This allows for many different frameworks and server runtimes to be based on WildFly. It's a very real request we have had for a long time. The core platform has had a goal from the very beginning to have minimal deps, so if you find yourself adding deps, then it either needs an alternative solution, or it doesn't belong in core. On Jun 30, 2014, at 4:47 PM, Stan Silvert wrote: > I'm starting to have doubts about this split. > > Right now I'm trying to integrate the Keycloak (client-side) adapter > into build-core so that the web console can use Keycloak for > authentication. The problem is that there is a huge web of dependencies > that must be moved over from build to build-core. > > What exactly is the split trying to solve?
> > Stan > > On 6/27/2014 12:19 PM, Stuart Douglas wrote: >> Hi all, >> >> So I am moderately confident that we will be ready to split out Wildfly >> core into a separate repository early next week (I'm not saying that it >> will definitely happen in this time frame, just that it should be possible). >> >> Once this is ready to go I think the basic process will be: >> >> - Code freeze on Master >> - Create the core repo, push new rewritten core history >> - Release core 1.0.0.Beta1 >> - Create PR against core WF repo that deletes everything in core, and >> uses the core 1.0.0.Beta1 release >> - End of code freeze >> >> Stuart >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev -- Jason T. Greene WildFly Lead / JBoss EAP Platform Architect JBoss, a division of Red Hat From tomaz.cerar at gmail.com Mon Jun 30 18:15:33 2014 From: tomaz.cerar at gmail.com (=?UTF-8?B?VG9tYcW+IENlcmFy?=) Date: Tue, 1 Jul 2014 00:15:33 +0200 Subject: [wildfly-dev] Pending core split In-Reply-To: <53B1DAEA.5090305@redhat.com> References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> Message-ID: Stan, what problems do you have? Last time I was looking into your integration code, you didn't have any deps that didn't exist in core. -- tomaz On Mon, Jun 30, 2014 at 11:47 PM, Stan Silvert wrote: > I'm starting to have doubts about this split. > > Right now I'm trying to integrate the Keycloak (client-side) adapter > into build-core so that the web console can use Keycloak for > authentication. The problem is that there is a huge web of dependencies > that must be moved over from build to build-core. > > What exactly is the split trying to solve? 
> > Stan > > On 6/27/2014 12:19 PM, Stuart Douglas wrote: > > Hi all, > > > > So I am moderately confident that we will be ready to split out Wildfly > > core into a separate repository early next week (I'm not saying that it > > will definitely happen in this time frame, just that it should be > possible). > > > > Once this is ready to go I think the basic process will be: > > > > - Code freeze on Master > > - Create the core repo, push new rewritten core history > > - Release core 1.0.0.Beta1 > > - Create PR against core WF repo that deletes everything in core, and > > uses the core 1.0.0.Beta1 release > > - End of code freeze > > > > Stuart > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140701/8b9d4d99/attachment.html From ssilvert at redhat.com Mon Jun 30 19:27:46 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Mon, 30 Jun 2014 19:27:46 -0400 Subject: [wildfly-dev] Pending core split In-Reply-To: <939FF6CA-DA9E-456A-9C28-F56428EF181A@redhat.com> References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> <939FF6CA-DA9E-456A-9C28-F56428EF181A@redhat.com> Message-ID: <53B1F272.6010100@redhat.com> On 6/30/2014 6:03 PM, Jason Greene wrote: > The point of the core platform is to evolve the base manageable runtime independent of the traditional application server. This allows for many different frameworks and server runtimes to be based on WildFly. It?s a very real request we have had for a long time. 
> > The core platform has had a goal from the very beginning to have minimal deps, so if you find yourself adding deps, then it either needs an alternative solution, or it doesn?t belong in core. I spent most of last week trying out alternatives and teasing out dependencies. No joy so far. What if we moved domain-http into web-build? That would mean that the web console would not be available in core-build, but I'm not sure it makes sense to be there anyway. Does web console even run against core-build? > > On Jun 30, 2014, at 4:47 PM, Stan Silvert wrote: > >> I'm starting to have doubts about this split. >> >> Right now I'm trying to integrate the Keycloak (client-side) adapter >> into build-core so that the web console can use Keycloak for >> authentication. The problem is that there is a huge web of dependencies >> that must be moved over from build to build-core. >> >> What exactly is the split trying to solve? >> >> Stan >> >> On 6/27/2014 12:19 PM, Stuart Douglas wrote: >>> Hi all, >>> >>> So I am moderately confident that we will be ready to split out Wildfly >>> core into a separate repository early next week (I'm not saying that it >>> will definitely happen in this time frame, just that it should be possible). >>> >>> Once this is ready to go I think the basic process will be: >>> >>> - Code freeze on Master >>> - Create the core repo, push new rewritten core history >>> - Release core 1.0.0.Beta1 >>> - Create PR against core WF repo that deletes everything in core, and >>> uses the core 1.0.0.Beta1 release >>> - End of code freeze >>> >>> Stuart >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > -- > Jason T. 
Greene > WildFly Lead / JBoss EAP Platform Architect > JBoss, a division of Red Hat > From ssilvert at redhat.com Mon Jun 30 19:28:00 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Mon, 30 Jun 2014 19:28:00 -0400 Subject: [wildfly-dev] Pending core split In-Reply-To: References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> Message-ID: <53B1F280.4050505@redhat.com> On 6/30/2014 6:15 PM, Tomaž Cerar wrote: > Stan, > > what problems do you have? > Last time I was looking into your integration code, you didn't have > any deps that didn't exist in core. The Keycloak adapter relies on jackson, RestEasy, iharder, httpcomponents, and undertow-servlet. I was able to get rid of undertow-servlet, but the others end up pulling in a ton of stuff. > > -- > tomaz > > > On Mon, Jun 30, 2014 at 11:47 PM, Stan Silvert > wrote: > > I'm starting to have doubts about this split. > > Right now I'm trying to integrate the Keycloak (client-side) adapter > into build-core so that the web console can use Keycloak for > authentication. The problem is that there is a huge web of > dependencies > that must be moved over from build to build-core. > > What exactly is the split trying to solve? > > Stan > > On 6/27/2014 12:19 PM, Stuart Douglas wrote: > > Hi all, > > > > So I am moderately confident that we will be ready to split out > Wildfly > > core into a separate repository early next week (I'm not saying > that it > > will definitely happen in this time frame, just that it should > be possible).
> > > > Once this is ready to go I think the basic process will be: > > > > - Code freeze on Master > > - Create the core repo, push new rewritten core history > > - Release core 1.0.0.Beta1 > > - Create PR against core WF repo that deletes everything in > core, and > > uses the core 1.0.0.Beta1 release > > - End of code freeze > > > > Stuart > > _______________________________________________ > > wildfly-dev mailing list > > wildfly-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/wildfly-dev/attachments/20140630/493e6ca6/attachment.html From ssilvert at redhat.com Mon Jun 30 22:39:28 2014 From: ssilvert at redhat.com (Stan Silvert) Date: Mon, 30 Jun 2014 22:39:28 -0400 Subject: [wildfly-dev] Pending core split In-Reply-To: <53B1F272.6010100@redhat.com> References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> <939FF6CA-DA9E-456A-9C28-F56428EF181A@redhat.com> <53B1F272.6010100@redhat.com> Message-ID: <53B21F60.9030002@redhat.com> On 6/30/2014 7:27 PM, Stan Silvert wrote: > On 6/30/2014 6:03 PM, Jason Greene wrote: >> The point of the core platform is to evolve the base manageable runtime independent of the traditional application server. This allows for many different frameworks and server runtimes to be based on WildFly. It?s a very real request we have had for a long time. >> >> The core platform has had a goal from the very beginning to have minimal deps, so if you find yourself adding deps, then it either needs an alternative solution, or it doesn?t belong in core. > I spent most of last week trying out alternatives and teasing out > dependencies. No joy so far. > > What if we moved domain-http into web-build? 
That would mean that the > web console would not be available in core-build, but I'm not sure it > makes sense to be there anyway. Does web console even run against > core-build? Still not sure about web console, but the add-user tool is broken. The module for it needs to be moved from build to core-build. > >> On Jun 30, 2014, at 4:47 PM, Stan Silvert wrote: >> >>> I'm starting to have doubts about this split. >>> >>> Right now I'm trying to integrate the Keycloak (client-side) adapter >>> into build-core so that the web console can use Keycloak for >>> authentication. The problem is that there is a huge web of dependencies >>> that must be moved over from build to build-core. >>> >>> What exactly is the split trying to solve? >>> >>> Stan >>> >>> On 6/27/2014 12:19 PM, Stuart Douglas wrote: >>>> Hi all, >>>> >>>> So I am moderately confident that we will be ready to split out Wildfly >>>> core into a separate repository early next week (I'm not saying that it >>>> will definitely happen in this time frame, just that it should be possible). >>>> >>>> Once this is ready to go I think the basic process will be: >>>> >>>> - Code freeze on Master >>>> - Create the core repo, push new rewritten core history >>>> - Release core 1.0.0.Beta1 >>>> - Create PR against core WF repo that deletes everything in core, and >>>> uses the core 1.0.0.Beta1 release >>>> - End of code freeze >>>> >>>> Stuart >>>> _______________________________________________ >>>> wildfly-dev mailing list >>>> wildfly-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >>> _______________________________________________ >>> wildfly-dev mailing list >>> wildfly-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/wildfly-dev >> -- >> Jason T. 
Greene >> WildFly Lead / JBoss EAP Platform Architect >> JBoss, a division of Red Hat >> > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev From stuart.w.douglas at gmail.com Mon Jun 30 22:43:14 2014 From: stuart.w.douglas at gmail.com (Stuart Douglas) Date: Mon, 30 Jun 2014 22:43:14 -0400 Subject: [wildfly-dev] Pending core split In-Reply-To: <53B1DAEA.5090305@redhat.com> References: <53AD9974.8020108@gmail.com> <53B1DAEA.5090305@redhat.com> Message-ID: <53B22042.5010508@gmail.com> It really sounds like this should not be part of core, but should be something extra that just integrates with the core. In all honesty we are highly unlikely to ever have accepted a PR that added all these dependencies to the core in any case, so it is a problem that would have had to be solved at some point anyway. Stuart Stan Silvert wrote: > I'm starting to have doubts about this split. > > Right now I'm trying to integrate the Keycloak (client-side) adapter > into build-core so that the web console can use Keycloak for > authentication. The problem is that there is a huge web of dependencies > that must be moved over from build to build-core. > > What exactly is the split trying to solve? > > Stan > > On 6/27/2014 12:19 PM, Stuart Douglas wrote: >> Hi all, >> >> So I am moderately confident that we will be ready to split out Wildfly >> core into a separate repository early next week (I'm not saying that it >> will definitely happen in this time frame, just that it should be possible). 
>> >> Once this is ready to go I think the basic process will be: >> >> - Code freeze on Master >> - Create the core repo, push new rewritten core history >> - Release core 1.0.0.Beta1 >> - Create PR against core WF repo that deletes everything in core, and >> uses the core 1.0.0.Beta1 release >> - End of code freeze >> >> Stuart >> _______________________________________________ >> wildfly-dev mailing list >> wildfly-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/wildfly-dev > > _______________________________________________ > wildfly-dev mailing list > wildfly-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/wildfly-dev