During development of WF11 we have done lots of work on making it build and
run on JDK9.
As the release nears, I would like to summarize the current state and
how to move forward.
Currently most of our core  & full  testsuite passes on the latest builds.
The remaining failures are already addressed by  and 
**But** passing the testsuite on JDK9 is not the same as using our binary
distribution under JDK9.
Currently, as part of running the build / testsuite, we override the version of
javassist to 3.22.0-CR2,
which is currently the only version that works properly on JDK9.
As there is no .GA version of javassist available that works on JDK9, we
currently do not make it the default.
On top of that, Hibernate, as the main user of javassist, is not tested enough
with this version of javassist,
unless the Hibernate / JPA team says otherwise.
That would in practice mean that users running WF11 on JDK9 could have
issues with JPA/Hibernate.
Currently I see two options for how to address this:
- upgrade javassist to 3.22.x in the server, preferably asking for a .GA release.
- produce an additional WildFly.x.x.x-jdk9 zip that would include the newer javassist.
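For the first option, the override we already use during build/testsuite runs could be expressed along these lines (the property name `version.org.javassist` is a guess here, not taken from the actual pom):

```shell
# Hypothetical property name; check how the WildFly pom actually
# declares the javassist version before relying on this.
mvn clean install -Dversion.org.javassist=3.22.0-CR2
```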
So the question is: do we even want a working JDK9 build of WildFly 11,
or should we postpone this to the next update release?
I've converted the Confluence docs to asciidoc ones that will be part of 
Take a look at them and let me know if there are any big issues.
As most of you already know, I was working on moving our Confluence-based
documentation to an asciidoc-based one.
The result can be seen at  or rendered to HTML at 
A good side effect of conversion is that now docs are also browsable
directly on GitHub.
For example  or 
Currently I kept the same structure as we had in Confluence, which in practice means
we have a set of "guides" that then have lots of sub-pages / includes that
produce "big" guides.
Currently such guides are:
- Admin Guide
- Developer Guide
- Getting started guide
- Getting Started Developing Applications Guide
- High Availability Guide
- Extending WildFly guide
- JavaEE 7(6 actually) Tutorial
- Elytron security guide
The problem is that some of these guides make sense as such, but not all of them do.
In some cases we have duplicated docs for the same thing; in others the content is
in the wrong segment.
For example, instead of having all subsystem reference docs under the Admin Guide,
some are under the Developer Guide and some even under the HA Guide.
Going forward we should "refactor" the docs a bit, so we end up with 3-4
high-quality guides.
We should also go through all docs and remove/update the outdated content.
The plan is also to make the documentation part of the WildFly codebase,
so when we submit a PR with a new feature, it would also include
documentation for it.
Rendered docs can be built as part of our build / release process and can
be rendered to different formats;
for example the default is HTML  or PDF 
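For anyone wanting to render the converted docs locally, the usual asciidoctor invocation would look something like this (file names are illustrative; the real build wires this through the Maven plugin instead):

```shell
gem install asciidoctor asciidoctor-pdf   # one-time setup
asciidoctor Admin_Guide.adoc -o Admin_Guide.html
asciidoctor-pdf Admin_Guide.adoc -o Admin_Guide.pdf
```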
I've sent an experimental PR to show how docs would fit into the WildFly build 
Please take a look at the current docs, and if you have any comments / suggestions
on what we can improve before merging, let me know.
At this point I've not made many content-wise changes, just the conversion +
Content updates can come after this is merged.
I've been experimenting with Alexey on updating a customized provisioned server using the provisioning tool .
I'm using the syncing operations  that I created a while back by porting the domain synchronization operations to standalone (to
synchronize standalone instances in a cloud environment).
I'm looking for some feedback on this approach.
Updating is the process of applying a fix pack that increments the micro version; there should be no compatibility issues. Upgrading is
the transition to a new minor version; compatibility should be preserved, but there are a lot more changes.
While the mechanisms discussed here are general, they might need more refinement for an upgrade.
The use case is quite simple: *I have version 1.0.0 installed and I want to update to 1.0.1, but I have locally customized my server and I'd
like to keep those changes*.
We have several local elements to take into account:
filesystem changes (files added, modified or deleted).
The basic idea is to diff the existing instance against a pristine new installation of the same version, then apply those changes to a newly
provisioned instance of the new version for staging.
We can keep it at the basic filesystem level with some simple merge strategy (theirs, ours).
We can use the plugin to go into more detail. For example, using the model diff between standalone WildFly instances we can create a CLI
script to reconfigure the upgraded instance in a post-installation step.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildf...>
Diffing the filesystem
The idea is to compare the instance to be upgraded with one instance provisioned with the same feature packs as the one we want to upgrade.
The plugin will provide a list of files or regexp to be excluded.
Each file will be hashed, and we compare the hash + the relative path to deduce deleted, modified or added files.
For textual files we can provide a diff (and the means to apply it), but maybe that should be left for a later version, as some kind of
interaction with the user might be required.
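As a rough sketch of the hash + relative path comparison (the directory names and the tmp/log exclusions here are illustrative, not the plugin's actual filters):

```shell
# Minimal sketch: hash every file in two trees, then classify
# locally added / deleted / modified files by path and hash.
set -e
mkdir -p demo/current demo/pristine
echo unchanged > demo/current/a.txt; echo unchanged > demo/pristine/a.txt
echo edited    > demo/current/b.txt; echo original  > demo/pristine/b.txt
echo new       > demo/current/c.txt   # added locally
echo gone      > demo/pristine/d.txt  # deleted locally

# Hash every file (content hash + relative path), sorted by path.
hash_tree() { (cd "$1" && find . -type f ! -path './tmp/*' ! -path './log/*' \
    -exec sha256sum {} \; | sort -k2); }
hash_tree demo/current  > current.hashes
hash_tree demo/pristine > pristine.hashes

# Paths present on one side only -> locally added / deleted files.
added=$(comm -23 <(awk '{print $2}' current.hashes) <(awk '{print $2}' pristine.hashes))
deleted=$(comm -13 <(awk '{print $2}' current.hashes) <(awk '{print $2}' pristine.hashes))
# Same path but different hash -> locally modified files.
modified=$(join -j 2 current.hashes pristine.hashes | awk '$2 != $3 {print $1}')

echo "added: $added  deleted: $deleted  modified: $modified"
rm -rf demo current.hashes pristine.hashes
```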
This is a specialization of the upgrading algorithm:
- Filtering out some of the 'unimportant' files (tmp, logs).
- Creating diffs of textual files (for example the realm properties) which will be applied (merging strategy à la git).
- Using an embedded standalone, it creates a jboss-cli script to reconfigure the server (adding/removing extensions and reconfiguring
- Deleting files that were removed.
This is done on a staging upgraded instance before it is copied over the old instance.
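The generated jboss-cli reconfiguration script could look something like this (purely hypothetical content; the real script is derived from the model diff):

```shell
# Hypothetical jboss-cli script produced by the model diff step.
embed-server --server-config=standalone.xml
# extension/subsystem present in the new config but missing locally:
/extension=org.wildfly.extension.messaging-activemq:add
/subsystem=messaging-activemq:add
/subsystem=messaging-activemq/server=default:add
stop-embedded-server
```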
I have added a diff/sync operation in standalone that is quite similar to what happens when a slave HC connects to the DC. Thus I start the
current installation, connect to it from an embedded server using the initial configuration, and diff the models.
This is 'experimental' but it works nicely (I was able to 'upgrade' from the standalone.xml of wildfly-core to the standalone-full.xml of
I'm talking only about the model part. I leave the files to the filesystem 'diffing', but it will work with managed deployments, as those
are added by the filesystem part and then the deployment reference is added in the model.
For a future version of the tooling/plugin we might look for a way to interact more with the user (for example, when applying the textual
diffs, to choose what to do per file instead of globally).
Also, the filters for excluding files are currently defined by the plugin, but we could enrich them from the tooling as well.
Update feature pack
From the initial upgrade mechanism, Alexey has seen the potential to create a feature pack instead of my initial format.
Currently I'm able to create and install a feature pack that will supersede the initial installation with its own local modifications.
Thus from my customized instance I can produce a feature pack that allows me to reinstall the same instance. Maybe this can also be used
to produce an upgrade feature pack for patching.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildf...>
WildFly domain mode
Domain mode is a bit more complex, and we need to think about how to manage the model changes.
Those can be at the domain level or the host level, and depending on the target instance we would need to get the changes from the domain.xml
and/or the host.xml.
I'm thinking about applying the same strategy as was done for standalone: i.e. expose the sync operations to an embedded HC.
We had some discussion recently about improving the release process & development of WildFly after the release of WildFly 11.
One of the proposals was to avoid building and testing code that does not change often every time WildFly is built.
A good candidate for that kind of code is the legacy extensions.
A legacy extension is a WildFly extension that is no longer usable (it has no runtime) but still provides a management model that can in some cases be migrated to a newer extension:
* web > undertow
* messaging > messaging-activemq
* jacorb > iiop-openjdk
Other legacy extensions are not migrated (cmp, jaxr and configadmin).
(jacorb legacy extension is a bit different as it is still usable by leveraging the runtime of iiop-openjdk aiui).
The legacy extensions are frozen. Their management model is frozen, and they only require changes when the corresponding new extensions have changes that need to be taken into account during migration.
But every time we build and release WildFly, we have to compile and test that unchanged code.
I started a proof of concept that provides a feature pack (wildfly-legacy-feature-pack)
that contains legacy extensions so they can be removed from WildFly codebase:
These legacy extensions are provided by the wildfly-legacy-feature-pack that contains everything to install them in WildFly (module definitions and jars).
WildFly's own feature pack then depends on it, so that the actual distribution of WildFly is no different from the current one.
But a lot of code can be removed from the WildFly codebase by moving these extensions to a separate project.
Since legacy extensions have no runtime and only a management model, they have few dependencies and rely only on the wildfly-core-feature-pack.
There is just an interesting problem with the migrate operation that some of these extensions define (web and messaging).
The code of the :migrate operation itself is not dependent on the new extensions as it only manipulates DMR operations.
However the functional test of the :migrate operation requires the new extension to be able to validate that the management model of the legacy extension has been properly migrated to a valid management model of the new extension.
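For context, the migrate operation is invoked from jboss-cli against the legacy subsystem on a server running in admin-only mode, e.g.:

```shell
# jboss-cli, against a server started with --admin-only
/subsystem=messaging:describe-migration
/subsystem=messaging:migrate
```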
This means that the legacy extensions depend on the new extensions (defined in wildfly) *with a test scope*.
This introduces a circular dependency between wildfly (which depends on the wildfly-legacy-feature-pack) and the wildfly-legacy-feature-pack (which depends on wildfly extensions *with test scope*).
Since one of the dependencies is in test scope, I worked around that by depending on the n-1 version of wildfly from the wildfly-legacy-feature-pack. It's not ideal, but in practice I'm not sure it is a big issue.
Maintainers of legacy extensions can build local snapshots of WildFly and use it as a dependency for the legacy-feature-pack when there might be some changes that impact the legacy extensions.
The only legacy extension that I could not move is the jacorb one. This one is tightly bound to the new iiop-openjdk extension. It inherits from its classes to provide its runtime emulation.
Moving the other legacy extensions still reduces the size of WildFly by 230-ish files and almost 50K lines:
$ git diff --shortstat master legacy-feature-pack
233 files changed, 14 insertions(+), 49832 deletions(-)
What do you think?
Once we have released WildFly 11, would it be worth moving legacy extensions to a separate feature pack?
JBoss, a division of Red Hat
Sorry if this is the wrong place to ask, but I am trying to run a Java agent with WildFly 11 and Java 9. In previous Java versions, I had to add -Xbootclasspath/p:jboss-logmanager-2.0.7.Final.jar to JAVA_OPTS to make sure the WildFly java.util.logging.manager implementation is available to the agent.
I understand that in Java 9 the -Xbootclasspath/p option is replaced with the --patch-module option. However, I am having trouble figuring out how to use this option.
What is the equivalent of -Xbootclasspath/p:jboss-logmanager-2.0.7.Final.jar in Java 9 with WildFly 11?
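Not an authoritative answer, but for reference: -Xbootclasspath/p was removed in JDK 9, while -Xbootclasspath/a (append) still exists, and the general --patch-module syntax is module=jar. Whether patching java.logging is actually the right fix for the log manager is a separate question:

```shell
# General JDK 9 syntax (paths illustrative):
java --patch-module java.logging=jboss-logmanager-2.0.7.Final.jar ...
# Appending to the boot class path still works on JDK 9:
java -Xbootclasspath/a:jboss-logmanager-2.0.7.Final.jar ...
```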
Thanks a lot
Three items to share with you today:
* JDK 9 General Availability
  o GPL'd binaries from Oracle are available here:
  o See Mark Reinhold's email for more details on the release
    + delivery of Project Jigsaw
* Are you JDK 9 ready?
  o The Quality Outreach wiki has been updated to include a JDK 9
  o If you would like us to identify your project as JDK 9 ready,
    please let me know and I will add it to the wiki.
  o WildFly 11 is listed as JDK 9 Ready.
* The Quality Outreach Report for September 2017 is available
  o Many thanks for your continued support, and welcome to the new
    Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
I was looking to find the full dependency list for Wildfly, so I took a
look at the dependency hierarchy of the Wildfly EE server. But I do not
see the Artemis server listed anywhere as a dependency.
My starting point was:
Am I looking in the wrong place, or is Artemis intentionally not included?
I'd like to see some info on WildFly and WildFly Core pull requests that
upgrade components where the submitter *is not* the "exclusive owner" of
the lib being upgraded.
An "owner" of a component is the person recorded in WFLY or WFCORE as the
lead of the JIRA component that tracks issues related to the functionality
the lib's used for. An owner is "exclusive" if his or her JIRA component is
the only relevant one.
The info I'd like:
1) Information showing the modules that depend on the updated lib. This
is quite easy to get once you know the name of the module that includes the
lib (and if you don't know that, you probably shouldn't be submitting the
$ git grep javax.json.api | grep module.xml
Since this lib is in core, it may have uses in full so include that too:
$ git grep javax.json.api | grep module.xml
<module name="javax.json.api" export="true"/>
2) An @ mention of the owners of the component identified above so they are
aware of and can approve the change. If you don't know who these people
are, skip this. But if you're someone who really should know this stuff,
please learn it and include it. Owners, please respond to these mentions.
I don't think asking for this stuff from the exclusive owner of a lib makes
sense; they should know what they are doing and the cost of added paperwork
exceeds any likely benefit.
I was working on a change and a testcase related to one of the JIRAs
open in WFLY and happened to notice that we now have a
testsuite/integration/legacy-ejb-client module. It looks like this
module has testcases that are also part of testsuite/integration/basic,
i.e. copy-pasted. So we now have two copies of a testcase, for example
EJBClientAPIUsageTestCase. What's the guideline when dealing with these
testcases? Should both these testcases be kept up-to-date with any
changes to the behaviour of the server interaction with the remote
client? The context of my question is this PR