Work is well underway on WildFly 26. Here's the schedule we're shooting for:
Fri Nov 26 -- Core Feature Freeze
Tue Nov 30 -- Feature freeze / WF 26 Beta1 Tag
Fri Dec 10 -- WildFly Core code freeze
Wed Dec 15 -- WF 26 tag
Thu Dec 16 -- WF 26 available on wildfly.org
The WildFly OpenShift images usually are released a few days after the zip
is available on wildfly.org, but for WildFly 26 we expect the delay to be
longer, with the images being released after the New Year.
This release has a shorter development cycle, as we'd like to complete the
zip release by mid-December so our great contributors can see it in the
rear view mirror and relax over the year-end quiet period. We delayed the
WildFly 25 release a couple of weeks to finish up our SE 17 work, but we
didn't want to push WildFly 26 as well and have work on it remain open over
the break.
Project Lead, WildFly
Last July, I started a thread here about ways to get native jakarta
namespace variants of artifacts, which we need for our move to EE 10. One
of the items discussed there was "5) New maven module, transform source and
build". I'm posting here as a kind of status update and tutorial about that approach.
At this point there are 11 maven modules in the WildFly main source tree
that are producing artifacts using the approach described there, and there
are PRs open for at least a couple more. I have filed or soon will file
JIRAs for all the remaining WF main tree modules that produce
artifacts that WildFly Preview currently transforms when it provisions
a server, so there will be more coming.
The high-level overview of how these work is that a pom-only maven module
is created that corresponds to an existing module that produces a javax
artifact. The new module's pom uses the batavia maven plugin (and
Eclipse Transformer under the covers) to find the source code from the
existing maven module, transform it, and write the transformed output to
the target/ dir of the new module, specifically in the
'generated-resources', 'generated-sources', 'generated-test-resources' and
'generated-test-sources' dirs. That generated source is then associated
with the overall maven execution for the module, and thereafter gets
included as part of the subsequent compile, test, package, install and
deploy goals executed for the module. So, presto, we have native jakarta
source available that is then compiled, tested, and used to
package/install/deploy the binary, source and javadoc jars.
The generated source does not get committed into the git repository. The
git repo only has the pom.
It's a common thing for generated source to be used in an artifact build;
for example it's the technique we use to generate implementation classes
from the XXXLogger interfaces we have for each of our subsystems. What
we're doing here just takes this concept to the max by generating 100% of
the source, including test source.
I'm going to use a PR I sent up yesterday to illustrate how to create one of these modules.
1) Decide whether the module you want to work on is ready. If you're not
the component lead responsible for the module, ask that lead. To see if a
module is 'ready', look at its compile time dependencies and see if it
still depends on other artifacts for which a native jakarta variant is
needed and doesn't exist yet. Ask here or in Zulip if you are not sure!
Things can change quickly, and there also may be some edge cases where
you'd think you need a jakarta namespace dependency but you really don't.
If there are dependencies that are not ready yet, stop and wait until they
are available. Or be prepared for your new module to not build, at which
point you can save your work and wait.
2) Create a new dir under ee-9/source-transform for your new module. If the
module you are transforming isn't directly under the root of the WF source,
then create a matching structure under ee-9/source-transform. For example,
I created ee-9/source-transform/jpa/spi to produce a jakarta.* variant of
the module under jpa/spi in the main tree.
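Mirroring the jpa/spi example from this post, step 2 amounts to something like:

```shell
# From the root of the WildFly source checkout, create a directory under
# ee-9/source-transform that mirrors the input module's path in the main
# tree. The jpa/spi path here is just the example used in this post.
mkdir -p ee-9/source-transform/jpa/spi
```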
3) Copy the pom.xml from an *existing source-transform module* into your
new dir. Easiest is to use one that comes from a dir the same number of
levels below source-transform as your new dir, so a couple of relative
paths in your
new pom are correct. For my PR I copied over
Don't start with the pom from your source input module, unless you want to
work out how to adapt it to use the source transformation. :) Granted, it's
not rocket science, but it's easier for code reviewers to look at these if
they look like the other ones.
4) Change the artifactId in your new pom to *exactly* match the artifactId
of the module you're transforming, but with "-jakarta" appended:
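So, for a hypothetical input module whose artifactId is wildfly-example-spi, the new pom would declare:

```xml
<!-- the input module's artifactId plus the "-jakarta" suffix
     (wildfly-example-spi is a made-up name for illustration) -->
<artifactId>wildfly-example-spi-jakarta</artifactId>
```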
5) Change the 'name' tag to something appropriate, like
6) Change the 'transformer-input-dir' maven property to point to the root
of the module you are transforming:
This property is used in the batavia maven plugin config in the parent
source-transform dir. Each child module sets the property to point to the
correct input source for that module.
This one small tweak is the only thing you need to deal with to get the
source transformation set up.
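As a sketch (the exact relative path depends on how many levels below source-transform your new module sits; the jpa/spi path is hypothetical):

```xml
<properties>
    <!-- points at the root of the module whose source is being
         transformed; the path shown is illustrative only -->
    <transformer-input-dir>${basedir}/../../../../jpa/spi</transformer-input-dir>
</properties>
```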
7) Configure dependencies.
a) Delete the existing 'dependency' entries in your new pom, as they are
for whatever random file you copied in.
b) Copy over the 'dependency' entries from the pom of the module you are
transforming.
c) For any entries where your new artifact uses a dependency with a
different groupId:artifactId from the one you're transforming, change the
GA. For example:
EE 8 JTA
became EE 9.1 JTA
Or, in another example
an EE 8 based 'ee' subsystem module dep:
became instead a dep on the new '-jakarta' variant:
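To illustrate the first kind of GA change with concrete coordinates (shown here for illustration; check the versions actually used in the tree), an EE 8 JTA API dep becomes its jakarta namespace counterpart:

```xml
<!-- EE 8 JTA API -->
<dependency>
    <groupId>javax.transaction</groupId>
    <artifactId>javax.transaction-api</artifactId>
</dependency>

<!-- becomes the EE 9.1 jakarta namespace equivalent -->
<dependency>
    <groupId>jakarta.transaction</groupId>
    <artifactId>jakarta.transaction-api</artifactId>
</dependency>
```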
d) A nice-to-have is to separate from the others in the dependency listing
any deps where WF Preview is using a different artifact from what standard WF
is using. For example see
This separation helps code reviewers.
8) Tell your new module's parent to include it in the build:
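That is, add a module entry to the pom.xml in the parent dir under source-transform, something like (using the hypothetical jpa/spi path from earlier):

```xml
<modules>
    <!-- ...existing entries... -->
    <module>jpa/spi</module>
</modules>
```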
9) The ee-9/pom.xml maintains a dependencyManagement set for all artifacts
that differ in WF Preview from what is in standard WF. These can either be
artifacts whose GA is not used at all in standard WF, or ones where WF
Preview uses a different version. Add your new artifact to this
dependencyManagement set.
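A sketch of such a dependencyManagement entry in ee-9/pom.xml, reusing the hypothetical artifactId from earlier:

```xml
<dependencyManagement>
    <dependencies>
        <!-- hypothetical '-jakarta' artifact; it is built from the same
             tree, so it shares the overall WildFly version -->
        <dependency>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-example-spi-jakarta</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```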
10) Allow re-use of the module.xml that incorporates this artifact between
standard WF and WF Preview. This is done by modifying the module.xml to add
an expression that the maven-resources-plugin replaces when preparing the
module.xml resource file for use in the FP build:
The added @module.jakarta.suffix@ gets replaced either with an empty string
(standard WF) or "-jakarta" (WF Preview).
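A sketch of what such a module.xml artifact reference ends up looking like (the GA is the hypothetical one from earlier; follow whatever reference style the existing module.xml already uses):

```xml
<resources>
    <!-- @module.jakarta.suffix@ is filtered by the maven-resources-plugin:
         empty string for standard WF, "-jakarta" for WF Preview -->
    <artifact name="${org.wildfly:wildfly-example-spi@module.jakarta.suffix@}"/>
</resources>
```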
11) Add your new artifact as a dependency of ee-9/feature-pack/pom.xml.
This is needed so the wildfly-preview Galleon feature pack can utilize your
artifact.
12) Tell the wildfly-preview feature-pack build to ignore the no-longer
relevant javax artifact you're replacing. This saves build time, compute
resources and false positives in the build log that make us think your
artifact hasn't been handled yet.
(Note the precise spot to put that 'exclude' may differ a bit. Ask if this
is unclear.)
13) Add an entry for your new artifact in the license.xml file for the
wildfly-preview feature pack. Put it in the right alphabetical spot based
on its groupId and artifactId.
14) Do a build and if good, commit and send up a PR! If your input module
had tests, your new module should as well and they should run. If you're
curious, have a look in the new target/generated-xxx dirs to see the
transformed source.
This sounds like a lot, and I suppose it is, but other than step 7) it's
all very simple stuff, mostly things you'd do any time you add a new module
to the WF source tree. Granted I'm practiced at this, but it took me about
half an hour to work up the PR I've been using as an example.
If you're interested in doing one of these, and the component lead
responsible for the input module is agreeable, please go for it and feel
free to ask for help.
High-level tracker issues for this work are at
https://issues.redhat.com/browse/WFLY-15437. I'm filing separate issues for
individual pieces and am linking those to one or the other of these two
trackers.
A tracking document shows all artifacts (not just those from the WF source
tree) that the WildFly Galleon plugin was transforming in builds on Oct 14.
The rows that end with '-26.0.0.Beta1-SNAPSHOT.jar' are ones that are
produced by WildFly main itself. I add new tabs to that document weekly to
track progress on reducing the number of artifacts being transformed by the
Galleon plugin.
which was produced from a maven module with nothing in it but a pom.
Project Lead, WildFly
The freeze for WildFly 25.0.1 will be Friday 29th October.
Please file any PRs against the 25.x branch
<https://github.com/wildfly/wildfly/tree/25.x> *), and make sure
there is an associated Jira.
In the Jira for the issue, make sure that it has 25.0.1.Final*) as the Fix
Version. In the PR please also link to any new or already merged PRs
against the main branch.
Note that the payload is meant to be limited, containing only critical
fixes, things community members are eager for etc., and component upgrades
addressing similar issues.
*) - if the fix is needed for WildFly Core, use the 17.x
<https://github.com/wildfly/wildfly-core/tree/17.x> branch and 17.0.2.Final
release in Jira.
Any questions, please let me know.
As you know, WildFly provides a Docker image for each release of WildFly from the jboss organization at https://hub.docker.com/r/jboss/wildfly/tags.
However, hub.docker.com has recently changed its features and we are no longer able to automatically build new images whenever we tag our GitHub project at https://github.com/jboss-dockerfiles/wildfly. This has become a manual task that only a few people in the jboss organization can do.
We are trying to find a new way to deliver these images in a sustainable fashion.
One approach would be to move the Docker images to quay.io which provides the basic features we need to build images from our GitHub repo.
We already have a wildfly organization on Quay.io (that provides our S2I images as well as the WildFly operator): https://quay.io/organization/wildfly
This would affect users that pull our images, as they would have to switch to the new image name.
Internally, there would be no changes: we would continue to build these Docker images from tags in https://github.com/jboss-dockerfiles/wildfly
The latest release for our Docker image was the 25.0 tag.
A transition would be:
0. Advertise that we will stop delivering images from hub.docker.com. At this point, the jboss/wildfly:latest tag will point to 25.0 and will no longer be in sync with new WildFly releases.
1. Set up the quay.io/wildfly/wildfly repo and push the 25.0 tag to it.
-> users can switch from "jboss/wildfly" to "quay.io/wildfly/wildfly" without any impact on their applications
1.a If we eventually release images for 25.0.x versions, we will push images to both hub.docker and quay.io repositories
2. When we release the next major version of WildFly (WildFly 26), the image will be made available only from "quay.io/wildfly/wildfly" with the 26.0 and latest tags
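For users, the switch is just a change to the repository path in the pull command; the sketch below prints the before/after form (tags shown are illustrative):

```shell
# The image contents stay the same; only the repository path changes
# when the images move from Docker Hub to quay.io.
old_image="jboss/wildfly:25.0"
new_image="quay.io/wildfly/wildfly:25.0"
echo "docker pull ${new_image}  # instead of: docker pull ${old_image}"
```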
Does that approach sound sensible?
Principal Software Engineer
WildFly 26 development has started and we have made some significant updates for WildFly provisioning and its use on the cloud.
This mail is an opportunity to showcase the upcoming features and give a taste of things to come for WildFly users :)
This work is still in the alpha/beta stage and is subject to change before the final releases, but the overall idea is already there.
The most important thing to explain is that we are leveraging the existing wildfly-maven-plugin to provision WildFly and build application images for WildFly on the cloud.
This plugin now makes it possible to provision and configure a WildFly runtime directly from the application's pom.xml.
There are a few steps required to use this new plugin.
Let's use our simple helloworld-rs quickstart for that (https://github.com/wildfly/quickstart/tree/main/helloworld-rs).
We can checkout this project with:
git clone https://github.com/wildfly/quickstart.git
To use the provisioning feature of the wildfly-maven-plugin, we need its 3.0.0.Alpha1 release, which has just been released:
This plugin configuration will provision WildFly using its org.wildfly:wildfly-ee-galleon-pack:25.0.0.Final feature pack and install the jaxrs-server layer that contains all we need to run a Jakarta Restful application.
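In the quickstart's pom.xml, that boils down to plugin configuration along these lines (a sketch based on the description above, not necessarily the quickstart's exact config):

```xml
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>3.0.0.Alpha1</version>
    <configuration>
        <!-- the feature pack to provision the server from -->
        <feature-packs>
            <feature-pack>
                <location>org.wildfly:wildfly-ee-galleon-pack:25.0.0.Final</location>
            </feature-pack>
        </feature-packs>
        <!-- the Galleon layer providing Jakarta RESTful Web Services -->
        <layers>
            <layer>jaxrs-server</layer>
        </layers>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```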
I can now run `mvn clean package` to package the application, provision WildFly, and deploy the application in WildFly.
At the end of the execution, I have a WildFly 25.0.0.Final with the deployment in the target/server directory of my application project.
I can run my application with a simple `./target/server/bin/standalone.sh`:
09:10:25,564 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0010: Deployed "helloworld-rs.war" (runtime-name : "ROOT.war")
09:05:37,099 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly EE 25.0.0.Final (WildFly Core 17.0.1.Final) started in 12290ms - Started 274 of 359 services (138 services are lazy, passive or on-demand)
And I have now my application served by WildFly:
$ curl http://localhost:8080/rest/json
Using this new plugin mechanism to provision WildFly will have a tremendous impact on developing and running WildFly applications:
* The application's pom.xml now can be the self-contained definition of the application as a whole (including the WildFly runtime configuration to run the application code)
* Arquillian tests can be run against a runtime consisting of the provisioned WildFly and the deployed application. You can test your actual deployment and its runtime during the application test phase.
* We are also developing new source-to-image (S2I) builder and runtime images that leverage this new provisioning capability of the wildfly-maven-plugin. These images will provide all the infrastructure to build application images that can be deployed on Kubernetes platforms.
We will have a series of blog posts providing more details on these new developments.
Principal Software Engineer
I've just published a blog post showing some of the MicroProfile Reactive
Messaging improvements we made in WildFly 25.
It showcases the new @Channel/Emitter functionality which is intended to
facilitate pushing data into Kafka from code initiated by the user.
It also briefly shows the new Kafka user API for more control over how
messages are sent to Kafka.
Finally, it shows a very simple Kafka Streams application to get data out
of Kafka, as an example of how you could process the data stored in Kafka.