TL;DR: We are proposing a new WildFly s2i architecture to remove the pain
points we encounter today when building WildFly applications for the
cloud.
Decoupling the WildFly s2i image from WildFly server installations
Today we rely on the WildFly s2i builder image to build a WildFly
application image. The strong coupling inside this image means a new
image must be released for each new WildFly release. This approach is
not flexible: we should be able to build an application image for any
WildFly release (even a locally built SNAPSHOT) without waiting for a
new WildFly s2i builder image to be deployed.
In addition, the WildFly server located inside the builder image
enforces a configuration API (a set of environment variables) and bash
launch scripts. At startup, a complex CLI script generation step that
impacts the server startup time is mandatorily executed. This level of
configurability should be a user choice, not something enforced by the
WildFly s2i image.
To solve these pain points, we should remove the server and its
configuration aspects from the image. The WildFly s2i image should
become a generic image able to properly install and execute any
provisioned WildFly server. As a developer, one should be able to deploy
an application to the cloud in at most two steps. Likewise, WildFly
developers should be able to deploy a locally built WildFly SNAPSHOT in
one step (as is done with the WildFly Bootable JAR today).
The role of a WildFly s2i builder image should be limited to:
* Provide JDK and Maven tooling.
* Provide automatic and configurable JVM settings at server startup (as
the openjdk builder image does).
* Understand the execution constraints of the server and adapt the
server launch accordingly (e.g. start the server in a way that allows a
server killed by the pod to terminate properly).
* Provide sensible defaults for the minimal set of server configuration
items (e.g. jboss.node.name derived from the hostname, public interface
address, management address).
* Install the server in the image (e.g. /opt/wildfly).
* Understand whether the s2i is a build from source or a build from
binary.
* Handle incremental builds (e.g. cache the local Maven repository).
* Produce an application image that is runnable but also usable in a
Docker chained build to construct a final application image (one that
doesn't contain any build tooling).
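The chained-build point can be sketched with a multi-stage Dockerfile. This is only an illustration of the idea; the image names below are placeholders, not actual published images:

```dockerfile
# Stage 1: build from source with the generic builder image (JDK + Maven).
# The Maven build provisions the server and deploys the application into it.
FROM example/wildfly-s2i-builder:latest AS build
COPY . /build
RUN mvn -f /build/pom.xml package

# Stage 2: runtime-only image; no JDK/Maven build tooling is carried over.
FROM example/wildfly-runtime:latest
COPY --from=build /build/target/server /opt/wildfly
CMD ["/opt/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
```

The same stage-1 image remains directly runnable, which is what makes it usable both standalone and as the input of a chained build.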
Nothing more. The server provisioning and configuration steps have been
removed from the s2i build process, and all the server runtime
configurability based on environment variables has been removed (e.g.
the ability to add datasources, or to simply configure Elytron, HTTPS,
Keycloak, …). We will see next how to bring this back, but this time as
a deliberate choice rather than the mandatory way to configure the
server.
A Maven plugin to provision and configure WildFly
The steps that have been removed from the WildFly s2i build process are
now aggregated in a single point of configuration: a new WildFly Maven
plugin. This plugin is very similar to the Bootable JAR Maven plugin: it
handles Galleon provisioning, user CLI script execution, and the ability
to enrich the server content and application(s). This plugin, used in
conjunction with the new WildFly s2i image, is all that is needed to
offer a flexible experience. We gain the flexibility to provision and
configure a WildFly instance to answer specific requirements instead of
relying on a fixed set of features packaged inside a WildFly s2i builder
image.
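As a sketch of what this single point of configuration could look like, assuming the plugin follows the Bootable JAR plugin's configuration model (the element names and the CLI script path below are illustrative, not a finalized API):

```xml
<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <configuration>
    <!-- Galleon provisioning: which server bits to install -->
    <feature-packs>
      <feature-pack>
        <location>wildfly@maven(org.jboss.universe:community-universe)</location>
      </feature-pack>
    </feature-packs>
    <layers>
      <layer>cloud-server</layer>
    </layers>
    <!-- user CLI scripts executed against the provisioned server -->
    <packaging-scripts>
      <packaging-script>
        <scripts>
          <script>${project.basedir}/config.cli</script>
        </scripts>
      </packaging-script>
    </packaging-scripts>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With such a configuration, `mvn package` would produce a fully provisioned and configured server plus deployment, ready to be copied into the generic s2i image.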
Adding back existing WildFly s2i configurability
To retrieve the built-in server configurability of the current WildFly
s2i image, we are introducing a WildFly legacy cloud Galleon
feature-pack that provisions a “legacy” server offering all the
capabilities found in WildFly s2i today. This becomes a user choice: by
provisioning a vanilla WildFly, you get a vanilla WildFly that behaves
properly in the cloud; by provisioning this legacy Galleon feature-pack,
you get the fully featured server found in the current s2i image.
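For illustration, opting into the legacy behavior could then be a matter of adding the feature-pack to the provisioning configuration (the Maven coordinates below are placeholders, not a published artifact):

```xml
<feature-packs>
  <feature-pack>
    <location>wildfly@maven(org.jboss.universe:community-universe)</location>
  </feature-pack>
  <!-- placeholder coordinates for the legacy cloud feature-pack -->
  <feature-pack>
    <location>org.wildfly.cloud:wildfly-legacy-cloud-galleon-pack:1.0.0</location>
  </feature-pack>
</feature-packs>
```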
NB: In the future we could define a non-legacy WildFly cloud Galleon
feature-pack with some new WildFly cloud-specific features.
Unified experience for application “Builder” Cloud images
This new architecture should allow a unified experience across the
various builder image technologies. For example, the same application
Maven project could eventually be used with a WildFly Buildpacks builder
or with the WildFly s2i builder image.
Thank you for reading.
Since the WildFly 13 release, when we began producing the official
WildFly distributions using Galleon, we have also continued to produce
the old 'legacy feature packs' that use pre-Galleon technology. We did
this because some projects still used them for provisioning servers
based on the WildFly technology stack. But I believe those cases are
gone now, and in any case, after three years, IMHO it's time for those
kinds of things to move on.
Producing the legacy feature packs is a fair amount of work and is
starting to interfere with how we'd best organize the code for producing
the Galleon feature packs. So I'd like WildFly 24 to be the last release
where we produce them.
Inputs on this are welcome.
The maven GAs of the artifacts I'm talking about are:
The source code for the FPs is mostly at the following locations,
although there is related code (e.g. "subsystem-template" dirs)
scattered throughout the code base.
For the last month we've been focusing quite a bit of energy on seeing
what it will take for WildFly to run well on the upcoming JDK 17. This
post is one of two I plan. This one is an attempt to start a discussion
around a couple of topics; the other will be more of a status update.
Status summary: things are progressing well, with no show-stoppers so
far, but plenty more to do. More on that in the other post....
WF 23 runs well on SE 13. We want to get to SE 17. The key barriers are:
1) SE 14 dropped the java.security.acl package.
2) SE 15 introduced hidden classes (JEP 371).
3) SE 16 strongly encapsulated JDK internals by default (JEP 396).
The discussion points relate to #3. WildFly does quite a lot of
reflection, plus we have some use cases where end users may want to use
internal JDK classes. SE 16 locks this down. For a good primer on what
SE allows us when we need things to be made available, see . Richard
Opalka did a lot of good analysis of the JPMS-related VM launch settings
we need for WF to work properly; see  and . It's not a huge set,
which is nice.
But it's not complete, because it doesn't account for user applications.
If application code requires additional deep reflection, then additional
VM launch settings will be needed. I think that's OK in general; we
provide hooks for users to add things to the JAVA_OPTS flags that are
passed to java. But the less users need to do that, the better. Hence
the discussion points:
1) The ClassReflectionIndex constructor iterates over all fields and
methods in a class and marks them as accessible. A ClassReflectionIndex
is created for any class that is used as an EE component type, *as well
as its superclasses*. This means that if an application uses some JDK
class as a superclass (ignoring Object, which gets special handling that
makes it not a problem in this discussion), then that superclass's
package is going to need to be opened. We have no way to know what
superclasses our users' components might have, so we can't open them up
for them as part of our standard launch args.
My general understanding is that we do this in order to allow things
like injection of values into fields or wrapping calls to non-public
methods.
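To make the problem concrete, here is a minimal sketch (not the actual WildFly code; the names MyComponent and indexHierarchy are illustrative) of what such an index does. It uses trySetAccessible so that inaccessible members are reported rather than throwing InaccessibleObjectException:

```java
import java.lang.reflect.Field;

public class ReflectionIndexSketch {

    // Hypothetical EE component whose superclass is a JDK class.
    static class MyComponent extends java.io.ByteArrayOutputStream {
        private String injected; // a field a container might inject via reflection
    }

    // Sketch of what a reflection index does: walk the class and its
    // superclasses, forcing every declared field accessible.
    static void indexHierarchy(Class<?> type) {
        for (Class<?> c = type; c != null && c != Object.class; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                // trySetAccessible (JDK 9+) returns false instead of throwing
                // when the declaring package is not open to this module.
                boolean ok = f.trySetAccessible();
                System.out.println(c.getName() + "#" + f.getName()
                        + (ok ? " accessible"
                              : " needs --add-opens " + c.getModule().getName()
                                + "/" + c.getPackageName() + "=ALL-UNNAMED"));
            }
        }
    }

    public static void main(String[] args) {
        indexHierarchy(MyComponent.class);
    }
}
```

The component's own field is always accessible (application classes live in the fully open unnamed module), but on a JDK 16+ run without extra flags the inherited protected fields of java.io.ByteArrayOutputStream are reported as requiring java.base/java.io to be opened, which is exactly the per-application --add-opens problem described above.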
Is there anything we can do about this? Any intelligence we can apply to
avoid unnecessary opening? (See  for a very specific example of such a
thing.)
Or is this maybe not a big problem? We already need to open the
java.util package for other reasons, so EE components based on classes
in that package won't have a problem.
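For users who do hit the problem, the JAVA_OPTS hook mentioned above would look something like the following fragment (a sketch assuming a standalone.conf-style hook; the exact mechanism depends on the launch scripts in use), here for a component extending a java.io class:

```shell
# appended to standalone.conf (or the equivalent JAVA_OPTS hook):
# open java.base/java.io to application code in the unnamed module
JAVA_OPTS="$JAVA_OPTS --add-opens=java.base/java.io=ALL-UNNAMED"
```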
2) There are cases where our configuration allows users to specify a
class to use as the implementation of an interface, as an instruction
for the server to instantiate an instance and use it. Examples include
NamingContext and java.security.Policy impls. In some cases, well-known
implementations of those interfaces are internal JDK classes.
Should we identify likely cases of these things and proactively include
those packages in our server launch --add-opens set? My general instinct is
no, but there may be cases where my instincts are wrong.