I'm trying to debug some code, and I often hit classes from
WildFly/Undertow/etc. in my stack that I don't have the source code for.
I'd love to be able to add a dependency to my pom.xml so that Eclipse will
automatically download the sources from Maven Central for me and attach
them in my debugger. I'm looking for an artifact I could list
that would then download all the sources for me, and I'd be in business.
Is there something like this BOM available for WildFly?
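Not WildFly-specific, but as a general workaround the Maven Dependency Plugin can resolve the source jars for everything the project depends on; whether Eclipse/m2e then attaches them automatically depends on your workspace settings:

```shell
# Resolve the -sources jars for every dependency of the project
# into the local repository; m2e can then attach them in the debugger.
mvn dependency:sources

# Equivalent form using the generic resolve goal:
mvn dependency:resolve -Dclassifier=sources
```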
Suppose I have a few websites with different domain names, domainA.com, etc.
I have set up the correct virtual host configuration for these sites in WildFly,
and I am planning on buying a multi-domain SSL certificate from GoDaddy.
A multi-domain SSL certificate can secure your main domain plus several SAN
(Subject Alternative Name) domain names in one certificate.
My question is: can WildFly recognize this kind of multi-domain SSL certificate?
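For what it's worth, SAN handling happens entirely inside the certificate itself, so the server only needs a keystore that contains it. A minimal sketch for a pre-Elytron standalone.xml (the keystore path, password, and alias are placeholders):

```xml
<security-realm name="SSLRealm">
    <server-identities>
        <ssl>
            <!-- Keystore holding the multi-domain (SAN) certificate -->
            <keystore path="multidomain.jks" relative-to="jboss.server.config.dir"
                      keystore-password="changeit" alias="server"/>
        </ssl>
    </server-identities>
</security-realm>
```

The HTTPS listener in the undertow subsystem then just references the realm, e.g. `<https-listener name="https" socket-binding="https" security-realm="SSLRealm"/>`; the server presents whatever names the certificate carries.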
In preparation for WildFly 12 Final, CR1 is now available for build testing:
Provided no blocking issues are discovered we will be releasing Final shortly.
WildFly 12 is the first release in our new quarterly delivery model. The most significant feature is delivery of new EE8 capabilities. As mentioned during the original 12 announcement, we are delivering EE8 functionality incrementally, as opposed to waiting for a big bang. WildFly 12 includes Servlet 4, JAX-RS 2.1, CDI 2.0, Bean Validation 2.0, JSF 2.3, JSON-B, JSON-P 1.1, and JavaMail 1.6.
By default WildFly 12 runs in EE7 mode, but you can enable EE8 variants of the standard by starting the server with the special parameter “-Dee8.preview.mode=true”.
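For example, from the distribution root:

```shell
# Start WildFly 12 with the EE8 variants of the standards enabled
./bin/standalone.sh -Dee8.preview.mode=true
```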
As many of you know we are planning to move to the new feature-packs and
the provisioning mechanism for our wildfly(-based) distributions. New
feature-packs will be artifacts in a repository (currently Maven). In this
email I'd like to raise a question about how to express the location
(coordinates) of a feature-pack, its identity (id), and the stream
information that is the source of version updates and patches.
Until this moment I've used the GAV (group, artifact, version) as both the
feature-pack ID and its coordinates in the repository. This is pretty much
enough for a static installation config (which is a list of feature-pack
GAVs and config options). The GAV-based config also makes the installation
build reproducible, which is a hard requirement for provisioning.
On the other hand, we also want to be able to check the repository for
updates to the installed feature-packs and apply them to an existing
installation, which means the installation also has to be described in
terms of the consumed update streams. This will be a description of the
installation in terms of the sources of the latest available versions. A
build from this kind of config is not guaranteed to be reproducible, and
this is where GAVs don't fit as well.
What I would like to achieve is to combine the static and dynamic parts of
the config into one. Here is what I'm considering. When I install a
feature-pack (using a tool or adding it manually into the installation
config) what ends up in the config is the following expression:
org.jboss:wildfly:12:beta:12.0.5.Beta4. This expression is going to be the
feature-pack coordinates. The meaning behind the parts:
Universe is supposed to be a registry of feature-pack streams for various
projects and products. In the example above, the org.jboss universe would
include wildfly-core, wildfly, and related projects consumed by wildfly
that also choose to provide feature-packs.
The family part would designate the project or product.
The branch would normally be a major version. The assumption is that
anything that comes from the branch is API and config backward compatible.
Branch + classifier is what identifies a stream. The idea is that there
could be multiple streams originating from the same branch, e.g. a stream
of final releases, a stream of betas, a stream of alphas, etc. A user could
choose which stream to subscribe to by providing the classifier.
In most cases the build_id would be the release version.
universe:family:branch:build_id is going to be the feature-pack identity.
The classifier is not taken into account because the same feature-pack
build/release might appear in more than one stream. And so the build_id
must be unique for the branch.
Given the full feature-pack coordinates, the target feature-pack can be
unambiguously identified and the installation can be reproduced. At the
same time, the coordinates include the stream information, so a tool can
check the stream for the updates, apply them and update the installation
config with the new feature-pack build_id.
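To make the proposal concrete, here is a minimal, hypothetical sketch of parsing such an expression and deriving the identity and the stream from it. The class and method names are illustrative only, not an actual provisioning API:

```java
// Hypothetical sketch: split the proposed feature-pack location
// universe:family:branch:classifier:build_id into its parts, and derive
// the identity (which drops the classifier, since the same build may
// appear in several streams) and the stream (branch + classifier).
public final class FeaturePackLocation {
    public final String universe, family, branch, classifier, buildId;

    private FeaturePackLocation(String u, String f, String b, String c, String id) {
        universe = u; family = f; branch = b; classifier = c; buildId = id;
    }

    public static FeaturePackLocation parse(String expr) {
        String[] parts = expr.split(":");
        if (parts.length != 5) {
            throw new IllegalArgumentException(
                "expected universe:family:branch:classifier:build_id but got " + expr);
        }
        return new FeaturePackLocation(parts[0], parts[1], parts[2], parts[3], parts[4]);
    }

    // Identity excludes the classifier: it pins an exact, reproducible build.
    public String identity() {
        return universe + ":" + family + ":" + branch + ":" + buildId;
    }

    // Stream is what an update tool would poll for newer build_ids.
    public String stream() {
        return universe + ":" + family + ":" + branch + ":" + classifier;
    }
}
```

With the example from above, `parse("org.jboss:wildfly:12:beta:12.0.5.Beta4")` yields the identity `org.jboss:wildfly:12:12.0.5.Beta4` and the stream `org.jboss:wildfly:12:beta`.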
If you see any problem with this approach or have a better idea, please let me know.
Any suggestions for what we should call the new Hibernate ORM 5.3 module?
We can probably drop the slot and just include the version in the module name.
I finally managed to deploy WildFly Model Graph to OpenShift.
WildFly Model Graph lets you analyse the WildFly management model using a
Neo4j graph database. For more information see .
The Neo4j databases are running on the Red Hat OpenShift Online employee
cluster. Applications on this cluster have limited resources, so you might
experience some latency when there's too much traffic.
There has been a lot of talk on this list about how startup time and
footprint of WildFly can be reduced even further. I have experimented
with AppCDS and the first results are encouraging.
As you may be aware Application Class-Data Sharing, or AppCDS for short,
is available in OpenJDK 10 .
My work is based on the excellent talk by Volker Simonis on the subject
at FOSDEM and his cl4cds tool. I recommend viewing his presentation
first; it provides a lot of background.
To get started you first need to download an OpenJDK 10 early access build
from . It is important not to download an OracleJDK build, as AppCDS
is missing there for some reason.
Then you need to dump the list of loaded classes
This is followed by converting that to a class list suitable for AppCDS,
which is where the cl4cds tool comes in:
$JAVA_HOME/bin/java -jar ~/git/cl4cds/target/cl4cds-1.0.0-SNAPSHOT.jar
Then you can create the shared archive
export PREPEND_JAVA_OPTS="-Xshare:dump -XX:+UseAppCDS
and then finally you can start WildFly with the shared archive
export PREPEND_JAVA_OPTS="-Xshare:on -XX:+UseAppCDS
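The truncated commands above correspond roughly to the standard JDK 10 AppCDS flow. A hedged end-to-end sketch, where the file names and the exact cl4cds invocation are illustrative:

```shell
# 1. Record the loaded classes while running a representative workload
#    (cl4cds consumes -Xlog:class+load output)
export PREPEND_JAVA_OPTS="-Xlog:class+load=debug:file=wildfly.classtrace"
./bin/standalone.sh    # let the server boot, then stop it

# 2. Convert the class-load trace into an AppCDS class list
java -jar cl4cds-1.0.0-SNAPSHOT.jar wildfly.classtrace wildfly.cls

# 3. Dump the shared archive from the class list
export PREPEND_JAVA_OPTS="-Xshare:dump -XX:+UseAppCDS \
  -XX:SharedClassListFile=wildfly.cls -XX:SharedArchiveFile=wildfly.jsa"
./bin/standalone.sh

# 4. Start WildFly against the shared archive
export PREPEND_JAVA_OPTS="-Xshare:on -XX:+UseAppCDS \
  -XX:SharedArchiveFile=wildfly.jsa"
./bin/standalone.sh
```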
I checked the startup time reported by an "empty" WildFly 11. I realize
this is not the most scientific way. The startup time went down from
about 2000ms to about 1500ms or by about 25%. I did not have a look at
the memory savings when running multiple WildFly instances in parallel.
One thing I noted is that the Xerces classes should probably be
recompiled with bytecode major version 49 (Java 1.5) or later; otherwise
they cannot be processed by AppCDS.
Unfortunately AppCDS is quite hard to use, I don't know if the WildFly
project can somehow help to make this easier. One option would be to
ship a class list file, but I don't know how portable that is. Also, as
WildFly services are lazy, only a fraction of the required classes may
be in there.
We should have someone from our EAP/Wildfly build teams join the ee4j-build
list to keep abreast and help steer the migration of the Oracle Java EE
project into the Eclipse build infrastructure. On the last EE4J call, it
was brought up that there is discussion around how the TCKs need to be
updated to integrate into more modern CI environments. At some point we should be
able to reduce our TCK run efforts, and improve the TCK codebase by
leveraging the public EE4J TCKs, but we need to be involved to help steer
that in the right direction.
Embedded containers offer some interesting issues with regards to logging.
If the logging subsystem is present the container may attempt to configure
logging via the logging subsystem. If logging has already been configured
by the application starting the embedded container, this could cause errors
if the log manager was not installed correctly.
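A common way to avoid that conflict, assuming jboss-logmanager is on the classpath, is to select the JBoss LogManager before java.util.logging is ever touched. The class name below is the real jboss-logmanager entry point; the helper wrapping it is just a sketch:

```java
// Hedged sketch: when embedding WildFly, the JBoss LogManager must be
// selected before java.util.logging.LogManager is first initialized,
// otherwise the logging subsystem cannot take over configuration later.
public class EmbeddedLoggingSetup {
    public static void install() {
        // Must run before any java.util.logging call in the JVM,
        // e.g. as the very first statement in main().
        System.setProperty("java.util.logging.manager",
                           "org.jboss.logmanager.LogManager");
    }
}
```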
While this is just a first draft, I've created a design/requirements doc on
how this should likely work and what's expected. Please, especially those
interested, have a look and let me know if I've missed something or I'm off
base on anything.
James R. Perkins
JBoss by Red Hat
I have been thinking a bit about the way we report errors in WildFly, and I
think this is something that we can improve on. At the moment I think we
are way too liberal with what we report, which results in a ton of services
being listed in the error report that have nothing to do with the actual
problem.
As an example to work from I have created , which is a simple EJB
application. This consists of 10 EJBs, one of which has a reference to a
non-existent data source; the rest are simply empty no-op EJBs (just
@Stateless on an empty class).
This app fails to deploy because the java:global/NonExistant data source is
missing, which gives the failure description in . This is ~120 lines
long and lists multiple services for every single component in the
application (part of the reason this is so long is that the failures are
reported twice, once when the deployment fails and once when the server
state is reported after boot).
I think we can improve on this. I think in every failure case there will be
some root causes that are all the end user cares about, and we should limit
our reporting to just these cases, rather than listing every internal
service that can no longer start due to missing transitive deps.
In particular these root causes are:
1) A service threw an exception in its start() method and failed to start.
2) A dependency is actually missing (i.e. not installed, not just not
started).
I think that one or both of these two cases will be the root cause of any
failure, and as such that is all we should be reporting on.
We already do an OK job of handling case 1), services that have failed, as
they get their own line item in the error report, however case 2) results
in a huge report that lists every service that has not come up, no matter
how far removed they are from the actual problem.
I think we could make a change to the way this is reported so that only
direct problems are reported, so the error report would look something
like  (note that this commit only changes the operation report; the
container state reporting after boot is still quite verbose).
I am guessing that this is not as simple as it sounds, otherwise it would
have already been addressed, but I think we can do better than the current
state of affairs, so I thought I would get a discussion started.