I'm trying to debug some code, and I am often hitting classes in
Wildfly/Undertow/etc in my stack that I don't have the source code for.
I'd love to be able to add a dependency in my pom.xml so that Eclipse will
automatically d/l the sources from maven central for me and add them to my
debugger. I'm looking for an artifact I could list, something like a BOM,
that would then download all the sources for me, and I'd be in business.
Is there something like this available for WildFly?
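What I have in mind is something along these lines; the artifact below is just an illustration (undertow-core is real, but the version placeholder has to match whatever your WildFly ships), combined with enabling "Download Artifact Sources" under Eclipse's Maven preferences:

```xml
<!-- provided scope keeps it out of the deployment; m2e will still fetch sources -->
<dependency>
    <groupId>io.undertow</groupId>
    <artifactId>undertow-core</artifactId>
    <version><!-- match the version bundled in your WildFly --></version>
    <scope>provided</scope>
</dependency>
```

What I'd really like is one artifact that transitively pulls in all of them, instead of listing each one by hand.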
During development of WF11 we have done lots of work on making it build &
run on JDK9.
As the release nears, I would like to summarize the current state and
how to move on.
Currently most of our core & full testsuite passes on the latest builds.
The remaining failures are already addressed by  and 
**But** passing testsuite on JDK9 is not the same as using our binary
distribution under JDK9.
Currently, as part of running the build / testsuite, we override the version
of javassist to 3.22.0-CR2,
which is currently the only version that works properly on JDK9.
As there is no .GA version of javassist available that works on JDK9, we
currently do not have it as the default.
On top of that, Hibernate, as the main user of javassist, is not tested
enough with this version of javassist,
unless the Hibernate / JPA team says otherwise.
That would in practice mean that users running WF11 on JDK9 would have
issues with JPA/Hibernate.
Currently I see two options for addressing this:
- upgrade javassist to 3.22.x in the server, preferably asking for a .GA release.
- produce an additional WildFly.x.x.x-jdk9 zip that would include the newer
javassist.
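For context, the override today is just a version property passed to the build; the exact property name below is from my memory of our poms and may differ:

```shell
# hypothetical invocation -- check the actual property name in the wildfly poms
mvn clean install -Dversion.org.javassist=3.22.0-CR2
```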
So the question is: do we even want a working JDK9 build of WildFly 11,
or should we postpone this to the next update release?
I've converted the confluence docs to asciidoc ones that will be part of ;
take a look at them and let me know if there are any big issues.
As most of you already know, I have been working on moving our confluence-based
documentation to an asciidoc-based one.
The result can be seen at  or rendered to HTML at 
A good side effect of the conversion is that the docs are now also browsable
directly on GitHub.
For example  or 
Currently I have kept the same structure as we had in confluence, which in
practice means we have a set of "guides" that then have lots of sub-pages /
includes that produce the "big" guides.
Currently such guides are:
- Admin Guide
- Developer Guide
- Getting started guide
- Getting Started Developing Applications Guide
- High Availability Guide
- Extending WildFly guide
- JavaEE 7(6 actually) Tutorial
- Elytron security guide
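As an illustration, each "big" guide is just a master document pulling in its sub-pages via includes; the file names here are hypothetical:

```asciidoc
= Admin Guide

include::admin-guide/management-tasks.adoc[]
include::admin-guide/management-api.adoc[]
include::admin-guide/subsystem-configuration.adoc[]
```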
The problem is that some of these guides make sense as such, but not all of
them do.
In some cases we have duplicated docs for the same thing; in others the
content is in the wrong segment.
For example, instead of having all subsystem reference docs under the Admin
Guide, some are under the Developer Guide and some even under the HA guide.
Going forward we should "refactor" the docs a bit, so we would end up with 3-4
high-quality guides.
We should also go through all docs and remove/update the outdated content.
The plan is also to make the documentation part of the WildFly codebase.
So when we submit a PR with a new feature, it would include
documentation for it as well.
Rendered docs can be built as part of our build / release process and can
be rendered to different formats;
for example the default is HTML  or PDF 
I've sent an experimental PR to show how docs would fit into the WildFly build 
Please take a look at the current docs and let me know if you have any
comments / suggestions on what we can improve before merging.
At this point I've not made many content changes, just the conversion;
content updates can come after this is merged.
I'm trying to use AspectJ to advise some core classes in WildFly/Undertow.
Specifically, I'm trying to advise some of the Undertow HttpSession methods
to get more detailed logging when sessions are created, expire, etc.
I've added AspectJ as a -javaagent which is launched on startup of
WildFly. I had to follow some of the steps listed at:
But it works; I can see that the AJ weaver is loaded and present.
My problem now is that I don't know where to put my jar of aspects for it
to be loaded/visible by the weaver for all modules. I've managed to get it
to work by modifying the io.undertow.servlet module, adding my jar in the
folder and modifying the module.xml, but that only advises the
io.undertow.servlet.* classes. And it seems like quite an ugly hack.
I tried creating an independent module for it, and declaring it as a global
module in my standalone.xml, but that didn't seem to work. Nor did
modifying the servlet module.xml and specifying my module as a dependency.
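For reference, the independent module I tried looks roughly like this; the module name and jar path are placeholders:

```xml
<!-- modules/com/example/aspects/main/module.xml (name and jar are placeholders) -->
<module xmlns="urn:jboss:module:1.3" name="com.example.aspects">
    <resources>
        <resource-root path="myaspect.jar"/>
    </resources>
    <dependencies>
        <module name="javax.servlet.api"/>
    </dependencies>
</module>

<!-- and in the ee subsystem of standalone.xml: -->
<global-modules>
    <module name="com.example.aspects"/>
</global-modules>
```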
I tried to add it to the startup parameters in the standalone.conf file,
specifying it as -classpath path/to/myaspect.jar, but that only advised the
startup WF (org.jboss) classes and none of the modules.
At the moment, I'm looking to advise any implementation of
javax.servlet.http.HttpSession.invalidate(). In AJ language, the pointcut
is simple: execution(* javax.servlet.http.HttpSession+.invalidate(..)) will
match any implementation of the HttpSession interface. However, to make it
active, I need to get that Aspect in the classpath of every module loaded
and visible to the AJ weaver.
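For concreteness, the annotation-style aspect I'm trying to apply looks roughly like this (class name and logging are simplified; it needs aspectjrt on the classpath):

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class SessionLoggingAspect {

    // Matches invalidate() on any implementation of the HttpSession interface
    @Before("execution(* javax.servlet.http.HttpSession+.invalidate(..))")
    public void logInvalidate(JoinPoint jp) {
        System.out.println("Session invalidated on "
                + jp.getTarget().getClass().getName());
    }
}
```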
Is there a "generic" place I can declare the aspects.jar so that it is
part of the classpath of every module loaded, or is my only choice to modify
every module.xml in the system?
As initial work on trimming the IO usage of our testsuite and build,
I've sent https://github.com/wildfly/wildfly-core/pull/2880
which changes wildfly-core to never produce an "inflated" build/distro with
copied artifacts, but rather to use thin server provisioning (module.xml
entries point to maven artifacts).
As part of this work, the "dist" folder is now used for just that, for
distribution and nothing else.
So unless the build is invoked with -Drelease, which is something that is done
as part of the release process, the "dist" folder will be empty and no
distribution will be produced.
The server for testing is still present in the "build" directory as it always was.
This was done to reduce unnecessary IO work and to avoid producing multiple
server builds when there is no need for them.
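To spell out the two invocations described above:

```shell
# regular developer build: thin testing server under build/, dist/ stays empty
mvn clean install

# release build: full distribution is additionally produced under dist/
mvn clean install -Drelease
```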
I am sending this mail mostly as an FYI that further work on this is going to
happen, in core and in full, and such changes might clash with some
people's workflows, where they are used to expecting the server dist to be in
the "dist" folder.
I'm looking to use AspectJ Load Time Weaving with WildFly 10. Looking
around at some posts, it is a little complicated to get WildFly launched
properly with the AJ weaver due to the way the AJ library initializes the
logging subsystem differently than WF.
Digging around, I found a config that actually works. It is documented
here (obviously some of the class names/versions have to change):
But I'm not a fan of changing my conf file to something that has hardcoded
paths/jar names in it - for example adding:
Digging around some more in AJ, I saw that as of AJ 1.8.7, there is a way
to dynamically attach the weaver to the JVM. Very cool.
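The attach-API sketch I'm experimenting with looks like this; the weaver jar path is a placeholder, on JDK 8 this needs tools.jar on the classpath, and on JDK 9+ self-attach additionally needs -Djdk.attach.allowAttachSelf=true:

```java
import com.sun.tools.attach.VirtualMachine;
import java.lang.management.ManagementFactory;

public class WeaverAttacher {

    // The runtime MX bean name is conventionally "<pid>@<hostname>"
    static String currentPid() {
        return ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
    }

    // Attach the AspectJ weaver agent to the JVM with the given pid;
    // loadAgent ends up calling the agent's agentmain entry point.
    static void attachWeaver(String pid, String weaverJarPath) throws Exception {
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            vm.loadAgent(weaverJarPath);
        } finally {
            vm.detach();
        }
    }

    public static void main(String[] args) throws Exception {
        String pid = args.length > 0 ? args[0] : currentPid();
        // placeholder path -- adjust to where aspectjweaver.jar actually lives
        String weaverJar = args.length > 1 ? args[1] : "lib/aspectjweaver.jar";
        attachWeaver(pid, weaverJar);
    }
}
```
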
But in order to use the LTW effectively, I need to ensure that the weaver
is loaded prior to WF scanning and loading any of my classes (EJB,
annotated beans, pojos, etc). But I have no ideas how to do that.
In the case of a console application, it is pretty straightforward: make
it the first item in the application's main() method. But in the case of a
JEE app, I don't know of any main() equivalent.
Is there a way to hook into the classloading mechanism of WF instead to
tell it to load the weaver if it isn't already loaded? Can this be done
from within the EAR deployment? Or is there a single point of entry that
WF accesses before scanning any of the classes in the EAR? Or is there a
simpler way of configuring or attaching the AJ Weaver? I did find an old
ticket (https://issues.jboss.org/browse/WFLY-895) that related to this
issue, but it is marked as WONT FIX.
I am not sure of the best approach at this point.
I've been experimenting with Alexey on updating a customized provisioned server using the provisioning tool.
I'm using the syncing operations  that I created a while back by porting the domain synchronization operations to standalone (to
synchronize standalone instances in a cloud environment).
I'm looking for some feedback on this approach.
Updating is the process of applying a fix pack that increments the micro version; there should be no compatibility issues. Upgrading is
the transition to a new minor version; compatibility should be preserved but there are a lot more changes.
While the mechanisms discussed here are general, they might need more refinement for an upgrade.
The use case is quite simple: *I have version 1.0.0 installed and I want to update to 1.0.1, but I have locally customized my server and I'd
like to keep those changes*.
We have several local elements to take into account:
filesystem changes (files added, removed or modified).
The basic idea is to diff the existing instance with a pure new installation of the same version then apply those changes to a new
provisioned version instance for staging.
We can keep it at the basic filesystem approach with some simple merge strategy (theirs, ours).
We can use the plugin to go into more detail. For example, using the model diff between standalone WildFly instances we can create a cli
script to reconfigure the upgraded instance in a post-installation step.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildf...>
Diffing the filesystem
The idea is to compare the instance to be upgraded with one instance provisioned with the same feature packs as the one we want to upgrade.
The plugin will provide a list of files or regexp to be excluded.
Each file will be hashed and we compare the hash + the relative path to deduce deleted, modified or added files.
For textual files we can provide a diff (and the means to apply it), but maybe that should be for a later version as some kind of
interaction with the user might be required.
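A minimal sketch of the hash-and-compare step described above; the class and method names are mine, not the actual plugin code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Stream;

public class TreeDiff {

    // SHA-1 of a file's contents, rendered as lowercase hex.
    static String sha1(Path file) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Map of relative path -> content hash for every regular file under root.
    static Map<String, String> hashTree(Path root) throws IOException {
        Map<String, String> hashes = new TreeMap<>();
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile)
                 .forEach(p -> hashes.put(root.relativize(p).toString(), sha1(p)));
        }
        return hashes;
    }

    // Compare a pristine install against a customized one: same relative path
    // with a different hash means modified; paths only on one side mean
    // added (custom only) or deleted (pristine only).
    static void diff(Map<String, String> pristine, Map<String, String> custom,
                     List<String> added, List<String> deleted, List<String> modified) {
        for (Map.Entry<String, String> e : custom.entrySet()) {
            String baseHash = pristine.get(e.getKey());
            if (baseHash == null) {
                added.add(e.getKey());
            } else if (!baseHash.equals(e.getValue())) {
                modified.add(e.getKey());
            }
        }
        for (String path : pristine.keySet()) {
            if (!custom.containsKey(path)) {
                deleted.add(path);
            }
        }
    }
}
```

The excluded-files filter from the plugin would simply be applied before hashing, so tmp/log noise never enters the maps.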
This is a specialization of the upgrading algorithm:
Filtering out some of the 'unimportant' files (tmp, logs).
Creating diff of textual files (for example the realm properties) which will be applied (merging strategy à la git).
Using an embedded standalone it creates a jboss-cli script to reconfigure the server (adding/removing extensions and reconfiguring
the subsystems).
Deleting files that were removed.
This is done on a staging upgraded instance before being copied over the old instance.
I have added a diff/sync operation in standalone that is quite similar to what happens when a slave HC connects to the DC. Thus I start the
current installation, and connect to it from an embedded server using the initial configuration and diff the models.
This is 'experimental' but it works nicely (I was able to 'upgrade' from the standalone.xml of wildfly-core to the standalone-full.xml of
).
I'm talking only about the model part; I leave the files to the filesystem 'diffing', but it will work with managed deployments, as those
are added by the filesystem part and then the deployment reference is added in the model.
For a future version of the tooling/plugin we might look for a way to interact more with the user (for example, when applying the textual
diffs, to choose what to do per file instead of globally).
Also currently the filters for excluding files are defined by the plugin but we could enrich them from the tooling also.
Update feature pack
From the initial upgrade mechanism Alexey has seen the potential to create a feature pack instead of my initial format.
Currently I'm able to create and install a feature-pack that will supersede the initial installation with its own local modifications.
Thus from my customized instance I can produce a feature pack that will allow me to reinstall the same instance. Maybe this can also be
used to produce an upgrade feature pack for patching.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildf...>
WildFly domain mode
Domain mode is a bit more complex, and we need to think about how to manage the model changes.
Those can be at the domain level or the host level, and depending on the target instance we would need to get the changes from the domain.xml
and/or the host.xml.
I'm thinking about applying the same strategy as what was done for standalone: i.e. expose the sync operations to an embedded HC.