Does anyone have an Arquillian + WildFly config yet for testing WildFly
with Java 9? This is in the context of running our Narayana test suite,
parts of which require Arquillian + WildFly.
I would like to propose that we add support for HTTP/2 out of the box.
At the moment there are two main barriers to getting HTTP/2 to work:
- You need to set up an HTTPS connector, including generating keys etc. For
new users this is not as straightforward as it could be.
- You need to find the correct version of the Jetty ALPN jar and add it to
your boot class path. This is essentially a hack that modifies the JDK SSL
classes to allow them to support ALPN. A new version is needed for every
JDK8 release, so if you ever update the JVM HTTP/2 will stop working (JDK9
has support for ALPN so this is not necessary).
I am proposing that we do the following to address these issues:
- Add support for lazily generated self-signed certificates, and include
this in the default config. This would mean that we would have a working
HTTPS connector in the default config, although the first request would be
a bit slow as it would need to generate a new self-signed certificate for
localhost. This allows for SSL out of the box, without any impact on
startup time or any need for an installer to generate the certificate.
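A sketch of the lazy-initialization idea, with the actual certificate generation stubbed out (the `Lazy` holder and all names here are mine for illustration, not real Undertow/WildFly code): the expensive step runs at most once, on the first request, so startup pays nothing.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Generic memoizing holder: the expensive factory runs at most once, on first
// use, so server startup pays nothing and only the first HTTPS request pays.
final class Lazy<T> implements Supplier<T> {
    private final Supplier<T> factory;
    private volatile T value;

    Lazy(Supplier<T> factory) {
        this.factory = factory;
    }

    @Override
    public T get() {
        T v = value;
        if (v == null) {
            synchronized (this) { // double-checked locking: generate exactly once
                v = value;
                if (v == null) {
                    value = v = factory.get();
                }
            }
        }
        return v;
    }
}

public class LazyCertDemo {
    public static void main(String[] args) {
        AtomicInteger generations = new AtomicInteger();
        // Stand-in for the real work: generating a self-signed cert for localhost.
        Lazy<String> cert = new Lazy<>(() -> {
            generations.incrementAndGet();
            return "self-signed-cert-for-localhost";
        });
        System.out.println("generated before first request: " + generations.get());
        cert.get();
        cert.get();
        System.out.println("generated after two requests: " + generations.get());
    }
}
```

Running this prints a generation count of 0 before the first request and 1 after any number of requests, which is exactly the startup-time property described above.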
- I have dealt with the ALPN issue in Undertow using a reflection based
hack. I have created some code that parses and modifies the SSL
Server/Client hello messages to add/read ALPN information, and I then use
reflection to update the HandshakeHash maintained by the engine so the
engine's internal hash state used to generate the Finished frames matches
the data that was actually sent over the wire.
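For illustration, here is a rough sketch of what locating the ALPN extension (type 0x0010, per RFC 7301) in a ClientHello extensions block looks like. This is a simplified stand-alone parser written for this mail, not the actual Undertow code, and it only handles the extensions block itself, not the full record/handshake framing:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class AlpnExtensionParser {

    static final int ALPN_EXTENSION_TYPE = 0x0010; // RFC 7301

    // Walks a ClientHello extensions block (2-byte type, 2-byte length, data)
    // and returns the protocol names advertised in the ALPN extension, if any.
    public static List<String> parseAlpnProtocols(ByteBuffer extensions) {
        while (extensions.remaining() >= 4) {
            int type = extensions.getShort() & 0xFFFF;
            int length = extensions.getShort() & 0xFFFF;
            if (type != ALPN_EXTENSION_TYPE) {
                extensions.position(extensions.position() + length); // skip others
                continue;
            }
            List<String> protocols = new ArrayList<>();
            int listLength = extensions.getShort() & 0xFFFF; // protocol list length
            int end = extensions.position() + listLength;
            while (extensions.position() < end) {
                byte[] name = new byte[extensions.get() & 0xFF]; // 1-byte len prefix
                extensions.get(name);
                protocols.add(new String(name, StandardCharsets.US_ASCII));
            }
            return protocols;
        }
        return List.of();
    }

    public static void main(String[] args) {
        // Synthetic extensions block: an empty server_name extension followed
        // by an ALPN extension advertising h2 and http/1.1.
        ByteBuffer b = ByteBuffer.allocate(22);
        b.putShort((short) 0x0000).putShort((short) 0);  // server_name, empty
        b.putShort((short) 0x0010).putShort((short) 14); // ALPN, 14 bytes of data
        b.putShort((short) 12);                          // protocol name list length
        b.put((byte) 2).put("h2".getBytes(StandardCharsets.US_ASCII));
        b.put((byte) 8).put("http/1.1".getBytes(StandardCharsets.US_ASCII));
        b.flip();
        System.out.println(parseAlpnProtocols(b)); // [h2, http/1.1]
    }
}
```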
Yes I am aware that this is a massive hack, however I think it is
preferable to the current boot classpath hack, which has a lot of
drawbacks. If this ever stops working at some point due to internal JDK
changes the boot classpath hack would still be usable, however I don't
think this is particularly likely, as the part of the JDK that this
modifies seems unlikely to change.
I think this would be a great usability feature, allowing developers to get
started with HTTPS and HTTP/2 straight away.
TL;DR - I propose to simplify subsystem development by moving some of the validation logic from the resource definitions to the management resource registration. The goal is to provide a static representation of the resources and let the MMR dynamically pick the “meaningful” parts.
Last week a user complained that the messaging-activemq subsystem's statistics were not updated in domain mode.
It turned out that he was reading the metrics on the DC (/profile=full/subsystem=messaging-activemq/…) instead of reading them on the server (/host=master/server=server-one/subsystem=messaging-activemq/…).
This is a bug in messaging-activemq: its resources register their metrics without checking whether that makes sense. The correct check is to look at context.isRuntimeOnlyRegistrationValid() to determine whether a resource can register its metrics (the same check also applies to runtime attributes/operations).
I looked at other subsystems and undertow has the same issue.
This check does not work well with the PersistentResourceDefinition that is used throughout the messaging-activemq and undertow subsystems. This API works best when the definition of the resources uses a bunch of static instances for the resource itself, its attributes, metrics, etc. These static instances are also used by the companion PersistentResourceXMLDescription to provide a static XML representation of the subsystem.
If I have to pass this context.isRuntimeOnlyRegistrationValid() boolean to every resource in the subsystem, I lose the static representations used by PersistentResourceDefinition and PersistentResourceXMLDescription, and I have to add a lot of simple but boilerplate code to all my resource definitions.
The datasources subsystem does not exhibit this issue. It works around it by installing a Service at the RUNTIME step to register (resp. unregister) statistics resource definitions when the service is started (resp. stopped). Since services are only installed when runtime is supported, this ensures that the datasources metrics are available only on the server and not on the DC.
It looks correct but I'm not a big fan of this solution. It makes the subsystem definition more complex to understand, and it also involves boilerplate code that every subsystem providing runtime support would have to write.
I was wondering if we could simplify the development of subsystems by moving some of the logic dealing with that use case into the ManagementResourceRegistration instead.
My proposal would be to have the resource definitions be “complete”. The resource always defines its attributes/metrics/operations.
When the MMR takes this definition and registers the different parts, it would only register the "meaningful" ones depending on the ProcessType and RunningMode. E.g. the MMR of the DC or an admin-only server would not register metrics & runtime attributes/operations, while the MMR on a normal server would register everything.
This increases the complexity of the MMR, which has to figure out whether to actually register something or discard it, but it makes writing subsystems simpler and more robust.
Brian told me there might be some exceptions (e.g. a runtime op that could be invoked on the DC), but these cases could be handled by adding metadata to the resource definitions.
This approach segues quite well with the idea of generating subsystems using annotations. All the subsystem developers have to do is describe the subsystem resources extensively (using static objects now, annotations in a future version) and let the MMR decide which parts of the resources are actually registered.
To sum up, the definition of a resource is static, while its registration is dynamic.
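To make the idea concrete, here is a toy sketch of a "complete" static definition whose registration is filtered dynamically. All of the types below are hypothetical stand-ins invented for this mail, not the real WildFly management types:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-ins, not the real WildFly types.
enum ProcessType { DOMAIN_CONTROLLER, STANDALONE_SERVER }
enum Kind { CONFIGURATION, METRIC, RUNTIME_ONLY }

// The resource definition is "complete": it always declares all of its parts.
record AttributeDef(String name, Kind kind) {}

// Stand-in for the ManagementResourceRegistration: it decides what is meaningful.
class Registration {
    private final ProcessType processType;
    final Map<String, AttributeDef> registered = new LinkedHashMap<>();

    Registration(ProcessType processType) {
        this.processType = processType;
    }

    void register(AttributeDef attr) {
        // Runtime-only registration is not valid on the DC, so metrics and
        // runtime attributes are silently discarded there instead of each
        // subsystem having to check by hand.
        boolean runtimeValid = processType != ProcessType.DOMAIN_CONTROLLER;
        if (!runtimeValid && attr.kind() != Kind.CONFIGURATION) {
            return;
        }
        registered.put(attr.name(), attr);
    }
}

public class MmrFilteringDemo {
    public static void main(String[] args) {
        AttributeDef[] complete = {
            new AttributeDef("queue-address", Kind.CONFIGURATION),
            new AttributeDef("messages-added", Kind.METRIC),
        };
        Registration dc = new Registration(ProcessType.DOMAIN_CONTROLLER);
        Registration server = new Registration(ProcessType.STANDALONE_SERVER);
        for (AttributeDef a : complete) {
            dc.register(a);
            server.register(a);
        }
        System.out.println("DC sees:     " + dc.registered.keySet());
        System.out.println("server sees: " + server.registered.keySet());
    }
}
```

The subsystem declares both attributes once, statically; the DC registration ends up with only the configuration attribute, while the server registration gets both.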
Do you see any issue with that proposal?
As a first step, I'll start by creating the corresponding WFCORE issue to fix the WFLY-6546 issue by ensuring the MMR does not register metrics if runtime registration is not valid. This should not have any API impact (but the behaviour will certainly change).
Java 9 modules do not have a concept of a slot, and are identified only
by name. On the other hand, the module slot in JBoss Modules is
essentially an extension of the name, and is used mainly as a helper to
name parsing for things like the filesystem module loader to allow easy
multi-version or parallel installation support. A few projects use
slots for other purposes. In many module loaders, slots are not used at
all and are allowed to default to "main".
Among the changes coming to JBoss Modules for Java 9, my current plan
for this is to migrate towards the Java 9 way of doing things and
support only a general name field. For compatibility purposes, the
ModuleIdentifier API will continue to function, until/unless it is clear
that all major users have migrated off of it. It will work as a
frontend to plain String names - a ModuleIdentifier with name "name" and
slot "slot" will be translated behind the scenes as a module named
"name:slot". A module with a slot of "main" will be translated as just
"name". A simple character escaping scheme will be employed to ensure
that there is a lossless two-way mapping from plain names to
ModuleIdentifier-style names, in the event that there is a ':' in the
name part of the ModuleIdentifier, though in practice this may not come up.
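A sketch of what such an escaping scheme could look like (my own illustration, not the actual JBoss Modules implementation; this version deliberately ignores names containing backslashes):

```java
public class ModuleNameCompat {

    // Maps a (name, slot) pair to a plain String name. A ':' inside the name
    // part is escaped so the mapping stays lossless in both directions, and
    // the "main" slot disappears entirely.
    static String toName(String name, String slot) {
        String escaped = name.replace(":", "\\:");
        return "main".equals(slot) ? escaped : escaped + ":" + slot;
    }

    // Splits a plain name back into { name, slot }, defaulting the slot to "main".
    static String[] toIdentifier(String canonical) {
        // Scan for the last ':' that is not escaped; everything after it is the slot.
        for (int i = canonical.length() - 1; i >= 0; i--) {
            if (canonical.charAt(i) == ':' && (i == 0 || canonical.charAt(i - 1) != '\\')) {
                return new String[] {
                    unescape(canonical.substring(0, i)), canonical.substring(i + 1)
                };
            }
        }
        return new String[] { unescape(canonical), "main" };
    }

    private static String unescape(String s) {
        return s.replace("\\:", ":");
    }

    public static void main(String[] args) {
        System.out.println(toName("org.jboss.example", "main")); // org.jboss.example
        System.out.println(toName("org.jboss.example", "1.2"));  // org.jboss.example:1.2
        String[] id = toIdentifier("org.jboss.example:1.2");
        System.out.println(id[0] + " / " + id[1]);               // org.jboss.example / 1.2
    }
}
```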
The existing module loaders can continue to function more or less as
they are. For filesystem modules using module.xml, the slot could still
be used by way of the compatibility syntax scheme above. The filesystem
module loader will continue to use the same file name mapping scheme for
now, using the aforementioned compatibility scheme to achieve the same
effect that slots do now; we can look at ways to transition off of that
later if it proves necessary to do so.
The deployment module loader in WildFly can be transitioned to using
plain names easily, and this can probably be done at any time. We can
keep WildFly management APIs which reference modules as they are for now
- if a slot is present, it could simply be appended to the given module
name after a dividing ":", otherwise the module name is used as-is. The
slot attributes could be deprecated at any time.
Overall though I think the best way of approaching the change is that we
start thinking of "name:slot" as merely a ModuleLoader-specific name
syntax policy that some loaders use, and some do not. I suspect that
some module loaders will actively benefit from not having to deal with
the annoying possibility that a slot will be present and will not be the
expected "main" value; having a simple unrestricted String name allows
each ModuleLoader to have complete control over their syntax policy,
which is something that JBoss Modules has been moving towards for some time.
Ultimately slots are a pretty limited tool and are already essentially a
facade over a plain name, with a very thin convenience class over the
top of it to implement a parsing policy. While many people have taken
advantage of slots in many ways, it is my view that moving this
logic/policy into each module loader will afford more flexibility than
does simply dividing names into two fields. The ModuleIdentifier class
could be preserved as a convenience, though I would not recommend its
use (hence deprecation), especially as it may map awkwardly into things
like Java 9 module-aware stack traces. However this is something that
can be discussed before any decision is reached.
The estimated time frame for these changes relates to the time frame and
progress of Java 9, so it is not clear at the moment exactly when this
must happen, but it is certain that the changes will not
occur before WildFly 12. Hopefully this will give everyone enough time
to recover from the shock. :-)
Today, while trying to help some Hibernate users on SO, I stumbled on a
question regarding an application being developed with Spring - and
apparently Spring Boot (?) - and deployed on WildFly.
Clearly the poster is running into duplicate Hibernate classes, or possibly
a mismatch between the version he's expecting and the app server's, or more
likely a mismatch between the Hibernate ORM and Hibernate Search versions
being loaded.
I'm familiar with JipiJapa, but while this automates some things -
like adding a module dependency to Hibernate ORM and/or Hibernate
Search automatically - this logic is usually controlled by application
settings in the `persistence.xml` and is aimed at standard JEE
deployments - so I guess it might actually be interfering in this case.
Beyond helping the specific question, do we have a general set of
tests or recommendations - maybe some example template - for Spring
Boot users on WildFly?
Anyone maybe interested in contributing some Spring related examples?
I've never used Spring myself but I guess I'd like to have a look to
better understand what kind of issues such users might bump into, and
what we could do to make it easier.
Jeff and I were chatting about https://issues.jboss.org/browse/WFCORE-1157 last week. There is currently a PR (https://issues.jboss.org/browse/WFCORE-1157?devStatusDetailDialog=pullreq...) to allow listening to ControlledProcessState state changes. This is done via users registering NotificationHandlers on the runtime-configuration-state.
Since the notification handlers are executed asynchronously, there is no guarantee that e.g. on a stop the notification handler is triggered for the 'stopping' and 'stopped' (the PR introduces this latter state) state changes, since the server may be down before this happens. The PR works around this by making the controller execute the runtime-configuration-state handlers synchronously. However, this then means that the standard notifications and the runtime-configuration-state notifications end up in separate streams, so that the 'stopping' handler may be invoked before the standard/async notification handlers reflecting earlier changes.
In fact, looking at this a bit closer, the NonBlockingNotificationSupport class uses a thread pool with several threads. This means that for the standard async notifications, it is very likely that the handler for notification1 gets invoked before notification2's handler, but it is _not_ guaranteed. If the thread processing notification1 is paused for whatever reason, notification2 may end up being handled first. Should we change the executor in NonBlockingNotificationSupport to be a single-thread executor?
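A quick sketch of why a single-thread executor preserves ordering while still keeping handlers off the emitting thread. This is plain java.util.concurrent, not the actual NonBlockingNotificationSupport code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedNotificationsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Handlers stay asynchronous with respect to the emitter, but a single
        // worker thread guarantees they run in emission order.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        List<Integer> handled = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 1000; i++) {
            final int sequence = i;
            executor.submit(() -> handled.add(sequence)); // "deliver" notification i
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        boolean inOrder = true;
        for (int i = 0; i < handled.size(); i++) {
            if (handled.get(i) != i) {
                inOrder = false;
                break;
            }
        }
        // With a multi-thread pool this could print false; with one thread it cannot.
        System.out.println("handled " + handled.size() + " notifications in order: " + inOrder);
    }
}
```

Swapping in `Executors.newFixedThreadPool(4)` makes the in-order check flaky, which is exactly the hazard described above.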
Jeff also suggested, perhaps keeping the runtime-configuration-state notifications as asynchronous, but to add some constructs to make sure that these always get executed before server shutdown. This would keep the functionality from this PR where the notifications are always invoked, and also make sure that the order is preserved.
The upcoming DMR 1.4.0 version will introduce a DMR Streaming API.
The idea for this new feature grew while I was working on the DMR-9 issue.
In short, the news in the upcoming JBoss DMR 1.4.0:
* it will be compilable on JDK8 and above
* it will be 100% backward compatible at the binary level with previous releases
* it replaces the old cookcc based parser with a new one based on DMR
* DMR parsing will be several times faster even with the old model-based API
* the new DMR streaming API is highly memory efficient and really very fast
One example of how to use the new DMR streaming API can be seen here.
I believe this new DMR Streaming API will become very handy for many of us
because it opens new opportunities to decrease memory and CPU usage.
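To illustrate the general shape of a streaming writer, here is a hypothetical miniature written for this mail (emitting JSON-style output; it is not the actual new DMR API): values go straight to the output, so no intermediate ModelNode tree is allocated.

```java
import java.io.IOException;

// Hypothetical miniature of a streaming writer, not the real DMR API: each
// call appends directly to the output, so memory use stays flat regardless
// of document size.
final class MiniStreamWriter {
    private final Appendable out;
    private boolean needComma;

    MiniStreamWriter(Appendable out) {
        this.out = out;
    }

    MiniStreamWriter startObject() throws IOException {
        comma();
        out.append('{');
        needComma = false;
        return this;
    }

    MiniStreamWriter endObject() throws IOException {
        out.append('}');
        needComma = true;
        return this;
    }

    MiniStreamWriter key(String name) throws IOException {
        comma();
        out.append('"').append(name).append("\":");
        needComma = false;
        return this;
    }

    MiniStreamWriter value(String v) throws IOException {
        comma();
        out.append('"').append(v).append('"');
        needComma = true;
        return this;
    }

    private void comma() throws IOException {
        if (needComma) {
            out.append(',');
        }
    }
}

public class StreamWriterDemo {
    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        new MiniStreamWriter(sb)
            .startObject()
            .key("operation").value("read-attribute")
            .key("name").value("messages-added")
            .endObject();
        System.out.println(sb); // {"operation":"read-attribute","name":"messages-added"}
    }
}
```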
At the moment, in the current prototype I didn't touch the DMR object model,
because the new DMR Streaming API does not (and most probably will not)
support the pretty print feature (because of its focus on speed and
efficiency). But I'm considering fixing the object model writing flow when
pretty print is off. Fixing it would speed it up significantly. Any objections?
As a final note, today I ran a few benchmarks (see attachments).
In short, here are some outcomes from the benchmark results:
* Writing small DMR structures in DMR format is 2.4 times faster with
the new DMR streaming API
* Writing small DMR structures in JSON format is 2.6 times faster with
the new DMR streaming API
* Reading small DMR structures in DMR format is 3.9 times faster with
the old DMR object model API
* Reading small DMR structures in JSON format is 3.7 times faster with
the old DMR object model API
* Reading small DMR structures in DMR format is 5 times faster with
the new DMR streaming API
* Reading small DMR structures in JSON format is 4.4 times faster with
the new DMR streaming API
* Reading big DMR structures in DMR format is 5.2 times faster with
the old DMR object model API
* Reading big DMR structures in JSON format is 6 times faster with the
old DMR object model API
* Reading big DMR structures in DMR format is 7.6 times faster with
the new DMR streaming API
* Reading big DMR structures in JSON format is 8.8 times faster with
the new DMR streaming API
PS: Hopefully the new DMR streaming API will integrate seamlessly with GWT
(used in our console).
I have written up analysis/design notes for the new notification registrar mechanism at https://developer.jboss.org/wiki/DesignNotesForAbilityToRegisterAListener...
My main doubt is about the usefulness of the NotificationRegistrarContext.getModelControllerClient() method. The NotificationRegistrarContext is used by NotificationRegistrar.registerNotificationListeners(), which in turn is called by a service's start() method. The service is installed by an add handler. Am I right in remembering that calling the ModelControllerClient execute methods at this stage is a bad thing? I mention that the MCC can be cached for later use by the handlers, which on one hand I think should be ok since the handlers are executed asynchronously, but on the other hand having notification handlers mess around with the model seems a bit strange as well.
Recently a few of us had a meeting to discuss a new provisioning/patching
system for Wildfly. I have written up the notes for this meeting.
The document contains the full details, but the basic idea is to design a
provisioning/patching/package management system for Wildfly and downstream
products. This will not be based on our existing patching or feature pack
code, but should take over the responsibilities of both.
Obviously this is still in the early stages, and any feedback is welcome.