Subsystem Inclusion Policy & Role of Feature Packs & Add-ons
by Jason Greene
Hello Everyone,
Recently there has been some confusion about how subsystems should be distributed, and whether or not they should be part of the WildFly repository.
There are three primary use-cases for distributing a subsystem.
#1 - Inclusion in an official WildFly distribution
#2 - A user installable "add-on distribution" which can be dropped on top of a WildFly Distribution (see [A])
#3 - A separate, independent, customized distribution with a differing identity, possibly built as a layer but not required to be (see [A])
If you are after #1, then the subsystem source code (defined as the portion of code which integrates with the server using the subsystem facilities) MUST be included in the WildFly repo. This is because subsystems heavily impact the stability of the server and our compliance with our strict management compatibility policy. It also allows us to keep all included subsystems up to date with core infrastructure changes such as capabilities and requirements, and the upcoming Elytron security integration. Under this approach a dedicated feature-pack is unlikely to be used, as the subsystem would likely just be part of the full feature-pack. We could very well introduce a different, more expansive feature-pack in the future defining a larger distribution footprint; however, there are currently no plans to do so.
If you are after #2, then you do not want a feature-pack, as feature-packs are just for building custom server distributions. A #2-style add-on is, by definition, not a custom server distribution but rather a set of modules built the normal Maven way.
If you are after #3, then you likely wish to use the feature-pack mechanism to make it easy to produce your custom distribution. This facility allows you to keep your source repository limited to just the new subsystems you introduce and to pull the rest of the server bits via a Maven dependency. It is important that you change the identity of the server (see [A]) so that patches for the official WildFly server are not accidentally installed.
Thanks!
[A] https://developer.jboss.org/wiki/LayeredDistributionsAndModulePathOrganiz...
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
WildFly 11 Model and Schema Version Bumps
by Darran Lofthouse
Just an FYI,
In preparation for WildFly 11 I have already bumped the schema and model versions for numerous parts of WildFly. If you plan to work on any WildFly 11 changes that would also require a bump of any of these, let me know and I can point you to a branch where the changes have already been made.
The models bumped so far are:
- Core Management Model and Schema
- Remoting Subsystem
- Undertow Subsystem
- EJB Subsystem
- Security Subsystem
Regards,
Darran Lofthouse.
Proposal to improve resource description and registration
by Jeff Mesnil
TL;DR - I propose to simplify subsystem development by moving some of the validation logic from the resource definitions to the management resource registration (MMR). The goal is to provide a static representation of the resources and let the MMR dynamically pick the “meaningful” parts.
Last week a user complained that the messaging-activemq subsystem’s statistics were not updated in domain mode.
It turned out that he was reading the metrics on the DC (/profile=full/subsystem=messaging-activemq/…) instead of reading them on the server (/host=master/server=server-one/subsystem=messaging-activemq/…).
This is a bug [1] in messaging-activemq: its resources register their metrics without checking whether that makes sense. The correct check is to look at context.isRuntimeOnlyRegistrationValid() to determine whether a resource can register its metrics (the same check also applies to runtime attributes/operations).
I looked at other subsystems and undertow has the same issue.
This check does not work well with the PersistentResourceDefinition that is used throughout the messaging-activemq and undertow subsystems. This API works best when the definition of the resources uses a bunch of static instances for the resource itself, its attributes, metrics, etc. These static instances are also used by the companion PersistentResourceXMLDescription to provide a static XML representation of the subsystem.
If I have to pass this context.isRuntimeOnlyRegistrationValid() boolean to every resource in the subsystem, I lose the static representations used by PersistentResourceDefinition and PersistentResourceXMLDescription, and I have to add a lot of simple but boilerplate code to all my resource definitions.
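For concreteness, here is a minimal sketch of that boilerplate (a hypothetical QueueDefinition with a placeholder metric handler, not the actual messaging-activemq code), assuming the flag is read from ExtensionContext.isRuntimeOnlyRegistrationValid() in the extension and passed down to each resource:

import org.jboss.as.controller.AttributeDefinition;
import org.jboss.as.controller.OperationStepHandler;
import org.jboss.as.controller.PathElement;
import org.jboss.as.controller.SimpleAttributeDefinitionBuilder;
import org.jboss.as.controller.SimpleResourceDefinition;
import org.jboss.as.controller.descriptions.NonResolvingResourceDescriptionResolver;
import org.jboss.as.controller.registry.ManagementResourceRegistration;
import org.jboss.dmr.ModelType;

class QueueDefinition extends SimpleResourceDefinition {

    static final AttributeDefinition MESSAGE_COUNT =
            new SimpleAttributeDefinitionBuilder("message-count", ModelType.LONG)
                    .setStorageRuntime() // a metric lives in the runtime, not in the model
                    .build();

    // placeholder handler that would read the metric from the runtime
    private static final OperationStepHandler READ_COUNT =
            (context, operation) -> context.getResult().set(0L);

    private final boolean registerRuntimeOnly;

    QueueDefinition(boolean registerRuntimeOnly) {
        super(PathElement.pathElement("queue"), new NonResolvingResourceDescriptionResolver());
        // the flag has to be threaded through the constructor of every resource
        this.registerRuntimeOnly = registerRuntimeOnly;
    }

    @Override
    public void registerAttributes(ManagementResourceRegistration registration) {
        super.registerAttributes(registration);
        if (registerRuntimeOnly) {
            // metrics only make sense on a process that has a runtime
            registration.registerMetric(MESSAGE_COUNT, READ_COUNT);
        }
    }
}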
The datasources subsystem does not exhibit this issue. It works around it by installing a Service at the RUNTIME step to register (resp. unregister) the statistics resource definitions when the service is started (resp. stopped). Since services are only installed when a runtime is present, this ensures that the datasources metrics are available only on a server and not on the DC.
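The shape of that workaround, roughly (an illustrative sketch, not the actual datasources code; StatisticsResourceDefinition is hypothetical):

import org.jboss.as.controller.PathElement;
import org.jboss.as.controller.registry.ManagementResourceRegistration;
import org.jboss.msc.service.Service;
import org.jboss.msc.service.StartContext;
import org.jboss.msc.service.StopContext;

class StatisticsRegistrationService implements Service<Void> {

    private final ManagementResourceRegistration parent;

    StatisticsRegistrationService(ManagementResourceRegistration parent) {
        this.parent = parent;
    }

    @Override
    public void start(StartContext context) {
        // start() only runs on a process with a runtime, so the statistics
        // resource never shows up on the DC
        parent.registerSubModel(StatisticsResourceDefinition.INSTANCE);
    }

    @Override
    public void stop(StopContext context) {
        parent.unregisterSubModel(PathElement.pathElement("statistics"));
    }

    @Override
    public Void getValue() {
        return null;
    }
}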
This looks correct, but I’m not a big fan of the solution: it makes the subsystem definition more complex to understand, and it involves boilerplate code that every subsystem providing runtime resources would have to write.
I was wondering if we could simplify the development of subsystems by moving some of the logic dealing with that use case into the ManagementResourceRegistration instead.
My proposal would be to have the resource definitions be “complete”. The resource always defines its attributes/metrics/operations.
When the MMR takes this definition and registers its different parts, it would only register the “meaningful” ones depending on the ProcessType and RunningMode. E.g. the MMR on the DC or on an admin-only server would not register metrics and runtime attributes/operations, while the MMR on a normal server would register everything.
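A sketch of what that filtering could look like inside the MMR (illustrative shape only, assuming the MMR knows its ProcessType and RunningMode):

import org.jboss.as.controller.AttributeDefinition;
import org.jboss.as.controller.OperationStepHandler;
import org.jboss.as.controller.ProcessType;
import org.jboss.as.controller.RunningMode;

class FilteringRegistration {

    private final ProcessType processType;
    private final RunningMode runningMode;

    FilteringRegistration(ProcessType processType, RunningMode runningMode) {
        this.processType = processType;
        this.runningMode = runningMode;
    }

    public void registerMetric(AttributeDefinition definition, OperationStepHandler handler) {
        if (processType == ProcessType.HOST_CONTROLLER || runningMode == RunningMode.ADMIN_ONLY) {
            // no runtime here: silently discard the registration instead of
            // making every subsystem perform this check itself
            return;
        }
        doRegisterMetric(definition, handler);
    }

    private void doRegisterMetric(AttributeDefinition definition, OperationStepHandler handler) {
        // ... the registration the MMR performs today
    }
}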
This increases the complexity of the MMR, which has to figure out whether to actually register something or discard it, but it makes writing subsystems simpler and more robust.
Brian told me there might be some exceptions (e.g. a runtime op that could be invoked on the DC), but these cases could be handled by adding metadata to the resource definitions.
This approach also segues quite well with the idea of generating subsystems using annotations. All a subsystem developer has to do is describe the subsystem resources exhaustively (using static objects now, annotations in a future version) and let the MMR decide which parts of the resources are actually registered.
To sum up, the definition of a resource is static, while its registration is dynamic.
Do you see any issue with that proposal?
As a first step, I’ll create the corresponding WFCORE issue to fix WFLY-6546 by ensuring the MMR does not register metrics if runtime registration is not valid. This should not have any API impact (but the behaviour will certainly change).
jeff
[1] https://issues.jboss.org/browse/WFLY-6546
--
Jeff Mesnil
JBoss, a division of Red Hat
http://jmesnil.net/
Feature pack provisioning
by Marko Strukelj
Currently wildfly-server-provisioning-maven-plugin always generates a full WildFly distribution. For the Keycloak project we have three different cases of provisioning, and it would be great to be able to cover them with a common WildFly-provided tool:
1) full server distribution
2) overlay distribution (unzip into an existing OOTB WildFly distribution - your problem if you use an unsupported WildFly version)
3) provision into an existing WildFly server, detecting version mismatches and configuring existing and additional subsystems for Keycloak to run properly.
The first one is what’s currently supported, and what we use.
The second one is what we currently hack up by extracting the modules directory from (1) - this use case would be better supported if wildfly-server-provisioning-maven-plugin could, for example, generate 'modules only'.
The third one requires a CLI installer tool. I’m not aware of anything currently available for that, and we are loath to develop one from scratch.
Is it realistic to expect 2) and 3) in the not-so-distant future?
- marko
Early Access builds of JDK 9 b116 & JDK 9 with Project Jigsaw b115 (#4909) are available on java.net
by Rory O'Donnell
Hi Jason/Tomaz,
Early Access b116 <https://jdk9.java.net/download/> for JDK 9 is
available on java.net; a summary of the changes is listed here
<http://download.java.net/java/jdk9/changes/jdk-9+116.html>.
Early Access b115 <https://jdk9.java.net/jigsaw/> (#4909) for JDK 9 with
Project Jigsaw is available on java.net.
Recent changes:
* in b114
  o Replaced com.apple.eawt and com.apple.eio with platform-independent
    alternatives in java.awt
* in b115
  o Per JEP 260, all non-critical types/members should be moved out of
    sun.reflect and placed into a non-exported package. Only critical APIs
    should remain in sun.reflect.
We are very interested in hearing your experiences in testing any Early
Access builds. Have you begun testing against JDK 9 and/or JDK 9 with
Project Jigsaw EA builds? Have you uncovered showstopper issues that you
would like to discuss?
We would really like to hear your findings so far, either in a reply to me
or via the mailing lists [1], [2].
Rgds, Rory
[1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/
[2] http://mail.openjdk.java.net/pipermail/jdk9-dev/
--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
best way to get extensions to apply model changes immediately at runtime?
by John Mazzitelli
I need to support runtime changes to my subsystem extension (for example, someone should be able to use the CLI to add a child resource or change a resource's attribute and have those changes take effect immediately - i.e. support Flag.RESTART_NONE rather than Flag.RESTART_RESOURCE_SERVICES).
I'm assuming I need to do this in the add step handler, but I'm confused about whether I should override AbstractAddStepHandler.populateModel or AbstractAddStepHandler.performRuntime.
Can someone fill me in on which one is recommended? I'm not sure under which conditions each of them should be used, and why.
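For reference, the shape I'm looking at (a minimal sketch; exact signatures vary across WildFly versions, and MyResourceDefinition.ATTRIBUTES is a hypothetical attribute list):

import org.jboss.as.controller.AbstractAddStepHandler;
import org.jboss.as.controller.OperationContext;
import org.jboss.as.controller.OperationFailedException;
import org.jboss.dmr.ModelNode;

class MyResourceAddHandler extends AbstractAddStepHandler {

    MyResourceAddHandler() {
        super(MyResourceDefinition.ATTRIBUTES); // hypothetical attribute list
    }

    @Override
    protected void populateModel(ModelNode operation, ModelNode model) throws OperationFailedException {
        // model step: validate the operation parameters and store them in the
        // persistent configuration model
        super.populateModel(operation, model);
    }

    @Override
    protected void performRuntime(OperationContext context, ModelNode operation, ModelNode model)
            throws OperationFailedException {
        // runtime step: install/update services so the change takes effect
        // immediately; only invoked on a process with a runtime
    }
}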
The other thing is, this seems to cover only adding child resources (or removing them; I assume it is analogous for remove step handlers). How do you intercept the change of an attribute value on an existing resource, particularly if my resource extends PersistentResourceDefinition?
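For the attribute case, a write-attribute handler looks like the relevant shape (again a hedged sketch, not necessarily the recommended approach; MyResourceDefinition.VALUE is hypothetical):

import org.jboss.as.controller.AbstractWriteAttributeHandler;
import org.jboss.as.controller.OperationContext;
import org.jboss.as.controller.OperationFailedException;
import org.jboss.dmr.ModelNode;

class MyAttributeWriteHandler extends AbstractWriteAttributeHandler<Void> {

    MyAttributeWriteHandler() {
        super(MyResourceDefinition.VALUE); // hypothetical attribute definition
    }

    @Override
    protected boolean applyUpdateToRuntime(OperationContext context, ModelNode operation,
            String attributeName, ModelNode resolvedValue, ModelNode currentValue,
            HandbackHolder<Void> handbackHolder) throws OperationFailedException {
        // push the new value into the running service here; returning false
        // signals that no reload is required (i.e. RESTART_NONE behaviour)
        return false;
    }

    @Override
    protected void revertUpdateToRuntime(OperationContext context, ModelNode operation,
            String attributeName, ModelNode valueToRestore, ModelNode valueToRevert,
            Void handback) throws OperationFailedException {
        // undo the runtime change if the overall operation rolls back
    }
}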
Thanks for any help,
John Mazz
syntax enhancement for cli boolean operation parameters
by Alexey Loubyansky
A brief announcement of a syntax enhancement in the CLI for boolean
parameters in operation requests - just to advertise it and avoid
confusion with the changes in tab-completion.
Quite some time ago somebody (probably Kabir) gave me the idea of
enhancing the syntax for boolean parameters set to true: instead of
typing :read-resource(recursive=true) all the time,
:read-resource(recursive) should be enough.
So, the presence of a boolean parameter name without a value means the
parameter is implicitly set to true by the user.
While I liked the idea, I never actually implemented it. So I mentioned
it to Jeff (jfdenise), who is getting his hands on the CLI now, and he
went ahead and did it.
So now
:read-resource(recursive) is equivalent to :read-resource(recursive=true)
Both syntaxes are allowed.
False is still set explicitly, e.g. :read-resource(recursive=false)
The absence of the parameter still means the parameter was not provided
by the user.
Tab-completion has also been enhanced: it suggests the shorter form for
true and still suggests the explicit form for false.
We hope you'll like it. Thanks Jeff for actually implementing it!
Alexey
Pattern defined RBAC scoped roles
by Brian Stansberry
Yesterday afternoon I decided to scratch an itch I've had for a year. My
scratching seems to work so I'm tossing the idea out to get feedback on
whether it's wanted.
The idea is to let users create scoped roles for WildFly's management RBAC
feature that are based on address pattern matching. For example, this
configuration would create a role where a user has non-sensitive write
permissions to resources in the logging subsystem, either on a DC or a
server:
<access-control provider="rbac">
    <pattern-scoped-roles>
        <role name="logmaint" base-role="Maintainer">
            <pattern regular-expression="(/profile=[^/]+)??/subsystem=logging(/.*)*"/>
        </role>
    </pattern-scoped-roles>
    ....
</access-control>
A role like this could be used when the server config is really meant to
be locked down after boot, but you want to allow logging tweaks to get
diagnostic data if necessary.
A scoped role in our RBAC impl is one where the user gets the
permissions of some other role if the target resource is considered "in
scope", while if it's not they get lesser permissions (just
non-sensitive read perms, or for some cases no perms at all.)
What we have now are server group scoped roles and host scoped roles,
where what's "in scope" is calculated based on a configurable list of
server groups or hosts. This new type of scoped role instead does
pattern matching of the target address against a configurable list of
regular expressions. If there is no match the user gets non-sensitive
read perms.
There can be more than one pattern; a match against any means the
scoping constraint is satisfied.
The address matching is against the CLI-style representation of the
target resource address. When authorizing JMX operations the match is
against the canonical form of the ObjectName. JMX ops against the mbeans
in the jboss.as[.expr] domains match against the CLI-style address of
the management resource underlying the facade mbeans.
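As a quick sanity check with plain java.util.regex (not the actual
authorization code), the example pattern above matches both the DC-style
and server-style logging addresses:

import java.util.regex.Pattern;

public class PatternScopeCheck {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("(/profile=[^/]+)??/subsystem=logging(/.*)*");
        System.out.println(p.matcher("/profile=full/subsystem=logging").matches());       // true (DC)
        System.out.println(p.matcher("/subsystem=logging/logger=com.example").matches()); // true (server)
        System.out.println(p.matcher("/subsystem=datasources").matches());                // false
    }
}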
I have this working; see [1]. It still needs tests, but manual testing works.
[1] https://github.com/wildfly/wildfly-core/pull/1516
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat