Subsystem Inclusion Policy & Role of Feature Packs & Add-ons
by Jason Greene
Hello Everyone,
Recently there has been some confusion about how subsystems should be distributed, and whether or not they should be part of the WildFly repository.
There are three primary use-cases for distributing a subsystem.
#1 - Inclusion in an official WildFly distribution
#2 - A user installable "add-on distribution" which can be dropped on top of a WildFly Distribution (see [A])
#3 - A separate, independent, customized distribution with a differing identity. Possibly, but not necessarily, built as a layer (see [A])
If you are after #1, then the subsystem source code (defined as the portion of code which integrates with the server using the subsystem facilities) MUST be included in the WildFly repo. This is because subsystems heavily impact the stability of the server and our compliance with our strict management compatibility policy. It also allows us to keep all included subsystems up to date with core infrastructure changes such as capabilities and requirements, and the upcoming Elytron security integration. Under this approach, a feature-pack is unlikely to be used, as the subsystem would likely just be part of the full feature-pack. It could very well be that we introduce a different, more expansive feature-pack in the future defining a larger distribution footprint; however, there are currently no plans to do so.
If you are after #2, then you do not want a feature-pack, as feature-packs are just for building custom server distributions. If your use-case is #2, you are by definition not producing a custom server distribution, but rather a set of modules built the normal Maven way.
If you are after #3, then you likely wish to use the feature-pack mechanism to make it easy to produce your custom distribution. This facility would allow you to keep your source repository limited to just the new subsystems you introduce, and pull the rest of the server bits via a Maven dependency. It is important that you change the identity of the server (see [A]), so that patches for the official WildFly server are not accidentally installed.
Thanks!
[A] https://developer.jboss.org/wiki/LayeredDistributionsAndModulePathOrganiz...
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
An integration test adding JBoss Module and running management commands?
by Peter Palaga
Hi *,
I wonder if we already have an integration test that needs to add a
custom JBoss Module and run some management commands.
I'd like to add an integration test for
https://issues.jboss.org/browse/WFLY-7412 . I was able to do everything
I need using various Maven plugins (most notably WF Plugin and some
resource copying) in this test project: https://github.com/ppalaga/WFLY-7412
However, I am somewhat hesitant to submit my solution to the WF code base,
because it looks too different from the usual Arquillian way of writing
itests.
Therefore, could anybody perhaps point me to an Arquillian (or any other
existing) test in the WF code base that runs management commands, installs
custom JBoss modules, deploys a simple test app and then sends some HTTP
requests against the container?
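To make it concrete, this is roughly the shape of test I would like to end up
with (just a sketch; the class names, the management operation and the web
resource below are placeholders, not anything that exists in the code base):

import java.net.HttpURLConnection;
import java.net.URL;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.container.test.api.RunAsClient;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.as.arquillian.api.ServerSetup;
import org.jboss.as.arquillian.api.ServerSetupTask;
import org.jboss.as.arquillian.container.ManagementClient;
import org.jboss.dmr.ModelNode;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.StringAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
@RunAsClient
@ServerSetup(Wfly7412TestCase.Setup.class)
public class Wfly7412TestCase {

    // runs management commands (and could install the custom JBoss module)
    // before the deployment goes out, and reverts them afterwards
    public static class Setup implements ServerSetupTask {

        @Override
        public void setup(ManagementClient managementClient, String containerId) throws Exception {
            // the custom module would be installed here, e.g. by copying a
            // module.xml plus jar under the server's modules directory
            ModelNode op = new ModelNode();
            op.get("operation").set("write-attribute");      // placeholder operation
            op.get("address").add("subsystem", "logging");    // placeholder address
            op.get("name").set("add-logging-api-dependencies");
            op.get("value").set(false);
            managementClient.getControllerClient().execute(op);
        }

        @Override
        public void tearDown(ManagementClient managementClient, String containerId) throws Exception {
            // revert the management changes and remove the custom module
        }
    }

    @Deployment
    public static WebArchive deployment() {
        // simple test app; a real test would deploy something that uses the module
        return ShrinkWrap.create(WebArchive.class, "wfly7412.war")
                .addAsWebResource(new StringAsset("ok"), "index.txt");
    }

    @ArquillianResource
    private URL url;

    @Test
    public void httpRequestAgainstContainer() throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url, "index.txt").openConnection();
        Assert.assertEquals(200, connection.getResponseCode());
    }
}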
Thanks,
Peter
WildFly status listener
by Gytis Trikleris
Hello,
I'm wondering if there is a way to register a listener which would be
invoked when the server status changes, more specifically when the
application server has completed start-up.
The reason is that after commit [1] was introduced, our REST transaction
tests started to fail. The cause seems to be a REST service call during
the start of one of our services. That call doesn't necessarily have to be
executed during the service start. However, the sooner it's done the
better, so if it were possible to register some sort of callback to be
invoked once start-up is done, that would be great.
Thanks,
Gytis
[1]
https://github.com/wildfly/wildfly/commit/d56cd18137d3acbcb5027744d5ce57f...
Re: [wildfly-dev] EJB Transactions Graceful Shutdown
by Stuart Douglas
On Sat, Dec 3, 2016 at 3:40 AM, Flavia Rainone <frainone(a)redhat.com> wrote:
> Hi,
>
> I'm creating this thread to discuss the remaining details of graceful
> shutdown for ejb transactions.
>
> This is more or less what I've done so far:
>
> https://github.com/fl4via/wildfly/commit/7017146522af9a979a8a8e0c92039e6a...
>
> While discussing this in the hip chat yesterday, Stuart mentioned that maybe
> we could have the transactions subsystem responsible for keeping track of
> how many active transactions we have, instead of putting that code in
> EjbRemoteTransactionsRepository.
>
> Stuart, does that include having the suspend callback being done at
> transactions subsystem as well? I'm thinking maybe not, because there are
> two points in the ejb subsystem we need to know if transactions suspension
> is over:
>
No, that still has to be handled at an EJB subsystem level.
Conceptually this is similar to what was done for the XTS subsystem, so
it should probably use a similar design. Ideally, while the server is
in the running state the only graceful-related code that runs is the
control point request tracking; however, this may not be possible.
One other thing that came up in our HipChat discussion yesterday is that
TX-level graceful shutdown actually has some significant drawbacks, as
you cannot send out the module unavailability message until all the
transactions have been closed. This means that while we are waiting
for transactions to complete, the node will still be part of a cluster,
and clients will send it requests that will be immediately rejected.
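To sketch the sort of tracking I mean (purely illustrative, these class names
do not exist in WildFly): the transactions side would just count in-flight
transactions and fire a callback once suspend has been requested and the
count reaches zero, and ejb3 would use that callback to tell its control
point that the module is no longer available. Something like:

import java.util.concurrent.atomic.AtomicInteger;

// illustrative sketch only, not an existing WildFly class
public class ActiveTransactionTracker {

    private final AtomicInteger active = new AtomicInteger();
    private volatile boolean suspendRequested;
    private volatile Runnable allCompleteCallback;

    public void transactionStarted() {
        active.incrementAndGet();
    }

    public void transactionFinished() {
        // when the last active transaction completes after suspend was requested,
        // notify the listener (races between suspend() and the last completion
        // are ignored here for brevity)
        if (active.decrementAndGet() == 0 && suspendRequested) {
            Runnable callback = allCompleteCallback;
            if (callback != null) {
                callback.run();
            }
        }
    }

    public void suspend(Runnable callback) {
        this.allCompleteCallback = callback;
        this.suspendRequested = true;
        if (active.get() == 0) {
            callback.run();
        }
    }
}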
Stuart
> - at EjbSuspendInterceptor: if it is over, no request is allowed; if it is
> not over, we need to check if the current invocation contains a reference to
> an active transaction
>
> - at some point, we need to let the control point notify that the ejb module
> is no longer available to the ejb client after transaction suspension is over,
> i.e., we need to do that when suspend has been requested and there are no
> remaining active transactions.
>
> On the other hand, it is hard to draw the line between what should be in the
> transactions subsystem and what shouldn't. If the callback is done at the
> transactions subsystem, we need a way of having ejb3 notified that it is
> done. If it is not done at the transactions subsystem, ejb3 has to be notified
> of the active transactions going to zero, which seems like a lot of overhead,
> so from this point of view maybe the callback should be in the transactions
> subsystem after all.
>
> Stuart and Gytis, any thoughts?
>
>
> --
> Flavia Rainone
> Principal Software Engineer
> JBoss EAP/WildFly Team
> M: (+55) 11 981-225-466
>
> Red Hat.
> Better technology.
> Faster innovation.
> Powered by community collaboration.
Accessing system properties in a subsystem
by Tom Jenkinson
Hi,
I have a subsystem that configures itself from system properties.
For example:
<system-properties>
    <property name="RecoveryEnvironmentBean.expiryScannerClassNames"
              value="com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner com.arjuna.ats.internal.arjuna.recovery.AtomicActionExpiryScanner"/>
</system-properties>
In earlier revisions of WFLY this worked fine. However, I am now seeing that
the system property is not set until after my subsystem has started. I can
tell this because I have breakpoints where I process the property. I can see
"MSC service thread 1-4" attempting to process the property (which is not
set). I do later see messages that suggest the system property is set, but
only at that later point:
2016-12-12 17:57:48,042 TRACE
[org.jboss.as.controller.management-operation] (Controller Boot Thread)
Final response for step handler
org.jboss.as.server.operations.SystemPropertyAddHandler@784c8c5f handling
add in address [("system-property" =>
"RecoveryEnvironmentBean.expiryScannerClassNames")] is {"outcome" =>
"success"}
2016-12-12 17:57:48,093 TRACE
[org.jboss.as.controller.management-operation] (Controller Boot Thread)
Final response for step handler
org.jboss.as.controller.ValidateModelStepHandler@87b4493 handling
internal-model-validation in address [("system-property" =>
"RecoveryEnvironmentBean.expiryScannerClassNames")] is {"outcome" =>
"success"}
Does my subsystem need to depend on something to get the old behaviour of
being started after system properties are processed?
My subsystem is the transaction one and the service is the recovery manager.
Thanks!
Tom
Time to remove *-elytron.xml Configurations
by Darran Lofthouse
I think it is now time to start removing the *-elytron.xml
configurations from the feature packs; these were added so we could
start to get Elytron enabled in isolation. As more and more is
integrated it makes less sense to keep the isolation, and if anything it
is starting to make things harder, for example with testing.
The default configuration I have planned is for components to be updated
to reference Elytron and for the Elytron configuration to be closely
aligned with the existing default configuration, i.e. Digest
authentication backed by a properties file, plus local authentication.
We did consider whether we should enable stronger authentication immediately,
but that would break the existing clients already out there. If we do this
in stages, the clients should have had a chance to be updated to Elytron,
so when we do switch to stronger authentication by default the clients will
be ready.
So step 1, I will move the Elytron extension and subsystem definition(s)
into the existing configurations we ship and remove the *-elytron.xml
definitions.
We will then incrementally update resources that reference security
services to reference Elytron capabilities.
Regards,
Darran Lofthouse.
JDK 9 b148 including a refresh of the module system is available on java.net
by Rory O'Donnell
Hi Jason/Tomaz,
JDK 9 build b148 <https://jdk9.java.net/download/> includes an important
refresh of the module system [1]; a summary of changes is listed here
<http://download.java.net/java/jdk9/changes/jdk-9+148.html>.
*This refresh includes a disruptive change that is important to understand.*
For those that have been trying out modules with regular JDK 9 builds,
be aware that `requires public` changes to `requires transitive`.
In addition, the binary representation of the module declaration
(module-info.class) has changed so that you need to recompile any
modules that were compiled with previous JDK 9 builds.
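For example, a module declaration that used to say `requires public` now has
to be written like this (the module names are made up, just to show the syntax):

// module-info.java
module com.example.app {
    // was: requires public com.example.api;
    requires transitive com.example.api;
}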
As things stand today in JDK 9, you can use setAccessible to break into
non-public elements of any type in exported packages. However, it cannot
be used to break into any type in a non-exported package. The current
specified behavior was a compromise for the initial integration of the
module system. It is of course not very satisfactory, hence the
#AwkwardStrongEncapsulation issue [2] on the JSR 376 issues list. With
the updated proposal in the JSR, this refresh changes setAccessible
further so that it cannot be used to break into non-public types, or
non-public elements of public types, in exported packages. Code that
uses setAccessible to hack into the private constructor of
java.lang.invoke.MethodHandles.Lookup will be disappointed, for example.
This change will expose hacks in many existing libraries and tools. As a
workaround, a new command line option `--add-opens` can be used to
open specific packages for "deep reflection". For example, a really
popular build tool fails with this refresh because it uses setAccessible
+ core reflection to hack into a private field of an unmodifiable
collection so that it can mutate it, facepalm! This code will continue
to work as before when run with `--add-opens
java.base/java.util=ALL-UNNAMED` to open the package java.util in module
java.base to "all unnamed modules" (think class path).
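As a contrived illustration of that kind of hack (not taken from any
particular tool), the following fails at the setAccessible call with this
refresh unless the JVM is started with the --add-opens option shown in the
comment:

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class DeepReflectionHack {
    public static void main(String[] args) throws Exception {
        List<String> unmodifiable =
                Collections.unmodifiableList(new ArrayList<>(Arrays.asList("a", "b")));
        // the backing collection is held in the private field "c" of the
        // non-public class java.util.Collections$UnmodifiableCollection
        Field field = Class.forName("java.util.Collections$UnmodifiableCollection")
                .getDeclaredField("c");
        // throws InaccessibleObjectException with this refresh unless run with:
        //   --add-opens java.base/java.util=ALL-UNNAMED
        field.setAccessible(true);
        @SuppressWarnings("unchecked")
        List<String> backing = (List<String>) field.get(unmodifiable);
        backing.add("c"); // mutates the "unmodifiable" list
    }
}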
*Any help reporting issues to popular tools and libraries would be
appreciated.*
A debugging aid that is useful for identifying issues is to run with
-Dsun.reflect.debugModuleAccessChecks=true to get a stack trace when
setAccessible fails; this is particularly useful when code swallows
exceptions without any logging.
Rgds, Rory
[1]
http://mail.openjdk.java.net/pipermail/jdk9-dev/2016-November/005276.html
<http://mail.openjdk.java.net/pipermail/jpms-spec-experts/2016-October/000...>
[2]
http://openjdk.java.net/projects/jigsaw/spec/issues/#AwkwardStrongEncapsu...
--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland
JBoss Modules - "advanced" PathFilter for ResourceLoaderSpec?
by Jaikiran Pai
(Using this list since I couldn't find a place to ask JBoss Modules
questions anywhere else; feel free to direct me there if there is one.)
Hello everyone :)
I have been using JBoss Modules in one of the projects I'm involved in.
The usage there is pretty similar to how we use it for setting up
classloaders for deployments in the WildFly server. A new module M1 gets
dynamically created and assigned to a component, and this module is given
dependencies on some pre-defined static modules A, B, C and so on.
M1 also gets N resource roots (backed by ResourceLoaderSpec),
each pointing to a jar file within some well-known directory. To put it
in some sort of code, it looks like:
// add each jar as a resource root for the module
for (final File jar : jars) {
    final ResourceLoader jarResourceLoader;
    try {
        jarResourceLoader = ResourceLoaders.createJarResourceLoader(jar.getName(), new JarFile(jar));
    } catch (IOException e) {
        // log and continue
        logger.warn("....", e);
        continue;
    }
    moduleSpecBuilder.addResourceRoot(ResourceLoaderSpec.createResourceLoaderSpec(jarResourceLoader));
}
All works fine without any issues, and the module M1 has access to the
resources in these jars. Now, there are times when the components for
which I've created this module M1 and attached these jars "accidentally"
ship/package jars which have resources (classes, to be precise) that are
also exposed/present in one of the dependency modules (remember A, B,
C...). This then leads to the same old thing where I have to go back and
tell my users not to package such jars.
I want to try and make this a bit more robust and get away from having
to tell users not to package xyz jars. I had a look at
ResourceLoaderSpec and it takes a PathFilter
https://github.com/jboss-modules/jboss-modules/blob/1.x/src/main/java/org...
which gets me one step closer to what I want to achieve. So if I know
that the static modules A, B, C etc. expose classes belonging to the
package foo.bar.blah, I can set up a PathFilter on these jar resource
loaders to skip/decline the path in the accept() method. I think that
should work out fine (I need to test it tonight), and I wouldn't have to
worry that some jar packaged within that component will introduce
classes belonging to the foo.bar.blah package.
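In other words, something along these lines per jar resource loader (with
foo/bar/blah hardcoded purely for illustration; PathFilter is
org.jboss.modules.filter.PathFilter):

// illustration only: reject the foo.bar.blah package (known to be exported by
// the static modules A, B, C) from this jar's resource root, so the classes
// from the dependency modules always win
final PathFilter excludeKnownPackages = new PathFilter() {
    @Override
    public boolean accept(final String path) {
        return !path.equals("foo/bar/blah") && !path.startsWith("foo/bar/blah/");
    }
};
moduleSpecBuilder.addResourceRoot(
        ResourceLoaderSpec.createResourceLoaderSpec(jarResourceLoader, excludeKnownPackages));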
However, although it might work, I then have to keep a very close vigil
on, or rather keep inspecting, what packages (or resources in general)
the modules A, B and C provide. Instead, what I'm thinking of is a
"smart" PathFilter, or anything along those lines, whose semantics would
be to "skip/don't accept/filter out all those resources from a resource
root if the resource is provided by any of the specified modules". So
something like:

ResourceLoaderSpec.createResourceLoaderSpec(jarResourceLoader,
        PathFilters.excludeResourcesExposedByModules("A:slot", "B:slot", "C:slot" ...));
Having looked at the JBoss Modules code, I don't think this is possible
currently. But that's OK. What I really want to check is: is this
something that would be feasible to implement (it doesn't have to be in
JBoss Modules itself), and are there any obvious issues with the
approach? Also, is this something that would be useful to have in JBoss
Modules itself?
-Jaikiran