Capability integration from services and DeploymentUnitProcessors
by Brian Stansberry
tl;dr: DUPs may need access to the integration APIs provided by
capabilities. I've got a first cut solution to that, but there are edge
cases that don't work well and we may need to restrict how capabilities
work a bit.
Long version.
WildFly Core capabilities are meant to provide a standardized way for
parts of the server to integrate with each other.[1]
Use of these is proceeding pretty nicely in the OperationStepHandlers
the kernel and subsystems use for executing management operations. But
in some cases, the code that needs to integrate with a capability isn't
in an OSH. It can be in a service start method, particularly in a
DeploymentUnitProcessor. (DUPs run in a service start method.)
A simple example: a deployment descriptor or annotation includes a
chunk of config that specifies the use of a subsystem-provided
resource, say a cache. The DUP then wants to figure out the service name
of that resource, using capabilities.
So I've started to give some thought to how to handle that. See
https://github.com/bstansberry/wildfly-core/commit/ba7321bc30374b2d0aa99c...
for a first cut.
Basically the OperationContext exposes a "CapabilityServiceSupport"
object that OSHs can provide to services they install. The services can
then use that to look up service names and any custom runtime API
provided by a capability.
The typical pattern would be for the OSH for a "deploy" op to make the
CapabilityServiceSupport available to RootDeploymentUnitService, which
would expose it to DUPs via a DeploymentPhaseContext attachment. DUPs,
as they install services, would use the CapabilityServiceSupport to look
up service names and add dependencies to the new service.
The service doesn't register any requirement for the capability. If it
tries to use a non-existent capability, an exception is thrown and the
service has to deal with it. *This is the main problem.* The caller is
unable to register a dependency (since it can't get the service name) so
the MSC behavior of letting the service be installed but fail to start
due to a missing dependency is short circuited.
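In code, the pattern (and the failure mode) would look roughly like this. This is a self-contained sketch using simplified stand-in types, not the real org.jboss.as.controller API, and the capability names are made up for illustration:

```java
import java.util.Set;

public class CapabilitySupportSketch {
    // Simplified stand-in for the CapabilityServiceSupport an OSH would hand
    // to RootDeploymentUnitService and on to DUPs via an attachment.
    interface CapabilityServiceSupport {
        boolean hasCapability(String capabilityName);
        // Returns the capability's service name; throws if the capability
        // does not exist. This is the problem described above: the caller
        // never gets a name it could register an MSC dependency on.
        String getCapabilityServiceName(String capabilityName);
    }

    static CapabilityServiceSupport supportFor(Set<String> registered) {
        return new CapabilityServiceSupport() {
            public boolean hasCapability(String name) {
                return registered.contains(name);
            }
            public String getCapabilityServiceName(String name) {
                if (!registered.contains(name)) {
                    throw new IllegalStateException("unknown capability " + name);
                }
                return name; // real code: ServiceName.parse(name).toString()
            }
        };
    }

    public static void main(String[] args) {
        // Pretend a subsystem registered a cache capability (made-up name).
        CapabilityServiceSupport support =
                supportFor(Set.of("org.wildfly.example.cache"));

        // A DUP resolving the name so it can add a service dependency:
        System.out.println(support.getCapabilityServiceName("org.wildfly.example.cache"));

        // A DUP asking for a non-existent capability gets an exception, with
        // no chance for MSC's missing-dependency handling to kick in:
        try {
            support.getCapabilityServiceName("org.wildfly.example.missing");
        } catch (IllegalStateException e) {
            System.out.println("lookup failed: " + e.getMessage());
        }
    }
}
```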
In most cases, if a service is installed but fails to start due to a
missing dependency, the management op that installed it is rolled back,
so the end result is the same. But we do support the
"rollback-on-runtime-failure=false" management op header, and if that's
used our behavior would be changed for the worse.
A possible partial solution to this would be to tighten down the rules
for how service names are created from capability names. Basically, any
capability could only provide 1 service, and the name of the service
would be the result of passing the name of the capability to
ServiceName.parse(name). That's the default behavior now, but we support
other options.
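As a sketch of that tightened rule (simplified stand-in for org.jboss.msc.service.ServiceName.parse, which breaks a dotted string into name parts):

```java
public class StaticNameRule {
    // Under the tightened rule, the ONLY mapping from a capability name to
    // its single service name would be parsing the dotted capability name,
    // so the discovery algorithm is completely static and any caller can
    // compute the name up front without consulting a registry.
    static String[] parse(String capabilityName) {
        return capabilityName.split("\\.");
    }

    public static void main(String[] args) {
        String[] parts = parse("org.wildfly.iiop");
        System.out.println(String.join("/", parts));
    }
}
```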
That would fix the service name discovery problem, by making the name
discovery algorithm completely static. It wouldn't help with a case
where a DUP needs access to a custom runtime API exposed by a
capability. But that is much more of a corner case. Probably better to
discuss on a sub branch of this thread. :)
What would we lose if we tightened down the service name rules?
1) Capabilities would not be able to tell the service name discovery
logic to produce some legacy service naming pattern (i.e. keep service
names as they were in earlier releases.)
This one I don't think is a big deal, as the capability can always
register an alias for the service that uses the legacy service name.
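A self-contained sketch of the alias idea (stub registry, not the real MSC ServiceBuilder API; the legacy name shown is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class AliasSketch {
    // Simplified stand-in for an MSC service registry: every name, primary
    // or alias, resolves to the same installed service.
    static Map<String, Object> registry = new HashMap<>();

    static void install(Object service, String primaryName, String... aliases) {
        registry.put(primaryName, service);
        for (String alias : aliases) {
            registry.put(alias, service);
        }
    }

    public static void main(String[] args) {
        Object txService = new Object();
        // Real code would be roughly:
        //   target.addService(ServiceName.parse("org.wildfly.transactions"), svc)
        //         .addAliases(LEGACY_SERVICE_NAME)...
        // (legacy name below is made up for illustration)
        install(txService, "org.wildfly.transactions", "jboss.txn.TransactionManager");
        System.out.println(registry.get("org.wildfly.transactions")
                == registry.get("jboss.txn.TransactionManager"));
    }
}
```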
2) A capability cannot provide services of > 1 value type. It's nice if
capabilities can represent something that's meaningful to an end user,
and there's no reason why something that's meaningful to an end user
might not expose more than one service to other capabilities. If we
limit capabilities to exposing a single service, then we may end up with
multiple capabilities. See [2] for an example case, where a proposed
"org.wildfly.iiop" (nice and simple for an end user to understand)
installs two services, an ORB and a NamingContextExt.
At this point though, capability names aren't really exposed to end
users, so perhaps we can put this problem aside for now. If it becomes
an issue later, there are solutions, e.g. a user-friendly
"org.wildfly.iiop" capability automatically registers the detailed
"org.wildfly.iiop.orb" and "org.wildfly.iiop.naming" capabilities,
helping the user ignore the details.
Sorry for the long tome; by now I expect folks are used to it!
[1] Specifically, to validate that references to other parts of the
configuration are valid, to determine the names of MSC services provided
by other parts of the system, and to access any custom runtime
integration API exposed by another part of the system.
[2]
https://github.com/bstansberry/wildfly/commit/b9de64e046404531df466288cf4...
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
9 years, 4 months
WFCORE-648 Cascade Ear Exclusions Test Case
by Brandon Gaisford
Hey all,
I’ve completed my initial coding for this issue and spent 2-3 hours now trying to come up with a good test case. I’ve created a test ear that I was planning to include as a test resource to deploy during unit testing.
When I debug using the test ear during normal server startup, I’m seeing this operation and resource within the DeploymentDeployHandler:
operation
{
    "operation" => "deploy",
    "address" => [("deployment" => "test-ear.ear")]
}
resource
{
    "persistent" => false,
    "owner" => [
        ("subsystem" => "deployment-scanner"),
        ("scanner" => "default")
    ],
    "runtime-name" => "test-ear.ear",
    "content" => [{"hash" => bytes {
        0x00, 0xbb, 0x8a, 0x3a, 0x61, 0x12, 0xa6, 0xec,
        0x59, 0x69, 0xab, 0x9d, 0xda, 0xe1, 0x48, 0x0e,
        0x68, 0x92, 0x9a, 0x03
    }}],
    "enabled" => undefined
}
I was thinking I could model my test case after the existing DeploymentAddHandlerTestCase (but instead target DeploymentDeployHandler) and then during the deployment phase inspect some internal state to verify the exclusions are correct in the ModuleStructureSpec. But now I’m questioning whether this is a bridge too far.
Does anyone have any bright ideas or advice on how I might move forward on this test case?
Thanks,
Brandon
9 years, 4 months
WildFly Core 2.0.0.Alpha5 Released
by James R. Perkins
WildFly Core 2.0.0.Alpha5 was released and a PR has been sent to update
WildFly Full.
For those starting to use the new feature: some new methods for dynamic
capabilities were introduced in this version of core, so once the PR is
merged into full they will be available there too.
--
James R. Perkins
JBoss by Red Hat
9 years, 5 months
Getting ManagedThreadFactory in a subsystem
by Gytis Trikleris
Hello,
I’ve got an issue with my subsystem
(https://github.com/wildfly/wildfly/tree/master/transactions) where I
cannot access BeanManager or UserTransaction from JNDI (I can however
look up datasources). This error is currently happening during periodic
recovery, i.e. on the PeriodicRecovery thread. My assumption is that the
CDI and EE JNDI environments are not initialized because PeriodicRecovery
extends Thread and was created with the "new" operator rather than via a
ManagedThreadFactory. I know that EE apps are meant to use a
ManagedThreadFactory to ensure their environments are initialized
correctly. Does this restriction apply to subsystems too? One issue I am
coming up against during prototyping is that I'm not sure how to get the
ManagedThreadFactory during our subsystem's boot, as it does not appear
in JNDI.
I would like to know if it is possible to inject the
ManagedThreadFactory into my subsystem so I am not reliant upon its
availability in JNDI.
Thanks,
Gytis
9 years, 5 months
Hacking on WildFly - 2015
by Brandon Gaisford
Hey All,
I’m hoping an effort is already underway to update the “Hacking on WildFly” development article (https://developer.jboss.org/wiki/HackingOnWildFly) based on all the latest project changes. I’m tempted to volunteer to do it, but I don’t have enough background yet to pull it off. The existing article is great and was my gateway into open source development. However, the article is out of date and needs to be updated to account for the new dual wildfly/wildfly-core project structure. I’d also like to pass along the development issues I’ve struggled with along the way so others don’t have to repeat the same. What do you guys think?
Brandon
9 years, 5 months
WildFly 10 Schedule
by Jason Greene
In addition to the biweekly Alphas, the following are the key dates for the WildFly 10 schedule:
WildFly 10 Beta1 - August 6th
WildFly 10 CR1 - September 9th
WildFly 10 CR2 (if needed) - September 16th
WildFly 10 Final - October 8th*
All new feature development needs to be wrapped up by August, preferably with most PRs already submitted in July.
Happy Hacking!
* As always, Final releases are contingent on the last CR being blocker-free. If we aren’t blocker free, we will introduce another CR
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
9 years, 5 months
Module dependencies WF10
by John O'Hara
When I build WF10 in a build environment and move the built app server
to a different environment, I am unable to start WF10. This is because
the modules now resolve jars from the local maven repo. The build
process downloads and installs the required jars into the local maven
repo, and those jars are not available on the target environment.
Is there a way to either
a) build WF10 and package all the module jars into the build so that it
is portable,
or b) for the WF10 bootstrap process to download missing module jars
from a remote maven repo on startup when they are not present in the
local repo?
Thanks
--
John O'Hara
johara(a)redhat.com
JBoss, by Red Hat
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom.
Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland).
9 years, 5 months