AddStepHandler can't add content to overlay
by Stan Silvert
I have an AddStepHandler that adds a keycloak server to the management
model using the "Mixed Approach"[1]. It also needs to add overlay
content if any new content is available. The overlay itself already
exists in standalone.xml.
<deployment-overlays>
    <deployment-overlay name="main-auth-server.war-keycloak-overlay">
        <deployment name="main-auth-server.war"/>
    </deployment-overlay>
</deployment-overlays>
I insert this operation in my AddStepHandler:
{
    "operation" => "add",
    "address" => [
        ("deployment-overlay" => "main-auth-server.war-keycloak-overlay"),
        ("content" => "/WEB-INF/classes/META-INF/keycloak-server.json")
    ],
    "content" => {"url" => "file:/mydir/keycloak-server.json"},
    "rollback-on-runtime-failure" => false
}
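In Java terms, inserting that operation from the handler looks roughly like the sketch below; the class and method names, the handler lookup, and the MODEL-stage placement are assumptions for illustration, not the actual Keycloak code:

import org.jboss.as.controller.AbstractAddStepHandler;
import org.jboss.as.controller.OperationContext;
import org.jboss.as.controller.OperationStepHandler;
import org.jboss.as.controller.PathAddress;
import org.jboss.as.controller.PathElement;
import org.jboss.as.controller.operations.common.Util;
import org.jboss.dmr.ModelNode;

// Hypothetical handler name; only the overlay-content step is shown
public class ServerAddHandler extends AbstractAddStepHandler {

    void queueOverlayContentAdd(OperationContext context) {
        // Address of the overlay content resource to create
        PathAddress contentAddress = PathAddress.pathAddress(
                PathElement.pathElement("deployment-overlay", "main-auth-server.war-keycloak-overlay"),
                PathElement.pathElement("content", "/WEB-INF/classes/META-INF/keycloak-server.json"));

        // Build the "add" operation shown above
        ModelNode addContentOp = Util.createAddOperation(contentAddress);
        addContentOp.get("content").get("url").set("file:/mydir/keycloak-server.json");
        addContentOp.get("rollback-on-runtime-failure").set(false);

        // Resolve the standard handler for the content resource's "add" op
        // and queue it as a new MODEL-stage step
        OperationStepHandler contentAddHandler = context.getRootResourceRegistration()
                .getOperationHandler(contentAddress, "add");
        context.addStep(addContentOp, contentAddHandler, OperationContext.Stage.MODEL);
    }
}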
But at this point in server startup, the overlay's add handler hasn't
been called, so I get:
14:09:20,729 ERROR [org.jboss.as.controller.management-operation]
(ServerService Thread Pool -- 26) JBAS014613: Operation ("add") failed -
address: ([
("deployment-overlay" => "main-auth-server.war-keycloak-overlay"),
("content" => "/WEB-INF/classes/META-INF/keycloak-server.json")
]) - failure description: "JBAS014766: Resource [(\"deployment-overlay\"
=> \"main-auth-server.war-keycloak-overlay\")] does not exist; a
resource at address [
(\"deployment-overlay\" => \"main-auth-server.war-keycloak-overlay\"),
(\"content\" => \"/WEB-INF/classes/META-INF/keycloak-server.json\")
] cannot be created until all ancestor resources have been added"
At startup, how can I make sure that my "add overlay content" operation
is executed after "add overlay" is done?
[1] https://developer.jboss.org/wiki/ExtendingAS7
smoke testing failure
by Peter Cai
Hi,
It looks like the current HEAD of WildFly fails when running the smoke
tests, with the following information:
Results :
Failed tests:
ServerInModuleDeploymentTestCase.testDeploymentStreamApi:93->testDeployments:614
expected:<[c767c5d5e516f6e04ec69f5a0f8ccdc0d63e6fa5,
342ae7aec9bff370e3de8704ed9642a718986e61]> but
was:<[342ae7aec9bff370e3de8704ed9642a718986e61]>
Any clues?
Regards,
smoke testing failed
by Peter Cai
Hi,
It looks like the current HEAD of WildFly fails when running the smoke
tests, with the following failure information:
Simplification of management "add" handlers
by Brian Stansberry
FYI, with the integration of wildfly-core 1.0.0.Alpha9 into WildFly one
of my screwups has been corrected[1], hopefully making life a bit
simpler for subsystem devs.
Previously, if you wrote a management handler for an 'add' op, you had
to deal with the ServiceVerificationHandler[2], plus record any services
you added so they could be rolled back. That's now all handled
automatically (as it should have been from the start.)
Before you might have had something like the following:
public class FooAddHandler extends AbstractAddStepHandler {
    ....
    protected void performRuntime(final OperationContext context,
                                  final ModelNode operation,
                                  final ModelNode model,
                                  final ServiceVerificationHandler verificationHandler,
                                  final List<ServiceController<?>> newControllers)
            throws OperationFailedException {
        Service<Foo> fooService = new FooService();
        ServiceBuilder<Foo> builder = context.getServiceTarget()
                .addService(FOO_SERVICE_NAME, fooService)
                .addListener(verificationHandler);
        newControllers.add(builder.install());
    }
}
The above is a simple case; in many cases the boilerplate was worse. For
example, if performRuntime instead delegated to a utility method used
both to add the service and to restore it when a "remove" op is rolled
back, null checks were needed for 'verificationHandler' and
'newControllers'. Yuck.
Now that method can just be:
    ....
    protected void performRuntime(final OperationContext context,
                                  final ModelNode operation,
                                  final ModelNode model)
            throws OperationFailedException {
        Service<Foo> fooService = new FooService();
        context.getServiceTarget()
                .addService(FOO_SERVICE_NAME, fooService)
                .install();
    }
Same thing applies with the performBoottime() method if your handler
extends AbstractBoottimeAddStepHandler.
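For illustration, a boot-time handler using the simplified variant might look roughly like this (the class name is a made-up example; FooService and FOO_SERVICE_NAME are the same placeholders as above):

public class FooBoottimeAddHandler extends AbstractBoottimeAddStepHandler {
    @Override
    protected void performBoottime(final OperationContext context,
                                   final ModelNode operation,
                                   final ModelNode model)
            throws OperationFailedException {
        // Same simplification: no verificationHandler or newControllers params needed
        context.getServiceTarget()
                .addService(FOO_SERVICE_NAME, new FooService())
                .install();
    }
}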
The old methods with the 'verificationHandler' and 'newControllers'
params are still there and will be invoked if you don't override the
simpler overloaded variants, so existing code still works. The values of
the verificationHandler and newControllers params aren't used though;
they're just there to prevent NPEs.
I see Paul has decided to take advantage of this to simplify some of his
subsystems.[3] :-)
Cheers,
Brian
[1] https://github.com/wildfly/wildfly-core/pull/182
[2] The ServiceVerificationHandler was responsible for tracking services
associated with an operation so that if any of them have a problem
starting, the result for that step will include failure information.
This work still happens, but now it's all handled by the
OperationContext itself, with no need for devs to worry about it.
[3] https://github.com/wildfly/wildfly/pull/6798
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
Deliver 3rd party subsystem to customers?
by Heiko W.Rupp
Hey,
suppose I am writing a subsystem for WildFly that I cannot or do not want to add to the generic WildFly codebase (e.g. because it is not open source, or because the WildFly team would consider it too special for general consumption).
Is there a way to package that up in a .zip or any other format (in the good ol' days we used .shar files :-)
to deliver such a subsystem with its module, but also the XSL to modify standalone.xml (or similar)?
If not (yet), would it make sense for WildFly to reconsider allowing a subsystem to provide e.g.
* ext.xml
* subsystem.xml
* ports.xml
that would get merged / added to standalone.xml when the module is loaded for the very first time
(like when the module.jar is also indexed for the first time)?
Thanks
Heiko
Management: datasource "enable" operation and "allow-resource-service-restart" operation header
by Thomas Segismont
Hi everyone,
I understand the difference between calling the "disable" operation on a
datasource with the "allow-resource-service-restart" operation header
set to true and set to false.
I'm wondering if it has any impact on the "enable" operation. I assume
that when you enable a datasource, well, you're starting the service so
this operation header is useless.
Am I wrong?
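For reference, this is roughly how the header sits on such a request when built through the native client API (the datasource name is just a placeholder):

import org.jboss.dmr.ModelNode;

public class EnableWithHeader {
    public static void main(String[] args) {
        // "enable" op on a datasource (name hypothetical) with the header attached
        ModelNode op = new ModelNode();
        op.get("operation").set("enable");
        ModelNode address = op.get("address");
        address.add("subsystem", "datasources");
        address.add("data-source", "ExampleDS");
        // The header in question; the open point is whether it matters for "enable"
        op.get("operation-headers").get("allow-resource-service-restart").set(true);
        System.out.println(op);
    }
}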
Thanks,
Thomas
Undertow statistics
by Heiko Braun
I am looking for some Undertow metrics, like HTTP request/error rates, etc. I was expecting them to be exposed alongside the listeners, but cannot find any. Is this missing, or am I looking at the wrong resources?
/Heiko
Deploying JDBC driver to WildFly
by Arun Gupta
I was talking to Adam Bien; he is an avid supporter of WildFly and is
using it in all his projects now. One of his pet peeves is deployment
of JDBC drivers.
He can bundle it in WEB-INF/lib, but prefers to deploy it on the
application server itself. Currently he deploys it as a JBoss module
but would prefer something simpler. His suggestion was to define a
directory like "standalone/lib/jdbc" and have any JAR files copied
there automatically deployed as modules.
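(For context, the module-based approach he currently uses boils down to installing the driver JAR as a JBoss module and then registering it with the datasources subsystem, roughly like the sketch below; driver and module names are hypothetical examples.)

import org.jboss.dmr.ModelNode;

public class RegisterJdbcDriver {
    public static void main(String[] args) {
        // "add" op registering an already-installed module as a JDBC driver
        ModelNode op = new ModelNode();
        op.get("operation").set("add");
        ModelNode address = op.get("address");
        address.add("subsystem", "datasources");
        address.add("jdbc-driver", "postgresql");
        op.get("driver-name").set("postgresql");
        op.get("driver-module-name").set("org.postgresql");
        System.out.println(op);
    }
}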
What is the recommended way to deploy a JDBC driver to WildFly?
Does it necessarily have to be deployed as a JBoss module? If yes, how
can this be simplified?
Cheers
Arun
--
http://blog.arungupta.me
http://twitter.com/arungupta
Streamable content in management API responses
by Brian Stansberry
tl;dr
I'm looking into supporting streamed content in management API responses
and wanted to get feedback.
The admin console has a need for a streamed response, as we have a
requirement to let a user download a log file to store on local disk.
The browser itself can directly download a file over HTTP, but it will
not let an app like the console make multiple DMR requests and append
the data from the responses to a file on disk.
There are other likely use cases for a similar thing, e.g. reading the
contents of the content (aka deployment) repository.
We already support attaching streams to requests; the proposal is to do
much the same with responses. For HTTP clients, if a request is for
something that attaches a stream to the response, the request URL can
include a query param instructing the server to pipe the stream for the
response instead of sending the standard JSON response value.
Long version:
Requirements:
1) Ability to send arbitrarily large amounts of data in management responses.
Currently everything in a management response has to be encodable in
DMR. DMR will allow you to send a byte[] as a response value, but for
memory reasons this isn't a practical approach for large responses.
2) Ability to send the data directly in response to an HTTP request, not
wrapped in a JSON wrapper. Even if we included a huge value as a JSON
encoded byte[], a browser won't let an app like the console read those
bytes out of the JSON response and write them to disk. A direct response
to an HTTP GET/POST is needed.
3) Works for other remoting-based clients as well. This is a basic
requirement: HTTP and remoting-based clients should each be able to do
everything. So, no HTTP-only features.
4) Requests go through the central management layer, so we have proper
security, audit logging etc. So, no special endpoints that bypass the
core management layer.
Proposal:
We already support attaching a stream to the request for native clients.
(This is how the CLI does deployments.) The client API lets the client
associate a stream with the request[1]. The stream has an index. The
operation that needs the stream takes a DMR param that tells it the
index of the stream. On the server side the handler uses that param
value to find the server-side representation of the stream. The remoting
layer of the remote management protocol in the background handles piping
the contents of the client side stream to the server.
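As a concrete illustration of that existing request-side mechanism, a deployment pushed over an attached stream looks roughly like this (file name, host, and port are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.Operation;
import org.jboss.as.controller.client.OperationBuilder;
import org.jboss.dmr.ModelNode;

public class DeployViaStream {
    public static void main(String[] args) throws IOException {
        // Deployment "add" op whose content refers to attached stream index 0
        ModelNode op = new ModelNode();
        op.get("operation").set("add");
        op.get("address").add("deployment", "example.war");
        op.get("enabled").set(true);
        op.get("content").add().get("input-stream-index").set(0);

        try (InputStream war = Files.newInputStream(Paths.get("example.war"));
             ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {
            Operation streamedOp = OperationBuilder.create(op)
                    .addInputStream(war)   // attached as stream index 0
                    .build();
            ModelNode result = client.execute(streamedOp);
            System.out.println(result.get("outcome").asString());
        }
    }
}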
The proposal is to simply mirror this in reverse. A server side
operation handler associates a stream with the response (via a call on
the OperationContext). The attached stream has an index. The normal DMR
response to the operation is the index of the stream. The client uses
that response value to find the attached stream. The remoting layer of
the remote management protocol in the background handles piping the
contents of the server side stream to the client.
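To make that concrete, a server-side handler under this proposal might look roughly like the sketch below; the attachResultStream method name and signature, and the log file path, are assumptions based on the description above, not a finished API:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.jboss.as.controller.OperationContext;
import org.jboss.as.controller.OperationFailedException;
import org.jboss.as.controller.OperationStepHandler;
import org.jboss.dmr.ModelNode;

public class StreamLogFileHandler implements OperationStepHandler {
    @Override
    public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
        try {
            InputStream log = Files.newInputStream(Paths.get("standalone/log/server.log"));
            // Hypothetical call: associate the stream with the response and get back its index
            int index = context.attachResultStream("text/plain", log);
            // The normal DMR result is just the index the client uses to locate the stream
            context.getResult().set(index);
        } catch (IOException e) {
            throw new OperationFailedException(e.getMessage());
        }
    }
}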
This handles remoting-based clients, including those using HTTP upgrade.
For HTTP clients, the client can include a "useStreamAsResponse" query
param in the request URL. If this is present, the server side endpoint
will take the stream from the internal response and pipe it to the
client, instead of sending the normal JSON.
For example, a fictitious URL for a "stream-log-file" op against a
resource that represents the server.log file:
http://localhost:9090/management/subsystem/logging/log-file/server.log?op...
In the corner case where a request results in more than one attached
stream, useStreamAsResponse could include an index:
useStreamAsResponse=1
Status:
It was reasonably straightforward to get this working, as a fairly
polished prototype:
https://github.com/bstansberry/wildfly-core/commits/resp-stream
The 2nd commit there just hacks in an attribute to the logging subsystem
root resource to expose the server log as a stream. There's no intent to
actually do it that specific way of course; it was just an easy way to
demonstrate.
With that built, an HTTP GET will download the log file to your desktop.
http://localhost:9990/management/subsystem/logging?operation=attribute&na...
The native interface works as well. The 2nd commit includes a test case
that confirms this. So the CLI easily enough could add a high level
command for log reading too.
TODOs:
1) Domain mode: proxy the streams around the domain
(server->HC->DC->client). I think this should be quite easy.
2) A cleaner variant of the remoting-based management protocol for
handling the streams, one that doesn't require hacks to determine the
length of the stream in advance. Should be relatively straightforward.
3) Logic to clean up server side streams if the client doesn't properly
consume them.
4) Make sure there are no issues with the thread pools used to handle
management requests.
5) Make sure POST works well. My assumption is this is lower priority,
as the real use cases would likely use a GET.
[1]
https://github.com/wildfly/wildfly-core/blob/master/controller-client/src...
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat