Admin console
by Heiko W. Rupp
Hi,
we were talking about the admin console before, but I am not sure if we reached any conclusion on:
- a console for standalone (= non-domain) server mode
- will we have a "dedicated" server within a domain to host the console, or could
theoretically any server host the console?
- do users expect to see metrics for the last hour from the moment they first
access the console, or for some time before that as well?
Heiko
--
Registered address: Red Hat GmbH, Otto-Hahn-Strasse 20, 85609 Dornach bei München
Commercial register: Amtsgericht München HRB 153243
Managing directors: Brendan Lane, Charlie Peters, Michael Cunningham, Charles Cachera
[Discuss] Making Shrinkwrap more module-classloader friendly
by David Bosschaert
Hi all,
I have been looking at getting ShrinkWrap to work in a module-based
environment, like OSGi and JBoss Modules. I ran into issues with this,
basically because the interaction goes through the ShrinkWrap-API module
and some static methods in there, while the implementation requires that
the ThreadContextClassLoader has visibility of the ShrinkWrap-Impl module.
In a classloader setting where all the SW jars are visible to the same
classloader this works fine, but in a modules-based system these two
modules would be loaded by two different classloaders, so the only way to
get the current approach to work is to set the TCCL explicitly to the
classloader that loads the ShrinkWrap-Impl module. That is a little
awkward to do and also typically requires a dependency on a
ShrinkWrap-Impl class, which is ugly (IMHO), e.g.:
ClassLoader oldCL = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(
            JavaArchiveImpl.class.getClassLoader()); // The impl class
    Archive<?> a = ShrinkWrap.create(...);
} finally {
    Thread.currentThread().setContextClassLoader(oldCL);
}
I can see two solutions to this.
1. The nicest one (IMHO) would be to let the impl module register a
ShrinkWrap service (with MSC and/or OSGi) which handles all the TCCL
details. The nice thing is that the user doesn't need to get into any SW
implementation detail. Just use the service and it works - the API of
the service would be similar to the ShrinkWrap class that's there today
and defined in the API module. I guess the disadvantage would be that
you need to obtain the service instance from the service registry, so
you can't use a static API like ShrinkWrap.create().
2. An alternative could be to add additional static ShrinkWrap.create()
(etc) methods that take a classloader as an argument. You would then
still need to get hold of that classloader somehow, but at least you're
freed of the TCCL setter wrapping code...
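Either way, the TCCL save/set/restore dance could at least be factored into a small generic helper. A minimal sketch (this is not existing ShrinkWrap API; the `Tccl` class and `callWith` method are made-up names for illustration):

```java
import java.util.concurrent.Callable;

// Generic TCCL-wrapping helper (illustrative, not ShrinkWrap API): runs a
// task with the given classloader installed as the thread context
// classloader and restores the previous one afterwards, even on failure.
final class Tccl {
    static <T> T callWith(ClassLoader cl, Callable<T> task) throws Exception {
        Thread thread = Thread.currentThread();
        ClassLoader old = thread.getContextClassLoader();
        thread.setContextClassLoader(cl);
        try {
            return task.call();
        } finally {
            thread.setContextClassLoader(old);
        }
    }
}
```

A caller that knows the Impl module's classloader could then write something like `Archive<?> a = Tccl.callWith(implLoader, () -> ShrinkWrap.create(...));` instead of repeating the try/finally block at every call site.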
Thoughts anyone?
David
BTW more context can be found in
https://jira.jboss.org/browse/SHRINKWRAP-242 where I'm providing an
initial proposal for #1 above to work in OSGi.
Undeploy problem, ServerDeploymentManager, Bundle deployment
by Thomas Diesler
Folks,
I took another stab at bundle deployment, which now works again for
hot deployment. Please pull
https://github.com/jbosgi/jboss-as/tree/bootstrap
Status
#1 Undeploy generally does not take the DeploymentUnitService down.
#2 ServerDeploymentManager service is not installed
#3 Arquillian container cannot deploy because of #2
#4 Bundle deployment through JMX cannot go through processors
because of #2
When the deployment marker file is removed, the deployment scanner does
not seem to pick that up. I have not yet looked into why that would be.
What is the plan for the ServerDeploymentManager service? Both the
Arquillian and the OSGi subsystem depend on it. The Arquillian
container does its test deployment through it. The OSGi subsystem
delegates to it from a hook into the BundleContext.installBundle(...) API.
AFAICS, the ServerDeploymentManager service is the last missing piece
before we can resurrect the smoke tests.
cheers
-thomas
--
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Thomas Diesler
JBoss OSGi Lead
JBoss, a division of Red Hat
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
A few notes about Modules
by David M. Lloyd
Just wanted to give a quick summary of what's going on with Modules in
AS these last couple of weeks.
The "javax.api" module now includes ONLY non-java.* JDK classes (not
vendor-specific classes) which also do not appear in EE javax.* modules.
In particular, it now excludes other classes which may be on the
application classpath, so having the JTA API (for example) on the
classpath won't screw things up like it did before.
As a reminder, DO NOT USE the "system" module. You usually should be
using "javax.api" instead.
Some new and changed features of modules.xml files:
1. Before this release, services ("META-INF/services/*") were not
exported, to avoid issues with unwanted services being found on the
current module due to dependency import. Now, when "export" is set to
"true", everything is exported by default, and services are filtered on
the *import* side instead. There is also a new dependency attribute
called "services" which specifies whether you want to import services
from the target module, so in module.xml you can choose per dependency.
The possible values are "none" (the default), which refuses all
services; "import", which brings in services; and "export", which brings
in and re-exports services from the selected dependency.
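For example (the module name here is illustrative), a dependency that re-exports both the classes and the services of its target module would be declared as:

```xml
<dependencies>
    <module name="org.example.logging" export="true" services="export"/>
</dependencies>
```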
2. There was some unclear behavior around the fact that there is both an
"export" attribute and an "exports" sub-element of dependencies. As of
Beta12, the listed includes/excludes found in the exports section
comprise a multi-step filter, in which the rules are evaluated in order
until a match is found. If no match is found, then the value of the
"export" attribute is used to determine whether to export the resource.
So instead of:
<module name="org.osgi.util" export="true">
    <exports>
        <include path="org/osgi/util/tracker"/>
        <exclude path="**"/>
    </exports>
</module>
We can do:
<module name="org.osgi.util" export="false">
    <exports>
        <include path="org/osgi/util/tracker"/>
    </exports>
</module>
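The first-match-wins evaluation described in point 2 can be modeled in a few lines (an illustrative sketch with made-up names, not the actual jboss-modules filter code; wildcards are simplified to prefix matching):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// First-match-wins export filter: rules are checked in insertion order;
// if none matches, the dependency's "export" attribute value decides.
class ExportFilter {
    private final Map<String, Boolean> rules = new LinkedHashMap<>(); // path prefix -> include?
    private final boolean exportDefault;

    ExportFilter(boolean exportDefault) { this.exportDefault = exportDefault; }

    ExportFilter include(String prefix) { rules.put(prefix, true); return this; }
    ExportFilter exclude(String prefix) { rules.put(prefix, false); return this; }

    boolean isExported(String path) {
        for (Map.Entry<String, Boolean> rule : rules.entrySet()) {
            if (path.startsWith(rule.getKey())) {
                return rule.getValue();
            }
        }
        return exportDefault; // no rule matched: fall back to the "export" attribute
    }
}
```

Under this model the two module.xml snippets above are equivalent: one include rule for org/osgi/util/tracker, with everything else falling through to a "do not export" default.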
As a final note, if you're looking to build snapshots of MSC and/or
Modules (to work on the bootstrap AS branch for example), you must build
both, and you must build and install Modules *first* because MSC relies
on recent changes in Modules.
--
- DML
Metadata common updates
by Remy Maucherat
Hi,
If the common metadata module has been patched with fixes, please port
these fixes to the new metadata repo here:
https://github.com/jfclere/metadata
And then send a pull request to Jean-Frédéric (or me).
There are updates from Jaikiran, Weston, Richard and Alessio.
Thanks,
--
Remy Maucherat <rmaucher(a)redhat.com>
Red Hat Inc
ConfigAdmin service in AS7
by Thomas Diesler
Folks,
I completed the configuration admin service in AS
https://github.com/jbosgi/jboss-as/commit/237c53c665df53e5931d2ef6fc15ad7...
Any service/deployment can now add its configuration properties to the
model via the ConfigAdminService
<subsystem xmlns="urn:jboss:domain:configadmin:1.0">
    <configuration pid="org.apache.felix.webconsole.internal.servlet.OsgiManager">
        <property name="manager.root">jboss-osgi</property>
    </configuration>
</subsystem>
There are demo test cases that show how to use the service. The notion
of ConfigurationListener is also supported, so service Foo can react to
configuration changes for service Bar.
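As a toy model of that listener notion (the names here are illustrative, not the actual AS7 ConfigAdminService API): a service registers interest in a PID and is called back whenever that PID's properties are updated.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the ConfigAdminService idea (illustrative names, not the
// actual AS7 API): listeners register per PID and are notified whenever
// that PID's configuration properties are updated.
interface ConfigurationListener {
    void configurationModified(String pid, Map<String, String> props);
}

class ToyConfigAdmin {
    private final Map<String, Map<String, String>> store = new HashMap<>();
    private final Map<String, List<ConfigurationListener>> listeners = new HashMap<>();

    void addListener(String pid, ConfigurationListener listener) {
        listeners.computeIfAbsent(pid, k -> new ArrayList<>()).add(listener);
    }

    void putConfiguration(String pid, Map<String, String> props) {
        store.put(pid, props); // the real service persists this into the model
        for (ConfigurationListener l
                : listeners.getOrDefault(pid, Collections.emptyList())) {
            l.configurationModified(pid, props);
        }
    }

    Map<String, String> getConfiguration(String pid) {
        return store.get(pid);
    }
}
```

In these terms, service Foo would register a listener for Bar's PID and receive the callback when Bar's configuration changes.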
The ConfigAdminService is also the persistent data backend for the
standard OSGi ConfigurationAdmin
(http://www.osgi.org/javadoc/r4v42/org/osgi/service/cm/ConfigurationAdmin....)
service. The way this currently works is that we deploy the Apache Felix
Config Admin bundle that takes care of configuration aspects for any
OSGi bundle deployed to AS7. The Felix CM bundle delegates data
persistence to the ConfigAdmin subsystem - so it shows up in
domain/standalone.xml.
As a side effect you can use the OSGi webconsole to manage the
configuration for any AS service that uses the ConfigAdminService.
The branch is currently waiting to get rebased onto 'bootstrap' when
that becomes generally available.
cheers
-thomas
--
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Thomas Diesler
JBoss OSGi Lead
JBoss, a division of Red Hat
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Re: [jboss-as7-dev] [Discuss] Making Shrinkwrap more module-classloader friendly
by David Bosschaert
Comments inline...
On 13/12/2010 10:42, Thomas Diesler wrote:
> Hi David,
>
> I applied your changes
Thanks!
> http://jbmuc.dyndns.org:8280/hudson/job/JBoss-7.0.0/27/changes
>
> There are a number of open questions about this however
>
> #1 TCCL issue in SW
>
> You seem to have found a workaround that sets the TCCL explicitly to
> some SW impl class. I believe SW should not work like this. Instead
> there may be some initialization routine that SW must go through,
> followed by ordinary SW usage where the client does not have a
> dependency on SW impl details. OSGi bundles now seem to require an import
> on the SW impl package, which is against what we preach for modularity
> and separation of API/Impl. An ARQ test bundle may need to have a
> dependency on the SW API, but never on the impl. Maybe this is work
> in progress, I just wanted to mention it.
+1
Some ideas on how to do this have been floated in earlier postings in
this thread.
> #2 Binding to the OSGi protocol
>
> Currently, the archive provider functionality is still bound to the OSGi
> protocol. Other AS tests cannot use multiple test archives. Is it
> planned to make this functionality generally available?
In the call that we had a few weeks back I think Aslak agreed to look at
how we do the callback in OSGi and see if he could make that available
more generally in ARQ.
> #3 Assembling the archive in the test case
>
> Aslak and Kabir suggested that the test archive should be assembled on
> the client side and that the client side container should deploy the
> archive (instead of the test case itself). Is this going to change?
>
> cheers
> -thomas
Cheers,
David
Deployment Chain/Service interaction
by David M. Lloyd
This ridiculously long email is to explain how the deployment chain
services should be created and how the process should work, end-to-end,
in our post-Alpha1/MSC-1.0.0.Beta5 world and to open things up to
additional comments or critique.
Basic Theory
------------
The new architecture hinges on the concept of modifying the way
deployment unit processor (DUP) chains are executed. Instead of
executing them once at deployment time and relying on services to clean
up (and dealing with the inherent asymmetry of having services which
perform unrelated actions on start compared to stop), the basis of this
idea is to do the following:
1. Break the DUP chain into segments (much like we have already laid out
in org.jboss.as.deployment.Phase).
2. Define a new service for each segment. On start, the "deploy" action
of all of the DUPs are executed in sequence. On stop, a compensating
"undeploy" action is executed on each DUP in reverse order, exactly as
Alexey was proposing in [1] (yes, I argued against it, but I was wrong).
If start fails, the DUP "deploy" action has to fully undo itself; in
addition the DUP context will call the "undeploy" action for all
previous DUPs in the segment, which allows start to be retried (as
defined by the MSC service contract). This combines the notion of
cleanup with undeploy exactly as Alexey described.
3. The action taken on deployment is simply to create and activate (i.e.
set its mode to ACTIVE) the service for the first DUP chain segment,
which then causes the service for the next DUP chain segment to be
created, etc. until the deployment is complete.
4. On undeploy, each DUP has an additional responsibility to remove all
services added by that DUP. However we can automate this by using a
special ServiceTarget in the DUP context, which tracks all services
added for the deployment (we need this anyway to correctly finish
undeployment). The DUP chain service can automatically trigger and
await service removal as part of its stop() action (made convenient by
the usage of MSC's async stop functionality).
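In plain Java, the start/stop contract from points 2 and 4 amounts to something like the following (an illustrative sketch with made-up names, not the actual MSC or AS7 deployment types):

```java
import java.util.List;

// Toy model of a DUP chain segment service (names illustrative): start()
// runs each processor's deploy action in order; if one fails, the already
// deployed processors are compensated in reverse so start can be retried.
// stop() runs undeploy over all processors in reverse order.
interface DeploymentProcessor {
    void deploy() throws Exception;
    void undeploy();
}

class PhaseService {
    private final List<DeploymentProcessor> processors;

    PhaseService(List<DeploymentProcessor> processors) {
        this.processors = processors;
    }

    void start() throws Exception {
        int done = 0;
        try {
            for (DeploymentProcessor p : processors) {
                p.deploy();
                done++;
            }
        } catch (Exception e) {
            // compensate: undeploy everything that already ran, in reverse
            for (int i = done - 1; i >= 0; i--) {
                processors.get(i).undeploy();
            }
            throw e;
        }
    }

    void stop() {
        for (int i = processors.size() - 1; i >= 0; i--) {
            processors.get(i).undeploy();
        }
    }
}
```

The real segment service would additionally track and remove the MSC services the processors installed, per point 4.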
This means that all you need to do to remove a deployment unit is remove
its root service, which will cascade to all ancillary services
automatically via MSC's dependency mechanism, including correct handling
of deployment interdependencies.
This also means that it is possible for a DUP chain to be partially
completed, rewound to an earlier point, and played forward again, which
will become essential to correctly handling redeployment when complex
dependencies are involved.
The DUP chain itself is represented by a single global DUP chain service
which is started during bootstrap. All deployment first-phase services
(see below) have a dependency upon this service.
Deployment Phases
-----------------
A DUP chain segment is defined by a "phase"; the phases are already more
or less defined. The service corresponding to each phase is named for its
deployment, as we do today, e.g. "jboss.deployment.\"myapp-foo.war\"".
However, since we have multiple phases, we must also append the phase name.
You may notice a superficial similarity between these phases and the
phases of an AS5/6 MC service. I noticed that too. :-)
1. Phase: "structure" -> "jboss.deployment.\"myapp-foo.war\".structure"
This phase is responsible for identifying the deployment type, delving
its structure, and creating and cataloging all of the VFS mounts which
correspond to this deployment unit. When processing of this phase is
complete, all mounts must be available and the deployment type must be
identified.
2. Phase: "validate" -> "jboss.deployment.\"myapp-foo.war\".validate"
This phase is not presently used for validation tasks, though there is
currently one processor in this phase,
org.jboss.as.web.deployment.WarStructureDeploymentProcessor, which
probably doesn't belong there.
3. Phase: "parse" -> "jboss.deployment.\"myapp-foo.war\".parse"
In this phase, all descriptors which are available via the file system
are processed. This includes XML descriptors, manifest information,
etc. When processing of this phase is complete, all descriptors should
be parsed and stored in the DUP context.
4. Phase "dependencies" -> "jboss.deployment.\"myapp-foo.war\".dependencies"
In this phase, the data collected is used to assemble the list of
dependencies and class path resource roots. Class-path dependencies on
other deployments will be added to this deployment's "modularize" phase
as MSC dependencies on the dependency deployments' "structure" phase.
When processing of this phase is complete, all the information necessary
to construct the module for this deployment is available.
5. Phase "modularize" -> "jboss.deployment.\"myapp-foo.war\".modularize"
In this phase, the module is created based on the dependency
information, and any action which requires class loading to be enabled
can now be performed. This includes loading classes and reading
annotations via reflection.
6. Phase "post-module" -> "jboss.deployment.\"myapp-foo.war\".post-module"
This phase is simply a continuation of "modularize". See "Missing
Stuff" for further discussion.
7. Phase "install" -> "jboss.deployment.\"myapp-foo.war\".install"
In this phase, various deployment items spec'd out by earlier phases
(like servlets, EJBs) are converted into services and created. See
"Missing Stuff" for further discussion.
8. Phase "cleanup" -> "jboss.deployment.\"myapp-foo.war\".cleanup"
This phase is presently unused.
Missing Stuff
-------------
Missing from the above list:
1. Generation of Jandex annotation index.
2. Consumption of the Jandex index to identify classes to be processed
for annotations.
3. A mechanism for deployment module-to-module dependencies. Since
module dependencies can be circular, two phases are needed to correctly
make this work. One possibility is to add module dependencies and
relink as a prelude to "post-module", with "post-module" then depending
on the "modularize" phase of all deployments upon which there is a
module-level (not classpath-level) dependency. Post-module processors
can then be divided up based on whether external module dependencies
should be included in processing (i.e. annotation scanning).
4. Consideration for the necessary startup sequence for EJBs with
respect to servlets and so on. This will be covered in more detail in
the EJB requirements document. Note that since these phases are built
by the deployment system (as opposed to being ingrained in the service
itself), we can modify them as needed to accommodate the requirements of
all of our deployable service types.
Bootstrap
---------
The bootstrap process would necessarily be somewhat different. I'll
describe it by walking through a server boot:
1. Parsing/receiving the server updates. Boot updates always come in
order of extensions first, then subsystems, then deployments. Because
that's just how it is, and not by accident.
2. Run all updates in sequence. This all happens in one loop with the
following components:
2a. Execute extension updates. These updates immediately load
extensions, subsystem element handlers, and possibly add global
services. This all happens in the bootstrap thread.
2b. Execute subsystem add updates. The subsystem would use the
activation context provided to hook into any affected DUP chain(s).
2c. Execute deployments. *Updated*. Each deployment is executed by
creating its first-phase service, creating an injection dependency on a
global DUP chain service, and setting the service mode to ACTIVE.
2d. Execute the rest of the uninteresting stuff, if any.
3. Once all updates are executed, the DUP chain service is created and
set to ACTIVE, effectively "locking in" the deployment chain and
allowing deployments to proceed. The deployment services then fully
assemble themselves.
Undeployment
------------
This architecture provides a uniquely effective model for supporting
undeployment. With all services and dependencies correctly defined, it
should be possible to redeploy any deployment unit in the system, no
matter how complex the interdependencies, and all affected services are
cleanly stopped and restarted with correctly rewired dependencies, all
the way to the point of sending notification to load-balancers and so on.
See also
--------
[1] http://community.jboss.org/message/572259#572259
--
- DML