Feature-packs, versions / module slots and independent release cycles
by Sanne Grinovero
Hi all,
Tomaž is helping me figure out how we could publish a WildFly feature
pack when we do a Hibernate Search release.
We have already been publishing "modules" in a zip format, uploaded to
the Maven repository, so that people can easily layer a bleeding-edge
version on top of a slightly outdated WildFly version. We have been
doing this for a couple of years now.
It's important for our users to have some flexibility over the exact
versions.
While it's great for the huge population of Hibernate users to find
both Hibernate ORM and Hibernate Search "out of the box" in WildFly,
the minority of power users who actually help us by contributing to
and testing each and every release need to be able to keep "layering"
our latest releases on top without waiting for a WildFly release.
Not least, that's essential for us to catch bugs ourselves by running
integration tests of every snapshot on WildFly.
Ales, Tomaž and I were looking for a way to DRY on the module
definitions: since we run far more integration tests of these modules
in "upstream" Hibernate Search than it is feasible for us to maintain
within the WildFly codebase, it recently happened that the modules in
WildFly were lagging behind (structurally different) and not good
enough for CapeDwarf's purposes. Not an issue for the mass of Hibernate
users, but still something we'd want to prevent from happening again.
One solution could be to move all our integration tests into WildFly,
but that would slow down your build and still force us all to maintain
two sets of identical modules.
The other idea is for WildFly to download our feature pack during the
assembly of the distribution, and rely on our integration tests. Tomaž
actually has a nice branch showcasing how this could work already.
But it looks like, if it's up to us to pre-package these modules in
advance, we would have to use the "main" slot for the modules, as they
are copied "as is" into the distribution - while we have always
considered use of the "main" slot a prerogative of components tightly
coupled with the WildFly version.
This would imply that when someone fetches our "latest" version and
overlays it on WildFly, they would literally overwrite the bits which
were shipped with WildFly; I don't like that idea.
Proposal:
we'll publish feature packs which use the slot to qualify each
distribution of our "upstream modules" with a version in the format
"major.minor"; the WildFly build should then download these and
include them as-is, but also include an alias with slot "main"
pointing to the exact version included in that WildFly release.
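To make that concrete, here is a rough sketch of what the two module
descriptors could look like. The module name, slot value, jar name and
schema version below are made up for illustration; they are not taken
from any existing feature pack.

Shipped by the feature pack under a versioned slot, e.g.
modules/org/hibernate/search/orm/5.2/module.xml:

<module xmlns="urn:jboss:module:1.3" name="org.hibernate.search.orm" slot="5.2">
    <resources>
        <resource-root path="hibernate-search-orm-5.2.0.Final.jar"/>
    </resources>
    <dependencies>
        <module name="org.hibernate"/>
    </dependencies>
</module>

Added by the WildFly build as the "main" alias, pointing at the exact
version bundled in that particular release, e.g.
modules/org/hibernate/search/orm/main/module.xml:

<module-alias xmlns="urn:jboss:module:1.3" name="org.hibernate.search.orm"
              slot="main" target-name="org.hibernate.search.orm" target-slot="5.2"/>

Deployments that don't care keep depending on the "main" slot, while
power users can target the versioned slot or overlay a newer one
without touching what WildFly shipped.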
We're already using a similar approach between Infinispan and
Hibernate Search - using both version-qualified slots and aliases - as
they both publish modules (on different release cycles) and yet depend
on each other. That approach is what made it feasible.
WDYT?
BTW we're paving the road with Hibernate Search, but consider it just
an example, as several other projects are interested in following up
with a similar scheme.
Thanks,
Sanne
(WFCORE-640) Platform and User Specific Unit Tests
by Brandon Gaisford
Hello,
I'm code complete on WFCORE-640 and have completed my development testing on OS X and Windows 7. This feature deals with resolution of file permissions, and I'm not sure how best to proceed with the creation of unit tests and their subsequent Maven integration. Not being very familiar with the build and release process, I thought others might have some advice.
Does the current build process get executed on both *nix and Windows platforms? Platform-specific init scripts need to be executed to construct a folder hierarchy with the specific file/folder permissions required by the later tests. On the Windows side, those init scripts should be executed as an administrator user and the unit tests executed as a non-administrator user.
Use case: Given a directory path and a properties file name, determine whether a given user of the system may update the properties file.
Test cases:
Directory path contains folders w/o read and execute permissions
Properties file has Read Only property set (Windows)
Properties file is not writable
Properties file is not readable
Etc.
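For what it's worth, here is a minimal sketch of how the "not writable"
case could look on the POSIX side, using plain JUnit 4 and NIO.2. The
class and file names are hypothetical, and the final assertion is a
stand-in for the actual WFCORE-640 resolution code:

import static org.junit.Assert.assertFalse;
import static org.junit.Assume.assumeFalse;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

import org.junit.Test;

public class PropertiesFilePermissionTestCase {

    @Test
    public void updateRejectedWhenPropertiesFileNotWritable() throws Exception {
        // Skip on Windows; the Read Only case there needs its own
        // administrator-initialised setup, as described above.
        assumeFalse(System.getProperty("os.name").toLowerCase().contains("windows"));

        Path dir = Files.createTempDirectory("wfcore640");
        Path props = Files.createFile(dir.resolve("mgmt-users.properties"));

        // Per-test stand-in for the init scripts: strip the write bit.
        Files.setPosixFilePermissions(props, PosixFilePermissions.fromString("r--r--r--"));

        // Stand-in assertion; the real test would call the permission
        // resolution code under test and check its answer instead.
        assertFalse(Files.isWritable(props));
    }
}

The Windows-specific cases probably do need the separate admin-run init
scripts you describe, since the Read Only attribute and ACLs can't be
set up through the POSIX permission API.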
Thanks,
Brandon
WFLY-2456 Sources Sought
by Brandon Gaisford
Newbie contributor here,
Started looking into WFLY-2456; I have forked/cloned/built the project and the server starts up as expected. I started digging through the source tree for the org.jboss.as.domain.management.security.adduser.AddUser class but it doesn't appear to be there. I'm guessing the dependency is being pulled in by Maven during the build process.
A quick search through the dist tree reveals a couple of jars that contain the class in question:
Starting search for org/jboss/as/domain/management/security/adduser/AddUser.class in /Volumes/Dev/punagroup/git/wildfly
Zipfile: /Volumes/Dev/punagroup/git/wildfly/build/target/wildfly-9.0.0.CR1-SNAPSHOT/bin/client/jboss-cli-client.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Zipfile: /Volumes/Dev/punagroup/git/wildfly/dist/target/wildfly-9.0.0.CR1-SNAPSHOT/bin/client/jboss-cli-client.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Zipfile: /Volumes/Dev/punagroup/git/wildfly/dist/target/wildfly-9.0.0.CR1-SNAPSHOT/modules/system/layers/base/org/jboss/as/domain-management/main/wildfly-domain-management-1.0.0.Beta2.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Zipfile: /Volumes/Dev/punagroup/git/wildfly/testsuite/integration/smoke/target/jbossas/bin/client/jboss-cli-client.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Zipfile: /Volumes/Dev/punagroup/git/wildfly/testsuite/integration/target/jbossas/bin/client/jboss-cli-client.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Zipfile: /Volumes/Dev/punagroup/git/wildfly/web-dist/target/wildfly-web-9.0.0.CR1-SNAPSHOT/modules/system/layers/base/org/jboss/as/domain-management/main/wildfly-domain-management-1.0.0.Beta2.jar contains entry name: org/jboss/as/domain/management/security/adduser/AddUser.class
Exiting routine.
Looking for advice from the group: has this code not been migrated to the new source tree yet, or are there other intentions? Any pointers would be appreciated.
Brandon
Using JavaScript with Wildfly
by Stuart Douglas
Hi all,
There has been some discussion about supporting JavaScript in WildFly for a
while now, and as a result I have come up with a simple proof of concept of
the form I think this support could take.
At the moment this is actually not part of WildFly at all, but rather a jar
file that you can include in your apps, which allows you to register
JavaScript-based handlers. These handlers can be mapped to URLs, and can
inject container resources such as CDI beans and JNDI data sources. It also
provides some simple JavaScript wrappers to make some EE objects easier to
use from scripts.
At the moment handlers are mainly useful as REST endpoints, although if
there is interest I am planning on adding template engine support as well.
When combined with my external resources PR (
https://github.com/wildfly/wildfly/pull/7299) this allows for changes in
your script files to be immediately visible, without even needing to copy
to an exploded deployment.
I envisage the main use of this will not be creating node.js-like apps that
are pure JavaScript, but rather allowing simpler parts of the app to be
written in JavaScript, thus avoiding the compile+redeploy cycle.
Full details are here: https://github.com/undertow-io/undertow.js
I have an example of the Kitchen Sink quickstart that has been re-done to
use this here:
https://github.com/wildfly/quickstart/compare/master...stuartwdouglas:js#...
At this stage I am really not sure how this will evolve, or if it will go
anywhere; I am just putting it out there to get some feedback.
Stuart
Ordered child resources
by Kabir Khan
I am working on being able to order child resources, which is important for things like JGroups where the protocol order matters. On top of the domain operations work I inherited from Emanuel, the order will get propagated through the domain. Currently, for JGroups, the only way to adjust the protocol order is to remove all protocols and add them again, and on the domain ops branch (before what I am outlining here) any new protocols end up at the end of the slave's list upon reconnect.
The steps to make a child resource ordered are currently:
1) Make the 'parent' resource's add handler call a different factory method:

@Override
protected Resource createResource(OperationContext context) {
    Resource resource = Resource.Factory.create(false, "orderedA", "orderedB"); // Names of the child types where ordering matters
    context.addResource(PathAddress.EMPTY_ADDRESS, resource);
    return resource;
}
2) In the ordered child resource definitions, override the new getOrderedChildResource() method to return true:
class OrderedChildResourceDefinition extends SimpleResourceDefinition {

    public OrderedChildResourceDefinition(PathElement element) {
        super(PathElement.pathElement("orderedA"), new NonResolvingResourceDescriptionResolver(),
                new OrderedChildAddHandler(REQUEST_ATTRIBUTES), new ModelOnlyRemoveStepHandler());
    }

    @Override
    protected boolean getOrderedChildResource() {
        return true;
    }
    ...
}
This has the effect of adding a parameter called 'add-index' to the 'add' operation's description. So if you have
/some=where/orderedA=tree
/some=where/orderedA=bush
You can do e.g. /some=where/orderedA=hedge:add(add-index=1) and end up with:
/some=where/orderedA=tree
/some=where/orderedA=hedge
/some=where/orderedA=bush
3) The final part is to adjust the ordered child resource's add handler to honour the add-index parameter:

class OrderedChildAddHandler extends AbstractAddStepHandler {

    public OrderedChildAddHandler(AttributeDefinition... attributes) {
        super(attributes);
    }

    @Override
    protected Resource createResource(OperationContext context, ModelNode operation) {
        if (!operation.hasDefined(ADD_INDEX) || operation.get(ADD_INDEX).asInt() < 0) {
            return super.createResource(context);
        }
        return context.createResource(PathAddress.EMPTY_ADDRESS, operation.get(ADD_INDEX).asInt());
    }
}
4) Not really related to what a user needs to do to create an ordered resource, but 1-3 are made possible by two new methods I have added to the Resource interface:
/**
 * Return the child types for which the order matters.
 *
 * @return the set of child types for which the order of the children matters. If there are no ordered
 *         children an empty set is returned. This method should never return {@code null}
 */
Set<String> getOrderedChildTypes();
/**
 * Register a child resource
 *
 * @param address the address
 * @param index the index at which to add the resource. Existing children with this index and higher will be shifted one up
 * @param resource the resource
 * @throws IllegalStateException for a duplicate entry or if the resource does not support ordered children
 */
void registerChild(PathElement address, int index, Resource resource);
The main question I have is whether 1-3 are too 'fragile' and whether we need something to enforce/glue this together a bit more. At the same time, ordered child resources should be the exception rather than the rule.