Srcdeps in WildFly and WildFly Core
by Peter Palaga
Hi *,
this is not new to those of you who attended my talk at the F2F 2016 in
Brno. Let me explain the idea here again for all others who did not have
a chance to be there.
Srcdeps [1] is a tool to build Maven dependencies from their sources.
With srcdeps, wildfly-core can depend on a specific commit of, e.g.,
undertow:
<version.io.undertow>1.4.8.Final-SRC-revision-aabbccd</version.io.undertow>
where aabbccd is the git commit id to build when any undertow artifact
is requested during the build of wildfly-core.
[1] describes in detail how it works.
The main advantage of srcdeps is that changes in components can be
integrated and tested in wildfly-core immediately after they are
committed to a public component branch. There is no need to wait for the
component release.
Here in the WildFly family of projects, it is often the case that
something needs to be fixed in a component, but the verification (using
a bug reproducer or an integration test) is possible only at the level of
wildfly or wildfly-core. Engineers typically work with snapshots
locally, but when their changes need to be shared (CI, reviews) in a
reproducible manner, snapshots cannot be used anymore.
In such situations a source dependency comes in handy: it is very easy to
share and it is as reproducible as a Maven build from a specific commit
can be. All CIs and reviewers can work with it, because all source
dependency compilation is done under the hood by Maven.
Developers working on changes that span over multiple interdependent git
repos can thus get feedback (i-tests, reviews) quickly without waiting
for releases of components.
Srcdeps emerged in the Hawkular family of projects to solve exactly this
kind of situation and has been in use there since around October 2015.
When I said there is no need to wait for releases of components, I did
not mean that we can get rid of component releases altogether. Clearly,
we cannot, because, among other reasons, for any tooling unaware of how
srcdeps works, those source dependencies would simply be non-resolvable
from public Maven repositories. So, before releasing the dependent
component (such as wildfly-core), all its dependencies need to be
released. To enforce this, srcdeps is by default configured to make the
release fail as long as there are any source dependencies.
I have sent a PR introducing srcdeps to wildfly-core:
https://github.com/wildfly/wildfly-core/pull/2122
To get a feeling for how it works, check out the branch, switch to, e.g.,
<version.io.undertow>1.4.8.Final-SRC-revision-1bff8c32f0eee986e83a7589ae95ebbc1d67d6bd</version.io.undertow>
(that happens to be the commit id of the 1.4.8.Final tag)
and build wildfly-core as usual with "mvn clean install". You'll see in
the build log that undertow is being cloned to ~/.m2/srcdeps/io/undertow
and that it is built there. After the build, check that the
1.4.8.Final-SRC-revision-1bff8c32f0eee986e83a7589ae95ebbc1d67d6bd
version of Undertow got installed to your local Maven repository (usually
~/.m2/repository/io/undertow/undertow-core).
Are there any questions or comments?
[1] https://github.com/srcdeps/srcdeps-maven#srcdeps-maven
Thanks,
Peter
P.S.: I will be talking about srcdeps on Saturday 2017-01-28 at 14:30 at
DevConf Brno.
JPADependencyProcessor "infecting" classpath with the wrong Javassist version
by Sanne Grinovero
Hi all,
Scott sent a nice PR to WildFly a while back to fix the problem:
- https://github.com/wildfly/wildfly/pull/9305
It wasn't merged; I guess it's not a priority for WildFly, but let me
clarify that without such fixes it's impossible for people to use
newer versions of Hibernate ORM on WildFly, and I suspect it causes a
lot of pain for other libraries using Javassist as well.
Quite a few people in the Hibernate community have expressed
interest in using a less stale version than the one typically
available in the latest stable release of WildFly.
To make this happen, all Hibernate projects are now publishing
"WildFly modules" which can be easily downloaded as an additional
drop-in layer.
Granted, these are not for everyone, but we get good feedback from the
interested power users, and not least this allows us to develop all
our projects while regularly testing integration with WildFly, making
sure that the eventual integration goes more smoothly.
The current problem is that the WildFly JPADependencyProcessor adds
the wrong version of Javassist to deployments, and there's no way
for us to prevent or override this, making it impossible to use a
recent version of Hibernate ORM, as it requires a newer version.
- https://github.com/wildfly/wildfly/blob/6b61a6003f704221f66dcd9f418bcb7af...
We'd highly appreciate it if that PR could be merged, including on
product branches, please, as enforcing a dependency which is neither
needed nor desired is going to break a long list of other frameworks
as well.
Thanks,
Sanne
The final (?) property expression expander
by David M. Lloyd
The basic problem is that we have a variety of client libraries that
need property expansion, which are rolling their own or not doing it at
all at present. We have a couple of implementations of property expansion
on the server. We have some potential future property expansion
requirements. Some or all of these things do (or need to do) property
expansion slightly differently. We've been balling up and depositing
the same properties code over and over again, evolving it slightly each
time, which in turn makes it harder to adapt to the next use case. So,
it's time to stop the madness.
In wildfly-common I'm introducing a new properties expander, implemented
as a pure recursive-descent parser instead of the previous NFA-ish
parser. It is divided into two parts: syntax and expansion.
Syntax is handled by the expression compiler, whose API consists of a
static factory method that accepts a pattern string and syntax flags,
and returns the resultant Expression.
The syntax flags currently allow for the following syntax behaviors:
• NO_TRIM: Do not trim leading and trailing whitespace off of the
expression string before parsing it.
• LENIENT_SYNTAX: Ignore syntax problems whenever possible instead of
throwing an exception.
• MINI_EXPRS: Support single-character expressions that can be
interpreted without wrapping in curly braces.
• NO_RECURSE_KEY: Do not support recursive expression expansion in the
key part of the expression.
• NO_RECURSE_DEFAULT: Do not support recursion in default values.
• NO_SMART_BRACES: Do not support smart braces (this is where you have
{something} inside of a key or default value).
• GENERAL_EXPANSION: Support the Policy-file-style "general" alternate
expression syntax. "Smart" braces will only work if the
opening brace is not the first character in the expression key.
• ESCAPES: Support standard Java escape sequences (which begin with a
backslash character) in plain text and default value fields.
• DOUBLE_COLON: Treat expressions containing a double-colon initial
delimiter as special, encoding the entire content into the key.
More behaviors can be contributed (along with corresponding tests of
course).
Once an Expression is compiled, the resultant object can be used for
expansion by providing an expansion function. The function is given a
context which allows introspection into the key sub-expression, the
default value sub-expression, and the string builder target. In
addition, the function may throw at most one checked exception type of
the user's choice, allowing expansion problems to be reported in any way.
The API provides a few default expanders to support only simple system
properties and environment variables in the de facto standard manner
that we have always used; this is useful for client libraries with basic
needs.
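
To make the shape of this more concrete, here is a rough usage sketch in
Java, based purely on the description above; the type and member names
(Expression, Expression.Flag, compile, evaluate, getKey, expandDefault)
are my guesses at what the code exposes, not a confirmed API:

    // Illustrative sketch only: these names mirror the description in
    // this mail and may not match the actual wildfly-common API exactly.
    Expression expression = Expression.compile(
            "${my.prop:fallback}",
            Expression.Flag.LENIENT_SYNTAX, Expression.Flag.ESCAPES);

    // The expansion function gets a context exposing the key and default
    // value sub-expressions plus the target string builder; it may throw
    // at most one checked exception type of the caller's choice.
    String result = expression.evaluate((context, target) -> {
        String value = System.getProperty(context.getKey());
        if (value != null) {
            target.append(value);
        } else {
            // Fall back to the default value sub-expression, if present.
            context.expandDefault();
        }
    });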
Find the initial code here:
https://github.com/wildfly/wildfly-common/pull/10
I'll merge it pending a bit more testing & any feedback.
--
- DML
New invocation merge
by Kabir Khan
Hi,
The new invocation library, which is the basis for the EJB client etc., is currently being developed in a branch. There are still about 130-140 test failures, but the team feels it is time to merge to WildFly master at some stage later this week. This will give the failures more visibility and also lower the barrier of entry for whoever can jump in and help fix them.
Are there any objections to merging this?
From my point of view, we would need to @Ignore the failing tests, since there are enough transient failures in our testsuite to make it hard to find the usual suspects if the number of failed tests is large. We could set up another CI job against a branch which is master with the @Ignores reverted, and I could keep that up to date as I merge to master, so that we get a good picture of the current testsuite failures.
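
Just to illustrate what I mean (the class name and issue key below are
made up), each ignored test would reference its tracking issue:

    import org.junit.Ignore;
    import org.junit.Test;

    public class RemoteInvocationTestCase {  // hypothetical example class

        @Test
        @Ignore("Fails with the new invocation library; tracked by WFLY-XXXX (placeholder)")
        public void testAsyncInvocation() {
            // test body unchanged; the @Ignore goes away once the failure is fixed
        }
    }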
Thanks,
Kabir
Always fixed thread pool in IO subsystem's worker
by Jan Kasik
Hi, I recently started writing a test for the newly exposed metrics from the
underlying XNIO in the IO subsystem (https://issues.jboss.org/browse/WFCORE-1341).
Because I didn't understand the values, I dug into the XNIO implementation, where
I found out that the underlying thread pool in org.xnio.XnioWorker is
always of fixed size (i.e. 'corePoolSize == maxPoolSize' - see line 117). I
already talked with David M. Lloyd about this, and from what he said I
understood that this is currently a feature, because of the danger of race
conditions.
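
For reference, here is the same situation expressed with plain
java.util.concurrent classes (not XNIO code): when corePoolSize equals
maximumPoolSize, the keep-alive value has no effect unless core thread
timeout is explicitly enabled.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class FixedPoolKeepAlive {
        public static void main(String[] args) {
            // core == max -> a fixed-size pool that never shrinks or grows
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    4, 4,
                    60L, TimeUnit.SECONDS,      // keep-alive: ignored for core threads
                    new LinkedBlockingQueue<>());

            // Only this call would make the keep-alive meaningful here:
            // pool.allowCoreThreadTimeOut(true);

            pool.execute(() -> System.out.println("runs on a fixed-size pool"));
            pool.shutdown();
        }
    }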
Because of this, there are several things in the context of the IO subsystem
which bother me, and I am not sure what to think about them:
* The newly exposed metric 'max-pool-size' has the same value as the already
present 'task-max-threads' - are they really duplicates?
* The newly exposed metrics 'max-pool-size' and 'core-pool-size' are always
equal.
* There is no way for a user to find this out without studying the
implementation - shouldn't the user be informed about this?
* Since the pool is always of fixed size, the attribute 'task-keepalive' has no
meaning - am I right?
Thank you for your ideas!
--
Jan (Honza) Kasik
Red Hat, Associate Quality Engineer, EAP QE
JDK 9 EA Build 151 is available on java.net
by Rory O'Donnell
Hi Jason/Tomaz,
Best wishes for the New Year.
Dalibor and I will be at FOSDEM '17, Brussels 4 & 5 February. Let us
know if you will be there; hopefully we can meet up!
*JDK 9 Early Access* b151 <https://jdk9.java.net/download/> is
available on java.net
There have been a number of fixes to bugs reported by Open Source
projects since the last availability email:
* JDK-8171377 : Add sun.misc.Unsafe::invokeCleaner
* JDK-8075793 : Source incompatibility for inference using -source 7
* JDK-8087303 : LSSerializer pretty print does not work anymore
* JDK-8167143 : CLDR timezone parsing does not work for all locales
Other changes that may be of interest:
* JDK-8066474 : Remove the lib/$ARCH directory from Linux and Solaris
images
* JDK-8170428 : Move src.zip to JDK/lib/src.zip
*JEPs integrated:*
* JEP 295 <http://openjdk.java.net/jeps/295>: Ahead-of-Time
Compilation has been integrated in b150.
*Schedule - Milestones since last availability email*
* *Feature Extension Complete: 22nd of December 2016*
* *Rampdown Started: 5th of January 2017*
o Phases in which increasing levels of scrutiny are applied to
incoming changes.
o In phase 1, only P1-P3 bugs can be fixed. In phase 2 only
showstopper bugs can be fixed.
Rgds, Rory
--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland
Next version of the management console (HAL.next)
by Harald Pehl
We're currently working on the next version of the management console
(HAL.next [1]). HAL.next continues with the column-based navigation
which was introduced in WildFly 9. It's also still written in Java and
based on GWT. However, most parts of the console have been rewritten
for several reasons:
- Introduce new features which were not possible with the old
architecture (keyboard navigation, bookmarks, macro recording, ...)
- Adopt PatternFly [2]
- Clean up the codebase (remove deprecated and unused code)
For more details about the motivation and the complete list of new
features see the README at [1].
Development is still ongoing and we plan to replace the existing
console with HAL.next in WildFly 11 at the earliest. We have now reached a
milestone where it makes sense to get feedback from a bigger audience.
That's why we'd like to invite everyone to take a look at the new
console. Suggestions, ideas, comments and bug reports are welcome.
Please use GitHub issues [3] to file issues and discuss enhancements.
To get started, please follow the README [4]. The easiest way is
probably to use the Docker image at [5]. It contains the latest
WildFly 11 build with HAL.next as the management console.
[1] https://github.com/hal/hal.next
[2] https://www.patternfly.org/
[3] https://github.com/hal/hal.next/issues
[4] https://github.com/hal/hal.next#running
[5] https://hub.docker.com/r/hpehl/hal-next/
--
Harald Pehl
JBoss by Red Hat
http://hpehl.info
Subsystem Inclusion Policy & Role of Feature Packs & Add-ons
by Jason Greene
Hello Everyone,
Recently there has been some confusion about how subsystems should be distributed, and whether or not they should be part of the WildFly repository.
There are three primary use-cases for distributing a subsystem.
#1 - Inclusion in an official WildFly distribution
#2 - A user installable "add-on distribution" which can be dropped on top of a WildFly Distribution (see [A])
#3 - A separate, independent, customized distribution with a differing identity, possibly (but not necessarily) built as a layer (see [A])
If you are after #1, then the subsystem source code (defined as the portion of code which integrates with the server using the subsystem facilities) MUST be included in the WildFly repo. This is because subsystems heavily impact the stability of the server and our compliance with our strict management compatibility policy; additionally, it allows us to keep all included subsystems up to date with core infrastructure changes such as capabilities and requirements, and the upcoming Elytron security integration. Under this approach, a feature-pack is unlikely to be used, as the subsystem would likely just be part of the full feature-pack. It could very well be that we introduce a different, more expansive feature-pack in the future, defining a larger distribution footprint; however, there are currently no plans to do so.
If you are after #2, then you do not want a feature-pack, as feature-packs are just for building custom server distributions. If your use-case is #2, you are by definition not a custom server distribution, but rather a set of modules built the normal Maven way.
If you are after #3, then you likely wish to use the feature-pack mechanism to make it easy to produce your custom distribution. This facility would allow you to keep your source repository limited to just the new subsystems you introduce and pull the rest of the server bits via a Maven dependency. It is important that you change the identity of the server (see [A]), so that patches for the official WildFly server are not accidentally installed.
Thanks!
[A] https://developer.jboss.org/wiki/LayeredDistributionsAndModulePathOrganiz...
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Deployment Root type other than XML or ZIP
by Ramesh Reddy
Hi,
For the Teiid project (http://teiid.org), we typically deploy a .xml or .VDB (zip archive) file to define a virtual database artifact. We are planning to deliver a feature where a virtual database is written in DDL; for this, we would like to deploy a file artifact like "foo-vdb.ddl".
I have written deployment processors for it and added a DEPLOYMENT_ROOT mounter to recognize the deployment artifact, etc. However, during deployment scanning, WildFly always treats anything other than a ".xml" file as a zip archive or an exploded zip archive, so that it can do a VFS mount on that file. I would like ".ddl" files to be handled exactly like ".xml" files. Is there any way to achieve this?
Thank you.
Ramesh..