WildFly 12 Plans, EE8, and Move to Quarterly Iterative Releases
by Jason Greene
Hello Everyone,
Release Model Changes
————————————————————-
In order to bring new capabilities to the community more quickly, we plan to move to a more iterative, time-boxed approach, starting with WildFly 12 (and continuing with 13, 14, etc). By time-boxed, I mean that each release aims to have a fixed and reliable delivery window that approximates a calendar quarter. Since these time frames are fixed, it’s important that any given feature or improvement not hold up a release. To facilitate this we need to make major changes to our development process. Currently, development for any enhancement is merged in chunks as it progresses from inception to completion. This means that to have something worthy of release, we must either block on its completion or roll it back. The latter is often difficult, since at any given moment there are many features under active development, and their respective implementations can become co-dependent. Additionally, it’s common for component dependencies to start off as Alphas and/or Betas, and we end up needing to wait for those components to hit Final before we can cut the release.
The solution to this problem is to rely more on topic branches, and only merge fully completed work to master. By fully complete, I mean all PRs on master should be fully developed, tested, and documented[1]. Additionally, any updated dependencies must be against Final/GA components only.
This has a number of advantages:
A. Master becomes always stable, always releasable. So at any given moment we can decide to cut a release[2]
B. Nightly builds become far more usable, and a great feedback channel (a release starts to have less importance)
C. If a feature takes longer than expected, no big deal; it will be picked up in the next cycle[3]
D. Should anything cause a major regression not caught in testing, it will be easier to revert, since the history will be cleaner
Since in-progress work will need to be based on topic branches, custom jobs on ci.wildfly.org will need to be relied upon instead of the automated CI that runs when you submit a PR (although that is still important and still staying). Additionally, if two changes/improvements are interrelated, they will need to either share a topic branch, or find a way to structure the work independently (potentially adding and removing a temporary construct until both are merged).
[1] To make it easier to associate documentation with the PR, we are looking to move to an AsciiDoc-based solution instead of the Confluence wiki we use today.
[2] While this is generally the case, there are some activities we can’t avoid before releasing, such as ensuring a TCK run has completed.
[3] An important aspect of C is that iterations have a short enough cycle that the pressure to make any particular iteration is low enough to avoid the urge to cram something in and potentially compromise quality (e.g. no docs, etc).
Java EE8
————————
As part of adopting this model, we aim to deliver Java EE8 in incremental chunks, adding support for specs individually or in batches. As an example, for WildFly 12 I propose we target Servlet 4, JSON-B, and CDI 2. Due to unfortunate restrictions in EE certification, we will need a separate configuration profile or property to enable these additional APIs until we complete full EE8 certification.
Proposed WildFly 12 Goals [Target Release Date = Feb 28, 2018]
————————————————————————
+ Adopt new release model
+ Java 9 improvements
+ Servlet 4
+ JSON-B (incorporating Yasson)
+ CDI 2
+ JSF 2.3
+ Metaspace usage improvements
+ Early/initial changes to accommodate the new provisioning effort (easy slimming, updates, etc)
Proposed WildFly 13 Goals (Very Tentative) [May 2018]
———————————————————————————-
+ New EE Security Spec
+ Adoption of provisioning
+ JPA 2.2
+ JAX-RS 2.1
+ BV 2.0
+ Agroal inclusion
Just to highlight: with this new model, the goals I am proposing are not something we would block on; any given item might be deferred to the next release if it’s not quite ready. Let me know if you have any additional major items you are planning to contribute towards 12.
Thanks!
-Jason
Run level as a factor for capabilities and requirements?
by Brian Stansberry
Something the current capabilities/requirements stuff doesn't handle is the
fact that some capabilities can be configured but won't be turned on in
some situations (e.g. admin-only). This means other capabilities that
might require them, and that are present in admin-only, will pass
configuration consistency checks but will fail at runtime.
I'm not sure what to do about this. Some off-the-top-of-my-head thoughts:
1) The capability description data in the wildfly-capabilities repo [1]
includes something about this, so people who want to require the capability
understand whether it can be required.
This is easy, and helps avoid future bugs. But it's just documentation, so
it does nothing about the actual server behavior.
2) The registration for capabilities could include "minimal running-mode"
data, and then the capability resolution could check that and fail if it
finds a mismatch in the current running mode.
This is more work, obviously. It may help surface problems earlier, i.e.
make it more likely that a testsuite catches a mismatch in time to correct
it before a .Final release. It would also have the minor benefit of perhaps
providing a better error message for a user who configures a mismatch.
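To illustrate the shape of 2), here's a purely hypothetical sketch (invented names, not the actual capability API) of a registration carrying minimal running-mode data, plus the resolution-time check:

enum RunningMode { ADMIN_ONLY, NORMAL }

final class CapabilityRegistration {
    final String name;
    final RunningMode minimalMode; // lowest running mode in which the capability is turned on

    CapabilityRegistration(String name, RunningMode minimalMode) {
        this.name = name;
        this.minimalMode = minimalMode;
    }
}

final class CapabilityResolution {

    // Called while resolving a requirement: fail fast on a running-mode
    // mismatch instead of letting the requirement fail at runtime.
    static void checkAvailable(CapabilityRegistration required, RunningMode current) {
        if (required.minimalMode == RunningMode.NORMAL && current == RunningMode.ADMIN_ONLY) {
            throw new IllegalStateException("Capability '" + required.name
                    + "' is not available in running mode " + current);
        }
    }

    public static void main(String[] args) {
        CapabilityRegistration cap =
                new CapabilityRegistration("org.example.some-capability", RunningMode.NORMAL);
        checkAvailable(cap, RunningMode.ADMIN_ONLY); // throws: mismatch caught at resolution time
    }
}

The real mechanism would presumably hang off the existing capability registry rather than a standalone check like this.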
3) The management layer could somehow make this data available to
subsystems so they could utilize it. The requirer would see that the
required cap is not available in the current run level, and so would not
try to install its own cap, instead logging a WARN or something.
This is the most work, and I have huge doubts about its wisdom. The
software is no longer reasonably predictable as to whether something is on
or off in a given run level; now it's on or off depending on whether
something else is on or off.
For any of these we'll need to formalize our existing concepts into a solid
run-level concept. I don't think that should be too hard.
[1] https://github.com/wildfly/wildfly-capabilities
--
Brian Stansberry
Manager, Senior Principal Software Engineer
Red Hat
WildFly-Camel 5.0.0 released
by Thomas Diesler
Folks,
I am happy to announce that WildFly-Camel 5.0.0 <https://github.com/wildfly-extras/wildfly-camel/releases/tag/5.0.0> has been released.
It provides Camel-2.20.1 integration with WildFly-11.0.0
This is a major upgrade release for supported components, data formats and languages, which now reaches feature parity with other runtimes, i.e. all available data formats and languages are now also supported on WildFly.
Additional components in the supported set <http://wildfly-extras.github.io/wildfly-camel/#_camel_components> are:
• apns
• asterisk
• atomix
• azure-blob
• azure-queue
• beanstalk
• caffeine
• chronicle-engine
• chunk
• cm-sms
• consul
• couchbase
• crypto-cms
• digitalocean
• docker
• elasticsearch5
• etcd
• flink
• google-bigquery
• google-calendar
• google-drive
• google-mail
• grpc
• guava-eventbus
• hazelcast
• headersmap
• hipchat
• iec60870
• jclouds
• jcr
• json-validator
• jt400
• ldif
• leveldb
• lumberjack
• master
• milo
• mongodb-gridfs
• nagios
• olingo4
• openstack-cinder
• openstack-glance
• openstack-keystone
• openstack-neutron
• openstack-nova
• openstack-swift
• printer
• pubnub
• quickfix
• reactor
• rmi
• shiro
• sip
• sips
• slack
• spring-javaconfig
• spring-ws
• stomp
• telegram
• thrift
• twilio
• xmpp
• yammer
• zookeeper-master
Additional data formats in the supported set <http://wildfly-extras.github.io/wildfly-camel/#_data_formats> are:
• asn1
• fastjson
• thrift
Component upgrades include:
• WildFly-11.0.0
• Camel-2.20.1
• Hawtio-1.5.5
In addition to that, we also resolved a number of other tasks and bugs <https://github.com/wildfly-extras/wildfly-camel/blob/master/docs/Changelo...>.
For details please see the 5.0.0 Milestone <https://github.com/wildfly-extras/wildfly-camel/issues?q=milestone:5.0.0>.
Enjoy
— thomas
security/elytron CLI commands
by Jean-Francois Denise
Hi,
While discussing with Darran how to extend the CLI to help configure
Elytron, he suggested that we should move the discussion to this list. I
have started a document [1] in order to collect feedback on new CLI
commands to address security configuration. Feel free to comment on the
document.
Thank you.
JF
[1] https://developer.jboss.org/wiki/SSLCommandsForCLI
request.getSession() returns a different object for each request - is Undertow's session object the object retrieved by request.getSession()?
by Eric B
I'm migrating a legacy (circa 2005) n-tier JBoss 4 application to run in
WildFly 10. It was a struggle, but we got pretty much everything done. We've
been trying as hard as possible not to refactor logic/code/etc except in
the cases where it is simply no longer compatible. The goal was to get the
application working "as-is" in WildFly, then start tackling the job of
breaking it apart and refactoring it into proper pieces.
The problem I have run into now concerns session management. The
application was designed to run on multiple nodes, but in standalone
instances, i.e. there is no shared memory, nor any distributed caches. So
I'm trying to get my WF10 application to follow the same pattern (even
though I realize all this exists in WF10 out of the box with
Infinispan etc).
One of the requirements is to only have a single session active for a user
at any time in the "cluster". The JBoss 4 application accomplished this by
maintaining a local cache (HashMap) of session objects keyed by username.
When a user logs into any node, a JMS message is broadcast to all the nodes
to invalidate any session belonging to that user. The listener on each
node then searches its cache for a matching user/session object and calls
session.invalidate(). Some extra logic uses WeakReferences for
the session objects (in case the session is destroyed by some other flow in
the container).
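For clarity, the pattern looks roughly like this (a minimal sketch with hypothetical names; the JMS wiring and login hook are omitted):

import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpSession;

public class LocalSessionRegistry {

    private final ConcurrentHashMap<String, WeakReference<HttpSession>> sessionsByUser =
            new ConcurrentHashMap<>();

    // Called when a user logs in on this node.
    public void register(String username, HttpSession session) {
        sessionsByUser.put(username, new WeakReference<>(session));
    }

    // Called by the JMS listener when any node broadcasts a login for 'username'.
    public void invalidateFor(String username) {
        WeakReference<HttpSession> ref = sessionsByUser.remove(username);
        HttpSession session = (ref == null) ? null : ref.get();
        if (session != null) {
            try {
                session.invalidate();
            } catch (IllegalStateException alreadyInvalidated) {
                // The container destroyed the session through some other flow.
            }
        }
    }
}

The WeakReference is there so the registry doesn't pin sessions the container has already destroyed.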
As ugly as the solution was, we've tried to follow the same pattern under
WF10. But it is failing in many different ways, which reinforces my belief
that it isn't the right approach. However, I would like to understand
how/why it is failing.
My server configuration uses a <local-cache> for my "web" cache. To me
this means it is only local to the machine. But:
1) On any specific node, request.getSession() returns a different object
for each request. The session ID remains the same, but the actual object
identity changes. This implies that it is a different representation of the
session object.
2) If I persist a local copy of the HttpSession object between requests
(e.g. in a static map) and invalidate the session using the persisted
object, my request.getSession() object is not updated (e.g. the invalid flag
is still set to false), but the session is dead. Trying to call
request.getSession().invalidate() throws IllegalStateException, as do calls
to request.getSession().setAttribute()/getAttribute().
3) Over time, my JVM will actually crash with an EXCEPTION_ACCESS_VIOLATION
in a GC process. This always seems to correlate with a thread that is
trying to do some session invalidation via the persisted session copy.
Is anyone able to explain this behaviour? Why is the session object always
different between requests? Shouldn't it be the same object? What is
Undertow doing with the session objects between requests? Is the Undertow
object being passivated in some way, and is my attempt to invalidate it from
within my cached version causing this kind of access violation? Is my
cached object referencing memory that has been cleared by the GC (e.g. does
the request.getSession() object hold only a WeakReference to the actual
Undertow object)?
Finally, what would the recommended approach be to doing something like
this? Using a distributed web cache is unfortunately not an option at the
moment. So given that, is there some way to access the Undertow session
manager directly?
Thanks for any insight. I thought we had a functional solution, but in
production (under real load) the intermittent JVM crashes are telling me
that our solution is broken.
Eric
XWiki in WildFly 10
by Rogério Luciano Santos
Hello.
Does anyone know XWiki? I want to deploy the XWiki WAR in WildFly 10 but an
error occurs:
Cannot upload deployment: {"WFLYCTL0080: Failed services" =>
{"jboss.deployment.unit.\"xwiki-9.10.1.war\".POST_MODULE" =>
"org.jboss.msc.service.StartException in service
jboss.deployment.unit.\"xwiki-9.10.1.war\".POST_MODULE: WFLYSRV0153: Failed
to process phase POST_MODULE of deployment \"xwiki-9.10.1.war\" Caused by:
java.lang.RuntimeException: WFLYSRV0177: Error getting reflective
information for class com.codahale.metrics.jetty9.InstrumentedHandler$7
with ClassLoader ModuleClassLoader for Module
\"deployment.xwiki-9.10.1.war:main\" from Service Module Loader Caused by:
java.lang.NoClassDefFoundError: Failed to link
com/codahale/metrics/jetty9/InstrumentedHandler (Module
\"deployment.xwiki-9.10.1.war:main\" from Service Module Loader):
org/eclipse/jetty/server/handler/HandlerWrapper"},"WFLYCTL0412: Required
services that are not installed:" =>
["jboss.deployment.unit.\"xwiki-9.10.1.war\".POST_MODULE"],"WFLYCTL0180:
Services with missing/unavailable dependencies" => undefined}
This application runs in a GlassFish server.
write-timeout and ClosedChannelException
by Panos Konstantinidis
Hello,
apologies in advance if this is not the right place to ask questions, but I have some questions that require in-depth understanding of WildFly, and it seems that the users' forum is not the right place to ask them.

We have a WildFly instance (wildfly-10.1.0.Final) installed in standalone mode. We added write-timeout="45000" to the http-listener and https-listener on production, but we noticed that we get a lot of ClosedChannelException errors:

java.nio.channels.ClosedChannelException: null
at io.undertow.conduits.WriteTimeoutStreamSinkConduit.handleWriteTimeout(WriteTimeoutStreamSinkConduit.java:106)
at io.undertow.conduits.WriteTimeoutStreamSinkConduit.write(WriteTimeoutStreamSinkConduit.java:122)

I would have expected to only see WriteTimeoutException errors, as described in the WildFly 10.0 Model Reference, which we do, but very occasionally. By looking at the Undertow code (io.undertow.conduits.WriteTimeoutStreamSinkConduit) I noticed that the ClosedChannelException is thrown in the following piece of code:

if (expireTimeVar != -1 && currentTime > expireTimeVar) {
    IoUtils.safeClose(connection);
    throw new ClosedChannelException();
}

We managed to reproduce the problem in the UAT environment with the following settings:

<http-listener name="default" tcp-keep-alive="false" read-timeout="45000" write-timeout="10000" socket-binding="http" record-request-start-time="true" redirect-socket="https" enable-http2="true"/>
<https-listener name="https" tcp-keep-alive="false" read-timeout="45000" write-timeout="10000" socket-binding="https" record-request-start-time="true" security-realm="ApplicationRealm" enable-http2="true"/>
and also with the following:
<http-listener name="default" tcp-keep-alive="true" read-timeout="45000" write-timeout="10000" socket-binding="http" record-request-start-time="true" redirect-socket="https" enable-http2="true"/>
<https-listener name="https" tcp-keep-alive="true" read-timeout="45000" write-timeout="10000" socket-binding="https" record-request-start-time="true" security-realm="ApplicationRealm" enable-http2="true"/>
We cloned the Undertow code and changed WriteTimeoutStreamSinkConduit.java ourselves (we just added a few debug statements), rebuilt undertow-core-1.4.0.Final.jar, and replaced the one in the UAT environment with our custom version. I noticed the following in the logs:

[INFO ] 2017-12-05 12:48:57.240 [default task-6] [pt0_WuRKH4rsd6yPRrXen53Ee2OiPGlNed64iFrI] stdout [?:?] - Updating expire time to: currentTime: 1512470937239 + timeout: 10000
[INFO ] 2017-12-05 12:49:11.452 [default task-4] [pt0_WuRKH4rsd6yPRrXen53Ee2OiPGlNed64iFrI] stdout [?:?] - Timeout is set to: 10000
[INFO ] 2017-12-05 12:49:11.453 [default task-4] [pt0_WuRKH4rsd6yPRrXen53Ee2OiPGlNed64iFrI] stdout [?:?] - currentTime is: 1512470951453 and expireTime is: 1512470947239
[INFO ] 2017-12-05 12:49:11.454 [default task-4] [pt0_WuRKH4rsd6yPRrXen53Ee2OiPGlNed64iFrI] stdout [?:?] - currentTime > expireTimeVar: true
[ERROR] 2017-12-05 12:49:11.455 [default task-4] [pt0_WuRKH4rsd6yPRrXen53Ee2OiPGlNed64iFrI] g.c.RestExceptionHandler [RestExceptionHandler.java:24] - Exception Thrown: java.nio.channels.ClosedChannelException

If you notice, we are talking about two different requests (different thread names) under the same user (same session ID) 14 seconds apart. I *guess* this exception occurs because the same socket is reused for both requests (for both threads), and thus the expire time applies to all requests over the same socket. But in this case I would expect a WriteTimeoutException, as explained in the docs.
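To check my understanding, here is a minimal standalone model of the quoted check (a hypothetical class and method, not Undertow's actual implementation), using the timestamps from our logs:

import java.nio.channels.ClosedChannelException;

public class WriteTimeoutModel {

    private final long timeoutMillis;
    private long expireTime = -1; // -1 means the timer is not armed yet

    WriteTimeoutModel(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // The expiry is checked on a write and re-armed after it, and the state
    // lives with the connection, not with the request.
    void write(long now) throws ClosedChannelException {
        if (expireTime != -1 && now > expireTime) {
            throw new ClosedChannelException(); // the real conduit also closes the connection
        }
        expireTime = now + timeoutMillis;
    }

    public static void main(String[] args) throws Exception {
        WriteTimeoutModel connection = new WriteTimeoutModel(10_000);
        connection.write(1512470937239L); // first request arms the timer (expires at ...947239)
        connection.write(1512470951453L); // second request ~14s later: throws ClosedChannelException
    }
}

If this model is right, any write on a kept-alive connection that happens more than write-timeout after the previous one trips the check, which would explain what we see.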
So my questions are:
a) How exactly does the write-timeout work? I would have thought that each new request resets the timer.
b) Why don't I get a WriteTimeoutException instead? There are days when we see hundreds of ClosedChannelException but no WriteTimeoutException.
c) Is there any way to avoid the ClosedChannelException? None of our users has complained yet, but the stack traces clog our log files.

Regards
Panos