WildFly GitHub Bot
by Martin Stefanko
Hi,
we created a custom wildfly-github-bot that is ready to be deployed to the
wildfly/wildfly repository when we get the green light -
https://github.com/xstefank/wildfly-github-bot.
It is a Java application that listens for GitHub events and interacts with
GitHub's public API.
For now, it contains two main features:
1/ PR format verification
Verifies the PR is in the expected format (WFLY-XYZ Test). A simple video
showing this in action: https://www.youtube.com/watch?v=RgD7RhEegdc.
2/ Automatic /cc comment based on the changed files in the PR:
For instance, in https://github.com/xstefank/wildfly/pull/65 or
https://github.com/xstefank/wildfly/pull/64 you can see that I'm mentioned
because the commit changed a file under the microprofile/lra subdirectory.
The configuration lives in the project that is utilizing the bot -
https://github.com/xstefank/wildfly/blob/main/.github/wildfly-bot.yml. As
you can see, almost everything is configurable.
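As an illustration only, a rule in that file might look roughly like this
(the key names here are hypothetical; see the linked file for the real
schema):

wildfly:
  rules:
    - id: "lra"
      directories:
        - microprofile/lra
      notify: [xstefank]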
The bot is currently deployed in the custom OSD (OpenShift Dedicated)
cluster that we got provisioned just for this purpose. I would be the main
maintainer, with a few people from SET backing me.
The deployed bot is configured for the https://github.com/xstefank/wildfly
repository. Feel free to open as many PRs as you'd like to experiment with
the bot. Check the configuration file to see what you can do -
https://github.com/xstefank/wildfly/blob/main/.github/wildfly-bot.yml.
Of course, this is just a start. We can add automatic labels, CI triggers,
review requests (if wanted), or milestones, etc. We are only limited by
what can be done on GitHub.
Thanks,
Martin Stefanko
Principal Software Engineer
Middleware Runtimes Sustaining Engineering Team
Red Hat
1 year, 2 months
Turn on dependabot
by Brian Stansberry
Occasionally we've thought about turning on dependabot for the main WildFly
repo, and a couple current discussions (see [1] and [2]) relate to that, so
it seems a good time to discuss further and perhaps take action.
My main concern with dependabot is it doesn't integrate with JIRA. JIRA is
really important to how we're able to keep a handle on a project as complex
as WildFly. And I think it's important to track component upgrades in JIRA
so our users can keep an eye on what we're providing. That's particularly
important in the world of ubiquitous CVE scanners.
But James Perkins has pointed out that such JIRA tracking is kind of
overkill for non-production dependencies (e.g. test and build deps) and I
agree.
So, how about we turn on dependabot and require a JIRA to be filed and
linked to the PR if the proposed upgrade is a production code dep? For
non-production deps a JIRA would be optional.
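For reference, turning it on is just a small config file in the repo. A
minimal sketch using the standard dependabot.yml schema (the weekly cadence
is only an example):

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/"
    schedule:
      interval: "weekly"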
The other thing I care about a lot is being able to grep the git log for
commits related to a JIRA. That would of course be lost for non-production
upgrades with no JIRA. Oh well. Note also that dependabot wouldn't put our
JIRA in its commit messages. But for PRs where we file a JIRA we can
require a human edit of the dependabot PR title to reference the JIRA. That
will result in the JIRA appearing in the log via the merge commit GitHub
generates. That solves the git log use case adequately enough IMO.
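For example, with the JIRA key in the merge commit subject, the usual
lookup still works (illustrative issue key):

git log --oneline --grep='WFLY-12345'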
Thoughts?
[1]
https://lists.jboss.org/archives/list/wildfly-dev@lists.jboss.org/thread/...
[2]
https://lists.jboss.org/archives/list/wildfly-dev@lists.jboss.org/thread/...
Best regards,
Brian
1 year, 3 months
Changes to WildFly Preview test profile activation
by Brian Stansberry
I've filed https://github.com/wildfly/wildfly/pull/17049 which, if merged,
will change what developers need to do to run the testsuite against WildFly
Preview.
Instead of -Dts.ee9 you would use -Dts.preview.
For bootable jar testing instead of -Dts.bootable.ee9 you would use
-Dts.bootable.preview.
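e.g. (illustrative invocations, assuming the usual -DallTests flag to
enable the full testsuite):

mvn install -DallTests -Dts.preview
mvn install -DallTests -Dts.bootable.preview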
I configured the ci.wildfly.org jobs that test main to use both sets of
properties, so they can test the PR. Once it's merged I'll remove the ee9
variants from the job configs.
Best regards,
Brian Stansberry
He/Him/His
1 year, 3 months
WFCORE-6221 Incorporating preview/experimental features in WildFly
by Paul Ferraro
We have long wanted the ability to easily give users opt-in access to
less-than-stable features within WildFly.
To this end, WildFly currently includes a preview feature pack that
facilitates the delivery of preview features to WildFly users.
This gives us the ability to include new or alternate versions of modules
not included in our default feature pack.
However, this mechanism is not well utilized, as evidenced by the large
number of feature proposals sitting in the pull request queue, largely because
the vast majority of "features" do not naturally arrive via a new module
(or different version of an existing module), but rather via changes to
existing modules usually residing within the WildFly codebase.
For the purpose of this proposal, I am mostly concerned with "features"
defined as new runtime behavior enabled via configuration within a new or
existing subsystem.
Usually, development of a new feature involves not only the feature code
itself, which may be bundled with the WildFly codebase or delivered via an
external component, but also changes to the management model of the
corresponding subsystem that are required to enable the feature. This might be a new
subsystem, but more typically, a new resource within an existing subsystem,
or a new attribute of an existing resource, etc.
Rather than only being able to control the set of available features by
controlling the modules of a given feature pack, it would be more useful, I
think, to allow existing modules to enable features by filtering a
subsystem's management model, thus exposing/restricting the configuration
needed to enable a feature's runtime behavior.
Several months ago, I created https://issues.redhat.com/browse/WFCORE-6221
which proposes to formalize the concept of a "feature stream" within the
WildFly kernel.
We currently only support the inclusion of stable, well-tested features,
which generally requires a new subsystem management model version. Let's
call this the STABLE feature stream, where a "feature stream" is a set of
features with specific stability guarantees, e.g. STABLE, PREVIEW,
EXPERIMENTAL, etc.
By associating incoming features with a non-STABLE "feature stream", e.g.
PREVIEW, EXPERIMENTAL, we can more quickly include new features into
WildFly, allowing users access to them via a simple opt-in mechanism. This
way we can more quickly evolve WildFly while still retaining the same
testing standards required for a feature to be deemed STABLE.
While we can complicate things later, let's assume for now that feature
streams are a nested hierarchy (made concrete in the sketch after this
list), i.e.
- a server configured with the STABLE feature stream will only contain
STABLE features, not PREVIEW nor EXPERIMENTAL features
- a server configured with the PREVIEW feature stream will contain
STABLE and PREVIEW features, but not EXPERIMENTAL features.
- a server configured with the EXPERIMENTAL feature stream will contain
all features.
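A minimal sketch of what such a stream enum might look like (hypothetical;
the actual API in the WFCORE-6221 branch may differ):

public enum FeatureStream {
    STABLE, PREVIEW, EXPERIMENTAL;

    // A server stream enables a feature stream if the feature's stability
    // level is at or below the server's own level; e.g. a PREVIEW server
    // enables STABLE and PREVIEW features, but not EXPERIMENTAL ones.
    public boolean enables(FeatureStream stream) {
        return stream.ordinal() <= this.ordinal();
    }
}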
WFCORE-6221 proposes that the features exposed by a given subsystem are
defined, not just by its management model, but also the feature stream of
the server.
To achieve this, WFCORE-6221 proposes the following changes to WildFly core:
- Add the ability to start WildFly with a specific "feature stream"
- This takes inspiration from JEP 12, which introduced "preview
features" to OpenJDK (https://openjdk.org/jeps/12)
- e.g.
- ./standalone.sh --feature-stream=experimental
- ./domain.sh --feature-stream=experimental
- Add the ability to manipulate management model registration based
on the "feature stream" of the server
- Add support for "feature stream"-specific subsystem XML namespaces
A WildFly server instance is assigned a "feature stream" at startup, either
via the command line (for the standalone use case), or via its host
controller (for the managed domain use case). By default, a server will
use the STABLE feature stream.
Let's look at a few different use cases, and explore how each might be
handled. Forgive me in advance if all of my examples are
clustering-related... :)
In general, I will show 2 approaches: one using programmatic filtering, and
the other using auto-filtering.
I expect most users would use the auto-filtering approach.
1. Introducing an experimental feature enabled via a new subsystem
e.g. https://issues.redhat.com/browse/WFLY-14953
The module containing the extension for an experimental subsystem needs to
be made available within the target feature pack.
However, an experimental subsystem simply skips registration if the current
feature stream does not support EXPERIMENTAL features.
e.g.
public class FooExtension implements Extension {
    @Override
    public void initialize(ExtensionContext context) {
        if (context.enables(FeatureStream.EXPERIMENTAL)) {
            SubsystemRegistration subsystem = context.registerSubsystem("foo", FooSubsystemModel.VERSION_1_0.getVersion());
            // ...
        }
    }
    // ...
}
To promote this feature to the PREVIEW stream, we simply change our logic
accordingly:
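if (context.enables(FeatureStream.PREVIEW)) {
    // ... register the subsystem as in the EXPERIMENTAL example above
}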
Promotion to the STABLE stream can remove the condition entirely, since
context.enables(FeatureStream.STABLE) always returns true.
However, promotion to STABLE will likely involve incrementing the
management model version, so existing processes for stable features will
apply.
Alternatively, if the subsystem is self-contained within its own extension
(as opposed to an existing extension), we can simply associate the
extension with a specific feature stream.
e.g.
public class FooExtension implements Extension {
    // ...
    @Override
    public FeatureStream getFeatureStream() {
        return FeatureStream.EXPERIMENTAL;
    }
}
When the server loads extensions, it will automatically skip
initialization of any extension not enabled by the current feature stream
of the server.
2. Introducing an experimental feature enabled via a new resource of an
existing subsystem
e.g. https://issues.redhat.com/browse/WFLY-16345
Similar to the above, we need to skip registration of the experimental
resource definition if the current feature stream does not support
EXPERIMENTAL features.
If the experimental resource is never registered, it never installs the
services required to enable the experimental feature.
e.g.
@Override
public void registerChildren(ManagementResourceRegistration parent) {
    if (parent.enables(FeatureStream.EXPERIMENTAL)) {
        parent.registerSubModel(new FooResourceDefinition(...));
    }
}
Alternatively, we can simply associate the ResourceDefinition with a
specific feature stream.
e.g.
class FooResourceDefinition extends SimpleResourceDefinition {
    // ...
    @Override
    public FeatureStream getFeatureStream() {
        return FeatureStream.EXPERIMENTAL;
    }
}
When registering this resource via
ManagementResourceRegistration.registerSubModel(new
FooResourceDefinition(...)), the server will omit registration if the
feature stream associated with the ResourceDefinition is not enabled by the
server.
N.B. Care must be taken when using this approach, as the
registerSubModel(...) method will return null if registration was skipped.
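For example, a caller that needs the returned registration should guard for
this (a sketch; the child registrations are elided):

ManagementResourceRegistration child = parent.registerSubModel(new FooResourceDefinition(...));
// null means registration was skipped for the current feature stream
if (child != null) {
    // register children, attributes, operations, etc. against child
}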
3. Introducing an experimental feature enabled via a new attribute of an
existing subsystem resource
e.g. https://issues.redhat.com/browse/WFLY-18000
Similar to the above, we need to skip registration of the experimental
attribute if the current feature stream does not support EXPERIMENTAL
features.
e.g.
class FooResourceDefinition extends SimpleResourceDefinition {
    // Our new attribute that enables the new experimental feature
    static final AttributeDefinition BAR = ...;
    // ...
    @Override
    public void registerAttributes(ManagementResourceRegistration registration) {
        if (registration.enables(FeatureStream.EXPERIMENTAL)) {
            registration.registerReadWriteAttribute(BAR, null, new ReloadRequiredWriteAttributeHandler(BAR));
        }
    }
}
Unfortunately, the current registration mechanism available in
wildfly-core, which registers the parameters of the add operation
independently from resource attributes (via different
ResourceDefinition.registerXXX(...) methods), makes this awkward.
Additionally, resource add operation handlers and write-attribute operation
handlers are constructed with a separately defined set of parameters
(rather than using the parameters of the corresponding OperationDefinition).
For this reason, I submitted https://issues.redhat.com/browse/WFCORE-6407
(WIP https://github.com/wildfly/wildfly-core/pull/5563) which eliminates
the need to construct add resource operation handlers or write-attribute
operation handlers using a set of attributes.
Until that change is in place, most resource definitions for most
subsystems (i.e. those not using the registration mechanics from
wildfly-clustering-common) will require separate logic to exclude the
EXPERIMENTAL attributes from their add operation handlers independently from
the resource's attributes. Consequently, until WFCORE-6407 is complete,
add operation parameter handling will be very awkward:
e.g.
class FooResourceDefinition extends SimpleResourceDefinition {
    static final AttributeDefinition ATTRIBUTE = ...; // an existing attribute
    // Our new experimental attribute
    static final AttributeDefinition BAR = new SimpleAttributeDefinitionBuilder("bar", ModelType.STRING).build();
    // N.B. FeatureStream.complete(...) is a convenience method that returns a full map of features per stream
    // e.g. it will auto-map FeatureStream.PREVIEW to the FeatureStream.STABLE value
    // In this way, the addition of a new feature stream will not affect existing usage
    static final Map<FeatureStream, Collection<AttributeDefinition>> ATTRIBUTES =
            FeatureStream.complete(Map.of(
                    FeatureStream.STABLE, List.of(ATTRIBUTE),
                    FeatureStream.EXPERIMENTAL, List.of(ATTRIBUTE, BAR)));
    // ...
    public FooResourceDefinition(ManagementResourceRegistration parent) {
        super(new Parameters(PATH, DESCRIPTION_RESOLVER)
                .setAddHandler(new ReloadRequiredAddStepHandler(ATTRIBUTES.get(parent.getFeatureStream()))));
    }
    // ...
}
W.R.T. runtime, if the experimental attribute is never registered, it will
not be allowed within our resource's add operation, and thus will always
resolve to its default value.
Alternatively, once WFCORE-6407 is complete, we can associate an
AttributeDefinition with a FeatureStream and perform the conditional
registration automatically.
e.g.
static final AttributeDefinition BAR = new SimpleAttributeDefinitionBuilder("bar", ModelType.STRING)
        .setRequired(false)
        .setValidator(new EnumValidator<>(EnumSet.allOf(Baz.class)))
        .setFeatureStream(FeatureStream.EXPERIMENTAL)
        .build();
The attribute registration methods of ManagementResourceRegistration will
omit registration of an attribute if its associated feature stream is not
enabled by the server.
Similarly, the OperationDefinition of the add operation of the containing
ResourceDefinition will omit this attribute from its allowed parameters if
the feature stream associated with the AttributeDefinition is not enabled
by the server.
4. Introducing an experimental feature enabled via a new value of an
existing subsystem resource attribute.
e.g. https://issues.redhat.com/browse/WFLY-13904
Typically, this would involve adding a new value to an existing enum.
Here we need to conditionally register a ParameterValidator specific to the
current FeatureStream.
As with the previous example, selecting the appropriate validator for a
given "feature stream" is also awkward due to the way that resource
attributes vs resource add operation parameters are handled.
With the existing limitations, a "feature stream"-specific validator can be
registered using logic such as:
e.g.
Using our AttributeDefinition BAR from the above example, which specifies a
value enumerated by the enum Baz.
Our experimental feature involves a newly added QUX value to our Baz enum.
static final Map<FeatureStream, Set<Baz>> BAZ_VALUES =
        FeatureStream.complete(Map.of(
                FeatureStream.STABLE, EnumSet.complementOf(EnumSet.of(Baz.QUX)),
                FeatureStream.EXPERIMENTAL, EnumSet.allOf(Baz.class)));
During attribute registration, we specify the validator specific to the
current stream.
e.g.
@Override
public void registerAttributes(ManagementResourceRegistration registration) {
    ParameterValidator bazValidator = new EnumValidator<>(BAZ_VALUES.get(registration.getFeatureStream()));
    // Copy attribute and apply correct validator
    AttributeDefinition attribute = SimpleAttributeDefinitionBuilder.create(BAR)
            .setValidator(bazValidator)
            .build();
    registration.registerReadWriteAttribute(attribute, null, new ReloadRequiredWriteAttributeHandler(attribute));
}
Not so pleasant...
Due to the same limitation of the current registration mechanics as
described previously, a similar hack will be needed to ensure that the
AttributeDefinition provided to the constructor of the add
OperationStepHandler has the correct validator applied. Again, this
limitation will be addressed via WFCORE-6407.
Alternatively, with some minor changes to the ParameterValidator interface,
and once WFCORE-6407 is complete, we can associate a ParameterValidator
with an AttributeDefinition per feature stream and perform the selection
automatically wherever necessary, e.g. via the base OperationStepHandler
implementations. I have not completely thought this through, but my
current thinking is something like:
e.g.
static final AttributeDefinition BAR = new SimpleAttributeDefinitionBuilder("bar", ModelType.STRING)
        .setRequired(false)
        .setValidator(new FeatureStreamValidator(Map.of(
                FeatureStream.STABLE, new EnumValidator<>(EnumSet.complementOf(EnumSet.of(Baz.QUX))),
                FeatureStream.EXPERIMENTAL, new EnumValidator<>(EnumSet.allOf(Baz.class)))))
        .build();
... where FeatureStreamValidator is a composite ParameterValidator
implementation that delegates to a specific ParameterValidator depending on
the feature-stream of the server.
5. Subsystem XML parsing
Just as the feature stream is a new dimension of a subsystem's management
model version, so too is it an optional dimension of a
subsystem configuration XML namespace.
Say the current version of an existing subsystem uses the XML namespace
"urn:wildfly:foo:2.1"
Implementing a new experimental feature would involve a new XML namespace
"urn:wildfly:foo:experimental:2.1"
If/when this feature is promoted to STABLE, we would need to increment the
schema version itself, e.g. "urn:wildfly:foo:2.2"
If instead, a new stable feature is added, and the experimental feature
remains experimental, we would increment the version for both the stable
and experimental schemas.
e.g. "urn:wildfly:foo:2.2", "urn:wildfly:foo:experimental:2.2"
W.R.T. XML parsing, filtering attributes/resources by stream must be done
inline with existing filtering by version.
e.g.
Consider the following set of subsystem namespaces:
public enum FooSubsystemSchema implements PersistentSubsystemSchema<FooSubsystemSchema> {
    VERSION_1_0(1),
    VERSION_2_0(2),
    VERSION_2_0_EXPERIMENTAL(2, FeatureStream.EXPERIMENTAL), // We added a new experimental attribute
    ;
    private final VersionedNamespace<IntVersion, FooSubsystemSchema> namespace;

    FooSubsystemSchema(int major) {
        this(major, FeatureStream.DEFAULT);
    }

    FooSubsystemSchema(int major, FeatureStream stream) {
        this.namespace = SubsystemSchema.createSubsystemURN(FooSubsystemResourceDefinition.SUBSYSTEM_NAME, new IntVersion(major), stream);
    }

    @Override
    public VersionedNamespace<IntVersion, FooSubsystemSchema> getNamespace() {
        return this.namespace;
    }

    @Override
    public PersistentResourceXMLDescription getXMLDescription() {
        PersistentResourceXMLBuilder builder = builder(FooSubsystemResourceDefinition.PATH, this.namespace);
        if (this.namespace.since(VERSION_2_0)) {
            // BAR is new since version 2.0, but only for specific feature streams
            builder.addAttributes(FooSubsystemResourceDefinition.ATTRIBUTES.stream().filter(this::enables));
        } else {
            // BAR does not exist prior to version 2.0
            builder.addAttributes(FooSubsystemResourceDefinition.ATTRIBUTES.stream().filter(Predicate.not(FooSubsystemResourceDefinition.BAR::equals)));
        }
        return builder.build();
    }
}
Registering subsystem parsers should generally look the same as it does
now, since the server can skip registration of schemas associated with a
feature stream not supported by the server.
e.g.
@Override
public void initializeParsers(ExtensionParsingContext context) {
    // This will skip registration of FooSubsystemSchema.VERSION_2_0_EXPERIMENTAL if the server does not support it
    context.setSubsystemXmlMappings(FooSubsystemResourceDefinition.SUBSYSTEM_NAME, EnumSet.allOf(FooSubsystemSchema.class));
}
Subsystem extensions will also need to register the appropriate writer
based on the feature stream of the server.
// The "current" schema will depend on the feature stream of the server
static final Map<FeatureStream, FooSubsystemSchema> CURRENT_SCHEMAS =
FeatureStream.complete(Map.of(FeatureStream.STABLE, VERSION_2_0,
FeatureStream.EXPERIMENTAL, VERSION_2_0_EXPERIMENTAL));
@Override
public void initialize(ExtensionContext context) {
SubsystemRegistration subsystem =
context.registerSubsystem(FooSubsystemResourceDefinition.SUBSYSTEM_NAME,
FooSubsystemModel.VERSION_2_0.getVersion());
// ...
subsystem.registerXMLElementWriter(new
PersistentResourceXMLDescriptionWriter(CURRENT_SCHEMAS.get(context.getFeatureStream())));
}
6. Misc concerns
- Subsystem model transformers for mixed-domains
  - I anticipate that we would restrict the use of mixed-domains to the
STABLE feature stream. That means that only STABLE features need to be
concerned with subsystem model transformations.
- Experimental/preview WildFly kernel features
  - The above mechanisms should work for any features configured by a
ResourceDefinition/AttributeDefinition, even if they have no corresponding
subsystem
  - Anything else would need to be enabled conditionally based on the
feature stream of the controller
That's about all I have for now.
Again, I think this approach should cover the bulk of feature development
use cases in WildFly.
Let me know if anything was particularly unclear, confusing, or requires
elaboration; or if there are any major use cases that I have missed.
STATUS:
I have a pull request open for WFCORE-6221 [1] that implements most of the
above. It is still a work in progress and needs to be rebased on my
WFCORE-6407 branch (once that is complete).
Please browse my topic branch [2], and leave any comments on the PR [3]. A
good place to start is the integration tests [4], which validate this
against a sample subsystem demonstrating several of the above use cases.
For any design-related discussion, either reply to this thread or to the
WFCORE-6221 jira itself.
Paul Ferraro
[1] https://issues.redhat.com/browse/WFCORE-6221
[2] https://github.com/pferraro/wildfly-core/tree/
[3] https://github.com/wildfly/wildfly-core/pull/5413
[4]
https://github.com/pferraro/wildfly-core/tree/WFCORE-6221/subsystem-test/...
1 year, 3 months
JDK 21 is in Rampdown Phase 2 | Annotation Processing Change Heads-up
by David Delabassee
Welcome to the OpenJDK Quality Outreach summer update.
JDK 21 is now in Rampdown Phase Two [1]; its overall feature set was frozen a few weeks ago. Per the JDK Release Process [2] we have now turned our focus to P1 and P2 bugs, which can be fixed with approval [3]. Late enhancements are still possible, with approval, but the bar is now extraordinarily high [4]. That also means that the JDK 21 Initial Release Candidates are fast approaching, i.e., August 10 [5]. So, in addition to testing your projects with the latest JDK 21 early-access builds, it is now also a good time to start testing with the JDK 22 early-access builds.
[1] https://mail.openjdk.org/pipermail/jdk-dev/2023-July/008034.html
[2] https://openjdk.org/jeps/3
[3] https://openjdk.org/jeps/3#Fix-Request-Process
[4] https://openjdk.org/jeps/3#Late-Enhancement-Request-Process
[5] https://openjdk.org/projects/jdk/21/
## Heads-up - JDK 21 & JDK 22: Note if implicit annotation processing is being used
Annotation processing by javac is enabled by default, including when no annotation processing configuration options are present. We are considering disabling implicit annotation processing by default in a future release, possibly as early as JDK 22 [6]. To alert javac users of this possibility, as of JDK 21 b29 and JDK 22 b04, javac prints a note if implicit annotation processing is being used [7]. The reported note is:
Annotation processing is enabled because one or more processors were
found on the class path. A future release of javac may disable
annotation processing unless at least one processor is specified by
name (-processor), or a search path is specified (--processor-path,
--processor-module-path), or annotation processing is enabled
explicitly (-proc:only, -proc:full).
Use -Xlint:-options to suppress this message.
Use -proc:none to disable annotation processing.
Good build hygiene includes explicitly configuring annotation processing. To ease the transition to a different default policy in the future, the new-in-JDK-21 `-proc:full` javac option requests the current default behavior of looking for annotation processors on the class path.
[6] https://bugs.openjdk.org/browse/JDK-8306819
[7] https://bugs.openjdk.org/browse/JDK-8310061
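For example, a build that wants to keep today's behavior can opt in explicitly, or opt out entirely (illustrative invocations; the jar and source file names are made up):

javac -proc:full -cp my-processor.jar MyService.java
javac -proc:none MyService.java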
## Heads-up - JDK 22: JLine is now the Default Console Provider
In JDK 22, `System.console()` has been changed [8] to return a `Console` with enhanced editing features that improve the experience of programs that use the `Console` API. In addition, `System.console()` now returns a `Console` object when the standard streams are redirected or connected to a virtual terminal. Prior to JDK 22, `System.console()` instead returned `null` in these cases. This change may impact code that checks the return value of `System.console()` to test whether the JVM is connected to a terminal. If required, the `-Djdk.console=java.base` flag will restore the old behavior, where a console is only returned when it is connected to a terminal. Starting with JDK 22, one can also use the new `Console.isTerminal()` method to test if the console is connected to a terminal.
[8] https://bugs.openjdk.org/browse/JDK-8308591
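A minimal sketch of the updated check (assumes JDK 22 for `Console.isTerminal()`):

import java.io.Console;

public class ConsoleProbe {
    public static void main(String[] args) {
        Console console = System.console();
        // Prior to JDK 22, a null console implied "no terminal".
        // From JDK 22 on, the console may be non-null even when the
        // standard streams are redirected, so test isTerminal() explicitly.
        if (console != null && console.isTerminal()) {
            System.out.println("connected to an interactive terminal");
        } else {
            System.out.println("redirected, or no terminal attached");
        }
    }
}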
## JDK 21 Early-Access Builds
The JDK 21 early-access builds 33 are available [9], and are provided under the GNU General Public License v2, with the Classpath Exception. The Release Notes are available here [10] and the Javadoc here [11].
[9] https://jdk.java.net/21/
[10] https://jdk.java.net/21/release-notes
[11] https://download.java.net/java/early_access/jdk21/docs/api/
## JDK 22 Early-Access Builds
The JDK 22 early-access builds 8 are available [12], and are provided under the GNU General Public License v2, with the Classpath Exception. The Release Notes are available here [13].
[12] https://openjdk.org/projects/jdk/22
[13] https://jdk.java.net/22/release-notes
### Changes in recent JDK 22 builds (b2-b8) that may be of interest:
Note that this is only a curated list of changes; make sure to check [14] for additional changes.
- JDK-8309882: LinkedHashMap adds an errant serializable field [Reported by Eclipse Collections]
- JDK-8312366: [arm32] Build crashes after JDK-8310233 [Reported by JaCoCo]
- JDK-8167252: Some of Charset.availableCharsets() does not contain itself [Reported by IntelliJ]
- JDK-8310061: Note if implicit annotation processing is being used
- JDK-8308591: JLine as the default Console provider
- JDK-8312019: Simplify and modernize java.util.BitSet.equals
- JDK-8308593: Add KEEPALIVE Extended Socket Options Support for Windows
- JDK-8227229: Deprecate the launcher -Xdebug/-debug flags that have not done anything since Java 6
- JDK-6983726: Reimplement MethodHandleProxies.asInterfaceInstance
- JDK-8281658: Add a security category to the java -XshowSettings option
- JDK-8310201: Reduce verbose locale output in -XshowSettings launcher option
- JDK-8295894: Remove SECOM certificate that is expiring in September 2023
- JDK-8027711: Unify wildcarding syntax for CompileCommand and CompileOnly
- JDK-8282797: CompileCommand parsing errors should exit VM
- JDK-8305104: Remove the old core reflection implementation
- JDK-8310460: Remove jdeps -profile option
- JDK-8309032: jpackage does not work for module projects unless --module-path is specified
- JDK-8291065: Creating a VarHandle for a static field triggers class initialization
- JDK-8312072: Deprecate for removal the -Xnoagent option
- JDK-8304885: Reuse stale data to improve DNS resolver resiliency
- JDK-8310047: Add UTF-32 based Charsets into StandardCharsets
- JDK-8302483: Enhance ZIP performance
- JDK-8300596: New System Property to Control the Maximum Size of Signature Files
- JDK-8294323: ASLR Support for CDS Archive
- JDK-8311038: Incorrect exhaustivity computation
- JDK-8312089: Simplify and modernize equals, hashCode, and compareTo in java.nio…
- JDK-8311188: Simplify and modernize equals and hashCode in java.text
- JDK-8300285: Enhance TLS data handling
- JDK-8302475: Enhance HTTP client file downloading
[14] https://github.com/openjdk/jdk/compare/jdk-22%2B1...jdk-22%2B8
## JavaFX Early-Access Builds
These are early access builds of the JavaFX 21 Runtime, built from openjdk/jfx [15]. They enable JavaFX application developers to build and test their applications with JavaFX 21 on JDK 21.
The latest JavaFX 21 early-access builds (build 27 - 2023/7/21) are now available [16] with their related Javadoc [17]. Moreover, the initial JavaFX 22 early-access builds [18] are now also available. These early-access builds are provided under the GNU General Public License, version 2, with the Classpath Exception. Please send feedback to the openjfx-dev mailing list [19].
[15] https://github.com/openjdk/jfx
[16] https://jdk.java.net/javafx21/
[17] https://download.java.net/java/early_access/javafx21/docs/api/overview-su...
[18] https://jdk.java.net/javafx22/
[19] http://mail.openjdk.org/mailman/listinfo/openjfx-dev
## Topics of Interest
Foreign Function & Memory API Summer Update
https://mail.openjdk.org/pipermail/panama-dev/2023-July/019510.html
What's Arriving for JFR in JDK 21 - Inside Java Newscast #53
https://inside.java/2023/07/20/java-21-jfr/
Java's Startup Booster: CDS - Stack Walker
https://inside.java/2023/07/11/javas-startup-booster-cds/
## July 2023 Critical Patch Update Released
As part of the July 2023 CPU, Oracle released OpenJDK 20.0.2, JavaFX 20.0.2, JDK 20.0.2, JDK 17.0.8 LTS, JDK 11.0.20 LTS, JDK 8u381, as well as JDK 8u381-perf.
~
We still have a few days before JDK 21 enters the Release Candidate phase, so please make sure to test your projects on the latest early-access builds and report any issues.
PS: Make sure to enjoy the summer and recharge your batteries! 😎
--David
1 year, 3 months