Testing with Testcontainers, Arquillian and JUnit 5
by Harald Pehl
TL;DR:
I've set up a PoC [1] for an alternative way to test the HAL management console.
The PoC is based on Testcontainers [2], Arquillian [3] and JUnit 5. Although
this is primarily intended for UI testing, the usage of Testcontainers could
also be interesting for the WildFly test suite.
---
The existing HAL test suite [4] is a rich test suite for the HAL management
console. It contains 300+ UI tests based on Arquillian [3].
The UI tests require a browser and a running WildFly instance as an execution
environment. This makes it hard to run the tests on a CI server. Another issue
is that the tests are not very stable and often run into timeouts. To execute
the complete test suite you need a stable environment.
Recently I came across Testcontainers [2], which provides a nice API to start
arbitrary containers before / after (all) unit tests. The library also provides
an elegant way to start and use browsers running in containers, driven via a
remote WebDriver.
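Just to illustrate the idea (this is not code from the PoC; the image name, ports
and URL below are placeholder assumptions), a JUnit 5 test that starts WildFly and
a containerized browser looks roughly like this:

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.testcontainers.containers.BrowserWebDriverContainer;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.Network;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    class ConsoleSmokeIT {

        static final Network NETWORK = Network.newNetwork();

        // WildFly with the management port exposed (image name and port are placeholders)
        @Container
        static final GenericContainer<?> wildFly =
                new GenericContainer<>("quay.io/wildfly/wildfly:latest")
                        .withNetwork(NETWORK)
                        .withNetworkAliases("wildfly")
                        .withExposedPorts(9990);

        // Chrome running in its own container on the same network, driven via remote WebDriver
        @Container
        static final BrowserWebDriverContainer<?> browser =
                new BrowserWebDriverContainer<>()
                        .withNetwork(NETWORK)
                        .withCapabilities(new ChromeOptions());

        @Test
        void consoleIsReachable() {
            WebDriver driver = browser.getWebDriver();
            // the browser container reaches WildFly through its network alias
            driver.get("http://wildfly:9990/console/index.html");
            // real tests would assert against the console UI here
        }
    }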
Therefore, I decided to implement a PoC which provides an alternative and
more stable way to test the HAL management console. Tests are self-contained
and can run in CI environments such as GitHub Actions or TeamCity.
At the same time the new approach has to be compatible with the existing test
suite: tests should be runnable with the new approach without major code changes.
The result is [1]. If you're interested, feel free to take a look and let me
know what you think!
---
[1] https://github.com/hpehl/manatoko
[2] https://www.testcontainers.org/
[3] http://arquillian.org/arquillian-graphene/
[4] https://github.com/hal/testsuite.next
Embedded broker layer
by Emmanuel Hugonnet
Hello,
Currently there is no way to use layers to provision an embedded broker, and I've been thinking about providing such a layer.
The embedded broker configuration is done using the "messaging-activemq" feature group [1]. This feature group configures the embedded broker
itself, the EE default connection factory, and the MDB part of the ejb3 subsystem.
Introducing an embedded broker layer requires several changes in the current layers:
- it should not be the responsibility of the embedded broker layer to configure MDBs in the ejb3 subsystem, thus I propose to introduce a
new ejb-mdb layer that would depend on the ejb layer [2] and have an optional dependency on the embedded-broker layer.
- the embedded-broker layer would depend on the current messaging-activemq layer [3] and configure the EE default connection factory and
broker according to what is already defined in the "messaging-activemq" feature group [1].
What do you think about this plan?
Cheers,
Emmanuel
[1]: https://github.com/wildfly/wildfly/blob/main/ee-feature-pack/galleon-comm...
[2]: https://github.com/wildfly/wildfly/tree/main/ee-feature-pack/galleon-comm...
[3]: https://github.com/wildfly/wildfly/tree/main/ee-feature-pack/galleon-comm...
Finer-grained tracking of progress on EE 10
by Brian Stansberry
I want to get things started on creating more fine-grained issues
for the various EE 10 spec migration issues linked to
https://issues.redhat.com/browse/WFLY-15679. In many cases the issues
directly linked to WFLY-15679 are too coarse-grained, because there are a
number of distinct tasks that can be done separately for each, and to have
a realistic sense of where we stand we need visibility into how
things stand with those finer-grained pieces.
A good recommendation I got from Tom Jenkinson is to add a comment on the
various per-spec issues outlining the various types of detailed issues the
assignee should consider filing, along with instructions as to how to do
so. The following is what I propose adding:
"Please consider creating subtasks of this issue for any of the following
or other similar activities that can be accomplished as discrete pieces of
work toward this JIRA's overall goal. Please add the 'EE10' label to any
such issue.
* Ensuring that the standalone TCK for this spec can be run against a
current build of WildFly Preview, and if possible reporting any issues to the
spec's working group before it is finalized.[1]
* Creating a branch for the EE 10 variant in the github repo for the JBoss
fork of the spec, based on the appropriate code at Jakarta.[2]
* Integrating a milestone release of the spec API into WildFly's main
branch.
* Integrating a milestone release of the implementation artifacts into
WildFly's main branch.
* Integrating the final release of the spec API into WildFly's main branch.
* Integrating final releases of the implementation artifacts into WildFly's main branch.
Reminder: WildFly 27 is feature-boxed, so integrating non-final
dependencies into main is not just ok, it is encouraged. Doing so lets us
identify issues early.
We'll be using these finer-grained issues to better track progress on
https://issues.redhat.com/secure/RapidBoard.jspa?rapidView=14333&view=pla...
and elsewhere."
Note that I don't include passing the standalone TCK or parts of the platform TCK
here, because it's not clear to me that that kind of thing can be cleanly
decomposed into discrete pieces of work. We know we need to pass the TCKs.
Note also that this message and the related issues are focused on WildFly
Preview. Our ultimate goal is to convert the standard WildFly code to EE
10, ideally as soon as possible. But to do that we'll need to complete the
work of getting native jakarta.* namespace variants of all our components.
That work can proceed in parallel with our work on EE 10 if we continue to
do it in WFP for now.
[1] I'd only add this to issues where a standalone TCK exists.
[2] I'd only add this to issues where we are using a fork of the EE 9.1
spec API code in WildFly Preview.
Best regards,
Brian
standalone/ directory shared by multiple WildFly instances?
by Tomas Hofman
Hello,
could anybody confirm if it's OK (or not OK) to run multiple clustered
WildFly instances that share a common standalone/ directory? Even if
the instances are supposed to be running the same deployments?
I always thought that the standalone/ directory should be separate for
each WildFly/EAP instance, but I can't find any resources that would
clearly state that a shared standalone/ directory is a problem.
The problems I can think of are:
* instances could overwrite each other's standalone.xml config,
* instances could overwrite each other's transactional or cache data?
Thanks :),
Tomas
FilterRef aware Filter Operations?
by Jason Lee
This is the email I mentioned on Zulip:
https://wildfly.zulipchat.com/#narrow/stream/174184-wildfly-developers/to...
I'm working on UNDERTOW-1593/WFLY-12459, which is an RFE to add support for
tracking long-running requests (similar to what was available in EAP6. See
EAP7-1188 for more details). I have some questions that are too long, I
think, to handle in Zulip very well, so I'm going to write what will likely
be a very long email instead. :)
Currently, my impl in Undertow adds a new HttpHandler to handle the...
business logic, as well as a new POJO to hold the data. That side of the
change seems very simple, but things get more complicated on the WildFly
side. Currently, I have defined a new Filter, which handles creating this
new Undertow HttpHandler. Using it looks something like this:
jboss-cli.sh -c "/subsystem=undertow/configuration=filter/active-request-tracking=requestTracker/:add"
jboss-cli.sh -c "/subsystem=undertow/server=default-server/host=default-host/filter-ref=requestTracker/:add()"
jboss-cli.sh -c "/subsystem=undertow/configuration=filter/active-request-tracking=requestTracker:list-active-requests()"
Excuse me (and correct me ;) if I get some of the terminology wrong, but,
as I understand it, this is what's happening:
- When the server starts, thanks to FilterDefinitions.FILTERS, an
instance of my new Filter, ActiveRequestTrackingFilter, is created.
- With the first CLI command, using that definition, a Filter named
'requestTracker' is created and added to the system.
- Next, a reference to that Filter is added to default-host, which will
apply the Filter to any requests on the host.
- Finally, using the operation added to the Filter (but scoped, in
theory, to the named Filter), we query Undertow, using the
HttpHandler's public API, to get the list of active requests and print that
information.
While I may not have all the terms exactly right, I think I understand the process well
enough, and it works. Mostly. Here's where my real question(s) come in.
When the user executes the list-active-requests operation, I have been
unable to find a way to restrict the output (or the data gathered) to only
the hosts to which the filter has been applied. For example, let's say I
have two hosts, H1 and H2, and that I have created two filters, F1 and F2,
one for each host. When the HttpHandler for each filter reference is
created, there is insufficient information provided to createHttpHandler()
to determine, in that method, the host to which the FilterRef is attached.
Since the OperationStepHandler is attached to the Filter (and it seems that
there is only one instance of ActiveRequestTrackingFilter in the system),
I (currently) naively add the HttpHandler to the operation instance held by
the Filter (ActiveRequestTrackingFilter.operation), which means that
jboss-cli.sh -c "/subsystem=undertow/configuration=filter/active-request-tracking=F1:list-active-requests()"
will return data captured by F2 as well.
That's a really long way of getting to the question(s), which are these:
1. Are we OK with this operation returning ALL of the active requests on
the server despite the possibility of having multiple filter-refs defined?
This is a debug operation, so maybe it's not a big deal, but it seems odd
(if not necessarily wrong) to be able to define multiple
filter/filter-refs but have them all return the same data.
2. If that's not what we want, how do we fix it so that the operation gets
enough information to differentiate?
1. For example, can Handler.createHttpHandler() be modified to accept
the FilterLocation/Host?
2. Is there a better way?
As it stands now, I think my implementation meets the requirements laid out
in the Analysis Doc ("The new feature consists of adding a way for users to
retrieve active requests, ie. requests that haven’t finished and are active
on the server."). There are some oddities in the behavior (as I've tried to
describe above), so I'd love some input on either how to fix them or if we
should just accept them (for now?)
Thanks!
Jason Lee
Principal Software Engineer
Red Hat JBoss EAP