Configure vert.x in WildFly
by Kabir Khan
Hi,
At the moment the SmallRye/MP Reactive Messaging integration just uses a
Vert.x instance created with default settings.
I think otel uses Vert.x too, probably in much the same way? Then there
might be other subsystems in various feature packs.
It seems there is some desire to be able to configure the Vert.x instance:
https://github.com/smallrye/smallrye-reactive-messaging/discussions/2725.
Being able to configure Vert.x is TBH something I've totally ignored.
I guess this would mean a Vert.x subsystem, which can optionally create a
Vert.x instance with the configured parameters. Does it make sense to
create more than one instance, in case subsystems have slightly different
needs?
Then the reactive messaging subsystem can use that, if present (or perhaps
this should be a configuration parameter in the reactive messaging
subsystem). If not (or not configured to do so), it will use the default
like it does today.
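As a purely illustrative sketch of what such a subsystem's configuration could look like (no such subsystem exists today; the namespace, element, and attribute names below are all invented, and the parameters are borrowed from Vert.x's own options), with other subsystems such as reactive messaging then referring to the named instance:

```xml
<!-- Hypothetical sketch only: subsystem namespace, elements and
     attributes are invented for illustration -->
<subsystem xmlns="urn:wildfly:vertx:1.0">
    <!-- A named instance that other subsystems could opt in to using;
         absent any reference, they keep today's default behaviour -->
    <vertx name="default"
           worker-pool-size="20"
           event-loop-pool-size="4"/>
</subsystem>
```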
Thanks,
Kabir
[WFLY-19855] Opening WildFly Glow discovery to additional spaces
by Jean Francois Denise
Hi,
in the context of this Issue:
https://github.com/wildfly/wildfly-proposals/pull/627
we are defining a solution to allow for WildFly Glow to discover and
suggest Galleon feature-packs and layers defined in other spaces than the
default space (the current location in which feature-packs are registered).
The introduction of these spaces should help feature-pack developers
follow the WildFly Feature Development Process. A feature-pack would be
able to migrate from space to space until it finally reaches the community
stability level and is discovered by WildFly Glow by default.
The first space we plan to define is the 'experimental' space, which would
contain feature-packs in active development.
A first candidate to be registered in this space could be the WildFly AI
feature-pack (https://github.com/wildfly-extras/wildfly-ai-feature-pack)
that is currently actively developed.
Thank you.
JF
WildFly Cloud Tests
by Kabir Khan
Hi,
These tests need some modernisation, and in my opinion there are two
things that need addressing.
*1 Space issues*
Recently we were running out of space when running these tests. James fixed
this by deleting the built WildFly, but when I tried to resurrect an old PR
I had forgotten all about, we ran out of space again.
I believe the issue is the way the tests work at the moment, which is
to:
* Start minikube with the registry
* Build all the test images
* Run all the tests
Essentially we end up building all the server images (different layers)
before running the tests, which takes space, and then each test installs
its image into minikube's registry. Also, some tests install other
images (e.g. postgres, strimzi) into the minikube instance.
My initial thought was that it would be good to build the server images
more on demand, rather than before the tests, and to be able to call
'docker system prune' now and again.
However, this does not take into account the minikube registry, which will
also accumulate a lot of images. It will at least become populated with the
test images; I am unsure whether it also becomes populated with the images
pulled from elsewhere (e.g. postgres, strimzi, etc.).
If `minikube addons disable registry` followed by `minikube addons enable
registry` deletes the registry contents from the disk, having a hook to do
that between each test could be something easy to look into. Does anyone
know if this is the case?
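If resetting the addon does clear its storage (which, as above, is an unverified assumption), such a hook might look something like the following workflow step; the step name and the addition of `docker system prune` are illustrative:

```yaml
# Hypothetical cleanup hook between tests. Assumes (unverified) that
# disabling/re-enabling the registry addon frees its on-disk storage.
- name: Reset minikube registry and prune local images
  run: |
    minikube addons disable registry
    minikube addons enable registry
    docker system prune --force
```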
An alternative could be to have one job building WildFly and uploading the
maven repository as an artifact, and then have separate jobs to run each
test (or perhaps each set of tests requiring the same WildFly server
image). However, as this setup is quite fiddly since it runs remotely, I'm
not sure how the reporting would look.
*2 Pull request trigger*
PRs in wildfly/wildfly execute a remote dispatch which results in the job
getting run in the wildfly-extras/wildfly-cloud-tests repository.
There is no reporting back from the wildfly-extras/wildfly-cloud-tests
repository about the run id of the resulting run.
What I did when I implemented this was to have the calling wildfly/wildfly
job wait and poll a branch in wildfly-extras/wildfly-cloud-tests for the
results of the job (IIRC I have a file with the triggering PR number). The
job on the other side would then write to this branch once the job is done.
Which is all quite ugly!
However, playing in other repositories, I found
https://www.kenmuse.com/blog/creating-github-checks/. Basically this would
result in
* the WildFly pull request trigger completing immediately once it has done
the remote dispatch
* when the wildfly-cloud-tests job starts, it will do a remote dispatch to
wildfly, which will get picked up by a workflow that can add a status
check on the PR conversation page saying remote testing in
wildfly-cloud-tests is in progress
* Once the wildfly-cloud-tests job is done, it will do another remote
dispatch to wildfly, which will update the status check with success/failure
So we'd have two checks in the PR's checks section rather than the current
one.
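The receiving side of the first dispatch could be sketched roughly as below; the `cloud-tests-started` event type is invented, and it assumes the dispatch payload carries the PR's head SHA:

```yaml
# Hypothetical receiver workflow in wildfly/wildfly: turns a
# repository_dispatch from wildfly-cloud-tests into a status check
on:
  repository_dispatch:
    types: [cloud-tests-started]   # invented event type

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // head_sha is assumed to be sent in the dispatch payload
            await github.rest.checks.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              name: 'wildfly-cloud-tests',
              head_sha: context.payload.client_payload.head_sha,
              status: 'in_progress'
            });
```

A second, similar workflow (or event type) would then update the check run with the final conclusion once the tests finish.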
*Other ideas*
While writing the above, the following occurred to me.
The reason for the split is that the cloud test framework is quite
involved, and IMO does not belong in WildFly. So the remote dispatch
approach was used.
However, I wonder now if a saner approach would be to make the
wildfly-cloud-tests workflows reusable so they can be called from
WildFly?
That would allow the tests, test framework etc., and the workflow to
continue to live in wildfly-cloud-tests, while running in wildfly itself.
That should get rid of the remote dispatch issues, and make that side of
things simpler.
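A minimal sketch of that idea, assuming the cloud-tests workflow is converted to a `workflow_call` trigger (the file name, input, and `@main` ref are all placeholders):

```yaml
# In wildfly-extras/wildfly-cloud-tests
# (.github/workflows/cloud-tests.yml, name invented):
on:
  workflow_call:
    inputs:
      wildfly-ref:
        type: string
        required: false

# In wildfly/wildfly, the caller job just references it:
jobs:
  cloud-tests:
    uses: wildfly-extras/wildfly-cloud-tests/.github/workflows/cloud-tests.yml@main
```

The checks for the called workflow's jobs would then show up directly on the wildfly PR, which is what would remove the need for the remote dispatch and polling machinery.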
It does not address the space issue, but I think if this approach works, it
will be easier to deal with the space issue.
Any thoughts/insights are welcome.
Thanks,
Kabir