[WFLY-19855] Opening WildFly Glow discovery to additional spaces
by Jean Francois Denise
Hi,
in the context of this Issue:
https://github.com/wildfly/wildfly-proposals/pull/627
we are defining a solution to allow for WildFly Glow to discover and
suggest Galleon feature-packs and layers defined in other spaces than the
default space (the current location in which feature-packs are registered).
The introduction of these spaces should help feature-pack developers
follow the WildFly Feature Development Process. A feature-pack would be
able to migrate from space to space until it finally reaches the community
stability level and is discovered by WildFly Glow by default.
The first space we plan to define is the 'experimental' space, a space that
would contain feature-packs under active development.
A first candidate to be registered in this space could be the WildFly AI
feature-pack (https://github.com/wildfly-extras/wildfly-ai-feature-pack),
which is currently under active development.
Thank you.
JF
WildFly Cloud Tests
by Kabir Khan
Hi,
These tests need some modernisation, and there are, in my opinion, two
things that need addressing.
*1 Space issues*
Recently we were running out of space when running these tests. James fixed
this by deleting the built WildFly, but when trying to resurrect an old PR
I had forgotten all about, we ran out of space again.
I believe the issue is the way the tests work at the moment, which is
to:
* Start minikube with the registry
* Build all the test images
* Run all the tests
Essentially we end up building all the server images (different layers)
before running the tests, which takes space, and then each test installs
its image into minikube's registry. Some tests also install other
images (e.g. postgres, strimzi) into the minikube instance.
My initial thought was that it would be good to build the server images
more on demand, rather than before the tests, and to be able to call
'docker system prune' now and again.
However, this does not take into account the minikube registry, which will
also accumulate a lot of images. It will at least become populated with the
test images; I am unsure whether it also becomes populated with the images
pulled from elsewhere (e.g. postgres, strimzi).
If `minikube addons disable registry` followed by `minikube addons enable
registry` deletes the registry contents from disk, then a hook doing
that between each test could be something easy to look into. Does anyone
know if this is the case?
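If that disable/enable cycle does clear the disk, the hook might be a step along these lines (a sketch only; the step name and its placement between test groups are assumptions, and whether the addon actually frees the space is exactly the open question above):

```yaml
# Hypothetical cleanup step run between test groups. Assumes the
# disable/enable cycle really deletes the registry contents from
# disk -- this is unverified.
- name: Reset minikube registry between test groups
  run: |
    minikube addons disable registry
    minikube addons enable registry
```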
An alternative could be to have one job build WildFly and upload the
maven repository as an artifact, and then have separate jobs run each
test (or perhaps each set of tests requiring the same WildFly server image).
However, as this testing is quite fiddly, since it runs remotely, I'm not
sure how the reporting would look.
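As a rough sketch of what that split might look like (job names, the test-group names, the run script, and the artifact path are all made up for illustration; this assumes the standard upload/download artifact actions):

```yaml
# Hypothetical two-stage workflow: build once, fan out per test group.
jobs:
  build-wildfly:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build WildFly
        run: mvn -B install -DskipTests
      - name: Upload the built artifacts from the local maven repository
        uses: actions/upload-artifact@v4
        with:
          name: maven-repo
          path: ~/.m2/repository/org/wildfly

  cloud-tests:
    needs: build-wildfly
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # One entry per test, or per set of tests sharing a server image
        test-group: [group-a, group-b, group-c]
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: maven-repo
          path: ~/.m2/repository/org/wildfly
      - name: Run one test group
        run: ./run-tests.sh ${{ matrix.test-group }}
```

Each matrix job would get a fresh runner (and a fresh minikube), which would also sidestep much of the space accumulation.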
*2 Pull request trigger*
PRs in wildfly/wildfly execute a remote dispatch which results in the job
getting run in the wildfly-extras/wildfly-cloud-tests repository.
There is no reporting back from the wildfly-extras/wildfly-cloud-tests
repository about the run id of the resulting run.
What I did when I implemented this was to have the calling wildfly/wildfly
job wait and poll a branch in wildfly-extras/wildfly-cloud-tests for the
results of the job (IIRC I have a file with the triggering PR number). The
job on the other side would then write to this branch once the job is done.
Which is all quite ugly!
However, playing in other repositories, I found
https://www.kenmuse.com/blog/creating-github-checks/. Basically this would
result in
* the WildFly pull request trigger completing immediately once it has done
the remote dispatch
* when the wildfly-cloud-tests job starts, it will do a remote dispatch to
wildfly, which will get picked up by a workflow that can add a status
check on the PR conversation page saying remote testing in
wildfly-cloud-tests is in progress
* once the wildfly-cloud-tests job is done, it will do another remote
dispatch to wildfly, which will update the status check with success/failure
So we'd have two checks in the PR's checks section rather than the current
one.
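The receiving side in wildfly could look something like the sketch below (the event type names, payload fields, and check name are all invented for illustration; it assumes the dispatching job sends the PR's head SHA and result in the client payload):

```yaml
# Hypothetical workflow in wildfly/wildfly reacting to dispatches
# from wildfly-cloud-tests.
on:
  repository_dispatch:
    types: [cloud-tests-started, cloud-tests-finished]

jobs:
  report-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // For repository_dispatch, payload.action is the event type
            const payload = context.payload.client_payload;
            if (context.payload.action === 'cloud-tests-started') {
              // Attach an in-progress check to the PR's head commit
              await github.rest.checks.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                name: 'wildfly-cloud-tests',
                head_sha: payload.head_sha,
                status: 'in_progress'
              });
            } else {
              await github.rest.checks.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                name: 'wildfly-cloud-tests',
                head_sha: payload.head_sha,
                status: 'completed',
                conclusion: payload.result  // 'success' or 'failure'
              });
            }
```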
*Other ideas*
While writing the above, the following occurred to me.
The reason for the split is that the cloud test framework is quite
involved, and IMO does not belong in WildFly. So the remote dispatch
approach was used.
However, I wonder now if a saner approach would be to make the
wildfly-cloud-tests workflows reusable, so they can be called from
WildFly.
That would allow the tests, test framework etc., and the workflow to
continue to live in wildfly-cloud-tests, while running in wildfly itself.
That should get rid of the remote dispatch issues, and make that side of
things simpler.
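For illustration, the caller side in wildfly might look like this (the workflow file name and the input are assumptions; the called workflow in wildfly-cloud-tests would declare `on: workflow_call` with a matching input):

```yaml
# Hypothetical caller job in wildfly/wildfly invoking a reusable
# workflow that lives in wildfly-extras/wildfly-cloud-tests.
jobs:
  cloud-tests:
    uses: wildfly-extras/wildfly-cloud-tests/.github/workflows/ci.yml@main
    with:
      wildfly-pr: ${{ github.event.pull_request.number }}
```

Because the job runs in wildfly's own workflow run, its result would appear directly in the PR's checks, with no dispatch-and-poll round trip.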
It does not address the space issue, but I think if this approach works, it
will be easier to deal with the space issue.
Any thoughts/insights are welcome.
Thanks,
Kabir
WildFly 35 Schedule
by Brian Stansberry
We're aiming to have WildFly 35.0.0.Final available on wildfly.org on
Thursday, January 9. The beta should be available on December 12.
Following are the key dates:
Wed Dec 4 -- Feature Freeze. Mergeable feature PRs due
Mon Dec 9 -- WF Core Beta release
Wed Dec 11 -- Tag/deploy 35 Beta
Thu Dec 12 -- Announce release
Wed Dec 18 -- Mergeable Final PRs due
Thu Jan 2 -- Last minute PRs for Final due
Mon Jan 6 -- WF Core Final release
Wed Jan 8 -- Tag/deploy 35 Final
Thu Jan 9 -- Release available on wildfly.org
Fri Jan 10 -- Post release deliverables
As usual, there is a longer period between the beta and the final for the
January release. This results in two deadlines for PRs for the final. The
Dec 18 date is the normal date, two weeks after the feature freeze. That's
the deadline everyone should work toward.
Then there's a Jan 2 date to pick up fixes for any critical issues that
popped up after Dec 18.
Best regards,
--
Brian Stansberry
Principal Architect, Red Hat JBoss EAP
WildFly Project Lead
He/Him/His