Ok, I vaguely thought about that too...

I can keep them in wildfly-extras for now, and improve the reporting as mentioned, and then look into how to deal with the space issue. I guess on the wildfly-extras side it will be a trigger job calling out to the other ones, so the overall status report probably will not be as tricky as I imagined.

On Wed, 28 Aug 2024 at 16:53, Brian Stansberry <brian.stansberry@redhat.com> wrote:


On Wed, Aug 28, 2024 at 5:50 AM Kabir Khan <kkhan@redhat.com> wrote:
Hi,

These tests need some modernisation, and there are two things in my opinion that need addressing.

1 Space issues
Recently we were running out of space when running these tests. James fixed this by deleting the built WildFly, but when I tried to resurrect an old PR I had forgotten all about, we ran out of space again.

I believe the issue is the way the tests work at the moment, which is to:
* Start minikube with the registry
* Build all the test images
* Run all the tests

Essentially we end up building all the server images (with different layers) before running the tests, which takes up space, and then each test installs its image into minikube's registry. In addition, some tests install other images (e.g. postgres, strimzi) into the minikube instance.

My initial thought was that it would be good to build the server images more on demand, rather than before the tests, and to be able to call 'docker system prune' now and again.
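
For example, as a workflow step (just a sketch; where exactly such a hook would plug into our test runner is an open question):

    # hypothetical cleanup step, run between tests or test groups;
    # --all removes all unused images, not just dangling ones
    - name: Reclaim disk space
      run: docker system prune --all --force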

However, this does not take into account the minikube registry, which will also accumulate a lot of images. It will at least become populated with the test images; I am unsure whether it also becomes populated with the images pulled from elsewhere (e.g. postgres, strimzi).

If 'minikube addons disable registry' followed by 'minikube addons enable registry' deletes the registry contents from disk, a hook doing that between each test could be something easy to look into. Does anyone know if this is the case?
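
If cycling the addon does clear the storage (an untested assumption on my part), the hook could be as simple as:

    # untested: cycle the registry addon in the hope that it drops
    # the registry's on-disk contents between tests
    - name: Reset the minikube registry
      run: |
        minikube addons disable registry
        minikube addons enable registry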

An alternative could be to have one job build WildFly and upload the Maven repository as an artifact, and then have separate jobs run each test (or perhaps each set of tests requiring the same WildFly server image). However, as this setup is quite fiddly because it runs remotely, I'm not sure how the reporting would look.
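
A rough sketch of the job split (the job names, the test grouping, and the -Dtest.group property are all made up for illustration):

    jobs:
      build-wildfly:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build WildFly once
            run: mvn -B install -DskipTests
          - name: Share the Maven repository with the test jobs
            uses: actions/upload-artifact@v4
            with:
              name: maven-repo
              path: ~/.m2/repository
      cloud-tests:
        needs: build-wildfly
        runs-on: ubuntu-latest
        strategy:
          matrix:
            # hypothetical grouping, e.g. by required server image
            test-group: [datasources, messaging, web]
        steps:
          - uses: actions/checkout@v4
          - uses: actions/download-artifact@v4
            with:
              name: maven-repo
              path: ~/.m2/repository
          - name: Run one group of cloud tests
            run: mvn -B verify -Dtest.group=${{ matrix.test-group }}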

2 Pull request trigger
PRs in wildfly/wildfly execute a remote dispatch which results in the job getting run in the wildfly-extras/wildfly-cloud-tests repository.
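
For reference, the sending side boils down to something like this (a sketch; the secret name and event type are placeholders rather than what we actually use):

    # in a wildfly/wildfly workflow, reacting to the PR
    - name: Trigger the cloud tests
      env:
        # hypothetical token with permission to dispatch to wildfly-extras
        GH_TOKEN: ${{ secrets.CLOUD_TESTS_TOKEN }}
      run: |
        gh api repos/wildfly-extras/wildfly-cloud-tests/dispatches --input - <<EOF
        {
          "event_type": "trigger-cloud-tests",
          "client_payload": {
            "pr": "${{ github.event.pull_request.number }}",
            "sha": "${{ github.event.pull_request.head.sha }}"
          }
        }
        EOF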

There is no reporting back from the wildfly-extras/wildfly-cloud-tests repository about the run id of the resulting run.

What I did when I implemented this was to have the calling wildfly/wildfly job wait and poll a branch in wildfly-extras/wildfly-cloud-tests for the results of the job (IIRC I have a file with the triggering PR number). The job on the other side then writes to this branch once it is done, which is all quite ugly!

However, playing in other repositories, I found https://www.kenmuse.com/blog/creating-github-checks/. Basically this would result in:
* The WildFly pull request trigger completes immediately once it has done the remote dispatch.
* When the wildfly-cloud-tests job starts, it does a remote dispatch to wildfly, which gets picked up by a workflow that adds a status check to the PR conversation page saying that remote testing in wildfly-cloud-tests is in progress.
* Once the wildfly-cloud-tests job is done, it does another remote dispatch to wildfly, which updates the status check with success/failure.

So we'd have two checks in the PR's checks section rather than the current single one.
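
On the receiving side, the workflow in wildfly would look roughly like this (a sketch along the lines of that blog post; the event types and payload fields are invented for illustration, and for brevity it posts a new check run on completion rather than PATCHing the original by id):

    # hypothetical workflow in wildfly/wildfly
    on:
      repository_dispatch:
        types: [cloud-tests-started, cloud-tests-finished]

    jobs:
      report:
        runs-on: ubuntu-latest
        permissions:
          checks: write
        steps:
          - name: Create or complete the check run
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
              SHA: ${{ github.event.client_payload.sha }}
            run: |
              if [ "${{ github.event.action }}" = "cloud-tests-started" ]; then
                gh api repos/${{ github.repository }}/check-runs \
                  -f name='wildfly-cloud-tests' -f head_sha="$SHA" \
                  -f status=in_progress
              else
                gh api repos/${{ github.repository }}/check-runs \
                  -f name='wildfly-cloud-tests' -f head_sha="$SHA" \
                  -f status=completed \
                  -f conclusion='${{ github.event.client_payload.conclusion }}'
              fi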


Other ideas
While writing the above, the following occurred to me.

The reason for the split is that the cloud test framework is quite involved, and IMO does not belong in WildFly. So the remote dispatch approach was used.

However, I now wonder if a saner approach would be to make the wildfly-cloud-tests workflows reusable, so that they can be called from WildFly.

That would allow the tests, the test framework, and the workflow to continue to live in wildfly-cloud-tests, while running in wildfly itself. That should get rid of the remote dispatch issues and make that side of things simpler.
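
A minimal sketch of what that could look like (the file name and the input are made up). In wildfly-cloud-tests, the workflow declares itself callable:

    # .github/workflows/cloud-tests.yml in wildfly-extras/wildfly-cloud-tests
    on:
      workflow_call:
        inputs:
          wildfly-ref:
            description: 'WildFly ref to build and test'
            type: string
            required: true

and a wildfly/wildfly workflow calls it as a job:

    jobs:
      cloud-tests:
        uses: wildfly-extras/wildfly-cloud-tests/.github/workflows/cloud-tests.yml@main
        with:
          wildfly-ref: ${{ github.event.pull_request.head.sha }}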

It does not address the space issue, but I think that if this approach works, the space issue will be easier to deal with.

A downside is that it means the three actual test jobs (e.g. https://github.com/wildfly-extras/wildfly-cloud-tests/actions/runs/10583924772) would run using the wildfly GH org's set of runners.

Relying on wildfly-extras to get around that is a hack, though. But if we're going to move these, I think we need to optimize as much as possible, e.g. not rebuilding WildFly multiple times.


Any thoughts/insights are welcome.


Thanks,

Kabir

--
Brian Stansberry
Principal Architect, Red Hat JBoss EAP
WildFly Project Lead
He/Him/His