On 8 Sep 2023, at 08:01, Martin Choma <mchoma(a)redhat.com> wrote:
It is Friday so I will go wild :) Having multiple sources as input and generating text
based on a task definition sounds to me like an ideal task for an LLM :)
The most naive approach would be to provide a context URL, but even with that Bard provides
meaningful answers [1]. A more sophisticated approach could be to upload public WildFly
embeddings to Hugging Face and reference those. Or something else entirely, I am not an
expert. But you see the point.
[I’m not an LLM expert either, so take everything I write with a grain of salt]
I like the idea, but the actual results are not good at the moment.
We need these task-oriented guides to be the « source of truth » for such LLM output. At
the moment that output ranges from OK to plainly wrong. Relying on it from wildfly.org
without the ability to control and verify its correctness would be a disservice to our
community.
* https://bard.google.com/share/b70dd63db532 is plainly wrong (non-existent
wildfly:29-alpine base image, using java -jar to run WildFly; a corrected Dockerfile
sketch follows this list)
* https://bard.google.com/share/8e83c3b46e44 is not bad but uses an old
microprofile-config-smallrye XML schema and does not rely on the WildFly MicroProfile BOM
to manage the dependencies (a pom.xml sketch of the BOM usage follows this list).
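To make the first problem concrete, here is a minimal sketch of what a working Dockerfile
could look like. I am not claiming this is *the* recommended image: the
quay.io/wildfly/wildfly image name, the 29.x tag and the paths below are assumptions that
should be checked against the official image documentation.

    # Sketch only: base image tag and filesystem paths are assumptions, check the
    # official WildFly image documentation for the exact values.
    FROM quay.io/wildfly/wildfly:29.0.1.Final-jdk17

    # Deploy the application by copying the WAR into the deployments directory.
    COPY target/my-app.war /opt/jboss/wildfly/standalone/deployments/

    # WildFly is started with standalone.sh (not java -jar); bind the public
    # interface to all addresses so the server is reachable from outside the container.
    CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]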
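And for the second one, « rely on the WildFly MicroProfile BOM » roughly means a pom.xml
fragment like the one below. Again a sketch: the BOM coordinates and version are from
memory, the WildFly quickstarts are the authoritative reference.

    <!-- Sketch only: exact BOM artifactId and version are assumptions. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.wildfly.bom</groupId>
          <artifactId>wildfly-microprofile</artifactId>
          <version>29.0.1.Final</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>
    <dependencies>
      <!-- Version managed by the BOM imported above -->
      <dependency>
        <groupId>org.eclipse.microprofile.config</groupId>
        <artifactId>microprofile-config-api</artifactId>
        <scope>provided</scope>
      </dependency>
    </dependencies>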
If we have the building blocks for our user tasks in place, then an LLM could help
aggregate them into a complete scenario.
For example, « I want to read a K8S ConfigMap from a Jakarta EE application » could
be the aggregation of simple guides (« use MicroProfile Config », « deploy WildFly on
Kubernetes », « use ConfigMaps and Secrets with MicroProfile Config »).
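To illustrate what the « use MicroProfile Config » building block boils down to, here is a
minimal sketch of a Jakarta EE resource; the class and property names are made up, and on
Kubernetes the greeting.message property would typically come from a ConfigMap exposed as
the GREETING_MESSAGE environment variable or a mounted properties file.

    package org.example.config; // hypothetical package and class names

    import jakarta.enterprise.context.ApplicationScoped;
    import jakarta.inject.Inject;
    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import org.eclipse.microprofile.config.inject.ConfigProperty;

    @ApplicationScoped
    @Path("/greeting")
    public class GreetingResource {

        // Resolved from the MicroProfile Config sources (system properties,
        // environment variables, microprofile-config.properties, ...).
        @Inject
        @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
        String message;

        @GET
        public String greeting() {
            return message;
        }
    }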
Best regards,
Jeff
—
Jeff Mesnil
Engineer @ Red Hat JBoss EAP
http://jmesnil.net/