[
https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin...
]
James Strachan commented on WFCORE-433:
---------------------------------------
[~brian.stansberry] agreed with all that!
I think having a standard Kubernetes-based rolling upgrade approach has lots of value as a
single canonical way to do any kind of rolling upgrade; whether it's a configuration
change, a patch to the operating system, JDK or WildFly container itself, or a user's new
deployment being rolled out etc. Then operations folks in testing / production have one
way to do rolling updates of all software. We can then all reuse really nice rolling
upgrade visualisations etc.
However in development, all the things WildFly has done over the years to do incremental
updates (e.g. redeploying WARs on the fly, reloading classes, reconfiguring on the fly) are
really useful as developers build and test their code; i.e. developers want the fastest
possible feedback on changes when they are writing code - in the "pre commit"
flow. Though once a commit is done (or a config value has changed), in testing &
production it's kinda better to use immutable images that just start up - as that avoids all
kinds of complex bugs due to restarting things & possibly having concurrent / reload
edge cases that introduce bugs or resource leakage or whatever. i.e. it's less risky and
more "ops-ish" to just start up clean containers in production. It's more
"developer-ish" to mutate running containers. Both have value at different
stages in the overall software production line.
In the Karaf world, we now have a build system that generates an immutable application
server image; so that all the image can do is start up what it's statically configured to
start - you can't even try to deploy a new version or change a configuration value
at runtime. That kind of behaviour could be useful for testing/production environments -
while leaving development containers more open to incremental change, so that developers
don't have to keep waiting for containers to restart to test out their changes etc.
A few other thoughts on your main numbered points:
1) you could do the git stuff either inside WildFly directly; or you can let Kubernetes
do the git for you with the gitRepo volume, so that the WildFly container then just loads
config files from disk as usual. ConfigMap can work the same way too; you can mount the
files from a ConfigMap to the canonical place on disk where the WildFly docker image expects
to load them. That way you have one WildFly image and you can use the Kubernetes pod
template manifest to configure whether it's using git or ConfigMap. The docker image is
then unaware of the differences; i.e. it's all just enabled by the Kubernetes manifest (a
blob of JSON / YAML you generate as part of your build etc) - e.g. something like the pod
template sketched below.
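To make that concrete, here's a rough sketch of such a pod template - the image name, repo URL and mount path are just placeholders, and a real WildFly image might need the mount path or its startup arguments (e.g. --server-config) adjusted to pick up the mounted file:
{code}
# rough sketch only - image, repo URL and mount path are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: wildfly
spec:
  containers:
  - name: wildfly
    image: jboss/wildfly
    volumeMounts:
    - name: config
      # wherever the image expects to find standalone.xml etc
      mountPath: /opt/jboss/wildfly/standalone/configuration
  volumes:
  # option A: let Kubernetes do the git clone
  - name: config
    gitRepo:
      repository: "https://git.example.com/wildfly-config.git"
      revision: "master"
      directory: "."
  # option B: mount the same files from a ConfigMap instead
  #- name: config
  #  configMap:
  #    name: wildfly-config
{code}
Swapping between the two is then just a change to the manifest, not to the image.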
2) yes, the Admin Console needs a REST/DMR back end that it posts configuration changes
to. That service ideally would not be in the same container as the cluster of
WildFly pods you're configuring. There could be a git-based back end that does a git
commit/push per change, or a ConfigMap back end that does the change. (Or the WildFly
Admin Console could directly post to the Kubernetes ConfigMap API; but that might be too big a
change for the Admin Console?) So it might be simpler to just reuse whatever REST / remote
API the Admin Console uses right now to save changes, and have 2 implementations of the
back end: one for git, one for ConfigMap? Using a separate WF container image for this
sounds totally fine to me. It can be anything really - whatever you folks think is best!
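For the ConfigMap flavour, the back end would basically just keep an object like this up to date (the name and the XML content here are only placeholders):
{code}
# placeholder sketch of the ConfigMap a console back end could maintain
apiVersion: v1
kind: ConfigMap
metadata:
  name: wildfly-config
data:
  standalone.xml: |
    <server>
      <!-- the full configuration as last saved from the console -->
    </server>
{code}
A git back end would do the equivalent commit/push of the same file instead.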
3) I think there needs to be a WildFly-specific management console to let folks configure
things in an environment (via git / ConfigMap). Running WF on Atomic/OpenShift should
generally try to look and feel like using a traditional DC - i.e. folks should have at
least the same capabilities and power.
I figured once the WF Admin Console could work nicely with WF inside Kubernetes, we'd
then just wire the UI into the OpenShift console somehow so it feels like one pane of
glass, using links and so forth to join the 2 things. We need to do similar things
with the iPaaS, where we have iPaaS-specific screens linked into the OpenShift console.
They are actually separate web applications developed by different teams on different
release schedules; we just need to style them and link them so they feel like parts of one
universal console. e.g. we ship a jolokia / hawtio UI we link to on a per-pod basis in
OpenShift right now which is really a totally separate UI - we just link to it.
However I'd expect the WF team to own their admin console that changes the WF
configuration & provides any other WF-specific visualisation, metrics, reporting or
whatever. I don't know the various console options in WF well enough to make a call on
whether a new console should be written or whether we can reuse/refactor the current Admin
Console. Be that as it may, I think folks will want a nice WF-specific console when
using Atomic/OpenShift as the runtime environment - and if the admin console can work with
git / ConfigMap back ends, then that's a pretty compelling user experience IMHO.
4) Totally agreed. Both the iPaaS and BRMS folks have very similar requirements;
particularly that when things change in git, a rolling upgrade process kicks in
(using a slightly different ReplicationController manifest that uses the latest git commit
revision - sketched below) etc. So I'd hope that's all generic functionality that the WF
team doesn't have to build themselves.
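As a rough sketch of that "slightly different" ReplicationController (all the names, labels and the revision value here are placeholders):
{code}
# placeholder sketch - a new RC like this would be created per config revision
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-3f2a9c1
spec:
  replicas: 3
  selector:
    app: wildfly
    configRevision: "3f2a9c1"
  template:
    metadata:
      labels:
        app: wildfly
        configRevision: "3f2a9c1"
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly
        volumeMounts:
        - name: config
          mountPath: /opt/jboss/wildfly/standalone/configuration
      volumes:
      - name: config
        gitRepo:
          repository: "https://git.example.com/wildfly-config.git"
          # pin the pods to one git commit so the rollout is immutable
          revision: "3f2a9c1"
          directory: "."
{code}
A rolling upgrade is then just a matter of replacing the RC pinned to the old revision with one pinned to the new revision (e.g. via kubectl rolling-update), and a failed rollout just means going back to the old RC.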
In terms of the DC comparison; you could just say that using Kubernetes along with gitRepo
volumes (and the revision in the ReplicationController's manifest) or ConfigMap is a
kind of DC implementation. You could still use the DC, conceptually and maybe some code
too, if you want to do the "rolling upgrade and fall back if there is a failure"
type logic. So I don't think you need to throw away the DC idea; it's just that we could do
with an implementation that uses immutable containers and volumes (git or ConfigMap) rather
than swizzling runtime containers. So I see this as more of an implementation detail of
how the DC should work for testing/production environments really. But it's your call really
on whether pretending this new immutable containers + git/ConfigMap approach is a DC is too
strange for WF users ;)
Though I guess, rather like point 4) you raised, the DC concept (when git / ConfigMap
changes, do a rolling upgrade and if things go bad, roll back, maybe revert the change and
raise some user event to warn the user the change was bad) is maybe a generic thing, as
there's not really much in there that's WF-centric; e.g. that kind of logic could be reused
by the iPaaS and BRMS too.
git backend for loading/storing the configuration XML for wildfly
-----------------------------------------------------------------
Key: WFCORE-433
URL:
https://issues.jboss.org/browse/WFCORE-433
Project: WildFly Core
Issue Type: Feature Request
Components: Domain Management
Reporter: James Strachan
Assignee: Jason Greene
When working with WildFly in a cloud/PaaS environment (like OpenShift, fabric8, Docker,
Heroku et al) it'd be great to have a git repository for the configuration folder so
that writes work something like:
* git pull
* write the, say, standalone.xml file
* git commit -a -m "some comment"
* git push
(with a handler to deal with conflicts, such as last-write-wins).
Then an optional periodic 'git pull' and a reload of the configuration if there is a
change.
This would then mean that folks could run a number of WildFly containers using docker /
OpenShift / fabric8 and then have a shared git repository (e.g. the git repo in OpenShift
or fabric8) to configure a group of WildFly containers. Folks could then reuse the WildFly
management console within cloud environments (as the management console would, under the
covers, be loading/saving from/to git).
Folks could then benefit from git tooling when dealing with versioning and audit logs of
changes to the XML, along with getting the benefit of branching and tagging.