Thanks for this!
I want to make sure that what comes out of this is a set of clear decisions
about what we're willing to handle in terms of various sorts of conflicts:
what kinds of things the tool is responsible for solving vs. what the user
has to resolve, what tooling config settings we support for tweaking what
the tool will do for the user, etc.
When I say conflicts I don't just mean file conflicts; I also mean use-case
conflicts: what's the right way / wrong way to do things, and what
combinations are OK.
Part of what makes this interesting is that the provisioning tool itself is
meant to be used for setting up the installation the way the user wants it.
So in many, many cases, if there is a diff it's because the user bypassed
the tool. Of course that's going to be very common, particularly when it
comes to the config files, where people have been using tools like the CLI
to set up their configs for years, manually adding, removing and modifying
modules. Hopefully we can wean people off that and get them to use the
tool, but still, some people won't.
The 3-way diffing approach you describe sounds like it should be flexible
enough to handle various permutations.
Re: the "update feature pack": what happens if the user has produced and
stored one of those, then updates their base configuration, and then
applies that update feature pack? That is: the user has taken an install
provisioned by the tool, with standalone.xml produced according to the
inputs to the tool; modified that via CLI/HAL and saved those mods in an
update feature pack; then taken the bulk of those mods, adjusted their
inputs to the provisioning tool so that the tool produces the desired
config, done an installation, and now wishes to apply that update feature
pack?
Detail question:
"Filtering out some of the 'unimportant' files (tmp, logs)."
What does that mean? That the tmp and logs dirs end up empty in the new
installation? That might be OK for tmp, but for logs it sounds wrong, and
I'm not sure it's necessary for tmp.
Brian Stansberry
Manager, Senior Principal Software Engineer
Red Hat
On Mon, Sep 25, 2017 at 4:06 AM, Emmanuel Hugonnet <ehugonne(a)redhat.com>
wrote:
Hey guys,
TL;DR
I've been experimenting with Alexey to update a customized provisioned
server using the provisioning tool [1].
I'm using the syncing operations [2] that I created a while back by
porting the domain synchronization operations to standalone (to
synchronize standalone instances in a cloud environment).
I'm looking for some feedback on this approach.
Cheers
Emmanuel
[1]:
https://github.com/ehsavoie/pm/tree/upgrade-feature-pack
[2]:
https://github.com/ehsavoie/wildfly-core/tree/model_diff
----
full version:
Updating WildFly
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#terminology>Terminology
Updating is the process of applying a fix pack that increments the micro
version; there should be no compatibility issues. Upgrading is the
transition to a new minor version; compatibility should be preserved, but
there are a lot more changes.
While the mechanisms discussed here are general, they might need more
refinement for an upgrade.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#use-case>Use case
The use case is quite simple: *I have version 1.0.0 installed and I want
to update to 1.0.1, but I have locally customized my server and I'd like
to keep those changes*.
We have several local elements to take into account:
* filesystem changes (files added, modified or deleted).
* configuration changes.
The basic idea is to diff the existing instance against a pristine new
installation of the same version, then apply those changes to a newly
provisioned instance for staging.
We can keep it at the basic filesystem approach with some simple merge
strategy (theirs, ours).
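To make the merge behavior concrete, here is a minimal sketch of how a
per-file "theirs/ours" strategy could resolve each file, using the pristine
installation as the base of a three-way comparison. The names and the
base/ours/theirs framing are illustrative, not the tool's actual API:

```java
// Minimal sketch of a per-file merge strategy (hypothetical names,
// not the provisioning tool's real code).
public class FileMerge {
    public enum Strategy { OURS, THEIRS }

    /**
     * base   = content as provisioned by the tool (pristine install),
     * ours   = content in the user's customized instance,
     * theirs = content shipped by the new version.
     */
    public static String resolve(String base, String ours, String theirs, Strategy s) {
        if (ours.equals(theirs)) return ours;   // both sides agree: no conflict
        if (base.equals(ours))   return theirs; // only upstream changed: take the update
        if (base.equals(theirs)) return ours;   // only the user changed: keep the customization
        // Real conflict: fall back to the globally configured strategy.
        return s == Strategy.OURS ? ours : theirs;
    }
}
```

Only the last case is a true conflict; the first three resolve automatically
regardless of the chosen strategy.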
We can use the plugin to go into more detail. For example, using the model
diff between standalone WildFly instances, we can create a CLI script to
reconfigure the upgraded instance in a post-installation step.
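For illustration, a generated reconfiguration script might look something
like this (the addresses and values are made up; only the general
jboss-cli shape is real):

```
embed-server --server-config=standalone.xml
/extension=org.wildfly.extension.undertow:add
/subsystem=undertow:add
/subsystem=logging/root-logger=ROOT:write-attribute(name=level, value=DEBUG)
stop-embedded-server
```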
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#diffing-the-filesystem>Diffing the filesystem
The idea is to compare the instance to be upgraded with an instance
provisioned with the same feature packs as the one we want to upgrade.
The plugin will provide a list of files or regexps to be excluded.
Each file will be hashed, and we compare the hash plus the relative path
to deduce deleted, modified or added files.
For textual files we can provide a diff (and the means to apply it), but
maybe that should wait for a later version, as some kind of interaction
with the user might be required.
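The hash-plus-relative-path comparison described above can be sketched as
follows. This is a simplified illustration, not the plugin's actual code;
class and method names are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Simplified sketch of the hash-based diff: walk both trees, hash each
// file, and classify relative paths as added, deleted or modified.
public class FsDiff {

    /** Map each regular file's relative path to the hash of its content. */
    static Map<String, String> hashTree(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            return files.filter(Files::isRegularFile)
                        .collect(Collectors.toMap(
                            p -> root.relativize(p).toString(),
                            FsDiff::sha1));
        }
    }

    static String sha1(Path p) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(p));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Compare a pristine provisioned tree against the user's customized instance. */
    public static Map<String, List<String>> diff(Map<String, String> pristine,
                                                 Map<String, String> actual) {
        Map<String, List<String>> result = new HashMap<>();
        result.put("added", actual.keySet().stream()
                .filter(k -> !pristine.containsKey(k))
                .sorted().collect(Collectors.toList()));
        result.put("deleted", pristine.keySet().stream()
                .filter(k -> !actual.containsKey(k))
                .sorted().collect(Collectors.toList()));
        result.put("modified", actual.keySet().stream()
                .filter(k -> pristine.containsKey(k) && !pristine.get(k).equals(actual.get(k)))
                .sorted().collect(Collectors.toList()));
        return result;
    }
}
```

The exclusion filters the plugin provides would simply be applied before
(or during) the tree walk, so excluded paths never enter the maps.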
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#wildfly-standalone-plugin>WildFly standalone plugin
This is a specialization of the upgrading algorithm:
* Filtering out some of the 'unimportant' files (tmp, logs).
* Creating diffs of textual files (for example the realm properties) which
will be applied (merge strategy à la git).
* Using an embedded standalone server to create a jboss-cli script that
reconfigures the server (adding/removing extensions and reconfiguring
subsystems).
* Deleting files that were removed.
This is done on a staging upgraded instance before it is copied over the
old instance.
I have added a diff/sync operation in standalone that is quite similar to
what happens when a slave HC connects to the DC. Thus I start the current
installation, connect to it from an embedded server using the initial
configuration, and diff the models.
This is 'experimental' but it works nicely (I was able to 'upgrade' from
the standalone.xml of wildfly-core to the standalone-full.xml of
wildfly).
I'm talking only about the model part; I leave the files to the filesystem
'diffing'. It will work with managed deployments, though, as those are
added by the filesystem part and then the deployment reference is added in
the model.
For a future version of the tooling/plugin we might look for a way to
interact more with the user (for example, when applying the textual diffs,
choosing what to do per file instead of globally).
Also, the filters for excluding files are currently defined by the plugin,
but we could enrich them from the tooling as well.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#producing-an-update-feature-pack>Producing an update feature pack
From the initial upgrade mechanism, Alexey has seen the potential to
create a feature pack instead of my initial format.
Currently I'm able to create and install a feature pack that will
supersede the initial installation with its own local modifications.
Thus from my customized instance I can produce a feature pack that will
allow me to reinstall the same instance. Maybe this can also be used to
produce an update feature pack for patching.
<https://github.com/ehsavoie/pm/blob/upgrade-feature-pack/docs/guide/wildfly/upgrade.adoc#wildfly-domain-mode>WildFly domain mode
Domain mode is a bit more complex, and we need to think about how to
manage the model changes.
Those can be at the domain level or the host level, and depending on the
target instance we would need to get the changes from the domain.xml
and/or the host.xml.
I'm thinking about applying the same strategy as was done for standalone:
i.e., expose the sync operations to an embedded HC.
_______________________________________________
wildfly-dev mailing list
wildfly-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/wildfly-dev