[ https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin... ]
Brian Stansberry edited comment on WFCORE-433 at 2/24/16 4:36 PM:
------------------------------------------------------------------
[~jastrachan]
Thanks for the inputs. I'll brain dump here...
First, to get this out of the way... while I don't think it will be an emphasis
outside of the WildFly project, I think it's important that WildFly's existing
domain mode (DC etc) can be made to work reasonably well in a kubernetes environment. For
a few reasons -- 1) it's a reasonable bet some customers will demand it, so it's better to
be prepared; 2) Thomas Diesler, Heiko Braun and Harald Pehl were able to make it work
fairly easily in a prototype a year ago, so it's something already there; and 3) the
PetSets proposal
(https://github.com/smarterclayton/kubernetes/blob/petset/docs/proposals/p...) sounds
like it will address one of the main pain points with running a DC.
BUT, I don't expect using a DC to be the direction Red Hat's overall efforts go.
That's because it's specific to WildFly/EAP containers, while the goal in our
OpenShift offerings is to do things in a *consistent* way. Let Kubernetes control
container lifecycle, and have a common approach for how containers interact with
externalized configuration, and, hopefully, for how users update that configuration.
It's important that there's an agreed-upon solution to the issues in the last
paragraph across the middleware cloud efforts. That's something that needs to get
sorted out amongst the various players in cloud enablement, xpaas and the various
middleware projects. WildFly/EAP putting effort into something that ends up not being
standard is a poor use of resources -- we already have something that's specific to
WildFly.
Enough, blah blah blah, now to dig into the architecture you outline. Each item below is
an element in the architecture. I'm basically just regurgitating here.
1) We have immutable pods running WildFly/EAP containers (standalone server) where
it's the container and not WildFly itself that cares about git. Container start pulls
from git to put the right files on disk and then starts WF. This is easy for WildFly; the
cloud enablement guys do the work creating the container images. ;) Pods are immutable so
there are no concerns about writes to git from these pods. (BTW WF supports a mode where a
user can tweak a normally persistent config setting, e.g. a log level, but the change is
not persisted to standalone.xml. So a semi-immutable pod is possible.)
AIUI, for this piece of the architecture the same basic approach could be used, swapping
a ConfigMap in for git, with the container, not WF/EAP itself, being the thing that
consumes the ConfigMap.
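To make item 1) concrete, a container entrypoint could look roughly like the sketch below.
This is purely illustrative -- the CONFIG_GIT_REPO variable and the paths are made up, not
an existing image contract; --read-only-server-config is the existing WF option that gives
the semi-immutable behavior mentioned above.
{code:bash}
#!/bin/sh
# Hypothetical entrypoint: fetch the config, then start WildFly.
# CONFIG_GIT_REPO and the paths are invented for illustration.
set -e

git clone --depth 1 "$CONFIG_GIT_REPO" /tmp/config
cp /tmp/config/standalone.xml "$JBOSS_HOME/standalone/configuration/"

# ConfigMap variant: the file is already mounted into the pod, so just
# copy it from the mount point instead of cloning, e.g.
#   cp /etc/wf-config/standalone.xml "$JBOSS_HOME/standalone/configuration/"

# --read-only-server-config allows runtime tweaks (e.g. a log level)
# without ever writing them back to standalone.xml.
exec "$JBOSS_HOME/bin/standalone.sh" \
     --read-only-server-config=standalone.xml -b 0.0.0.0
{code}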
2) Some process needs to take user config changes and write them to git or ConfigMap. This
isn't the WF/EAP Admin Console, which is a browser-based client. It's a separate
controller type from the one used for the type 1) pods, as there should only be one writer
(or else some coordination to avoid write conflicts is needed). PetSets? And it needs to
understand how to write to git/ConfigMap. This sounds like a WF/EAP server running in
--admin-only mode (so it doesn't launch unneeded services to handle normal user requests,
with possible negative impacts like joining JGroups clusters with the type 1) pods). For
this part, new functionality in WF is needed -- the ability to update git/ConfigMap when
the config is persisted.
We use a WF process for this because it understands our supported management API and
knows how to take changes and persist them.
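As a sketch of the writer pod's flow (the git step stands in for the proposed
persist-hook functionality, which doesn't exist yet; the CLI operation is just an example
change):
{code:bash}
# Start the writer with management interfaces only, no user-facing services.
$JBOSS_HOME/bin/standalone.sh --admin-only &
sleep 10   # crude wait for the management interface to come up

# Apply a change through the supported management API; admin-only mode
# still persists it to standalone.xml.
$JBOSS_HOME/bin/jboss-cli.sh --connect \
  --command="/subsystem=logging/root-logger=ROOT:write-attribute(name=level,value=DEBUG)"

# The missing piece WF would need to automate: push the persisted config
# out to the shared git repo (or update the ConfigMap) on each persist.
cd "$JBOSS_HOME/standalone/configuration"
git commit -am "set root log level to DEBUG" && git push origin master
{code}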
3) The management client. AIUI we are not planning on using WF's HAL Admin Console in
the cloud; the goal is to have an overall management console, not container-specific ones.
Is this changing? HAL is currently not enabled when the server is in --admin-only mode, so
to use HAL this would have to be addressed somehow.
4) Something that reacts to changes and does all the new ReplicationController, rolling
upgrade stuff you outline. This sounds like a general function of the xpaas, not something
WF/EAP does. There would need to be some sort of information exposed by the type 1) and
type 2) containers though, so the coordination piece knows which type 1) pods are related
to which type 2) pod.
I suppose this functionality could run in the type 2) container, but it should be
general-purpose logic.
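With today's primitives, that reaction might reduce to something like the following, where
the rc names and image are invented for illustration:
{code:bash}
# Hypothetical coordination step: roll the type 1) pods so that new pods
# pick up the updated git revision / ConfigMap at startup.
kubectl rolling-update wildfly-rc-v1 wildfly-rc-v2 \
    --image=registry.example.com/wildfly-app:v2
{code}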
Comparing all this to using a DC, it looks quite similar, as the type 2) element plays
much the same role as a DC. Basic differences:
1) Writes are not directly pushed to servers; instead they only get picked up when a new
pod starts. I think it would be pretty simple to add that kind of semantic to servers
managed by a DC.
2) Type 1) pods don't need a running type 2) pod to start; they just need to be able
to pull git/ConfigMap. That's pretty important. Hmmm, is that true though? Something
needs to provide the remote git repo / ConfigMap. (Time to read up more.)
3) A DC provides a central point for coordinating reads of the servers, which is nice.
It's also non-standard.
4) The DC keeps a config history, which you can use to revert to a previous config.
It's not as simple as using git though.
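With the git approach, by contrast, inspecting and reverting history is plain git usage,
e.g.:
{code:bash}
# Plain git tooling replaces the DC's config history / snapshot mechanism.
git log --oneline -- standalone.xml
git revert <bad-commit>    # or: git checkout <good-commit> -- standalone.xml
git push origin master
{code}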
Re: lifecycle hooks, the EAP images cloud enablement produces for OpenShift use those. WF
10 / EAP 7 are much better in terms of how graceful shutdown works though, which makes it
easier to write good hooks. I talked about this some with Ales Justin a couple weeks ago
at the cloud enablement meetings.
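For example, with WF 10's graceful shutdown a preStop hook can boil down to a single CLI
call (the timeout value is illustrative):
{code:bash}
# Sketch of a preStop hook command: stop accepting new requests and let
# in-flight ones drain before the process exits.
$JBOSS_HOME/bin/jboss-cli.sh --connect --command="shutdown --timeout=60"
{code}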
git backend for loading/storing the configuration XML for wildfly
-----------------------------------------------------------------
Key: WFCORE-433
URL: https://issues.jboss.org/browse/WFCORE-433
Project: WildFly Core
Issue Type: Feature Request
Components: Domain Management
Reporter: James Strachan
Assignee: Jason Greene
When working with WildFly in a cloud/PaaS environment (like OpenShift, fabric8, Docker,
Heroku et al) it'd be great to have a git repository for the configuration folder so
that writes work something like:
* git pull
* write the, say, standalone.xml file
* git commit -a -m "some comment"
* git push
(with a handler to deal with conflicts, such as last-write-wins).
Then an optional periodic 'git pull' could reload the configuration if there is a
change.
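A rough sketch of that watcher loop (the interval, branch and conflict handling are just
placeholders):
{code:bash}
# Hypothetical watcher: poll the shared repo, reload on config change.
cd "$JBOSS_HOME/standalone/configuration"
while true; do
  git fetch origin
  if ! git diff --quiet HEAD origin/master -- standalone.xml; then
    git merge origin/master    # last-write-wins handling elided
    $JBOSS_HOME/bin/jboss-cli.sh --connect --command=":reload"
  fi
  sleep 60
done
{code}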
This would then mean that folks could run a number of WildFly containers on Docker /
OpenShift / fabric8 and have a shared git repository (e.g. the git repo in OpenShift or
fabric8) to configure the whole group. Folks could then reuse the WildFly management
console within cloud environments (as the management console would, under the covers, be
loading/saving from/to git).
Folks would also benefit from git tooling for versioning and audit logs of changes to the
XML, along with the benefits of branching and tagging.