<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Thanks Brian, I’d like to do a little more research with WildFly <a href="https://github.com/wildfly-extras/wildfly-camel/issues/93" class="">domain mode</a> in OpenShift before responding. Won’t be long ...<div class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 5 Dec 2014, at 20:00, Brian Stansberry <<a href="mailto:brian.stansberry@redhat.com" class="">brian.stansberry@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class="">On 12/5/14, 7:36 AM, Thomas Diesler wrote:<br class=""><blockquote type="cite" class="">Folks,<br class=""><br class="">I’ve recently been looking at WildFly container deployments on OpenShift<br class="">V3. The following setup is documented here<br class=""><<a href="https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md" class="">https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md</a>><br class=""><br class=""><br class=""> The example architecture consists of a set of three highly available<br class=""> (HA) servers running REST endpoints.<br class=""> For server replication and failover we use Kubernetes. 
Each server<br class=""> runs in a dedicated Pod that we access via Services.<br class=""><br class="">This approach comes with a number of benefits, which are sufficiently<br class="">explained in various OpenShift<br class=""><<a href="https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/" class="">https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/</a>>,<br class="">Kubernetes<br class=""><<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md" class="">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md</a>> and<br class="">Docker <<a href="https://docs.docker.com/" class="">https://docs.docker.com/</a>> materials, but also with a number of<br class="">challenges. Lets look at those in more detail …<br class=""><br class="">In the example above Kubernetes replicates a number of standalone<br class="">containers and isolates them in a Pod each with limited access from the<br class="">outside world.<br class=""><br class="">* The management interfaces are not accessible<br class="">* The management consoles are not visible<br class=""><br class="">With WildFly-Camel we have a Hawt.io<br class=""><<a href="http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html" class="">http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html</a>> console<br class="">that allows us to manage Camel Routes configured or deployed to the<br class="">WildFly runtime.<br class="">The WildFly console manages aspects of the appserver.<br class=""><br class="">In a more general sense, I was wondering how the WildFly domain model<br class="">maps to the Kubernetes runtime environment and how these server<br class="">instances are managed and information about them relayed back to the<br class="">sysadmin<br class=""><br class=""></blockquote><br class="">Your questions below mostly relate (correctly) to what *should* be done <br 
class="">but I'll preface by discussing what *could* be done. Please forgive noob <br class="">mistakes as I'm an admitted Kubernetes noob.<br class=""><br class="">AIUI a Kubernetes services exposes a single endpoint to outside callers, <br class="">but the containers in the pods can open an arbitrary number of client <br class="">connections to other services.<br class=""><br class="">This should work fine with WildFly domain management, as there can be a <br class="">Service for the Domain Controller, which is the management interaction <br class="">point for the sysadmin. And then the WildFly instance in the container <br class="">for any other Service can connect and register with that Domain <br class="">Controller service. The address/port those other containers use can be <br class="">the same one that sysadmins use.<br class=""><br class=""><blockquote type="cite" class="">a) Should these individual wildfly instances somehow be connected to<br class="">each other (i.e. notion of domain)?<br class=""></blockquote><br class="">Depends on the use case, but I expect certainly some users will <br class="">centralized management, even if it's just for monitoring.<br class=""><br class=""><blockquote type="cite" class="">b) How would an HA singleton service work?<br class=""></blockquote><br class="">WildFly *domain management* itself does not have an HA singleton notion, but<br class=""><br class="">i) Kubernetes replication controllers themselves provide a form of this, <br class="">but I assume with a period of downtime while a new pod is spun up.<br class=""><br class="">ii) WildFly clustering has an HA singleton service concept that can be <br class="">used. There are different mechanisms JGroups supports for group <br class="">communication, but one involves each peer in the group connecting to a <br class="">central coordination process. 
So presumably that coordination process <br class="">could be deployed as a Kubernetes Service.<br class=""><br class=""><blockquote type="cite" class="">c) What level of management should be exposed to the outside?<br class=""></blockquote><br class="">As much as possible this should be a user choice. Architecturally, I <br class="">believe we can expose everything. I'm not really keen on trying to disable <br class="">things in Kubernetes-specific ways. But I'm quite open to features to <br class="">disable things that work in any deployment environment.<br class=""><br class=""><blockquote type="cite" class="">d) Should it be possible to modify runtime behaviour of these servers<br class="">(i.e. write access to config)?<br class=""></blockquote><br class="">See c). We don't have a true read-only mode, although I think it would be <br class="">fairly straightforward to add such a thing if it were a requirement.<br class=""><br class=""><blockquote type="cite" class="">e) Should deployment be supported at all?<br class=""></blockquote><br class="">See c). Making removal of the deployment capability configurable is also <br class="">doable, although it's likely more work than a simple read-only mode.<br class=""><br class=""><blockquote type="cite" class="">f) How can a server that has gone bad be detected?<br class=""></blockquote><br class="">I'll need to get a better understanding of Kubernetes to say anything <br class="">useful about this.<br class=""><br class=""><blockquote type="cite" class="">g) Should logs be aggregated?<br class=""></blockquote><br class="">This sounds like something that belongs at a higher layer, or as a <br class="">general-purpose WildFly feature unrelated to Kubernetes.<br class=""><br class=""><blockquote type="cite" class="">h) Should there be a common management view (i.e. console) for these<br class="">servers?<br class=""></blockquote><br class="">I don't see why not. 
I think some users will want that, others won't, <br class="">and others will want a console that spans things beyond WildFly servers.<br class=""><br class=""><blockquote type="cite" class="">i) etc …<br class=""><br class="">Are these concerns already being addressed for WildFly?<br class=""><br class=""></blockquote><br class="">Somewhat. As you can see from the above, a fair bit of stuff could just <br class="">work. I know Heiko Braun has been thinking a bit about Kubernetes use <br class="">cases too, or at least wanting to do so. ;)<br class=""><br class=""><blockquote type="cite" class="">Is there perhaps even an already existing design that I could look at?<br class=""><br class=""></blockquote><br class="">Kubernetes-specific stuff? No.<br class=""><br class=""><blockquote type="cite" class="">Can such an effort be connected to the work that is going on in Fabric8?<br class=""><br class=""></blockquote><br class="">The primary Fabric8-related thing we (aka Alexey Loubyansky) are doing <br class="">currently is working to support non-XML-based persistence of our config <br class="">files and a mechanism to support server detection of changes to the <br class="">filesystem, triggering updates to the runtime. The goal is to integrate <br class="">with the git-based mechanisms Fabric8 uses for configuration.<br class=""><br class=""><a href="https://developer.jboss.org/docs/DOC-52773" class="">https://developer.jboss.org/docs/DOC-52773</a><br class=""><a href="https://issues.jboss.org/browse/WFCORE-294" class="">https://issues.jboss.org/browse/WFCORE-294</a><br class=""><a href="https://issues.jboss.org/browse/WFCORE-433" class="">https://issues.jboss.org/browse/WFCORE-433</a><br class=""><br class=""><blockquote type="cite" class="">cheers<br class="">—thomas<br class=""><br class="">PS: it would be an area that we @ wildfly-camel would be interested to work on<br class=""></blockquote><br class="">Great! 
:)<br class=""><br class="">-- <br class="">Brian Stansberry<br class="">Senior Principal Software Engineer<br class="">JBoss by Red Hat<br class="">_______________________________________________<br class="">wildfly-dev mailing list<br class="">wildfly-dev@lists.jboss.org<br class="">https://lists.jboss.org/mailman/listinfo/wildfly-dev</div></blockquote></div><br class=""></div></body></html>