Folks,

I've recently been looking at WildFly container deployments on OpenShift V3. The following setup is documented here: https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md

[inline image: example architecture diagram]

The example architecture consists of a set of three highly available (HA) servers running REST endpoints. For server replication and failover we use Kubernetes. Each server runs in a dedicated Pod that we access via Services.

This approach comes with a number of benefits, which are sufficiently explained in various OpenShift (https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/), Kubernetes (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md) and Docker (https://docs.docker.com/) materials, but it also comes with a number of challenges. Let's look at those in more detail …

In the example above, Kubernetes replicates a number of standalone containers and isolates each of them in a Pod with limited access from the outside world.

* The management interfaces are not accessible
* The management consoles are not visible

With WildFly-Camel we have a Hawt.io console (http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html) that allows us to manage Camel routes configured in or deployed to the WildFly runtime. The WildFly console manages aspects of the appserver.

In a more general sense, I was wondering how the WildFly domain model maps to the Kubernetes runtime environment, how these server instances are managed, and how information about them is relayed back to the sysadmin:

a) Should these individual WildFly instances somehow be connected to each other (i.e. notion of domain)?
b) How would an HA singleton service work?
c) What level of management should be exposed to the outside? (see the sketch after this list for one way to frame this)
d) Should it be possible to modify runtime behaviour of these servers (i.e. write access to config)?
e) Should deployment be supported at all?
f) How can a server that has gone bad be detected?
g) Should logs be aggregated?
h) Should there be a common management view (i.e. console) for these servers?
i) etc …
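To make (c) a bit more concrete, here is a minimal sketch of the kind of split I have in mind. Everything below is illustrative only; the names, labels and v1 manifest syntax are my assumptions, not something from the book. The idea is that the REST port gets published through one Service, while the WildFly management port (9990) sits behind a second Service that stays cluster-internal unless we decide that management access should be exposed at all:

    # Purely illustrative sketch; names and labels are made up.
    # One Service publishes the REST endpoints of the replicated pods:
    apiVersion: v1
    kind: Service
    metadata:
      name: wildfly-rest
    spec:
      selector:
        app: wildfly-camel    # assumed label on the replicated WildFly pods
      ports:
      - name: http
        port: 8080            # REST endpoints, meant for outside consumers
        targetPort: 8080
    ---
    # A second Service could carry the WildFly management interface and
    # stay cluster-internal, so (c) becomes a question of which Services
    # we route externally, rather than which ports the image opens:
    apiVersion: v1
    kind: Service
    metadata:
      name: wildfly-management
    spec:
      selector:
        app: wildfly-camel
      ports:
      - name: management
        port: 9990            # WildFly HTTP management / console port
        targetPort: 9990

With a split like that, (c), (d) and (e) partly reduce to which of these Services we choose to route to the outside.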
Are these concerns already being addressed for WildFly?

Is there perhaps even an already existing design that I could look at?

Can such an effort be connected to the work that is going on in Fabric8?

cheers
—thomas

PS: this is an area that we @ wildfly-camel would be interested to work on