Folks,

following up on this topic, I worked a little more on WildFly-Camel in Kubernetes/OpenShift.

These doc pages are targeted for the upcoming 2.1.0 release (01-Feb-2015):

* WildFly-Camel on Docker: https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docker.md
* WildFly-Camel on OpenShift: https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/openshift.md

The setup looks like this:

[inline image: deployment diagram]

We can now manage these individual WildFly nodes. The domain controller (DC) is replicated once, the host definition is replicated three times.
In theory this means that the domain controller is no longer a single point of failure - Kubernetes would respawn the DC on failure.

Here are some ideas for improvement …

In a Kubernetes environment we should be able to swap out containers based on some criteria. It should be possible to define these criteria, emit events based on them, and create/remove/replace containers automatically.
Additionally, a human should be able to make qualified decisions through a console and create/remove/replace containers easily.
Much of the needed information is in JMX. Heiko told me that there is a project that can push events to InfluxDB - something to look at.

If displaying the information contained in JMX in a console has value (e.g. in hawtio), that information must be aggregated and made visible for each node.
Currently, we have a round robin service on 8080 which would show a different hawtio instance on every request - this is nonsense.

I can see a number of high level items:

#1 a thing that aggregates JMX content - possibly multiple MBeanServers in the DC VM that delegate to the respective MBeanServers on the other hosts, so that a management client can pick up the info from one service (rough sketch below)
#2 look at the existing InfluxDB thing and research how to automate the replacement of containers
#3 from the usability perspective, there may need to be an OpenShift profile in the console(s), because some operations may not make sense in that environment
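For #1, here is a minimal sketch of the direction I mean - it does not delegate MBeanServers as described above, it just polls each host's MBeanServer from one place over plain JMX remoting. The host names, port and protocol are assumptions about the container setup, not what we actually run:

import java.util.LinkedHashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxAggregator {

    // hypothetical pod/service names resolvable inside the cluster
    private static final String[] HOSTS = {"wildfly-node-1", "wildfly-node-2", "wildfly-node-3"};

    // Collect one MBean attribute from every host into a single per-node view
    public static Map<String, Object> collect(String mbean, String attribute) throws Exception {
        Map<String, Object> view = new LinkedHashMap<>();
        ObjectName name = new ObjectName(mbean);
        for (String host : HOSTS) {
            // WildFly exposes JMX over the management port via http-remoting-jmx
            // (needs jboss-client.jar on the classpath); port 9990 is an assumption
            JMXServiceURL url = new JMXServiceURL("service:jmx:http-remoting-jmx://" + host + ":9990");
            JMXConnector connector = JMXConnectorFactory.connect(url, null);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                view.put(host, mbsc.getAttribute(name, attribute));
            } finally {
                connector.close();
            }
        }
        return view;
    }

    public static void main(String[] args) throws Exception {
        // e.g. heap usage across all nodes in one call
        System.out.println(collect("java.lang:type=Memory", "HeapMemoryUsage"));
    }
}

A real version would presumably re-register the collected values under per-node ObjectNames in a local MBeanServer on the DC, so that hawtio and other clients still only talk to one endpoint.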
</div><div class="">Currently, we have a round robin service on 8080 which would show a different hawtio instance on every request - this is nonsense.</div><div class=""><br class="">I can see a number of high level items: </div><div class=""><br class=""></div><div class="">#1 a thing that aggregates jmx content - possibly multiple MBeanServers in the DC VM that delegate to respective MBeanServers on other hosts, so that a management client can pickup the info from one service<br class="">#2 look at the existing inluxdb thing and research into how to automate the replacement of containers</div><div class="">#3 from the usability perspective, there may need to be an openshift profile in the console(s) because some operations may not make sense in that env</div><div class=""><br class=""></div><div class="">cheers</div><div class="">—thomas</div><div class=""><br class=""></div><div class="">PS: looking forward to an exiting ride in 2015</div><div class=""><br class=""></div><div class=""><img apple-inline="yes" id="05424E42-0416-469E-8A90-3D004D350A48" height="530" width="473" apple-width="yes" apple-height="yes" src="cid:C4733B63-AFDC-4302-B5C0-48F43CC2E116@fritz.box" class=""></div><div class=""> <br class=""><div><blockquote type="cite" class=""><div class="">On 5 Dec 2014, at 14:36, Thomas Diesler <<a href="mailto:tdiesler@redhat.com" class="">tdiesler@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html charset=utf-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Folks,<div class=""><br class=""></div><div class="">I’ve recently been looking at WildFly container deployments on OpenShift V3. The following setup is documented <a href="https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md" class="">here</a></div><div class=""><br class=""></div><blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;" class=""><div class=""><span id="cid:08DF468F-C210-4AD7-9E57-3CFF9641C350@fritz.box"><example-rest-design.png></span></div><div class=""><br class=""></div><div class="">The example architecture consists of a set of three high available (HA) servers running REST endpoints. </div><div class="">For server replication and failover we use Kubernetes. Each server runs in a dedicated Pod that we access via Services.</div><div class=""><br class=""></div></blockquote><div class="">This approach comes with a number of benefits, which are sufficiently explained in various <a href="https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/" class="">OpenShift</a>, <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md" class="">Kubernetes</a> and <a href="https://docs.docker.com/" class="">Docker</a> materials, but also with a number of challenges. Lets look at those in more detail …</div><div class=""><br class=""></div><div class="">In the example above Kubernetes replicates a number of standalone containers and isolates them in a Pod each with limited access from the outside world. 
</div><div class=""><br class=""></div><div class="">* The management interfaces are not accessible </div><div class="">* The management consoles are not visible</div><div class=""><br class=""></div><div class="">With WildFly-Camel we have a <a href="http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html" class="">Hawt.io</a> console that allows us to manage Camel Routes configured or deployed to the WildFly runtime. </div><div class="">The WildFly console manages aspects of the appserver.</div><div class=""><br class=""></div><div class="">In a more general sense, I was wondering how the WildFly domain model maps to the Kubernetes runtime environment and how these server instances are managed and information about them relayed back to the sysadmin</div><div class=""><br class=""></div><div class="">a) Should these individual wildfly instances somehow be connected to each other (i.e. notion of domain)?</div><div class="">b) How would an HA singleton service work?</div><div class="">c) What level of management should be exposed to the outside?</div><div class="">d) Should it be possible to modify runtime behaviour of these servers (i.e. write access to config)?</div><div class="">e) Should deployment be supported at all?</div><div class="">f) How can a server be detected that has gone bad?</div><div class="">g) Should logs be aggregated?</div><div class="">h) Should there be a common management view (i.e. console) for these servers?</div><div class="">i) etc …</div><div class=""><br class=""></div><div class="">Are these concerns already being addressed for WildFly? </div><div class=""><br class=""></div><div class="">Is there perhaps even an already existing design that I could look at?</div><div class=""><br class=""></div><div class=""><div class="">Can such an effort be connected to the work that is going on in Fabric8? </div></div><div class=""><br class=""></div><div class="">cheers</div><div class="">—thomas</div><div class=""><br class=""></div><div class="">PS: it would be area that we @ wildfly-camel were interested to work on</div><div class=""> </div></div>_______________________________________________<br class="">wildfly-dev mailing list<br class=""><a href="mailto:wildfly-dev@lists.jboss.org" class="">wildfly-dev@lists.jboss.org</a><br class="">https://lists.jboss.org/mailman/listinfo/wildfly-dev</div></blockquote></div><br class=""></div></body></html>