Folks,

I’ve recently been looking at WildFly container deployments on OpenShift V3. The following setup is documented here


The example architecture consists of a set of three highly available (HA) servers running REST endpoints.
For server replication and failover we use Kubernetes. Each server runs in a dedicated Pod that we access via Services.
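
To make this concrete, here is a minimal sketch of how such a Service could be defined with the Fabric8 kubernetes-client Java API (which seems relevant given the Fabric8 question below). The service name, selector labels and namespace are hypothetical, and only the HTTP port is exposed:

    import java.util.Collections;

    import io.fabric8.kubernetes.api.model.IntOrString;
    import io.fabric8.kubernetes.api.model.Service;
    import io.fabric8.kubernetes.api.model.ServiceBuilder;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class CreateWildFlyService {
        public static void main(String[] args) {
            // Service fronting the replicated WildFly pods; only the
            // HTTP port is exposed, the management port (9990) is not.
            Service service = new ServiceBuilder()
                .withNewMetadata()
                    .withName("wildfly-http")               // hypothetical name
                .endMetadata()
                .withNewSpec()
                    .withSelector(Collections.singletonMap("app", "wildfly"))
                    .addNewPort()
                        .withPort(8080)
                        .withTargetPort(new IntOrString(8080))
                    .endPort()
                .endSpec()
                .build();

            try (KubernetesClient client = new DefaultKubernetesClient()) {
                client.services().inNamespace("default").create(service);
            }
        }
    }

Note that nothing in this sketch exposes port 9990, which is exactly the access limitation discussed next.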

This approach comes with a number of benefits, which are sufficiently explained in various OpenShift, Kubernetes and Docker materials, but also with a number of challenges. Let's look at those in more detail …

In the example above, Kubernetes replicates a number of standalone containers, isolating each of them in a Pod with limited access from the outside world. As a result:

* The management interfaces are not accessible 
* The management consoles are not visible

With WildFly-Camel we have a Hawt.io console that allows us to manage Camel routes configured on or deployed to the WildFly runtime.
The WildFly console manages aspects of the application server itself.
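
For illustration, a route of the kind Hawt.io would visualize might look like the following minimal sketch in the Camel Java DSL; the endpoint URI and payload are made up:

    import org.apache.camel.builder.RouteBuilder;

    // Minimal route as it might be deployed to WildFly-Camel; the
    // endpoint URI and payload are illustrative only.
    public class GreetingRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Expose a simple REST endpoint via the undertow component;
            // Hawt.io can then inspect and manage this route at runtime.
            from("undertow:http://localhost:8080/greeting")
                .transform().constant("Hello from WildFly-Camel");
        }
    }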

In a more general sense, I was wondering how the WildFly domain model maps to the Kubernetes runtime environment, how these server instances are managed, and how information about them is relayed back to the sysadmin.

a) Should these individual WildFly instances somehow be connected to each other (i.e. the notion of a domain)?
b) How would an HA singleton service work?
c) What level of management should be exposed to the outside?
d) Should it be possible to modify runtime behaviour of these servers (i.e. write access to config)?
e) Should deployment be supported at all?
f) How can a server that has gone bad be detected (see the liveness probe sketch after this list)?
g) Should logs be aggregated?
h) Should there be a common management view (i.e. console) for these servers?
i) etc …
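
Regarding (f), Kubernetes liveness probes seem like the natural building block. Below is a minimal sketch using the Fabric8 model builders; the /health path and the image name are assumptions, and the application would have to provide such an endpoint:

    import io.fabric8.kubernetes.api.model.Container;
    import io.fabric8.kubernetes.api.model.ContainerBuilder;
    import io.fabric8.kubernetes.api.model.IntOrString;

    public class WildFlyContainerTemplate {

        // Container spec for the pod template; the kubelet restarts the
        // container when the HTTP probe fails, i.e. the server "has gone bad".
        public static Container wildflyContainer() {
            return new ContainerBuilder()
                .withName("wildfly")
                .withImage("wildflyext/wildfly-camel")      // assumed image
                .addNewPort().withContainerPort(8080).endPort()
                .withNewLivenessProbe()
                    .withNewHttpGet()
                        .withPath("/health")                // hypothetical endpoint
                        .withPort(new IntOrString(8080))
                    .endHttpGet()
                    .withInitialDelaySeconds(60)
                    .withTimeoutSeconds(5)
                .endLivenessProbe()
                .build();
        }
    }

The kubelet would then restart a container whose probe fails, which partly answers (f) but says nothing about relaying that event back to a common management view (h).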

Are these concerns already being addressed for WildFly? 

Is there perhaps even an already existing design that I could look at?

Can such an effort be connected to the work that is going on in Fabric8? 

cheers
—thomas

PS: this would be an area that we @ wildfly-camel would be interested in working on