[wildfly-dev] WildFly domain on OpenShift Origin

Brian Stansberry brian.stansberry at redhat.com
Wed Dec 17 09:42:18 EST 2014


On 12/17/14, 3:28 AM, Thomas Diesler wrote:
> Folks,
>
> following up on this topic, I worked a little more on WildFly-Camel in
> Kubernetes/OpenShift.
>
> These doc pages are targeted for the upcoming 2.1.0 release (01-Feb-2015)
>
>   * WildFly-Camel on Docker
>     <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docker.md>
>   * WildFly-Camel on OpenShift
>     <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/openshift.md>
>

Great. :)

>
> The setup looks like this
>
>
> We can now manage these individual wildfly nodes. The domain controller
> (DC) is replicated once, the host definition is replicated three times.
> Theoretically, this means that there is no single point of failure with
> the domain controller any more - kube would respawn the DC on failure
>

I'm heading on PTO tomorrow so likely won't be able to follow up on this 
question for a while, but one concern I had with the Kubernetes respawn 
approach was retaining any changes that had been made to the domain 
configuration. Unless the domain.xml comes from / is written to some 
shared storage available to the respawned DC, any changes made will be lost.

Of course, if the DC is only being used for reads, this isn't an issue.
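To illustrate what I mean, here is a naive, untested sketch - in practice 
you would more likely just mount the configuration directory itself on a 
persistent volume; both paths below are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Naive illustration: seed domain.xml from a shared mount when the DC pod is
// respawned, and copy changes back out so they survive the next respawn.
// Both paths are hypothetical examples, not real defaults being proposed.
public class DomainConfigSync {

    private static final Path SHARED = Paths.get("/shared/config/domain.xml");
    private static final Path LOCAL =
            Paths.get("/opt/jboss/wildfly/domain/configuration/domain.xml");

    // Before the DC starts: restore the last known configuration, if any.
    static void restore() throws IOException {
        if (Files.exists(SHARED)) {
            Files.copy(SHARED, LOCAL, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    // After a management change (or periodically): persist the configuration.
    static void persist() throws IOException {
        Files.createDirectories(SHARED.getParent());
        Files.copy(LOCAL, SHARED, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        restore();
        // ... start the domain controller here ...
        persist();
    }
}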

> Here are some ideas for improvement …
>
> In a kube env we should be able to swap out containers based on some
> criteria. It should be possible to define these criteria, emit events
> based on them and create/remove/replace containers automatically.
> Additionally, a human should be able to make qualified decisions through
> a console and create/remove/replace containers easily.
> Much of the needed information is in JMX. Heiko told me that there is a
> project that can push events to InfluxDB - something to look at.
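To make the criteria/events idea concrete, a minimal (untested) sketch of
one such criterion evaluated from JMX data; the 90% threshold and the emit
hook are invented for illustration - in a kube env that hook would end up
calling the Kubernetes API or feeding the InfluxDB pipeline mentioned above
rather than printing:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch of a "criterion" evaluated from JMX data. The threshold and
// the event hook are made up; a real version would publish the event somewhere
// an operator or an automated controller can act on it.
public class HeapCriterion {

    private static final double THRESHOLD = 0.90; // hypothetical: 90% of max heap

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean(); // the data is already in JMX
        MemoryUsage heap = memory.getHeapMemoryUsage();
        double ratio = (double) heap.getUsed() / heap.getMax();
        if (ratio > THRESHOLD) {
            emitReplaceEvent(ratio);
        }
    }

    private static void emitReplaceEvent(double ratio) {
        // Placeholder for "emit event / replace container".
        System.out.printf("heap at %.0f%%, container should be replaced%n", ratio * 100);
    }
}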
>
> If displaying the information contained in JMX in a console has value
> (e.g. in Hawtio), that information must be aggregated and visible for
> each node. Currently, we have a round-robin service on 8080 which would
> show a different Hawtio instance on every request - this is nonsense.
>
> I can see a number of high level items:
>
> #1 a thing that aggregates JMX content - possibly multiple MBeanServers
> in the DC VM that delegate to respective MBeanServers on other hosts, so
> that a management client can pick up the info from one service
> #2 look at the existing influxdb thing and research how to automate
> the replacement of containers
> #3 from the usability perspective, there may need to be an OpenShift
> profile in the console(s) because some operations may not make sense in
> that env
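Re #1 above: for what it's worth, a very rough (untested) sketch of what an
aggregating delegate in the DC VM might do, using plain remote JMX from the
JDK. The host URLs are placeholders, and the http-remoting-jmx scheme assumes
the WildFly client libraries are on the classpath:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Untested sketch: poll each host's MBeanServer from the DC VM and expose
// (here: just print) the combined view, so a management client only has to
// talk to one endpoint. Host names/ports are placeholders.
public class JmxAggregatorSketch {

    public static void main(String[] args) throws Exception {
        String[] hosts = {
            "service:jmx:http-remoting-jmx://wildfly-host-1:9990",
            "service:jmx:http-remoting-jmx://wildfly-host-2:9990",
            "service:jmx:http-remoting-jmx://wildfly-host-3:9990"
        };
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        for (String url : hosts) {
            JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url));
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                Object heap = mbsc.getAttribute(memory, "HeapMemoryUsage");
                System.out.println(url + " -> " + heap);
            } finally {
                connector.close();
            }
        }
    }
}

The real thing would register delegating MBeans rather than print, but the
plumbing is already in the JDK.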
>
> cheers
> —thomas
>
> PS: looking forward to an exciting ride in 2015
>
>
>> On 5 Dec 2014, at 14:36, Thomas Diesler <tdiesler at redhat.com
>> <mailto:tdiesler at redhat.com>> wrote:
>>
>> Folks,
>>
>> I’ve recently been looking at WildFly container deployments on
>> OpenShift V3. The following setup is documented here
>> <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md>
>>
>>     <example-rest-design.png>
>>
>>     The example architecture consists of a set of three highly available
>>     (HA) servers running REST endpoints.
>>     For server replication and failover we use Kubernetes. Each server
>>     runs in a dedicated Pod that we access via Services.
>>
>> This approach comes with a number of benefits, which are sufficiently
>> explained in various OpenShift
>> <https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/>,
>> Kubernetes
>> <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md> and
>> Docker <https://docs.docker.com/> materials, but also with a number of
>> challenges. Let's look at those in more detail …
>>
>> In the example above, Kubernetes replicates a number of standalone
>> containers and isolates each of them in a Pod with limited access from
>> the outside world.
>>
>> * The management interfaces are not accessible
>> * The management consoles are not visible
>>
>> With WildFly-Camel we have a Hawt.io
>> <http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html> console
>> that allows us to manage Camel Routes configured or deployed to the
>> WildFly runtime.
>> The WildFly console manages aspects of the appserver.
>>
>> In a more general sense, I was wondering how the WildFly domain model
>> maps to the Kubernetes runtime environment, how these server instances
>> are managed, and how information about them is relayed back to the
>> sysadmin.
>>
>> a) Should these individual wildfly instances somehow be connected to
>> each other (i.e. notion of domain)?
>> b) How would an HA singleton service work?
>> c) What level of management should be exposed to the outside?
>> d) Should it be possible to modify runtime behaviour of these servers
>> (i.e. write access to config)?
>> e) Should deployment be supported at all?
>> f) How can a server that has gone bad be detected?
>> g) Should logs be aggregated?
>> h) Should there be a common management view (i.e. console) for these
>> servers?
>> i) etc …
>>
>> Are these concerns already being addressed for WildFly?
>>
>> Is there perhaps even an already existing design that I could look at?
>>
>> Can such an effort be connected to the work that is going on in Fabric8?
>>
>> cheers
>> —thomas
>>
>> PS: it would be an area that we @ wildfly-camel would be interested to
>> work on
>> _______________________________________________
>> wildfly-dev mailing list
>> wildfly-dev at lists.jboss.org <mailto:wildfly-dev at lists.jboss.org>
>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>


-- 
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat

