[wildfly-dev] WildFly domain on OpenShift Origin

Thomas Diesler tdiesler at redhat.com
Wed Dec 17 05:20:28 EST 2014


/reducing the cc noise

Yes, I was hoping to hear that this has already been thought about. 

Is there a design document for this JMX aggregation? 
What are the possible target environments and functional requirements? 
Would this be reusable in a plain WildFly domain?

cheers
—thomas

> On 17 Dec 2014, at 10:35, Rob Davies <rdavies at redhat.com> wrote:
> 
> Hi Thomas,
> 
> it would be great to see this as an example quickstart in fabric8 - then you could pick up the jmx aggregation etc for free :)
> 
>> On 17 December 2014 at 09:28, Thomas Diesler <tdiesler at redhat.com> wrote:
>> Folks, 
>> 
>> following up on this topic, I worked a little more on WildFly-Camel in Kubernetes/OpenShift. 
>> 
>> These doc pages are targeted at the upcoming 2.1.0 release (01-Feb-2015):
>> 
>> WildFly-Camel on Docker <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docker.md>
>> WildFly-Camel on OpenShift <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/openshift.md>
>> 
>> The setup looks like this:
>> 
>> 
>> 
>> We can now manage these individual WildFly nodes. The domain controller (DC) is replicated once, the host definition is replicated three times. 
>> Theoretically, this means that there is no longer a single point of failure with the domain controller - kube would respawn the DC on failure.
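>> 
>> To sanity-check that setup, here is a minimal sketch using the WildFly native management API - the host name is a placeholder and it assumes the DC management port (9990) is reachable from the client. It simply lists the hosts that have registered with the (replicated) DC:
>> 
>> import org.jboss.as.controller.client.ModelControllerClient;
>> import org.jboss.dmr.ModelNode;
>> 
>> public class ListDomainHosts {
>>     public static void main(String[] args) throws Exception {
>>         // connect to the replicated domain controller (address is an assumption)
>>         ModelControllerClient client = ModelControllerClient.Factory.create("dc.example.local", 9990);
>>         try {
>>             // :read-children-names(child-type=host) against the domain root
>>             ModelNode op = new ModelNode();
>>             op.get("operation").set("read-children-names");
>>             op.get("child-type").set("host");
>>             ModelNode result = client.execute(op);
>>             System.out.println(result.get("result")); // expect one entry per replicated host definition
>>         } finally {
>>             client.close();
>>         }
>>     }
>> }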
>> 
>> Here are some ideas for improvement …
>> 
>> In a kube env we should be able to swap out containers based on some criteria. It should be possible to define these criteria, emit events based on them, and create/remove/replace containers automatically. 
>> Additionally, a human should be able to make qualified decisions through a console and create/remove/replace containers easily.
>> Much of the needed information is in JMX. Heiko told me that there is a project that can push events to InfluxDB - something to look at.
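>> 
>> As a first cut, something along these lines could run next to each node: poll a JMX metric and push it to an InfluxDB HTTP write endpoint. This is only a sketch of the idea, not the project Heiko mentioned - the endpoint URL, database, measurement name and host tag are all assumptions:
>> 
>> import java.io.OutputStream;
>> import java.lang.management.ManagementFactory;
>> import java.net.HttpURLConnection;
>> import java.net.URL;
>> import javax.management.MBeanServer;
>> import javax.management.ObjectName;
>> import javax.management.openmbean.CompositeData;
>> 
>> public class JmxToInflux {
>>     public static void main(String[] args) throws Exception {
>>         // read the current heap usage from the local MBeanServer
>>         MBeanServer server = ManagementFactory.getPlatformMBeanServer();
>>         CompositeData heap = (CompositeData) server.getAttribute(
>>                 new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
>>         long used = (Long) heap.get("used");
>> 
>>         // hypothetical InfluxDB endpoint - adjust host/db to the actual setup
>>         URL url = new URL("http://influxdb.local:8086/write?db=wildfly");
>>         HttpURLConnection con = (HttpURLConnection) url.openConnection();
>>         con.setRequestMethod("POST");
>>         con.setDoOutput(true);
>>         String line = "heap_used,host=node1 value=" + used;
>>         try (OutputStream out = con.getOutputStream()) {
>>             out.write(line.getBytes("UTF-8"));
>>         }
>>         System.out.println("InfluxDB responded: " + con.getResponseCode());
>>     }
>> }
>> 
>> Once such metrics are collected in one place, the replace/create decisions above could be automated by acting on queries against that store.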
>> 
>> If displaying the information contained in JMX in a console (e.g. in hawtio) has value, that information must be aggregated and made visible for each node. 
>> Currently, we have a round-robin service on port 8080, which shows a different hawtio instance on every request - this is nonsense.
>> 
>> I can see a number of high-level items: 
>> 
>> #1 a thing that aggregates JMX content - possibly multiple MBeanServers in the DC VM that delegate to the respective MBeanServers on the other hosts, so that a management client can pick up the info from one service (see the sketch below)
>> #2 look at the existing InfluxDB thing and research how to automate the replacement of containers
>> #3 from the usability perspective, there may need to be an OpenShift profile in the console(s), because some operations may not make sense in that env
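>> 
>> For #1, a minimal sketch of the client side could look like the following, assuming each WildFly node exposes its MBeanServer over the standard http-remoting-jmx endpoint, the WildFly client libraries are on the classpath, and the node addresses below (placeholders) are resolvable from the DC pod:
>> 
>> import java.util.Arrays;
>> import java.util.List;
>> import javax.management.MBeanServerConnection;
>> import javax.management.ObjectName;
>> import javax.management.remote.JMXConnector;
>> import javax.management.remote.JMXConnectorFactory;
>> import javax.management.remote.JMXServiceURL;
>> 
>> public class JmxAggregator {
>>     public static void main(String[] args) throws Exception {
>>         // hypothetical node addresses inside the kube cluster
>>         List<String> nodes = Arrays.asList(
>>                 "service:jmx:http-remoting-jmx://10.0.0.11:9990",
>>                 "service:jmx:http-remoting-jmx://10.0.0.12:9990",
>>                 "service:jmx:http-remoting-jmx://10.0.0.13:9990");
>> 
>>         for (String node : nodes) {
>>             // connect to the remote MBeanServer of each node and collect its Camel MBeans
>>             JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(node));
>>             try {
>>                 MBeanServerConnection connection = connector.getMBeanServerConnection();
>>                 for (ObjectName name : connection.queryNames(new ObjectName("org.apache.camel:*"), null)) {
>>                     System.out.println(node + " -> " + name);
>>                 }
>>             } finally {
>>                 connector.close();
>>             }
>>         }
>>     }
>> }
>> 
>> A real aggregator would of course re-register (or proxy) those MBeans in a single MBeanServer, so that hawtio or any other management client only needs to talk to one service.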
>> 
>> cheers
>> —thomas
>> 
>> PS: looking forward to an exciting ride in 2015
>> 
>> 
>> On 5 December 2014 at 13:36, Thomas Diesler <tdiesler at redhat.com> wrote:
>> Folks,
>> 
>> I’ve recently been looking at WildFly container deployments on OpenShift V3. The following setup is documented here <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md>
>> 
>> This approach comes with a number of benefits, which are sufficiently explained in various OpenShift <https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/>, Kubernetes <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md> and Docker <https://docs.docker.com/> materials, but also with a number of challenges. Let's look at those in more detail …
>> 
>> In the example above, Kubernetes replicates a number of standalone containers and isolates each of them in a Pod with limited access from the outside world. 
>> 
>> * The management interfaces are not accessible 
>> * The management consoles are not visible
>> 
>> With WildFly-Camel we have a Hawt.io <http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html> console that allows us to manage Camel Routes configured or deployed to the WildFly runtime. 
>> The WildFly console manages aspects of the appserver.
>> 
>> In a more general sense, I was wondering how the WildFly domain model maps to the Kubernetes runtime environment, how these server instances are managed, and how information about them is relayed back to the sysadmin.
>> 
>> a) Should these individual wildfly instances somehow be connected to each other (i.e. notion of domain)?
>> b) How would an HA singleton service work?
>> c) What level of management should be exposed to the outside?
>> d) Should it be possible to modify runtime behaviour of these servers (i.e. write access to config)?
>> e) Should deployment be supported at all?
>> f) How can a server be detected that has gone bad?
>> g) Should logs be aggregated?
>> h) Should there be a common management view (i.e. console) for these servers?
>> i) etc …
>> 
>> Are these concerns already being addressed for WildFly? 
>> 
>> Is there perhaps even an already existing design that I could look at?
>> 
>> Can such an effort be connected to the work that is going on in Fabric8? 
>> 
>> cheers
>> —thomas
>> 
>> PS: this would be an area that we @ wildfly-camel would be interested to work on
>>  
