On 18.12.2014 at 09:26, Thomas Diesler <tdiesler(a)redhat.com> wrote:

Let's start with requirements and a design that everybody who has a stake in this can
agree on - I'll get a doc started.
> On 18 Dec 2014, at 09:18, James Strachan <jstracha(a)redhat.com> wrote:
>
> If the EAP console is available as a Kubernetes Service we can easily add it to the
> hawtio nav bar like we do with Kibana, Grafana et al.
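>
> As a rough sketch of the discovery side: Kubernetes publishes every Service to
> containers through generated environment variables, so something like the following
> would locate the console (the "eap-console" service name is just a placeholder):
>
> // Minimal sketch: resolve a Kubernetes Service named "eap-console" (placeholder)
> // from the environment variables Kubernetes injects into each container.
> public class ConsoleServiceLookup {
>     public static void main(String[] args) {
>         String host = System.getenv("EAP_CONSOLE_SERVICE_HOST");
>         String port = System.getenv("EAP_CONSOLE_SERVICE_PORT");
>         if (host == null || port == null) {
>             System.out.println("Service 'eap-console' is not visible in this environment");
>             return;
>         }
>         System.out.println("EAP console reachable at http://" + host + ":" + port + "/");
>     }
> }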
>
>> On 17 Dec 2014, at 16:17, Thomas Diesler <tdiesler(a)redhat.com> wrote:
>>
>> Thanks James,
>>
>> I’ll look at the fabric8 hawtio console next and see if I can get it to work
>> alongside the wildfly console. Then I think I should meet with Heiko/Harald (for a
>> long walk) and we can talk about this some more.
>>
>> —thomas
>>
>>
>>> On 17 Dec 2014, at 15:59, James Strachan <jstracha(a)redhat.com> wrote:
>>>
>>> A persistent volume could be used for the pod running the DC; if the pod is
>>> restarted or if it fails over to another host the persistent volume will be preserved
>>> (using one of the shared volume mechanisms in kubernetes/openshift like
>>> Ceph/Gluster/Cinder/S3/EBS etc).
>>>
>>>> On 17 Dec 2014, at 14:42, Brian Stansberry <brian.stansberry(a)redhat.com> wrote:
>>>>
>>>>> On 12/17/14, 3:28 AM, Thomas Diesler wrote:
>>>>> Folks,
>>>>>
>>>>> following up on this topic, I worked a little more on WildFly-Camel in
>>>>> Kubernetes/OpenShift.
>>>>>
>>>>> These doc pages are targeted for the upcoming 2.1.0 release (01-Feb-2015)
>>>>>
>>>>> * WildFly-Camel on Docker
>>>>>   <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docke...>
>>>>> * WildFly-Camel on OpenShift
>>>>>   <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/opens...>
>>>>
>>>> Great. :)
>>>>
>>>>>
>>>>> The setup looks like this
>>>>>
>>>>>
>>>>> We can now manage these individual wildfly nodes. The domain controller
>>>>> (DC) is replicated once, the host definition is replicated three times.
>>>>> Theoretically, this means that there is no single point of failure with
>>>>> the domain controller any more - kube would respawn the DC on failure.
>>>>
>>>> I'm heading on PTO tomorrow so likely won't be able to follow up on this
>>>> question for a while, but one concern I had with the Kubernetes respawn approach
>>>> was retaining any changes that had been made to the domain configuration. Unless the
>>>> domain.xml comes from / is written to some shared storage available to the respawned DC,
>>>> any changes made will be lost.
>>>>
>>>> Of course, if the DC is only being used for reads, this isn't an issue.
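>>>>
>>>> As a trivial sketch of that concern: the directory the DC reads and writes its
>>>> configuration from (the standard jboss.domain.config.dir location) is exactly
>>>> what would need to sit on shared or persistent storage:
>>>>
>>>> import java.io.File;
>>>>
>>>> // Prints where domain.xml lives for this process; unless that path is backed by
>>>> // persistent storage, configuration changes die with the respawned container.
>>>> public class DomainConfigLocation {
>>>>     public static void main(String[] args) {
>>>>         String dir = System.getProperty("jboss.domain.config.dir",
>>>>                 System.getenv("JBOSS_HOME") + "/domain/configuration");
>>>>         File domainXml = new File(dir, "domain.xml");
>>>>         System.out.println("domain.xml lives at " + domainXml.getAbsolutePath()
>>>>                 + " (exists=" + domainXml.exists() + ")");
>>>>     }
>>>> }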
>>>>
>>>>> Here are some ideas for improvement …
>>>>>
>>>>> In a kube env we should be able to swap out containers based on some
>>>>> criteria. It should be possible to define these criteria, emit events
>>>>> based on them, and create/remove/replace containers automatically.
>>>>> Additionally, a human should be able to make qualified decisions through
>>>>> a console and create/remove/replace containers easily.
>>>>> Much of the needed information is in JMX. Heiko told me that there is a
>>>>> project that can push events to InfluxDB - something to look at.
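>>>>>
>>>>> As a rough sketch of such a bridge (the InfluxDB URL, database and measurement
>>>>> names below are placeholders, and the exact write API depends on the InfluxDB
>>>>> version):
>>>>>
>>>>> import java.io.OutputStream;
>>>>> import java.lang.management.ManagementFactory;
>>>>> import java.net.HttpURLConnection;
>>>>> import java.net.URL;
>>>>> import java.nio.charset.StandardCharsets;
>>>>>
>>>>> // Reads one JVM metric via JMX and pushes it to a time-series store over HTTP.
>>>>> public class JmxToInflux {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         long heapUsed = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
>>>>>         String host = System.getenv("HOSTNAME");
>>>>>         if (host == null) {
>>>>>             host = "unknown";
>>>>>         }
>>>>>
>>>>>         // InfluxDB-style line protocol: measurement,tag=value field=value
>>>>>         String line = "jvm_heap_used,host=" + host + " value=" + heapUsed;
>>>>>
>>>>>         HttpURLConnection con = (HttpURLConnection)
>>>>>                 new URL("http://influxdb:8086/write?db=metrics").openConnection();
>>>>>         con.setRequestMethod("POST");
>>>>>         con.setDoOutput(true);
>>>>>         try (OutputStream out = con.getOutputStream()) {
>>>>>             out.write(line.getBytes(StandardCharsets.UTF_8));
>>>>>         }
>>>>>         System.out.println("time-series store answered HTTP " + con.getResponseCode());
>>>>>     }
>>>>> }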
>>>>>
>>>>> If displaying the information contained in JMX in a console has value (e.g.
>>>>> in hawtio), that information must be aggregated and visible for each node.
>>>>> Currently, we have a round-robin service on 8080 which would show a
>>>>> different hawtio instance on every request - this is nonsense.
>>>>>
>>>>> I can see a number of high-level items:
>>>>>
>>>>> #1 a thing that aggregates JMX content - possibly multiple MBeanServers
>>>>> in the DC VM that delegate to respective MBeanServers on other hosts, so
>>>>> that a management client can pick up the info from one service (see the
>>>>> sketch below)
>>>>> #2 look at the existing InfluxDB thing and research how to automate
>>>>> the replacement of containers
>>>>> #3 from the usability perspective, there may need to be an OpenShift
>>>>> profile in the console(s) because some operations may not make sense in
>>>>> that env
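>>>>>
>>>>> For #1, a minimal sketch of the delegation idea (host names and the JMX URL are
>>>>> placeholders; talking to WildFly would go through its remoting-based JMX
>>>>> connector and client libraries rather than plain RMI):
>>>>>
>>>>> import java.util.Arrays;
>>>>> import java.util.List;
>>>>> import javax.management.MBeanServerConnection;
>>>>> import javax.management.ObjectName;
>>>>> import javax.management.remote.JMXConnector;
>>>>> import javax.management.remote.JMXConnectorFactory;
>>>>> import javax.management.remote.JMXServiceURL;
>>>>>
>>>>> // One process polls the MBeanServers of several hosts so that a management
>>>>> // client only has to talk to this single aggregator.
>>>>> public class JmxAggregator {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         List<String> hosts = Arrays.asList("wildfly-a", "wildfly-b", "wildfly-c");
>>>>>         ObjectName memory = new ObjectName("java.lang:type=Memory");
>>>>>         for (String host : hosts) {
>>>>>             JMXServiceURL url = new JMXServiceURL(
>>>>>                     "service:jmx:rmi:///jndi/rmi://" + host + ":9999/jmxrmi");
>>>>>             JMXConnector connector = JMXConnectorFactory.connect(url);
>>>>>             try {
>>>>>                 MBeanServerConnection mbsc = connector.getMBeanServerConnection();
>>>>>                 Object heap = mbsc.getAttribute(memory, "HeapMemoryUsage");
>>>>>                 System.out.println(host + " HeapMemoryUsage = " + heap);
>>>>>             } finally {
>>>>>                 connector.close();
>>>>>             }
>>>>>         }
>>>>>     }
>>>>> }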
>>>>>
>>>>> cheers
>>>>> —thomas
>>>>>
>>>>> PS: looking forward to an exciting ride in 2015
>>>>>
>>>>>
>>>>>> On 5 Dec 2014, at 14:36, Thomas Diesler <tdiesler(a)redhat.com> wrote:
>>>>>>
>>>>>> Folks,
>>>>>>
>>>>>> I’ve recently been looking at WildFly container deployments on
>>>>>> OpenShift V3. The following setup is documented here:
>>>>>> <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabri...>
>>>>>>
>>>>>> <example-rest-design.png>
>>>>>>
>>>>>> The example architecture consists of a set of three highly available
>>>>>> (HA) servers running REST endpoints.
>>>>>> For server replication and failover we use Kubernetes. Each server
>>>>>> runs in a dedicated Pod that we access via Services.
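>>>>>>
>>>>>> For illustration, each replica hosts nothing more exotic than a plain JAX-RS
>>>>>> resource along these lines (names and paths are placeholders):
>>>>>>
>>>>>> import javax.ws.rs.ApplicationPath;
>>>>>> import javax.ws.rs.GET;
>>>>>> import javax.ws.rs.Path;
>>>>>> import javax.ws.rs.Produces;
>>>>>> import javax.ws.rs.core.Application;
>>>>>> import javax.ws.rs.core.MediaType;
>>>>>>
>>>>>> // A stateless endpoint; the response includes the host name so a client can
>>>>>> // see which replica answered a given request.
>>>>>> @Path("/greeting")
>>>>>> public class GreetingEndpoint {
>>>>>>
>>>>>>     @GET
>>>>>>     @Produces(MediaType.TEXT_PLAIN)
>>>>>>     public String greeting() {
>>>>>>         String host = System.getenv("HOSTNAME");
>>>>>>         return "Hello from " + (host != null ? host : "unknown host");
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>> // Activates JAX-RS under /rest without any web.xml configuration.
>>>>>> @ApplicationPath("/rest")
>>>>>> class RestApplication extends Application {
>>>>>> }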
>>>>>>
>>>>>> This approach comes with a number of benefits, which are sufficiently
>>>>>> explained in various OpenShift
>>>>>> <https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernet...>,
>>>>>> Kubernetes
>>>>>> <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md> and
>>>>>> Docker <https://docs.docker.com/> materials, but also with a number of
>>>>>> challenges. Let's look at those in more detail …
>>>>>>
>>>>>> In the example above Kubernetes replicates a number of standalone
>>>>>> containers and isolates them in a Pod each with limited access from
>>>>>> the outside world.
>>>>>>
>>>>>> * The management interfaces are not accessible
>>>>>> * The management consoles are not visible
>>>>>>
>>>>>> With WildFly-Camel we have a Hawt.io
>>>>>> <http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html> console
>>>>>> that allows us to manage Camel Routes configured or deployed to the
>>>>>> WildFly runtime.
>>>>>> The WildFly console manages aspects of the appserver.
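>>>>>>
>>>>>> For illustration, the kind of route that console displays can be as simple as
>>>>>> this generic Camel route (not tied to the example setup):
>>>>>>
>>>>>> import org.apache.camel.builder.RouteBuilder;
>>>>>>
>>>>>> // Fires a timer every five seconds and logs a greeting; how the class is
>>>>>> // packaged and deployed is covered in the WildFly-Camel documentation.
>>>>>> public class HelloRouteBuilder extends RouteBuilder {
>>>>>>     @Override
>>>>>>     public void configure() throws Exception {
>>>>>>         from("timer://hello?period=5000")
>>>>>>             .transform(simple("Hello from ${routeId}"))
>>>>>>             .to("log:hello");
>>>>>>     }
>>>>>> }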
>>>>>>
>>>>>> In a more general sense, I was wondering how the WildFly domain model
>>>>>> maps to the Kubernetes runtime environment, and how these server
>>>>>> instances are managed and information about them relayed back to the
>>>>>> sysadmin.
>>>>>>
>>>>>> a) Should these individual wildfly instances somehow be connected to
>>>>>> each other (i.e. notion of domain)?
>>>>>> b) How would an HA singleton service work?
>>>>>> c) What level of management should be exposed to the outside?
>>>>>> d) Should it be possible to modify runtime behaviour of these servers
>>>>>> (i.e. write access to config)?
>>>>>> e) Should deployment be supported at all?
>>>>>> f) How can a server that has gone bad be detected?
>>>>>> g) Should logs be aggregated?
>>>>>> h) Should there be a common management view (i.e. console) for these
>>>>>> servers?
>>>>>> i) etc …
>>>>>>
>>>>>> Are these concerns already being addressed for WildFly?
>>>>>>
>>>>>> Is there perhaps even an already existing design that I could look at?
>>>>>>
>>>>>> Can such an effort be connected to the work that is going on in Fabric8?
>>>>>>
>>>>>> cheers
>>>>>> —thomas
>>>>>>
>>>>>> PS: this would be an area that we @ wildfly-camel would be interested to work on
>>>>>> _______________________________________________
>>>>>> wildfly-dev mailing list
>>>>>> wildfly-dev(a)lists.jboss.org
>>>>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>>
>>>>
>>>> --
>>>> Brian Stansberry
>>>> Senior Principal Software Engineer
>>>> JBoss by Red Hat
>>>
>>>
>>> James
>>> -------
>>> Red Hat
>>>
>>> Twitter: @jstrachan
>>> Email: jstracha(a)redhat.com
>>> Blog: http://macstrac.blogspot.com/
>>>
>>> hawtio: http://hawt.io/
>>> fabric8: http://fabric8.io/
>>>
>>> Open Source Integration
>
>
> James
> -------
> Red Hat
>
> Twitter: @jstrachan
> Email: jstracha(a)redhat.com
> Blog: http://macstrac.blogspot.com/
>
> hawtio: http://hawt.io/
> fabric8: http://fabric8.io/
>
> Open Source Integration