[Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env
Thomas Heute
theute at redhat.com
Tue Jul 5 02:17:58 EDT 2016
cattle vs pet monitoring is something I struggle with TBH...
It doesn't make much sense to keep all data about every member of the
cattle, as you are less interested in the performance of a single member
and more in the overall performance.
With auto-scaling, new containers are created and removed. You add one, you
remove it, you re-add one; there is no continuity, unlike when you restart
a server... A configuration change is no longer a continuation either: it's
a whole new image, a whole new container (in good practice at least).
IMO we should keep thinking about those cases, and think more in terms of
the collection for the cases where middleware is running in (immutable)
containers...
Thomas
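
As a rough illustration of 'thinking in terms of the collection', per-container
datapoints could be rolled up by application before anything is stored or
alerted on. A minimal sketch; the Datapoint type, its fields and the app label
are hypothetical, not an existing Hawkular API:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectionRollup {

    record Datapoint(String appLabel, String containerId, long timestamp, double value) {}

    // Average a metric per application, ignoring which individual container
    // (cattle member) reported it.
    static Map<String, Double> averagePerApp(List<Datapoint> datapoints) {
        return datapoints.stream()
                .collect(Collectors.groupingBy(
                        Datapoint::appLabel,
                        Collectors.averagingDouble(Datapoint::value)));
    }
}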
On Sun, Jul 3, 2016 at 11:44 AM, Heiko W.Rupp <hrupp at redhat.com> wrote:
> Hey,
>
> [ CC to Federico as he may have some ideas from the Kube/OS side ]
>
> Our QE has opened an interesting case:
>
> https://github.com/ManageIQ/manageiq/issues/9556
>
> where at first I thought 'WTF' because of the title.
>
> But then, reading further, it got more interesting.
> Basically what happens is that, especially in environments like
> Kube/Openshift, individual containers/appservers are Cattle and not
> Pets: one goes down, gets killed, you start a new one somewhere else.
>
> Now the interesting questions for us are (first purely on the Hawkular
> side):
> - how can we detect that such a container is down and will never come
> back up with that id again (-> we need to clean it up in inventory;
> see the first sketch after this list)
> - can we learn that for a killed container A, a freshly started
> container A' is the replacement, to e.g. continue with performance
> monitoring of the app or to re-associate relationships with other
> items in inventory?
> (Is that even something we want - again, that is Cattle and not Pets
> anymore)
> - Could the EAP + embedded agent perhaps store some token in Kube which
> is then passed when A' is started, so that A' knows it is the new A
> (e.g. the feed id)? See the second sketch after this list.
> - I guess that would not make much sense anyway, as for an app with
> three app servers, all would get that same token.
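
One possible way to answer the first question is to ask the Kubernetes API
whether a pod for that feed still exists. A minimal sketch, assuming the
fabric8 kubernetes-client and a feed-id-to-pod-name/namespace mapping (that
mapping is an assumption, not something inventory provides):

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class PodLivenessCheck {

    // Heuristic: if no pod with this name exists any more, treat the
    // container as gone for good and remove its inventory entry.
    static boolean isGoneForGood(String namespace, String podName) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            Pod pod = client.pods().inNamespace(namespace).withName(podName).get();
            return pod == null;
        }
    }
}

Note that a pod managed by a replication controller normally comes back under
a new name, so "same name missing" is only a rough stand-in for "will never
come up with that id again".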
>
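
The token idea from the third bullet would boil down to the container being
handed the feed id from the outside, e.g. as an environment variable on the
deployment or via a downward-API file; every replica would then see the same
value, which is exactly the limitation raised above. A minimal sketch, where
HAWKULAR_FEED_TOKEN and the /etc/podinfo path are hypothetical names:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FeedToken {

    // Resolve the externally provided feed token, if any.
    static String resolveFeedToken() throws IOException {
        String fromEnv = System.getenv("HAWKULAR_FEED_TOKEN");
        if (fromEnv != null) {
            return fromEnv;
        }
        // Fallback: a downward-API volume could project a pod annotation here.
        Path annotationFile = Path.of("/etc/podinfo/feed-token");
        return Files.exists(annotationFile)
                ? Files.readString(annotationFile).trim()
                : null;
    }
}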
> Perhaps we should ignore that use case completely for now and tackle
> it differently, in the sense that we don't care about 'real' app
> servers, but rather introduce the concept of a 'virtual' server where
> we only know via Kube that it exists and how many of them there are
> for a certain application (which is identified via some tag in Kube).
> Those virtual servers deliver data, but we don't really try to do
> anything with them 'personally', only indirectly via Kube interactions
> (i.e. map the incoming data to the app and not to an individual
> server). We would also not store the individual server in inventory,
> so there is no need to clean it up (again, no Pet but Cattle).
> In fact we could just use the feed-id as the Kube token (or vice
> versa).
> We still need a way to detect that one of those cattle app servers is
> on Kube, and possibly either disable or re-route some of the lifecycle
> events onto Kubernetes (start in any case; stop probably does not
> matter, whether the container dies because the appserver inside stops
> or because Kube just kills it).
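
As a rough illustration of the 'virtual server' direction: the agent could
detect that it runs on Kube and resolve the application its pod belongs to,
so data is reported against the app instead of a per-server inventory entry.
A minimal sketch, again assuming the fabric8 kubernetes-client; the "app"
label key and the use of HOSTNAME as the pod name are common conventions,
not guarantees:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import java.util.Optional;

public class AppIdentity {

    // KUBERNETES_SERVICE_HOST is injected into every pod, so its presence is
    // a cheap way to tell that this app server is one of the cattle on Kube.
    static boolean runningOnKubernetes() {
        return System.getenv("KUBERNETES_SERVICE_HOST") != null;
    }

    // Look up the "app" label of our own pod; incoming data would then be
    // associated with this app identity rather than the individual server.
    static Optional<String> appLabel(String namespace) {
        String podName = System.getenv("HOSTNAME"); // defaults to the pod name
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            Pod pod = client.pods().inNamespace(namespace).withName(podName).get();
            return Optional.ofNullable(pod)
                    .map(p -> p.getMetadata().getLabels())
                    .map(labels -> labels.get("app"));
        }
    }
}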