Is there some mechanism by which the agent can know if it (or the EAP server it is
managing) is running inside a container? I'm thinking of something analogous to
/etc/machine-id - perhaps when running in a container, Kube sets some environment
variable, file system token, something?
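For what it's worth, there are a few well-known signals that could be combined
into a best-effort check: Kube injects the KUBERNETES_SERVICE_HOST/PORT
environment variables into every pod and (by default) mounts the service
account under /var/run/secrets/kubernetes.io/serviceaccount, Docker drops a
/.dockerenv marker file, and on cgroup v1 /proc/1/cgroup mentions
docker/kubepods. A rough Java sketch, heuristics only - none of these is a
guaranteed contract:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ContainerDetector {

        /** Best-effort check for "are we inside a Kubernetes pod?". */
        public static boolean insideKubernetes() {
            // Kube injects these env vars into every pod by default.
            return System.getenv("KUBERNETES_SERVICE_HOST") != null
                    // The service account is mounted here unless disabled.
                    || Files.isDirectory(
                            Paths.get("/var/run/secrets/kubernetes.io/serviceaccount"));
        }

        /** Best-effort check for "are we inside any container at all?". */
        public static boolean insideContainer() {
            try {
                // Docker creates this marker file in every container.
                if (Files.exists(Paths.get("/.dockerenv"))) {
                    return true;
                }
                // On cgroup v1, a containerized PID 1 shows docker/kubepods here.
                for (String line : Files.readAllLines(Paths.get("/proc/1/cgroup"))) {
                    if (line.contains("docker") || line.contains("kubepods")) {
                        return true;
                    }
                }
            } catch (Exception e) {
                // Not on Linux or /proc not readable - fall through.
            }
            return insideKubernetes();
        }
    }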
If there is a way to know, then the agent can just write some resource config property
somewhere to say "this 'thing' is running in a container." So when that
"thing" goes down, the server-side can be notified and do special things (clean
"thing" goes down, the server-side can be notified and do special things (clean
up inventory? send an alert?).
----- Original Message -----
Hey,
[ CC to Federico as he may have some ideas from the Kube/OS side ]
Our QE has opened an interesting case:
https://github.com/ManageIQ/manageiq/issues/9556
where the title at first made me think "WTF".
But then when reading further it got more interesting.
Basically what happens, especially in environments like Kube/OpenShift, is that
individual containers/appservers are Cattle and not Pets: one goes down or gets
killed, and you start a new one somewhere else.
Now the interesting questions for us are (first purely on the Hawkular side):
- how can we detect that such a container is down and will never come up
with that id again (-> we need to clean it up in inventory)? See the watch
sketch after this list.
- can we learn that for a killed container A, a freshly started container A'
is the replacement, to e.g. continue with performance monitoring of the app
or to re-associate relationships with other items in inventory?
(Is that even something we want - again, that is Cattle and not Pets anymore.)
- could the EAP + embedded agent perhaps store some token in Kube which
is then passed when A' is started, so that A' knows it is the new A (e.g.
the feed id)?
- I guess that would not make much sense anyway, as for an app with
three app servers all of them would get that same token.
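For the first bullet, one option would be to watch the pods through the
Kubernetes API and treat a DELETED event as "this id will never come back",
which is the moment the corresponding feed could be purged from inventory.
A rough sketch with the fabric8 kubernetes-client (namespace and wiring are
assumptions, and the exact Watcher signature differs between client versions):

    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientException;
    import io.fabric8.kubernetes.client.Watcher;

    public class PodLifecycleWatch {
        public static void main(String[] args) throws Exception {
            // Picks up the in-cluster service account or the local kubeconfig.
            KubernetesClient client = new DefaultKubernetesClient();
            client.pods().inNamespace("myproject").watch(new Watcher<Pod>() {
                @Override
                public void eventReceived(Action action, Pod pod) {
                    if (action == Action.DELETED) {
                        // The pod name/uid is gone for good; a replacement pod
                        // gets a new name and uid, so this is where the old
                        // feed could be cleaned up.
                        System.out.println("Pod gone, clean up feed for: "
                                + pod.getMetadata().getName());
                    }
                }
                @Override
                public void onClose(KubernetesClientException cause) {
                    System.out.println("Watch closed: " + cause);
                }
            });
            Thread.currentThread().join(); // keep the watch alive
        }
    }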
Perhaps we should ignore that use case completely for now and tackle it
differently, in the sense that we don't care about 'real' app servers,
but rather introduce the concept of a 'virtual' server, where we only know
via Kube that it exists and how many of them there are for a certain
application (which is identified via some tag in Kube). Those virtual servers
deliver data, but we don't really try to do anything with them 'personally',
only indirectly via Kube interactions (i.e. map the incoming data to the
app and not to an individual server). We would also not store the individual
server in inventory, so there is no need to clean it up (again, no Pet but
Cattle).
In fact we could just use the feed-id as the kube token (or vice versa).
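Mapping the incoming data to the app (and counting its "virtual servers")
could then be done purely via a label selector. A small sketch, again with the
fabric8 client - the "app" label key and the "ticket-monster" name are just
placeholders for whatever tag we agree on:

    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.api.model.PodList;
    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class VirtualServerCount {
        public static void main(String[] args) {
            KubernetesClient client = new DefaultKubernetesClient();
            // The label is the "tag in Kube" that identifies the application.
            PodList pods = client.pods().inNamespace("myproject")
                    .withLabel("app", "ticket-monster").list();
            System.out.println("ticket-monster currently runs in "
                    + pods.getItems().size() + " pod(s)");
            for (Pod p : pods.getItems()) {
                // Metrics from any of these pods would be attributed to the
                // application, not to an individual (unstored) virtual server.
                System.out.println(" - " + p.getMetadata().getName());
            }
        }
    }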
We still need a way to detect that one of those cattle app servers is on Kube,
and then possibly either disable or re-route some of the lifecycle events
onto Kubernetes (start in any case; stop probably does not matter, whether
the container dies because the appserver inside stops or because Kube just
kills it).
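As one possible shape of that re-routing: if the agent knows it runs as cattle
on Kube, a "restart"/"stop" operation could be translated into deleting its own
pod and letting the replication controller start a fresh one, instead of
stopping the appserver in place. Purely a sketch, not an existing agent
feature:

    import io.fabric8.kubernetes.client.DefaultKubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClient;

    public class KubeLifecycleRouter {
        /** Hypothetical: re-route a "restart" onto Kubernetes. */
        public static void restartViaKube(String namespace) throws Exception {
            try (KubernetesClient client = new DefaultKubernetesClient()) {
                // Inside a pod the hostname defaults to the pod name, so the
                // agent can address its own pod without extra configuration.
                String podName = System.getenv("HOSTNAME");
                client.pods().inNamespace(namespace).withName(podName).delete();
            }
        }
    }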