[Hawkular-dev] Garbage collection of outdated Servers/Metrics - especially with (orchestrated) containers

Heiko W. Rupp hrupp at redhat.com
Thu Mar 9 07:13:33 EST 2017


On 9 Mar 2017, at 11:35, Lukas Krejci wrote:
>> The question is now: how long do we want/need to keep them?
>
> Isn't the question rather "how do we associate them with the  
> application(s)"?

Yes. And I think labels are the way to go here.
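
To illustrate what I mean (just a sketch with made-up names and types, 
not the actual Hawkular Metrics API): the time series itself can stay 
per-container, while the association with the application lives in 
tags/labels that survive container churn:

    // Illustrative only -- made-up types, not the Hawkular Metrics API.
    // One gauge per container instance; the application association is a tag.
    import java.util.Map;

    class TaggedMetric {
        final String id;                 // one time series per container instance
        final Map<String, String> tags;  // stable labels users query by

        TaggedMetric(String id, Map<String, String> tags) {
            this.id = id;
            this.tags = tags;
        }
    }

    class LabelExample {
        public static void main(String[] args) {
            TaggedMetric cpu = new TaggedMetric(
                "cpu_usage/3f2a9c",                  // hypothetical id; the container part is incidental
                Map.of("app", "myshop",              // the label users actually query by
                       "deployment", "myshop-frontend",
                       "container_id", "3f2a9c"));   // kept as a tag for drill-down / post mortem
            System.out.println(cpu.id + " " + cpu.tags);
        }
    }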

> Because if we want to track e.g. CPU load generated by an application
> in a container - isn't that something users would want to look at the
> history of? IMHO, using the ephemeral "container id" as (part of)
> metric ids is the wrong thing to do, because really the user isn't
> interested in the container itself, but in the applications that are
> running in it and their consumption of the container's resources.

I think one needs to differentiate between two time ranges here:
- the container is active or has just died
- the historic view of the application
In both time ranges it is of course important to know how the overall
application behaves.

While the container is active, one wants to know not only how the 
application behaves, but also how each individual container does. If 
there is only one container running (scale=1), this is less of an 
issue, as application = container.
But once you scale the application up, this is no longer true, and 
individual containers may behave differently (e.g. the memory leak I 
described does not always show up), so one may want the per-container 
data to act accordingly, either live or in a post mortem.
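
A rough sketch of the kind of drill-down I mean (made-up data and 
types, no real Hawkular calls): as long as every per-container series 
carries the same application label plus its own container id tag, one 
can still single out the misbehaving container, live or post mortem:

    // Illustrative only -- made-up data, not the Hawkular Metrics API.
    // Per-container series share an "app" tag; a suspicious container can
    // still be inspected on its own by filtering on its container_id tag.
    import java.util.List;
    import java.util.Map;

    class DrillDown {
        record Series(String id, Map<String, String> tags, List<Double> memoryMb) {}

        public static void main(String[] args) {
            List<Series> series = List.of(
                new Series("mem/3f2a9c", Map.of("app", "myshop", "container_id", "3f2a9c"),
                           List.of(200.0, 210.0, 400.0, 800.0)),   // the "leaky" one
                new Series("mem/77b1d0", Map.of("app", "myshop", "container_id", "77b1d0"),
                           List.of(205.0, 207.0, 206.0, 208.0)));

            // Per-container view: only the series of the suspicious container.
            series.stream()
                  .filter(s -> "3f2a9c".equals(s.tags().get("container_id")))
                  .forEach(s -> System.out.println(s.id() + " -> " + s.memoryMb()));
        }
    }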

When aggregation kicks in, the difference between containers becomes 
moot, and just looking at the data by label is fine.
I think for most (all?) aggregations it will not matter whether the 
data is reported to a single time series for all parallel running 
containers or to one time series per container.
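
Again only a sketch with made-up numbers: aggregating per timestamp 
across everything that carries the same application label gives the 
same result whether the points sit in one shared series or in one 
series per container:

    // Illustrative only -- made-up data. Aggregating by the "app" label:
    // summing the per-container values per timestamp yields the same
    // result as if all containers had reported into one shared series.
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.IntStream;

    class Aggregate {
        public static void main(String[] args) {
            // Two parallel containers of the same app, CPU load sampled at the same timestamps.
            List<List<Double>> perContainer = List.of(
                List.of(0.25, 0.50, 0.25),
                List.of(0.25, 0.25, 0.50));

            double[] appLoad = IntStream.range(0, 3)
                .mapToDouble(t -> perContainer.stream().mapToDouble(s -> s.get(t)).sum())
                .toArray();

            System.out.println(Arrays.toString(appLoad));  // [0.5, 0.75, 0.75]
        }
    }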

