Re-using the state would mean there is always at most one running instance of
that server at a given time, but what happens if it is scaled up to two
instances or more? Is that a possible scenario? If so, using
dc + version + tenant as the feed id seems complicated.
I filed an issue in JIRA about introducing some kind of TTL:
https://issues.jboss.org/browse/HWKINVENT-205
Maybe those expired feeds fall in the scope of this issue.
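To make the TTL idea concrete, here is a minimal sketch, assuming a feed store that records when each feed last synced and periodically sweeps out feeds that have gone quiet. All names here (FeedRegistry, touch, sweep) are hypothetical illustrations, not Inventory's actual API:

```python
import time


class FeedRegistry:
    """Hypothetical feed store with a time-to-live on inactive feeds."""

    def __init__(self, ttl_seconds):
        self.ttl_seconds = ttl_seconds
        self._last_seen = {}  # feed_id -> timestamp of last sync/heartbeat

    def touch(self, feed_id, now=None):
        """Record that a feed just synced or reported in."""
        self._last_seen[feed_id] = time.time() if now is None else now

    def sweep(self, now=None):
        """Remove feeds that have not reported within the TTL; return them."""
        now = time.time() if now is None else now
        expired = [f for f, t in self._last_seen.items()
                   if now - t > self.ttl_seconds]
        for f in expired:
            del self._last_seen[f]
        return expired


registry = FeedRegistry(ttl_seconds=3600)
registry.touch("wildfly-pod-abc123", now=0)
registry.touch("wildfly-pod-def456", now=3000)
print(registry.sweep(now=4000))  # → ['wildfly-pod-abc123']
```

A pod that dies simply stops touching its feed, and its inventory entry expires on its own after the TTL, with no need to listen for k8s events at all.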
On Tue, Jan 24, 2017 at 10:21 AM, Heiko W.Rupp <hrupp(a)redhat.com> wrote:
Hey,
when WildFly connects to Inventory for the first time, we sync
the EAP information into inventory, which also includes the information
about which metrics are available.
Now when WildFly is deployed into Kubernetes or OpenShift, we
will see that WildFly starts, syncs, and then dies at some point
in time; k8s will not re-use the existing pod, but will start
a new one, which will have a different FeedId.
This will leave a WildFly in Inventory that is later detected as
down in Metrics/Alerting, but the entry in Inventory will stay
forever. The consequences are:
- Inventory will fill up with no-longer-needed information
- Clients will retrieve data about servers that are no longer usable
We need to figure out how to deal with this situation, for example:
- have inventory listen on k8s events and remove the server
when k8s removes it (not when it is going down; stopped pods
can stay around for some time)
- Have a different way of creating the feed id / server id so that
the state is "re-used". The feedId/server name could, for example,
be the name of the deployment config + version + k8s tenant.
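The second option amounts to deriving the feed id from attributes that survive pod restarts, so a replacement pod maps onto the same inventory entry. A sketch of that idea (the function name, field order, and separator are assumptions for illustration, not Inventory's actual scheme):

```python
def stable_feed_id(deployment_config, version, tenant):
    """Build a feed id from attributes that survive pod restarts,
    so a replacement pod maps to the same inventory entry."""
    return "{}/{}/{}".format(tenant, deployment_config, version)


# Two pods of the same deployment produce the same feed id...
a = stable_feed_id("eap-app", "7.0.3", "myproject")
b = stable_feed_id("eap-app", "7.0.3", "myproject")
assert a == b

# ...which is exactly the scaling concern raised above: with
# replicas > 1, two live servers would collide on one feed id.
```

Note that this only re-uses state cleanly while the deployment runs a single replica; with more than one, concurrent pods would share one feed id.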
Thoughts?
_______________________________________________
hawkular-dev mailing list
hawkular-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev