----- Original Message -----
From: "John Mazzitelli" <mazz(a)redhat.com>
To: "Discussions around Hawkular development"
<hawkular-dev(a)lists.jboss.org>
Sent: Friday, April 8, 2016 3:38:42 PM
Subject: [Hawkular-dev] hawkular-agent, prometheus, openshift
I'm trying to figure out how the Hawkular WildFly Agent needs to be enhanced
to collect metrics from Prometheus (which is where a lot of OpenShift
metrics are going).
Here is how I originally understood the problem (which may be completely
wrong):
I am looking at this:
https://prometheus.io/docs/querying/api/
So if OpenShift components are storing metrics in Prometheus, the Agent
would need to query the data via something like:
http://localhost:9090/api/v1/query?query=my_metric_name_seconds{label_one...
The agent can take the data it gets and store it in Hawkular Metrics (using a
different metric name and/or labels if we want).
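The query flow described above could be sketched roughly as follows, assuming the standard response shape of Prometheus's instant-query API (the write to Hawkular Metrics is left out, and the base URL is illustrative):

```python
# Sketch only: query Prometheus's /api/v1/query endpoint and pull out samples.
import json
import urllib.parse
import urllib.request


def fetch_instant_query(base_url, promql):
    """Run an instant query (e.g. base_url="http://localhost:9090")."""
    url = base_url + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        return parse_instant_query(json.load(resp))


def parse_instant_query(body):
    """Extract (labels, timestamp, value) triples from a query response.

    An instant-query response looks like:
      {"status": "success",
       "data": {"resultType": "vector",
                "result": [{"metric": {...label set...},
                            "value": [<unix_ts>, "<value as string>"]}]}}
    """
    if body.get("status") != "success":
        raise RuntimeError("query failed: %r" % body)
    samples = []
    for item in body["data"]["result"]:
        ts, value = item["value"]
        # Prometheus returns sample values as strings; convert to float.
        samples.append((item["metric"], ts, float(value)))
    return samples
```

The agent would then iterate over the returned label sets and values and push them into Hawkular Metrics, renaming or relabeling as desired.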
I am hoping Matt W. can clarify, and if this is completely wrong, explain how
he sees it working.
Not quite.
There are some endpoints used in OpenShift/Kubernetes that expose metrics using the
Prometheus protocol. These endpoints expose metrics which Prometheus (or anything that
understands the protocol) can then bring into its system. We are looking at potentially
being able to read from these endpoint types, not at reading from a Prometheus
server itself.
A prime example of this is the "/metrics" endpoint which is exposed on every
Kubernetes node. There is also the "/stats" endpoint on the node, which Heapster uses,
but it contains only a subset of what is available under "/metrics". It would be
good to be able to bring in some of these extra metrics as well.
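Reading from one of these endpoints means scraping the Prometheus text exposition format directly. A minimal parser sketch (it ignores HELP/TYPE metadata and assumes no trailing timestamps or escaped characters inside label values, which a real implementation would need to handle):

```python
def parse_exposition(text):
    """Parse Prometheus text-exposition lines into (name, labels, value) tuples.

    Handles the common cases:
        metric_name 12.5
        metric_name{label="value",other="v2"} 3
    Comment lines (# HELP / # TYPE) are skipped.
    """
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Last whitespace-separated token is the sample value.
        name_part, _, value = line.rpartition(" ")
        labels = {}
        if "{" in name_part:
            name, rest = name_part.split("{", 1)
            for pair in rest.rstrip("}").split(","):
                if pair:
                    key, val = pair.split("=", 1)
                    labels[key] = val.strip('"')
        else:
            name = name_part
        samples.append((name, labels, float(value)))
    return samples
```

The agent could apply something like this to the body of an HTTP GET against a node's "/metrics" endpoint, or against a container's custom endpoint.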
There are also custom metrics that a container can expose using this same protocol.
When we bring in these extra metrics, it is important that they carry the same tags
our metrics from Heapster already contain, so that we can perform the same queries
regardless of where the original metric came from. We should also try to generate the
metric ids in a similar manner.
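That tagging step might look something like the sketch below. The tag names (pod_name, namespace_name) and the "pod_id/metric_name" id format are assumptions for illustration, not the actual scheme Heapster uses:

```python
def enrich_sample(name, labels, common_tags, pod_id):
    """Merge a scraped sample's own labels with the common tags (pod,
    namespace, etc.) so it can be queried like a Heapster-fed metric.

    The "<pod_id>/<metric name>" id format below is illustrative only.
    """
    tags = dict(common_tags)
    tags.update(labels)  # the sample's own labels win on any conflict
    metric_id = "%s/%s" % (pod_id, name)
    return metric_id, tags
```

With every metric, whether it came through Heapster or was scraped from a "/metrics" endpoint, carrying the same tag set, a single query can cover both sources.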