Right now, the Hawkular OpenShift Agent (HOSA) can pass HTTP authentication headers to
endpoints it is monitoring, but you have to declare the credentials in the pod's
configmap's endpoints section:
endpoints:
- type: jolokia
  credentials:
    username: myuser
    password: mypass
We would like to figure out a better way. One approach Heiko mentioned was to see if we
can use OpenShift secrets right here in the credentials section.
So I created a PoC to see if and how it can work. I have it working here in my own
branch:
https://github.com/jmazzitelli/hawkular-openshift-agent/tree/use-secrets
After building and deploying the agent in my OpenShift environment, I created a secret
(via the OpenShift console) in the project where my Jolokia pod lives. The secret is
called "foo2" and has two keys defined: "username" and "password". I then tell the
agent about this by declaring credentials as described above, but I prefix the values
with "secret:" to tell the agent that the actual values are to be found in the secret.
The full syntax of a credential value is "secret:<secret name>/<secret key>". So, for
example, I can have this in my configmap:
endpoints:
- type: jolokia
  credentials:
    username: secret:foo2/username
    password: secret:foo2/password
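For reference, the "foo2" secret I created through the console is equivalent to a
manifest like the following (a sketch; the base64-encoded values correspond to the
example credentials "myuser" and "mypass" from above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: foo2
type: Opaque
data:
  username: bXl1c2Vy   # base64 of "myuser"
  password: bXlwYXNz   # base64 of "mypass"
```

The same secret could also be created from the command line with something like
"oc create secret generic foo2 --from-literal=username=myuser --from-literal=password=mypass".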
The same mechanism can optionally be used with bearer tokens:
endpoints:
- type: jolokia
  credentials:
    token: secret:foo2/password
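To make the expected syntax concrete, here is a minimal sketch in Go of how a credential
value could be parsed (parseSecretRef is a hypothetical helper for illustration, not the
actual function in my branch):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSecretRef splits a credential value of the form
// "secret:<secret name>/<secret key>" into its parts. If the value does
// not carry the "secret:" prefix, it is treated as a literal credential.
func parseSecretRef(value string) (name, key string, ok bool) {
	const prefix = "secret:"
	if !strings.HasPrefix(value, prefix) {
		return "", "", false // literal credential, not a secret reference
	}
	parts := strings.SplitN(strings.TrimPrefix(value, prefix), "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false // malformed reference
	}
	return parts[0], parts[1], true
}

func main() {
	name, key, ok := parseSecretRef("secret:foo2/username")
	fmt.Println(name, key, ok)
}
```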
There is one problem with this: I need to add a cluster role to the agent so it can read
secrets (it needs verb "get" on resource "secrets"). For testing I am using the
"system:node" role, since that is one of the few roles that has this permission, but we
would really want a cluster role that grants only "get" on "secrets". We don't need all
the permissions "system:node" provides, so we'd have to create our own role if need be.
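If we do end up creating our own role, a minimal cluster role granting only that one
permission might look like this (a sketch; the role name is made up and the exact
apiVersion may differ depending on the OpenShift/Kubernetes version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hawkular-openshift-agent-secret-reader
rules:
- apiGroups: [""]        # core API group, where secrets live
  resources: ["secrets"]
  verbs: ["get"]         # read individual secrets only; no list/watch
```

The role could then be granted to the agent's service account with something like
"oc adm policy add-cluster-role-to-user hawkular-openshift-agent-secret-reader -z <service account>"
(the exact command form is an assumption about the oc version in use).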
But is this a good approach? I do not know of any other way for the agent to be able to
read secrets. Is it acceptable to require the agent to have "get" permission on
"secrets"? Is there another way to access secrets?