Reposting on Eric's behalf
Thanks Marc for the cc,
Just to give some background on the reasoning behind this.
Today the Kubernetes API server trusts ID Tokens issued to a single client. Refreshing a token requires a client_secret, hence the flags Marc provided in his example. Though we don't have official recommendations about the properties of the client passed in those flags, there are two obvious ways of going about this.
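For reference, that "single client" trust is configured on the API server with flags along these lines (the values here are placeholders, and the exact flag set depends on your Kubernetes version):

kube-apiserver \
  --oidc-issuer-url=https://idp.example.com/auth/realms/demo \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-ca-file=/etc/kubernetes/pki/idp-ca.pem

Any ID Token presented to the API server has to have been issued for that one client_id.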
1) Have each kubectl share a client secret. Some authorization servers provide mechanisms for declaring "public clients" and impose restrictions on their capabilities. For example, Google, when it assumes client_secrets aren't secret, restricts the redirect URLs for embedded apps to only localhost and a magic OOB value, doesn't allow incremental authorization, etc. Though, as Marc noted, this may have unintended consequences with providers that assume client secrets aren't shared.
2) Another option is for kubectl to utilize the "azp" claim in the ID Token[0], which allows clients to request ID Tokens on behalf of other clients. This means each kubectl gets its own client_id and client_secret, with each one requesting ID Tokens minted for a common client_id. However, that capability isn't generally supported by OIDC providers, though Google supports it[1]. This is probably the more secure option, but the actual implementations differ so widely that it becomes hard to make a general statement.
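To make (2) concrete, a decoded ID Token payload in the cross-client case would look roughly like the sketch below. The client names are made up for illustration: "kubernetes" stands in for the common client_id the API server trusts, and "kubectl-on-my-laptop" for the per-install client that actually performed the exchange.

{
  "iss": "https://accounts.example.com",
  "sub": "user-1234",
  "aud": "kubernetes",
  "azp": "kubectl-on-my-laptop",
  "exp": 1473899340,
  "iat": 1473895740
}

The API server keeps validating "aud" against its single trusted client_id, while "azp" records which client actually requested the token.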
We'd be interested to know whether either of these methods raises red flags when combined with Keycloak.
Eric
[0] https://openid.net/specs/openid-connect-core-1_0.html#IDToken
[1] https://developers.google.com/identity/protocols/CrossClientAuth

On Wed, Sep 14, 2016 at 10:16 AM, Marc Boorshtein
<marc.boorshtein@tremolosecurity.com> wrote:

KC Team,
Eric Chiang from CoreOS (cc'd on this email) and I have been talking
on the Kubernetes sig-auth slack channel about how secret the "client
secret" in OIDC should be. The context for the question is that
Kubernetes' OIDC implementation uses the id_token as the bearer token
(as opposed to the access_token) to avoid a round trip. Since the
id_token should be short-lived, the question arises of how to get a
new one using a refresh_token. The current solution is to give kubectl
the refresh_token, the IdP discovery URL, and the client id and secret:
kubectl config set-credentials ( user name ) \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=( issuer url ) \
  --auth-provider-arg=client-id=( your client id ) \
  --auth-provider-arg=client-secret=( your client secret ) \
  --auth-provider-arg=refresh-token=( your refresh token )
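Under the hood kubectl then just sends the id_token as a standard
bearer token on every API request, roughly the equivalent of
(placeholders in parentheses):

curl -H "Authorization: Bearer ( your id_token )" https://( api server )/api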
This way kubectl can get a new id_token once the one it possesses
expires. The question becomes: does giving the client_secret directly
to users create a security issue, since it's now a shared credential?
Some issues I see are:
1. Rotation becomes harder - how many people have a copy of the secret?
2. While you can't generate an access_token with just this secret,
you CAN impersonate an RP, so if you're monitoring which RPs are
making requests, an attacker could generate excessive requests for a
single RP even if those requests fail
3. Since most IdPs will generate some kind of back-end record for a
request, anyone with the client id and secret could more easily mount
a DoS attack by flooding the server with authenticated authentication
requests
What are your thoughts? Google provides an example asserting that the
client secret ISN'T secret (reading through it, I think the example
contradicts itself):
https://developers.google.com/api-client-library/python/auth/installed-app
Thanks
Marc Boorshtein
CTO Tremolo Security
marc.boorshtein@tremolosecurity.com
Twitter - @mlbiam / @tremolosecurity