So I certainly get that we want to be as close to the spec as possible -
wholeheartedly agree. However, I'd also like to reiterate that the main
purpose of this is for lower/developer environments in which there are a
large number of developers who are frequently spinning up sandboxes with
apps that need SSO capabilities. Unless I want to open up the GUI in these
environments to the world, I'm left without a good CM option for Keycloak.
Any suggestions on the management of this? Right now I'm looking at a high
amount of manual overhead, or scripting it out with some one-off config
scripts that I'll have to wind up maintaining. Neither option sounds
appealing.
Hope you didn't get the wrong impression from the PR - I noted a JavaScript
library that was shared across several pages within a number of
subdomains. All pages share a similar look and feel, but due to the nature
of the content and topic can live at different subdomains or even have
slightly different page content implementations. Are we really going to
make the assertion that a 'client' cannot span subdomains? That seems to
be the implication here. And if so, is that necessarily more 'secure', or
does that just mean that implementers could simply favor a single domain
name with varying paths instead of categorically organized subdomains?
Seems like an implementation detail that can easily be circumvented and
does not inherently make an enclave more or less secure.
I completely agree with your argument that we should be striving for the
finest level of granularity with respect to client definition. I
understand the intentional segregation of logical clients by the
specification so as to keep one compromised client from affecting the
entire SSO ecosystem. However, I do think that there is a solid case for a
single 'client' that does things like span subdomains, and that such a
client could be used in a secure manner.
At the end of the day, it feels like we're trying to force a definition of
what a client is. The discussion seems to acknowledge that 'real world'
applications of this spec find wildcards useful (as with your suggestion for
supporting them in the path); however, the manner in which they're used
appropriately is up for debate. If we're living outside the spec anyway,
do we really have a firm leg to stand on for the assertion that clients can
have different paths but not subdomains? I don't see a solid reason for
this one.
Some other thoughts I had on this that might be useful:
- Some of the rub here is that maintaining a list of valid redirects for
something like string matching is a CM nightmare (particularly in dev-ish
environments). Something like an SPI where I could drop in an implementation
with a bit more powerful logic would also do the job (see the rough sketch
after this list). Could this be used nefariously or poorly to circumvent the
specification? Yeah, sure - but so can Authenticators, and they're seen as a
useful tool whereby developers can extend necessary functionality.
- Would you also consider something like a 'development mode' flag that
allowed for different options such as wildcards in different URL parts?
We'd have to add a little more validation to define what is and is not
allowed, but it would be useful for this case.
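
To make the SPI idea a bit more concrete, here's a very rough sketch of the
kind of contract I'm picturing. To be clear, nothing here exists in Keycloak
today - the interface and class names (RedirectUriValidator,
DevSubdomainValidator) are made up purely for illustration, and the parent
domain is just borrowed from the example hostnames in my original mail below:

    import java.util.regex.Pattern;

    // Hypothetical SPI contract - this does NOT exist in Keycloak today; the
    // name is invented just to show the shape of what I'm asking for.
    interface RedirectUriValidator {
        // Decide whether the redirect_uri requested by a client should be
        // allowed. Implementations could be registered per realm or per client.
        boolean isValid(String clientId, String redirectUri);
    }

    // Example deployer-provided implementation for a dev/sandbox realm: allow
    // any developer subdomain under a trusted parent domain (borrowed from the
    // developerN.env.redhat.com examples below), with any path.
    class DevSubdomainValidator implements RedirectUriValidator {

        private static final Pattern ALLOWED =
                Pattern.compile("^https?://[a-z0-9-]+\\.env\\.redhat\\.com(/.*)?$");

        @Override
        public boolean isValid(String clientId, String redirectUri) {
            return redirectUri != null && ALLOWED.matcher(redirectUri).matches();
        }
    }

Something along those lines would let exact matching stay the secure default
while giving environments like ours an escape hatch that we own and can audit.
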
Thanks for the good discussion. As always, learning much and enjoying it!
Josh Cain | Software Applications Engineer
*Identity and Access Management*
*Red Hat*
+1 256-452-0150
On Tue, Sep 20, 2016 at 1:20 AM, Stian Thorgersen <sthorger(a)redhat.com>
wrote:
I appreciate this feature might be useful, so there's no need to discuss
that aspect. The only issue I have with this PR is with regard to security,
especially as it enables doing the "wrong" thing.
With regard to redirect URIs for confidential clients: they are still
important, but not quite as important as they are for public clients. This
means redirect URIs can typically be more flexible for confidential
clients without significant risk.
For public clients it's very important to lock these down as much as
possible, as they are the ONLY way to prevent malicious clients from gaining
access to the SSO session. This means we should actually tighten the
requirements for redirect URIs, not relax them further. For public clients
the redirect URIs:
* Should be as specific as possible. We should only allow wildcards in the
path. I believe we should introduce this for both public and confidential
clients.
* Require HTTPS unless it's http://localhost. This is not so easy in
development, so maybe we should have an option to run the server in
"unsafe" mode for developers.
Here's a quote from the OIDC spec around this:
*"REQUIRED. Redirection URI to which the response will be sent. This URI
MUST exactly match one of the Redirection URI values for the Client
pre-registered at the OpenID Provider, with the matching performed as
described in Section 6.2.1 of [RFC3986] (Simple String Comparison). The
Redirection URI SHOULD use the https scheme; however, it MAY use the http
scheme, provided that the Client Type is confidential, as defined in
Section 2.1 of OAuth 2.0, and provided the OP allows the use of http
Redirection URIs in this case. The Redirection URI MAY use an alternate
scheme, such as one that is intended to identify a callback into a native
application."*
Looking at your comments on the PR it worries me slightly that you have a
shared client for a "library". A library is not a client. A client is an
instance of an application. Sharing the client will have an impact on audit
and on what clients a user believes they are authenticated to. With regard
to a wildcard that allows any subdomain, that is scary, as you're allowing
any piece of code running on any subdomain within your domain to authenticate
via that particular client. That could be an infected forum, something any
user has executing, etc. As long as the redirect URI permits it, an
application can obtain a token for a client for a user that is authenticated,
without the user knowing about it. Unless you enable consent, that is - but
if the user used the "real" client they would have given consent, and the
malicious client on a different subdomain can take advantage of it.
In summary, my opinion is that we can't accept this PR and that we should
further:
* Allow wildcards only in the path. This is actually still looser than what
the OIDC spec mandates, as it requires a simple string comparison.
* Require HTTPS (or a custom scheme) for public clients. We may need a
development mode that disables this.
On 19 September 2016 at 16:50, Josh Cain <josh.cain(a)redhat.com> wrote:
> Per KEYCLOAK-3585: <https://issues.jboss.org/browse/KEYCLOAK-3585>
>
> Currently, valid redirect URI hostnames allow for wildcards at the end
> like so:
>
> http://www.redhat.com/*
>
> I'm managing several environments where clients need 'n' number of
> available redirect URIs with different hostnames, i.e.
>
> http://developer1.env.redhat.com
> http://developer2.env.redhat.com
> http://developer3.env.redhat.com
>
> It would really help to have the ability to wildcard hostnames too, i.e.:
>
> http://*.env.redhat.com
>
>
> I've submitted #3241 <https://github.com/keycloak/keycloak/pull/3241> to
> address this issue, but there seem to be some concerns about allowing
> wildcards in other parts of the URL. See the PR for a more fleshed out
> discussion, but wanted to start a thread here on the mailing list.
> Particularly with respect to:
>
> - Does anyone have need of this feature or would find it useful?
> - Should this kind of wildcard be allowed as a configuration option
> by Keycloak?
>
> Josh Cain | Software Applications Engineer
> *Identity and Access Management*
> *Red Hat*
> +1 256-452-0150
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev(a)lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>