On 22 September 2016 at 22:54, Josh Cain <josh.cain@redhat.com> wrote:
On Thu, Sep 22, 2016 at 2:27 AM, Stian Thorgersen <sthorger@redhat.com> wrote:


On 21 September 2016 at 16:10, Josh Cain <josh.cain@redhat.com> wrote:
On Wed, Sep 21, 2016 at 1:12 AM, Stian Thorgersen <sthorger@redhat.com> wrote:


On 20 September 2016 at 16:15, Josh Cain <josh.cain@redhat.com> wrote:
So I certainly get that we want to be as close to the spec as possible - wholeheartedly agree.  However, I'd also like to reiterate that the main purpose of this is for lower/developer environments in which there are a large number of developers who are frequently spinning up sandboxes with apps that need SSO capabilities.  Unless I want to open up the GUI in these environments to the world, I'm left without a good CM option for Keycloak.  Any suggestions on the management of this?  Right now I'm looking at a high amount of manual overhead, or scripting it out with some one-off config scripts that I'll have to wind up maintaining.  Neither option sounds appealing.

+1 This is a use-case we need to find a solution to

Cool.  Keep me posted - in the meantime I've started cobbling together a few JAX-RS scripts against the admin endpoint, but would ideally like something more CM-y.  Maybe something like Ansible integration...
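
(For illustration, a rough sketch of that kind of script using the keycloak-admin-client library rather than raw JAX-RS calls; the server URL, realm, credentials, and client id below are placeholders, not anything from this thread:)

    import java.util.Arrays;
    import javax.ws.rs.core.Response;
    import org.keycloak.admin.client.Keycloak;
    import org.keycloak.representations.idm.ClientRepresentation;

    public class RegisterClient {
        public static void main(String[] args) {
            // Server URL, realm, credentials and client id here are placeholders.
            Keycloak keycloak = Keycloak.getInstance(
                    "http://localhost:8080/auth", "master", "admin", "admin", "admin-cli");

            ClientRepresentation client = new ClientRepresentation();
            client.setClientId("my-sandbox-app");
            client.setEnabled(true);
            client.setPublicClient(true);
            client.setRedirectUris(Arrays.asList("http://app1.env.example.com/*"));

            // Posts to the admin REST endpoint under the covers.
            Response response = keycloak.realm("demo").clients().create(client);
            System.out.println("Create client returned HTTP " + response.getStatus());
            response.close();
        }
    }

The same call could of course be wrapped in whatever CM tooling drives the environment.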
 

Hope you didn't get the wrong impression from the PR - I noted a JavaScript library that was shared across several pages within a number of subdomains.  All pages share a similar look and feel, but due to the nature of the content and topic they can live at different subdomains or even have slightly different page content implementations.  Are we really going to make the assertion that a 'client' cannot span subdomains?  That seems to be the implication here.  And if so, is that necessarily more 'secure', or does that just mean that implementers could simply favor a single domain name with varying paths instead of categorically organized subdomains?  Seems like an implementation detail that can easily be circumvented and does not inherently make an enclave more or less secure.

A shared JavaScript library is not a client/application; it's a library. Not sure how you can argue that it's a client. A client has more than just a redirect URI. It has:

* Base URL - so users can find the application. This will be more useful in the future once we introduce an SSO landing page and provide links to applications from the account management console
* Consent - clients that require consent should allow users to give consent per-application, not per-library
* Audit/log - auditing and logging in both the admin console and account management rely on knowing what application, not what library, accessed a user's account

I really think you are misusing the concept of a client when you are using a client for a library and I disagree that this is a valid use-case. You are doing something wrong here.

Thanks for further examining this use case, and I'm certainly open to hearing you on this one if we're 'doing something wrong.'  I do have some further questions clarifying what that 'something' is.
  • Where does the Base URL requirement come from?  I've not seen that in any of the specifications.  Is it more of a pragmatic best-practice-ish kind of approach?  If so, do we have evidence of industry-wide adherence such that we can enforce it with a product?
  • Let me make sure I understand your point about misusing a JavaScript 'library'.  It feels like you're indicating that something like a JavaScript status bar that uses keycloak.js can share the same 'client' if, and only if, the pages share the same domain.  So to illustrate, this would be allowed:
[inline diagram: several pages under the same base URL sharing a single client]
Since the above pages share the same 'Base URL', they may rightfully be considered 'valid' by Keycloak.  However, according to the current Keycloak implementation, the below diagram demonstrates how the same application would *not* be allowed to delineate itself with the use of subdomains:
[inline diagram: the same pages split across subdomains, which the current implementation would not allow under one client]
This constraint seems superficial for this use case.  Am I understanding this correctly?

Again, I would contest that a logical 'application' can be composed of pages split across subdomains.  What's more, sharing the same base URL provides no guarantee of proper granularity for defining a single 'application'.

Your example there makes sense to me and I agree that in this case it could make sense to use the same client across multiple subdomains. I assumed you had a login status library that was reused on different applications.

That doesn't justify having a wildcard for subdomain though. You should have separate redirect uris for each subdomain.

I think we might have to just agree to disagree on this one.  I think that there remains a justification for wildcarding a subdomain since it can be secured properly and would save on administrative burden.  Same use case for wildcards in the path, just a different URL element.

 

I completely agree with your argument that we should be striving for the finest level of granularity with respect to client definition.  I understand the intentional segregation of logical clients by the specification so as to keep one compromised client from affecting the entire SSO ecosystem.  However, I do think that there is a solid case for a single 'client' that does stuff like span subdomains, and that such a client could be used in a secure manner.

Can you give me a real example of where an application has multiple subdomains? I just don't see it.

Sure.  amazon.com uses smile.amazon.com for its charitable giving.  At the end of the day it's exactly the *same* site, but with slightly different content.  Same "Application", but residing on a different subdomain.  Could probably crawl some frequently used sites to find more, but that one came to mind first.

Actually I think that's a bad example and in that case it should be two separate clients.
 
 
Really?  So an app that has almost *exactly* identical function, with some stylistic and business-rule differences, deserves to be treated as a different client altogether?

Will kindly disagree, but note that there's also disagreement on this one in implementations across the internet.  For instance, Stack Overflow treats each topic's forum as a separate client, and these happen to be delineated by subdomains (however, notifications are shared across all sites, which is a bit of a mixed message if you ask me), whereas GitHub (gist.github.com), Facebook (m.facebook.com, touch.facebook.com, etc.), and Amazon (smile.amazon.com) will let a single app/account span subdomains.

I suppose the point was not so much to argue about which way is the purest implementation, but more to align product features with valid, real-world use cases.

No point in discussing that any further; you're right.  At the end of the day it doesn't matter, as you've convinced me that an app can span multiple subdomains.
 
 

Are there any standards/restrictions/stated best practices against this?  This is the first time I've run up against problems with subdomains used in this way, but I'm certainly open to hearing if it's in violation of one of the above.

OIDC, for one, says simple string matching for redirect URIs.  I don't have any issue with splitting a client into subdomains if that makes sense (didn't think it did, but you've convinced me).  My issue was purely with accepting a wildcard for the subdomain, as that allows all subdomains, not a clearly specified set.

See above - don't see how we can allow wildcards in path for pragmatic purposes (outside the spec) but exclude the domain from the same treatment.  I'm for allowing both.

To be honest we shouldn't have had it in the path either.  We should never have had the wildcard in the first place.  It's down to a slightly less than optimal implementation of our adapters.  They use the current URI as the redirect URI, but they should have had a specific callback URI ('.../app/oidc-callback' or something) that is the only callback URI for the app.  The specific path should have been encoded into the state parameter instead, and once authenticated the adapter should forward to that page.  That reduces the chance of any malicious code, and especially any JS snippets, intercepting the callback.

You could, for instance, use something similar in your app: have a callback for all subdomains, then redirect once you've set cookies accordingly.  However, you do need a list of subdomains for an app to set cookies, so that kind of implies you need a static list of the subdomains involved in one "app" in either case.
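
(A minimal sketch of that single-callback pattern as a plain servlet; the session attribute names are made up for the example, and this is not how the Keycloak adapters are implemented today:)

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Single registered redirect URI (e.g. ".../oidc-callback") for the whole app.
    // Before redirecting to the auth server, the app stores a random state value
    // and the originally requested path in the session; the callback checks the
    // state and forwards the user on.
    public class OidcCallbackServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String state = req.getParameter("state");
            String expectedState = (String) req.getSession().getAttribute("oidc.state");
            String originalPath = (String) req.getSession().getAttribute("oidc.originalPath");

            if (state == null || !state.equals(expectedState)) {
                resp.sendError(HttpServletResponse.SC_FORBIDDEN, "state mismatch");
                return;
            }

            // ... exchange the authorization code for tokens, set cookies, etc. ...

            // Finally send the user back to the page they originally asked for.
            resp.sendRedirect(originalPath != null ? originalPath : "/");
        }
    }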
 

Will take this one as a hard 'no' since the discussion has been ongoing for so long.  I think all the points are out there and have been fleshed out; no use in beating a dead horse.  Thanks again for the good discussion - has been educational for me and more importantly very amiable ;-)

I do have some follow-up questions on the client registration functionality in Keycloak.  Another thread perhaps?

Yep
 
 
 
 

At the end of the day, it feels like we're trying to force a definition of what a client is.  The discussion seems to acknowledge that 'real world' applications of this spec find wildcards useful (per your suggestion of supporting them in the path); however, the manner in which they're used appropriately is up for debate.  If we're living outside the spec anyway, do we really have a firm leg to stand on for the assertion that clients can have different paths but not subdomains?  I don't see a solid reason for this one.

We shouldn't have had the wildcard in the first place.  The reason we need it is that our adapters use the current path of the application as the redirect URI.  With other OIDC providers you are limited to simple string matching, in which case it's usual to have a callback endpoint and encode the path in the state parameter.
 

Some other thoughts I had on this that might be useful:
  • Some of the rub here is that maintaining a list of valid redirects for something like string matching is a CM nightmare (particularly in dev-ish environments).  Something like an SPI where I can drop in an implementation that applies a little more powerful logic would also do the job.  Could this be used nefariously or poorly to circumvent the specification?  Yeah, sure - but so can Authenticators, and they're seen as a useful tool whereby developers can extend necessary functionality.
  • Would you also consider something like a 'development mode' flag that allowed for different options such as wildcards in different URL parts?  Would have to add a little more validation to define what is and is not allowed, but would be useful for this case.
I had the idea of an SPI or a developer mode, but ideally we'd find another way to deal with this.
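
(Purely for illustration, such an SPI might boil down to something like the following; no SPI of this kind exists in Keycloak, and the interface and its name are hypothetical:)

    import org.keycloak.models.ClientModel;

    // Hypothetical only -- this is just what the idea floated above might look
    // like as an interface, not anything present in Keycloak.
    public interface RedirectUriPolicyProvider {

        // Decide whether a redirect URI is acceptable for the given client,
        // e.g. by applying looser, environment-specific rules in a dev realm.
        boolean isAllowed(ClientModel client, String redirectUri);
    }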

Have you looked at dynamic client registration by the way? That allows registering clients without UI access and with a limited initial access token. We're also soon going to have a client registration CLI that allows doing this.
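
(For reference, a rough sketch of registering a client that way over plain HTTP, assuming the default client-registrations endpoint at /auth/realms/{realm}/clients-registrations/default and an initial access token created ahead of time; paths and names are placeholders:)

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class DynamicRegistration {
        public static void main(String[] args) throws Exception {
            // Initial access token created ahead of time (e.g. by a realm admin).
            String initialAccessToken = System.getenv("KC_INITIAL_ACCESS_TOKEN");
            URL url = new URL("http://localhost:8080/auth/realms/demo/clients-registrations/default");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + initialAccessToken);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            String json = "{\"clientId\":\"my-sandbox-app\","
                    + "\"redirectUris\":[\"http://app1.env.example.com/*\"]}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(json.getBytes(StandardCharsets.UTF_8));
            }

            System.out.println("Registration returned HTTP " + conn.getResponseCode());
        }
    }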
 
I'll dig into that more - I've not taken more than a cursory glance at it.  If we have more fine-grained control over token/client access it could certainly be useful.
 
 

Thanks for the good discussion.  As always, learning much and enjoying it!


Josh Cain | Software Applications Engineer
Identity and Access Management
Red Hat
+1 256-452-0150

On Tue, Sep 20, 2016 at 1:20 AM, Stian Thorgersen <sthorger@redhat.com> wrote:
I appreciate that this feature might be useful, so there's no need to discuss that aspect. The only issue I have with this PR is with regard to security, and especially that it enables doing the "wrong" thing.

With regard to redirect URIs: with confidential clients they are still important, but not quite as important as they are for public clients. This means redirect URIs can typically be more flexible with confidential clients without significant risk.

For public clients it's very important to lock these down as much as possible, as they are the ONLY way to prevent malicious clients from gaining access to the SSO session. This means we should actually tighten the requirements for redirect URIs, not further relax them. For public clients the redirect URIs:

* Should be as specific as possible. We should only allow a wildcard in the path. I believe we should introduce this for both public and confidential clients.
* Require HTTPS unless it's http://localhost. This is not so easy in development, so maybe we should have an option to run the server in "unsafe" mode for developers.

Here's a quote from the OIDC spec around this:

"REQUIRED. Redirection URI to which the response will be sent. This URI MUST exactly match one of the Redirection URI values for the Client pre-registered at the OpenID Provider, with the matching performed as described in Section 6.2.1 of [RFC3986] (Simple String Comparison). The Redirection URI SHOULD use the https scheme; however, it MAY use the http scheme, provided that the Client Type is confidential, as defined in Section 2.1 of OAuth 2.0, and provided the OP allows the use of http Redirection URIs in this case. The Redirection URI MAY use an alternate scheme, such as one that is intended to identify a callback into a native application."

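(To make the difference concrete, a small sketch of exact matching as the spec describes it, next to the looser path-wildcard matching discussed in this thread; this is not Keycloak's actual implementation:)

    import java.util.List;

    public class RedirectUriMatch {

        // "Simple String Comparison": the redirect_uri in the request must equal
        // one of the pre-registered values exactly.
        static boolean exactMatch(List<String> registered, String requested) {
            return registered.contains(requested);
        }

        // The relaxation: a registered value ending in "/*" accepts any path
        // underneath it, while scheme and host stay fixed.
        static boolean pathWildcardMatch(List<String> registered, String requested) {
            for (String uri : registered) {
                if (uri.endsWith("/*")) {
                    String prefix = uri.substring(0, uri.length() - 1); // keep the trailing "/"
                    if (requested.startsWith(prefix)) {
                        return true;
                    }
                } else if (uri.equals(requested)) {
                    return true;
                }
            }
            return false;
        }
    }
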
Looking at your comments on the PR, it worries me slightly that you have a shared client for a "library". A library is not a client. A client is an instance of an application. Sharing the client will have an impact on auditing and on what clients a user believes they are authenticated to. With regard to a wildcard that allows any subdomain, that is scary, as you're allowing any piece of code running on any subdomain within your domain to authenticate via that particular client. That could be an infected forum, something any user has running, etc. As long as the redirect URI permits it, an application can obtain a token for a client for a user that is authenticated, without the user knowing about it. Unless you enable consent, that is - but if the user used the "real" client they would have given consent, and the malicious client on a different subdomain can take advantage of it.

In summary, my opinion is that we can't accept this PR, and that we should further:

* Allow a wildcard only in the path. This is actually still looser than the OIDC spec mandates, as it requires a simple string comparison.
* Require HTTPS (or custom scheme) for public clients. We may need a development mode that disables this.



On 19 September 2016 at 16:50, Josh Cain <josh.cain@redhat.com> wrote:
Per KEYCLOAK-3585:

Currently, valid redirect URIs allow for wildcards at the end, like so:

I'm managing several environments where clients need 'n' number of available redirect URIs with different hostnames, e.g.:

Would really help to have the ability to wildcard hostnames too, e.g.:

http://*.env.redhat.com
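
(For illustration, the kind of check such a hostname wildcard implies might look roughly like this; it is a hypothetical sketch, not the code in the PR, and it ignores port and path to keep the example short:)

    import java.net.URI;

    public class HostWildcardMatch {

        static boolean matches(String registered, String requested) {
            int star = registered.indexOf("*.");
            if (star < 0) {
                return registered.equals(requested);             // no wildcard: exact match
            }
            String schemePart = registered.substring(0, star);   // e.g. "http://"
            String hostSuffix = registered.substring(star + 1);  // e.g. ".env.redhat.com"
            if (!requested.startsWith(schemePart)) {
                return false;
            }
            String requestedHost = URI.create(requested).getHost();
            return requestedHost != null && requestedHost.endsWith(hostSuffix);
        }

        public static void main(String[] args) {
            System.out.println(matches("http://*.env.redhat.com", "http://app1.env.redhat.com/sso")); // true
            System.out.println(matches("http://*.env.redhat.com", "http://evil.example.com/sso"));    // false
        }
    }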

I've submitted #3241 to address this issue, but there seem to be some concerns about allowing wildcards in other parts of the URL.  See the PR for a more fleshed out discussion, but wanted to start a thread here on the mailing list.  Particularly with respect to:
  • Does anyone have need of this feature or would find it useful?
  • Should this kind of wildcard be allowed as a configuration option by Keycloak?
Josh Cain | Software Applications Engineer
Identity and Access Management
Red Hat
+1 256-452-0150

_______________________________________________
keycloak-dev mailing list
keycloak-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev