It seems that for OIDC certification we will need proper support
for the "scope" parameter. A few tests from the OIDC conformance
testsuite end with WARNING because of issues with the "scope" parameter.
SUMMARY OF SPECS REQUIREMENTS
- In the OIDC specification, the "scope" parameter is actually REQUIRED, and
you must add the scope value "openid" to all authorization requests.
Hence if you don't use "scope=openid", the request is a pure OAuth2
request, not an OIDC request.
In https://issues.jboss.org/browse/KEYCLOAK-3147 we discuss the
possibility of changing our adapters to add "scope=openid" to
all requests, and also the possibility of removing the IDToken if it's not
an OIDC request (and maybe other things). However, there may be a potential
issue with backward compatibility with older adapters (which don't add
"scope=openid" at all).
- OIDC also prescribes "scope=offline_access", which you use if you
want an offline token. We actually support this, as we have the realm role
"offline_access" with scopeParamRequired=true, so this role is applied
just if it's included in the scope parameter. This is actually our only
support for the scope param. ATM we reference realm roles by name (the role
name must match the value of the scope parameter) and client roles by
"clientId/roleName". So it's not very flexible and won't work well in
the future with role namespaces.
- OIDC defines four other scope values, which we don't support, with
meanings like this:
"profile" - OPTIONAL. This scope value requests access to the End-User's
default profile Claims, which are: "name", "family_name", "given_name",
"middle_name", "nickname", "preferred_username", "profile", "picture",
"website", "gender", "birthdate", "zoneinfo", "locale", and "updated_at".
"email" - OPTIONAL. This scope value requests access to the "email" and
"email_verified" Claims.
"address" - OPTIONAL. This scope value requests access to the "address" Claim.
"phone" - OPTIONAL. This scope value requests access to the "phone_number"
and "phone_number_verified" Claims.
- Not directly related to scopes, but OIDC also has a "claims" parameter,
described in section 5.5 of the specification.
This allows the client to request additional claims, which should be included
in the IDToken or the UserInfo response in addition to the claims implied by
the scope parameter.
HOW TO IMPLEMENT?
My current thinking is that we will have 2 kinds of protocolMappers and roles:
1) "Always applied" - Those roles/protocolMappers are always applied to the
token, even if they are not specified by the scope parameter.
2) "Applied on demand" - Those roles/protocolMappers are applied just if
they are specifically requested by the scope parameter.
For roles, we already have this with the "scope param required" flag defined
per RoleModel. However, for protocolMappers we don't have it yet.
IMO we will also need some more flexible way to specify how the value of
the scope parameter is mapped to roles and protocolMappers. For example,
if I use "scope=foo", it can mean that I want the realm role "foo1", the
client role "client1/foo2", and protocolMappers for "firstName" and
"lastName", etc.
I can see 2 possibilities:
a) Configure the allowed scope param separately per each role / protocolMapper
If some role has "Scope param required" checked, you will have the
possibility to configure the list of available values of the scope parameter
which this role will be applied to. This will be configured per each role.
Example: I have a realm role "foo". I check "scope param required". Then I
define "scope param values": "bar" and "baz". It means that if someone uses
the parameter "scope=bar" or "scope=baz", then the role "foo" will be applied
to the token. Otherwise it won't be.
Similarly for protocolMappers: we will add a "Scope param required"
switch to protocolMappers, and we will use the list of available values of
the scope parameter, which is configured per each protocolMapper.
b) Configure the scope parameter in a separate place
We will have another tab "Scope parameter config" (or maybe rather
another sub-tab under the existing "Scope" tab). Here you will define the
allowed values of the scope parameter. For each allowed value, you will
define the protocolMappers and roles to apply. Hence, for example, for the
"profile" scope parameter, you will define all the protocolMappers for the
corresponding claims (name, family_name, ...) here.
We will still need the "scope param required" switch for protocolMappers in
this case as well.
My current thinking is to go with (a). So when you go to some role (or
protocolMapper) in the admin console, you will see whether the scope
parameter is needed and what the available values of the scope parameter
are to request it.
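To make option (a) concrete, here is a rough sketch of the filtering rule. All names here are hypothetical stand-ins, not Keycloak's real RoleModel/ProtocolMapperModel API:

```java
import java.util.Set;

// Sketch of option (a): a role or protocolMapper carries a
// "scope param required" flag plus the list of scope values it is
// bound to. It is applied either because it is "always applied",
// or because one of its configured values appears in the request's
// scope parameter.
public class ScopeParamFilter {

    // Simplified stand-in for a RoleModel / ProtocolMapperModel.
    public record Mapping(String name,
                          boolean scopeParamRequired,
                          Set<String> scopeParamValues) {}

    public static boolean isApplied(Mapping m, Set<String> requestedScopes) {
        if (!m.scopeParamRequired()) {
            return true; // kind 1: "always applied"
        }
        // kind 2: "applied on demand" - applied only if one of its
        // configured scope values was requested
        for (String v : m.scopeParamValues()) {
            if (requestedScopes.contains(v)) {
                return true;
            }
        }
        return false;
    }
}
```

So with the example above, a role "foo" configured with values "bar" and "baz" is applied for "scope=bar" but not for "scope=openid" alone.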
WDYT? Any other ideas?
I've been using Keycloak 2.4.0.Final.
I've experimentally implemented code for RFC 7636, Proof Key for Code Exchange.
[Background: Why RFC7636 is necessary]
RFC 7636 is important for industries where a high level of security is required, because it can prevent authorization code interception and substitution attacks on OAuth 2.0. For example, it is required for both confidential and public clients in the draft Financial API specification of the OpenID Foundation. By implementing RFC 7636, Keycloak can be used more widely.
[Description of the implementation]
My implementation is about 90 steps for the Authorization Server and 90 steps for the Client (Servlet-OAuth only), both excluding debug log code from the step counts. Please see the details in the links below.
* The implementation:
It is based on 2.4.0.Final. I hope we'll refine and rebase it onto the master branch for a PR if you accept our implementation proposal.
* Design document:
* PoC test:
I've validated my implementation and found it worked well in the following scenarios.
1) Flow: Authorization Code Flow
   Client: RFC 7636 not supported
2) Flow: Authorization Code Flow
   Client: RFC 7636 supported and operating properly
3) Flow: Authorization Code Flow
   Client: RFC 7636 supported but operating illegally
   (sends an invalid code_verifier to the Token Endpoint)
For details of the PoC test, please see:
I am also willing to add tests to the community's testsuite according to the process described in "Hacking on Keycloak".
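For reference, the core of the S256 check that RFC 7636 prescribes at the token endpoint can be sketched like this. This is only an illustrative sketch, not the actual patch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// RFC 7636 S256 method: the token endpoint recomputes
// BASE64URL-ENCODE(SHA256(ASCII(code_verifier))) and compares it with
// the code_challenge that was stored with the authorization code.
public class PkceCheck {

    public static String s256Challenge(String codeVerifier) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] hash = md.digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
            // base64url without padding, per RFC 7636
            return Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean verify(String storedChallenge, String codeVerifier) {
        return s256Challenge(codeVerifier).equals(storedChallenge);
    }
}
```

A server would reject the token request (scenario 3 above) when verify() returns false.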
I'm aware that a related ticket has already been filed as KEYCLOAK-2604.
Would you mind if I contribute this RFC 7636 support to Keycloak under the KEYCLOAK-2604 ticket?
I need to deploy Keycloak over MariaDB:
I downloaded the latest version of Keycloak, 2.5.1, and I want to connect
to my DB running on MariaDB.
I'm using the same JDBC driver that I also use for other applications:
I start WildFly, open the console and go to Deployments.
I load mariadb-java-client-1.5.5.jar.
The driver is correctly loaded.
I shut down the server, edit standalone.xml and add:
<datasource jndi-name="java:jboss/MariaDBDS" pool-name="MariaDBDS">
I restart the server and I can see the new datasource.
I open it, go to "Connection", and I receive the following error when I
try to test the connection:
12:12:51,248 ERROR [org.jboss.as.controller.management-operation]
(management task-7) WFLYCTL0013: Operation ("test-connection-in-pool")
failed - address: ([
("subsystem" => "datasources"),
("data-source" => "MariaDBDS")
]) - failure description: "WFLYJCA0040: failed to invoke operation:
WFLYJCA0047: Connection is not valid"
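For comparison, a complete datasource definition needs at least a connection-url, a driver reference matching the name WildFly registered for the deployed JAR (visible under the Drivers view), and credentials. A sketch, with placeholder URL, database name, and credentials:

```xml
<!-- Hypothetical MariaDB datasource; URL, db name, and credentials
     are placeholders. The <driver> value must match the deployment
     name of the JDBC driver JAR. -->
<datasource jndi-name="java:jboss/MariaDBDS" pool-name="MariaDBDS" enabled="true">
    <connection-url>jdbc:mariadb://localhost:3306/keycloak</connection-url>
    <driver>mariadb-java-client-1.5.5.jar</driver>
    <security>
        <user-name>dbuser</user-name>
        <password>dbpass</password>
    </security>
</datasource>
```

WFLYJCA0047 "Connection is not valid" often points at a wrong connection-url, unreachable database, or bad credentials rather than at the driver itself.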
It's easy to imagine a provider that integrates a third-party
library which, together with transitive dependencies, might result in
dozens of JARs. A real-world example: an OpenID 2.0 login protocol
implementation using openid4java, which in turn pulls in another 10 JARs.
What are the deployment options for configurations like that? Is it
really necessary to install each and every dependency as a WildFly
module? This could become a PITA if there are a lot of deps. Could it
be a single, self-sufficient artifact just to be put into the deployments
subdir? If yes, what type of artifact should it be (an EAR maybe)?
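One middle ground (a sketch under assumed names; module name, paths, and dependency list are made up) is a single WildFly module whose module.xml lists the provider JAR and all transitive JARs as resource roots, so only one module needs installing:

```xml
<!-- Hypothetical module.xml bundling a provider and its transitive
     dependencies as multiple resource roots in one module. -->
<module xmlns="urn:jboss:module:1.3" name="com.example.openid2-provider">
    <resources>
        <resource-root path="openid2-provider.jar"/>
        <resource-root path="openid4java-1.0.0.jar"/>
        <!-- ...remaining transitive JARs... -->
    </resources>
    <dependencies>
        <module name="org.keycloak.keycloak-core"/>
        <module name="org.keycloak.keycloak-server-spi"/>
    </dependencies>
</module>
```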
As of now, Keycloak supports impersonation by an admin user at the front-end application level. However, if someone is using JWT-token-based API security, there is no existing way to get a user's JWT token "on behalf of" the user by the admin user.
I understand and agree with Stian Thorgersen that this is not just adding the return of a JWT token to the current impersonation endpoint. But I believe that if Keycloak supports impersonation, we should support it for API security as well, and not just for front-end applications.
If we decide to incorporate it, one implementation approach could be to introduce an impersonation grant type which would perform client and admin user authentication before granting a token on behalf of the user it is requested for. Please let me know if this sounds completely absurd to you guys.
I installed MySQL 5.7.17 with innodb-large-prefix=1 and created a
database using the utf8mb4 charset. Then I installed, configured and
started Keycloak.
Everything almost "works". The only problem is the 1.9.1 database
changeset, which fails.
A workaround is to let Keycloak fail (assuming initializeEmpty is
true), then change the charset of the table "REALM" to utf8, restart
Keycloak, then stop it after the database is ready.
Finally, change the charset of the table "REALM" back to utf8mb4 and restart.
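Expressed as SQL, the manual workaround is roughly the following (a sketch of the steps described above, assuming you are connected to the Keycloak schema):

```sql
-- Before the failing migration: use a single-byte-safe charset so the
-- 1.9.1 changeset can widen the key/certificate columns.
ALTER TABLE REALM CONVERT TO CHARACTER SET utf8;
-- Start Keycloak, let the migration finish, then stop it.
-- Afterwards, restore the original charset:
ALTER TABLE REALM CONVERT TO CHARACTER SET utf8mb4;
```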
The alternative is to use the patch below. I can also open a PR on GitHub.
What do you think?
new file mode 100644
@@ -0,0 +1,30 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!--
+  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
+  ~ and other contributors as indicated by the @author tags.
+  ~
+  ~ Licensed under the Apache License, Version 2.0 (the "License");
+  ~ you may not use this file except in compliance with the License.
+  ~ You may obtain a copy of the License at
+  ~
+  ~ http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+-->
+ <changeSet author="keycloak" id="1.9.1">
+    <preConditions onSqlOutput="TEST" onFail="MARK_RAN">
+        <dbms type="mysql" />
+    </preConditions>
+    <!-- Can't increase publicKey and certificate size on mysql
+         when utf8mb4 is used. Need to find better solution -->
+ <modifyDataType tableName="REALM" columnName="PRIVATE_KEY"
+ <!--<modifyDataType tableName="REALM" columnName="PUBLIC_KEY"
+ <!--<modifyDataType tableName="REALM" columnName="CERTIFICATE"
index c083bc9a2b..d67f97b903 100755
@@ -20,7 +20,10 @@
<changeSet author="keycloak" id="1.9.1">
<preConditions onSqlOutput="TEST" onFail="MARK_RAN">
<dbms type="db2" />
+ <dbms type="mysql" />
I want to contribute to Keycloak and am currently searching for (beginner)
tasks that need to be done. In Jira I found this ticket:
https://issues.jboss.org/browse/KEYCLOAK-4315 (Remove dead code from SAML
...). As suggested in the "Hacking on Keycloak" guide, I want to ask if this
is a wanted task and if there's something I need to keep in mind while
removing the dead code.
A customer has asked us to implement a feature in which there is a
browser endpoint on Keycloak. This URL can be told to link to a
specific identity broker provider (Google, Facebook, etc.), after which the
browser would be redirected back to the client. Pretty much what exists
in the Account Service console, but without having to look at the
Account Service. The reason for this is that they are doing an integration
with a specific social provider and don't want to have to go through the
Account Service pages. Seems like a pretty reasonable and valid use case...
I'm worried about a couple of things:
* The design of it
* The security implications.
The implementation would be simple enough; it would just extract code
from the account service. The endpoint would take a "providerId"
parameter identifying the IDP to link to, a "clientId" for the client
requesting the link, and a "redirect_uri" to redirect back to after the
link is successful. Obviously the redirect_uri would be validated
against the clientId.
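Concretely, a request to such an endpoint might look something like this (path and parameter values are purely hypothetical, following the description above):

```
GET /auth/realms/{realm}/account-link
    ?providerId=google
    &clientId=myclient
    &redirect_uri=https%3A%2F%2Fmyclient.example.com%2Flinked
```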
Now comes the interesting part: the security implications. Account
linking in the Account Service is fine and dandy and doesn't have any
security holes, because we guarantee that the Account Service initiated
the linking via a CSRF check. For this new Client-Requested-Linking
feature, if we do nothing and just model it as above, we cannot
guarantee that the client initiated the linking request. What are the
implications of this? This feature would be vulnerable to CSRF: a
rogue website could initiate account linking to a specific configured
provider. Is this bad? I don't know. We can guarantee that the
redirect uri is valid, so the browser would never get back to the rogue
website. So what can we do to improve this?
* We could do nothing and hope it's ok that anybody can initiate a link.
* We could add a consent screen like this: "Application 'clientId' is
requesting that you link your user account to 'providerId'. Do you
agree to this? You will be redirected to this provider if you click
ok." This of course relies on the user actually reading the page. Is
this good enough?
* My last thought would be OIDC-specific. We could use the POST binding
trick that SAML does to do a browser redirect via a POST call. The POST
would contain the access token the client has. We match this token up
against the browser cookie to make sure it's the same session. We also
make sure that the access token has permission to request a link. There
are 2 downsides to this approach: a) it requires some code on the
client side to obtain the access token and then package it up into an
HTML document so the POST binding trick can be done; b) it will only
work for OIDC clients. SAML clients will be left in the dust, although
I guess we could allow SAML to push a signed assertion back.
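On the client side, the POST binding trick might look roughly like the auto-submitting form SAML uses (endpoint path, field names, and values here are all hypothetical):

```html
<!-- Hypothetical auto-submitting form: the client renders this page
     so the browser POSTs the client's access token to the linking
     endpoint, where it is matched against the session cookie. -->
<form id="link" method="POST"
      action="https://keycloak.example.com/auth/realms/demo/account-link">
  <input type="hidden" name="access_token" value="eyJhbGciOi..."/>
  <input type="hidden" name="providerId" value="google"/>
  <input type="hidden" name="redirect_uri" value="https://myclient.example.com/linked"/>
</form>
<script>document.getElementById("link").submit();</script>
```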