bugs and limitations in alternative flows
by Bill Burke
A user just came across this bug (well, I haven't verified that it is a bug, but I'm pretty sure it is):
Inside the Browser flow we have:
- Username Password Form
- 2SV sub-flow (required)
  - OTP execution (alternative)
  - SMS execution (alternative)
Neither the OTP nor the SMS challenge is returned; both are just skipped.
Another problem: even if we fixed the above, there is no code that handles the case where neither alternative is configured for the user. Finally, even if all of this were fixed, there is a limitation: what do we do if neither of these authenticators is configured? How is the required action chosen and executed?
Improve back-button and refreshes in authenticators?
by Marek Posolda
While working on login sessions, I wonder if we want to improve handling of the browser back button and browser refreshes.
In short, I can see 3 basic options:
1) Keep things as they are now and rely on the header "Cache-Control: no-store, must-revalidate, max-age=0". This works fine; users never see an outdated form and never submit an outdated form twice. However, the usability suffers a bit IMO. When you press the back button after a POST request, you see the ugly browser page "Web page has expired". And if you press F5 on that, you see the unfriendly Keycloak error page "Error was occured. Please login again through your application" because of the invalid code.
2) Use the POST-followed-by-redirect-to-GET pattern. Since we will have the login session ID in a cookie, the GET request can be sent to a URL without any special query parameter, something like "http://localhost:8180/auth/realms/master/login-actions/authenticate". This means that at every stage of authentication, the user can press the back button and will always be redirected to the first step of the flow. When he refreshes the page, only the GET request is re-sent, which always brings him to the current execution.
This looks the most user-friendly, but there is a performance cost, as we would need to follow up every POST request with one additional GET request.
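For illustration, a minimal sketch of the POST-redirect-GET idea in JAX-RS terms (resource path, class and method names are invented for the example, not the real Keycloak code): the POST handler processes the submitted form and then answers with a 303 redirect to a plain GET URL that renders the current execution.

import java.net.URI;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;

@Path("/login-actions")
public class AuthenticateResource {

    @POST
    @Path("/authenticate")
    public Response processForm(@FormParam("username") String username,
                                @FormParam("password") String password) {
        // validate credentials and advance the login session to the next execution ...
        // then redirect (303 See Other) so the browser history entry is a harmless GET
        return Response.seeOther(URI.create("/auth/realms/master/login-actions/authenticate"))
                       .build();
    }

    @GET
    @Path("/authenticate")
    public Response showCurrentExecution() {
        // look up the login session from the cookie and render whatever execution
        // (username/password, OTP, ...) the user is currently on
        return Response.ok(/* rendered form for the current execution */).build();
    }
}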
3) Don't do anything special regarding the back button or refresh. But in case the page is refreshed AND the POST with an invalid (already used) code is re-submitted, we won't display the ugly "Error was occured." page; we will just redirect to the current step of the flow.
Example:
a) The user is redirected from the application to the OIDC authorization endpoint. The login page is shown.
b) The user submits an invalid username and password with a POST request. The login form with the error "Invalid password" is shown.
c) The user submits a valid username and password with a POST request. The TOTP page is shown.
d) The user presses the back button. He now sees the username/password form again.
e) The user presses F5. The POST request is re-sent, but it uses the previous "code", which is now outdated. So in this case we redirect to the current execution and the TOTP form is shown. No re-submission of the username/password form happens.
In case 3, the username/password form will be shown again, but the user won't be able to re-submit it.
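A minimal sketch of option 3, again with invented names rather than real Keycloak classes: when the code in a re-submitted POST no longer matches the login session's current code, we answer with a redirect to the current execution instead of rendering the error page.

import java.net.URI;
import javax.ws.rs.core.Response;

public class StaleCodeHandling {

    // Hypothetical view of the login session; not an actual Keycloak interface.
    interface LoginSession {
        String getCurrentCode();
        String getCurrentExecutionUrl();
    }

    // Called from the POST handler before the submitted form is actually processed.
    Response checkCode(String submittedCode, LoginSession loginSession) {
        if (!loginSession.getCurrentCode().equals(submittedCode)) {
            // Stale code (back button + F5): instead of the error page,
            // send the browser to whatever execution the session is on now.
            return Response.seeOther(URI.create(loginSession.getCurrentExecutionUrl()))
                           .build();
        }
        return null; // code is current, continue processing the form as usual
    }
}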
In short: with 2 and 3, users will never see the browser's "Web page is expired" page or the Keycloak "Error occured. Go back to the application" page. With 2, an additional GET request is needed. With 3, the back button may show authentication forms which the user has already successfully confirmed, but he won't be able to re-submit them. Is that bad for usability? To me it looks better than showing "Web page is expired".
So my preference is 3, 2, 1. WDYT? Any other options?
Marek
How to migrate all credentials stored in Keycloak to a new encoding algorithm?
by Thomas Darimont
Hello group,
Sorry for the long read, but the following contains a proposal with a general solution for the problem. TL;DR section at the end.
If you have been using Keycloak for a while, you probably have a number of users in the system whose passwords are encoded by the default Pbkdf2PasswordHashProvider, which currently uses the PBKDF2WithHmacSHA1 algorithm.
To change the algorithm, one could implement a custom password encoding via
Keycloak’s
PasswordHashProvider SPI. That works for user credential updates or newly
created users,
but what about the potentially large number of credentials of already
existing users
who are not active at the moment?
If you need to ensure that user credentials are encoded and stored with
the new algorithm, then you have to migrate all user credentials to the new
algorithm.
Storing and verifying passwords usually involves a single hashing step in each direction: once a password is stored as a hash, each attempt to enter the password is verified by applying the same hash function and comparing the hashes. If you have a collection of stored password hashes and the hash function must be changed, the only possibility (apart from re-initializing all password hashes) is to apply the second hash function to the existing hashes and to remember to hash entered passwords twice, too. That's why it is unavoidable to remember which hash function was used to create the first hash of each password. If this information can be reconstructed, the sequence of hash functions needed to turn a clear-text password into a comparable hash can be reapplied. If the hashes match, the given password can then be hashed with the new hash function alone and stored as the new hash value, effectively migrating the password to the new hash function. That's what I propose below.
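To make the double hashing concrete, here is a small, self-contained sketch using only the JDK's PBKDF2 implementations. The algorithm names, iteration counts and key sizes are just an example pairing (SHA-1 as the old function, SHA-256 as the new one), not necessarily what Keycloak would use.

import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class DoubleHashExample {

    static String pbkdf2(String algorithm, char[] input, byte[] salt, int iterations, int keyBits)
            throws Exception {
        KeySpec spec = new PBEKeySpec(input, salt, iterations, keyBits);
        byte[] hash = SecretKeyFactory.getInstance(algorithm).generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(hash);
    }

    // hash_old: what is currently stored for existing users
    static String hashOld(String rawPassword, byte[] salt) throws Exception {
        return pbkdf2("PBKDF2WithHmacSHA1", rawPassword.toCharArray(), salt, 20000, 512);
    }

    // hash_new applied on top of the old hash value, as in migration step 1 below
    static String hashNewOverOld(String oldHashValue, byte[] salt) throws Exception {
        return pbkdf2("PBKDF2WithHmacSHA256", oldHashValue.toCharArray(), salt, 27500, 512);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = "example-salt".getBytes();        // a real salt would be random per credential
        String stored = hashOld("s3cr3t", salt);        // value already in the database
        String migrated = hashNewOverOld(stored, salt); // new stored value, marked "migration_required"
        // at login time, hash_new(hash_old(entered password)) must equal the migrated value
        System.out.println(migrated.equals(hashNewOverOld(hashOld("s3cr3t", salt), salt)));
    }
}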
The following describes an incremental method for credential updates,
verification and migration.
* Incremental Credential Migration
Imagine that you have two different credential encoding algorithms:
hash_old(input, ...) - The current encoding algorithm
hash_new(input, ...) - The new encoding algorithm
We now want to update all stored credentials to use the hash_new encoding
algorithm.
In order to achieve this the following two steps need to be performed.
1. Incrementally encode existing credentials
In this step the existing credentials are encoded with the new encoding
algorithm hash_new
and stored as the new credential value with additional metadata (old
encoding, new encoding)
annotated with a “migration_required” marker.
This marker is later used, during credential validation, to detect credentials which need migration.
Note that since we encode the already encoded credential value we do not
need to know the plain
text of the credential to perform the encoding.
Encoding all credentials will probably take some time and CPU resources, depending on the number of credentials and the configuration of the encoding function used.
Therefore it makes sense to perform this step incrementally and in parallel
to the credential validation described in Step 2. This is possible because
the newly encoded credential values
are annotated with a “migration_required” marker and all other credentials
will be handled by their associated encoding algorithm.
Eventually all credentials will be encoded with the new encoding algorithm.
Pseudo-code: encode credentials with the new encoding

for (CredentialModel credential : passwordCredentials) {
    // check whether the given credential still needs migration, i.e. only uses hash_old
    if (isCredentialMigrationRequired(credential)) {
        metadata = credential.getConfig();
        // credential.value: the original password encoded with hash_old
        newValue = hash_new(credential.value, credential.salt, ...);
        metadata = updateMetadata(metadata, "hash_new", "migration_required");
        updateCredential(credential, newValue, metadata);
    }
}
2. Credential Validation and Migration
In this step the provided password is verified by comparing the stored
password hash against the
hash computed from the sequential application of the hash functions
hash_old and hash_new.
2.1 Credential Validation
For credentials marked with “migration_required”, compare the stored
credential hash value with the result of hash_new(hash_old(password,...
),...).
For all other credentials the associated credential encoding algorithm is
used.
Note that credential validation for not-yet-migrated credentials is more expensive due to the multiple hash functions being applied in sequence.
If the hashes match, we know that the given password was valid and the
actual credential migration can be performed.
2.2 Credential Migration
After successful validation of a credential tagged with a
“migration_required” marker, the given
password is encoded with the new hash function via hash_new(password). The
credential is now stored with the new hash value and updated metadata with
the “migration_required” marker removed.
This concludes the migration of the credential. After the migration the
hash_new(...) function is
sufficient to verify the credential.
Pseudo-code: validate and migrate credential

boolean verify(String rawPassword, CredentialModel cred) {
    if (isMarkedForMigration(cred)) {
        // Step 2.1: validate the credential by encoding the rawPassword
        // with the hash_old and then the hash_new algorithm.
        if (hash_new(hash_old(rawPassword, cred), cred).equals(cred.value)) {
            // Step 2.2: perform the credential migration
            migrateCredential(cred, hash_new(rawPassword, cred));
            return true;
        }
    } else {
        // verify the credential with hash_new(...) or hash_old(...),
        // depending on which algorithm the stored credential is associated with
    }
    return false;
}
TLDR: Conclusion
The proposed approach supports migration of credentials to a new encoding
algorithm in a two step process.
First the existing credential value, hashed with the old hash function, is
hashed again with the new hash
function. The resulting hash is then stored in the credential annotated
with a migration marker.
To verify a given password against the stored credential hash, the same
sequence of hash functions is applied to the
password and the resulting hash value is then compared against the stored
hash.
If the hash matches, the actual credential migration is performed by
hashing the given password again but
this time only with the new hash function.
The resulting hash is then stored with the credential without the migration
marker.
The main benefit of this method is that one can migrate existing credential
encoding mechanisms to new
ones without having to keep old credentials hashed with potentially
insecure algorithms around.
The method can incrementally update the credentials by using markers on the
stored credentials to
steer credential validation.
It comes with the cost of potentially more CPU intensive credential
validation for non-migrated
credentials that need to be verified and migrated.
Given the continuous progress in the fields of security and cryptography, it is only a matter of time before one needs to change a credential encoding mechanism in order to comply with the latest recommended security standards.
Therefore I think this incremental credential migration would be a valuable feature to add to Keycloak.
What do you guys think?
Cheers,
Thomas
Deploying provider with dependencies
by Dmitry Telegin
Hi,
It's easy to imagine a provider that integrates a third-party library which, together with transitive dependencies, might result in dozens of JARs. A real-world example: an OpenID 2.0 login protocol implementation using openid4java, which in turn pulls in another 10 JARs.
What are the deployment options for configurations like that? Is it really necessary to install each and every dependency as a WildFly module? This could become a PITA if there are a lot of deps. Could it be a single, self-sufficient artifact that is simply put into the deployments subdirectory? If yes, what type of artifact should it be (an EAR maybe)?
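For reference, the WildFly module route would mean maintaining something like the following module.xml (module and JAR names invented for the example), with the provider JAR and every dependency JAR registered as resource roots:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.example.openid2-provider">
    <resources>
        <!-- the provider itself plus its dependency JARs, all in one module -->
        <resource-root path="openid2-provider.jar"/>
        <resource-root path="openid4java-1.0.0.jar"/>
        <!-- ... remaining transitive dependency JARs ... -->
    </resources>
    <dependencies>
        <module name="org.keycloak.keycloak-core"/>
        <module name="org.keycloak.keycloak-server-spi"/>
    </dependencies>
</module>

Hence the question whether a single, self-contained deployable artifact could replace this.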
Thx,
Dmitry
next-gen Keycloak proxy
by Bill Burke
Keycloak Proxy was written a few years ago to secure apps that can't use
an adapter provided by us. While Keycloak Proxy works (mostly?),
we've been pushing people to Apache + mod-auth-mellon or
mod-auth-openidc for non-Java apps. I predict that relying on Apache
to proxy and secure apps that can't use our adapters is going to quickly
become an issue for us. We already have a need to write extensions to
mod-auth-*, specifically to support Pedro's Authz work (which is really
nice BTW!). We could also do tighter integration to make the
configuration experience more user-friendly. The problem is we have
zero expertise in this area and none of us are C/C++ developers (I
haven't coded in C/C++ since 1999 when I was at Iona).
This brings me to what would be the next generation of the Keycloak
Proxy. The first thing I'd like to improve is that configuration would
happen within the admin console. This configuration could be made much
simpler as whatever protocol configuration that would be needed could be
hard-coded and pre-configured. Mappers would focus on mapping values
to HTTP headers.
Beyond configuration, things become more interesting and complex, and there are multiple factors in deciding the authentication protocol, proxy design, and provisioning:
* Can/Should one Keycloak Proxy virtual host and proxy multiple apps in
same instance? One thing stopping this is SSL. If Keycloak Proxy is
handling SSL, then there is no possibility of virtual hosting. If the
load balancer is handling SSL, then this is a possibility.
* Keycloak Proxy currently needs an HttpSession, as it stores authentication information (JWS access token and refresh token) there so it can forward it to the application. We'd either have to shrink the needed information so it could be stored in a cookie, or replicate sessions; the latter would have the same issues with cross-DC.
* Should we collocate Keycloak Proxy with the Keycloak runtime? That is, should Keycloak Proxy have direct access to UserSession, ClientSession, and other model interfaces? The benefit of this is that you could have a really optimized auth protocol: you'd still have to bounce the browser to set up cookies, but everything else could be handled through the ClientSession object and there would be no need to generate or store tokens.
* Collocation is even nicer if virtual hosting could be done, as there would be no configuration needed for the proxy. It would just be configured as a Keycloak instance and would pull the apps it needs to proxy from the database.
New Account Management Console and Account REST api
by Stian Thorgersen
As we've discussed a few times now, the plan is to do a brand new account management console. Instead of old-school forms it will be all modern, using HTML5, AngularJS and REST endpoints.
The JIRA for this work is:
https://issues.jboss.org/browse/KEYCLOAK-1250
We were hoping to get some help from the professional UXP folks for this, but it looks like that may take some time. In the meantime the plan is to base it on the following template:
https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html#
Also, we'll try to use some newer things from PatternFly patterns to
improve the screens.
First pass will have the same functionality and behavior as the old account
management console. Second pass will be to improve the usability (pages
like linking, sessions and history are not very nice).
We will deprecate the old FreeMarker/forms way of doing things, but keep it
around so it doesn't break what people are already doing. This can be
removed in the future (probably RHSSO 8.0?).
We'll also need to provide full rest endpoints for the account management
console. I'll work on that, while Stan works on the UI.
As the account management console will be a pure HTML5 and JS app anyone
can completely replace it with a theme. They can also customize it a lot.
We'll also need to make sure it's easy to add additional pages/sections.
Rather than just adding to AccountService, I'm going to rename it to DeprecatedAccountFormService, remove all REST from there, and add a new AccountService that only does REST. All features currently available through forms will be available as a REST API, with the exception of account linking, which will be done through Bill's work introduced in 3.0 that allows applications to initiate account linking.
Test SMTP settings for realm configuration
by Bruno Oliveira
Good morning,
Today, if there's something wrong with the SMTP settings, a Keycloak admin will only notice it when users start to see error messages on the "Forgot password" form and complain, or by reading the logs. I would like to add a button to test the SMTP connection, like we do for LDAP.
For that I created the following Jira:
https://issues.jboss.org/browse/KEYCLOAK-4604
Does it make sense?
Keycloak 3.0.0.CR1 released
by Stian Thorgersen
Keycloak 3.0.0.CR1 is released. Even though we've been busy wrapping up Keycloak 2.5, we've managed to include quite a few new features.
To download the release go to the Keycloak homepage
<http://www.keycloak.org/downloads>.
This release is the first that comes without Mongo support.
Highlights
- *No-import option for LDAP* - This option allows consuming users from LDAP without importing them into the Keycloak database
- *Initiate linking of identity provider from application* - In the past
adding additional identity brokering accounts could only be done through
the account management console. Now this can be done from your application
- *Hide identity provider* - It's now possible to hide an identity
provider from the login page
- *Jetty 9.4* - Thanks to reneploetz <https://github.com/reneploetz> we
now have support for Jetty 9.4
- *Swedish translations* - Thanks to Viktor Kostov for adding Swedish
translations
- *Checksums for downloads* - The website now has md5 checksums for all
downloads
- *BOMs* - We've added BOMs for adapters as well as Server SPIs
The full list of resolved issues is available in JIRA
<https://issues.jboss.org/issues/?jql=project%20%3D%20keycloak%20and%20fix...>
.
Upgrading
Before you upgrade remember to backup your database and check the migration
guide
<https://keycloak.gitbooks.io/documentation/server_admin/topics/MigrationF...>
.
Profile SPI
by Stian Thorgersen
At the moment there is no single point to define validation for a user. Even worse, for the account management console and admin console it's not even possible to define validation for custom attributes.
Also, as there is no defined list of attributes for a user, the mapping of user attributes is error-prone.
I'd like to introduce a Profile SPI to help with this. It would have
methods to:
* Validate users during creation and updates
* List defined attributes on a user
There would be a built-in provider that would delegate to ProfileAttribute
SPI. ProfileAttribute SPI would allow defining configurable providers for
single user attributes. I'm also considering adding a separate Validation
SPI, so a ProfileAttribute provider could delegate validation to a separate
validator.
Users could also implement their own Profile provider to do whatever they
want. I'd like to aim to make the SPI a supported SPI.
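To make this a bit more concrete, here is a rough sketch of what the SPI could look like. None of the names or signatures below are final; they are just for illustration.

import java.util.List;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;
import org.keycloak.provider.Provider;

// Hypothetical Profile SPI: a single place to validate users and describe their attributes.
interface ProfileProvider extends Provider {

    // Validate a user during creation or update; an empty list means the user is valid.
    List<ProfileValidationError> validate(RealmModel realm, UserModel user);

    // The attributes defined for users in this realm (name, required, etc.).
    List<ProfileAttributeDefinition> getAttributeDefinitions(RealmModel realm);
}

// Hypothetical supporting types, kept minimal for the sketch.
class ProfileValidationError {
    String attribute;
    String message;
}

class ProfileAttributeDefinition {
    String name;
    boolean required;
}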
First pass would focus purely on validation. Second pass would focus on
using the attribute metadata to do things like:
* Have dropdown boxes in mappers to select the user attribute instead of copy/pasting the name
* Have additional built-in attributes on the registration form, update profile form and account management console that can be enabled/disabled by defining the Profile. I'm not suggesting a huge amount here; it will be limited to a few sensible attributes. Defining more complex things like an address would still be done by extending the forms.
token service
by Bill Burke
There seems to be momentum building around token services, particularly features around:
* Token downgrades. Reducing the scope of an access token when delegating to a separate, less trusted service. For example, you have a token with admin privileges and you want to remove those privileges before re-using the token against another service.
* Token exchanges. Ability to convert a foreign token to and from a
Keycloak one. For example, if you want to trust tokens issued by some
proprietary IBM IDM.
* Trusting tokens from other Keycloak domains. (Although I think this
can fall under token exchanges).
* Token revalidation (I think we have this).
There are some specs around this that Pedro pointed me to:
[1]https://tools.ietf.org/html/draft-richer-oauth-chain-00
[2]https://tools.ietf.org/html/draft-campbell-oauth-sts-01
I think they are either missing things we need or too complex for our needs.
* Token downgrades, or token redelegation/chaining
I don't want to require apps to know the exact scope they have to downgrade to when they want to reduce the scope for interacting with another service. Let's provide an additional extension to [1] and supply a "client" parameter containing the clientId of the client you want to redelegate to. The token returned would be a union of the access token's scope and the configured scope of the target client.
* Token exchanges
For [2], Keycloak just doesn't have all the concepts that are spoken about there. I also don't think the spec is good enough. Converting tokens would be handled by a Token Exchange SPI. A provider would be configured per realm and implemented on top of the ComponentModel SPI. Each of these provider instances would handle converting from an external token to a realm token and/or from a realm token to an external token. There would also be a REST endpoint on the realm to convert from an external token to a Keycloak one, and a separate REST endpoint for converting from a Keycloak token to an external one.
From external to Keycloak:
This would be a form POST to /token/convert-from with these additional form parameters:
"token" - REQUIRED. String representation of the token.
"provider" - REQUIRED. Id of the transformer registered in the realm for the token type.
"requested-token-type" - OPTIONAL. "id", "access", "offline", or "refresh". Default is "access".
"scope" - OPTIONAL. Same as the OAuth scope parameter.
This operation is analogous to the code to token flow. Here we are
creating a token tailored to the authenticated client. So all scope
configurations and mappers that the client has are applied. This means
that the client must be registered as an OIDC client. The SPI would look
something like this:
interface TokenExchangeFromProvider extends Provider {

    Transformer parse(ClientModel client, Map<String, String> formParameters);

    interface Transformer {
        UserModel getUser();
        IDToken convert(IDToken idToken);
        AccessToken convert(AccessToken accessToken);
    }
}
The getUser() method returns a user that was authenticated from the
external token. The convert() methods just give the provider the
flexibility to do further transformations on the returned token.
The runtime would do something like this:

ClientModel authenticatedClient = ...;
ComponentModel model = realm.getComponent(formParams.get("provider"));
TokenExchangeFromProvider provider =
    session.getProvider(TokenExchangeFromProvider.class, model);
Transformer transformer = provider.parse(authenticatedClient, formParams);
UserModel user = transformer.getUser();
if (formParams.get("requested-token-type").equals("access")) {
    AccessToken accessToken = generateAccessToken(authenticatedClient, user, ...);
    accessToken = transformer.convert(accessToken);
}
Something similar would be done for converting a Keycloak token to an
external token:
This would be a form POST to /token/convert-to with these additional form parameters:
"token" - REQUIRED. String representation of the token.
"provider" - REQUIRED. Id of the transformer registered in the realm for the token type.
interface TokenExchangeToProvider extends Provider {
    ResponseBuilder parse(ClientModel client, Map<String, String> formParameters);
}
Since we're crafting something for an external token system, we give the
provider complete autonomy in crafting the HTTP response to this operation.