I gave it a go and implemented an "async" authentication example. It's
rather simple; what happens is:
* User authenticates with username only
* Then a "waiting" page is displayed, which waits for some external
callback. This could be an app, or anything else that verifies the user and
then sends the callback. In the example, a curl command is printed on the
server's stdout, which you can run to "simulate" the callback from the app.
* Once the callback is received, the user is authenticated without filling
in a password or any other credentials in the main browser
Check it out here:
It's a bit hacky in the way it's implemented:
* Using notes for the "callback" is maybe a bit strange?
* Had to use a custom realm resource for the callback endpoint. Is this strange?
* Probably won't work cross-DC, but in 7.2 Hynek has stuff that does
* No way to push the change to the browser, so we have to poll every 2
seconds. Maybe we could add a simple authentication event feature that uses
WebSockets and a small auth JS lib to do the job of notification?
To program a client (in Go) for the REST API, I want to map all the JSON
representations to structs in the code.
To this end it would be great if there were JSON schema files to represent
them. But as I understand it, those aren't available (
The next best thing would be to generate the AsciiDoc files and parse
those, rather than parsing the HTML directly. But I get errors when I try
to generate the docs using the Maven "gen-asciidoc" goal.
mvn -X -P jboss-release
returns a NullPointerException and a Maven mojo failure.
I've checked that the correct versions of swagger2markup are getting
pulled in.
Does anyone else see this behaviour? Or is there some other way of
generating the AsciiDoc documentation?
My environment is as vanilla as it can get, without any modifications to
the default Maven settings.xml (it's pretty much empty except for comments).
mvn --version output:
Apache Maven 3.3.9 (NON-CANONICAL_2016-07-01T11:53:38Z_mockbuild;
Maven home: /usr/share/maven
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-220.127.116.11-1.b12.fc25.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.11.6-201.fc25.x86_64", arch: "amd64", family:
I've created a repo with the Docker image that I'm using for testing and
development, especially when using Minishift.
Regarding running on OpenShift, this image is very similar to what the
OpenShift.io team is using, with support for clustering via KUBE_PING.
Do we have anything like that for OpenShift? Is there any interest in
making it an upstream image?
I would like to report some unexpected behavior while requesting access-
and refresh token pairs. It is possible to obtain a new access- and refresh
token pair using only an access token. To describe this more thoroughly: if
someone has obtained a valid access token, they can obtain a new access- and
refresh token pair without ever knowing the refresh token.
The problem is that refresh tokens never leave the client except when
requesting a new one at the authorization server. However, the access token
is sent to resource servers for obtaining resources (obviously). But now a
resource server is actually able to obtain a new access- and refresh token
pair on behalf of the user as well, which was never the user's intention
(since it can keep a valid token indefinitely by refreshing it).
Of course, since the resource server doesn't have client credentials for
private clients it cannot obtain a new access- and refresh token pair for
those. However, it can do so for public clients as only their name is
known. (In fact, it is available in the "azp" claim of the access token.)
Steps to reproduce (I tested this with a clean setup of Keycloak
1. We will use the admin-cli client and the admin account. You can do this
with any client and account, but since this is already set up for this
particular example, it makes things a bit easier.
2. Using the admin account, fetch a new access- and refresh token pair
using any grant type. We will be using the password grant:
3. Grab the access_token value from the response and perform a refresh
grant using this access token:
4. You will now have a response including a new access- and refresh token
pair.
This unexpected behavior can be solved by either checking that the "typ"
claim is set to "Refresh", or, when time allows, using a different signing
secret for the access- and refresh tokens. I would prefer the latter.
Thanks in advance,
For a customer of mine, I submitted a few PRs and now have to dig into
changed behavior in one of the Keycloak integration tests.
Sadly, on the dev machine I am working on at my customer's site,
port 8081 is in use by a McAfee service program which I cannot kill.
Hence my question: would it be acceptable if I made a PR with an effort to
make 8081 (and I think 8082 also) configurable via e.g. a Maven property,
so I can override this when running the tests from that dev machine?
Is an effort planned for this?
Just so I do not waste too much time on this one :)
We've tried using auth-server-wildfly on Travis in the past, but the
problem has been that too many logs are generated (Travis kills the job
after the log file reaches 4 MB). Currently it uses the embedded Undertow,
which drops all org.keycloak logs, which is not good either.
I came up with a simple solution: a small Java app that the output is
piped into. For each test that is running it buffers the logs. If the
test passes, the logs generated during the test are dropped, but if there
is a failure they are printed. Log output is also prefixed with the
name of the test so it's easy to see which output is related to which test.
PR is here:
Log output when there are no failures can be seen here:
And when there are failures, here:
We are adopting Keycloak and are trying to move our OTP tokens over to Keycloak. However, Keycloak can only use secrets that are alphanumeric strings, whereas our existing implementation and most hardware and software tokens we have used use the full range of binary values when generating secrets.
1: Is the lower entropy of the secrets generated by Keycloak a concern?
2: If we provided a PR that migrated the existing data by re-encoding all existing secrets as Base32, and updated the code to assume Base32 instead of a plain string, would that be acceptable?
This would be a non-breaking change, but would allow anyone using existing OTP tokens to migrate their secrets, which I don't think they can do at the moment.
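The re-encoding itself is simple; a sketch in Go (assuming unpadded RFC 4648 Base32, as is common for OTP secrets — whether Keycloak's Base32 helper pads would need to be confirmed):

```go
package main

import (
	"encoding/base32"
	"fmt"
)

// Existing secrets are plain strings whose raw bytes feed the
// HOTP/TOTP algorithm. Re-encoding those bytes as Base32 lets binary
// secrets (full 0-255 byte range) share the same storage format.
func migrateSecret(raw []byte) string {
	// Assumption: unpadded RFC 4648 Base32.
	return base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(raw)
}

// decodeSecret is what the OTP validation side would do instead of
// using the stored string's bytes directly.
func decodeSecret(stored string) ([]byte, error) {
	return base32.StdEncoding.WithPadding(base32.NoPadding).DecodeString(stored)
}

func main() {
	// A legacy alphanumeric secret and a full-range binary secret
	// both round-trip through the same representation.
	legacy := migrateSecret([]byte("hello")) // NBSWY3DP
	binary := migrateSecret([]byte{0x00, 0xff, 0x10, 0x80, 0x7f})
	fmt.Println(legacy, binary)
}
```

Since the migration only changes how the same bytes are stored, existing tokens keep working, which is what makes the change non-breaking.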
This topic has been around for a while, and I'm glad to know that it
has finally been greenlit. Here we'll discuss the requirements for the
upcoming Realm Admin Resource SPI, as well as class/method naming and
any other relevant stuff.
An admin resource is a privileged (protected) resource that exposes the
Keycloak data model and business logic via REST. At the moment, it is
possible to create ad-hoc admin resources on top of the existing Realm
Resource SPI, but 1) it requires a lot of boilerplate and workarounds,
and 2) there are limitations. For a while, we've been developing a
Keycloak extension that utilizes such ad-hoc admin resources (for those
interested, it brings support for hardware OTP tokens with a full
lifecycle into Keycloak). We've tried to summarize our experience in
BeerCloak, where most techniques are demonstrated; I'll refer to
this example in the process.
Now, what makes an admin resource different from a regular realm
resource? I'll talk about some major features; feel free to share your
thoughts in case I've forgotten something.
As an admin resource most likely introduces some new functionality, it's
quite natural to limit access to this new functionality via custom
roles. For example, if we introduce a feature "foo", most likely we'll
want "view-foo" and "manage-foo" roles. This requirement breaks down to
the following steps:
* create roles for existing realms. That means adding roles to the "*-
realm" clients of the master realm and to the "realm-management" client of
each realm;
* ditto for each newly added realm;
* add roles to the "admin" realm role of master realm;
* server-side authorization: provide an instance of
o.k.services.resources.admin.AdminAuth (or subclass) to REST resource
methods so that they could call AdminAuth::hasAppRole;
* client-side authorization: add viewFoo and manageFoo properties to
the "access" AngularJS object.
All of the above is doable on top of the Realm Resource SPI (see
BeerCloak), but the code is 99% boilerplate. In fact, the only thing
that the provider (ideally) has to do is declare the two roles. The
actual implementation could be moved to the SPI. What we need to
discuss is how the roles should be declared (callback, annotation, etc.).
Most likely the admin resource will deal with custom JPA entities
defined using Entity SPI. Moreover, there will likely be a need to log
admin events about this entity and its operations. Currently, we are
limited to 4 generic operation types (see
o.k.events.admin.OperationType) and a list of 27 hardcoded
entity/resource types (see o.k.events.admin.ResourceType). This is a
serious limitation, because any provider that defines a custom entity
and/or REST operations will be unable to log its activity via the Keycloak
API and present it in the Keycloak GUI. As an example, the aforementioned
OTP extension would benefit from logging admin events with DEVICE
resource type and ENROLL/REVOKE/LOCK/UNLOCK/RESYNC additional operation
types. We can discuss now how the provider would define its
custom operation and resource types. Under the hood, the
existing enums should be extended with the supplied values (possibly
using the extensible enum pattern). The values should also be exposed so
that log filtering could be applied in the GUI.
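In essence, the extensible enum boils down to a registry that providers can add values to, resolvable by name at lookup time. A rough sketch of the idea (here in Go for brevity; all names are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

// OperationType mimics an extensible enum: built-in values are
// pre-registered, and providers can register custom ones that the
// event-logging and GUI filtering code resolves by name.
type OperationType struct{ Name string }

var operationTypes = map[string]OperationType{}

// RegisterOperationType adds a value; built-ins and provider-supplied
// values (e.g. ENROLL for an OTP extension) go through the same path.
func RegisterOperationType(name string) OperationType {
	if op, ok := operationTypes[name]; ok {
		return op
	}
	op := OperationType{Name: name}
	operationTypes[name] = op
	return op
}

// LookupOperationType resolves a stored event's operation by name.
func LookupOperationType(name string) (OperationType, bool) {
	op, ok := operationTypes[name]
	return op, ok
}

func main() {
	// Built-in values, mirroring the generic operation types.
	for _, n := range []string{"CREATE", "UPDATE", "DELETE", "ACTION"} {
		RegisterOperationType(n)
	}
	// A provider registers its custom values at deployment time.
	RegisterOperationType("ENROLL")
	RegisterOperationType("REVOKE")

	var names []string
	for n := range operationTypes {
		names = append(names, n)
	}
	sort.Strings(names)
	fmt.Println(names)
}
```

The same registry idea covers ResourceType; the open question above is only where the registration hook should live.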
Last time we discussed this, Stian said:
> Introducing this as part of a admin resource spi would make perfect
> sense.
I agree that extending OperationType is more pertinent to Admin
Resource SPI ("I define an operation and I want to be able to log it").
At the same time, extending ResourceType seems to be more topical for
Entity SPI ("I define an entity and I want to log everything about
it"). We should decide if we should extend the said SPIs, or otherwise
create independent OperationTypeSpi+ResourceTypeSpi that could be mixed
into any provider, be it entity, admin resource or anything else.
At the moment, this couldn't be implemented without modifying Keycloak
itself.
Relation to FeatureProvider
Bill, we once discussed the potential FeatureProvider:
> I was also thinking of having a FeatureProvider that would be an
> "uber" component that could install sub components. i.e. an
> authenticator, user federation provider, etc.
I think we could revisit this topic too, since both SPIs seem to be
related.
Thanks for reading this bulky post! Any feedback welcome,
CTO, CargoSoft LLC
https://gist.github.com/KamilT/3192681
I've reported KEYCLOAK-4928, and I was wondering if somebody could take a
look at the primary key constraints we've created to make sure we've got
them right. Thanks in advance!