Provisioning keycloak in a docker image for testing
by keycloak-list@vergien.net
Hi list,
I'm working on getting Keycloak running in a Docker container with a
defined dataset for integration testing.
I'm using a JSON realm export and a JSON file with the users.
My problem is that the imported users only have a userId and password,
but no other attributes like firstName, etc.
If I add the users to the realm export it works fine, but I would like
to use two different files, since the users tend to change more often
than the realm definition.
Is this behavior a bug or a design decision?
Here is a link to my Dockerfile:
https://git.loe.auf.uni-rostock.de/werbeo/docker/keycloak-werbeo
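For what it's worth, this is the shape of users file I would expect (the format written by the DIFFERENT_FILES users export strategy; the realm name and user data below are made up). Each entry is a full UserRepresentation, so firstName etc. should in principle survive the import:

```json
{
  "realm": "myrealm",
  "users": [
    {
      "username": "jdoe",
      "enabled": true,
      "firstName": "John",
      "lastName": "Doe",
      "email": "jdoe@example.com",
      "credentials": [
        { "type": "password", "value": "secret" }
      ]
    }
  ]
}
```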
Thanks in advance
Daniel
6 years, 8 months
In which cases will the return value of org.keycloak.representations.idm.GroupRepresentation.getAttributes() NOT be empty?
by Schenk, Manfred
At the moment I'm trying to read/view all users/groups of a certain realm using the REST interface with the help of the class "org.keycloak.admin.client.Keycloak".
I can get the list of available groups and their subgroups, and also the ids and names of the groups.
But I haven't figured out how I can achieve non-empty return values for the getAttributes() method of the org.keycloak.representations.idm.GroupRepresentation instances.
Do I need some special configuration or request parameter for the desired result?
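In case it helps: in my experience the group *listing* endpoint only returns brief representations, and the attributes show up when you fetch a single group by its id. A sketch against the raw admin REST API (the base URL, realm, and bearer-token handling here are assumptions, not taken from your setup):

```python
import json
import urllib.request

def group_detail_url(base_url: str, realm: str, group_id: str) -> str:
    # The listing endpoint /admin/realms/{realm}/groups returns only brief
    # representations; fetching one group by id returns the full
    # GroupRepresentation, including its attributes map.
    return f"{base_url}/admin/realms/{realm}/groups/{group_id}"

def fetch_group_with_attributes(base_url: str, realm: str,
                                group_id: str, token: str) -> dict:
    req = urllib.request.Request(
        group_detail_url(base_url, realm, group_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the Java admin client, the equivalent would be something like realm.groups().group(id).toRepresentation(), which as far as I know also returns the full representation.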
Thanks in advance,
Manfred
--
Manfred Schenk, Fraunhofer IOSB
Informationsmanagement und Leittechnik
Fraunhoferstraße 1,76131 Karlsruhe, Germany
Telefon +49 721 6091-391
mailto:Manfred.Schenk@iosb.fraunhofer.de
http://www.iosb.fraunhofer.de
Authorization Code Grant
by paolo lizarazu
Hi All, I have Keycloak running with a test realm and a client named someone;
the realm has an admin/admin user. I want to use the Authorization Code
Grant (which seems to be called Direct Access Grant in Keycloak), but I'm
not sure if I am misunderstanding something.
My application is a desktop one that has its own login window; inside this
I am sending a request to get a token like
POST http://localhost:9080/auth/realms/test/protocol/openid-connect/token
with body
grant_type=password&clientid=someone&username=admin&password=admin
This returns the JSON with access_token, expires_in, refresh_token, etc.,
so all seems good.
From here, if I want to get user info from
http://localhost:9080/auth/realms/test/protocol/openid-connect/userinfo
setting the header Authorization: Bearer code_token, I am getting 401
Unauthorized with
{
"error": "invalid_token",
"error_description": "Token invalid: Token is not active"
}
Should I do a new request with
grant_type=authorization_code&code=access_token to exchange the current
token for a long-lived one, and handle the refresh token later?
Sorry if this is a common or simple issue that I am just not getting to work.
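For reference, a quick sketch of how I would assemble the two requests (realm and client names copied from the mail above). Note that the standard OAuth2 form parameter is client_id with an underscore, and that the userinfo call wants the access_token *value* from the token response after "Bearer ". Also, grant_type=password is the Resource Owner Password grant; the Authorization Code grant instead sends the browser to the /auth endpoint and exchanges the returned code at the token endpoint.

```python
import urllib.parse

def password_grant_body(client_id: str, username: str, password: str) -> str:
    # Form body for grant_type=password; the parameter name is "client_id",
    # not "clientid".
    return urllib.parse.urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    })

def userinfo_headers(access_token: str) -> dict:
    # The userinfo endpoint expects the access_token value from the token
    # response, prefixed with "Bearer ".
    return {"Authorization": f"Bearer {access_token}"}
```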
Using Keycloak for authorisation only
by Cano Carballar, Enrique (GE Power)
Hi
I was wondering if anyone has any experience using Keycloak as an authorisation engine only, without the authentication part? Authentication would be done by an external OAuth2 product (CloudFoundry UAA).
Thanks.
Teiid JDBC Access with DirectAccessGrantLoginModule leads to clear text password in log
by Joe Strathern
Hello Keycloak Community,
I've encountered a logging issue using the DirectAccessGrantLoginModule
that can lead to clear-text passwords exposed in the logs.
In my application, I am leveraging Teiid for database virtualization, and
have secured its JDBC access using the DirectAccessGrantLoginModule in a
custom security domain. This allows users to access the data from JDBC
clients by providing a username/password instead of a token.
However, with this approach, when the login occurs and the Apache wire
category is at DEBUG level, the following line is logged:
2017-12-12 09:23:18,263 DEBUG [org.apache.http.wire] (NIO8) >>
"grant_type=password&username=MyUser&password=MyPassword&client_id=MyClient"
In the log line above, *MyPassword* will be in clear text, and visible to
anyone reviewing the logs.
Is there any way to leverage this login module (or improve it) to
ensure the user's clear-text password is not shown in the log for security
reasons? Perhaps an option or property that could encrypt/redact the
password in that log message?
I can add custom Wildfly loggers to not display messages from this package
at DEBUG, but it would be great if there was another option available to
avoid missing out on other messages from this package.
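Until there is a better option, the per-category logger you mention can be added via jboss-cli so that only the wire logging is raised above DEBUG (the category name is taken from your log line; the command shape is the standard WildFly logging subsystem one):

```
/subsystem=logging/logger=org.apache.http.wire:add(level=INFO)
```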
Thanks,
Joe
Offline client gone after redeploy
by Eivind Larsen
Hey Keycloak users!
We are running Keycloak 3.4.3.Final in HA mode on Kubernetes.
We have 3 nodes, named keycloak-0 keycloak-1 keycloak-2.
After a redeploy, or after killing one or several nodes, some offline
clients are refused when refreshing their tokens, and get this response
on the refresh_token grant:
Status: 401
Body: {"error":"invalid_grant","error_description":"Session doesn't have required client"}
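For context, the failing call is a plain refresh_token grant against the realm's token endpoint; a sketch of the request body (the client_id and token values are placeholders):

```python
import urllib.parse

def refresh_grant_body(client_id: str, refresh_token: str) -> str:
    # An offline token is refreshed like any other refresh token, via
    # grant_type=refresh_token at
    # /auth/realms/{realm}/protocol/openid-connect/token.
    return urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "refresh_token": refresh_token,
    })
```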
In standalone-ha.xml we have set:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
<cache-container name="keycloak" jndi-name="infinispan/Keycloak">
<distributed-cache name="sessions" mode="SYNC" owners="3"/>
<distributed-cache name="authenticationSessions" mode="SYNC" owners="3"/>
<distributed-cache name="offlineSessions" mode="SYNC" owners="3"/>
<distributed-cache name="clientSessions" mode="SYNC" owners="3"/>
<distributed-cache name="offlineClientSessions" mode="SYNC" owners="3"/>
<distributed-cache name="loginFailures" mode="SYNC" owners="3"/>
<distributed-cache name="actionTokens" mode="SYNC" owners="3">
…
</cache-container>
…
</subsystem>
…
<subsystem xmlns="urn:jboss:domain:jgroups:5.0">
<channels default="ee">
<channel name="ee" stack="tcp" cluster="ejb"/>
</channels>
<stacks>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="JDBC_PING">
<property
name="datasource_jndi_name">java:jboss/datasources/KeycloakDS</property>
<property
name="clear_table_on_view_change">true</property>
<property name="break_on_coord_rsp">true</property>
<property name="initialize_sql">CREATE TABLE
JGROUPSPING (own_addr varchar(200) NOT NULL, creation_timestamp
timestamp NOT NULL, cluster_name varchar(200) NOT NULL, ping_data
varbinary(5000) DEFAULT NULL, PRIMARY KEY (own_addr,
cluster_name))</property>
<property name="insert_single_sql">INSERT INTO
JGROUPSPING (own_addr, creation_timestamp, cluster_name, ping_data)
values (?, NOW(), ?, ?)</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS">
<property name="max_join_attempts">5</property>
</protocol>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
But this error still occurs.
I do believe JGroups is working correctly as well.
Here is an excerpt from a node keycloak-1 while killing keycloak-2.
12:19:54,876 WARN [org.infinispan.CLUSTER] (remote-thread--p9-t16)
[Context=offlineClientSessions]ISPN000312: Lost data because of
graceful leaver keycloak-2
12:19:54,888 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t17)
ISPN000310: Starting cluster-wide rebalance for cache
authenticationSessions, topology CacheTopology{id=2299,
rebalanceId=1056, currentCH=DefaultConsistentHash{ns=256, owners =
(2)[keycloak-1: 134+33, keycloak-0: 122+53]},
pendingCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+124, keycloak-0: 124+132]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0]}
12:19:54,889 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t17)
[Context=authenticationSessions][Scope=keycloak-1]ISPN100002: Started
local rebalance
12:19:54,959 WARN [org.infinispan.CLUSTER] (remote-thread--p9-t19)
[Context=clientSessions]ISPN000312: Lost data because of graceful
leaver keycloak-2
12:19:54,966 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t20)
ISPN000310: Starting cluster-wide rebalance for cache offlineSessions,
topology CacheTopology{id=2296, rebalanceId=1053,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
134+33, keycloak-0: 122+53]}, pendingCH=DefaultConsistentHash{ns=256,
owners = (2)[keycloak-1: 132+124, keycloak-0: 124+132]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0]}
12:19:54,967 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t20)
[Context=offlineSessions][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:19:54,969 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t17)
ISPN000310: Starting cluster-wide rebalance for cache sessions,
topology CacheTopology{id=2298, rebalanceId=1055,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
134+33, keycloak-0: 122+53]}, pendingCH=DefaultConsistentHash{ns=256,
owners = (2)[keycloak-1: 132+124, keycloak-0: 124+132]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0]}
12:19:54,969 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t17)
[Context=sessions][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:19:54,966 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t23)
ISPN000310: Starting cluster-wide rebalance for cache loginFailures,
topology CacheTopology{id=2295, rebalanceId=1052,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
134+33, keycloak-0: 122+53]}, pendingCH=DefaultConsistentHash{ns=256,
owners = (2)[keycloak-1: 132+124, keycloak-0: 124+132]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0]}
12:19:55,016 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t23)
[Context=loginFailures][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:19:55,017 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t22)
ISPN000310: Starting cluster-wide rebalance for cache actionTokens,
topology CacheTopology{id=2296, rebalanceId=1053,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
134+33, keycloak-0: 122+53]}, pendingCH=DefaultConsistentHash{ns=256,
owners = (2)[keycloak-1: 132+124, keycloak-0: 124+132]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0]}
12:19:55,017 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t22)
[Context=actionTokens][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:19:55,019 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t21)
[Context=authenticationSessions][Scope=keycloak-0]ISPN100003: Finished
local rebalance
12:19:55,030 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t21)
[Context=sessions][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:19:55,031 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t14)
[Context=authenticationSessions][Scope=keycloak-1]ISPN100003: Finished
local rebalance
12:19:55,031 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t14) ISPN000336: Finished cluster-wide
rebalance for cache authenticationSessions, topology id = 2299
12:19:55,033 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t23)
[Context=offlineSessions][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:19:55,033 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t21)
[Context=actionTokens][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:19:55,035 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t16)
[Context=loginFailures][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:19:55,036 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t10)
[Context=sessions][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:19:55,036 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t10) ISPN000336: Finished cluster-wide
rebalance for cache sessions, topology id = 2298
12:19:55,115 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t22)
[Context=loginFailures][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:19:55,115 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t22)
ISPN000336: Finished cluster-wide rebalance for cache loginFailures,
topology id = 2295
12:19:55,117 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t23)
[Context=actionTokens][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:19:55,117 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t23)
ISPN000336: Finished cluster-wide rebalance for cache actionTokens,
topology id = 2296
12:19:55,121 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t25)
[Context=offlineSessions][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:19:55,121 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t25) ISPN000336: Finished cluster-wide
rebalance for cache offlineSessions, topology id = 2296
12:19:56,029 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|831] (2) [keycloak-1, keycloak-0]
12:19:56,030 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|831] (2) [keycloak-1, keycloak-0]
12:19:56,037 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|831] (2) [keycloak-1, keycloak-0]
12:19:56,037 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|831] (2) [keycloak-1, keycloak-0]
12:19:56,038 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|831] (2) [keycloak-1, keycloak-0]
12:19:56,718 ERROR [org.jgroups.protocols.TCP]
(TransferQueueBundler,ejb,keycloak-1) JGRP000029: keycloak-1: failed
sending message to keycloak-2 (61 bytes):
java.net.SocketTimeoutException: connect timed out, headers: UNICAST3:
ACK, seqno=1486, ts=1332, TP: [cluster_name=ejb]
.... then, after the node comes up again:
12:21:00,042 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|832] (3) [keycloak-1, keycloak-0, keycloak-2]
12:21:00,043 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|832] (3) [keycloak-1, keycloak-0, keycloak-2]
12:21:00,043 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|832] (3) [keycloak-1, keycloak-0, keycloak-2]
12:21:00,044 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|832] (3) [keycloak-1, keycloak-0, keycloak-2]
12:21:00,047 INFO
[org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(thread-1) ISPN000094: Received new cluster view for channel ejb:
[keycloak-1|832] (3) [keycloak-1, keycloak-0, keycloak-2]
12:21:03,458 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
ISPN000310: Starting cluster-wide rebalance for cache
offlineClientSessions, topology CacheTopology{id=1649,
rebalanceId=719, currentCH=DefaultConsistentHash{ns=256, owners =
(2)[keycloak-1: 132+0, keycloak-0: 124+0]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1: 87+0,
keycloak-0: 86+0, keycloak-2: 83+0]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,459 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=offlineClientSessions][Scope=keycloak-1]ISPN100002: Started
local rebalance
12:21:03,458 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000310: Starting cluster-wide rebalance for cache loginFailures,
topology CacheTopology{id=2297, rebalanceId=1053,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+124, keycloak-0: 124+132]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1:
87+82, keycloak-0: 86+75, keycloak-2: 83+99]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,459 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=loginFailures][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,460 INFO [org.infinispan.CLUSTER] (transport-thread--p20-t3)
[Context=offlineClientSessions][Scope=keycloak-1]ISPN100003: Finished
local rebalance
12:21:03,460 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t18)
[Context=loginFailures][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,462 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t25)
ISPN000310: Starting cluster-wide rebalance for cache work, topology
CacheTopology{id=1646, rebalanceId=716,
currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[keycloak-1:
132, keycloak-0: 124]}, pendingCH=ReplicatedConsistentHash{ns = 256,
owners = (3)[keycloak-1: 87, keycloak-0: 86, keycloak-2: 83]},
unionCH=null, actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,463 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t25)
[Context=work][Scope=keycloak-1]ISPN100002: Started local rebalance
12:21:03,463 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t17)
[Context=work][Scope=keycloak-1]ISPN100003: Finished local rebalance
12:21:03,476 INFO [org.infinispan.CLUSTER] (remote-thread--p6-t2)
ISPN000310: Starting cluster-wide rebalance for cache client-mappings,
topology CacheTopology{id=1657, rebalanceId=727,
currentCH=ReplicatedConsistentHash{ns = 256, owners = (2)[keycloak-1:
132, keycloak-0: 124]}, pendingCH=ReplicatedConsistentHash{ns = 256,
owners = (3)[keycloak-1: 87, keycloak-0: 86, keycloak-2: 83]},
unionCH=null, actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,476 INFO [org.infinispan.CLUSTER] (remote-thread--p6-t2)
[Context=client-mappings][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,515 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=offlineClientSessions][Scope=keycloak-0]ISPN100003: Finished
local rebalance
12:21:03,463 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t26)
ISPN000310: Starting cluster-wide rebalance for cache sessions,
topology CacheTopology{id=2300, rebalanceId=1056,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+124, keycloak-0: 124+132]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1:
87+82, keycloak-0: 86+75, keycloak-2: 83+99]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,516 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t26)
[Context=sessions][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,517 INFO [org.infinispan.CLUSTER]
(transport-thread--p19-t15)
[Context=client-mappings][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,519 INFO [org.infinispan.CLUSTER] (remote-thread--p6-t2)
[Context=client-mappings][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,522 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t12)
[Context=sessions][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,522 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t30)
[Context=loginFailures][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,522 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t26)
[Context=work][Scope=keycloak-0]ISPN100003: Finished local rebalance
12:21:03,523 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t30)
[Context=sessions][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,522 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
ISPN000310: Starting cluster-wide rebalance for cache offlineSessions,
topology CacheTopology{id=2298, rebalanceId=1054,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+124, keycloak-0: 124+132]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1:
87+82, keycloak-0: 86+75, keycloak-2: 83+99]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,522 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t28)
ISPN000310: Starting cluster-wide rebalance for cache clientSessions,
topology CacheTopology{id=1648, rebalanceId=718,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+0, keycloak-0: 124+0]}, pendingCH=DefaultConsistentHash{ns=256,
owners = (3)[keycloak-1: 87+0, keycloak-0: 86+0, keycloak-2: 83+0]},
unionCH=null, actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,523 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t28)
[Context=clientSessions][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,523 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=offlineSessions][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,523 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000310: Starting cluster-wide rebalance for cache actionTokens,
topology CacheTopology{id=2298, rebalanceId=1054,
currentCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1:
132+124, keycloak-0: 124+132]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1:
87+82, keycloak-0: 86+75, keycloak-2: 83+99]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,524 INFO [org.infinispan.CLUSTER]
(transport-thread--p20-t13)
[Context=offlineSessions][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,524 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=actionTokens][Scope=keycloak-1]ISPN100002: Started local
rebalance
12:21:03,523 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
ISPN000310: Starting cluster-wide rebalance for cache
authenticationSessions, topology CacheTopology{id=2301,
rebalanceId=1057, currentCH=DefaultConsistentHash{ns=256, owners =
(2)[keycloak-1: 132+124, keycloak-0: 124+132]},
pendingCH=DefaultConsistentHash{ns=256, owners = (3)[keycloak-1:
87+82, keycloak-0: 86+75, keycloak-2: 83+99]}, unionCH=null,
actualMembers=[keycloak-1, keycloak-0, keycloak-2]}
12:21:03,524 INFO [org.infinispan.CLUSTER] (transport-thread--p20-t8)
[Context=clientSessions][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,524 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
[Context=authenticationSessions][Scope=keycloak-1]ISPN100002: Started
local rebalance
12:21:03,525 INFO [org.infinispan.CLUSTER] (transport-thread--p20-t8)
[Context=actionTokens][Scope=keycloak-1]ISPN100003: Finished local
rebalance
12:21:03,525 INFO [org.infinispan.CLUSTER] (transport-thread--p20-t8)
[Context=authenticationSessions][Scope=keycloak-1]ISPN100003: Finished
local rebalance
12:21:03,526 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
[Context=clientSessions][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,528 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
[Context=actionTokens][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,528 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
[Context=offlineSessions][Scope=keycloak-0]ISPN100003: Finished local
rebalance
12:21:03,529 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=authenticationSessions][Scope=keycloak-0]ISPN100003: Finished
local rebalance
12:21:03,683 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=sessions][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:03,683 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
ISPN000336: Finished cluster-wide rebalance for cache sessions,
topology id = 2300
12:21:03,750 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=loginFailures][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:03,750 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
ISPN000336: Finished cluster-wide rebalance for cache loginFailures,
topology id = 2297
12:21:03,765 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
[Context=actionTokens][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:03,765 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t24)
ISPN000336: Finished cluster-wide rebalance for cache actionTokens,
topology id = 2298
12:21:04,058 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=offlineClientSessions][Scope=keycloak-2]ISPN100003: Finished
local rebalance
12:21:04,059 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000336: Finished cluster-wide rebalance for cache
offlineClientSessions, topology id = 1649
12:21:04,069 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=clientSessions][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:04,069 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000336: Finished cluster-wide rebalance for cache clientSessions,
topology id = 1648
12:21:04,351 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=work][Scope=keycloak-2]ISPN100003: Finished local rebalance
12:21:04,351 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000336: Finished cluster-wide rebalance for cache work, topology
id = 1646
12:21:04,374 INFO [org.infinispan.CLUSTER] (remote-thread--p6-t2)
[Context=client-mappings][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:04,374 INFO [org.infinispan.CLUSTER] (remote-thread--p6-t2)
ISPN000336: Finished cluster-wide rebalance for cache client-mappings,
topology id = 1657
12:21:04,460 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
[Context=offlineSessions][Scope=keycloak-2]ISPN100003: Finished local
rebalance
12:21:04,460 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t18)
ISPN000336: Finished cluster-wide rebalance for cache offlineSessions,
topology id = 2298
12:21:04,460 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
[Context=authenticationSessions][Scope=keycloak-2]ISPN100003: Finished
local rebalance
12:21:04,460 INFO [org.infinispan.CLUSTER] (remote-thread--p9-t27)
ISPN000336: Finished cluster-wide rebalance for cache
authenticationSessions, topology id = 2301
Is this working as intended?
- Eivind
Handling disabled users from LDAP
by Dockendorf, Trey
Currently we use Keycloak as an IdP tied to our LDAP environment. We are curious how we would go about having Keycloak reject logins from accounts we deem disabled in LDAP. Disabled could mean many things, one of which is password expiration. I see I could add a filter to our User Federation for LDAP, but I presume the user would then just show up as not found and never get any kind of “Your account is disabled” message.
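For the filter route, something along these lines would work as the custom user LDAP filter (the pwdAccountLockedTime attribute is an assumption based on OpenLDAP's ppolicy overlay; substitute whatever marks an account disabled in your directory), with the caveat you already note that filtered users simply appear as not found:

```
(&(objectClass=inetOrgPerson)(!(pwdAccountLockedTime=*)))
```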
Thanks,
- Trey
--
Trey Dockendorf
HPC Systems Engineer
Ohio Supercomputer Center
Access redirect_uri within Freemarker Template
by Frederik Schmitt
Is it possible to somehow access the redirect_uri within the login Freemarker template, similar to accessing the current locale?
We want to provide a link using this redirect_uri, giving users the ability to go back to where they came from.
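I have not verified which beans your Keycloak version exposes to login.ftl, so treat this as a hedged sketch: if the client bean and its base URL are available to the template, they can serve as a "back to the application" link even without the raw redirect_uri:

```
<#if client?? && client.baseUrl??>
  <a href="${client.baseUrl}">Back to application</a>
</#if>
```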
Thanks in advance.
Keys error with Authorization Code Flow
by Kamil Kitowski
Hi everyone.
I have a weird problem while using the Authorization Code Flow when I'm
trying to get a token with Postman and/or my web app (Angular).
In both cases I get to see the Keycloak login page, and after I provide
valid credentials, Keycloak presents an error:
"Unexpected error when handling authentication request to identity
provider.",
and an exception is thrown (logs):
"13:19:52,345 ERROR [org.keycloak.keys.FailsafeAesKeyProvider] (default
task-36) No active keys found, using failsafe provider, please login to
admin console to add keys. Clustering is not supported.
13:19:52,348 WARN [org.keycloak.services] (default task-36)
KC-SERVICES0013: Failed authentication: java.lang.RuntimeException:
org.keycloak.jose.jwe.JWEException: java.security.InvalidKeyException:
Illegal key size
at
org.keycloak.services.managers.CodeGenerateUtil$AuthenticatedClientSessionModelParser.retrieveCode(CodeGenerateUtil.java:221)
at
org.keycloak.services.managers.CodeGenerateUtil$AuthenticatedClientSessionModelParser.retrieveCode(CodeGenerateUtil.java:162)
at
org.keycloak.services.managers.ClientSessionCode.getOrGenerateCode(ClientSessionCode.java:246)
at
org.keycloak.protocol.oidc.OIDCLoginProtocol.authenticated(OIDCLoginProtocol.java:200)
at
org.keycloak.services.managers.AuthenticationManager.redirectAfterSuccessfulFlow(AuthenticationManager.java:727)
at
org.keycloak.services.managers.AuthenticationManager.redirectAfterSuccessfulFlow(AuthenticationManager.java:681)
at
org.keycloak.services.managers.AuthenticationManager.finishedRequiredActions(AuthenticationManager.java:807)
at
org.keycloak.authentication.AuthenticationProcessor.authenticationComplete(AuthenticationProcessor.java:993)
at
org.keycloak.authentication.AuthenticationProcessor.authenticationAction(AuthenticationProcessor.java:863)
at
org.keycloak.services.resources.LoginActionsService.processFlow(LoginActionsService.java:290)
at
org.keycloak.services.resources.LoginActionsService.processAuthentication(LoginActionsService.java:261)
at
org.keycloak.services.resources.LoginActionsService.authenticate(LoginActionsService.java:257)
at
org.keycloak.services.resources.LoginActionsService.authenticateForm(LoginActionsService.java:318)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:140)
at
org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
at
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
at
org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:138)
at
org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:101)
at
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:406)
at
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:213)
at
org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:228)
at
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
at
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
    at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:90)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
    at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
    at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
    at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
    at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
    at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
    at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
    at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
    at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
    at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
    at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
    at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
    at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
    at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
    at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
    at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
    at io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
    at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:812)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.keycloak.jose.jwe.JWEException: java.security.InvalidKeyException: Illegal key size
    at org.keycloak.jose.jwe.JWE.encodeJwe(JWE.java:142)
    at org.keycloak.util.TokenUtil.jweDirectEncode(TokenUtil.java:151)
    at org.keycloak.services.managers.CodeGenerateUtil$AuthenticatedClientSessionModelParser.retrieveCode(CodeGenerateUtil.java:219)
    ... 70 more
Caused by: java.security.InvalidKeyException: Illegal key size
    at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
    at javax.crypto.Cipher.init(Cipher.java:1393)
    at javax.crypto.Cipher.init(Cipher.java:1327)
    at org.keycloak.jose.jwe.enc.AesCbcHmacShaEncryptionProvider.encryptBytes(AesCbcHmacShaEncryptionProvider.java:120)
    at org.keycloak.jose.jwe.enc.AesCbcHmacShaEncryptionProvider.encodeJwe(AesCbcHmacShaEncryptionProvider.java:68)
    at org.keycloak.jose.jwe.JWE.encodeJwe(JWE.java:138)
    ... 72 more"
The mentioned error does not happen when I use a Direct Access Grant (with
the same user and his credentials), and I'm able to get a proper access
token, so there is definitely no problem with the LDAP connection.
Is this a configuration error, and is there a workaround? If it's a key
problem or an LDAP connection problem, why does it show up only with the
Authorization Code Flow?
I tested it on versions 3.4.3 and 4.0.0.Beta1, and it happens on both of
them.
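For what it's worth, `java.security.InvalidKeyException: Illegal key size` in
the JWE encryption path usually means the JVM running Keycloak is using the
restricted JCE policy, which caps AES at 128 bits, while the
AesCbcHmacSha provider in the trace needs a longer key. This is a guess based
on the exception, not something confirmed by the post; on Oracle JDK 8 before
8u161 the fix is typically to install the JCE Unlimited Strength Jurisdiction
Policy Files (or set the `crypto.policy=unlimited` security property on
8u151+). A minimal sketch to check what the JVM actually permits:

```java
import javax.crypto.Cipher;

public class CryptoPolicyCheck {
    public static void main(String[] args) throws Exception {
        // Maximum AES key length allowed by the active JCE policy:
        // 128 indicates the restricted policy; Integer.MAX_VALUE
        // indicates unlimited-strength crypto is enabled.
        int maxAes = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max allowed AES key length: " + maxAes);
        if (maxAes < 256) {
            System.out.println("Restricted JCE policy active - 256-bit AES "
                    + "(as needed by Keycloak's JWE encoding) will fail");
        } else {
            System.out.println("Unlimited-strength crypto is enabled");
        }
    }
}
```

Running this with the exact JVM that runs Keycloak (not just any JDK on the
box) would show whether the restricted policy is the culprit.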
Best regards,
--
Kitowski Kamil
6 years, 8 months