Annotation-based protection?
by Craig Setera
We are working to replace our Picketlink-based application code with
Keycloak and OAuth/OpenID Connect. We have a number of JAX-RS services
with "mixed" resource methods: some require authentication, while others
do not. We mark the methods that require authentication with @LoggedIn
and use Picketlink's method-interception support to manage access to them.
What is the best way to replace this kind of functionality, where some
resource methods require authentication and others do not? Specifying this
via web.xml does not seem like the proper approach, since it may force
authentication for services where we don't want that requirement. Is there
any built-in support in Keycloak for this kind of use case?
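For context, the rough direction we are considering as a replacement is to
re-declare @LoggedIn as a JAX-RS name-binding annotation and enforce it in a
ContainerRequestFilter that checks the KeycloakSecurityContext placed on the
request by the Keycloak servlet adapter. This is only a sketch under those
assumptions (the filter and annotation names are ours, not anything Keycloak
ships):

import java.io.IOException;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.annotation.Priority;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.NameBinding;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;
import org.keycloak.KeycloakSecurityContext;

// Our existing marker annotation, re-declared as a JAX-RS name binding
@NameBinding
@Retention(RetentionPolicy.RUNTIME)
@interface LoggedIn {}

// Runs only for resource methods/classes annotated with @LoggedIn;
// unannotated resource methods are not filtered and stay publicly accessible.
@LoggedIn
@Provider
@Priority(Priorities.AUTHENTICATION)
public class LoggedInFilter implements ContainerRequestFilter {

    @Context
    private HttpServletRequest servletRequest;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Present only when the Keycloak adapter has authenticated this request
        KeycloakSecurityContext ksc = (KeycloakSecurityContext)
                servletRequest.getAttribute(KeycloakSecurityContext.class.getName());
        if (ksc == null || ksc.getToken() == null) {
            requestContext.abortWith(
                    Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}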
Thanks,
Craig
=================================
Craig Setera
Chief Technology Officer
415-324-5861 | craig(a)baseventure.com
6 years, 2 months
Spring Security and Path based multi tenancy
by Tony Harris
I am trying to convert a Spring MVC web app that uses Spring Security with
Keycloak into a multi-tenant application. To do this, I followed the standard
example and implemented my own version of KeycloakWebSecurityConfigurerAdapter.
On first access I am routed to my KeycloakConfigResolver, where I can extract
the realm from the path. However, at some point I am redirected to
{applicationContext}/sso/login by KeycloakAuthenticationEntryPoint; this
request also ends up in my KeycloakConfigResolver, but because there is no
realm in the path I end up authenticating against the default realm.
Have I missed something? Has anyone made this work with Spring Security?
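In case it helps to frame an answer, the fallback I have in mind is to remember
the realm resolved from the path in the HTTP session and reuse it for realm-less
requests such as {applicationContext}/sso/login. This is only a sketch: all
class, attribute and path names below are mine, and the resolve() signature may
need adjusting to the adapter version in use:

import java.io.InputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.keycloak.adapters.KeycloakConfigResolver;
import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.KeycloakDeploymentBuilder;
import org.keycloak.adapters.spi.HttpFacade;
import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.RequestContextHolder;

public class PathAwareConfigResolver implements KeycloakConfigResolver {

    private static final String REALM_ATTR = "tenant.realm";
    // assumed URL shape: /{context}/tenant/{realm}/... ; adjust to the real path scheme
    private static final Pattern REALM_IN_PATH = Pattern.compile("/tenant/([^/]+)");

    private final Map<String, KeycloakDeployment> cache = new ConcurrentHashMap<>();

    @Override
    public KeycloakDeployment resolve(HttpFacade.Request request) {
        String realm = realmFromPath(request.getURI());
        RequestAttributes attrs = RequestContextHolder.getRequestAttributes();
        if (realm != null) {
            if (attrs != null) {
                // remember the tenant for later realm-less requests
                attrs.setAttribute(REALM_ATTR, realm, RequestAttributes.SCOPE_SESSION);
            }
        } else if (attrs != null) {
            // e.g. the {applicationContext}/sso/login redirect: no realm in the path,
            // so fall back to the realm resolved earlier in this session
            realm = (String) attrs.getAttribute(REALM_ATTR, RequestAttributes.SCOPE_SESSION);
        }
        String effectiveRealm = (realm != null) ? realm : "master";
        return cache.computeIfAbsent(effectiveRealm, this::loadDeployment);
    }

    private String realmFromPath(String uri) {
        Matcher m = REALM_IN_PATH.matcher(uri);
        return m.find() ? m.group(1) : null;
    }

    private KeycloakDeployment loadDeployment(String realm) {
        // hypothetical per-realm adapter config shipped on the classpath
        InputStream config = getClass().getResourceAsStream("/keycloak-" + realm + ".json");
        return KeycloakDeploymentBuilder.build(config);
    }
}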
Tony Harris
6 years, 2 months
How to debug Tomcat 8 KeycloakAuthenticatorValve?
by Narendra Pathai
I am using Keycloak with Tomcat 8, and I am able to successfully achieve
an OpenID Connect-based single sign-on flow.
But I am facing an issue with back-channel logout: when I click logout on
the Sessions tab, it reports success, but the application session is not
invalidated and can still be used until the token expires.
So I wanted to debug the source and see if I could find the root cause and
fix the issue, if any. Please guide me on how to debug
KeycloakAuthenticatorValve. I am using IntelliJ IDEA for development.
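In the meantime, to confirm from the application side whether the back-channel
logout ever invalidates the container session, I am planning to drop in a plain
servlet HttpSessionListener (standard Servlet API only; the class name is mine):

import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Logs session creation/destruction so I can see whether the back-channel
// logout actually invalidates the Tomcat session or only ends the SSO session.
@WebListener
public class SessionLoggingListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        System.out.println("Session created: " + se.getSession().getId());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        // If this never fires on logout from the admin console, the back-channel
        // call is not reaching or not invalidating the application session.
        System.out.println("Session destroyed: " + se.getSession().getId());
    }
}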
Let me know if any further details are required.
Regards,
Narendra Pathai
6 years, 2 months
Standalone HA tokens not immediately shared among nodes
by D V
Hi list,
I'm trying to cluster several Keycloak 4.0.0 nodes running in docker
containers. I'm seeing "interesting" (unexpected) behaviour when requesting
new OIDC tokens with "grant_type=password" (Direct Grant/ROPC) and then
attempting to get a new set with "grant_type=refresh_token".
After I start two nodes (containers), if I issue a "grant_type=password"
request to the node that started first, requests for
"grant_type=refresh_token" on the newer node fail. If I issue the
"grant_type=password" request to the node that started last, requests for
"grant_type=refresh_token" succeed on any node AND all future
password/refresh_token requests work correctly no matter which node handles
the request.
So, let's say node1 starts first and node2 starts second:
1. Password auth on node1: OK
2. Refresh token auth on node2 with token from previous step: Error:
invalid_grant (Invalid refresh token)
3. Refresh token auth on node1 with token from step 1: OK (new set of
refresh+access tokens)
BUT!
4. Password auth on node2: OK
5. Refresh token auth on node1 with token from previous step: OK! (new set
of refresh+access tokens)
6. Refresh token auth on node2 with token from step 4: OK
7. Password auth sequence from steps 1-3: also OK!
It's as though the node that starts most recently needs a password auth
request to "wake up" and start communicating with the rest of the cluster.
Once it does, everything's in sync.
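In case the exact requests matter, this is essentially what my test client does
against the standard token endpoint (plain Java sketch; the realm, client id,
credentials and node URLs are placeholders):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class TokenFlowCheck {

    // POST a form-encoded body to the standard OIDC token endpoint of one node
    static String postToTokenEndpoint(String nodeBaseUrl, String form) throws Exception {
        URL url = new URL(nodeBaseUrl + "/auth/realms/myrealm/protocol/openid-connect/token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }
        InputStream in = conn.getResponseCode() < 400 ? conn.getInputStream() : conn.getErrorStream();
        if (in == null) {
            return "HTTP " + conn.getResponseCode();
        }
        try (Scanner s = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    public static void main(String[] args) throws Exception {
        String enc = StandardCharsets.UTF_8.name();

        // Step 1: direct grant (grant_type=password) against node1
        String tokens = postToTokenEndpoint("http://node1:8080",
                "grant_type=password&client_id=my-client"
                        + "&username=" + URLEncoder.encode("user", enc)
                        + "&password=" + URLEncoder.encode("secret", enc));
        System.out.println("node1 password grant: " + tokens);

        // Step 2: refresh grant (grant_type=refresh_token) against node2, using the
        // refresh_token value from step 1 (JSON parsing omitted; paste it in here)
        String refreshToken = "...";
        System.out.println("node2 refresh grant: " + postToTokenEndpoint("http://node2:8080",
                "grant_type=refresh_token&client_id=my-client&refresh_token=" + refreshToken));
    }
}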
Some facts that are hopefully relevant:
* Keycloak 4.0.0 docker image base
* standalone-ha.xml from the distribution with changes in the JGroups subsystem:
I'm using JDBC_PING configured against the same DB as Keycloak itself
(MySQL 5.7). See the subsystem config below.
* Custom org.keycloak.storage.UserStorageProviderFactory SPI, which creates
a provider that makes an HTTP call to an external authentication service to
validate username/password credentials.
* A couple of custom themes.
* One realm with a handful of clients provisioned via a shell script that
just calls kcadm.sh and jboss-cli.sh
* There's a simple LB in front of both instances
JGroups subsystem config:
<subsystem xmlns="urn:jboss:domain:jgroups:5.0">
    <channels default="ee">
        <channel name="ee" stack="tcp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="external_addr">${env.HOST}</property>
                <property name="external_port">${env.PORT_7600}</property>
            </transport>
            <protocol type="org.jgroups.protocols.JDBC_PING">
                <property name="datasource_jndi_name">java:jboss/datasources/KeycloakDS</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
$HOST and $PORT_7600 are set to the external host:port combination that
allows the two instances to communicate.
There's also a socket-binding to a public interface:
<socket-binding name="jgroups-tcp" port="7600"/>
In the JGroups and Infinispan log entries I can see that the two nodes do find
each other and are able to communicate. I haven't been able to get
ispn-cli.sh to connect to the internal Infinispan instances running in the
containers, so I can't confirm that they hold the same entries, but as
described in the flows above, they do eventually work together.
Is there a configuration change I'm missing somewhere to make the new node
joining the cluster become aware of the other one?
Thanks for any help,
DV
6 years, 2 months
Invalid parameter: redirect_uri behind reverse proxy
by Corentin Dupont
Hello,
When opening the admin console at https://keycloak.mysite.com/auth/admin/,
the page redirects to:
https://keycloak.mysite.com/auth/realms/master/protocol/openid-connect/au...
But I get this message:
Invalid parameter: redirect_uri
It seems that Keycloak doesn't like the https in the redirect URI. Could that be the problem?
My Keycloak is behind a reverse proxy.
I set up the following elements in standalone.xml:
<http-listener name="default" socket-binding="http" enable-http2="true"
proxy-address-forwarding="true" redirect-socket="proxy-https"/>
<socket-binding name="proxy-https" port="443"/>
My reverse proxy is also setting headers: Host, X-Real-IP, X-Forwarded-For,
X-Forwarded-Proto.
Using tcpdump, I can see the following headers:
GET /auth/resources/4.4.0.final/login/keycloak/node_modules/patternfly/dist/fonts/OpenSans-Light-webfont.woff2 HTTP/1.0
Host: keycloak.staging.waziup.io
X-Real-IP: 18.195.197.182
X-Forwarded-For: 217.77.82.229, 18.195.197.182
X-Forwarded-Proto: http
Connection: close
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101
Firefox/62.0
Accept: application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: identity
Referer: https://keycloak.staging.waziup.io/auth/resources/4.4.0.final/login/keycl...
Cookie: _ga=GA1.2.823033289.1537866165; _gid=GA1.2.861449812.1537866165
Pragma: no-cache
Cache-Control: no-cache
Are they correct?
Thanks a lot
Corentin
6 years, 2 months
Session timeout and Single Logout
by Leonid Rozenblyum
Hello!
I'm using pac4j + Spring Security + Keycloak as the IdP, with SAML as the SSO
protocol.
I have a question about how to handle the session-timeout scenario correctly.
SCENARIO:
Let's have two web applications (WebApp1, WebApp2).
Let WebApp2 have a small session timeout (for ease of testing, e.g. 1 minute).
Log in to WebApp1.
Open WebApp2 in another tab of the same browser (so the user is
authenticated automatically through Keycloak).
Close the tab with WebApp2.
Wait until the session of WebApp2 expires.
Try to log out from WebApp1.
EXPECTED:
Single Logout works
ACTUALLY:
We end up logged back in to WebApp1.
Reason:
We get redirected to the IdP, then to WebApp2; inside WebApp2 the
security library cannot load the SSO-related information because it
no longer exists in the session (the session has expired).
So the Single Logout procedure fails and we are still logged in.
Does Keycloak have some support for this kind of scenario? Are there any
workarounds that can be applied? It does not look like a rare situation for
a user to close a browser tab.
Thanks in advance for help!
6 years, 2 months
Keycloak 3.4.3 to 4.X.X Migration Fails - we have 400-500 realms
by rony joy
Dear All,
We are currently using Keycloak 3.4.3 and trying to migrate to 4.3.0, but
startup fails due to a migration issue. We have around 400-500 realms in the
database. Please find the exception below. From the log it is clear that
"org.keycloak.migration.migrators.MigrateTo4_0_0.migrate(MigrateTo4_0_0.java:51)"
is the one causing the exception (see the code below; line 51 is the
session.realms().getRealms() stream call). Is this because of the large
number of realms? Any ideas?
@Override
public void migrate(KeycloakSession session) {
    session.realms().getRealms().stream().forEach(   // <-- line 51
            r -> {
                migrateRealm(session, r, false);
            }
    );
}
22:16:17,002 WARN [com.arjuna.ats.arjuna] (Transaction Reaper)
ARJUNA012117: TransactionReaper::check timeout for TX
0:ffffac110004:-14e6f320:5ba958b2:12 in state RUN
22:16:17,070 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0)
ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction
Reaper Worker 0,5,main] successfully canceled TX 0:ffffac110004:-14e6f320:5ba958b2:f
22:16:17,073 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0)
ARJUNA012095: Abort of action id 0:ffffac110004:-14e6f320:5ba958b2:12
invoked while multiple threads active within it.
22:16:17,079 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0)
ARJUNA012381: Action id 0:ffffac110004:-14e6f320:5ba958b2:12 completed with
multiple threads - thread ServerService Thread Pool -- 53 was in progress with
org.hibernate.event.internal.DefaultPersistEventListener.entityIsPersistent(DefaultPersistEventListener.java:163)
org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:128)
org.hibernate.internal.SessionImpl.firePersistOnFlush(SessionImpl.java:805)
org.hibernate.internal.SessionImpl.persistOnFlush(SessionImpl.java:798)
org.hibernate.engine.spi.CascadingActions$8.cascade(CascadingActions.java:340)
org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:423)
org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:348)
org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:187)
org.hibernate.engine.internal.Cascade.cascadeCollectionElements(Cascade.java:456)
org.hibernate.engine.internal.Cascade.cascadeCollection(Cascade.java:388)
org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:351)
org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:187)
org.hibernate.engine.internal.Cascade.cascade(Cascade.java:136)
org.hibernate.event.internal.AbstractSaveEventListener.cascadeAfterSave(AbstractSaveEventListener.java:445)
org.hibernate.event.internal.DefaultPersistEventListener.justCascade(DefaultPersistEventListener.java:172)
org.hibernate.event.internal.DefaultPersistEventListener.entityIsPersistent(DefaultPersistEventListener.java:164)
org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:128)
org.hibernate.internal.SessionImpl.firePersistOnFlush(SessionImpl.java:805)
org.hibernate.internal.SessionImpl.persistOnFlush(SessionImpl.java:798)
org.hibernate.engine.spi.CascadingActions$8.cascade(CascadingActions.java:340)
org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:423)
org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:348)
org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:187)
org.hibernate.engine.internal.Cascade.cascadeCollectionElements(Cascade.java:456)
org.hibernate.engine.internal.Cascade.cascadeCollection(Cascade.java:388)
org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:351)
org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:187)
org.hibernate.engine.internal.Cascade.cascade(Cascade.java:136)
org.hibernate.event.internal.AbstractFlushingEventListener.cascadeOnFlush(AbstractFlushingEventListener.java:150)
org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(AbstractFlushingEventListener.java:141)
org.hibernate.event.internal.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:74)
org.hibernate.event.internal.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:44)
org.hibernate.internal.SessionImpl.autoFlushIfRequired(SessionImpl.java:1264)
org.hibernate.internal.SessionImpl.list(SessionImpl.java:1332)
org.hibernate.internal.QueryImpl.list(QueryImpl.java:87)
org.hibernate.jpa.internal.QueryImpl.list(QueryImpl.java:606)
org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:483)
org.keycloak.models.jpa.ClientAdapter.getClientScopes(ClientAdapter.java:353)
org.keycloak.models.cache.infinispan.entities.CachedClient.<init>(CachedClient.java:119)
org.keycloak.models.cache.infinispan.RealmCacheSession.cacheClient(RealmCacheSession.java:1069)
org.keycloak.models.cache.infinispan.RealmCacheSession.getClientById(RealmCacheSession.java:1029)
org.keycloak.models.jpa.RealmAdapter.getMasterAdminClient(RealmAdapter.java:1037)
org.keycloak.models.cache.infinispan.entities.CachedRealm.<init>(CachedRealm.java:235)
org.keycloak.models.cache.infinispan.RealmCacheSession.getRealm(RealmCacheSession.java:399)
org.keycloak.models.jpa.JpaRealmProvider.getRealms(JpaRealmProvider.java:102)
org.keycloak.models.cache.infinispan.RealmCacheSession.getRealms(RealmCacheSession.java:459)
org.keycloak.migration.migrators.MigrateTo4_0_0.migrate(MigrateTo4_0_0.java:51)
org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:96)
org.keycloak.services.resources.KeycloakApplication.migrateModel(KeycloakApplication.java:245)
org.keycloak.services.resources.KeycloakApplication.migrateAndBootstrap(KeycloakApplication.java:186)
org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:145)
org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:136)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:150)
org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2298)
org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:340)
org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:253)
org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:120)
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:250)
io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:133)
io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:565)
io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:536)
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction$$Lambda$1001/538179304.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$1002/1005208678.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$1002/1005208678.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$1002/1005208678.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$1002/1005208678.call(Unknown Source)
io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:578)
org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:100)
org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
org.jboss.threads.JBossThread.run(JBossThread.java:320)
22:16:17,085 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0)
ARJUNA012108: CheckedAction::check - atomic action
0:ffffac110004:-14e6f320:5ba958b2:12 aborting with 1
threads active!
22:16:17,099 WARN
[org.hibernate.resource.transaction.backend.jta.internal.synchronization.SynchronizationCallbackCoordinatorTrackingImpl]
(Transaction Reaper Worker 0) HHH000451: Transaction afterCompletion called
by a background thread; delaying afterCompletion processing until the
original thread can handle it. [status=4]
22:16:17,101 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0)
ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction
Reaper Worker 0,5,main] successfully canceled TX 0:ffffac110004:-14e6f320:5ba958b2:12
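Unless there is a supported way to tune this, the workaround I am tempted to
experiment with is patching the migrator so that each realm is migrated in its
own transaction, roughly like the untested sketch below (it reuses the same
KeycloakModelUtils.runJobInTransaction helper that shows up in the stack trace):

// Untested sketch of a patched MigrateTo4_0_0.migrate(): collect the realm ids
// first, then migrate each realm in its own transaction so a single long-running
// transaction does not hit the reaper timeout with 400-500 realms.
// (imports: java.util.List, java.util.stream.Collectors,
//  org.keycloak.models.KeycloakSession, org.keycloak.models.RealmModel,
//  org.keycloak.models.utils.KeycloakModelUtils)
@Override
public void migrate(KeycloakSession session) {
    List<String> realmIds = session.realms().getRealms().stream()
            .map(RealmModel::getId)
            .collect(Collectors.toList());

    for (String id : realmIds) {
        KeycloakModelUtils.runJobInTransaction(session.getKeycloakSessionFactory(), s -> {
            RealmModel realm = s.realms().getRealm(id);
            migrateRealm(s, realm, false); // same private helper the migrator already calls
        });
    }
}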
--
Rony Joy
6 years, 2 months
SAML config using kcadm.sh
by Jochen Mader
I am currently trying to automate the setup of a SAML client using kcadm.sh.
Using 'kcadm.sh create clients -r SAML-DEMO -f saml-client.json' works when
using the Keycloak-specific JSON.
My SAML service provider gives me an sp-metadata.xml. Using that file in the
UI (Clients -> Create -> Import (Select File)) works and creates a
new client containing everything provided in the XML metadata.
Sadly, that doesn't seem to work with kcadm.sh: when I provide that file
instead of the JSON, it simply fails with 'Not a valid JSON document'.
Is there a way to use the XML file from kcadm.sh?
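If kcadm.sh cannot consume the metadata directly, the fallback I am considering
is the Java admin client, letting the server-side client description converter
turn the SAML metadata into a client representation first. This is an untested
sketch: I am assuming the admin client exposes that endpoint as
RealmResource.convertClientDescription(), and the URLs and credentials are
placeholders:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.ClientRepresentation;

public class ImportSamlClient {
    public static void main(String[] args) throws Exception {
        String metadata = new String(
                Files.readAllBytes(Paths.get("sp-metadata.xml")), StandardCharsets.UTF_8);

        Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl("http://localhost:8080/auth")
                .realm("master")
                .clientId("admin-cli")
                .username("admin")
                .password("admin")
                .build();

        // Let the server parse the SAML metadata into a client representation
        // (assumed to be exposed as convertClientDescription), then create the
        // client from it in the target realm.
        ClientRepresentation rep =
                keycloak.realm("SAML-DEMO").convertClientDescription(metadata);
        keycloak.realm("SAML-DEMO").clients().create(rep).close();
    }
}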
Thanks,
Jochen
6 years, 2 months