[keycloak-user] Distributed Keycloak user sessions using Infinispan

Nair, Rajat rajat.nair at hp.com
Mon Jul 27 09:20:27 EDT 2015


Thanks for the quick reply, Stian.

> What version?
We are using Keycloak 1.3.1.Final.
 
> Did you remember to change userSessions provider to infinispan in keycloak-server.json?
Yes. We have the following in keycloak-server.json:
"userSessions": {
    "provider": "infinispan"
}
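
Incidentally, regarding question 2 in my original mail below -- if we switch to replication, I assume the cache definitions would become something like this (syntax assumed from the WildFly Infinispan subsystem schema; unverified):

```xml
<cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
    <transport lock-timeout="60000"/>
    <invalidation-cache name="realms" mode="SYNC"/>
    <invalidation-cache name="users" mode="SYNC"/>
    <!-- Replicated: every node keeps a full copy of every entry, so no
         single node failure can lose a session, at the cost of extra
         memory and replication traffic on each write. -->
    <replicated-cache name="sessions" mode="SYNC"/>
    <replicated-cache name="loginFailures" mode="SYNC"/>
</cache-container>
```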

> Firstly owners="2" should work fine as long as only one node dies and 
> the other remains active. Secondly it shouldn't return an NPE, but an 
> error if the user session is not found.
Could you elaborate on your second point?
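
To make sure I follow -- is the expectation something like the guard below? (A plain map stands in for the Infinispan session cache; all names here are illustrative, not Keycloak's actual API.)

```java
import java.util.HashMap;
import java.util.Map;

public class SessionLookup {
    // Stand-in for the distributed "sessions" cache.
    static final Map<String, String> sessions = new HashMap<>();

    // A missing session (e.g. its only owner died with owners="1") should
    // surface as a controlled error response, never as an NPE deeper in
    // the request pipeline.
    static String requireSession(String id) {
        String s = sessions.get(id);
        if (s == null) {
            throw new IllegalStateException("invalid_token: session " + id + " not found");
        }
        return s;
    }

    public static void main(String[] args) {
        sessions.put("abc123", "user-1");
        System.out.println(requireSession("abc123"));
        try {
            requireSession("gone"); // simulates an entry lost with a dead node
        } catch (IllegalStateException e) {
            System.out.println("handled: " + e.getMessage());
        }
    }
}
```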

-- Rajat

-----Original Message-----
From: Stian Thorgersen [mailto:stian at redhat.com] 
Sent: 27 July 2015 18:07
To: Nair, Rajat
Cc: keycloak-user at lists.jboss.org
Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan

Did you remember to change userSessions provider to infinispan in keycloak-server.json?

----- Original Message -----
> From: "Stian Thorgersen" <stian at redhat.com>
> To: "Rajat Nair" <rajat.nair at hp.com>
> Cc: keycloak-user at lists.jboss.org
> Sent: Monday, 27 July, 2015 2:24:17 PM
> Subject: Re: [keycloak-user] Distributed Keycloak user sessions using 
> Infinispan
> 
> What version?
> 
> Firstly owners="2" should work fine as long as only one node dies and 
> the other remains active. Secondly it shouldn't return an NPE, but an 
> error if the user session is not found.
> 
> ----- Original Message -----
> > From: "Rajat Nair" <rajat.nair at hp.com>
> > To: keycloak-user at lists.jboss.org
> > Sent: Monday, 27 July, 2015 2:03:47 PM
> > Subject: [keycloak-user] Distributed Keycloak user sessions using 
> > Infinispan
> > 
> > 
> > 
> > Hi,
> > 
> > 
> > 
> > I’m in the process of setting up distributed user sessions using 
> > Infinispan on my Keycloak cluster. This is the configuration I use –
> > 
> > <cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
> >     <transport lock-timeout="60000"/>
> >     <invalidation-cache name="realms" mode="SYNC"/>
> >     <invalidation-cache name="users" mode="SYNC"/>
> >     <distributed-cache name="sessions" mode="SYNC" owners="2"/>
> >     <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
> > </cache-container>
> > 
> > And in server.log, I can see the servers communicating:
> > 
> > 2015-07-27 10:27:24,662 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000310: Starting cluster-wide rebalance for cache users, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > 
> > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t10) ISPN000310: Starting cluster-wide rebalance for cache realms, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > 
> > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > 
> > 2015-07-27 10:27:24,669 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > 
> > 2015-07-27 10:27:24,808 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57
> > 
> > 2015-07-27 10:27:24,810 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56
> > 
> > 2015-07-27 10:27:24,988 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57
> > 
> > 2015-07-27 10:27:25,530 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57
> > 
> > 
> > 
> > I can successfully log in, get a token, and fetch user details with this token.
> > 
> > 
> > 
> > The problem is, if one of the nodes in the cluster goes down and we try to reuse a token that was already issued (so the workflow is: user logs in, gets a token, a node in the cluster goes down, then user details are fetched using the token), we see an internal server exception. From the logs:
> > 
> > 
> > 
> > 2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo: java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo
> >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)
> >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> >     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)
> >     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> >     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> >     at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> >     at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> >     at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> >     at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> >     at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> >     at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
> >     at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
> >     at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> >     at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
> >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> >     at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> >     at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
> >     at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
> >     at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)
> >     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)
> >     at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
> >     at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
> >     at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
> >     at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> >     at java.lang.Thread.run(Unknown Source)
> > Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
> >     at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
> >     at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
> >     at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
> >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
> >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
> >     at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
> >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
> >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
> >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> >     at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
> >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
> >     at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)
> >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)
> >     ... 31 more
> > Caused by: java.lang.NullPointerException
> >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)
> >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> >     at java.lang.reflect.Method.invoke(Unknown Source)
> >     at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
> >     at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
> >     at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
> >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
> >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)
> >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)
> >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
> >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
> >     ... 42 more
> > 
> > 
> > 
> > 
> > 
> > The user guide says: "If you need to prevent node failures from requiring users to log in again, set the owners attribute to 2 or more for the sessions cache."
> > 
> > 
> > 
> > Questions:
> > 
> > 1. Have we configured Infinispan incorrectly? We don't want users to have to log in again if any node in the cluster goes down.
> > 
> > 2. Will changing distributed-cache to replicated-cache help in this scenario?
> > 
> > 3. Is there any way we can see the contents of the cache?
> > 
> > 
> > 
> > -- Rajat
> > 
> > 
> > 
> > _______________________________________________
> > keycloak-user mailing list
> > keycloak-user at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/keycloak-user
> 


