[keycloak-user] Distributed Keycloak user sessions using Infinispan
Nair, Rajat
rajat.nair at hp.com
Mon Jul 27 09:41:36 EDT 2015
> Do you have both nodes fully up and running before you kill one node?
Yes.
This is what we tried (the exact calls we make are sketched below) -
Case 1
1. Two-node cluster (both nodes running Keycloak) up and running. Load balancing configured using mod_cluster.
2. Log in and get a token.
3. Bring down one node.
4. Get user info using the existing token. This is when we get the NPE.
Case 2
1. Start with one node down. Bring it back up and wait for some time so that Infinispan can sync.
2. Bring down the other node.
3. Try to get user info using the existing token. Again we see the NPE.
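
For reference, a minimal sketch of the two calls we make (the client id and
credentials are placeholders, and the token endpoint path is our assumption
for this version; the userinfo path is the one from the stack trace below):

  # 1. Log in and obtain a token (direct access grant, placeholder client/user)
  curl -d "grant_type=password" -d "client_id=test-client" \
       -d "username=testuser" -d "password=secret" \
       http://<load-balancer>/auth/realms/scaletest/protocol/openid-connect/token

  # 2. Fetch user info with the access_token from step 1 - the call that hits the NPE
  curl -H "Authorization: Bearer <access_token>" \
       http://<load-balancer>/auth/realms/scaletest/protocol/openid-connect/userinfo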
> It's a bug - if session is expired it should return an error message, not a NPE (see https://issues.jboss.org/browse/KEYCLOAK-1710)
Thanks for tracking this.
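
For what it's worth, what we'd expect instead of the 500 is a bearer-token
error per RFC 6750 - a sketch of our expectation, not necessarily the exact
output of the fix:

  HTTP/1.1 401 Unauthorized
  WWW-Authenticate: Bearer realm="scaletest", error="invalid_token", error_description="Session not found"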
-- Rajat
----- Original Message -----
> From: "Rajat Nair" <rajat.nair at hp.com>
> To: "Stian Thorgersen" <stian at redhat.com>
> Cc: keycloak-user at lists.jboss.org
> Sent: Monday, 27 July, 2015 3:20:27 PM
> Subject: RE: [keycloak-user] Distributed Keycloak user sessions using
> Infinispan
>
> Thanks for the quick reply, Stian.
>
> > What version?
> We are using Keycloak 1.3.1 Final.
>
> > Did you remember to change userSessions provider to infinispan in
> > keycloak-server.json?
> Yes. We have the following in keycloak-server.json -
> "userSessions": {
>     "provider": "infinispan"
> }
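>
> For completeness, a sketch of the Infinispan connection section we'd expect
> alongside it (the cacheContainer JNDI name here is assumed to match the
> cache-container in our WildFly config below):
>
> "connectionsInfinispan": {
>     "provider": "default",
>     "default": {
>         "cacheContainer": "java:jboss/infinispan/Keycloak"
>     }
> }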
>
> > Firstly owners="2" should work fine as long as only one node dies
> > and the other remains active. Secondly it shouldn't return a NPE, but
> > an error if user session is not found.
> Could you elaborate on your 2nd point?
Do you have both nodes fully up and running before you kill one node?
It's a bug - if session is expired it should return an error message, not a NPE (see https://issues.jboss.org/browse/KEYCLOAK-1710)
>
> -- Rajat
>
> -----Original Message-----
> From: Stian Thorgersen [mailto:stian at redhat.com]
> Sent: 27 July 2015 18:07
> To: Nair, Rajat
> Cc: keycloak-user at lists.jboss.org
> Subject: Re: [keycloak-user] Distributed Keycloak user sessions using
> Infinispan
>
> Did you remember to change userSessions provider to infinispan in
> keycloak-server.json?
>
> ----- Original Message -----
> > From: "Stian Thorgersen" <stian at redhat.com>
> > To: "Rajat Nair" <rajat.nair at hp.com>
> > Cc: keycloak-user at lists.jboss.org
> > Sent: Monday, 27 July, 2015 2:24:17 PM
> > Subject: Re: [keycloak-user] Distributed Keycloak user sessions
> > using Infinispan
> >
> > What version?
> >
> > Firstly owners="2" should work fine as long as only one node dies
> > and the other remains active. Secondly it shouldn't return a NPE, but
> > an error if user session is not found.
> >
> > ----- Original Message -----
> > > From: "Rajat Nair" <rajat.nair at hp.com>
> > > To: keycloak-user at lists.jboss.org
> > > Sent: Monday, 27 July, 2015 2:03:47 PM
> > > Subject: [keycloak-user] Distributed Keycloak user sessions using
> > > Infinispan
> > >
> > >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I’m in the process of setting up distributed user sessions using
> > > Infinispan on my Keycloak cluster. This is the configuration I use –
> > >
> > > <cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
> > >     <transport lock-timeout="60000"/>
> > >     <invalidation-cache name="realms" mode="SYNC"/>
> > >     <invalidation-cache name="users" mode="SYNC"/>
> > >     <distributed-cache name="sessions" mode="SYNC" owners="2"/>
> > >     <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
> > > </cache-container>
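> > >
> > > For comparison, a sketch of the fully replicated variant we are
> > > considering (see question 2 below), assuming the stock WildFly
> > > Infinispan subsystem schema:
> > >
> > > <replicated-cache name="sessions" mode="SYNC"/>
> > > <replicated-cache name="loginFailures" mode="SYNC"/>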
> > >
> > >
> > >
> > >
> > >
> > > And in server.log, I can see my servers communicating –
> > >
> > > 2015-07-27 10:27:24,662 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000310: Starting cluster-wide rebalance for cache users, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t10) ISPN000310: Starting cluster-wide rebalance for cache realms, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,669 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,808 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57
> > >
> > > 2015-07-27 10:27:24,810 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56
> > >
> > > 2015-07-27 10:27:24,988 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57
> > >
> > > 2015-07-27 10:27:25,530 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57
> > >
> > >
> > >
> > > I can successfully log in, get a token and fetch user details with
> > > this token.
> > >
> > >
> > >
> > > The problem is, if one of the nodes in the cluster goes down and we
> > > try to reuse a token which was already issued (so the workflow is –
> > > user logs in, gets a token, (a node in the cluster goes down) and
> > > then fetches user details using the token) – we see an internal
> > > server exception. From the logs –
> > >
> > >
> > >
> > > 2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo: java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo
> > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)
> > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > >     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)
> > >     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> > >     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> > >     at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> > >     at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> > >     at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> > >     at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
> > >     at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
> > >     at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> > >     at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
> > >     at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
> > >     at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
> > >     at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> > >     at java.lang.Thread.run(Unknown Source)
> > > Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
> > >     at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
> > >     at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
> > >     at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
> > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
> > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
> > >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> > >     at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
> > >     at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)
> > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)
> > >     ... 31 more
> > > Caused by: java.lang.NullPointerException
> > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)
> > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> > >     at java.lang.reflect.Method.invoke(Unknown Source)
> > >     at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
> > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
> > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
> > >     ... 42 more
> > >
> > >
> > >
> > >
> > >
> > > The user guide says –
> > >
> > > If you need to prevent node failures from requiring users to log
> > > in again, set the owners attribute to 2 or more for the sessions
> > > cache
> > >
> > >
> > >
> > > Questions -
> > >
> > > 1. Have we configured Infinispan incorrectly? We don’t want users to
> > > have to log in again if any node in the cluster goes down.
> > >
> > > 2. Will changing distributed-cache to replicated-cache help in
> > > this scenario?
> > >
> > > 3. Any way we can see the contents of the cache?
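> > >
> > > (For question 3, a sketch of what we could inspect via jboss-cli,
> > > assuming the standard WildFly management API - it exposes runtime
> > > stats such as entry counts rather than the raw cache contents:
> > >
> > > /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:read-resource(include-runtime=true)
> > > )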
> > >
> > >
> > >
> > > -- Rajat
> > >
> > >
> > >
> > > _______________________________________________
> > > keycloak-user mailing list
> > > keycloak-user at lists.jboss.org
> > > https://lists.jboss.org/mailman/listinfo/keycloak-user
> >
>