Is this working on Keycloak 1.3.1.Final?
Can you describe your node/network setup? I mean, did you run 2 Keycloak nodes in
standalone HA mode on different boxes? The only additional thing in our setup is mod_cluster.
-- Rajat
-----Original Message-----
From: Stian Thorgersen [mailto:stian@redhat.com]
Sent: 27 July 2015 19:08
To: Nair, Rajat
Cc: keycloak-user(a)lists.jboss.org
Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
Just confirmed that this is working fine here with distributed cache and owners=2. I can
stop/start nodes and still get user info.
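If you want to reproduce that check, here is a minimal probe sketch. The host, port and
realm are taken from the logs further down this thread, but treat the whole thing as an
untested illustration, not a recipe:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Replays an already-issued bearer token against the userinfo endpoint.
    // Run it repeatedly while stopping/starting nodes; with owners="2" on the
    // sessions cache it should keep answering 200 rather than a 500.
    public class UserInfoProbe {
        public static void main(String[] args) throws Exception {
            String token = args[0]; // access token obtained before a node goes down
            URL url = new URL("http://test-server-110:8080/auth/realms/scaletest/protocol/openid-connect/userinfo");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "Bearer " + token);
            System.out.println("HTTP " + conn.getResponseCode());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line);
                }
            }
        }
    }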
----- Original Message -----
From: "Stian Thorgersen" <stian(a)redhat.com>
To: "Rajat Nair" <rajat.nair(a)hp.com>
Cc: keycloak-user(a)lists.jboss.org
Sent: Monday, 27 July, 2015 3:26:54 PM
Subject: Re: [keycloak-user] Distributed Keycloak user sessions using
Infinispan
----- Original Message -----
> From: "Rajat Nair" <rajat.nair(a)hp.com>
> To: "Stian Thorgersen" <stian(a)redhat.com>
> Cc: keycloak-user(a)lists.jboss.org
> Sent: Monday, 27 July, 2015 3:20:27 PM
> Subject: RE: [keycloak-user] Distributed Keycloak user sessions
> using Infinispan
>
> Thanks for the quick reply, Stian.
>
> > What version?
> We are using Keycloak 1.3.1.Final.
>
> > Did you remember to change userSessions provider to infinispan in
> > keycloak-server.json?
> Yes. We have the following in keycloak-server.json:
> "userSessions": {
>     "provider": "infinispan"
> }
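> For reference, that provider is wired to the Infinispan cache container through its
> JNDI name. A minimal sketch of how the two halves line up (the connectionsInfinispan
> key is written from memory of 1.3.x, so verify it against your keycloak-server.json):
>
> "userSessions": {
>     "provider": "infinispan"
> },
> "connectionsInfinispan": {
>     "default": {
>         "cacheContainer": "java:jboss/infinispan/Keycloak"
>     }
> }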
>
> > Firstly owners="2" should work fine as long as only one node dies
> > and the other remains active. Secondly it shouldn't return an NPE,
> > but an error, if the user session is not found.
> Could you elaborate on your 2nd point?
Do you have both nodes fully up and running before you kill one node?
It's a bug - if the session is expired it should return an error message, not an NPE
(see https://issues.jboss.org/browse/KEYCLOAK-1710).
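The fix will be roughly a guard of this shape in the endpoint (a hypothetical sketch,
not the actual KEYCLOAK-1710 patch; names follow the 1.3.x model API):

    // In UserInfoEndpoint, before dereferencing the session:
    UserSessionModel userSession = session.sessions().getUserSession(realm, token.getSessionState());
    if (userSession == null) {
        // Session expired or lost on failover: answer 401 with a Bearer
        // challenge instead of letting an NPE bubble up as a 500.
        throw new javax.ws.rs.NotAuthorizedException("Bearer");
    }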
>
> -- Rajat
>
> -----Original Message-----
> From: Stian Thorgersen [mailto:stian@redhat.com]
> Sent: 27 July 2015 18:07
> To: Nair, Rajat
> Cc: keycloak-user(a)lists.jboss.org
> Subject: Re: [keycloak-user] Distributed Keycloak user sessions
> using Infinispan
>
> Did you remember to change userSessions provider to infinispan in
> keycloak-server.json?
>
> ----- Original Message -----
> > From: "Stian Thorgersen" <stian(a)redhat.com>
> > To: "Rajat Nair" <rajat.nair(a)hp.com>
> > Cc: keycloak-user(a)lists.jboss.org
> > Sent: Monday, 27 July, 2015 2:24:17 PM
> > Subject: Re: [keycloak-user] Distributed Keycloak user sessions
> > using Infinispan
> >
> > What version?
> >
> > Firstly owners="2" should work fine as long as only one node dies
> > and the other remains active. Secondly it shouldn't return an NPE,
> > but an error, if the user session is not found.
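> >
> > To expand on owners: it is the number of copies of each entry the
> > distributed cache keeps. With two nodes and owners="2" every session
> > lives on both nodes, so a single failure loses nothing. In a bigger
> > cluster you would raise it to survive more simultaneous failures, for
> > example (at the cost of extra memory and replication traffic):
> >
> > <distributed-cache name="sessions" mode="SYNC" owners="3"/>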
> >
> > ----- Original Message -----
> > > From: "Rajat Nair" <rajat.nair(a)hp.com>
> > > To: keycloak-user(a)lists.jboss.org
> > > Sent: Monday, 27 July, 2015 2:03:47 PM
> > > Subject: [keycloak-user] Distributed Keycloak user sessions
> > > using Infinispan
> > >
> > >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I’m in the process of setting up distributed user sessions using
> > > Infinispan on my Keycloak cluster. This is the configuration I
> > > use –
> > >
> > > <cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
> > >     <transport lock-timeout="60000"/>
> > >     <invalidation-cache name="realms" mode="SYNC"/>
> > >     <invalidation-cache name="users" mode="SYNC"/>
> > >     <distributed-cache name="sessions" mode="SYNC" owners="2"/>
> > >     <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
> > > </cache-container>
> > >
> > >
> > >
> > >
> > >
> > > And in server.log, I can see the servers communicating –
> > >
> > > 2015-07-27 10:27:24,662 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000310: Starting cluster-wide rebalance for cache users, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t10) ISPN000310: Starting cluster-wide rebalance for cache realms, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,669 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > >
> > > 2015-07-27 10:27:24,808 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57
> > >
> > > 2015-07-27 10:27:24,810 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56
> > >
> > > 2015-07-27 10:27:24,988 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57
> > >
> > > 2015-07-27 10:27:25,530 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57
> > >
> > >
> > >
> > > I can successfully log in, get a token and fetch user details
> > > with this token.
> > >
> > >
> > >
> > > The problem is, if one of the nodes in the cluster goes down and we
> > > try to reuse a token which was already issued (so the workflow is –
> > > user logs in, gets a token, a node in the cluster goes down, then we
> > > fetch user details using the token) – we see an internal server
> > > exception. From the logs –
> > >
> > >
> > >
> > > 2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo: java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo
> > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)
> > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > >     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)
> > >     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> > >     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> > >     at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> > >     at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> > >     at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> > >     at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
> > >     at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
> > >     at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> > >     at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > >     at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
> > >     at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
> > >     at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
> > >     at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
> > >     at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> > >     at java.lang.Thread.run(Unknown Source)
> > > Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
> > >     at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
> > >     at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
> > >     at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
> > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
> > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
> > >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> > >     at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
> > >     at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)
> > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)
> > >     ... 31 more
> > > Caused by: java.lang.NullPointerException
> > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)
> > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> > >     at java.lang.reflect.Method.invoke(Unknown Source)
> > >     at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
> > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
> > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)
> > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
> > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
> > >     ... 42 more
> > >
> > >
> > >
> > >
> > >
> > > The user guide says –
> > >
> > > "If you need to prevent node failures from requiring users to log in
> > > again, set the owners attribute to 2 or more for the sessions cache."
> > >
> > >
> > >
> > > Questions -
> > >
> > > 1. Have we configured Infinispan incorrectly? We don’t want users to
> > > have to log in again if any node in the cluster goes down.
> > >
> > > 2. Will changing distributed-cache to replicated-cache help in
> > > this scenario?
> > >
> > > 3. Any way we can see the contents of the cache?
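> > >
> > > On question 2 - a replicated cache keeps a full copy of every session
> > > on every node, so any surviving node can serve any session; the cost
> > > is memory and write traffic on all nodes. The change would be a
> > > one-line sketch:
> > >
> > > <replicated-cache name="sessions" mode="SYNC"/>
> > >
> > > On question 3 - Infinispan does not let you browse cache contents
> > > directly, but it publishes per-cache statistics (entry counts, hits,
> > > misses) over JMX once statistics are enabled. A hedged Java sketch,
> > > assuming WildFly's remoting-jmx endpoint on port 9990 and the
> > > jboss.infinispan JMX domain; the exact ObjectName layout varies by
> > > version:
> > >
> > > import javax.management.MBeanServerConnection;
> > > import javax.management.ObjectName;
> > > import javax.management.remote.JMXConnector;
> > > import javax.management.remote.JMXConnectorFactory;
> > > import javax.management.remote.JMXServiceURL;
> > >
> > > public class CacheStats {
> > >     public static void main(String[] args) throws Exception {
> > >         // Requires jboss-client.jar on the classpath for remoting-jmx.
> > >         JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://test-server-110:9990");
> > >         try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
> > >             MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
> > >             // List every Infinispan cache MBean and print its entry count.
> > >             for (ObjectName name : mbs.queryNames(new ObjectName("jboss.infinispan:type=Cache,*"), null)) {
> > >                 try {
> > >                     System.out.println(name + " numberOfEntries=" + mbs.getAttribute(name, "numberOfEntries"));
> > >                 } catch (Exception notAStatsBean) {
> > >                     // Not every cache component MBean exposes statistics; skip those.
> > >                 }
> > >             }
> > >         }
> > >     }
> > > }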
> > >
> > >
> > >
> > > -- Rajat
> > >
> > >
> > >
_______________________________________________
keycloak-user mailing list
keycloak-user(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-user