Hi,

I’m in the process of setting up distributed user sessions using Infinispan on my Keycloak cluster. This is the configuration I use:

<cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
        <transport lock-timeout="60000"/>
        <invalidation-cache name="realms" mode="SYNC"/>
        <invalidation-cache name="users" mode="SYNC"/>
        <distributed-cache name="sessions" mode="SYNC" owners="2"/>
        <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
</cache-container>

And in server.log, I can see the servers communicating:

2015-07-27 10:27:24,662 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000310: Starting cluster-wide rebalance for cache users, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}

2015-07-27 10:27:24,665 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t10) ISPN000310: Starting cluster-wide rebalance for cache realms, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}

2015-07-27 10:27:24,665 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}

2015-07-27 10:27:24,669 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}

2015-07-27 10:27:24,808 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57

2015-07-27 10:27:24,810 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56

2015-07-27 10:27:24,988 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57

2015-07-27 10:27:25,530 INFO  [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57

 

I can successfully log in, get a token, and fetch user details with that token.
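
For reference, this is roughly how we exercise that flow from the command line. It is only a sketch: the client, user, and password values are placeholders, the host/port is just one of the cluster nodes, jq is only used to pull out the token, and the token endpoint path may differ slightly depending on the Keycloak version (the userinfo path is the one that appears in the stack trace below).

# 1. Log in with a direct access grant and extract the access token
#    (client_id, username and password are placeholders).
TOKEN=$(curl -s \
  -d "grant_type=password" \
  -d "client_id=test-client" \
  -d "username=testuser" \
  -d "password=testpassword" \
  "http://test-server-110:8080/auth/realms/scaletest/protocol/openid-connect/token" \
  | jq -r '.access_token')

# 2. Fetch user details with the token.
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://test-server-110:8080/auth/realms/scaletest/protocol/openid-connect/userinfo"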

 

The problem is: if one of the nodes in the cluster goes down and we then try to reuse a token that was already issued (so the workflow is: the user logs in and gets a token, a node in the cluster goes down, and we then fetch user details with that token), we see an internal server error. From the logs:

 

2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo: java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo

        at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)

        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)

        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)

        at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)

        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)

        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)

        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)

        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)

        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)

        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)

        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)

        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)

        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)

        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)

        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)

        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)

        at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)

        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)

        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)

        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)

        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)

        at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)

        at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)

        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)

        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)

        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)

        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)

        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)

        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

        at java.lang.Thread.run(Unknown Source)

Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException

        at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)

        at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)

        at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)

        at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)

        at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)

        at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)

        at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)

        at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)

        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)

        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)

        at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)

        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)

        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)

        at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)

        ... 31 more

Caused by: java.lang.NullPointerException

        at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)

        at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)

        at java.lang.reflect.Method.invoke(Unknown Source)

        at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)

        at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)

        at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)

        at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)

        at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)

        at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)

        at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)

        at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)

        ... 42 more

The user guide says:

If you need to prevent node failures from requiring users to log in again, set the owners attribute to 2 or more for the sessions cache.

 

Questions:

1. Have we configured Infinispan incorrectly? We don’t want users to have to log in again if one of the nodes in the cluster goes down.

2. Will changing the distributed-cache for sessions to a replicated-cache help in this scenario? (A sketch of the change I mean is below.)

3. Is there any way we can see the contents of the cache?
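
For clarity, the change I have in mind in question 2 would look roughly like this (an untested sketch; the other caches are left as they are):

<cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
        <transport lock-timeout="60000"/>
        <invalidation-cache name="realms" mode="SYNC"/>
        <invalidation-cache name="users" mode="SYNC"/>
        <!-- sessions replicated to every node instead of distributed across 2 owners -->
        <replicated-cache name="sessions" mode="SYNC"/>
        <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
</cache-container>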

 

-- Rajat