Follow-up to our discussion -
I upgraded my nodes to Keycloak 1.4 Final, dropped and re-created the Postgres
database (shared between both nodes), and tested distributed user sessions using
the following commands -
- Fetch an access token using the following curl from one server:
curl --write-out " %{http_code}" -s --request POST \
  --header "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" \
  --data "username=user1@email.com&password=testpassword(a)&client_id=admin-client&grant_type=password" \
  "http://test-server-110:8080/auth/realms/test/protocol/openid-connect/token"
- Validate the token on a different server using:
curl --write-out " %{http_code}" -s --request GET \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer [ACCESS_TOKEN_FROM_PREVIOUS_CALL]" \
  "http://test-server-111:8081/auth/realms/test/protocol/openid-connect/userinfo"
And we get this -
{"error":"invalid_grant","error_description":"Token invalid"}
No more NPE or internal server error.
If we use the same token and try to fetch user details on the server which issued
the token, we get the correct data. (Note - I have confirmed that the token has not
expired.)
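For reference, the whole cross-node check can be scripted (a minimal sketch,
assuming jq is installed to pull access_token out of the JSON; hosts, realm and
credentials are the same as in the commands above):

TOKEN=$(curl -s --request POST \
  --header "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" \
  --data "username=user1@email.com&password=testpassword(a)&client_id=admin-client&grant_type=password" \
  "http://test-server-110:8080/auth/realms/test/protocol/openid-connect/token" | jq -r .access_token)
# Userinfo on the issuing node - returns the correct data:
curl -s --write-out " %{http_code}" --header "Authorization: Bearer $TOKEN" \
  "http://test-server-110:8080/auth/realms/test/protocol/openid-connect/userinfo"
# Userinfo on the other node - currently fails with invalid_grant:
curl -s --write-out " %{http_code}" --header "Authorization: Bearer $TOKEN" \
  "http://test-server-111:8081/auth/realms/test/protocol/openid-connect/userinfo"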
One thing you can try is to make sure user session replication is working properly:
1. Start two nodes
2. Open admin console directly on node 1 - login as admin/admin
3. Open admin console directly on node 2 from another machine/browser or use incognito mode - login as admin/admin
4. On node 1 go to users -> view all -> click on admin -> sessions -> you should see two sessions
5. On node 2 do the same and check you can see two sessions there as well
Now this is where things get strange. I followed the steps
described - used 2 different browsers - and I can see 2 sessions listed!
Is the process we use to validate the token incorrect? Or is the admin console
doing something different (like getting the data from the Postgres database shared
by both nodes)?
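One way to take the console out of the equation might be to list the sessions over
the admin REST API against each node directly (a sketch; USER_ID is a hypothetical
placeholder for the user id shown in the console URL, and we assume a direct-grant
client such as admin-cli is available in this version):

ADMIN_TOKEN=$(curl -s --request POST \
  --data "username=admin&password=admin&client_id=admin-cli&grant_type=password" \
  "http://test-server-110:8080/auth/realms/master/protocol/openid-connect/token" | jq -r .access_token)
# List the admin user's sessions as seen by node 1; repeat against test-server-111:8081:
curl -s --header "Authorization: Bearer $ADMIN_TOKEN" \
  "http://test-server-110:8080/auth/admin/realms/master/users/USER_ID/sessions"

If both nodes also return both sessions this way, replication itself looks fine and
the difference must be in how the userinfo endpoint validates the token.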
-- Rajat
-----Original Message-----
From: Stian Thorgersen [mailto:stian@redhat.com]
Sent: 28 July 2015 10:19
To: Nair, Rajat
Cc: keycloak-user@lists.jboss.org
Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
----- Original Message -----
From: "Rajat Nair" <rajat.nair@hp.com>
To: "Stian Thorgersen" <stian@redhat.com>
Cc: keycloak-user@lists.jboss.org
Sent: Monday, 27 July, 2015 7:33:25 PM
Subject: RE: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> Can you send me your standalone-ha.xml and keycloak-server.json?
Files attached. The service is started like -
/opt/jboss/keycloak/bin/standalone.sh -c standalone-ha.xml -b=test-server-110 \
  -bmanagement=test-server-110 -u 230.0.0.4 -Djboss.node.name=test-server-110
> Also, any chance you can try it out with master? I've been testing
> with that as we're about to do 1.4 release soon
Glad to give back to the community. Will build and deploy the master
on my nodes. Will send findings tomorrow.
Regarding a scenario I described earlier - Case 2:
1. Start with 1 node down. We bring it back up. We wait for some time so that Infinispan can sync.
2. Bring down the other node.
3. Try to get user info using the existing token.
Is this a valid use-case?
Yes - I've tried the same use-case and it works fine every time. One caveat is that
the access token can expire, but in that case you should get a 403 returned, not an
NPE and a 500.
One thing you can try is to make sure user session replication is working properly:
1. Start two nodes
2. Open admin console directly on node 1 - login as admin/admin
3. Open admin console directly on node 2 from another machine/browser or use incognito mode - login as admin/admin
4. On node 1 go to users -> view all -> click on admin -> sessions -> you should see two sessions
5. On node 2 do the same and check you can see two sessions there as well
-- Rajat
-----Original Message-----
From: Stian Thorgersen [mailto:stian@redhat.com]
Sent: 27 July 2015 19:16
To: Nair, Rajat
Cc: keycloak-user@lists.jboss.org
Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
Also, any chance you can try it out with master? I've been testing
with that as we're about to do 1.4 release soon
----- Original Message -----
> From: "Stian Thorgersen" <stian@redhat.com>
> To: "Rajat Nair" <rajat.nair@hp.com>
> Cc: keycloak-user@lists.jboss.org
> Sent: Monday, 27 July, 2015 3:45:46 PM
> Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
>
> Can you send me your standalone-ha.xml and keycloak-server.json?
>
> ----- Original Message -----
> > From: "Rajat Nair" <rajat.nair@hp.com>
> > To: "Stian Thorgersen" <stian@redhat.com>
> > Cc: keycloak-user@lists.jboss.org
> > Sent: Monday, 27 July, 2015 3:41:36 PM
> > Subject: RE: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> >
> > > Do you have both nodes fully up and running before you kill one node?
> > Yes.
> > This is what we tried -
> > Case 1
> > 1. Two node cluster (both running Keycloak engines) - both up and running. Configured load balancing using mod_cluster.
> > 2. Login and get token.
> > 3. Bring down one node.
> > 4. Get user info using existing token. This is when we get NPE.
> >
> > Case 2
> > 1. Start with 1 node down. We bring it back up. We wait for some time so that Infinispan can sync.
> > 2. Bring down the other node.
> > 3. Try to get user info using existing token. Again we see NPE.
> >
> > >It's a bug - if session is expired it should return an error
> > >message, not a NPE (see
> > >https://issues.jboss.org/browse/KEYCLOAK-1710)
> > Thanks for tracking this.
> >
> > -- Rajat
> >
> > ----- Original Message -----
> > > From: "Rajat Nair" <rajat.nair@hp.com>
> > > To: "Stian Thorgersen" <stian@redhat.com>
> > > Cc: keycloak-user@lists.jboss.org
> > > Sent: Monday, 27 July, 2015 3:20:27 PM
> > > Subject: RE: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> > >
> > > Thanks for the quick reply, Stian.
> > >
> > > > What version?
> > > We are using Keycloak 1.3.1 Final.
> > >
> > > > Did you remember to change userSessions provider to infinispan
> > > > in keycloak-server.json?
> > > Yes. We have the following in keycloak-server.json -
> > > "userSessions": {
> > >     "provider": "infinispan"
> > > }
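> > > For completeness, this is how we understand the wiring to the cache container
> > > (a sketch; the connectionsInfinispan block and the JNDI name are assumptions
> > > based on the default setup):
> > > "connectionsInfinispan": {
> > >     "default": {
> > >         "cacheContainer": "java:jboss/infinispan/Keycloak"
> > >     }
> > > }
> > > The cacheContainer name should match the cache-container defined in standalone-ha.xml.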
> > >
> > > > Firstly owners="2" should work fine as long as only one node dies and the
> > > > other remains active. Secondly it shouldn't return an NPE, but an error if
> > > > the user session is not found.
> > > Could you elaborate on your 2nd point?
> >
> > Do you have both nodes fully up and running before you kill one node?
> >
> > It's a bug - if session is expired it should return an error message, not an
> > NPE (see https://issues.jboss.org/browse/KEYCLOAK-1710)
> >
> > >
> > > -- Rajat
> > >
> > > -----Original Message-----
> > > From: Stian Thorgersen [mailto:stian@redhat.com]
> > > Sent: 27 July 2015 18:07
> > > To: Nair, Rajat
> > > Cc: keycloak-user@lists.jboss.org
> > > Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> > >
> > > Did you remember to change userSessions provider to infinispan
> > > in keycloak-server.json?
> > >
> > > ----- Original Message -----
> > > > From: "Stian Thorgersen" <stian@redhat.com>
> > > > To: "Rajat Nair" <rajat.nair@hp.com>
> > > > Cc: keycloak-user@lists.jboss.org
> > > > Sent: Monday, 27 July, 2015 2:24:17 PM
> > > > Subject: Re: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> > > >
> > > > What version?
> > > >
> > > > Firstly owners="2" should work fine as long as only one node dies and the
> > > > other remains active. Secondly it shouldn't return an NPE, but an error if
> > > > the user session is not found.
> > > >
> > > > ----- Original Message -----
> > > > > From: "Rajat Nair" <rajat.nair@hp.com>
> > > > > To: keycloak-user@lists.jboss.org
> > > > > Sent: Monday, 27 July, 2015 2:03:47 PM
> > > > > Subject: [keycloak-user] Distributed Keycloak user sessions using Infinispan
> > > > >
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > >
> > > > >
> > > > > I’m in the process of setting up distributed user sessions
> > > > > using Infinispan on my Keycloak cluster. This is the
> > > > > configuration I use –
> > > > >
> > > > > <cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
> > > > >     <transport lock-timeout="60000"/>
> > > > >     <invalidation-cache name="realms" mode="SYNC"/>
> > > > >     <invalidation-cache name="users" mode="SYNC"/>
> > > > >     <distributed-cache name="sessions" mode="SYNC" owners="2"/>
> > > > >     <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
> > > > > </cache-container>
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > And in server.log, I can see my servers communicating –
> > > > >
> > > > > 2015-07-27 10:27:24,662 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000310: Starting cluster-wide rebalance for cache users, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > > > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t10) ISPN000310: Starting cluster-wide rebalance for cache realms, topology CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > > > > 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > > > > 2015-07-27 10:27:24,669 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null, actualMembers=[test-server-110, test-server-111]}
> > > > > 2015-07-27 10:27:24,808 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57
> > > > > 2015-07-27 10:27:24,810 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56
> > > > > 2015-07-27 10:27:24,988 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12) ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57
> > > > > 2015-07-27 10:27:25,530 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8) ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57
> > > > >
> > > > >
> > > > >
> > > > > I can successfully login, get a token and fetch user details with this token.
> > > > >
> > > > >
> > > > >
> > > > > The problem is, if one of the nodes in the cluster goes down and we try to reuse a token which was already issued (so the workflow is – user logs in, gets a token, a node in the cluster goes down, and then we fetch user details using the token) – we see an internal server exception. From the logs –
> > > > >
> > > > >
> > > > >
> > > > > 2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo: java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo
> > > > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)
> > > > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > > > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > > > >     at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)
> > > > >     at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> > > > >     at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> > > > >     at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> > > > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > > > >     at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> > > > >     at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> > > > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > > > >     at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> > > > >     at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> > > > >     at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
> > > > >     at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
> > > > >     at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> > > > >     at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
> > > > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > > > >     at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> > > > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > > > >     at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> > > > >     at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
> > > > >     at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
> > > > >     at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)
> > > > >     at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)
> > > > >     at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
> > > > >     at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
> > > > >     at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
> > > > >     at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
> > > > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> > > > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> > > > >     at java.lang.Thread.run(Unknown Source)
> > > > > Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
> > > > >     at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
> > > > >     at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
> > > > >     at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
> > > > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
> > > > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
> > > > >     at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
> > > > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
> > > > >     at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
> > > > >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> > > > >     at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
> > > > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
> > > > >     at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)
> > > > >     at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> > > > >     at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
> > > > >     at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)
> > > > >     ... 31 more
> > > > > Caused by: java.lang.NullPointerException
> > > > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)
> > > > >     at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)
> > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> > > > >     at java.lang.reflect.Method.invoke(Unknown Source)
> > > > >     at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
> > > > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
> > > > >     at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
> > > > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
> > > > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)
> > > > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)
> > > > >     at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
> > > > >     at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
> > > > >     ... 42 more
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > The user guide says –
> > > > > "If you need to prevent node failures from requiring users to log in again, set the owners attribute to 2 or more for the sessions cache."
> > > > >
> > > > >
> > > > >
> > > > > Questions -
> > > > > 1. Have we configured Infinispan incorrectly? We don't want users to have to log in again if any node in the cluster goes down.
> > > > > 2. Will changing distributed-cache to replicated-cache help in this scenario?
> > > > > 3. Is there any way we can see the contents of the cache?
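> > > > > For question 2, the change we have in mind would be something like this
> > > > > (a sketch - a replicated cache keeps a full copy of every session on every
> > > > > node, at the cost of extra memory and replication traffic):
> > > > > <replicated-cache name="sessions" mode="SYNC"/>
> > > > > For question 3, one option may be to read the runtime statistics rather
> > > > > than the entries themselves, e.g. via jboss-cli (a guess on our side,
> > > > > assuming statistics are enabled):
> > > > > /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:read-resource(include-runtime=true)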
> > > > >
> > > > >
> > > > >
> > > > > -- Rajat
> > > > >
> > > > >
> > > > >
> > > > > _______________________________________________
> > > > > keycloak-user mailing list
> > > > > keycloak-user@lists.jboss.org
> > > > > https://lists.jboss.org/mailman/listinfo/keycloak-user
> > > >
> > > > _______________________________________________
> > > > keycloak-user mailing list
> > > > keycloak-user@lists.jboss.org
> > > > https://lists.jboss.org/mailman/listinfo/keycloak-user
> > >
> >
>