[keycloak-user] Distributed Keycloak user sessions using Infinispan

Marek Posolda mposolda at redhat.com
Thu Aug 6 12:05:47 EDT 2015


It's clear that your Infinispan cluster works as expected. I think the 
issue is that you are sending curl requests directly to individual 
cluster nodes instead of to the load balancer. At least I saw it as an 
issue during my testing and I suspect that in your case it will be the same.

In your example you first retrieved the token with a curl request to 
http://test-server-110:8080 and then used this token to retrieve 
UserInfo by accessing http://test-server-111:8081 . In this case it 
doesn't work because the issuer ("iss" field) in the access token starts 
with the URL "http://test-server-110:8080" , which means that this 
access token will be rejected on "http://test-server-111:8081" when 
sending a request to the UserInfo endpoint there.
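You can check which issuer a token actually carries by decoding its middle segment. A minimal sketch (the token here is fabricated for illustration, with an unsigned header and only an "iss" claim; a real Keycloak token has many more claims and a signature):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Return the (unverified) claims of a JWT: the middle of the three
    dot-separated, base64url-encoded segments."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Fabricated token mimicking the relevant part of what Keycloak issues:
# the "iss" claim embeds the host the token request was sent to.
token = "{}.{}.".format(
    b64url(b'{"alg":"none"}'),
    b64url(json.dumps(
        {"iss": "http://test-server-110:8080/auth/realms/test"}).encode()),
)

print(jwt_claims(token)["iss"])
# -> http://test-server-110:8080/auth/realms/test
```

If the printed issuer names one cluster node but you send the UserInfo request to another, the mismatch above is what you will hit.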

I've added a bit more info about the error to the UserInfo endpoint, so 
in the latest master you can see additional details (I suspect that you 
will really see an error message about an invalid issuer URL).

So the solution is that you need to send requests to the load balancer 
instead of individual cluster nodes. If your curl request to retrieve 
the access token is sent to "http://loadbalancer:8080" and UserInfo is 
also accessed through the load balancer, it will work because the 
issuer in the access token will be "http://loadbalancer:8080" . 
You can try this and then add or remove cluster nodes as you wish. It 
will work as long as at least one cluster node is available behind the 
load balancer (no matter which one).
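The failure mode can be pictured with a toy model of the check (this is not Keycloak's actual code; `issuer_for`, `token_accepted`, and the host names are made up for illustration): the issuer baked into the token at login must match the issuer the validating node derives from the URL the request arrived on, which is why both calls have to go through the same address.

```python
def issuer_for(base_url: str, realm: str) -> str:
    # Keycloak derives the issuer from the URL the request arrived on.
    return "{}/auth/realms/{}".format(base_url, realm)

def token_accepted(token_iss: str, request_base_url: str, realm: str) -> bool:
    # Simplified stand-in for the UserInfo endpoint's issuer check.
    return token_iss == issuer_for(request_base_url, realm)

# A token fetched through the load balancer carries the LB's address as issuer.
iss = issuer_for("http://loadbalancer:8080", "test")

print(token_accepted(iss, "http://loadbalancer:8080", "test"))    # True
print(token_accepted(iss, "http://test-server-111:8081", "test")) # False
```

Session replication is orthogonal to this check, which is why the cluster can be healthy while direct-to-node requests still fail.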

Maybe we can relax the validation a bit and validate just the realm name 
instead of the full URL during issuer validation (we already had it like 
this a few months back, AFAIR). But I'm not sure it's really needed, as 
in production you will likely always access your nodes via the load 
balancer. Is that correct?
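The relaxation being considered could look roughly like this (a sketch only, assuming the realm name is the last path segment of the issuer URL; `realm_from_issuer` is a made-up helper, not a Keycloak API):

```python
from urllib.parse import urlparse

def realm_from_issuer(iss: str) -> str:
    # ".../auth/realms/<realm>" -> "<realm>"
    return urlparse(iss).path.rstrip("/").rsplit("/", 1)[-1]

token_iss = "http://test-server-110:8080/auth/realms/test"
local_iss = "http://test-server-111:8081/auth/realms/test"

# Full-URL comparison fails across hosts; realm-only comparison passes.
print(token_iss == local_iss)                                        # False
print(realm_from_issuer(token_iss) == realm_from_issuer(local_iss))  # True
```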

Marek



On 6.8.2015 09:44, Marek Posolda wrote:
> I've finally reproduced the issue and am looking at it. Will update you 
> once I have a fix.
>
> Marek
>
> On 5.8.2015 14:14, Nair, Rajat wrote:
>> Some more information -
>>
>> Quick summary - I'm trying to test HA of Keycloak user sessions: when 
>> one of the nodes goes down, users should not have to log in again as 
>> their session information would be available on the other node. The 
>> Keycloak cluster is set up to store user sessions in Infinispan 
>> (testing using distributed-cache and replicated-cache).
>>
>> To see the cache information on the Keycloak nodes, I set up hawtio 
>> on these nodes. The initial state of the caches on both nodes is 
>> captured in these images (see 110.DC.Start.png and 111.DC.Start.png).
>> Then a user logs into server 111. We can see the session entry count 
>> increasing (see 111.DC.Login.On.111.PNG). When we look at the session 
>> entries on server 110, we see that the count has increased there too, 
>> which means the session is being successfully replicated (see 
>> 110.DC.Login.On.111.PNG).
>>
>> To verify if this works other way around, we logged into server 110, 
>> and its session entry count increased (see 
>> 110.DC.Login.On.110.And.111.PNG). When we check 111, we can see that 
>> session entry count increased on this server too. (see 
>> 111.DC.Login.On.110.And.111.png).
>>
>> We initially suspected that our sessions were not getting replicated. 
>> Using hawtio, we can see session entry count increasing on both 
>> servers. Could this mean that there is a bug in Keycloak's code while 
>> fetching user sessions? Is there any other way we can validate user 
>> sessions?
>>
>> -- Rajat
>>
>> -----Original Message-----
>> From: Nair, Rajat
>> Sent: 03 August 2015 12:43
>> To: 'Stian Thorgersen'; Marek Posolda
>> Cc: keycloak-user at lists.jboss.org
>> Subject: RE: [keycloak-user] Distributed Keycloak user sessions using 
>> Infinispan
>>
>> Hi,
>>
>> Some more info from the log of the test-server-110 server (server 
>> names are test-server-110 and test-server-111).
>>
>> Infinispan subsystem initialization logs for the sessions cache -
>> [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 62) ISPN000078: Starting JGroups channel keycloak
>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 62) ISPN000094: Received new cluster view for channel keycloak: [test-server-111|0] (1) [test-server-111]
>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 62) ISPN000079: Channel keycloak local address is test-server-111, physical addresses are [XX.XX.XX.XX:7600]
>> [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 63) WFLYCLINF0002: Started sessions cache from keycloak container
>> [org.infinispan.CLUSTER] (remote-thread--p3-t3) ISPN000310: Starting cluster-wide rebalance for cache sessions, topology CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-111: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-111: 40+40, test-server-110: 40+40]}, unionCH=null, actualMembers=[test-server-111, test-server-110]}
>> [org.infinispan.CLUSTER] (remote-thread--p3-t2) ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 1
>>
>> Session cache details -
>> [org.infinispan.jmx.JmxUtil] (ServerService Thread Pool -- 63) Object name jboss.infinispan:type=Cache,name="sessions(dist_sync)",manager="keycloak",component=Cache already registered
>> [org.infinispan.topology.LocalTopologyManagerImpl] (ServerService Thread Pool -- 63) Node test-server-110 joining cache sessions
>> [org.infinispan.topology.LocalTopologyManagerImpl] (ServerService Thread Pool -- 63) Updating local topology for cache sessions: CacheTopology{id=0, rebalanceId=0, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-111: 80+0]}, pendingCH=null, unionCH=null, actualMembers=[test-server-111]}
>> [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread--p2-t5) Updating local topology for cache sessions: CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-111: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-111: 40+40, test-server-110: 40+40]}, unionCH=null, actualMembers=[test-server-111, test-server-110]}
>> [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread--p2-t5) Starting local rebalance for cache sessions, topology = CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[test-server-111: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-111: 40+40, test-server-110: 40+40]}, unionCH=null, actualMembers=[test-server-111, test-server-110]}
>> [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p2-t5) Adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 16, 19, 18, 21, 20, 23, 22, 25, 24, 27, 26, 29, 28, 31, 30, 34, 35, 32, 33, 38, 39, 36, 37, 42, 43, 40, 41, 46, 47, 44, 45, 51, 50, 49, 48, 55, 54, 53, 52, 59, 58, 57, 56, 63, 62, 61, 60, 68, 69, 70, 71, 64, 65, 66, 67, 76, 77, 78, 79, 72, 73, 74, 75] of cache sessions
>> [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p2-t5) Removing no longer owned entries for cache sessions
>> [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p5-t19) Finished receiving state for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 16, 19, 18, 21, 20, 23, 22, 25, 24, 27, 26, 29, 28, 31, 30, 34, 35, 32, 33, 38, 39, 36, 37, 42, 43, 40, 41, 46, 47, 44, 45, 51, 50, 49, 48, 55, 54, 53, 52, 59, 58, 57, 56, 63, 62, 61, 60, 68, 69, 70, 71, 64, 65, 66, 67, 76, 77, 78, 79, 72, 73, 74, 75] of cache sessions
>> [org.infinispan.statetransfer.StateConsumerImpl] (stateTransferExecutor-thread--p5-t24) Finished receiving of segments for cache sessions for topology 1.
>> [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread--p2-t10) Updating local topology for cache sessions: CacheTopology{id=2, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (2)[test-server-111: 40+40, test-server-110: 40+40]}, pendingCH=null, unionCH=null, actualMembers=[test-server-111, test-server-110]}
>> [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p2-t10) Removing no longer owned entries for cache sessions
>> [org.infinispan.cache.impl.CacheImpl] (ServerService Thread Pool -- 63) Started cache sessions on test-server-110
>> [org.wildfly.clustering.infinispan.spi.service.CacheBuilder] (ServerService Thread Pool -- 63) sessions keycloak cache started
>>
>>
>> Looks like the servers are talking to each other (set up over 
>> unicast) and the session cache is shared between them, but we still 
>> cannot successfully fetch user info when the token is generated by 
>> one server (test-server-110) and the data is fetched from another 
>> (test-server-111).
>>
>> Any suggestions/debugging approaches appreciated.
>>
>> -- Rajat
>>
>> -----Original Message-----
>> From: Stian Thorgersen [mailto:stian at redhat.com]
>> Sent: 29 July 2015 22:51
>> To: Nair, Rajat; Marek Posolda
>> Cc: keycloak-user at lists.jboss.org
>> Subject: Re: [keycloak-user] Distributed Keycloak user sessions using 
>> Infinispan
>>
>> I'm away on holiday, Marek can you take a look at this?
>>
>> ----- Original Message -----
>>> From: "Rajat Nair" <rajat.nair at hp.com>
>>> To: "Stian Thorgersen" <stian at redhat.com>
>>> Cc: keycloak-user at lists.jboss.org
>>> Sent: Wednesday, 29 July, 2015 2:56:07 PM
>>> Subject: RE: [keycloak-user] Distributed Keycloak user sessions using
>>> Infinispan
>>>
>>> Follow up to our discussion -
>>>
>>> I upgraded my nodes to Keycloak 1.4 Final, dropped and re-created
>>> the Postgres database (shared between both the nodes), and tested
>>> distributed user sessions using the following commands -
>>> - Fetch access token using the following curl from one server:
>>>     curl --write-out " %{http_code}" -s --request POST \
>>>       --header "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" \
>>>       --data "username=user1@email.com&password=testpassword@&client_id=admin-client&grant_type=password" \
>>>       "http://test-server-110:8080/auth/realms/test/protocol/openid-connect/token"
>>>
>>> - Validated the token on a different server using:
>>>     curl --write-out " %{http_code}" -s --request GET \
>>>       --header "Content-Type: application/json" \
>>>       --header "Authorization: Bearer [ACCESS_TOKEN_FROM_PREVIOUS_CALL]" \
>>>       "http://test-server-111:8081/auth/realms/test/protocol/openid-connect/userinfo"
>>>
>>> And we get this - {"error":"invalid_grant","error_description":"Token
>>> invalid"}
>>> No more NPE and internal server error.
>>>
>>> If we use the same token and try to fetch user details on server which
>>> issued the token - we get the correct data. (Note - I have confirmed
>>> that the token has not expired)
>>>
>>>> One thing you can try is to make sure user session replication is
>>>> working
>>>> properly:
>>>> 1. Start two nodes
>>>> 2. Open admin console directly on node 1 - login as admin/admin
>>>> 3. Open admin console directly on node 2 from another machine/browser
>>>> or use incognito mode - login as admin/admin
>>>> 4. On node 1 go to users -> view all -> click on admin -> sessions -
>>>> you should see two sessions
>>>> 5. On node 2 do the same and check you can see two sessions there as well
>>> Now this is where things get strange. I followed the steps described -
>>> used 2 different browsers - and I can see 2 sessions listed!
>>>
>>> Is the process we use to validate the token incorrect? Or is the
>>> master console on the web doing something different (like getting
>>> the data from the Postgres database used by both the nodes)?
>>>
>>> -- Rajat
>>>
>>> -----Original Message-----
>>> From: Stian Thorgersen [mailto:stian at redhat.com]
>>> Sent: 28 July 2015 10:19
>>> To: Nair, Rajat
>>> Cc: keycloak-user at lists.jboss.org
>>> Subject: Re: [keycloak-user] Distributed Keycloak user sessions using
>>> Infinispan
>>>
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Rajat Nair" <rajat.nair at hp.com>
>>>> To: "Stian Thorgersen" <stian at redhat.com>
>>>> Cc: keycloak-user at lists.jboss.org
>>>> Sent: Monday, 27 July, 2015 7:33:25 PM
>>>> Subject: RE: [keycloak-user] Distributed Keycloak user sessions
>>>> using Infinispan
>>>>
>>>>> Can you send me your standalone-ha.xml and keycloak-server.json?
>>>> Files attached. The service is started like -
>>>> /opt/jboss/keycloak/bin/standalone.sh -c standalone-ha.xml \
>>>>     -b=test-server-110 -bmanagement=test-server-110 \
>>>>     -u 230.0.0.4 -Djboss.node.name=test-server-110
>>>>
>>>>> Also, any chance you can try it out with master? I've been testing
>>>>> with that as we're about to do 1.4 release soon
>>>> Glad to give back to the community. Will build and deploy the master
>>>> on my nodes. Will send findings tomorrow.
>>>>
>>>> Regarding a scenario I described earlier - Case 2:
>>>> 1. Start with 1 node down. We bring it back up. We wait for some
>>>> time so that Infinispan can sync.
>>>> 2. Bring down the other node.
>>>> 3. Try to get user info using the existing token.
>>>>
>>>> Is this a valid use-case?
>>> Yes - I've tried the same use-case and it works fine every time. One
>>> caveat is that the access token can expire, but in that case you
>>> should get a 403 returned, not an NPE and a 500.
>>>
>>> One thing you can try is to make sure user session replication is
>>> working
>>> properly:
>>>
>>> 1. Start two nodes
>>> 2. Open admin console directly on node 1 - login as admin/admin
>>> 3. Open admin console directly on node 2 from another machine/browser
>>> or use incognito mode - login as admin/admin
>>> 4. On node 1 go to users -> view all -> click on admin -> sessions -
>>> you should see two sessions
>>> 5. On node 2 do the same and check you can see two sessions there as well
>>>
>>>> -- Rajat
>>>>
>>>> -----Original Message-----
>>>> From: Stian Thorgersen [mailto:stian at redhat.com]
>>>> Sent: 27 July 2015 19:16
>>>> To: Nair, Rajat
>>>> Cc: keycloak-user at lists.jboss.org
>>>> Subject: Re: [keycloak-user] Distributed Keycloak user sessions
>>>> using Infinispan
>>>>
>>>> Also, any chance you can try it out with master? I've been testing
>>>> with that as we're about to do 1.4 release soon
>>>>
>>>> ----- Original Message -----
>>>>> From: "Stian Thorgersen" <stian at redhat.com>
>>>>> To: "Rajat Nair" <rajat.nair at hp.com>
>>>>> Cc: keycloak-user at lists.jboss.org
>>>>> Sent: Monday, 27 July, 2015 3:45:46 PM
>>>>> Subject: Re: [keycloak-user] Distributed Keycloak user sessions
>>>>> using Infinispan
>>>>>
>>>>> Can you send me your standalone-ha.xml and keycloak-server.json?
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Rajat Nair" <rajat.nair at hp.com>
>>>>>> To: "Stian Thorgersen" <stian at redhat.com>
>>>>>> Cc: keycloak-user at lists.jboss.org
>>>>>> Sent: Monday, 27 July, 2015 3:41:36 PM
>>>>>> Subject: RE: [keycloak-user] Distributed Keycloak user sessions
>>>>>> using Infinispan
>>>>>>
>>>>>>> Do you have both nodes fully up and running before you kill one 
>>>>>>> node?
>>>>>> Yes.
>>>>>> This is what we tried -
>>>>>> Case 1
>>>>>> 1. Two node cluster (both running Keycloak engines) - both up
>>>>>> and running.
>>>>>> Configured load balancing using mod_cluster.
>>>>>> 2. Login and get token.
>>>>>> 3. Bring down one node.
>>>>>> 4. Get user info using existing token. This is when we get NPE.
>>>>>>
>>>>>> Case 2
>>>>>> 1. Start with 1 Node down. We bring it back up. We wait for some
>>>>>> time so that Infinispan can sync.
>>>>>> 2. Bring down other node.
>>>>>> 3. Try to get user info using existing token. Again we see NPE.
>>>>>>
>>>>>>> It's a bug - if session is expired it should return an error
>>>>>>> message, not a NPE (see
>>>>>>> https://issues.jboss.org/browse/KEYCLOAK-1710)
>>>>>> Thanks for tracking this.
>>>>>>
>>>>>> -- Rajat
>>>>>>
>>>>>> ----- Original Message -----
>>>>>>> From: "Rajat Nair" <rajat.nair at hp.com>
>>>>>>> To: "Stian Thorgersen" <stian at redhat.com>
>>>>>>> Cc: keycloak-user at lists.jboss.org
>>>>>>> Sent: Monday, 27 July, 2015 3:20:27 PM
>>>>>>> Subject: RE: [keycloak-user] Distributed Keycloak user
>>>>>>> sessions using Infinispan
>>>>>>>
>>>>>>> Thanks for quick reply Stian.
>>>>>>>
>>>>>>>> What version?
>>>>>>> We are using Keycloak 1.3.1 Final.
>>>>>>>> Did you remember to change userSessions provider to
>>>>>>>> infinispan in keycloak-server.json?
>>>>>>> Yes. We have the following in keycloak-server.json -
>>>>>>> "userSessions": {
>>>>>>>     "provider": "infinispan"
>>>>>>> }
>>>>>>>
>>>>>>>> Firstly owners="2" should work fine as long as only one node
>>>>>>>> dies and the other remains active. Secondly it shouldn't return
>>>>>>>> a NPE, but an error if the user session is not found.
>>>>>>> Could you elaborate on your 2nd point?
>>>>>> Do you have both nodes fully up and running before you kill one 
>>>>>> node?
>>>>>>
>>>>>> It's a bug - if session is expired it should return an error
>>>>>> message, not a NPE (see
>>>>>> https://issues.jboss.org/browse/KEYCLOAK-1710)
>>>>>>
>>>>>>> -- Rajat
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Stian Thorgersen [mailto:stian at redhat.com]
>>>>>>> Sent: 27 July 2015 18:07
>>>>>>> To: Nair, Rajat
>>>>>>> Cc: keycloak-user at lists.jboss.org
>>>>>>> Subject: Re: [keycloak-user] Distributed Keycloak user
>>>>>>> sessions using Infinispan
>>>>>>>
>>>>>>> Did you remember to change userSessions provider to infinispan
>>>>>>> in keycloak-server.json?
>>>>>>>
>>>>>>> ----- Original Message -----
>>>>>>>> From: "Stian Thorgersen" <stian at redhat.com>
>>>>>>>> To: "Rajat Nair" <rajat.nair at hp.com>
>>>>>>>> Cc: keycloak-user at lists.jboss.org
>>>>>>>> Sent: Monday, 27 July, 2015 2:24:17 PM
>>>>>>>> Subject: Re: [keycloak-user] Distributed Keycloak user
>>>>>>>> sessions using Infinispan
>>>>>>>>
>>>>>>>> What version?
>>>>>>>>
>>>>>>>> Firstly owners="2" should work fine as long as only one node
>>>>>>>> dies and the other remains active. Secondly it shouldn't return
>>>>>>>> a NPE, but an error if the user session is not found.
>>>>>>>>
>>>>>>>> ----- Original Message -----
>>>>>>>>> From: "Rajat Nair" <rajat.nair at hp.com>
>>>>>>>>> To: keycloak-user at lists.jboss.org
>>>>>>>>> Sent: Monday, 27 July, 2015 2:03:47 PM
>>>>>>>>> Subject: [keycloak-user] Distributed Keycloak user
>>>>>>>>> sessions using Infinispan
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I’m in the process of setting up distributed user sessions
>>>>>>>>> using Infinispan on my Keycloak cluster. This is the
>>>>>>>>> configuration I use –
>>>>>>>>>
>>>>>>>>> <cache-container name="keycloak" jndi-name="java:jboss/infinispan/Keycloak">
>>>>>>>>>     <transport lock-timeout="60000"/>
>>>>>>>>>     <invalidation-cache name="realms" mode="SYNC"/>
>>>>>>>>>     <invalidation-cache name="users" mode="SYNC"/>
>>>>>>>>>     <distributed-cache name="sessions" mode="SYNC" owners="2"/>
>>>>>>>>>     <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
>>>>>>>>> </cache-container>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> And in server.logs, I can see my servers communicate –
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,662 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7)
>>>>>>>>> ISPN000310: Starting cluster-wide rebalance for cache users, topology
>>>>>>>>> CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60,
>>>>>>>>> owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60,
>>>>>>>>> owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null,
>>>>>>>>> actualMembers=[test-server-110, test-server-111]}
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t10)
>>>>>>>>> ISPN000310: Starting cluster-wide rebalance for cache realms, topology
>>>>>>>>> CacheTopology{id=57, rebalanceId=17, currentCH=ReplicatedConsistentHash{ns = 60,
>>>>>>>>> owners = (1)[test-server-110: 60]}, pendingCH=ReplicatedConsistentHash{ns = 60,
>>>>>>>>> owners = (2)[test-server-110: 30, test-server-111: 30]}, unionCH=null,
>>>>>>>>> actualMembers=[test-server-110, test-server-111]}
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,665 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8)
>>>>>>>>> ISPN000310: Starting cluster-wide rebalance for cache loginFailures, topology
>>>>>>>>> CacheTopology{id=57, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80,
>>>>>>>>> owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80,
>>>>>>>>> owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null,
>>>>>>>>> actualMembers=[test-server-110, test-server-111]}
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,669 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9)
>>>>>>>>> ISPN000310: Starting cluster-wide rebalance for cache sessions, topology
>>>>>>>>> CacheTopology{id=56, rebalanceId=17, currentCH=DefaultConsistentHash{ns=80,
>>>>>>>>> owners = (1)[test-server-110: 80+0]}, pendingCH=DefaultConsistentHash{ns=80,
>>>>>>>>> owners = (2)[test-server-110: 40+0, test-server-111: 40+0]}, unionCH=null,
>>>>>>>>> actualMembers=[test-server-110, test-server-111]}
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,808 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9)
>>>>>>>>> ISPN000336: Finished cluster-wide rebalance for cache loginFailures, topology id = 57
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,810 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12)
>>>>>>>>> ISPN000336: Finished cluster-wide rebalance for cache sessions, topology id = 56
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:24,988 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t12)
>>>>>>>>> ISPN000336: Finished cluster-wide rebalance for cache realms, topology id = 57
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:27:25,530 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t8)
>>>>>>>>> ISPN000336: Finished cluster-wide rebalance for cache users, topology id = 57
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I can successfully login, get a token and fetch user
>>>>>>>>> details with this token.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Problem is, if one of the nodes in the cluster goes down and
>>>>>>>>> we try to reuse a token which was already issued (so the
>>>>>>>>> workflow is - user logs in, gets a token, (a node in the
>>>>>>>>> cluster goes down) and then fetches user details using the
>>>>>>>>> token) - we see an internal server exception. From the logs -
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2015-07-27 10:24:25,714 ERROR [io.undertow.request] (default task-1)
>>>>>>>>> UT005023: Exception handling request to /auth/realms/scaletest/protocol/openid-connect/userinfo:
>>>>>>>>> java.lang.RuntimeException: request path: /auth/realms/scaletest/protocol/openid-connect/userinfo
>>>>>>>>> at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54)
>>>>>>>>> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
>>>>>>>>> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
>>>>>>>>> at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85)
>>>>>>>>> at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
>>>>>>>>> at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
>>>>>>>>> at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
>>>>>>>>> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>>> at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
>>>>>>>>> at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
>>>>>>>>> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>>> at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
>>>>>>>>> at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
>>>>>>>>> at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
>>>>>>>>> at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
>>>>>>>>> at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
>>>>>>>>> at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
>>>>>>>>> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>>> at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
>>>>>>>>> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>>> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
>>>>>>>>> at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
>>>>>>>>> at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
>>>>>>>>> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:274)
>>>>>>>>> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:253)
>>>>>>>>> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
>>>>>>>>> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
>>>>>>>>> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
>>>>>>>>> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>>>>>>>>> at java.lang.Thread.run(Unknown Source)
>>>>>>>>>
>>>>>>>>> Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
>>>>>>>>> at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76)
>>>>>>>>> at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212)
>>>>>>>>> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149)
>>>>>>>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372)
>>>>>>>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179)
>>>>>>>>> at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220)
>>>>>>>>> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
>>>>>>>>> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
>>>>>>>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>>>>>>>>> at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
>>>>>>>>> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130)
>>>>>>>>> at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41)
>>>>>>>>> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
>>>>>>>>> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132)
>>>>>>>>> at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40)
>>>>>>>>> ... 31 more
>>>>>>>>>
>>>>>>>>> Caused by: java.lang.NullPointerException
>>>>>>>>> at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfo(UserInfoEndpoint.java:128)
>>>>>>>>> at org.keycloak.protocol.oidc.endpoints.UserInfoEndpoint.issueUserInfoGet(UserInfoEndpoint.java:101)
>>>>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>>>>>>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>>>>>>>> at java.lang.reflect.Method.invoke(Unknown Source)
>>>>>>>>> at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137)
>>>>>>>>> at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296)
>>>>>>>>> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250)
>>>>>>>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140)
>>>>>>>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:109)
>>>>>>>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:135)
>>>>>>>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103)
>>>>>>>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356)
>>>>>>>>> ... 42 more
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The user guide says –
>>>>>>>>>
>>>>>>>>> If you need to prevent node failures from requiring users
>>>>>>>>> to log in again, set the owners attribute to 2 or more for
>>>>>>>>> the sessions cache
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Questions -
>>>>>>>>>
>>>>>>>>> 1. Have we configured Infinispan incorrectly? We don’t
>>>>>>>>> want the users to login again if any of the nodes in the
>>>>>>>>> cluster go down.
>>>>>>>>>
>>>>>>>>> 2. Will changing distributed-cache to replicated-cache
>>>>>>>>> help in this scenario?
>>>>>>>>>
>>>>>>>>> 3. Any way we can see the contents of the cache?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -- Rajat
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> keycloak-user mailing list keycloak-user at lists.jboss.org
>>>>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-user
>
>


