[keycloak-user] Keycloak Clustering, other instance logs me out

Marek Posolda mposolda at redhat.com
Thu Mar 31 04:30:19 EDT 2016


On 31/03/16 09:25, Sarp Kaya wrote:
> Please ignore my previous e-mail. Turns out I was simply not setting the 
> address on the interface named "private".
>
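(For reference, a minimal sketch of what that fix usually looks like in 
standalone-ha.xml: the JGroups socket bindings are tied to the "private" 
interface by default, so that interface needs a routable address. The 
172.17.0.2 below is just an example Docker address, not something taken 
from this thread.)

    <interface name="private">
        <!-- example Docker-assigned address; it can also be overridden at
             startup with -Djboss.bind.address.private=<ip> -->
        <inet-address value="${jboss.bind.address.private:172.17.0.2}"/>
    </interface>
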
> Anyway, I got it working; this is what the logs look like now:
>
> 07:04:58,648 INFO  [org.jboss.as] (Controller Boot Thread) 
> WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
>
> 07:04:58,649 INFO  [org.jboss.as] (Controller Boot Thread) 
> WFLYSRV0025: Keycloak 1.9.1.Final (WildFly Core 2.0.10.Final) started 
> in 60797ms - Started 478 of 841 services (584 services are lazy, 
> passive or on-demand)
>
> 07:05:24,574 INFO 
> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
> (Incoming-2,ee,0abe138a46e3) ISPN000094: Received new cluster view for 
> channel server: [0abe138a46e3|1] (2) [0abe138a46e3, 69ef835de644]
>
> 07:05:24,578 INFO 
> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
> (Incoming-2,ee,0abe138a46e3) ISPN000094: Received new cluster view for 
> channel keycloak: [0abe138a46e3|1] (2) [0abe138a46e3, 69ef835de644]
>
> 07:05:24,589 INFO 
> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
> (Incoming-2,ee,0abe138a46e3) ISPN000094: Received new cluster view for 
> channel web: [0abe138a46e3|1] (2) [0abe138a46e3, 69ef835de644]
>
> 07:05:24,607 INFO 
> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
> (Incoming-2,ee,0abe138a46e3) ISPN000094: Received new cluster view for 
> channel hibernate: [0abe138a46e3|1] (2) [0abe138a46e3, 69ef835de644]
>
> 07:05:24,614 INFO 
> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] 
> (Incoming-2,ee,0abe138a46e3) ISPN000094: Received new cluster view for 
> channel ejb: [0abe138a46e3|1] (2) [0abe138a46e3, 69ef835de644]
>
> 07:05:25,873 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t1) 
> ISPN000310: Starting cluster-wide rebalance for cache realms, topology 
> CacheTopology{id=1, rebalanceId=1, 
> currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[0abe138a46e3: 
> 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = 
> (2)[0abe138a46e3: 30, 69ef835de644: 30]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:25,942 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t2) 
> ISPN000310: Starting cluster-wide rebalance for cache loginFailures, 
> topology CacheTopology{id=1, rebalanceId=1, 
> currentCH=DefaultConsistentHash{ns=80, owners = (1)[0abe138a46e3: 
> 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = 
> (2)[0abe138a46e3: 40+0, 69ef835de644: 40+0]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:25,945 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t5) 
> ISPN000310: Starting cluster-wide rebalance for cache offlineSessions, 
> topology CacheTopology{id=1, rebalanceId=1, 
> currentCH=DefaultConsistentHash{ns=80, owners = (1)[0abe138a46e3: 
> 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = 
> (2)[0abe138a46e3: 40+0, 69ef835de644: 40+0]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:25,944 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t6) 
> ISPN000310: Starting cluster-wide rebalance for cache sessions, 
> topology CacheTopology{id=1, rebalanceId=1, 
> currentCH=DefaultConsistentHash{ns=80, owners = (1)[0abe138a46e3: 
> 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = 
> (2)[0abe138a46e3: 40+0, 69ef835de644: 40+0]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:25,968 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t4) 
> ISPN000310: Starting cluster-wide rebalance for cache work, topology 
> CacheTopology{id=1, rebalanceId=1, 
> currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[0abe138a46e3: 
> 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = 
> (2)[0abe138a46e3: 30, 69ef835de644: 30]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:25,971 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t3) 
> ISPN000310: Starting cluster-wide rebalance for cache users, topology 
> CacheTopology{id=1, rebalanceId=1, 
> currentCH=ReplicatedConsistentHash{ns = 60, owners = (1)[0abe138a46e3: 
> 60]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = 
> (2)[0abe138a46e3: 30, 69ef835de644: 30]}, unionCH=null, 
> actualMembers=[0abe138a46e3, 69ef835de644]}
>
> 07:05:26,086 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t6) 
> ISPN000336: Finished cluster-wide rebalance for cache users, topology 
> id = 1
>
> 07:05:26,120 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t6) 
> ISPN000336: Finished cluster-wide rebalance for cache realms, topology 
> id = 1
>
> 07:05:26,301 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t5) 
> ISPN000336: Finished cluster-wide rebalance for cache offlineSessions, 
> topology id = 1
>
> 07:05:26,310 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t5) 
> ISPN000336: Finished cluster-wide rebalance for cache work, topology 
> id = 1
>
> 07:05:26,325 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t5) 
> ISPN000336: Finished cluster-wide rebalance for cache sessions, 
> topology id = 1
>
> 07:05:26,332 INFO  [org.infinispan.CLUSTER] (remote-thread--p10-t6) 
> ISPN000336: Finished cluster-wide rebalance for cache loginFailures, 
> topology id = 1
>
>
>
> However now I’m back to the initial scenario where:
>
>
>> I log into kc1 instance, and subsequent requests are authenticated.
>> Then I try viewing
>> host:8080/auth/realms/master/account
>> Which is also authenticated.
>>
>> Then I try to view this on kc2 by changing the port, like:
>> host:8081/auth/realms/master/account
>>
>> At this point I expect to see the same page. However, I get prompted to 
>> log in on both kc1 and kc2. I see no logs at this point.
> So what could be wrong now?
If you want to check whether clustering works, you can go to the admin 
console and verify it there. Try a scenario like this:
- Log in to the admin console on kc1
- Open the "Sessions" tab of the user and double-check that your current 
userSession (the one created by the admin console login) is listed there
- Log in to the admin console on kc2
- Open the "Sessions" tab of the user again and double-check that you now 
see both userSessions there. Double-check that you see the same 
userSessions on both kc1 and kc2. If so, clustering works.
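If you prefer to cross-check the same thing from the command line, a rough 
sketch like the one below should work against both nodes. It uses the admin 
REST endpoints as I remember them; the host names, realm, user name and 
password are placeholders for your own values, and $TOKEN / $USER_ID have 
to be filled in from the first two responses.

    # obtain an admin access token (admin-cli is a public client that allows
    # the direct password grant; use an equivalent client if yours differs)
    curl -d "grant_type=password&client_id=admin-cli&username=admin&password=admin" \
        http://kc1:8080/auth/realms/master/protocol/openid-connect/token

    # look up the user's id
    curl -H "Authorization: Bearer $TOKEN" \
        "http://kc1:8080/auth/admin/realms/master/users?username=admin"

    # list the userSessions as seen by each node; with working clustering
    # both answers should contain the same sessions
    curl -H "Authorization: Bearer $TOKEN" \
        http://kc1:8080/auth/admin/realms/master/users/$USER_ID/sessions
    curl -H "Authorization: Bearer $TOKEN" \
        http://kc2:8080/auth/admin/realms/master/users/$USER_ID/sessions
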

The scenario you described probably doesn't work as you expected because 
when you access host:8081 (having previously logged in on host:8080), 
the SSO cookie is not shared between host:8080 and host:8081, so you 
need to re-authenticate. In a real deployment you will need a load balancer, 
which is always accessed under a single address such as lb:8080 and 
forwards each request to either host:8080 or host:8081. 
With a load balancer the cookie is shared, so you should always be able to 
see the account page and stay authenticated.
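For example, a minimal sketch of fronting the two instances with Apache 
httpd and mod_proxy_balancer (assuming both run on "host" as in your test; 
any other load balancer works the same way):

    # needs mod_proxy, mod_proxy_http, mod_proxy_balancer and an lbmethod module
    <Proxy "balancer://keycloak">
        BalancerMember "http://host:8080"
        BalancerMember "http://host:8081"
    </Proxy>
    ProxyPass        "/auth" "balancer://keycloak/auth"
    ProxyPassReverse "/auth" "balancer://keycloak/auth"

The browser then always talks to the balancer's single host and port, so 
the SSO cookies are set for that one origin and are sent along no matter 
which backend node ends up serving the request.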

Marek
>
>
> From: Abdullah Sarp Kaya <akaya at expedia.com>
> Date: Thursday, March 31, 2016 at 2:37 PM
> To: John Bartko <john.bartko at drillinginfo.com>
> Cc: Marek Posolda <mposolda at redhat.com>, 
> "keycloak-user at lists.jboss.org" <keycloak-user at lists.jboss.org>
> Subject: Re: [keycloak-user] Keycloak Clustering, other instance logs 
> me out
>
> Okay, so I gave up on that approach and am now using S3 instead; that way I 
> can see what they send to each other.
>
> The first thing I noticed is that the file they modify looks like this:
> 74594d4ef42f e0172d5d-8392-5ee7-bba3-03d997c2ef6e 127.0.0.1:55200 T
>
> When another instance joins, it tries to connect to 127.0.0.1:55200, 
> which obviously doesn't work, and this gets logged:
>
> 04:26:15,908 WARN  [org.jgroups.protocols.pbcast.GMS] (MSC service 
> thread 1-2) 74594d4ef42f: JOIN(74594d4ef42f) sent to e8a65867f61d 
> timed out (after 3000 ms), on try 1
>
>
> Again I worked backwards and found out that the jgroups-udp socket binding 
> uses this port number. I also found that Docker uses 172.17.0.x and has 
> assigned two IP addresses, so I configured it like this:
>
>     <interface name="dockerIp">
>         <inet-address value="${jboss.bind.address:172.17.0.3}"/>
>     </interface>
>
>     <socket-binding name="jgroups-udp" interface="dockerIp" port="55200"
>                     multicast-address="${jboss.default.multicast.address:230.0.0.4}"
>                     multicast-port="45688"/>
> However right now I still get the previous problem:
>
> 04:28:31,418 ERROR [org.jboss.msc.service.fail] (MSC service thread 
> 1-2) MSC000001: Failed to start service jboss.jgroups.channel.ee: 
> org.jboss.msc.service.StartException in service 
> jboss.jgroups.channel.ee: java.security.PrivilegedActionException: 
> java.net.BindException: [UDP] /0.0.0.0 is not a valid address on any 
> local network interface
>
>         at 
> org.wildfly.clustering.jgroups.spi.service.ChannelBuilder.start(ChannelBuilder.java:80)
>
>         at 
> org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
>
>         at 
> org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
>
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>         at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.security.PrivilegedActionException: 
> java.net.BindException: [UDP] /0.0.0.0 is not a valid address on any 
> local network interface
>
>         at 
> org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:640)
>
>         at 
> org.jboss.as.clustering.jgroups.JChannelFactory.createChannel(JChannelFactory.java:98)
>
>         at 
> org.wildfly.clustering.jgroups.spi.service.ChannelBuilder.start(ChannelBuilder.java:78)
>
>         ... 5 more
>
> Why does it keep showing that my IP address is 0.0.0.0, even though I 
> explicitly set the right IP address?
>
> PS: I'm well aware that hard-coding IP addresses like this is a bad hack, 
> but at this point I'm just trying to get it working.
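(One thing worth checking here, as a side note: an expression like 
${jboss.bind.address:172.17.0.3} only falls back to 172.17.0.3 when the 
jboss.bind.address system property is not set. If the server is started 
with "-b 0.0.0.0", that property is 0.0.0.0 and wins over the default, 
which would explain the error above. Using a dedicated property name, or 
a literal address, avoids that surprise; "my.jgroups.address" below is 
purely illustrative.)

    <interface name="dockerIp">
        <!-- falls back to the literal address unless
             -Dmy.jgroups.address=<ip> is passed at startup -->
        <inet-address value="${my.jgroups.address:172.17.0.3}"/>
    </interface>
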
>
>
> From: John Bartko <john.bartko at drillinginfo.com>
> Date: Thursday, March 31, 2016 at 12:53 PM
> To: Abdullah Sarp Kaya <akaya at expedia.com>
> Cc: Marek Posolda <mposolda at redhat.com>, 
> "keycloak-user at lists.jboss.org" <keycloak-user at lists.jboss.org>
> Subject: Re: [keycloak-user] Keycloak Clustering, other instance logs 
> me out
>
> You're on the right track. Taking a look at my notes, the following 
> may be required:
>
>   * docker container with host mode networking and NET_ADMIN
>     capabilities if clustering across container hosts
>   * entrypoint parameters:
>     -Djgroups.bind_addr=HOST_IP -b HOST_IP --server-config
>     standalone-ha.xml
>     note that the host's default interface IP must be used, not the
>     0.0.0.0 wildcard
>   * the jgroups socket-binding changed to listen on that interface
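Put together, a rough sketch of such a container start, assuming the 
jboss/keycloak-ha-postgres image from this thread forwards its arguments 
to standalone.sh and that 192.168.1.10 stands in for the host's real IP 
(database linking/environment omitted):

    # host networking shares the host's interfaces, so no -p mappings needed
    docker run -d --net=host --cap-add=NET_ADMIN \
        jboss/keycloak-ha-postgres \
        -b 192.168.1.10 -Djgroups.bind_addr=192.168.1.10 \
        --server-config standalone-ha.xml
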
>
>
> On Wed, Mar 30, 2016 at 7:49 PM, Sarp Kaya <akaya at expedia.com> wrote:
>
>     Sorry to send another e-mail; what I did was change this:
>
>     <socket-binding name="jgroups-udp" interface="public" port="55200"
>     multicast-address="${jboss.default.multicast.address:230.0.0.4}"
>     multicast-port="45688"/>
>
>     Also set the public interface as:
>
>     <interface name="public"> <inet-address
>     value="${jboss.bind.address:0.0.0.0}"/> </interface>
>
>     But now I’m getting this error:
>
>     00:45:40,146 ERROR [org.jboss.msc.service.fail] (MSC service
>     thread 1-2) MSC000001: Failed to start service
>     jboss.jgroups.channel.ee:
>     org.jboss.msc.service.StartException in service
>     jboss.jgroups.channel.ee:
>     java.security.PrivilegedActionException: java.net.BindException:
>     [UDP] /0.0.0.0 is not a valid address on any
>     local network interface
>
>           at
>     org.wildfly.clustering.jgroups.spi.service.ChannelBuilder.start(ChannelBuilder.java:80)
>
>           at
>     org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
>
>           at
>     org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
>
>           at
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>           at
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>           at java.lang.Thread.run(Thread.java:745)
>
>     Caused by: java.security.PrivilegedActionException:
>     java.net.BindException: [UDP] /0.0.0.0 is not a
>     valid address on any local network interface
>
>           at
>     org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:640)
>
>           at
>     org.jboss.as.clustering.jgroups.JChannelFactory.createChannel(JChannelFactory.java:98)
>
>           at
>     org.wildfly.clustering.jgroups.spi.service.ChannelBuilder.start(ChannelBuilder.java:78)
>
>           ... 5 more
>
>     Caused by: java.net.BindException: [UDP] /0.0.0.0
>     is not a valid address on any local network interface
>
>           at org.jgroups.util.Util.checkIfValidAddress(Util.java:3522)
>
>           at
>     org.jgroups.stack.Configurator.ensureValidBindAddresses(Configurator.java:903)
>
>           at
>     org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:118)
>
>           at
>     org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:57)
>
>           at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:477)
>
>           at org.jgroups.JChannel.init(JChannel.java:853)
>
>           at org.jgroups.JChannel.<init>(JChannel.java:159)
>
>           at
>     org.jboss.as.clustering.jgroups.JChannelFactory$1.run(JChannelFactory.java:95)
>
>           at
>     org.jboss.as.clustering.jgroups.JChannelFactory$1.run(JChannelFactory.java:92)
>
>           at
>     org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
>
>           ... 7 more
>
>
>     If I put in a different IP address (say, the Docker machine's IP
>     address), I get the same error as well.
>
>     From: John Bartko <john.bartko at drillinginfo.com>
>     Date: Thursday, March 31, 2016 at 1:23 AM
>     To: Marek Posolda <mposolda at redhat.com>
>     Cc: Abdullah Sarp Kaya <akaya at expedia.com>,
>     "keycloak-user at lists.jboss.org" <keycloak-user at lists.jboss.org>
>     Subject: Re: [keycloak-user] Keycloak Clustering, other instance
>     logs me out
>
>     When clustering across separate hosts, I had to change
>     the jgroups-udp socket binding to listen on the public interface
>     (binds to loopback by default).
>
>     On Wed, Mar 30, 2016 at 5:52 AM, Marek Posolda
>     <mposolda at redhat.com> wrote:
>
>
>         When you start the second instance, are you seeing something
>         like this in the log of both servers?
>
>         INFO
>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>         (Incoming-10,shared=udp)
>         ISPN000094: Received new cluster view: [node1/keycloak|1] (2)
>         [node1/keycloak, node2/keycloak]
>
>         If not, then clustering isn't working properly and the servers
>         don't form a cluster with each other. The log you sent shows
>         just the startup of one server, which indicates that
>         clustering may not be working.
>
>         Maybe multicast doesn't work in your network. Either disable
>         the firewall/SELinux/whatever so that multicast works, or switch
>         to a TCP JGroups stack instead of the UDP one. See the WildFly
>         and JGroups documentation for more details.
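(For completeness, a rough sketch of that switch in standalone-ha.xml on 
the WildFly 10 that Keycloak 1.9 ships with: point the "ee" channel at the 
tcp stack and, if multicast is unavailable, replace MPING discovery with a 
static TCPPING list. The node names kc1/kc2 and port 7600 below are 
examples and have to match your own hosts and jgroups-tcp socket binding.)

    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <!-- static member list instead of multicast discovery -->
            <protocol type="TCPPING">
                <property name="initial_hosts">kc1[7600],kc2[7600]</property>
                <property name="port_range">0</property>
            </protocol>
            <!-- the rest of the default tcp stack protocols stay as shipped -->
        </stack>
    </stacks>
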
>
>         Also, I personally use virtual hosts to test clustering of
>         2 servers on the same machine (i.e. virtual hosts like
>         kc1:8080 and kc2:8080). Using the same host name and differing
>         just in the port number (host:8080 and host:8081) may cause a
>         mess with cookies, so
>         I personally don't use a setup like that.
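(A cheap way to get such virtual hosts on a single box, as a sketch: add 
loopback aliases to /etc/hosts and bind each server copy to its own name 
with -b kc1 / -b kc2, plus separate management and private bind addresses 
so the ports don't clash. The names and addresses are of course examples.)

    # /etc/hosts -- 127.0.0.2 is a valid loopback address on Linux
    127.0.0.1   kc1
    127.0.0.2   kc2
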
>
>         Marek
>
>
>         On 30/03/16 08:38, Sarp Kaya wrote:
>>         I have tried using standalone-ha.xml with a shared database. I
>>         thought that would be enough, but it seems it's not. The
>>         problem is:
>>
>>         I log into kc1 instance, and subsequent requests are
>>         authenticated.
>>         Then I try viewing
>>         host:8080/auth/realms/master/account
>>         Which is also authenticated.
>>
>>         Then I try to view this on kc2 by changing the port, like:
>>         host:8081/auth/realms/master/account
>>
>>         At this point I expect to see the same page. However, I get
>>         prompted to log in on both kc1 and kc2. I see no logs at
>>         this point.
>>
>>         So now I have switched to using keycloak-ha-postgres because
>>         it seemed to me that it comes with clustering enabled out of
>>         the box. I did almost exactly what this page:
>>         https://hub.docker.com/r/jboss/keycloak-ha-postgres/builds/benk6w5cgdmrqonrxvu3bfu/
>>         told me to do. The only differences are that I added port
>>         mappings (-p 8080:8080 to one instance and -p 8081:8080 to
>>         the other one) and added a new user.
>>
>>         Once I start them, I get this log:
>>
>>         05:28:49,888 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000078: Starting JGroups channel
>>         keycloak
>>
>>         05:28:49,893 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000078: Starting JGroups channel
>>         server
>>
>>         05:28:49,902 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000094: Received new cluster
>>         view for channel keycloak: [a05014a5dc24|0] (1) [a05014a5dc24]
>>
>>         05:28:49,907 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000079: Channel keycloak local
>>         address is a05014a5dc24, physical addresses are
>>         [127.0.0.1:55200]
>>
>>         05:28:49,902 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000094: Received new cluster
>>         view for channel server: [a05014a5dc24|0] (1) [a05014a5dc24]
>>
>>         05:28:49,914 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000079: Channel server local
>>         address is a05014a5dc24, physical addresses are
>>         [127.0.0.1:55200]
>>
>>         05:28:49,925 INFO
>>         [org.infinispan.factories.GlobalComponentRegistry] (MSC
>>         service thread 1-2) ISPN000128: Infinispan version:
>>         Infinispan 'Mahou' 8.1.0.Final
>>
>>         05:28:49,926 INFO
>>         [org.infinispan.factories.GlobalComponentRegistry] (MSC
>>         service thread 1-1) ISPN000128: Infinispan version:
>>         Infinispan 'Mahou' 8.1.0.Final
>>
>>         05:28:49,978 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000078: Starting JGroups channel web
>>
>>         05:28:49,982 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000094: Received new cluster
>>         view for channel web: [a05014a5dc24|0] (1) [a05014a5dc24]
>>
>>         05:28:49,984 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-2) ISPN000079: Channel web local
>>         address is a05014a5dc24, physical addresses are
>>         [127.0.0.1:55200]
>>
>>         05:28:49,985 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000078: Starting JGroups channel
>>         hibernate
>>
>>         05:28:49,986 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000094: Received new cluster
>>         view for channel hibernate: [a05014a5dc24|0] (1) [a05014a5dc24]
>>
>>         05:28:49,987 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000079: Channel hibernate local
>>         address is a05014a5dc24, physical addresses are
>>         [127.0.0.1:55200]
>>
>>         05:28:50,028 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000078: Starting JGroups channel ejb
>>
>>         05:28:50,030 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000094: Received new cluster
>>         view for channel ejb: [a05014a5dc24|0] (1) [a05014a5dc24]
>>
>>         05:28:50,031 INFO
>>         [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
>>         (MSC service thread 1-1) ISPN000079: Channel ejb local
>>         address is a05014a5dc24, physical addresses are
>>         [127.0.0.1:55200]
>>
>>         05:28:50,357 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 56) WFLYCLINF0002: Started
>>         realmVersions cache from keycloak container
>>
>>         05:28:50,391 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 54) WFLYCLINF0002: Started
>>         offlineSessions cache from keycloak container
>>
>>         05:28:50,397 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 55) WFLYCLINF0002: Started
>>         loginFailures cache from keycloak container
>>
>>         05:28:50,396 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 52) WFLYCLINF0002: Started
>>         sessions cache from keycloak container
>>
>>         05:28:50,392 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 53) WFLYCLINF0002: Started
>>         realms cache from keycloak container
>>
>>         05:28:50,399 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 51) WFLYCLINF0002: Started
>>         users cache from keycloak container
>>
>>         05:28:50,402 INFO [org.jboss.as.clustering.infinispan]
>>         (ServerService Thread Pool -- 50) WFLYCLINF0002: Started work
>>         cache from keycloak container
>>
>>
>>
>>         However, I still have the same issue as above (I get logged
>>         out). Also, I don't get any new log output during the entire
>>         log-in/log-out process.
>>
>>         Am I doing something wrong?
>>         Thanks,
>>         Sarp
>>
>>
>>         _______________________________________________
>>         keycloak-user mailing list
>>         keycloak-user at lists.jboss.org
>>         https://lists.jboss.org/mailman/listinfo/keycloak-user
>
>
>         _______________________________________________
>         keycloak-user mailing list
>         keycloak-user at lists.jboss.org
>         https://lists.jboss.org/mailman/listinfo/keycloak-user
>
>
>


