Thanks for the clarification.
We’ll stay with owners=2 then.
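Concretely, that means keeping all of the session-related distributed caches at owners="2". As a rough sketch (not a verified copy of our config; cache names taken from the default Keycloak setup and the entries quoted below), the relevant part of the HA profile would look like:

<distributed-cache name="sessions" mode="SYNC" owners="2"/>
<distributed-cache name="offlineSessions" mode="SYNC" owners="2"/>
<distributed-cache name="clientSessions" mode="SYNC" owners="2"/>
<distributed-cache name="offlineClientSessions" mode="SYNC" owners="2"/>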
Thanks,
Libor Krzyžanek
Principal Software Engineer
Middleware Engineering Services
On 14.02.2018, at 18:19, Marek Posolda <mposolda@redhat.com> wrote:
We didn't try to test with a replicated cache. I think a replicated cache is the same thing as a distributed cache where the number of owners equals the number of nodes in the cluster. For a cluster with 2 nodes, I think there is no difference between a replicated cache and a distributed cache with 2 owners.
For setups with more nodes, a replicated cache has the disadvantage that the memory footprint is bigger (every item is saved on all cluster nodes) and writes are more expensive. Reads are less expensive with a replicated cache as every item is available locally, but with sticky sessions (for which we have some support in the latest Keycloak), the advantage of cheaper reads is not so important, as the items being read are usually available on the "local" node anyway.
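For illustration (a sketch of the Infinispan subsystem config, not a tested setup), the two variants would look something like this, and on a 2-node cluster they should behave roughly the same:

<!-- distributed: every entry is kept on 2 of the N nodes -->
<distributed-cache name="sessions" mode="SYNC" owners="2"/>

<!-- replicated: every entry is kept on all N nodes -->
<replicated-cache name="sessions" mode="SYNC"/>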
Marek
On 14/02/18 16:24, Libor Krzyžanek wrote:
> Hi,
> thanks for the advice. It looks to be working.
>
> Any reason why we should rather use a “replicated cache” instead of a distributed cache with 2 owners? Are there any tricky implications?
> What would be your advice - stay with the distributed cache with 2 owners, or switch to a replicated cache?
>
> Thank you very much,
>
> Libor Krzyžanek
> Principal Software Engineer
> Middleware Engineering Services
>
>> On 13.02.2018, at 21:52, Marek Posolda <mposolda@redhat.com> wrote:
>>
>> Hi Libor,
>>
>> you need to increase owners also for "clientSessions" and "offlineClientSessions".
>>
>> Marek
>>
>> On 13/02/18 10:23, Libor Krzyžanek wrote:
>>> And btw. this is the output in the log when one node is killed:
>>>
>>>
>>> 2018-02-12 15:16:44,794 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [developer-keycloak04|26] (1) [developer-keycloak04]
>>> 2018-02-12 15:16:44,794 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [developer-keycloak04|26] (1) [developer-keycloak04]
>>> 2018-02-12 15:16:44,795 WARN [org.infinispan.CLUSTER] (transport-thread--p32-t6) [Context=client-mappings]ISPN000314: Lost at least half of the stable members, possible split brain causing data inconsistency. Current members are [developer-keycloak04], lost members are [developer-keycloak03], stable members are [developer-keycloak04, developer-keycloak03]
>>> 2018-02-12 15:16:44,801 FATAL [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=authenticationSessions]ISPN000313: Lost data because of abrupt leavers [developer-keycloak03]
>>> 2018-02-12 15:16:44,803 FATAL [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=sessions]ISPN000313: Lost data because of abrupt leavers [developer-keycloak03]
>>> 2018-02-12 15:16:44,805 FATAL [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=clientSessions]ISPN000313: Lost data because of abrupt leavers [developer-keycloak03]
>>> 2018-02-12 15:16:44,807 WARN [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=work]ISPN000314: Lost at least half of the stable members, possible split brain causing data inconsistency. Current members are [developer-keycloak04], lost members are [developer-keycloak03], stable members are [developer-keycloak04, developer-keycloak03]
>>> 2018-02-12 15:16:44,810 FATAL [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=offlineSessions]ISPN000313: Lost data because of abrupt leavers [developer-keycloak03]
>>> 2018-02-12 15:16:44,823 FATAL [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=loginFailures]ISPN000313: Lost data because of abrupt leavers [developer-keycloak03]
>>> 2018-02-12 15:16:44,825 WARN [org.infinispan.CLUSTER] (transport-thread--p36-t10) [Context=actionTokens]ISPN000314: Lost at least half of the stable members, possible split brain causing data inconsistency. Current members are [developer-keycloak04], lost members are [developer-keycloak03], stable members are [developer-keycloak04, developer-keycloak03]
>>>
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> Principal Software Engineer
>>> Middleware Engineering Services
>>>
>>>> On 13.02.2018, at 10:20, Libor Krzyžanek <lkrzyzan@redhat.com> wrote:
>>>>
>>>> Hi,
>>>> we’re upgrading Keycloak from 1.9 to 3.4 and the caches changed quite a lot.
>>>>
>>>> The setup is simply two nodes in HA mode. I see that the nodes see each other, but it’s not clear to me what the easiest way is to achieve failover with session replication. In KC 1.9 we just increased owners=2 and it was enough.
>>>>
>>>> We tried the default setup with distributed caches (most of them have owners="1"), and when one node is killed (not shutdown.sh but a hard Java kill) the user loses the session and is asked to log in again once the LB forwards traffic to the second node.
>>>>
>>>> We tried to increase owners on these caches:
>>>> <distributed-cache name="sessions" mode="SYNC" owners="2"/>
>>>> <distributed-cache name="offlineSessions" mode="SYNC" owners="2"/>
>>>> but with no luck.
>>>>
>>>> I read this article: http://blog.keycloak.org/2017/09/cross-datacenter-support-in-keycloak.html but we don’t have JDG because it’s just a simple cluster with two nodes within the same datacenter.
>>>>
>>>> What is the best and easiest approach to achieve failover with session replication?
>>>>
>>>> Thanks,
>>>>
>>>> Libor
>>>>
>>>> Libor Krzyžanek
>>>> Principal Software Engineer
>>>> Middleware Engineering Services
>>>>
>>> _______________________________________________
>>> keycloak-user mailing list
>>> keycloak-user@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-user