[keycloak-user] Losing some sessions during clustering

Stian Thorgersen sthorger at redhat.com
Fri Nov 4 01:58:40 EDT 2016


Do you wait until the nodes are fully started, or just for a 200 from the
admin console? You need to wait for Infinispan to successfully transfer the state.

Try giving it at least a couple of minutes before killing nodes.

Have you checked that clustering is working properly and that nodes see
each other?
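One quick way to verify that the nodes actually see each other is to look for
the Infinispan cluster-view messages in the server log. A sketch, using a
simulated log line (the exact wording and node names vary by Infinispan
version; on a real install you would grep standalone/log/server.log):

```shell
# Simulated ISPN000094 line; a healthy 2-node cluster reports a view of size 2.
log='ISPN000094: Received new cluster view for channel keycloak: [node1|1] (2) [node1, node2]'

# Extract the member count in parentheses and compare it to the expected node count.
members=$(echo "$log" | sed -n 's/.*(\([0-9]\+\)).*/\1/p')
echo "$members"
```

If the reported member count stays at 1 after a second node starts, the nodes
are not clustering and owners=2 cannot help.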

We don't support Postgres for session storage, so I'm not sure how you're
planning to switch to that.
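For anyone finding this thread later: the owners change John suggests further
down can be applied to each of the three distributed caches mentioned in the
test. A sketch using jboss-cli; the cache names are assumed from a stock
standalone-ha.xml and may differ in your configuration, and nodes need a
restart for the change to take effect:

```shell
# Hypothetical batch; cache names (sessions, offlineSessions, loginFailures)
# assumed from a default Keycloak 2.x standalone-ha.xml.
bin/jboss-cli.sh --connect <<'EOF'
batch
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=2)
run-batch
EOF
```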

On 3 November 2016 at 22:23, Chris Hairfield <chairfield at gmail.com> wrote:

> No dice, sadly. Here's our latest test:
>
> 1. Set owners to 2 for all 3 caches
> 2. Start 2 nodes
> 3. Perform a rolling release
>     a. Start node 3
>     b. Wait for node 3 to respond 200 on admin console
>     c. Kill node 1
>     d. Start node 4
>     e. Wait for node 4 to respond 200 on admin console
>     f. Kill node 2
>
> We lost sessions, even though there were always at least 2 nodes fully
> online. *(To be explicit, this was judged by signing into the admin
> console. Is this a fair test? The # of sessions reported by Keycloak stayed
> the same...)*
>
> We're considering switching over from Infinispan to Postgres for session
> storage, at least to see if it works. Still, any additional tips or
> thoughts would be great.
>
> Thanks so far,
> Chris
>
> On Thu, Nov 3, 2016 at 11:36 AM Chris Hairfield <chairfield at gmail.com>
> wrote:
>
> > Many thanks, John. This seems very likely. If there's no response on our
> > part, you may assume it's fixed.
> >
> > Cheers,
> > Chris
> >
> > On Thu, Nov 3, 2016 at 11:26 AM John Bartko <
> john.bartko at drillinginfo.com>
> > wrote:
> >
> > It sounds like sessions distributed-cache is not being replicated.
> >
> > From the Install/Config documentation on cache replication
> > <https://keycloak.gitbooks.io/server-installation-and-configuration/content/v/2.3/topics/cache/replication.html>:
> > "By default, Keycloak only specifies one owner for data. So if that one
> > node goes down that data is lost. This usually means that users will be
> > logged out and will have to login again."
> >
> > jboss-cli snippet:
> > /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=2)
> >
> > Hope that helps,
> > -John Bartko
> >
> > On Thu, Nov 3, 2016 at 12:08 PM, Chris Hairfield <chairfield at gmail.com>
> > wrote:
> >
> > Hello Keycloak users,
> >
> > We're seeing strange behavior with the session handling when starting up a
> > new node. Keycloak doesn't retain all sessions. Here's our experiment:
> >
> >    1. start with 1 node containing a few dozen sessions
> >    2. start node 2 (nodes clustered via JGroups Ping table + infinispan)
> >    3. wait for 10 minutes
> >    4. stop node 1
> >
> > End result: *some* of the clients connected are forced to log back in.
> > Most sessions remain.
> >
> > We're still investigating, so I cannot infer beyond this point at the
> > moment. I'm simply curious whether anyone knows the following:
> >
> >    - are *all* sessions meant to be migrated to new nodes?
> >    - how long does it take to migrate sessions?
> >    - does a new node wait until sessions are migrated before it enables
> >      the admin interface?
> >    - is there any logic to prune sessions on clustering?
> >
> > Any thoughts would be greatly appreciated.
> >
> > Thanks,
> > Chris
> >
> > _______________________________________________
> > keycloak-user mailing list
> > keycloak-user at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/keycloak-user
> >
> >
> >
>

