On 07/03/18 13:51, Chervine Majeri wrote:
> We're considering attempting the exact same setup, with 2 standalone
> Keycloaks connected to the same backend DB.
User sessions are one example. There are some other things that won't
work. We have never tested such a setup and I wouldn't recommend it.
> From what I've seen, only what's stored in the cache ends up being
> different, meaning the HA modes really only differ in that they have
> a distributed cache. Is this correct? Or does it affect the connection
> to the DB too?
> From that assumption, looking at the content of "standalone-ha.xml", I see
> that it's mostly session-related data and things like loginFailures
> that end up in the distributed cache.
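For reference, the "keycloak" cache-container in standalone-ha.xml looks roughly like this (cache names and exact attributes vary between Keycloak versions, so treat this as an illustrative sketch, not the literal file):

```xml
<cache-container name="keycloak">
    <!-- Locally cached data; other nodes are only told to evict stale entries -->
    <local-cache name="realms"/>
    <local-cache name="users"/>
    <local-cache name="keys"/>
    <!-- Distributed caches: the data itself is spread across the cluster -->
    <distributed-cache name="sessions" owners="1"/>
    <distributed-cache name="offlineSessions" owners="1"/>
    <distributed-cache name="loginFailures" owners="1"/>
    <replicated-cache name="work"/>
</cache-container>
```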
> Since we have a session cookie, unique for every session, can we use
> session stickiness in the reverse proxy to circumvent most of the issues?
Stickiness is usually not sufficient. The OpenID Connect specification
uses some "backchannel" requests, which are not sent as part of the
browser session but directly between the client application and Keycloak
(for example the code-to-token request, the refresh-token request, etc.).
Those requests won't carry the sticky session cookie and hence can be
directed to a different node than the one that owns the session.
The only case where everything may work is if all your clients are
public clients (for example JavaScript applications), so they can
participate in sticky sessions, as their backchannel requests are sent
from the browser as well.
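To make the backchannel point concrete, here is a minimal sketch of a code-to-token request as a confidential client's backend would send it (hostnames, realm, and client values are hypothetical). The point is that the request carries no browser cookies, so a cookie-based sticky proxy has nothing to route on:

```python
# Sketch of an OIDC code-to-token exchange (OAuth 2.0 token endpoint),
# as sent server-to-server by a confidential client's backend.
# The endpoint URL, client_id and secret below are hypothetical examples.
from urllib.parse import urlencode

token_endpoint = (
    "https://keycloak.example.com/auth/realms/demo"
    "/protocol/openid-connect/token"  # hypothetical host and realm
)

form = urlencode({
    "grant_type": "authorization_code",
    "code": "<code-from-redirect>",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "my-app",
    "client_secret": "<secret>",
})

headers = {"Content-Type": "application/x-www-form-urlencoded"}

# Note: no Cookie header. The request comes from the application's backend,
# not the user's browser, so the proxy's sticky-session cookie is absent and
# the request may land on either Keycloak node.
assert "Cookie" not in headers
print(form)
```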
There are also other cases where sticky sessions are not sufficient.
For example, in scenarios where an email is sent to the user (e.g. the
"Forgot password" functionality) and the user clicks the link, but the
link opens in a different browser than the one that "owns" the sticky
session cookie. Then the request may be served by a different node than
the one that owns the session.
Finally, cache invalidations won't work. Keycloak caches some data for
performance reasons; those caches are "realms", "users" and "keys".
Every cluster node caches the data locally; however, when a change
happens (data is updated), the node that performed the update must
notify the other nodes in the cluster about the change. If you don't use
a cluster, this won't happen: the other nodes won't be notified and will
still see stale data in their caches. In other words, if you update user
"john" on node1, node2 won't be aware of the update and will still serve
stale (old) data for user "john" from its cache. The only workarounds are:
- Disable the cache entirely (see our docs for more details)
- Ensure the cache is cleared after every update (this is usually not
achievable unless you have some special kind of deployment, e.g.
something close to a read-only deployment).
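If you did go down the "clear the cache after every update" route, Keycloak's Admin REST API exposes cache-clearing endpoints that an external script could call on every node after a change. A minimal sketch (node addresses and realm name are hypothetical; the authenticated POSTs themselves are omitted since the servers don't exist):

```python
# Sketch: after an update on one node, hit the Admin REST cache-clearing
# endpoints on *every* node so none keeps stale data.
# The node list and realm name are hypothetical examples.
base = "/auth/admin/realms/{realm}"
endpoints = ["clear-realm-cache", "clear-user-cache", "clear-keys-cache"]

nodes = ["https://kc1.example.com", "https://kc2.example.com"]  # hypothetical

urls = [
    f"{node}{base.format(realm='demo')}/{ep}"
    for node in nodes
    for ep in endpoints
]

# Each URL would receive a POST carrying an admin bearer token;
# here we only print the targets instead of sending anything.
for u in urls:
    print(u)
```

Note that this still leaves a window between the update and the clear in which the other node serves stale data, which is why the reply above calls it viable only for near-read-only deployments.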
> Obviously the loginFailures feature wouldn't work all that well, but
> that would be acceptable for my use-case.