On 3/17/17 5:01 AM, Stian Thorgersen wrote:
In summary I'm more open towards your approach, but still have some concerns around it. More inline.
On 16 March 2017 at 16:05, Bill Burke <bburke@redhat.com> wrote:
On 3/16/17 6:19 AM, Stian Thorgersen wrote:
> The Keycloak proxy shouldn't be tied directly to the database or
> caches. It should ideally be stateless and ideally there's no
> need for sticky sessions.
>
Please stop making broad blanket statements and back up your response, otherwise I'm just going to ignore you.
If the proxy implements pure OIDC, it has to minimally store the refresh token and access token. Plus I foresee us wanting to provide more complex proxy features which will require storing more and more state. So the proxy needs sessions, which means many users will want this to be fault tolerant, which means the proxy will require distributed sessions.
Can't the tokens just be stored in a cookie? That would make it fully stateless, with no need for sticky sessions.
I guess it comes down to what is more costly: refresh token requests or having a distributed "session" cache (which we already have).
I'm worried about cookie size constraints. I'll do some measurements. This issue is orthogonal to the other issues, though, I think.
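To make the size concern concrete, here's a rough sketch of what a cookie-stored token pair could look like. CookieTokenStore and the key handling are made up for illustration, not Keycloak API:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch: pack the token pair into one encrypted cookie value.
public class CookieTokenStore {

    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public CookieTokenStore(SecretKey key) {
        this.key = key;
    }

    // Encrypt "accessToken|refreshToken" with AES-GCM, prepend the IV,
    // and Base64-url encode the result into a single cookie value.
    public String encode(String accessToken, String refreshToken) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plain = (accessToken + "|" + refreshToken).getBytes(StandardCharsets.UTF_8);
        byte[] cipherText = cipher.doFinal(plain);
        byte[] out = new byte[iv.length + cipherText.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(cipherText, 0, out, iv.length, cipherText.length);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        CookieTokenStore store = new CookieTokenStore(keyGen.generateKey());
        // Placeholder JWTs; real signed tokens easily run into the kilobytes.
        String value = store.encode("eyJhbGciOi...access...", "eyJhbGciOi...refresh...");
        System.out.println("cookie value size: " + value.length() + " bytes");
    }
}

Two signed JWTs plus IV and Base64 overhead push against the roughly 4KB-per-cookie limit browsers enforce, which is exactly what the measurements need to confirm.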
> It should be capable of running collocated with the Keycloak
> Server for simplicity, but also should be possible to run in
> separate process. If it's done as an additional subsystem that
> allows easily configuring a Keycloak server to be IdP, IdP+Proxy
> or just Proxy.
>
> Further, it should leverage OpenID Connect rather than us coming
> up with a new separate protocol.
>
> My reasoning behind this is simple:
>
> * Please let's not invent another security protocol! That's a lot
> of work and a whole new vulnerability vector to deal with.
> * There will be tons more requests to a proxy than there are to
> the server. Latency overhead will also be much more important.
>
It wouldn't be a brand new protocol, just an optimized subset of
OIDC. For example, you wouldn't have to do a code to token
request nor would you have to execute refresh token requests. It
would also make things like revocation and backchannel logout much
easier, nicer, more efficient, and more robust.
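For reference, the code-to-token round trip the embedded proxy would skip looks roughly like this; the endpoint, realm, and client values are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CodeToTokenRequest {

    // Standard OIDC: after the browser redirect, the proxy must POST the
    // authorization code back to the token endpoint. An embedded proxy
    // sharing the auth server's caches could skip this round trip.
    public static int exchange(String code) throws Exception {
        URL tokenEndpoint = new URL(
            "https://keycloak.example.com/auth/realms/demo/protocol/openid-connect/token");
        HttpURLConnection conn = (HttpURLConnection) tokenEndpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        String body = "grant_type=authorization_code"
                + "&code=" + URLEncoder.encode(code, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode("https://proxy.example.com/callback", "UTF-8")
                + "&client_id=proxy&client_secret=secret";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // One extra network round trip per login, per proxied app.
        return conn.getResponseCode();
    }
}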
I like removing the code to token request and refresh token requests. However, doesn't the revocation and backchannel logout mechanism have to be made simpler and more robust for "external apps" as well? Wouldn't it be better to solve this problem in general and make it available to external apps, and not just our "embedded" proxy?
Client nodes currently register themselves with the auth server on demand so that they can receive revocation and backchannel logout events. The auth server sends a message to each and every node when these events happen. A proxy that has access to the UserSession cache doesn't have to do any of these things. This is the "simpler" and "more efficient" argument. I forgot the "more robust" argument I had.
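Sketched out, the contrast is something like the following; ProxySessionValidator is a made-up name and the SPI calls should be treated as approximate:

import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserSessionModel;

public class ProxySessionValidator {

    private final KeycloakSession session;
    private final RealmModel realm;

    public ProxySessionValidator(KeycloakSession session, RealmModel realm) {
        this.session = session;
        this.realm = realm;
    }

    // A pure OIDC proxy has to hold a refresh token, periodically POST it
    // back to the token endpoint, and register for pushed revocation and
    // logout events. An embedded proxy with the UserSession cache local
    // can answer "is this session still valid?" with one cache lookup.
    public boolean isActive(String userSessionId) {
        UserSessionModel userSession =
                session.sessions().getUserSession(realm, userSessionId);
        return userSession != null; // entry disappears on logout/revocation
    }
}

Logout and revocation then need no per-node push at all: the session entry going away is the event.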
I just see huge advantages with this approach: simpler provisioning, simpler configuration, a really nice user experience overall, and possibly some optimizations. What I'm looking for are disadvantages to this approach, which I currently see as:
1) Larger memory footprint
2) More database connections, although these connections should
become idle after boot.
3) Possible extra distributed session replication as the
User/ClientSession needs to be visible on both the auth server and
the proxy.
4) Possible headache of too many nodes in a cluster, although a proxy is supposed to be able to handle proxying multiple apps and multiple instances of each app.
I would think it would make it even harder to scale to really big loads. There are already limits on a Keycloak cluster due to invalidation messages, and even more so the sessions. If we add even more nodes and load to the same cluster, that just makes matters worse. There are also significantly more requests to applications than there are to the KC server. That's why it seems safer to keep it separate.
Configuring a proxy in the admin console is a good thing, right? If we assume that, then the proxy needs to receive realm invalidation events so that it can refresh its client view (config settings, mappers, etc.).
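The proxy side of that would need something shaped roughly like this; the invalidation hook and all the names here are hypothetical, not existing API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache of the proxy's client view, flushed when the auth
// server broadcasts a realm invalidation event.
public class ProxyClientConfigCache {

    // clientId -> cached client view (config settings, mappers, etc.)
    private final Map<String, CachedClient> clients = new ConcurrentHashMap<>();

    public CachedClient get(String clientId) {
        return clients.get(clientId);
    }

    // Wired to whatever invalidation event the realm cache already emits;
    // the next request re-reads the client view from the realm store.
    public void onRealmInvalidation(String realmId) {
        clients.clear();
    }

    public static class CachedClient {
        public final String clientId;
        public final Map<String, String> config;

        public CachedClient(String clientId, Map<String, String> config) {
            this.clientId = clientId;
            this.config = config;
        }
    }
}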
It depends on what and how much of the db + cache we're talking about. If it's just user sessions, then that can probably be handled with distributed sessions.
Realm info doesn't hit the db much, but the user store will be hit.
Hmm... I didn't think of the user store hit. Something like LDAP would be hit by the auth server and each proxy for each login session. That's a downer... If a proxy could proxy all apps, then maybe there is a way to maintain sticky sessions between the auth server and the proxy so they share the same node/cache for the same session. Still a huge negative though, as things just got a lot more complex.
Maybe we could just hook the proxy up to the realm store and realm cache? I prefer this idea, as then proxy setup isn't much different from auth server setup. And all the configuration sync logic is already in place, as the proxy would receive realm invalidation events.
Bill