[keycloak-dev] Questions about Standalone HA

Tonnis Wildeboer tonnis at autonomic.ai
Fri Sep 1 18:53:19 EDT 2017


Thank you John. Super helpful. Following up on your responses:

If I choose to use a simple standalone (as opposed to a standalone-ha) 
configuration for all the nodes in AWS (whether in the same VPC or not) 
/and/ have session affinity configured in the load balancer, am I giving 
up a lot? I wonder because in this case none of the nodes are relying on 
a shared cache but the traffic from any given client is always routed to 
the same instance (as long as it is alive). I realize that depending on 
the affinity algorithm, there is the potential for uneven load 
balancing, and also, if any node goes down, its clients will lose their 
sessions. Just trying to form a correct mental model and understand the 
trade-offs.
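
To make that concrete for myself, I compared the two shipped config files.
If I am reading them correctly, the difference comes down to the "keycloak"
cache-container in the infinispan subsystem using local caches in
standalone.xml versus distributed caches in standalone-ha.xml, roughly like
this (abridged, and details may vary by version):

    <!-- standalone.xml: every node keeps its own session caches -->
    <cache-container name="keycloak" jndi-name="infinispan/Keycloak">
        <local-cache name="sessions"/>
        <local-cache name="offlineSessions"/>
        <local-cache name="loginFailures"/>
    </cache-container>

    <!-- standalone-ha.xml: session caches are spread across the cluster -->
    <cache-container name="keycloak" jndi-name="infinispan/Keycloak">
        <distributed-cache name="sessions" owners="1"/>
        <distributed-cache name="offlineSessions" owners="1"/>
        <distributed-cache name="loginFailures" owners="1"/>
    </cache-container>

Interestingly, the HA default appears to be owners="1", so even in clustered
mode a dead node's sessions seem to be lost unless that value is raised.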

Meanwhile, I will look into JDBC_PING in jgroups...
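
From a first look, it seems the change would be to swap the discovery
protocol in the jgroups subsystem of standalone-ha.xml for JDBC_PING and
point it at the datasource Keycloak already uses. Something like this,
perhaps (untested sketch; the KeycloakDS JNDI name and the property name
are my reading of the image config and the JGroups docs):

    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <!-- JDBC_PING replaces multicast discovery (MPING) by writing each
             member's address into a table in the shared database -->
        <protocol type="JDBC_PING">
            <property name="datasource_jndi_name">java:jboss/datasources/KeycloakDS</property>
        </protocol>
        <!-- remaining protocols as in the default tcp stack -->
    </stack>

plus pointing the default jgroups channel at the "tcp" stack instead of
"udp".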

Thanks again,

--Tonnis

On 09/01/2017 01:38 PM, John D. Ament wrote:
> Tonnis,
>
> Some comments in line.
>
> On Fri, Sep 1, 2017 at 4:10 PM Tonnis Wildeboer <tonnis at autonomic.ai 
> <mailto:tonnis at autonomic.ai>> wrote:
>
>     Hello,
>
>     I posted similar questions on keycloak-user, but it doesn't seem to be
>     the right audience. Please redirect me if these are not appropriate
>     for this group.
>
>     *Background:*
>
>     I am running Keycloak in a kubernetes cluster with a shared postgres
>     (RDS) db. Everything is hosted on AWS. The Keycloak instances are
>     deployed using Helm.
>
>     I have read the clustering and caching documentation and from that it
>     seems that the appropriate clustering mode in this scenario would be
>     "Standalone Clustered Mode". Therefore, I am using the
>     "jboss/keycloak-ha-postgres" Docker image. Since I am using the nginx
>     Ingress controller I have set the prescribed PROXY_ADDRESS_FORWARDING=true
>     environment variable. Upon inspection of the Docker image, however, I
>     noticed that the
>     $JBOSS_HOME/standalone/configuration/standalone-ha.xml
>     file in that image does not have the
>     proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}"
>     attribute in
>     the <http-listener ...> element. I also noticed that the
>     jboss-dockerfiles/keycloak-server base image has a sed command to add
>     this to the standalone.xml file but not to the standalone-ha.xml file.
>
>     (Also, for the benefit of others interested in this configuration, I
>     have configured session affinity in the Ingress to avoid the default
>     round-robin routing, which causes infinite redirects to Keycloak,
>     bouncing between instances.)
>
>
> This is probably your first sign of an issue.  This indicates that 
> your nodes aren't talking to one another; session affinity is not 
> actually required (I have an AWS deployment today that works fine).
>
>     In my Google searches I have not found examples of deploying Keycloak
>     this way, which is surprising. I have
>     seen examples with a single instance using the standalone postgres
>     image, but not standalone-ha ("Standalone Clustered").
>
>     *So here are my questions:*
>
>      1. Why doesn't the base jboss-dockerfiles/keycloak-server image also
>         modify the standalone-ha.xml file, in the same way it modifies
>         the standalone.xml file:
>        
>     (https://github.com/jboss-dockerfiles/keycloak/blob/0a54ccaccd5e27e75105b904708ac4ccd80df5c5/server/Dockerfile#L23-L25)?
>
>
> Probably because the docker use case has only considered simple 
> development.  For what it's worth, I don't deploy any prebuilt docker 
> containers; everything I deploy is a customized image in some way.
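
(For anyone else who hits this: as far as I can tell, the sed in the base
image just adds the proxy-address-forwarding attribute to the
<http-listener> element, so the equivalent manual edit to standalone-ha.xml
in a custom image should look roughly like this, with the other attributes
left as they are:

    <http-listener name="default" socket-binding="http"
                   proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}"/>

That is what I plan to do in my own image for now.)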
>
>      2. I assume the nodes discover one another (for the sake of the shared
>         caches) through multicast, as long as they are on the same subnet.
>         Correct?
>
>
> Normally, yes.  However, in AWS multicast doesn't work (even when in 
> the same VPC/subnet).  To work around this, you can leverage JDBC_PING 
> in jgroups. https://developer.jboss.org/wiki/JDBCPING
>
>      3. What should I be looking for in the logs to see whether the peer
>         node discovery is happening? What logging level do I need to see
>         this?
>
>
> You'll see log messages from jgroups indicating that the cluster is 
> alive and a peer was discovered.  I turned the logging level up very high 
> to get more information.
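
(I assume "turned the logging level up" means raising the level for the
org.jgroups category in the logging subsystem, along these lines, with the
category name being my guess:

    <logger category="org.jgroups">
        <level name="DEBUG"/>
    </logger>

I will start with DEBUG and move to TRACE if that is not enough.)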
>
>      4. If the nodes (pods) are in different AWS VPCs and there is no
>         explicit routing set up between them, such that they cannot discover
>         one another, but they do share the same postgres instance, is there
>         any harm in this? I would assume that each node would take
>         responsibility for being the cache owner, and that is not a problem.
>
>
> If your cache isn't replicating, users will lose sessions frequently.
>
>      5. Is there any other documentation, etc that I should be looking at?
>
>
> The link I put above, and probably the existing clustering docs, which 
> you may have already reviewed.
>
>
>     Thank you,
>
>     --Tonnis
>     _______________________________________________
>     keycloak-dev mailing list
>     keycloak-dev at lists.jboss.org <mailto:keycloak-dev at lists.jboss.org>
>     https://lists.jboss.org/mailman/listinfo/keycloak-dev
>


