[keycloak-dev] Questions about Standalone HA
Tonnis Wildeboer
tonnis at autonomic.ai
Fri Sep 1 14:27:03 EDT 2017
Hello,
I posted similar questions on keycloak-user, but it doesn't seem to be
the right audience. Please redirect me if these are not appropriate for
this group.
*Background:*
I am running Keycloak in a Kubernetes cluster with a shared Postgres
(RDS) database. Everything is hosted on AWS. The Keycloak instances are
deployed using Helm.
I have read the clustering and caching documentation, and it seems the
appropriate clustering mode for this scenario is "Standalone Clustered
Mode". Therefore, I am using the "jboss/keycloak-ha-postgres" Docker
image. Since I am using the nginx Ingress controller, I have set the
prescribed PROXY_ADDRESS_FORWARDING=true
environment variable. Upon inspection of the Docker image, however, I
noticed that the $JBOSS_HOME/standalone/configuration/standalone-ha.xml
file in that image does not have the
proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}" attribute in
the <http-listener ...> element. I also noticed that the
jboss-dockerfiles/keycloak-server base image has a sed command that
adds this attribute to standalone.xml, but not to standalone-ha.xml
(see the sketch below).
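For reference, after the base image's sed runs, the <http-listener>
element in standalone.xml ends up looking roughly like this (a sketch
based on my reading of the Dockerfile, not the verbatim file contents):

    <server name="default-server">
        <http-listener name="default" socket-binding="http"
            proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}"
            redirect-socket="https"/>
        ...
    </server>

In the meantime, a derived image can apply the same edit to
standalone-ha.xml. Something along these lines should do it, though the
sed pattern is my own and not taken from the upstream Dockerfile, so
please double-check it against your image version:

    FROM jboss/keycloak-ha-postgres
    # Insert the proxy-address-forwarding attribute right after the
    # opening attributes of the default http-listener.
    RUN sed -i -e 's/<http-listener name="default" socket-binding="http"/&\n            proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}"/' \
        $JBOSS_HOME/standalone/configuration/standalone-ha.xml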
(Also, for the benefit of others interested in this configuration, I
have configured session affinity in the Ingress to avoid the default
round-robin routing, which otherwise causes an infinite redirect loop
as requests bounce between Keycloak instances. A sample Ingress
follows.)
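Concretely, the affinity setup I mean looks something like the
following (the annotation names vary between nginx-ingress versions,
and the host and service names here are placeholders, so treat this as
a sketch):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: keycloak
      annotations:
        kubernetes.io/ingress.class: "nginx"
        # Cookie-based session affinity instead of round-robin.
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
        nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    spec:
      rules:
        - host: keycloak.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: keycloak
                  servicePort: 8080

With the cookie in place, each client sticks to one Keycloak pod and
the redirect loop goes away.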
Surprisingly, my Google searches have not turned up examples of
deploying Keycloak this way. I have seen examples of a single instance
using the standalone postgres image, but none using standalone-ha
("Standalone Clustered").
*So here are my questions:*
1. Why doesn't the base jboss-dockerfiles/keycloak-server image modify
the standalone-ha.xml file in the same way it modifies the
standalone.xml file
(https://github.com/jboss-dockerfiles/keycloak/blob/0a54ccaccd5e27e75105b904708ac4ccd80df5c5/server/Dockerfile#L23-L25)?
2. I assume the discovery of the nodes by one another (for the sake of
the shared caches) is accomplished through multicast, as long as they
are on the same subnet. Correct? (A sketch of the relevant jgroups
config follows this list.)
3. What should I look for in the logs to see whether peer node
discovery is happening, and what logging level do I need to see it?
4. If the nodes (pods) are in different AWS VPCs with no explicit
routing between them, so that they cannot discover one another, but
they share the same Postgres instance, is there any harm in this? I
would assume that each node would take ownership of its own caches,
and that this is not a problem.
5. Is there any other documentation, etc., that I should be looking at?
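For context on question 2: the jgroups subsystem in the image's
standalone-ha.xml appears to default to the UDP (multicast) stack,
roughly like this (abridged, and quoted from memory of the WildFly
defaults rather than the exact file, so treat it as a sketch):

    <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
        <channels default="ee">
            <channel name="ee" stack="udp"/>
        </channels>
        <stacks>
            <stack name="udp">
                <!-- UDP transport; discovery happens via multicast -->
                <transport type="UDP" socket-binding="jgroups-udp"/>
                <protocol type="PING"/>
                <!-- failure detection, flow control, etc. -->
            </stack>
            <stack name="tcp">
                <transport type="TCP" socket-binding="jgroups-tcp"/>
                <!-- MPING still uses multicast for discovery -->
                <protocol type="MPING" socket-binding="jgroups-mping"/>
                <!-- ... -->
            </stack>
        </stacks>
    </subsystem>

So my multicast assumption is based on the "udp" stack being the
default channel's stack; I would appreciate confirmation.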
Thank you,
--Tonnis