I, and others, are having problems using this in the real world because of the 'identity' of Keycloak.

I'm running Keycloak in a Docker (Rancher) container. Alongside it are my backend containers holding
the internal components of the application. On top of the application is an nginx container serving
an AngularJS application and proxying Angular's service calls to the backend container.

The problem comes when I place an external load balancer/SSL layer in front of the application. The
user is now contacting the application on its external hostname in our DMZ, so authentication has
to be performed against Keycloak on a DMZ IP/URL. Easy enough to arrange: just use nginx again
as a proxy for Keycloak. This all works for the frontend and the user can log in.
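For reference, this is roughly what that DMZ proxy looks like in my setup; a minimal sketch, with hypothetical hostnames and ports, not my exact config. Keycloak needs the forwarded headers (and proxy address forwarding enabled on its side) to generate external-facing URLs:

```nginx
# Sketch of the DMZ nginx fronting the Keycloak container.
# auth.example.com and keycloak:8080 are placeholders.
server {
    listen 443 ssl;
    server_name auth.example.com;

    location /auth/ {
        proxy_pass http://keycloak:8080/auth/;
        # Pass the original host and scheme through so Keycloak
        # sees the external identity, not the container's.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```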

The problem occurs when the backend service containers try to validate the user token. They
cannot do this directly against Keycloak inside the Docker ecosystem. All I get in that case is an
error along the lines of "this token was issued by <external hostname:port> and you are presenting
it to <internal hostname:port>" (can't remember the exact wording).
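As I understand it, this is just the `iss` claim baked into the token at login not matching the Keycloak URL the backend is configured with. A minimal stdlib-only sketch (hostnames are hypothetical) that decodes a JWT payload and shows the comparison that fails:

```python
import base64
import json


def jwt_issuer(token: str) -> str:
    """Return the `iss` claim from a JWT payload (no signature check)."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["iss"]


def make_token(iss: str) -> str:
    """Build a minimal unsigned JWT-shaped token for illustration only."""
    def enc(d):
        return base64.urlsafe_b64encode(json.dumps(d).encode()).decode().rstrip("=")
    return f'{enc({"alg": "none"})}.{enc({"iss": iss})}.'


# Token issued while the user authenticated via the external hostname...
token = make_token("https://auth.example.com/auth/realms/demo")
# ...validated by a backend configured with the internal hostname.
backend_issuer = "http://keycloak:8080/auth/realms/demo"
print(jwt_issuer(token) == backend_issuer)  # -> False: issuer mismatch
```

The adapter rejects the token on exactly this string comparison, which is why it works as soon as the backend uses the same URL the frontend logged in through.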

I can get this to work by making my backend containers authenticate against <external hostname>,
but that sends traffic out of the Docker LAN and back in again, which is not the most efficient
way to do things.

Would this be a good use case for Keycloak aliases? Then I could present a token issued by
<external URL> to <internal URL> and Keycloak would understand that it was actually issued by
itself under a different identity. Better still, I could proxy Keycloak within the URL of the
front-end application, which would place the whole application (website, services and
authentication) under one hostname.
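That single-hostname idea would look something like this on the front-end nginx; again only a sketch with made-up hostnames and paths, but it would mean every token's issuer matches everywhere because there is only one URL in play:

```nginx
# Sketch: SPA, backend API and Keycloak all under one external hostname.
server {
    listen 443 ssl;
    server_name app.example.com;

    location / {
        root /usr/share/nginx/html;        # AngularJS frontend
    }
    location /api/ {
        proxy_pass http://backend:8080/;   # internal service container
    }
    location /auth/ {
        proxy_pass http://keycloak:8080/auth/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```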

Kevin Thorpe
CTO