hostname-admin broken with docker and reverse proxy
Describe the bug
The hostname-admin options should make it possible to define a second hostname for a Keycloak instance through which we can connect to the admin console. Unfortunately, after trying many combinations of Keycloak settings (such as KC_HOSTNAME_ADMIN, KC_HOSTNAME_ADMIN_URL, …), I’m still unable to connect to the admin console through the admin hostname I defined.
Note that Keycloak is running in a Docker container and sits behind two different reverse proxies:
- one for the public access
- one for the management access (admin console only)
Please, see below about how to reproduce.
Version
19.0.2
Expected behavior
Given the following keycloak configuration:
KC_HTTP_RELATIVE_PATH: "/auth"
KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
KC_PROXY: "edge"
When I go to the url: https://host2.local:7443/auth/admin/master/console/ I should be able to authenticate and access the admin console WITHOUT being redirected to https://host1.local:8443/auth/
Actual behavior
Given the following keycloak configuration:
KC_HTTP_RELATIVE_PATH: "/auth"
KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
KC_PROXY: "edge"
When I go to the url: https://host2.local:7443/auth/admin/master/console/ I am redirected to the url https://host1.local:8443/auth/ in order to authenticate.
Even worse, if a frontend URL is defined for the master realm (the realm used to access the admin console) with the value https://host2.local:7884/auth, I am redirected to https://host2.local:8443/auth (good host, wrong port!).
We can see it in the loaded iframe (see the authServerUrl variable):
<script id="environment" type="application/json">
{
"loginRealm": "master",
"authServerUrl": "https://host2.local:8443/auth",
"authUrl": "https://host2.local:7443/auth",
"consoleBaseUrl": "/auth/admin/master/console/",
"resourceUrl": "/auth/resources/up520/admin/keycloak.v2",
"masterRealm": "master",
"resourceVersion": "up520",
"commitHash": "dd67b3b3a4e80031d32fdf0ffd9e9d450a657d07",
"isRunningAsTheme": true
}
</script>
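As a side note, the mismatch can be checked without a browser by pulling the value out of the page source. This is a minimal sketch: the file name console.html is illustrative, and the snippet is inlined here so the example is self-contained (in the real setup you would first save the page, e.g. with curl -sk against the admin-console URL):

```shell
# Inline a copy of the environment snippet (normally console.html would come
# from fetching https://host2.local:7443/auth/admin/master/console/ with curl)
cat > console.html <<'EOF'
<script id="environment" type="application/json">
{
"loginRealm": "master",
"authServerUrl": "https://host2.local:8443/auth",
"authUrl": "https://host2.local:7443/auth"
}
</script>
EOF
# Extract authServerUrl: this is the base URL the admin console will call
sed -n 's/.*"authServerUrl": *"\([^"]*\)".*/\1/p' console.html
```

The printed URL carries the public port 8443, which is what makes the admin console unusable behind the second proxy.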
How to Reproduce?
I wrote a docker-compose.yml file so that the error can be reproduced and the right combination of Keycloak settings for the admin URL can be worked out.
See the docker-compose below:
version: '3.3'
services:
  db-master:
    image: postgres:14.5
    ports:
      - 5432:5432/tcp
    environment:
      - POSTGRES_PASSWORD=password
    volumes:
      - pg14-master-volume:/var/lib/postgresql/data
    networks:
      - pg_pgdb
  keycloak:
    image: quay.io/keycloak/keycloak:19.0.2
    hostname: keycloak
    labels:
      - "traefik.http.routers.keycloak-front.rule=PathPrefix(`/auth`)"
      - "traefik.http.routers.keycloak-front.entrypoints=web"
      - "traefik.http.routers.keycloak-front.tls=true"
      - "traefik.http.routers.keycloak-front.service=keycloak-back"
      - "traefik.http.services.keycloak-back.loadbalancer.server.scheme=http"
      - "traefik.http.services.keycloak-back.loadbalancer.server.port=8080"
      - "traefik.http.services.keycloak-back.loadbalancer.passhostheader=true"
      - "traefik.enable=true"
    ports:
      - "9191:8080/tcp"
      - "8787:8787/tcp"
    command:
      - start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
      KC_DB: postgres
      KC_DB_USERNAME: ukeycloak
      KC_DB_PASSWORD: password
      KC_DB_URL: jdbc:postgresql://db-master:5432/keycloak
      KC_LOG_LEVEL: INFO
      KC_HOSTNAME_STRICT: "false"
      # KC_HOSTNAME: "host1.local"
      # KC_HOSTNAME_PORT: "8443"
      KC_HTTP_RELATIVE_PATH: "/auth"
      KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
      KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
      KC_PROXY: "edge"
    networks:
      - privatezone
      - pg_pgdb
  traefikpub:
    image: traefik:2.8.5
    ports:
      - "8443:8443/tcp"
      - "8080:8080/tcp"
    networks:
      - privatezone
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:8443"
      - "--entryPoints.web.forwardedHeaders.insecure=true"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--providers.docker.network=localdocker_privatezone"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  traefikpriv:
    image: traefik:2.8.5
    ports:
      - "7443:7443/tcp"
      - "7080:8080/tcp"
    networks:
      - privatezone
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:7443"
      - "--entryPoints.web.forwardedHeaders.insecure=true"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--providers.docker.network=localdocker_privatezone"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
  publiczone:
    driver: bridge
  privatezone:
    driver: bridge
  pg_pgdb:
    driver: bridge
volumes:
  pg14-master-volume:
    driver: local
This docker-compose file is made of:
- one PostgreSQL instance (needed by Keycloak)
- one Keycloak instance, version 19.0.2
- a reverse proxy, traefikpub, that listens on port 8443 with HTTPS (self-signed certificate)
- a reverse proxy, traefikpriv, that listens on port 7443 with HTTPS (self-signed certificate)
For this test to work, you still need to define two hostnames in your /etc/hosts as follows:
127.0.0.1 localhost host1.local host2.local
To start all the components defined in this file, run:
docker compose up -d
For Keycloak to start, you will have to define a DB user and a DB schema. Run the following commands to create both; be sure to enter password each time the prompt asks for one:
createuser -c 20 -D -E -l -S -R -i -h localhost -p 5432 -U postgres -W ukeycloak -P
createdb -h localhost -p 5432 -U postgres -W -O ukeycloak -E utf-8 keycloak
Once the DB user is created, restart the compose stack to make sure Keycloak starts without issues:
docker compose down
docker compose up -d
At this point, you should be able to reach https://host1.local:8443/auth for the public part and https://host2.local:7443/auth for the internal admin part.
With these settings, the public part works as expected. To try it, assuming you have a realm called public, go to https://host1.local:8443/auth/realms/public/account and authenticate with a user defined on the public realm.
Now, for the admin part, to reproduce the issue, please follow these steps:
Case 1:
- ensure no frontend URL is defined for the master realm
- go to https://host2.local:7443/auth
- click on the administration console link
Expected: we should remain on https://host2.local:7443 in order to authenticate.
Actual: we are redirected to https://host1.local:8443/auth. If we authenticate there with a user defined in the master realm, we are then correctly redirected back to the admin console at https://host2.local:7443/auth/.
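The redirect in Case 1 can also be observed from the command line. The sketch below simulates Keycloak’s answer with a throwaway local HTTP server so it runs anywhere; in the real setup you would instead request https://host2.local:7443/auth/admin/master/console/ (accepting the self-signed certificate) and inspect the Location header:

```shell
python3 - <<'EOF' > redirect.txt
# Simulated reproduction of Case 1: a stand-in server answers the
# admin-console path with the redirect observed in the real setup.
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # mimic Keycloak redirecting the admin console to the public hostname
        self.send_response(302)
        self.send_header('Location', 'https://host1.local:8443/auth/')
        self.end_headers()
    def log_message(self, *args):  # silence request logging
        pass

srv = HTTPServer(('127.0.0.1', 0), Handler)  # port 0 = any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

# http.client does not follow redirects, so the 302 is visible directly
conn = http.client.HTTPConnection('127.0.0.1', srv.server_port)
conn.request('GET', '/auth/admin/master/console/')
resp = conn.getresponse()
print('HTTP', resp.status, '->', resp.getheader('Location'))
srv.shutdown()
EOF
cat redirect.txt
```

The 302 with a Location header pointing at host1.local:8443 is exactly what the browser follows when clicking the administration console link.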
Now if we change the configuration of the master realm to add a front-end url, here are the steps to reproduce the bug:
Case 2:
- ensure the master realm has a frontend URL defined with the value https://host2.local:7443/auth
- go to https://host2.local:7443/auth
- click on the administration console link
Expected: the login page should be displayed on https://host2.local:7443/auth.
Actual: the login page hangs. If we look at the source code of the login page, we can see:
<script id="environment" type="application/json">
{
"loginRealm": "master",
"authServerUrl": "https://host2.local:8443/auth",
"authUrl": "https://host2.local:7443/auth",
"consoleBaseUrl": "/auth/admin/master/console/",
"resourceUrl": "/auth/resources/up520/admin/keycloak.v2",
"masterRealm": "master",
"resourceVersion": "up520",
"commitHash": "dd67b3b3a4e80031d32fdf0ffd9e9d450a657d07",
"isRunningAsTheme": true
}
</script>
The login page hangs because the URL defined in authServerUrl is plainly wrong: port 8443 belongs to the public part, while the host host2.local belongs to the admin part. Because the host and port combination does not match the page’s origin, there is a cookie mismatch and the login page hangs.
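To make the mismatch explicit, one can compare the origins (scheme, host, port) of the two URLs; the values below are copied from the environment JSON shown above, and the origin helper is just an illustrative one-liner:

```shell
# The two URLs from the environment JSON: they must share an origin for the
# console's cookie/session check to succeed, but here they differ on the port.
authServerUrl="https://host2.local:8443/auth"   # wrong: public port on the admin host
authUrl="https://host2.local:7443/auth"         # the URL the page was actually served from

# strip everything after scheme://host:port to get the origin
origin() { printf '%s\n' "$1" | sed 's#\(https\{0,1\}://[^/]*\).*#\1#'; }

echo "authServerUrl origin: $(origin "$authServerUrl")"
echo "authUrl origin:       $(origin "$authUrl")"
```

The two printed origins differ only on the port (8443 vs 7443), which is precisely the mismatch that leaves the login page hanging.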
Anything else?
No response
Issue Analytics
- Created: a year ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
Thank you very much @stianst for your quick and very clear answer.
Now I understand that redirecting to the public facing login page in order to authenticate with the admin account to the admin console is the intended behavior when using KC_HOSTNAME and KC_HOSTNAME_ADMIN settings.
Now, I must admit that I’m a little bit confused by this behavior.
By changing the Keycloak configuration and removing both KC_HOSTNAME_URL and KC_HOSTNAME_ADMIN_URL, I was able to get the desired behavior:
Now I can authenticate to public realms using the public domain, and I can fully authenticate to the admin console with the internal-only URL (without being redirected to the public URL), so the admin password never transits through the public proxy/URLs. Furthermore, I can configure the public proxy to respect the exposed-path recommendations and thus should be able to have a secure deployment for my Keycloak instance. Can you confirm this approach?
Unless I’m missing something, I could not really recommend KC_HOSTNAME_URL and KC_HOSTNAME_ADMIN_URL as production-ready settings… I wonder what the use cases for these settings could be, if not security?
I’m still not convinced we really need to have different login URLs for the admin console/endpoint.
Let’s summarise how it works today and what you can do today:
- the public login/OIDC endpoints are exposed at https://public-domain.org/kc-oidc
- the admin console/endpoints are exposed at https://internal-url/kc-admin

When an admin logs in to https://internal-url/kc-admin, the admin is redirected to https://public-domain.org/kc-oidc for login. An external attacker is in theory able to get a token for the admin console/endpoints, but won’t be able to access them, since they are not exposed externally.

Let’s say, for the sake of argument, that https://internal-url/kc-admin used the internal URL to log in and get tokens. This doesn’t add any additional security at all: in either case, the attacker would have to be able to access https://internal-url/kc-admin to invoke the admin endpoints. There are better and more elegant ways to add security around the admin endpoints than using different login URLs for the admin endpoints than for other applications.

Although I’m not completely against the idea of being able to configure a different “login URL” for the admin endpoints, that would be an enhancement/feature request rather than a bug, for sure.