VIRTUAL_PORT is not filled in default.conf when container uses host networking
Description
I have a container with this config:
version: '3.6'
services:
  jellyfin:
    image: jellyfin
    network_mode: 'host'
    volumes:
      - ./config:/config
      - ./cache:/cache
      - ./netdisk:/media
    restart: always
    environment:
      - VIRTUAL_HOST=abc.example.com
      - VIRTUAL_IP=127.0.0.1
      - VIRTUAL_PORT=8096
      - SSL_POLICY=Mozilla-Modern
      - HTTPS_METHOD=noredirect
    privileged: true
Because it uses host networking, I set VIRTUAL_IP to tell nginx-proxy the exact upstream address.
However, the service container cannot be reached via https://abc.example.com; requests return 502 Bad Gateway.
The nginx-proxy log shows:
proxy_1 | nginx.1 | abc.example.com 10.246.8.238 - - [01/May/2019:03:07:25 +0000] "GET /favicon.ico HTTP/2.0" 502 575 "https://abc.example.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36"
proxy_1 | nginx.1 | 2019/05/01 03:07:25 [error] 2627#2627: *15370 no live upstreams while connecting to upstream, client: 10.246.8.238, server: abc.example.com, request: "GET / HTTP/2.0", upstream: "http://abc.example.com/", host: "abc.example.com"
I suspected something was wrong with the generated default.conf, since "no live upstreams" means nginx considers every server in the upstream block unavailable. So I went inside the nginx-proxy container, and the relevant section of /etc/nginx/conf.d/default.conf is:
upstream abc.example.com {
    ## Can be connected with "host" network
    # jellyfindoc_jellyfin_1
    server 127.0.0.1 down;
}
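To double-check this, the generated file and the backend can be probed directly; the container name proxy_1 and the port 8096 below are assumptions taken from the log prefix and the compose file above:
# Because both containers use host networking, the upstream can be probed
# straight from the host (curl assumed to be available on the host):
curl -I http://127.0.0.1:8096/

# Inspect what nginx-proxy actually generated (container name is an assumption):
docker exec proxy_1 cat /etc/nginx/conf.d/default.conf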
The upstream server line does not include the VIRTUAL_PORT value I provided.
I changed it to server 127.0.0.1:8096 down; and ran nginx -s reload, but I still could not reach the service via https://abc.example.com.
Then I changed it to server 127.0.0.1:8096; and ran nginx -s reload again, and the issue was solved: my service container is now publicly reachable via https://abc.example.com.
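For reference, the same manual fix can be applied non-interactively; the container name and the exact server line are assumptions, and nginx-proxy regenerates default.conf on container start/stop events, so this only confirms the diagnosis rather than fixing anything permanently:
# Temporary, by-hand patch of the generated config (proxy_1 is an assumed name):
docker exec proxy_1 sed -i \
    's/server 127.0.0.1 down;/server 127.0.0.1:8096;/' \
    /etc/nginx/conf.d/default.conf
docker exec proxy_1 nginx -s reload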
So this is an issue with upstream auto-generation.
The expected behavior is to produce a config similar to:
upstream VIRTUAL_HOST {
    ## Can be connected with "host" network
    # jellyfindoc_jellyfin_1
    server VIRTUAL_IP:VIRTUAL_PORT;
}
when the user’s container uses host networking.
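Until the template handles this case, a possible interim workaround (only if host networking is not strictly required for the backend, e.g. for DLNA discovery) is to run the service on a normal bridge network instead, so nginx-proxy can discover a concrete container IP and use VIRTUAL_PORT; on a single Linux host the host-networked proxy can usually still reach the bridge IPs. A sketch, not a tested configuration:
version: '3.6'
services:
  jellyfin:
    image: jellyfin
    # default bridge networking instead of network_mode: 'host'
    volumes:
      - ./config:/config
      - ./cache:/cache
      - ./netdisk:/media
    restart: always
    environment:
      - VIRTUAL_HOST=abc.example.com
      - VIRTUAL_PORT=8096
      - SSL_POLICY=Mozilla-Modern
      - HTTPS_METHOD=noredirect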
Background Info
Configuration of the nginx-proxy service:
version: '3.6'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    network_mode: host
    volumes:
      - ./custom.conf:/etc/nginx/conf.d/custom.conf
      - ./ssl/certs:/etc/nginx/certs/:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
Content of custom.conf:
client_max_body_size 0;
proxy_request_buffering off;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
Comments
Thank you very much for this. Exactly what I needed. Have you thought about adding the image to Docker Hub?
EDIT: I've forked your repo so that I could link it to Docker Hub and set up automatic image building 😃 https://hub.docker.com/r/freekers/nginx-proxy Thanks!
That's the problem: your PR offers a solution for only a subset of the possible cases, whereas a solution for the superset (grabbing the right internal IP for each instance) would solve both the host-networking problem and the others. That's why I don't think the PR is a great solution.
That said, it's not that important to me anymore, as I've found another solution (Traefik) that seems to deal with these problems more effectively.