• 23-Jan-2023
Lightrun Team

503 connect() failed (111: Connection refused) while connecting to upstream in nginx-proxy nginx-proxy


Explanation of the problem

I set up a Flask application behind jwilder/nginx-proxy with basic authentication and received a successful response code. However, when I attempted to serve the same application with Gunicorn behind jwilder/nginx-proxy, I encountered an error. The message logged is: [error] 52#52: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.20.0.1, server: my.local, request: "GET / HTTP/1.1", upstream: "http://172.20.0.2:8000/", host: "my.local".

I am utilizing Docker Compose with the following configuration:

version: '3.7'
services:
  web:
    build: ./myapp
    restart: always
    networks:
      - proxynet
    environment:
      - VIRTUAL_HOST=my.local
      - VIRTUAL_PORT=5000
    expose:
      - 5000
  nginx-proxy:
    container_name: nginx-proxy
    restart: always
    image: jwilder/nginx-proxy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx-proxy/secrets/htpasswd:/etc/nginx/htpasswd
      - ./nginx-proxy/conf.d:/etc/nginx/conf.d
      - ./nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - ./nginx-proxy/html:/usr/share/nginx/html
      - ./nginx-proxy/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks: 
      - proxynet
networks:
  proxynet:
    external: true

I am using the following Dockerfile:

FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

My app.py file contains the following code:

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello world!'

if __name__ == '__main__':
    app.run(host='0.0.0.0')

When I attempt to use Gunicorn, I receive a 503/502 error. My Docker Compose file and Dockerfile for this configuration are as follows:

version: '3.7'
services:
  app:
    build: 
      context: .
      dockerfile: ./app/Dockerfile
    environment: 
      - VIRTUAL_HOST=my.local
      - VIRTUAL_PORT=8000
    expose:
      - 8000
    networks: 
      - proxynet
  nginx-proxy:
    container_name: nginx-proxy
    restart: always
    image: jwilder/nginx-proxy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx-proxy/secrets/htpasswd:/etc/nginx/htpass

Troubleshooting with the Lightrun Developer Observability Platform

Getting a sense of what’s actually happening inside a live application is a frustrating experience, one that relies mostly on querying and observing whatever logs were written during development.
Lightrun is a Developer Observability Platform, allowing developers to add telemetry to live applications in real-time, on-demand, and right from the IDE.

  • Instantly add logs to, set metrics in, and take snapshots of live applications
  • Insights delivered straight to your IDE or CLI
  • Works where you do: dev, QA, staging, CI/CD, and production

Start for free today

Problem solution for 503 connect() failed (111: Connection refused) while connecting to upstream in nginx-proxy nginx-proxy

This error message indicates a connection issue between nginx-proxy and its upstream service. The "503" status means nginx could not obtain a response from the upstream on behalf of the client, and "connect() failed (111: Connection refused)" means the TCP connection attempt was actively refused — typically because nothing is listening on the target IP and port. This can occur for several reasons, such as a misconfigured service, a problem with the network connection, or the upstream process itself failing to start.

To troubleshoot this issue, you may want to check the following:

  • Verify that the upstream (Gunicorn) container is running and listening on the IP and port that nginx-proxy reports in the error message.
  • Check the logs of both containers for error messages or clues to the cause of the problem.
  • Review the configuration generated by nginx-proxy and ensure that VIRTUAL_PORT matches the port the application actually listens on.
  • Ensure that Gunicorn binds to 0.0.0.0 rather than the default 127.0.0.1, so its socket is reachable from other containers.
  • Check the network settings to see whether the connection is being blocked by a firewall.
  • Check the service dependencies to see whether one of them is causing the error.

It is also important to check that the versions of the software used are compatible with each other.
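A frequent cause of this specific failure is that Gunicorn binds to 127.0.0.1 by default, which is unreachable from the nginx-proxy container even when both share a network. As a minimal sketch (assuming the Flask module is named app.py as in the example above, and reusing the base image from the original Dockerfile), the Dockerfile for the Gunicorn service could bind explicitly to all interfaces on the advertised port:

```dockerfile
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
# Bind to 0.0.0.0:8000 so the socket is reachable from other containers
# on the proxynet network; 8000 must match the VIRTUAL_PORT environment
# variable declared for this service in docker-compose.yml.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

With this binding in place, nginx-proxy's generated upstream address (e.g. 172.20.0.2:8000) points at a socket that actually accepts connections.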

Other popular problems with nginx-proxy

Problem: Incorrect or missing host header

One of the most common problems with nginx-proxy is that it may not properly set the host header for proxied requests. This can cause issues with certain applications that rely on the host header for routing or authentication.

Solution:

The solution to this problem is to ensure that the host header is properly set in the nginx configuration for the proxy. The following code block shows an example of how to set the host header in the nginx configuration:

proxy_set_header Host $host;
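For context, this directive normally sits inside the proxied location block; a minimal sketch (the upstream name container_name is a placeholder):

```nginx
location / {
    proxy_pass http://container_name;
    # Forward the original Host header so the upstream application
    # sees the hostname the client actually requested
    proxy_set_header Host $host;
}
```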

Problem: Connection reset by peer

Another common problem with nginx-proxy is that it may close the connection to the proxied server unexpectedly, resulting in a “connection reset by peer” error. This can be caused by various factors such as a misconfigured timeout or a network issue.

Solution:

To fix this problem, you can try increasing the timeout values in the nginx configuration or troubleshoot the network issue. The following code block shows an example of how to increase the timeout values in the nginx configuration:

proxy_connect_timeout   600;
proxy_send_timeout      600;
proxy_read_timeout      600;
send_timeout            600;

Problem: SSL certificate issues

Nginx-proxy may also have issues with SSL certificates, such as the certificate not being recognized or the certificate chain not being complete.

Solution:

To fix this problem, you will need to ensure that you have the correct SSL certificate installed on your server and that it is properly configured in the nginx configuration. The following code block shows an example of how to configure SSL in the nginx configuration:

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols       TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ...
}

A brief introduction to nginx-proxy

nginx-proxy is a tool that allows for easy configuration of a reverse proxy using nginx. It is typically used in conjunction with Docker, and it automatically sets up nginx proxy configurations for containers running on the same host. This allows for easy and dynamic routing of incoming requests to the appropriate container, and it eliminates the need for manual configuration of nginx for each individual container.

nginx-proxy listens on the host's network interfaces and automatically configures nginx to proxy requests to the appropriate container based on the hostname and/or IP address. It uses the Docker API to discover running containers and reconfigures nginx as containers are started, stopped, or moved. It also supports virtual-host and IP-based routing, as well as SSL termination, making it a powerful tool for managing a large number of containers.
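As an illustration, the configuration nginx-proxy generates for a container started with VIRTUAL_HOST=my.local and VIRTUAL_PORT=8000 looks roughly like the following. This is a simplified sketch — the actual generated file in /etc/nginx/conf.d/ contains additional directives:

```nginx
# Simplified sketch of a config block generated by nginx-proxy
upstream my.local {
    # The container's IP on the shared Docker network
    # and the port taken from VIRTUAL_PORT
    server 172.20.0.2:8000;
}
server {
    listen 80;
    server_name my.local;
    location / {
        proxy_pass http://my.local;
        proxy_set_header Host $host;
    }
}
```

The "Connection refused" error from the original question occurs exactly at the `server 172.20.0.2:8000;` address: nginx-proxy has discovered the container and written the upstream correctly, but nothing inside the container is listening on that port.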

Most popular use cases for nginx-proxy

  1. Dynamic routing of incoming requests: One of the primary uses of nginx-proxy is to dynamically route incoming requests to the appropriate container based on the hostname or IP address. This can be done using the server_name directive in the nginx configuration, as shown in the following code block:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://container_name;
    }
}

In this example, all requests to example.com will be proxied to the container named “container_name”.

  2. SSL termination: nginx-proxy can also be used for SSL termination, which means it can handle the SSL encryption/decryption process on behalf of the proxied containers. This allows the containers to focus on handling application-level traffic, and it can also improve performance by offloading the SSL processing to nginx. An example configuration for SSL termination would look like:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /path/to/cert.crt;
    ssl_certificate_key /path/to/cert.key;
    location / {
        proxy_pass http://container_name;
    }
}
  3. Virtual host and IP-based routing: nginx-proxy also provides support for virtual host and IP-based routing, which allows for routing of incoming requests based on the hostname or IP address in the request. This can be useful in situations where multiple applications are running on the same host, and they need to be accessed by different hostnames or IP addresses. An example of IP-based routing is:
server {
    listen 80;
    server_name 1.2.3.4;
    location / {
        proxy_pass http://container_name;
    }
}

In this example, all requests sent to the IP address 1.2.3.4 will be proxied to the container named "container_name".
