How to do logging in a FastApi container, any logging does not appear in tiangolo uvicorn-gunicorn-fastapi-docker

Lightrun Team
18-Jan-2023

Explanation of the problem

The problem relates to the logging of a project that uses FastAPI, Gunicorn with Uvicorn workers, and Supervisor to keep the API up. The issue is that logs from files other than the FastAPI app are not coming through. The developer initially tried creating an ad-hoc script and changing the logging levels, but messages only appeared when the logging level was set to DEBUG.

To investigate further, the developer created a small test project to see whether the issue would occur with a clean slate, but the problem persisted. The developer also ruled out permissions issues by running chmod on the /var/log/ directory, which did not resolve the problem.

Here is an example of the code used for logging in the test project:

import logging
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
# no handler is configured on this logger or on the root logger,
# so this INFO record is never written anywhere
log.info('help!')
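
For comparison, here is a minimal sketch (not from the original report) showing that the same logger does produce output once a handler is configured, for example via logging.basicConfig on the root logger:

import logging

# attach a stream handler to the root logger so propagated records are emitted
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)

log = logging.getLogger(__name__)
log.info('help!')  # now printed to stderr by the root handler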

In order to reproduce the problem, the following steps should be taken:

  1. Run docker-compose up -d
  2. Run docker exec -it git-test_web_1 bash
  3. Run python3 ./appy.py

The test project can be found at: https://github.com/PunkDork21/fastapi-git-test. The developer is asking for recommendations and whether anyone else has run into this issue.

Troubleshooting with the Lightrun Developer Observability Platform

Getting a sense of what’s actually happening inside a live application is a frustrating experience, one that relies mostly on querying and observing whatever logs were written during development.
Lightrun is a Developer Observability Platform, allowing developers to add telemetry to live applications in real-time, on-demand, and right from the IDE.

  • Instantly add logs to, set metrics in, and take snapshots of live applications
  • Insights delivered straight to your IDE or CLI
  • Works where you do: dev, QA, staging, CI/CD, and production

Start for free today

Problem solution for How to do logging in a FastApi container, any logging does not appear in tiangolo uvicorn-gunicorn-fastapi-docker

As described above, the project uses FastAPI, Gunicorn with Uvicorn workers, and Supervisor to keep the API up, and logs from files other than the FastAPI app are not coming through. This is because the log information for each HTTP request is written to the uvicorn.access logger, and additional configuration is needed to see this information when Uvicorn is run via Gunicorn.

The first solution provided is to add the following snippet to the main.py file:

import logging
from fastapi.logger import logger as fastapi_logger

# reuse Gunicorn's error-log handlers for the Uvicorn access logger
gunicorn_error_logger = logging.getLogger("gunicorn.error")
gunicorn_logger = logging.getLogger("gunicorn")
uvicorn_access_logger = logging.getLogger("uvicorn.access")
uvicorn_access_logger.handlers = gunicorn_error_logger.handlers

# route FastAPI's own log records through the same handlers
fastapi_logger.handlers = gunicorn_error_logger.handlers

if __name__ != "__main__":
    # loaded via Gunicorn: inherit Gunicorn's configured log level
    fastapi_logger.setLevel(gunicorn_logger.level)
else:
    # run directly: log everything
    fastapi_logger.setLevel(logging.DEBUG)

This snippet attaches the handlers of the gunicorn.error logger to the uvicorn.access logger, so HTTP request information comes through, and to FastAPI's logger, so application log records are emitted through the same handlers. The if __name__ != "__main__": branch applies when the app is loaded via Gunicorn: the FastAPI logger takes Gunicorn's log level instead of the default. The else branch applies when the app is run directly, in which case the level is set to DEBUG.
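
As a quick check, here is a minimal sketch (assuming the snippet above lives in main.py; the /ping route is made up for illustration) of an endpoint that logs through the FastAPI logger, so its records are emitted by Gunicorn's handlers:

from fastapi import FastAPI
from fastapi.logger import logger as fastapi_logger

app = FastAPI()

@app.get("/ping")
def ping():
    # emitted through the gunicorn.error handlers wired up above
    fastapi_logger.info("ping received")
    return {"status": "ok"}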

The second solution provided is to set the environment variable LOG_LEVEL to debug and to add the following code snippet to the actual FastAPI app:

from fastapi.logger import logger
import logging

gunicorn_logger = logging.getLogger('gunicorn.error')
logger.handlers = gunicorn_logger.handlers
if __name__ != "__main__":
    logger.setLevel(gunicorn_logger.level)
else:
    logger.setLevel(logging.DEBUG)

This way, if the app is loaded via Gunicorn, the logger uses Gunicorn’s log level instead of the default one; the else branch covers running the app directly, in which case the level is set to DEBUG.

Other popular problems with uvicorn-gunicorn-fastapi-docker

Problem: Configuration of the Gunicorn workers.

By default, Gunicorn starts multiple worker processes to handle incoming requests. Each worker is a separate process with its own memory, so in-process state is not shared between workers, which can lead to problems with concurrent access to shared resources such as databases or in-memory caches.

Solution:

To address this, you can use the --workers option to configure the number of worker processes, and also choose an appropriate worker class such as sync or gevent.

# example of using sync worker
gunicorn myproject.main:app --workers=4 --worker-class=sync

# example of using gevent worker
gunicorn myproject.main:app --workers=4 --worker-class=gevent
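
To illustrate why per-worker state matters, here is a hypothetical sketch (not from the original discussion): each Gunicorn worker is a separate process, so a module-level counter like the one below is incremented independently in every worker rather than shared. State that must be shared should live in an external store such as a database or Redis.

from fastapi import FastAPI

app = FastAPI()

# module-level state: every Gunicorn worker process gets its own copy
request_count = 0

@app.get("/count")
def count():
    global request_count
    request_count += 1
    # with --workers=4, clients will see four independent counters
    return {"requests_seen_by_this_worker": request_count}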

Problem: Handling of static files in a FastAPI application when using Docker.

By default, FastAPI does not serve static files.

Solution:

You can configure a reverse proxy, such as Nginx, to handle the static files. Alternatively, you can serve static files directly from FastAPI by mounting a StaticFiles application (available as fastapi.staticfiles, re-exported from Starlette).

# in main.py
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()
app.mount("/static", StaticFiles(directory="static"), name="static")
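
Note that StaticFiles checks at startup that the given directory exists, so the static folder must be present inside the container image, for example copied in by the Dockerfile. A file such as static/style.css would then be served at /static/style.css.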

Problem: Handling environment variables when using Docker.

Environment variables defined on the host or in a local .env file are not automatically passed into the container, which can lead to problems with configuration or secrets management.

Solution:

You can use the --env-file option of docker run to supply a file containing environment variables, or use a library such as python-dotenv to load environment variables from a .env file.

# example of using the --env-file option
docker run -d --env-file=env.list myimage

# example of using python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # reads variables from a .env file into the process environment
DATABASE_URL = os.getenv("DATABASE_URL")
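
Another common approach in FastAPI projects, shown here as a hedged sketch, is to read environment variables through Pydantic settings rather than os.getenv. Note that BaseSettings lives in pydantic itself on Pydantic v1 and in the separate pydantic-settings package on Pydantic v2; the default value below is purely illustrative.

# Pydantic v1 style; on v2 use: from pydantic_settings import BaseSettings
from pydantic import BaseSettings

class Settings(BaseSettings):
    database_url: str = "sqlite:///./test.db"  # illustrative default

    class Config:
        env_file = ".env"  # also read values from a local .env file

settings = Settings()
print(settings.database_url)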

It is worth mentioning that each solution provided here is a general one and may not work for your specific case; it is always a good idea to check and test a solution before applying it.

A brief introduction to uvicorn-gunicorn-fastapi-docker

Uvicorn, Gunicorn, FastAPI, and Docker are a set of technologies that can be used to build and deploy high-performance web applications. Uvicorn is a lightweight, high-performance ASGI server that runs your FastAPI application. Gunicorn is a pre-fork worker manager that can run multiple Uvicorn worker processes, providing a balance between performance and reliability. FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints. Finally, Docker is a platform that lets you package and deploy your application in a container, making it easy to run on any system with Docker installed.

When combined, the Uvicorn-Gunicorn-FastAPI-Docker stack can be used to build and deploy high-performance web applications that handle a large number of concurrent connections. The Uvicorn workers handle incoming requests, the Gunicorn manager balances the load between them, and FastAPI provides a modern and easy-to-use web framework. Docker allows for easy deployment and scaling of the application and helps isolate it from the underlying system, making it more stable and predictable.

Most popular use cases for uvicorn-gunicorn-fastapi-docker

  1. Building and deploying high-performance web APIs: Uvicorn, Gunicorn and FastAPI provide a powerful stack for building and deploying web APIs that can handle a large number of concurrent connections. By using Uvicorn as the ASGI server, Gunicorn as the worker manager and FastAPI as the web framework, developers can easily build and deploy web APIs that can handle a high load of traffic (see the command below).
# example of running the application using uvicorn and gunicorn
gunicorn myproject.main:app --workers=4 --worker-class=uvicorn.workers.UvicornWorker
  2. Isolating the application from the underlying system: By packaging the application in a Docker container, it becomes isolated from the underlying system and its dependencies. This makes the application more stable and predictable, easy to run on any system with Docker installed, and simple to deploy and scale.
  3. Automated testing and continuous integration: By using Docker to package the application, it becomes easy to create test environments that are identical to the production environment. This makes it easy to test the application in a variety of configurations and to automate the testing process with tools such as Jenkins or Travis CI. The isolated nature of the container also makes it easy to test the app with different dependencies or configurations (see the command below).
# example of running the application in a Docker container
docker run -d -p 8000:8000 myimage

