Use `gunicorn` instead of `uvicorn` in main
See original GitHub issue
First Check
- I added a very descriptive title to this issue.
- I used the GitHub search to find a similar issue and didn’t find it.
- I searched the FastAPI documentation, with the integrated search.
- I already searched in Google “How to X in FastAPI” and didn’t find any information.
- I already read and followed all the tutorial in the docs and didn’t find an answer.
- I already checked if it is not related to FastAPI but to Pydantic.
- I already checked if it is not related to FastAPI but to Swagger UI.
- I already checked if it is not related to FastAPI but to ReDoc.
Commit to Help
- I commit to help with one of those options 👆
Example Code
import os

import uvicorn

os.environ["TOKENIZERS_PARALLELISM"] = "false"

if __name__ == "__main__":
    uvicorn.run(
        "start:app",
        host="0.0.0.0",
        port=8080,
        workers=2,
        limit_concurrency=70,
        backlog=300,
    )
Description
I have an app using uvicorn, and I have problems with accumulating memory usage. In addition, inside the app itself I use multiprocessing.Process to execute model inference with TensorFlow in parallel processes.
I have tried many configurations and settings on uvicorn, but the memory leak persists. At the moment I think this is due to the thread handling done by uvicorn. I got a suggestion to use gunicorn instead of uvicorn, but I am not sure how I could simply switch from one to the other programmatically. The application works and returns the requests; there are no error messages. The examples in the documentation use uvicorn; can gunicorn be used in the same way as uvicorn in those examples?
My question is: would a switch from uvicorn to gunicorn make sense, and if so, would that impact the FastAPI application?
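For reference, a common pattern is to keep uvicorn as the worker class and let gunicorn act as the process manager. A minimal sketch of the command-line equivalent of the `uvicorn.run(...)` call above, assuming both gunicorn and uvicorn are installed (note that uvicorn's `limit_concurrency` has no direct gunicorn flag, so it is omitted here):

```shell
# Run the same "start:app" FastAPI application under gunicorn,
# using uvicorn's worker class so the ASGI app keeps working unchanged.
gunicorn start:app \
    --workers 2 \
    --worker-class uvicorn.workers.UvicornWorker \
    --bind 0.0.0.0:8080 \
    --backlog 300
```

With this setup, gunicorn supervises the worker processes (restarting ones that die or, with `--max-requests`, recycling them periodically, which can mitigate slow memory growth), while each worker still runs the app through uvicorn.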
Operating System
Linux
Operating System Details
Ubuntu 20.04, Python 3.7
pip packages:
fastapi==0.61.1
fastapi-route-logger-middleware==0.1.3
grpcio==1.32.0
joblib==0.17.0
numpy==1.18.5
optional.py==1.1.0
pandas==0.25.3
prometheus-fastapi-instrumentator==5.7.1
pydantic==1.7.1
python-multipart==0.0.5
tensorflow-gpu==2.3.2
uvicorn==0.12.2
FastAPI Version
fastapi==0.61.1
Python Version
Python 3.7.12
Additional Context
The application implements prediction using 3 TensorFlow models, two of which run inside a multiprocessing.Process call to achieve parallel execution.
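The parallel-inference pattern described above can be sketched as follows. This is a minimal, self-contained illustration: `run_inference` is a hypothetical stand-in for the actual TensorFlow model call, and results are passed back through a queue.

```python
import multiprocessing as mp


def run_inference(model_name, inputs, queue):
    # Hypothetical stand-in for a TensorFlow model prediction;
    # here each input is simply doubled.
    result = [x * 2 for x in inputs]
    queue.put((model_name, result))


def predict_parallel(inputs):
    # Run two "models" in parallel processes, as the issue describes.
    queue = mp.Queue()
    procs = [
        mp.Process(target=run_inference, args=(name, inputs, queue))
        for name in ("model_a", "model_b")
    ]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid blocking on large payloads.
    results = dict(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return results
```

One practical caveat with this pattern: child processes spawned per request are a frequent source of apparent memory growth if they are not joined, regardless of which server (uvicorn or gunicorn) runs the app.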
Issue Analytics
- State:
- Created 2 years ago
- Comments: 10 (8 by maintainers)
Top GitHub Comments
For those arriving later, Sebastián just updated the docs: https://fastapi.tiangolo.com/deployment/server-workers/
Yeah… I’m going to talk to the maintainers to see if they can improve this on uvicorn. 😓