[CRITICAL] WORKER TIMEOUT
I’m running the uvicorn-gunicorn-fastapi:python3.7 Docker image on an Azure App Service (B2: 200 ACU, 2 cores, 3.5 GB memory, OS: Linux).
My Dockerfile looks as follows:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
RUN apt-get update \
&& apt-get install -y tesseract-ocr tesseract-ocr-deu libgl1-mesa-dev poppler-utils \
&& apt-get clean
COPY /app .
RUN pip install -r /app/requirements.txt
The service accepts POST requests with a file attached, processes the file with Tesseract and OpenCV, and responds with the result once processing is finished.
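For context, the endpoint is roughly equivalent to the following sketch (route name, preprocessing steps, and the German language model are simplified stand-ins for the real pipeline; it assumes pytesseract and opencv-python are in requirements.txt):

import cv2
import numpy as np
import pytesseract
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/process")
async def process_file(file: UploadFile = File(...)):
    # Read the upload into memory and decode it as an image
    data = await file.read()
    image = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)

    # Minimal preprocessing before OCR (illustrative only)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Run Tesseract on the preprocessed image
    text = pytesseract.image_to_string(gray, lang="deu")

    return {"filename": file.filename, "text": text}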
Oftentimes, however, the processing stops with the following error:
2020-11-04T13:48:58.000206215Z [2020-11-04 13:48:57 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
2020-11-04T13:48:58.529238062Z [2020-11-04 13:48:58 +0000] [90] [INFO] Booting worker with pid: 90
2020-11-04T13:49:00.743342241Z [2020-11-04 13:49:00 +0000] [90] [INFO] Started server process [90]
2020-11-04T13:49:00.743447942Z [2020-11-04 13:49:00 +0000] [90] [INFO] Waiting for application startup.
2020-11-04T13:49:00.748887110Z [2020-11-04 13:49:00 +0000] [90] [INFO] Application startup complete.
This error does not coincide with the default 120-second timeout being reached. Still, I tried to rule the timeout out with a custom gunicorn_conf.py that raises the timeout to 180 seconds, and I also tried increasing and decreasing the number of workers per core. The error remains.
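For reference, the custom gunicorn_conf.py is essentially the following sketch; the 180-second timeout is the actual change, while the other settings are illustrative:

# gunicorn_conf.py -- placed where the base image picks it up (e.g. /app/gunicorn_conf.py)
import multiprocessing

# Worker processes: roughly one per core on the B2 plan (illustrative)
workers = multiprocessing.cpu_count()
worker_class = "uvicorn.workers.UvicornWorker"

# Raised from the 120-second default to rule the worker timeout out
timeout = 180
graceful_timeout = 180

# More verbose logging while debugging the timeouts
loglevel = "debug"
bind = "0.0.0.0:80"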
I also checked the log files on the App Service, but there is no further information about the error.
Changing the LOG_LEVEL in the gunicorn_conf file didn’t help either.
Does anyone know a solution to this problem? Running the Docker container locally (Windows 10, Docker Engine v19.03.13) works just fine.
Hi @mateusjs. App Services in Azure time out after 230 seconds by default and, AFAIK, this timeout can’t be configured. So we changed the service’s logic so that the timeout was no longer a problem.
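For illustration, one pattern that keeps every response well under the 230-second limit (a sketch of the general idea, not necessarily exactly what we did) is to accept the upload immediately, run the OCR outside the request/response cycle, and let the client poll for the result:

import uuid
from typing import Dict, Optional

from fastapi import BackgroundTasks, FastAPI, File, HTTPException, UploadFile

app = FastAPI()
results: Dict[str, Optional[str]] = {}  # in-memory store; use Redis/a DB in production


def do_heavy_processing(data: bytes) -> str:
    # Hypothetical stand-in for the Tesseract/OpenCV pipeline
    return "processed %d bytes" % len(data)


def run_job(job_id: str, data: bytes) -> None:
    results[job_id] = do_heavy_processing(data)


@app.post("/jobs")
async def create_job(background_tasks: BackgroundTasks, file: UploadFile = File(...)):
    data = await file.read()
    job_id = str(uuid.uuid4())
    results[job_id] = None
    # The task runs after the response has been sent, so the client gets an
    # answer in milliseconds instead of waiting for the OCR to finish.
    background_tasks.add_task(run_job, job_id, data)
    return {"job_id": job_id}


@app.get("/jobs/{job_id}")
def get_job(job_id: str):
    if job_id not in results:
        raise HTTPException(status_code=404, detail="unknown job")
    if results[job_id] is None:
        return {"status": "processing"}
    return {"status": "done", "result": results[job_id]}

For CPU-heavy OCR, a dedicated task queue (Celery, RQ, or similar) is more robust than in-process background tasks, but the shape is the same: respond quickly, process asynchronously, poll for the result.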
Indeed, this is a real issue, originally brought up in #46. The solution provided there could help in some cases; however, in the cases we’ve seen, we haven’t even come close to reaching the default 120-second graceful timeout period.