Workers go into restarting/crash cycle (WORKER TIMEOUT / signal 6)
I am struggling to work out which layer is the root cause here.
My app runs fine, but then it suddenly becomes unable to serve requests for a while and then “fixes itself”. While it is unable to serve requests, my logs show:
[2022-01-18 08:36:46 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1548)
[2022-01-18 08:36:46 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1575)
[2022-01-18 08:36:46 +0000] [1505] [WARNING] Worker with pid 1548 was terminated due to signal 6
[2022-01-18 08:36:46 +0000] [1505] [WARNING] Worker with pid 1575 was terminated due to signal 6
[2022-01-18 08:36:46 +0000] [1783] [INFO] Booting worker with pid: 1783
[2022-01-18 08:36:46 +0000] [1782] [INFO] Booting worker with pid: 1782
[2022-01-18 08:36:47 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1577)
[2022-01-18 08:36:47 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1578)
[2022-01-18 08:36:47 +0000] [1505] [WARNING] Worker with pid 1578 was terminated due to signal 6
[2022-01-18 08:36:47 +0000] [1784] [INFO] Booting worker with pid: 1784
[2022-01-18 08:36:47 +0000] [1505] [WARNING] Worker with pid 1577 was terminated due to signal 6
[2022-01-18 08:36:47 +0000] [1785] [INFO] Booting worker with pid: 1785
[2022-01-18 08:36:51 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1545)
[2022-01-18 08:36:51 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1551)
[2022-01-18 08:36:51 +0000] [1505] [CRITICAL] WORKER TIMEOUT (pid:1559)
[2022-01-18 08:36:52 +0000] [1505] [WARNING] Worker with pid 1551 was terminated due to signal 6
Initially, I thought it was related to load and resource limits, but it seems to also happen during “typical load” and when resources are nowhere near their limits.
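For context on the log messages above: gunicorn's master process kills any worker that stops heartbeating for longer than the configured timeout, sending SIGABRT, which is what appears as "terminated due to signal 6"; with uvicorn workers, anything that blocks the event loop for that long can trigger exactly this restart cycle. The sketch below is purely illustrative (the file name gunicorn_conf.py and every value shown are assumptions, not taken from this issue) and only shows where the relevant knobs live:

# gunicorn_conf.py -- minimal sketch; names and values are assumptions,
# not the configuration used in this issue.

# Run FastAPI (ASGI) apps under gunicorn via the uvicorn worker class.
worker_class = "uvicorn.workers.UvicornWorker"

# Number of worker processes managed by the gunicorn master.
workers = 2

# Seconds a worker may go without heartbeating before the master kills it
# with SIGABRT (signal 6) and logs "WORKER TIMEOUT".
timeout = 120

# Extra grace period for a worker to finish in-flight requests on restart.
graceful_timeout = 30

# Send worker lifecycle events (boot, timeout, termination) to stderr.
loglevel = "info"
errorlog = "-"

Raising the timeout only hides the symptom if the real problem is a blocked event loop or exhausted resources, so it is better treated as a diagnostic knob than a fix.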

Yup, I tried with 1 worker, still no luck.
Adding some info in case it helps: the gunicorn config below is run via supervisor and was fine for a while. I then added FastAPI Cache, and all was still good, but the crash rate has increased dramatically in the past few days.
Server RAM: 1.9 GB
Thanks
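The gunicorn configuration the commenter refers to is not included here. Purely as a hypothetical sketch of the setup described (gunicorn with uvicorn workers, started by supervisor, on a 1.9 GB host), something along these lines would be typical; the module path app.main:app and every value below are assumptions, not the commenter's actual settings:

# gunicorn_conf.py -- hypothetical sketch only; not the commenter's config.
# Supervisor would run a command along the lines of:
#   gunicorn -c gunicorn_conf.py app.main:app
# where app.main:app is an assumed module path.

bind = "0.0.0.0:8000"
worker_class = "uvicorn.workers.UvicornWorker"

# The commenter also tried a single worker; on a 1.9 GB host a low worker
# count limits memory pressure.
workers = 1

# Recycle each worker after a bounded number of requests to contain slow
# memory growth (e.g. from per-request caching); jitter staggers restarts.
max_requests = 1000
max_requests_jitter = 100

timeout = 120

If the crashes correlate with memory, watching the worker's resident memory (for example with top or ps) while the problem occurs would confirm whether the 1.9 GB of RAM is actually being exhausted, or whether the workers are timing out for another reason.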