[BUG] Received duplicated signal from Gunicorn

Checklist

  • The bug is reproducible against the latest release and/or master.
  • There are no similar issues or pull requests to fix it yet.

Describe the bug

Currently, when we send SIGINT to a gunicorn process that is running with uvicorn workers, the SIGINT is not propagated to the workers. Instead, a SIGQUIT is sent to them. At the time of writing, the reason for that decision is unclear, but you can follow this gunicorn issue for more background.

This behavior affects uvicorn, and I'm unsure how we should act on it. The effect is that when running gunicorn with uvicorn workers, the shutdown event never fires. In theory that is fine, since SIGINT and SIGQUIT should terminate the worker immediately, but it is not what standalone uvicorn does: there, SIGINT triggers the shutdown event.

Everything I mentioned above should really be a separate issue, but I was not able to solve it without workarounds, and I'm unsure we want those. The reason is that the uvicorn worker receives two signals when we send SIGINT to the gunicorn process: a SIGINT, delivered directly to the uvicorn worker, and a SIGQUIT, sent by the gunicorn master process.

As you can see, even if we fix the SIGQUIT by sending SIGINT instead, the issue is not solved: uvicorn will receive a double SIGINT, and we will force exit (via the force_exit attribute).
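
The double delivery is easy to see outside of gunicorn as well. Below is a small standalone sketch (mine, purely illustrative; it is not gunicorn or uvicorn code) that mimics the same mechanics: the parent traps SIGINT and forwards SIGQUIT to its child, while CTRL + C delivers SIGINT to the whole foreground process group, so the child ends up receiving both signals.

# double_signal_demo.py (illustrative only)
import os
import signal
import sys
import time


def run_child():
    def log(signum, frame):
        # Log every signal the "worker" receives, then exit on SIGQUIT
        print(f"child {os.getpid()} received {signal.Signals(signum).name}", flush=True)
        if signum == signal.SIGQUIT:
            sys.exit(0)

    signal.signal(signal.SIGINT, log)
    signal.signal(signal.SIGQUIT, log)
    while True:
        time.sleep(0.1)


def run_parent(child_pid):
    def forward_quit(signum, frame):
        # Mimic the gunicorn master: on SIGINT, send SIGQUIT to the child
        print(f"parent {os.getpid()} got SIGINT, forwarding SIGQUIT to the child", flush=True)
        os.kill(child_pid, signal.SIGQUIT)

    signal.signal(signal.SIGINT, forward_quit)
    os.waitpid(child_pid, 0)


if __name__ == "__main__":
    pid = os.fork()
    if pid == 0:
        run_child()
    else:
        run_parent(pid)

Running it and pressing CTRL + C prints one SIGINT line and one SIGQUIT line from the same child pid, which is exactly the double-signal situation the uvicorn worker ends up in.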

To reproduce

Just create any ASGI app with a shutdown event:

# test.py
from fastapi import FastAPI

app = FastAPI()

@app.on_event("shutdown")
async def shutdown():
    print("This will not be triggered")

Then run gunicorn with uvicorn workers:

gunicorn -k uvicorn.workers.UvicornWorker test:app

Feel free to press CTRL + C and you’ll see this log:

❯ gunicorn -k uvicorn.workers.UvicornWorker test:app
[2021-07-10 20:08:01 +0200] [43323] [INFO] Starting gunicorn 20.1.0
[2021-07-10 20:08:01 +0200] [43323] [INFO] Listening at: http://127.0.0.1:8000 (43323)
[2021-07-10 20:08:01 +0200] [43323] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-07-10 20:08:01 +0200] [43325] [INFO] Booting worker with pid: 43325
[2021-07-10 20:08:01 +0200] [43325] [INFO] Started server process [43325]
[2021-07-10 20:08:01 +0200] [43325] [INFO] Waiting for application startup.
[2021-07-10 20:08:01 +0200] [43325] [INFO] Application startup complete.
^C[2021-07-10 20:08:01 +0200] [43323] [INFO] Handling signal: int
[2021-07-10 20:08:01 +0200] [43325] [INFO] Shutting down
[2021-07-10 20:08:01 +0200] [43325] [INFO] Error while closing socket [Errno 9] Bad file descriptor
[2021-07-10 20:08:01 +0200] [43325] [INFO] Finished server process [43325]
[2021-07-10 20:08:01 +0200] [43325] [INFO] Worker exiting (pid: 43325)
[2021-07-10 20:08:01 +0200] [43323] [INFO] Shutting down: Master

As you can see, the message from the shutdown event doesn't appear.
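
Note that pressing CTRL + C delivers SIGINT to the entire foreground process group, so the worker receives it directly on top of the SIGQUIT forwarded by the master. To observe only gunicorn's own forwarding, you can send the signal to the master alone, e.g. kill -INT 43323, using the master pid from the log above (this is a plain POSIX observation on my side, not something specific to uvicorn).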

Expected behavior

If we want consistency with standalone uvicorn, which I'm really unsure about (it doesn't feel right to wait for the process to finish gracefully on SIGINT), then we should match that behavior, and gunicorn with uvicorn workers should trigger the shutdown event as well.

If we decide instead that the shutdown event should not be triggered by standalone uvicorn, then this is no longer an issue, and we should stop handling SIGINT in server.py. I believe this is unlikely, for practical reasons during development.

Actual behavior

The shutdown event is not triggered when running gunicorn with uvicorn workers and SIGINT is sent.

Environment

  • OS / Python / Uvicorn version: Running uvicorn 0.14.0 with CPython 3.8.10 on Linux
  • gunicorn (version 20.1.0)

Additional context

More context can be found on Gitter. Here are some messages, but you can follow the discussion there:

  • https://gitter.im/encode/community?at=60e3592b457e19611a37a5d6
  • https://gitter.im/encode/community?at=60e35dd8f862a72a30eeb8dc
  • https://gitter.im/encode/community?at=60e5966e24f0ae2a244a5e0d

\cc @tomchristie @florimondmanca

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 3
  • Comments: 13 (7 by maintainers)

Top GitHub Comments

3 reactions
dmrz commented, Nov 19, 2021

Here is a workaround I am using (it is not very battle-tested yet, though):

import asyncio
import os
import signal

# `app` and `database` are defined elsewhere in the application
@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()


if "gunicorn" in os.environ.get("SERVER_SOFTWARE", ""):
    # Gunicorn forwards SIGQUIT to the worker, so run the cleanup there as well
    loop = asyncio.get_event_loop()
    loop.add_signal_handler(signal.SIGQUIT, lambda: asyncio.create_task(shutdown()))
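
A note on this workaround (my reading, not the commenter's wording): registering the handler on the event loop replaces the default SIGQUIT disposition inside the worker, so the cleanup coroutine at least gets scheduled even though uvicorn's shutdown event never fires under gunicorn, and the SERVER_SOFTWARE check keeps it from interfering with standalone uvicorn, where the shutdown event works as expected.
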
1 reaction
Kludex commented, Oct 14, 2022

A PR to modify the UvicornWorker is welcome.
