
Sentry not logging Gunicorn timeout errors

See original GitHub issue

We have a fairly standard Gunicorn setup, using the default sync worker with the default single thread, running a Python app.

Gunicorn is started with this command line:

gunicorn \
    --workers=16 \
    --timeout=120 \
    -b 127.0.0.1:8082 \
    --paste production.ini \
    2>> api-err.log 1>> api-out.log

The Python Sentry integration is loaded with the following:

import sentry_sdk
from pyramid.config import Configurator
from sentry_sdk.integrations.pyramid import PyramidIntegration

sentry_sdk.init(
    dsn="https://xxx@sentry.io/123",
    integrations=[PyramidIntegration()],
    release='1.2.3'
)

config = Configurator(settings=settings)
...

The issue is that whenever a Gunicorn worker timeout occurs, it is not logged to Sentry. The only way we can tell there was a timeout is the “[CRITICAL] WORKER TIMEOUT” entries in api-err.log and the matching 499 status code lines in nginx.log.

It’d be important to log these timeout events to Sentry.
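For context on why the SDK never sees these: the timeout is detected by the Gunicorn master process, which aborts the worker from the outside, so no exception is raised inside the request where the Sentry SDK is active. One workaround sometimes used is Gunicorn's worker_abort server hook, which is called when a worker receives SIGABRT (which generally happens on a timeout). Below is a minimal sketch of a gunicorn config file using that hook; it is not from the original issue, the DSN is a placeholder, and it assumes the worker is still able to run the signal handler and flush before it dies:

# gunicorn.conf.py -- sketch, not taken from the original issue
import sentry_sdk

sentry_sdk.init(dsn="https://xxx@sentry.io/123")  # placeholder DSN

def worker_abort(worker):
    # Called by Gunicorn when the worker receives SIGABRT,
    # which generally happens on a worker timeout.
    sentry_sdk.capture_message(
        "Gunicorn worker timeout (pid %s)" % worker.pid, level="error"
    )
    sentry_sdk.flush()

The config file would then be passed to Gunicorn with -c gunicorn.conf.py.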

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

6 reactions
darwinyip commented, Nov 8, 2019

Bump on this issue. Similar configuration, but with Gunicorn + Falcon + the Sentry SDK.

1 reaction
antonpirker commented, Feb 21, 2022

I have now reproduced the problem with the information above. I have a Falcon project running under Gunicorn: https://github.com/antonpirker/sample-projects-for-sentry/tree/main/falcon-project

I run gunicorn like this:

gunicorn \
    --workers=16 \
    --timeout=1 \
    -b 127.0.0.1:8000 \
    falcon-project:app

In my view handler I have a time.sleep(2), so the sample project always runs into a timeout (a minimal handler along those lines is sketched below).
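For illustration, a minimal Falcon resource of that shape could look like the following; this is not copied from the linked sample project, the route and class names are made up, and it assumes Falcon 3.x (falcon.App):

import time

import falcon

class SlowResource:
    def on_get(self, req, resp):
        # Sleep longer than Gunicorn's --timeout=1 so the worker always gets killed.
        time.sleep(2)
        resp.media = {"status": "ok"}

app = falcon.App()
app.add_route("/slow", SlowResource())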

To capture the Gunicorn timeout you need to change the gunicorn executable script. This is how you do it:

  • Find the gunicorn executable script:
$ which gunicorn
/Users/antonpirker/code/sample-projects-for-sentry/falcon-project/.venv/bin/gunicorn
  • Edit this file and initialize the Sentry SDK in it. Before editing, it looks something like this:
#!/Users/antonpirker/code/sample-projects-for-sentry/falcon-project/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from gunicorn.app.wsgiapp import run

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(run())
  • After initializing the Sentry SDK, the file looks like this:
#!/Users/antonpirker/code/sample-projects-for-sentry/falcon-project/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from gunicorn.app.wsgiapp import run

import os
import sentry_sdk
from sentry_sdk.integrations.falcon import FalconIntegration
sentry_sdk.init(
    dsn=os.environ.get('SENTRY_DSN', None),
    integrations=[FalconIntegration()]
)

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(run())
  • Now start your project with the same Gunicorn command
  • The timeout errors should end up in your Sentry Issues (see the screenshot in the original comment).

This should solve this issue.
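Since the edited script reads the DSN from the SENTRY_DSN environment variable, one way to launch it (the DSN value below is a placeholder) would be:

SENTRY_DSN="https://xxx@sentry.io/123" gunicorn \
    --workers=16 \
    --timeout=1 \
    -b 127.0.0.1:8000 \
    falcon-project:app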

Read more comments on GitHub >

