First exception not flushed on Celery+Python until second occurrence
See original GitHub issue

Here is an issue that's a little hard to understand. We run Sentry on Django+Celery.
Django==2.2.12
celery==4.4.2
sentry-sdk==0.14.3
We use many packages in that project, so I suspect a conflict with one of them, but I do not know where to start.
Details
- Exceptions are reported to Sentry as expected from the Django WSGI application
- Exceptions raised in code running on Celery are not reported immediately
- Triggering the same exception a second time (same fingerprint, different message) "flushes" both, and both then appear in Sentry
- Triggering 2 different exceptions in a single Celery task reports neither of them to Sentry
- Calling Hub.current.client.flush() doesn't change any of this
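The symptom described above — one event stuck until a second one arrives — can be modelled with a stdlib-only toy. This is purely an illustration of the observed behaviour, not the sentry-sdk's actual transport code:

```python
import queue


class ToyBatchingTransport:
    """Toy model of a transport that only ships events once a second
    one arrives -- mirrors the symptom, not the real sentry-sdk code."""

    def __init__(self, batch_size=2):
        self.batch_size = batch_size
        self.pending = queue.Queue()
        self.sent = []

    def capture(self, event):
        # Events accumulate until the batch threshold is reached.
        self.pending.put(event)
        if self.pending.qsize() >= self.batch_size:
            self.flush()

    def flush(self):
        # Drain every pending event at once.
        while not self.pending.empty():
            self.sent.append(self.pending.get())


transport = ToyBatchingTransport()
transport.capture("first error")        # stays pending, nothing is sent
first_pending = list(transport.sent)    # [] -- first event is stuck
transport.capture("second error")       # second event ships both together
```

After the first capture nothing has been sent; the second capture delivers both events at once, which is exactly the "flush on second occurrence" behaviour reported above.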
Celery task
import logging

logger = logging.getLogger(__name__)

@app.task  # `app` is the project's Celery application
def sentry_logging_task(what):
    try:
        raise Exception("Sentry say what? {}".format(what))
    except Exception as e:
        logger.exception(str(e))
Sentry init
import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import ignore_logger

from django.conf import settings

sentry_sdk.init(
    dsn=settings.DNS,
    integrations=[DjangoIntegration(), CeleryIntegration()],
    environment=settings.ENV,
)
ignore_logger('django.security.DisallowedHost')
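When chasing delivery problems like this, the SDK can be made to log what its transport is doing. Below is a sketch of the same init with diagnostics turned on — `debug` and `shutdown_timeout` are documented `sentry_sdk.init` options, and the settings names are carried over from the snippet above:

```python
import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

from django.conf import settings  # settings.DNS / settings.ENV as above

sentry_sdk.init(
    dsn=settings.DNS,
    integrations=[DjangoIntegration(), CeleryIntegration()],
    environment=settings.ENV,
    debug=True,           # print SDK and transport activity to stderr
    shutdown_timeout=10,  # wait up to 10 s for pending events on shutdown
)
```

With `debug=True`, the SDK logs when an event is queued and when the background worker actually sends it, which makes it easy to see whether the first event ever leaves the queue.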
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 1
- Comments: 6 (1 by maintainers)
Top Results From Across the Web
Multiple Django Celery Tasks are trying to save to the same ...
The problem that I believe I am running into is that they are all trying to update the same user profile object before...

Two years with Celery in Production: Bug Fix Edition - Medium
Looks like when there are tasks already in the queue, and a worker is consuming from multiple queues, this bug makes an appearance...

Celery task retry guide - Ines Panker
The worker starts executing the task and either finishes it with the status SUCCESS (no exception and no retry occur) or FAILURE (an...

Change history for Celery 1.0 — Celery 5.2.7 documentation
python manage.py camqadm exchange.delete celeryresults ... celery.execute.apply: Should return exception, not ExceptionInfo on error. See issue #111.

Using Celery With Flask - miguelgrinberg.com
The first example I will show you does not require this functionality, but the second does, so it's best to have it configured...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I've confirmed that this bug is still present in sentry-sdk 1.5.1 and master, although the behaviour has changed due to commit a6cc9718fe398acee134e6ee9297e0fddea9b359. There is still the issue that the first logged error doesn't get processed immediately and could stay pending on the queue indefinitely.

I still consider this an issue because the Celery worker may not be terminated for quite some time, and so the error may not be reported until it is too late.
The code in https://github.com/getsentry/sentry-python/issues/687#issuecomment-837738001 can still be used to replicate the issue. By default, Celery scales down inactive workers after 30 seconds, so the error is reported after 30 seconds. This can be changed with the AUTOSCALE_KEEPALIVE environment variable; for example, setting AUTOSCALE_KEEPALIVE=600 will demonstrate that the error is not reported for 10 minutes.

This issue has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone … forever!

"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
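As a sketch of the reproduction setup described in the comment above (the application name `proj` is a placeholder), the autoscaler keepalive can be raised when starting an autoscaling worker:

```shell
# Celery's autoscaler reads AUTOSCALE_KEEPALIVE from the environment
# (default: 30 seconds before idle worker processes are scaled down).
AUTOSCALE_KEEPALIVE=600 celery -A proj worker --autoscale=4,1
```

With the keepalive raised to 600 seconds, an idle worker process survives for 10 minutes, so a pending event that is only delivered on scale-down stays unreported for that long.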