[0.7.0] `CeleryIntegration` captures retries
Greetings fellows!
We are having an issue with CeleryIntegration in Sentry SDK.
Current versions
- Python 3.6.7
- Django 2.1.5
- Celery 4.1.1
- Sentry SDK 0.7.0-0.7.1
Current behavior
In our code (internal and 3rd-party) we are using Celery tasks retry functionality.
The app.Task.retry() call raises an exception, so any code after the retry won't be reached. This is the Retry exception; it is not handled as an error but rather as a semi-predicate that signals to the worker that the task is to be retried, so that the correct state can be stored when a result backend is enabled.
We recently switched from Raven to Sentry SDK 0.6.9, and everything seemed to work as before. But today we updated to the 0.7.0 release (and later to 0.7.1). This caused every celery.exceptions.Retry to be sent to Sentry, which quickly filled our Sentry server with thousands of events. Previously (with the old SDK and Raven), those exceptions were ignored and never sent to the Sentry server.
Expected behavior
CeleryIntegration should not flood the Sentry server with every retry exception; basically, the same behavior as in Raven and Sentry SDK < 0.7.0.
Open questions
I am not sure whether the old behavior was intentional or a mistake.
If it was intended, we should reimplement it in the current integration.
If not, there should be a way to filter/ignore that kind of exception (I am not sure whether we can filter all retries from internal and 3rd-party code in before_send in a clean way).
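For reference, a before_send filter along these lines is possible; below is a minimal sketch (the make_before_send helper is our own illustration, not part of sentry_sdk, and the hint dict shape follows the SDK's documented exc_info convention):

```python
def make_before_send(ignored_types):
    """Return a before_send hook that drops events whose exception
    is an instance of any type in ignored_types (e.g. celery's Retry)."""
    def before_send(event, hint):
        exc_info = hint.get("exc_info")
        if exc_info is not None and isinstance(exc_info[1], tuple(ignored_types)):
            return None  # returning None drops the event entirely
        return event
    return before_send
```

In sentry_sdk.init() one would pass before_send=make_before_send([celery.exceptions.Retry]); the open question above is whether this reliably catches retries raised from 3rd-party code as well.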
Could you help me to clarify this issue?
Issue Analytics
- State:
- Created 5 years ago
- Comments: 6 (4 by maintainers)
Top GitHub Comments
#253 should work, or you can use this:
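A sketch of one such workaround, assuming sentry_sdk's ignore_errors init option is available in the installed SDK version and accepts exception classes (the DSN is a placeholder):

```python
import sentry_sdk
from celery.exceptions import Retry
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@sentry.example.com/0",  # placeholder DSN
    integrations=[DjangoIntegration(), CeleryIntegration()],
    # Drop Retry pseudo-errors before they are sent to the server.
    ignore_errors=[Retry],
)
```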
This is absolutely a regression. Thanks for reporting! We recently rewrote the Celery integration in an attempt to fix other bugs. We will fix this within 1-2 days.