[0.7.0] `CeleryIntegration` captures retries

See original GitHub issue

Greetings fellows!

We are having an issue with CeleryIntegration in Sentry SDK.

Current versions

  • Python 3.6.7
  • Django 2.1.5
  • Celery 4.1.1
  • Sentry SDK 0.7.0–0.7.1

Current behavior

In our code (internal and 3rd-party) we are using Celery's task retry functionality.

The app.Task.retry() call will raise an exception, so any code after the retry won't be reached. This is the Retry exception; it isn't handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.
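
To make that concrete, here is a minimal sketch of the retry pattern described above (the task name, broker URL, and helper function are illustrative, not taken from our codebase):

from celery import Celery

app = Celery("demo", broker="memory://")  # broker URL is just a placeholder


@app.task(bind=True, max_retries=3, default_retry_delay=10)
def fetch_remote(self, url):
    try:
        return flaky_request(url)
    except ConnectionError as exc:
        # self.retry() raises celery.exceptions.Retry, so nothing after this
        # line runs; the worker catches Retry and reschedules the task.
        raise self.retry(exc=exc)


def flaky_request(url):
    # Hypothetical helper standing in for a transiently failing network call.
    raise ConnectionError("temporary failure")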

We recently switched from Raven to Sentry SDK 0.6.9, and everything seemed to work as before. But today we updated to the 0.7.0 release (and later to 0.7.1).

This caused every celery.exceptions.Retry to be sent to Sentry, which quickly flooded our Sentry server with thousands of events. Previously (with the old SDK and Raven), those exceptions were ignored and not sent to the Sentry server.

Expected behavior

CeleryIntegration should not flood the Sentry server with every retry exception; basically, the same behavior as in Raven and Sentry SDK < 0.7.0.

Open questions

I am not sure whether the old behavior was intentional or a mistake. If it was intended, we should reimplement it in the current integration. If not, there should be a way to filter/ignore that kind of exception (I am not sure we can cleanly filter all retries from internal and 3rd-party code in before_send).

Could you help clarify this issue?

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

2 reactions
untitaker commented, Feb 6, 2019

#253 should work, or you can use this:

from celery.exceptions import Retry


def before_send(event, hint):
    # Drop the event when the reported exception is a Celery Retry.
    try:
        if isinstance(hint['exc_info'][1], Retry):
            return None
    except Exception:
        # No usable exc_info on the hint; keep the event.
        pass
    return event
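
For completeness, here is a sketch of how such a filter could be wired into the SDK setup (the DSN value is a placeholder; before_send and CeleryIntegration are part of the public sentry_sdk API):

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration

sentry_sdk.init(
    dsn="https://<key>@sentry.example.com/<project>",  # placeholder DSN
    integrations=[CeleryIntegration()],
    # Drop events whose exception is celery.exceptions.Retry.
    before_send=before_send,
)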

2 reactions
untitaker commented, Feb 6, 2019

This is absolutely a regression. Thanks for reporting! We recently rewrote the Celery integration in an attempt to fix other bugs. We will fix this within 1-2 days.

Read more comments on GitHub.

Top Results From Across the Web

  • sentry Changelog - PyUp.io
    fix(browser): Set severity level for events captured by the global error handler (4460) ... Retry` spamming in Celery integration. ... 0.7.0; 0.6.9; 0.6.8 ...
  • 1.1.7 (core) / 0.17.7 (libraries) - Dagster Docs
    This means that for the in_process executor, where all steps are executed in the same process, the captured compute logs for all steps...
  • Celery - Sentry Documentation
    The Celery integration adds support for the Celery Task Queue System. ... CeleryIntegration(), ], # Set traces_sample_rate to 1.0 to capture 100%...
  • conda-forge - :: Anaconda.org
    aiapy, 0.7.0, BSD-3-Clause, X, Python package for AIA analysis. ... aiohttp-retry, 2.8.3, MIT, X, Simple retry client for aiohttp.
  • [fedora-arm] arm rawhide report: 20150721 changes
    ... to PostScript converter New package: entangle-0.7.0-3.fc23 Tethered ... kjumpingcube-15.04.3-1.fc23 Territory capture game New package: ...
