Issue working with Sentry celery integration

See original GitHub issue

I’m trying to use celery-once with the Sentry celery integration, but I’m running into an issue with how Sentry patches the task tracer.
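
For context, a minimal setup combining the two libraries might look like the sketch below; the broker URL, DSN, config values and task body are placeholders rather than anything taken from the original issue:

    import sentry_sdk
    from celery import Celery
    from celery_once import QueueOnce
    from sentry_sdk.integrations.celery import CeleryIntegration

    # Enable Sentry's Celery integration, which wraps task execution for error reporting.
    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[CeleryIntegration()],
    )

    app = Celery('tasks', broker='redis://localhost:6379/0')

    # celery-once needs a locking backend to deduplicate tasks by key.
    app.conf.ONCE = {
        'backend': 'celery_once.backends.Redis',
        'settings': {'url': 'redis://localhost:6379/0', 'default_timeout': 60 * 60},
    }

    @app.task(base=QueueOnce, once={'keys': ['user_id']})
    def example(user_id, force=False):
        return user_id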

The issue I’m seeing is that the keys generated before and after the task is patched are different. I believe this is because of how inspect.getcallargs behaves with Sentry’s wrapped run method.
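
A minimal illustration of the suspected mismatch, where the wrapper below is only a stand-in for whatever Sentry installs around run rather than its actual implementation:

    from inspect import getcallargs

    def run(user_id, force=False):
        """Stand-in for the task's original run method."""
        pass

    def patched_run(*args, **kwargs):
        """Stand-in for a generic wrapper installed around run."""
        return run(*args, **kwargs)

    # Bound against the real signature: {'user_id': 42, 'force': False}
    print(getcallargs(run, 42))

    # Bound against the wrapper's signature: {'args': (42,), 'kwargs': {}}
    print(getcallargs(patched_run, 42))

Feeding those two mappings into the key generator would produce different keys for the same call, which would match the behaviour described above.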

Would there be anything wrong with storing the key on the task instance like this:

    # Assumes the module-level imports celery-once already uses:
    # `from inspect import getcallargs`, `from celery import Task` and
    # `from celery_once.helpers import queue_once_key`.
    def get_key(self, args=None, kwargs=None):
        """
        Generate the key from the name of the task (e.g. 'tasks.example') and
        args/kwargs.
        """
        if not hasattr(self, '_key'):
            restrict_to = self.once.get('keys', None)
            args = args or {}
            kwargs = kwargs or {}
            call_args = getcallargs(
                getattr(self, '_orig_run', self.run), *args, **kwargs)
            # Remove the task instance from the kwargs. This only happens when
            # the task has the 'bind' attribute set to True. We remove it, as
            # the task's repr contains a memory pointer that will change
            # between the task caller and the celery worker.
            if isinstance(call_args.get('self'), Task):
                del call_args['self']
            self._key = queue_once_key(self.name, call_args, restrict_to)
        return self._key
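
For reference, the key that celery-once derives from the call args can be inspected directly with its helper; the exact string shown below is only indicative and may differ between versions:

    from celery_once.helpers import queue_once_key

    # Same helper the method above calls; restrict_to mirrors the once={'keys': [...]} option.
    key = queue_once_key('tasks.example', {'user_id': 42, 'force': False},
                         restrict_to=['user_id'])
    print(key)  # something like 'qo_tasks.example_user_id-42'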

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
cameronmaske commented, Oct 17, 2019

3.0.1 should resolve this issue. I’ll re-open if anyone else experiences this.

0 reactions
cameronmaske commented, Aug 23, 2019

Can you try 3.0.1 and see if that fixes the error? It changes how keys are generated.
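
One quick way to check this after upgrading (pip install "celery_once>=3.0.1") is to enqueue the same task twice and confirm the duplicate is rejected, which only works if the key computed at call time matches the one stored for the worker; this is a sketch only, and the tasks module here is hypothetical:

    from celery_once import AlreadyQueued
    from tasks import example  # hypothetical module defining a QueueOnce-based task

    example.apply_async(kwargs={'user_id': 42})
    try:
        example.apply_async(kwargs={'user_id': 42})
    except AlreadyQueued:
        print("duplicate rejected - keys generated on both sides match")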

Read more comments on GitHub

Top Results From Across the Web

Celery - Sentry Documentation
The Celery integration adds support for the Celery Task Queue System. ... Additionally, the Sentry Python SDK will set the transaction on...

Celery integration not capturing error with ... - GitHub
The celery integration is failing to capture the exception when I use a celery factory pattern which patches the celery task with Flask's ...

How to integrate Sentry for Django and Celery?
Sentry · Go to Sentry and sign up · Create a new project and select Django · pip install sentry-sdk to install the Sentry SDK...

Celery tasks uncaught exceptions not being sent to Sentry
I only see in Sentry the log.error(...) but not the IndexError uncaught exception. I've tried using a try-except block around the ...

Performance Monitoring for Celery | Sentry Documentation
Performance Monitoring is available for the Sentry Python SDK version ≥ 0.11.2. ... (The Flask integration, for example, will send a transaction for...
