
Django-q calls task twice or more

See original GitHub issue

My background task is called twice (or more), but I'm sure that should not be happening. My settings for Django Q:

Q_CLUSTER = {
    'name': 'cc',
    'recycle': 10,
    'retry': -1,
    'workers': 2,
    'save_limit': 0,
    'orm': 'default'
}

My test task function:

def task_test_function(email, user):
    print('test')

Calling it from the command line:

> python manage.py shell
>>> from django_q.tasks import async
>>> async('task_test_function', 'email', 'user')
'9a0ba6b8bcd94dc1bc129e3d6857b5ee'

Starting qcluster (after which I made the async call above):

> python manage.py qcluster
13:48:08 [Q] INFO Q Cluster-33552 starting.
...
13:48:08 [Q] INFO Q Cluster-33552 running.
13:48:34 [Q] INFO Process-1:2 processing [mobile-utah-august-indigo]
test
13:48:34 [Q] INFO Process-1:1 processing [mobile-utah-august-indigo]
test
13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
...

And the function is called twice. For most tasks I wouldn't really care if they ran twice (or more), but one of my tasks calls send_mail, and people who are invited receive two or more emails.
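As a defensive measure against duplicate delivery, the task itself can be made idempotent. This is a minimal, hypothetical sketch of my own, not part of django-q: the in-memory set here is purely for illustration, and a real multi-worker deployment would need a shared atomic store instead (for example Django's `cache.add`, which only succeeds for the first caller).

```python
# Hypothetical idempotency guard: skip the side effect if this
# invitation was already handled. An in-memory set is used purely
# for illustration; real workers run in separate processes and
# would need a shared store such as Django's cache.add.
_already_sent = set()

def send_invite_once(email):
    key = f"invite:{email}"
    if key in _already_sent:
        # Duplicate execution of the same task: do nothing.
        return False
    _already_sent.add(key)
    # ... the actual send_mail(...) call would go here ...
    return True

print(send_invite_once("a@example.com"))  # True: first run sends
print(send_invite_once("a@example.com"))  # False: duplicate skipped
```

This does not fix the duplicate scheduling itself, but it keeps a re-run task from repeating its side effects.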

Is this a bug in Django Q or in my logic?

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 2
  • Comments: 10 (2 by maintainers)

Top GitHub Comments

5 reactions
cyliang commented, Aug 14, 2016

I've been facing a similar problem: some of my long-running tasks are processed again and again, even after completing successfully.

I eventually resolved this by configuring a longer retry time.

According to http://django-q.readthedocs.io/en/latest/configure.html#retry, retry is

The number of seconds a broker will wait for a cluster to finish a task, before it’s presented again.

So I think that's why my tasks were pushed onto the queue again after 60 seconds (the default retry time) of unfinished execution, and processed repeatedly.
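cyliang's explanation can be sketched with a rough back-of-the-envelope model (my own illustration, not django-q code): if the broker re-presents an unacknowledged task every `retry` seconds, a task that runs for `duration` seconds gets handed out roughly this many times:

```python
import math

def times_presented(duration, retry):
    """Rough model: the broker hands the task out at t=0 and again
    every `retry` seconds until the first run finishes and
    acknowledges it. Ignores pickup latency and worker availability."""
    return math.floor(duration / retry) + 1

# A 90-second task with the default retry of 60 is handed out twice:
print(times_presented(90, 60))   # 2
# The same task with retry raised to 120 is handed out once:
print(times_presented(90, 120))  # 1
```

Under this model, any task whose runtime exceeds `retry` will be duplicated, which matches the behaviour reported above.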

I'm not sure if this helps in your case, but others may be confused by a similar problem.

Sorry for my poor English.

4 reactions
jordanmkoncz commented, Dec 7, 2017

I was having the same problem as you @Eagllus. I had a task which would run for about 180 seconds, so I had configured a timeout of 600 seconds to accommodate this task, but I still had my retry configured to 120 seconds (like it’s configured in the ORM example at http://django-q.readthedocs.io/en/latest/configure.html#orm). As a result, my task would get picked up by a worker, then 120 seconds later the same task would be picked up by a second worker, and then both workers would successfully process the task (in my case this means generating a report twice and sending it by email twice, which is obviously not what I want).

Configuring my retry to be longer than my timeout (I made it 60 seconds longer) resolved the issue for me too. I don’t feel like it’s obvious from the docs right now that django-q will behave the way it does if you have your retry configured to be shorter than your timeout and have a task which will take longer to process than your retry time. If someone hadn’t already read through the comments on this issue, I feel like it would be easy to misinterpret the docs for the retry setting.

I’m struggling to imagine a scenario where you’d actually want to have retry configured to be shorter than your timeout and get the duplicate task processing behaviour that occurs as a result. I assume there is a scenario where this would make sense, but it seems like for the majority of use cases you’d want to make sure your retry is longer than your timeout so that you can avoid this duplicate task processing behaviour. Maybe the docs for the retry setting could be updated to explain all of this, i.e. explain how django-q will behave when retry is shorter or longer than timeout, and recommend configuring it to be longer for most use cases.
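jordanmkoncz's recommendation, retry comfortably longer than timeout, can be captured directly in the settings module with a sanity check. The values below are illustrative, and the assertion is my own addition rather than a django-q feature:

```python
TASK_TIMEOUT = 600              # kill a task after 10 minutes
TASK_RETRY = TASK_TIMEOUT + 60  # only re-present well after the timeout

Q_CLUSTER = {
    'name': 'cc',
    'workers': 2,
    'timeout': TASK_TIMEOUT,
    'retry': TASK_RETRY,
    'save_limit': 0,
    'orm': 'default',
}

# Fail fast at import time if the two values ever drift apart.
assert Q_CLUSTER['retry'] > Q_CLUSTER['timeout'], \
    "retry must exceed timeout to avoid duplicate task runs"
```

Deriving `retry` from `timeout` means future changes to the timeout can't silently reintroduce the duplicate-processing behaviour.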
