Django-q calls task twice or more
My background process is called twice (or more), but I'm fairly sure that should not be happening. My settings for Django Q:
Q_CLUSTER = {
'name': 'cc',
'recycle': 10,
'retry': -1,
'workers': 2,
'save_limit': 0,
'orm': 'default'
}
My test task function:
def task_test_function(email, user):
print('test')
calling it from the commandline:
> python manage.py shell
>>> from django_q.tasks import async
>>> async('task_test_function', 'email', 'user')
'9a0ba6b8bcd94dc1bc129e3d6857b5ee'
Starting qcluster (after that I called the async)
> python manage.py qcluster
13:48:08 [Q] INFO Q Cluster-33552 starting.
...
13:48:08 [Q] INFO Q Cluster-33552 running.
13:48:34 [Q] INFO Process-1:2 processing [mobile-utah-august-indigo]
test
13:48:34 [Q] INFO Process-1:1 processing [mobile-utah-august-indigo]
test
13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
13:48:34 [Q] INFO Processed [mobile-utah-august-indigo]
...
And the function is called twice… For most functions I wouldn't really care if they ran twice (or more), but I have a task that calls send_mail, and people who are invited receive two or more mails…
Is this a bug in Django Q or in my logic?
Issue Analytics
- Created: 7 years ago
- Reactions: 2
- Comments: 10 (2 by maintainers)

I've been facing a similar problem: some of my long-running tasks were being processed again and again even after completing successfully.

I eventually resolved this by configuring a longer retry time. According to http://django-q.readthedocs.io/en/latest/configure.html#retry, retry controls how many seconds the broker waits before presenting an unfinished task to the cluster again. So I think that's why my tasks were pushed into the queue again after 60 seconds (the default retry time) of unfinished execution and processed repeatedly.

I'm not sure if this helps you, but there might be someone like me who was confused by a similar problem.
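The failure mode described above can be sketched as a back-of-the-envelope model (a simplification for illustration, not django-q's actual broker code): an unacknowledged task is re-presented every retry seconds until it finishes, so a task that runs longer than retry gets delivered more than once.

```python
import math

def expected_deliveries(task_duration: float, retry: float) -> int:
    """Simplified model: the broker re-presents an unacknowledged task
    every `retry` seconds until the running copy finishes."""
    return math.floor(task_duration / retry) + 1

# A 90-second task with the default retry of 60 seconds is delivered twice:
assert expected_deliveries(90, 60) == 2
# Raising retry above the task duration yields a single delivery:
assert expected_deliveries(90, 120) == 1
```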
I was having the same problem as you @Eagllus. I had a task which would run for about 180 seconds, so I had configured a timeout of 600 seconds to accommodate it, but I still had my retry configured to 120 seconds (like it's configured in the ORM example at http://django-q.readthedocs.io/en/latest/configure.html#orm). As a result, my task would get picked up by a worker, then 120 seconds later the same task would be picked up by a second worker, and then both workers would successfully process the task (in my case this meant generating a report twice and sending it by email twice, which is obviously not what I want).

Configuring my retry to be longer than my timeout (I made it 60 seconds longer) resolved the issue for me too. I don't feel like it's obvious from the docs right now that django-q will behave this way if your retry is configured to be shorter than your timeout and you have a task that takes longer to process than your retry time. If someone hadn't already read through the comments on this issue, I feel it would be easy to misinterpret the docs for the retry setting.

I'm struggling to imagine a scenario where you'd actually want retry to be shorter than timeout and get the duplicate task processing that occurs as a result. I assume there is a scenario where this would make sense, but for the majority of use cases you'd want to make sure your retry is longer than your timeout so you can avoid duplicate task processing. Maybe the docs for the retry setting could be updated to explain all of this, i.e. explain how django-q behaves when retry is shorter or longer than timeout, and recommend configuring it to be longer for most use cases.
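A configuration following this advice might look like the sketch below. The timeout and retry values are illustrative assumptions, not taken from the original issue; the point is only the relationship between them: retry exceeds timeout, so a stuck task is killed by its worker before the broker can hand a duplicate to another worker.

```python
# Sketch of a Q_CLUSTER configuration where retry exceeds timeout.
# The specific values here are illustrative, not from the issue.
Q_CLUSTER = {
    'name': 'cc',
    'recycle': 10,
    'workers': 2,
    'save_limit': 0,
    'orm': 'default',
    'timeout': 600,  # a worker aborts the task after 600 seconds
    'retry': 660,    # broker re-presents an unacknowledged task only after 660 seconds
}

# The guard that avoids duplicate processing: retry > timeout.
assert Q_CLUSTER['retry'] > Q_CLUSTER['timeout']
```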