
Queued tasks (OrmQ) are not always acknowledged

See original GitHub issue

Here’s our current config; we are using Django 2.2.16:

VERSION: 1.3.4
ACK_FAILURES: True
BULK: 1
CACHE: default
CACHED: False
CATCH_UP: True
COMPRESSED: False
CPU_AFFINITY: 0
DAEMONIZE_WORKERS: True
DISQUE_FASTACK: False
GUARD_CYCLE: 0.5
LABEL: Django Q
LOG_LEVEL: INFO
MAX_ATTEMPTS: 0
ORM: default
POLL: 0.2
PREFIX: DjangORM
QSIZE: True
QUEUE_LIMIT: 50
Q_STAT: django_q:DjangORM:cluster
RECYCLE: 500
REDIS: {}
RETRY: 2147483647
SAVE_LIMIT: 10000
SCHEDULER: True
SYNC: False
TESTING: False
TIMEOUT: 3300
WORKERS: 4
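
For reference, a configuration like the listing above normally comes from the Q_CLUSTER dict in settings.py. The sketch below is only an approximation of how those values map onto Django-Q's documented keys (the values mirror the qinfo listing, where PREFIX corresponds to the cluster name); it is not the reporter's actual settings file.

    # settings.py -- illustrative only; key names follow the Django-Q docs,
    # values mirror the qinfo listing above.
    Q_CLUSTER = {
        "name": "DjangORM",        # shown as PREFIX in the qinfo output
        "workers": 4,
        "recycle": 500,
        "timeout": 3300,
        "retry": 2147483647,
        "queue_limit": 50,
        "save_limit": 10000,
        "ack_failures": True,
        "catch_up": True,
        "daemonize_workers": True,
        "bulk": 1,
        "poll": 0.2,
        "label": "Django Q",
        "orm": "default",          # use the Django ORM (OrmQ) as the broker
    }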

We have 3 clusters and 12 workers in total. (Screenshots of the qinfo and qmonitor output were attached to the original issue.)

Sometimes queued tasks are not acknowledged and the corresponding Task instance (from the OrmQ payload) does not exist. This seems to happen mostly with tasks that have a long execution time (200-400 seconds). TIMEOUT should be large enough, and we get no errors from the workers (we use Sentry for error reporting). Any ideas?

Oh, and even though SAVE_LIMIT is set to 10000, the limit doesn’t always hold, i.e. it seems to sometimes ignore this part:

        if task["success"] and 0 < Conf.SAVE_LIMIT <= Success.objects.count():
            Success.objects.last().delete()

As you can see from qinfo, at the moment there are 10485 successful tasks in the database.
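
For the overshoot described above, a hypothetical one-off cleanup might look like the sketch below. It assumes django_q's Success proxy model and its "stopped" timestamp field, and it is not code from the project or from the eventual fix; run something like it in a Django shell only if you need to trim the table by hand.

    # Hypothetical cleanup sketch, assuming django_q's Success proxy model
    # and its "stopped" timestamp field; set LIMIT to match SAVE_LIMIT.
    from django_q.models import Success

    LIMIT = 10000
    excess = Success.objects.count() - LIMIT
    if excess > 0:
        # collect the ids of the oldest rows beyond the limit, then delete them
        old_ids = list(
            Success.objects.order_by("stopped").values_list("id", flat=True)[:excess]
        )
        Success.objects.filter(id__in=old_ids).delete()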

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
Koed00 commented, May 8, 2021

@kennyhei I already merged the limit fix, but I need a bit more time to review the qmemory PR. Thanks for the work; I’m sure you helped out a bunch of other people.

0 reactions
kennyhei commented, May 8, 2021

We haven’t encountered any problems after lowering the recycle setting so that memory is released more frequently. Going to close this issue.
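
For anyone landing here with the same symptom: lowering recycle means each worker process is restarted after fewer processed tasks, so its memory is released sooner. A minimal, assumed example of the change (the exact value is not stated in the thread):

    # Assumed example only -- lower "recycle" so each worker is restarted
    # (and its memory released) after fewer processed tasks.
    Q_CLUSTER = {
        "name": "DjangORM",
        "orm": "default",
        "workers": 4,
        "recycle": 100,   # e.g. down from the 500 shown in the original config
    }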
