queue length for SQLiteQueue is incorrect when running in multiple processes
Possibly expected behavior, but I think it's worth reporting, because the queue looks usable otherwise.
The queue size is set only once, on queue creation (self.total = self._count()); after that, each process only adjusts its own in-memory counter on put and get. So if we have a producer in 1 process and a consumer in another process, the consumer never sees the producer's puts, and we end up with size in the negatives.
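A simplified model of that pattern (a hypothetical CachedCounterQueue written for illustration; this is not persist-queue's actual code) shows how the counter drifts:

import sqlite3

class CachedCounterQueue:
    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute('CREATE TABLE IF NOT EXISTS q (data TEXT)')
        # the counter is read from the DB exactly once, at creation
        self.total = self._count()

    def _count(self):
        return self.conn.execute('SELECT COUNT(*) FROM q').fetchone()[0]

    def put(self, item):
        self.conn.execute('INSERT INTO q (data) VALUES (?)', (item,))
        self.conn.commit()
        self.total += 1  # only this process's in-memory copy moves

    def get(self):
        row = self.conn.execute('SELECT rowid, data FROM q LIMIT 1').fetchone()
        if row is None:
            raise IndexError('queue is empty')
        self.conn.execute('DELETE FROM q WHERE rowid = ?', (row[0],))
        self.conn.commit()
        self.total -= 1  # goes below zero once we drain items another process put
        return row[1]

    def qsize(self):
        return self.total  # never re-reads the DB

A consumer process starts with total equal to whatever was in the DB at that moment, then decrements it for every item it drains, including items produced by the other process after that snapshot, so total goes negative.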
To reproduce, we need a producer and a consumer that's faster than the producer.
# producer process
import time
import persistqueue as Q

q = Q.SQLiteQueue('queue', multithreading=True)
while True:
    q.put('hi')
    time.sleep(0.01)

# consumer process
import persistqueue as Q

q = Q.SQLiteQueue('queue', auto_commit=False, multithreading=True)
while True:
    try:
        print(q.qsize())  # drifts into the negatives over time
        q.get(block=False)
        q.task_done()
    except Q.Empty:
        pass
Calling q._count() returns the correct size, because it hits the DB, of course.
Yes, this seems to be a bug. So instead of q.qsize(), use q._count() as the workaround.
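A quick sketch of that workaround in use (assuming _count() keeps its current behavior of issuing a COUNT query against the backing database; note it is a private method and could change between releases):

import persistqueue as Q

q = Q.SQLiteQueue('queue', multithreading=True)
q.put('hi')
print(q.qsize())   # per-process in-memory counter; can drift across processes
print(q._count())  # queries SQLite directly, correct across processes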
Recent updates have added max(0, count) to clamp a negative qsize(). That doesn't change the underlying issue, but it prevents impossible size results. On the Ack Queues, a new active_size() was added which includes the nack cache. It may be anecdotal, but I believe this has produced a more accurate return in my multi-threaded environment, as it is calculated when an item is put/ack/ack_failed rather than on put/get/nack. But that's more of a decision about when you think the queue size should be decremented: on get, or on completion.
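To see the difference (a sketch assuming a persist-queue release that ships SQLiteAckQueue.active_size(); exact counts may vary by version):

import persistqueue as Q

q = Q.SQLiteAckQueue('ack-queue', multithreading=True)
q.put('hi')
item = q.get()          # retrieved, but not yet acknowledged
print(q.qsize())        # 0: decremented on get
print(q.active_size())  # 1: the item still sits in the nack cache
q.ack(item)             # mark the item as processed
print(q.active_size())  # 0: decremented on ack, not on get

Whether qsize() or active_size() is "the" size of the queue is exactly the decrement-on-get versus decrement-on-completion choice described above.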