
Kombu 4.1.0 - Memory usage increase (leak?) on a worker when using kombu queues


Hi,

I have implemented a worker using Kombu’s SimpleQueue; the implementation is given below. When I run this worker for a few hours on an Ubuntu 16.04 system with Redis as the backend, I notice a gradual memory build-up in the process. When I run it for over a day, it ends up consuming all the memory on the system, and the system becomes unusable until the worker is killed.

On the Redis server, I have the timeout configured to 5 seconds and tcp-keepalive to 60 seconds.
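
For reference, the matching redis.conf directives would look roughly like this (a sketch assuming the stock Redis configuration file; the values are the ones described above):

timeout 5          # close idle client connections after 5 seconds
tcp-keepalive 60   # send TCP keepalive probes every 60 seconds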

Worker Code:

from kombu import Connection

myqueue_name = 'test_queue'
backendURL = 'redis://127.0.0.1:6379/'

def GetConnection():
    conn = Connection(backendURL)

    return conn

def dequeue():
    conn = GetConnection()
    with conn:
        myqueue = conn.SimpleQueue(myqueue_name)

        item = None

        try:
            qItem = myqueue.get(block=True, timeout=2)
            item = qItem.payload
            qItem.ack()
        except Exception as e:
            qItem = None

        myqueue.close()

    conn.close()
    conn.release()

    return item

if __name__ == '__main__':
    try:
        i = 1
        while True:
            print 'Iteration %s: %s' % (i, dequeue())
            i = i + 1
    except (KeyboardInterrupt, SystemExit):
        print 'Terminating'

Here’s a plot of free memory on the system:

[image: plot of free memory on the system over time]

What is going wrong here? Did I miss anything in the implementation?

Any help here will be greatly appreciated.
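
Not offered as a confirmed fix for this report, but one variable worth isolating is the per-call churn: dequeue() above creates a new Connection and SimpleQueue on every iteration. A minimal variant that reuses a single connection and queue across iterations (a sketch restructuring the script above, not the reporter’s original code) would be:

from kombu import Connection

myqueue_name = 'test_queue'
backendURL = 'redis://127.0.0.1:6379/'

def run():
    # Create the connection and queue once, instead of once per message.
    with Connection(backendURL) as conn:
        myqueue = conn.SimpleQueue(myqueue_name)
        i = 1
        try:
            while True:
                try:
                    qItem = myqueue.get(block=True, timeout=2)
                    print('Iteration %s: %s' % (i, qItem.payload))
                    qItem.ack()
                except myqueue.Empty:
                    # No message arrived within the 2-second timeout.
                    print('Iteration %s: None' % i)
                i += 1
        except (KeyboardInterrupt, SystemExit):
            print('Terminating')
        finally:
            myqueue.close()

if __name__ == '__main__':
    run()

If memory stays flat with this version, the growth is tied to repeatedly setting up and tearing down connections and queue declarations rather than to message handling itself.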

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 3
  • Comments: 13 (12 by maintainers)

Top GitHub Comments

2 reactions
pawl commented, Dec 22, 2021

@auvipy Nice find, it seems like this could definitely be related to: https://github.com/celery/celery/issues/4843#issuecomment-999168967
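
For anyone trying to narrow down where the memory accumulates (a generic sketch using Python’s built-in tracemalloc, not taken from the linked Celery issue), diffing heap snapshots between batches of iterations can point at the offending allocation sites:

import tracemalloc

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

for i in range(1, 10001):
    dequeue()  # the worker function from the report above
    if i % 1000 == 0:
        snapshot = tracemalloc.take_snapshot()
        # Print the source lines whose allocations grew the most since the baseline.
        for stat in snapshot.compare_to(baseline, 'lineno')[:5]:
            print(stat)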

2 reactions
sradhakrishna commented, Apr 9, 2018

I’ve tried the same with RabbitMQ as the backend and saw the same behavior in that scenario too. It seems the issue might not be in the backend-specific transport implementation.
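
Reproducing that swap should only require changing the broker URL in the script above, for example (placeholder credentials and host):

# Same worker, pointed at a RabbitMQ broker instead of Redis
backendURL = 'amqp://guest:guest@127.0.0.1:5672//'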
