
Using UvicornWorkers in Gunicorn causes OOM on K8s

See original GitHub issue

Checklist

  • The bug is reproducible against the latest release and/or master.
  • There are no similar issues or pull requests to fix it yet.

Describe the bug

I’m developing a FastAPI application deployed on a Kubernetes cluster, using gunicorn as the process manager.

I’m also using UvicornWorkers, because of the async nature of FastAPI.

After deploying the application I can see memory growing even at rest, until the process is OOM-killed.

This happens only when I use UvicornWorker.

Tests I made:

  • Commented out all my application code to rule out a leak in my own code (leak still present);
  • Started the application with uvicorn directly instead of gunicorn (no leak);
  • Started the application with gunicorn sync workers (no leak);
  • Started the application with gunicorn + UvicornWorker (leak present);
  • Started the application with gunicorn + UvicornWorker + max_requests (leak present);

Also, this happens only on the Kubernetes cluster: when I run the application locally (MacBook Pro 16, using the same Docker image as on k8s), the leak is not present.
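For context, the test variants above correspond to invocations along these lines (a sketch only; the module path `main:app`, worker count, and max-requests value are placeholders, not taken from the original report):

```shell
# Plain uvicorn (no leak reported)
uvicorn main:app --host 0.0.0.0 --port 8000

# gunicorn with default sync workers (no leak reported)
gunicorn main:app --workers 4 --bind 0.0.0.0:8000

# gunicorn + UvicornWorker (leak reported)
gunicorn main:app --workers 4 \
    --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

# gunicorn + UvicornWorker + worker recycling (leak still reported)
gunicorn main:app --workers 4 \
    --worker-class uvicorn.workers.UvicornWorker \
    --max-requests 1000 --bind 0.0.0.0:8000
```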

Anyone else had a similar problem?

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 8
  • Comments: 41 (20 by maintainers)

Top GitHub Comments

6 reactions
evalkaz commented, Nov 29, 2021

EDIT: it seems that I cannot reproduce the numbers.

Hey, I encountered the same issue. My code snippet to reproduce it:
async def app(scope, receive, send):
    assert scope['type'] == 'http'

    # Allocate (and immediately discard) a large list on every request
    data = [0] * 10_000_000

    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [
            [b'content-type', b'text/plain'],
        ],
    })
    await send({
        'type': 'http.response.body',
        'body': b'Hello, world!',
    })

Launched using uvicorn main:app --host 0.0.0.0 --port 8000. After sending ~10k requests to the API, memory usage goes from ~18 MB to 39 MB. The same thing happens with starlette and apidaora (launched with uvicorn), and their memory usage patterns look quite similar. I also tested flask with gunicorn, and sanic, and their memory usage stayed roughly the same. I also checked PR#1244, but the issue persists.

Library          ASGI       memory (start)  memory (end)  requests
uvicorn          uvicorn    18 MB           38 MB         11k
starlette        uvicorn    19 MB           39 MB         10k
apidaora         uvicorn    18 MB           39 MB         12k
uvicorn PR#1244  uvicorn    18 MB           39 MB         10k
quart            hypercorn  26 MB           63 MB         10k
apidaora         hypercorn  16 MB           42 MB         10k
starlette        daphne     34 MB           46 MB         10k
apidaora         daphne     32 MB           45 MB         10k
flask            gunicorn   35 MB           35 MB         10k
sanic            sanic      17 MB           22 MB         10k
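Per-process numbers like those in the table can be sampled without extra dependencies by reading VmRSS from /proc (a sketch, Linux-only; which worker PID to watch is up to you, e.g. taken from `ps` or gunicorn's startup logs — it is not part of the original report):

```python
# Sketch: read a process's resident set size (VmRSS) from /proc on Linux.

def rss_mb(pid: int) -> float:
    """Return the resident set size of `pid` in MB, read from /proc."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # /proc reports kB
    raise RuntimeError(f"VmRSS not found for pid {pid}")

if __name__ == "__main__":
    import os
    # Demo on the current process; point this at a worker PID in practice.
    print(f"this process: {rss_mb(os.getpid()):.1f} MB")
```

Sampling this before and after a load run (e.g. the ~10k requests above) reproduces the start/end columns in the table.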
5 reactions
Kludex commented, Jan 28, 2022

For the record: the issue was solved by uvicorn 0.17.1.
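If you are hitting this, the practical fix is to make sure the deployed image picks up the patched release (the version floor comes from the comment above; the exact install step will depend on how your image is built):

```shell
# Require a uvicorn release containing the fix, then confirm what's installed
pip install "uvicorn>=0.17.1"
python -c "import uvicorn; print(uvicorn.__version__)"
```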

Read more comments on GitHub >

Top Results From Across the Web

Gunicorn worker terminated with signal 9 - Stack Overflow
I came across this faq, which says that "A common cause of SIGKILL is when OOM killer terminates a process due to low...

Server Workers - Gunicorn with Uvicorn - FastAPI
You can use Gunicorn (or also Uvicorn) as a process manager with Uvicorn workers to take advantage of multi-core CPUs, to run multiple...

Signal Handling — Gunicorn 20.1.0 documentation
A brief description of the signals handled by Gunicorn. We also document the signals used internally by Gunicorn to communicate with the workers...

We have to talk about this Python, Gunicorn, Gevent thing
The problem that's described here - "green" threads being CPU bound for too long and causing other requests to time out is one...

How to Deploy Python WSGI Apps Using Gunicorn HTTP ...
Gunicorn is a stand-alone WSGI web application server which offers a lot of ... Comes with various worker types and configurations.
