Channels 3 Memory Leak / Overconsumption
Every Django Channels project, including the Channels Tutorial project, has a memory leak. Specifically, the MEM % of the Docker container keeps rising; this assumes you are running the project on Daphne. To reproduce, I Dockerized the Tutorial project in the repo below, and the memory consumption issue is still present.
Steps to reproduce (the consumer these steps exercise is sketched after the commands):
git clone https://github.com/alamorre/channels_3_leak.git
cd channels_3_leak
docker-compose build
docker-compose up -d
docker stats channels_3_leak_web_1
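For context, each open chat tab drives roughly the following consumer from the Channels tutorial (paraphrased from memory here; the exact file in the repro repo may differ slightly):

# chat/consumers.py -- roughly the tutorial's consumer, shown for orientation only
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.room_name = self.scope["url_route"]["kwargs"]["room_name"]
        self.room_group_name = f"chat_{self.room_name}"
        # Each tab joins the room group on the channel layer and accepts the socket
        await self.channel_layer.group_add(self.room_group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        # Leaving the group on disconnect is what should free per-connection state
        await self.channel_layer.group_discard(self.room_group_name, self.channel_name)

    async def receive(self, text_data):
        message = json.loads(text_data)["message"]
        await self.channel_layer.group_send(
            self.room_group_name, {"type": "chat_message", "message": message}
        )

    async def chat_message(self, event):
        await self.send(text_data=json.dumps({"message": event["message"]}))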
Stress test with the following commands (a browser-free alternative follows them):
brew install chrome-cli
for i in {1..100}; do chrome-cli open http://127.0.0.1:8000/chat/lobby/; done
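If you don't have Chrome handy, roughly the same load can be generated with a short Python script. This is a sketch, assuming the tutorial's ws/chat/<room_name>/ routing and the third-party websockets package (pip install websockets):

# stress.py -- open 100 WebSocket connections to the lobby, hold them, then close
import asyncio
import websockets

URL = "ws://127.0.0.1:8000/ws/chat/lobby/"  # tutorial routing; adjust if the repro repo differs

async def open_and_hold(hold_seconds=10):
    async with websockets.connect(URL) as ws:
        await asyncio.sleep(hold_seconds)  # keep the connection open, like an idle tab

async def main():
    # 100 concurrent connections, roughly mirroring the 100 browser tabs
    await asyncio.gather(*(open_and_hold() for _ in range(100)))

asyncio.run(main())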
The MEM % will keep going up! For me it starts at around 2.5% and climbs to about 4% with lots of tabs open, and it never goes back down once the tabs are closed.
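To make the growth easier to see than eyeballing docker stats, something like the following (a hypothetical helper, not part of the repro repo) can log the container's memory percentage over time:

# mem_log.py -- poll docker stats once a minute and print a timestamped MEM %
import subprocess
import time

CONTAINER = "channels_3_leak_web_1"  # name used with docker stats above

while True:
    mem = subprocess.check_output(
        ["docker", "stats", "--no-stream", "--format", "{{.MemPerc}}", CONTAINER],
        text=True,
    ).strip()
    print(time.strftime("%H:%M:%S"), mem)
    time.sleep(60)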
Again, this project precisely follows the Tutorial and Deploying sections of the docs, and the issue also seems to be a popular topic of discussion among users.
From the comments: once these deps were upgraded, the memory leak went away!
In my scenario, memory usage starts at around 260 MB for 3 Hypercorn workers and one miscellaneous subprocess (Huey). Over time, it inflates to ~600 MB and sits there (in a range of ±30 MB).
Seeing how the usage plateaus rather than growing without bound, this is probably Python doing some in-memory caching.
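One way to check whether the plateau is really caching rather than a leak (my suggestion, not from the thread) is to compare tracemalloc snapshots taken inside the running process:

# Sketch: take two tracemalloc snapshots some time apart and diff them.
# If Python-level allocations are roughly flat between snapshots, the extra RSS
# is more likely caching or allocator behaviour than an unbounded leak.
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()
# ... let the workers serve traffic for a while ...
after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:10]:
    print(stat)  # biggest allocation growth sites, by source line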