Memory leak if writing to a channel that is never read from later
It appears that if you write a message to a channel, for example via `group_send`, and no reader ever appears on that channel, the messages will remain indefinitely in the in-memory queue `channels.layers.channel_layers.backends['default'].receive_buffer` when using the `RedisChannelLayer` backend. In particular, I have captured a server that has over 100k items in that dictionary.
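For anyone who wants to check their own server, a quick diagnostic along these lines shows the buffer growing (the exact attribute layout is version-dependent, so treat this as a sketch):

```python
# Rough diagnostic sketch; the attribute path follows the one quoted
# above and may differ across channels / channels_redis versions.
from channels.layers import channel_layers

# backends is populated lazily, so a layer must have been used first.
layer = channel_layers.backends["default"]  # RedisChannelLayer

# receive_buffer maps channel names to in-memory queues of messages
# that were delivered over Redis but never picked up by a reader.
print("channels buffered:", len(layer.receive_buffer))
print("total queued messages:",
      sum(q.qsize() for q in layer.receive_buffer.values()))
```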
One way to avoid this problem would be to extend the `group_send` API with a time-to-live parameter, so that messages that are never read expire over time. Thoughts?
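To illustrate, the call site might look something like this (the `ttl` parameter is the proposed addition, not something that exists in the current API):

```python
# Sketch of the proposed extension; `ttl` is hypothetical and is not
# part of channels' actual group_send signature.
from channels.layers import get_channel_layer

async def broadcast(event):
    layer = get_channel_layer()
    # Today: the message sits in receive_buffer forever if unread.
    await layer.group_send("notifications", event)
    # Proposed: expire the message if no reader consumes it in time.
    # await layer.group_send("notifications", event, ttl=60)
```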
My `pip freeze`, in case it’s useful:
channels==2.1.2
channels-redis==2.2.1
Created 4 years ago · 18 comments (14 by maintainers)
`channels.consumer.AsyncConsumer.__call__`: a `channel` object is an async iterator: `__anext__()` returns a message, and `__aclose__()` stops the world. I think it would be easier to write against this API. I don’t have time to actually submit and test a pull request, though 😃.
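As a rough sketch of the shape I mean (all of these names are hypothetical, not existing channels classes):

```python
# Hypothetical sketch of the async-iterator channel API suggested
# above; none of these names exist in channels today.
import asyncio

class Channel:
    def __init__(self):
        self._queue = asyncio.Queue()
        self._closed = False

    def __aiter__(self):
        return self

    async def __anext__(self):
        # Return the next message; once the channel is closed,
        # iteration stops.
        if self._closed:
            raise StopAsyncIteration
        return await self._queue.get()

    async def __aclose__(self):
        # "Stops the world": mark the channel closed so consumers fall
        # out of their async-for loops. A real implementation would
        # also wake any reader blocked in get() and drop the buffer.
        self._closed = True

async def consumer(channel):
    # Writing a consumer against this API is just an async for loop.
    async for message in channel:
        print("received:", message)
```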
@davidfstr @carltongibson
A version of this bug that I’ve also seen: you don’t only hit this when the channel is never read from later; you can also hit it when the channel is read from too slowly. In the default `channels_redis` implementation, the per-channel `asyncio.Queue` objects grow without bound; if they aren’t read at the same rate as messages are inserted on the other end, Daphne will just keep growing in memory consumption forever.

I’d argue that these per-channel `Queue` objects should probably be bounded in size. There’s already a `capacity` argument; maybe the per-channel buffer should respect that, and only buffer up to that many objects before dropping old ones? https://github.com/django/channels_redis#capacity
I do think a goal of passive cleanup makes sense, but I think a reasonable upper bound on queue size would likely prevent many people from getting into bad situations in the first place.
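For reference, a bounded buffer that drops the oldest message at capacity is only a few lines. This is a standalone sketch of the idea, not channels_redis code; `BoundedBuffer` and the drop-oldest policy are assumptions:

```python
# Standalone sketch of the bounded per-channel buffer idea.
import asyncio
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity=100):
        # deque(maxlen=...) silently evicts the oldest item when a new
        # one is appended at capacity, so the buffer never grows past
        # `capacity` messages.
        self._messages = deque(maxlen=capacity)
        self._readable = asyncio.Event()

    def put_nowait(self, message):
        self._messages.append(message)  # may drop the oldest message
        self._readable.set()

    async def get(self):
        # Wait until at least one message is available, then pop the
        # oldest remaining one.
        while not self._messages:
            self._readable.clear()
            await self._readable.wait()
        return self._messages.popleft()
```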