
Memory leak if writing to a channel that is never read from later


It appears that if you write a message to a channel, for example via group_send, and no reader ever appears on that channel, the messages will remain in the in-memory queue channels.layers.channel_layers.backends['default'].receive_buffer indefinitely when using the RedisChannelLayer backend. In particular, I have captured a server that has over 100k items in that dictionary.

One way to avoid this problem would be to extend the API for group_send with a time-to-live parameter so that messages would expire over time if they weren’t read. Thoughts?
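As a rough illustration of the TTL idea (all names here are hypothetical and not part of the actual channels API), a per-channel buffer whose entries expire lazily on read might look like this:

```python
# Hypothetical sketch: attach a time-to-live to each buffered message so
# that unread entries expire instead of accumulating forever. Expired
# entries are purged lazily whenever the buffer is read.
import time
from collections import deque

class ExpiringBuffer:
    """A per-channel buffer whose entries expire after `ttl` seconds."""

    def __init__(self, ttl=60):
        self.ttl = ttl
        self._queue = deque()  # (expires_at, message) pairs, oldest first

    def put(self, message):
        self._queue.append((time.monotonic() + self.ttl, message))

    def get(self):
        """Return the oldest unexpired message, or None if nothing is left."""
        now = time.monotonic()
        while self._queue:
            expires_at, message = self._queue.popleft()
            if expires_at > now:
                return message
            # else: message expired while unread; drop it and keep looking
        return None

    def __len__(self):
        # Note: may overcount, since expired entries are only purged on get().
        return len(self._queue)
```

A background sweep (or a purge on write) would be needed to reclaim memory for channels that are never read at all; the lazy purge above only bounds growth for channels that are eventually consumed.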

My pip freeze, in case it’s useful:

channels==2.1.2
channels-redis==2.2.1

Issue Analytics

  • State: open
  • Created 4 years ago
  • Comments:18 (14 by maintainers)

Top GitHub Comments

adamhooper commented on Aug 13, 2020 (1 reaction)

…clients poll for messages instead of subscribing to messages.

Really happy to look at sketches of a reworking there.

channels.consumer.AsyncConsumer.__call__:

    async def __call__(self, receive, send):
        """
        Dispatches incoming messages to type-based handlers asynchronously.
        """
        async with contextlib.AsyncExitStack() as stack:
            # Initialize channel layer
            self.channel_layer = get_channel_layer(self.channel_layer_alias)
            if self.channel_layer is not None:
                channel = await stack.enter_async_context(self.channel_layer.new_channel_v2())
                self.channel_name = channel.name
            # Store send function
            if self._sync:
                self.base_send = async_to_sync(send)
            else:
                self.base_send = send
            # Pass messages in from channel layer or client to dispatch method
            try:
                if self.channel_layer is not None:
                    await await_many_dispatch(
                        [receive, channel], self.dispatch
                    )
                else:
                    await await_many_dispatch([receive], self.dispatch)
            except StopConsumer:
                # Exit cleanly
                pass

A channel object is an async iterator: __anext__() returns a message, and aclose() stops the world.

I think it would be easier to write against this API. I don’t have time to actually submit and test a pull request, though 😃.
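As a hedged sketch of that idea (the Channel class and the new_channel_v2() API named above do not exist in channels today; everything here is illustrative), such a channel object could be both an async context manager and an async iterator:

```python
# Illustrative only: a minimal channel object of the kind new_channel_v2()
# could return. Closing the channel unblocks any pending read and lets the
# layer discard the channel's buffer instead of leaking it.
import asyncio

class Channel:
    def __init__(self, name):
        self.name = name
        self._queue = asyncio.Queue()

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await self.aclose()

    def __aiter__(self):
        return self

    async def __anext__(self):
        message = await self._queue.get()
        if message is None:  # sentinel enqueued by aclose()
            raise StopAsyncIteration
        return message

    async def aclose(self):
        # Wake any pending __anext__() and mark the channel closed.
        await self._queue.put(None)
```

Because AsyncExitStack calls aclose() on consumer exit, the layer gets a deterministic point at which to drop the channel's buffer, which is exactly what the leaking receive_buffer lacks.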

ryanpetrello commented on Aug 12, 2020 (1 reaction)

@davidfstr @carltongibson

A version of this bug that I’ve also seen: you don’t just hit this if the channel is never read from later. You can also see it if the channel is read from too slowly. In the default channels_redis implementation, per-channel asyncio.Queue objects grow in an unbounded way; if they’re not read at the same rate as insertion on the other end, Daphne will just keep growing in memory consumption forever.

I’d argue that these per-channel Queue objects should probably be bounded in size. There’s already a capacity argument; maybe the per-channel buffer should respect it, and only buffer up to that many messages before dropping old ones?

https://github.com/django/channels_redis#capacity

I do think a goal of passive cleanup makes sense, but I think a reasonable upper bound on queue size would likely prevent many people from getting into bad situations in the first place.
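As a rough sketch of that suggestion (illustrative only, not channels_redis’s actual implementation), a capacity-bounded per-channel buffer that drops the oldest message on overflow can be as simple as a deque with maxlen:

```python
# Sketch of the bounded per-channel buffer proposed above: reuse the
# layer's `capacity` setting and silently evict the oldest message when
# the buffer is full, so an unread channel can never grow without bound.
from collections import deque

class BoundedChannelBuffer:
    def __init__(self, capacity=100):
        # deque with maxlen drops the oldest entry when a new one overflows
        self._queue = deque(maxlen=capacity)

    def put(self, message):
        self._queue.append(message)

    def get(self):
        """Return the oldest buffered message, or None if empty."""
        return self._queue.popleft() if self._queue else None

    def __len__(self):
        return len(self._queue)
```

The trade-off is silent message loss for slow readers, but for at-most-once channel semantics that is arguably better than unbounded memory growth.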
