Garbage collector cleans up pool during creation in ASGI server
I’ve been tracking down an exception in my FastAPI backend that occurred roughly once every 20 startups. This led me down a huge rabbit hole of debugging that ended up uncovering the following error when using aioredis with an ASGI server:
example.py:

```python
import aioredis

pool = None

async def app(scope, receive, send):
    global pool
    if scope['type'] == 'lifespan':
        message = await receive()  # On startup
        pool = await aioredis.create_redis_pool('redis://localhost:6379')
        await send({"type": "lifespan.startup.complete"})
        message = await receive()  # Wait until shutdown
    else:
        await pool.ping()  # (Use pool during requests)
```
When running this with `uvicorn example:app`, everything seems to work (the app starts up correctly). But if we force garbage collection on a specific line within the event loop, we consistently encounter the following error:
```
Task was destroyed but it is pending!
task: <Task pending name='Task-3' coro=<RedisConnection._read_data() running at .../site-packages/aioredis/connection.py:186> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fb4af031f70>()]> cb=[RedisConnection.__init__.<locals>.<lambda>() at .../site-packages/aioredis/connection.py:168]>
Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<LifespanOn.main() running at .../site-packages/uvicorn/lifespan/on.py:55> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fb4aefbf070>()]>>
```
While a bit hacky, we can force this garbage collection in the event loop as follows (the resulting patch is sketched after these steps):

- Edit `/usr/lib/python3.*/asyncio/base_events.py` and, within `def _run_once`, near the bottom immediately inside the `for i in range(ntodo):` loop, add `import gc; gc.collect()`.
- Force Uvicorn to use the `asyncio` event loop (so that it runs this modified code) by starting it with: `uvicorn example:app --loop asyncio`
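For reference, this is roughly what the patched section of `_run_once` looks like. The surrounding lines are paraphrased from CPython and vary between Python versions, so treat this as a sketch rather than an exact diff:

```python
# Inside BaseEventLoop._run_once in asyncio/base_events.py:
ntodo = len(self._ready)
for i in range(ntodo):
    handle = self._ready.popleft()
    if handle._cancelled:
        continue
    import gc; gc.collect()  # added: force a collection before each callback runs
    handle._run()
```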
After doing this, we see the above error on every single startup.
Notes
- Note, since garbage collection can run at any time, this bug will appear randomly in real-world situations (which is what initially started this investigation).
- This error occurs even without the `gc.collect()` modification if we write `await create_redis_pool(...)` instead of `pool = await create_redis_pool(...)`. I think this might be expected, though, because we are awaiting an rvalue.
- Strangely, the error doesn’t appear when running outside uvicorn (i.e. passing dummy functions for `receive` and `send`; see the sketch after this list).
- Additionally, it doesn’t happen when using hypercorn. I’m hesitant to say it’s an error with uvicorn, however, because uvicorn isn’t doing anything but calling the handlers. Perhaps hypercorn holds on to some extra references, which is why the error doesn’t happen there?
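To be concrete about "dummy functions", this is roughly how we invoked the app directly, without any server. The harness itself is just a sketch (the message shapes follow the ASGI lifespan spec, and a local Redis is assumed to be running, as in the original example):

```python
import asyncio
from example import app  # the example.py shown above

async def main():
    messages = [{"type": "lifespan.startup"}, {"type": "lifespan.shutdown"}]

    async def receive():
        return messages.pop(0)  # dummy: feed startup, then shutdown

    async def send(message):
        pass  # dummy: ignore whatever the app sends

    await app({"type": "lifespan"}, receive, send)

asyncio.run(main())
```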
Does anyone have any insight on this (specifically, on the `RedisConnection._read_data() running at .../site-packages/aioredis/connection.py:186` task)? According to this SO post, there can be some weirdness when awaiting futures without hard references. If it seems to be an issue with Uvicorn, I can close this issue and recreate it there.
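That weirdness is easy to reproduce in isolation, without aioredis or any server. In the sketch below, the only remaining references to the pending task form a cycle (task → coroutine frame → future → task), so a collection pass destroys it:

```python
import asyncio
import gc

async def wait_forever():
    # Await a future that nothing else references; the task, its coroutine
    # frame, and this future now only reference each other.
    await asyncio.Future()

async def main():
    asyncio.create_task(wait_forever())  # result deliberately not saved
    await asyncio.sleep(0)  # let the task run up to its await point
    gc.collect()  # the reference cycle is unreachable, so the pending task is collected
    await asyncio.sleep(0)

asyncio.run(main())
# Logs: "Task was destroyed but it is pending!"
```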
I’ve just realized this is actually a bug in uvicorn and have created a PR as shown above to fix it. Funnily enough, I also realized this same error was caused by another line near the top of my call stack: a bare `asyncio.create_task(...)` call whose result I never assigned. Since the task wasn’t assigned to anything, the entire partially executed coroutine could be garbage collected. So, TL;DR:

- If you use uvicorn, this PR might solve the problem
- Never call `asyncio.create_task(foo())` without assigning the result to some hard reference (one common pattern is sketched below)
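For completeness, this is one way to keep that hard reference without leaking tasks; the `spawn` helper and `background_tasks` set are my own names, not anything from the PR:

```python
import asyncio

background_tasks = set()  # hard references keep the tasks alive until done

def spawn(coro):
    task = asyncio.create_task(coro)
    background_tasks.add(task)
    # Drop our reference once the task finishes so the set doesn't grow forever.
    task.add_done_callback(background_tasks.discard)
    return task
```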
@MatthewScholefield We also got this exception in a Sanic backend.
@seandstewart Is there any example code showing how this was resolved?