Idle database connections not reaped before database_sync_to_async code
This has the same effects (and solution) as #871.
Steps to reproduce:

1. Run a @database_sync_to_async function
2. Let CONN_MAX_AGE expire (see the settings sketch after this list)
3. Have the database drop the connection
4. Run a @database_sync_to_async function in the same thread as in step 1
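For step 2, CONN_MAX_AGE has to be short enough to expire during the test. A minimal settings sketch, with the engine and database name purely illustrative (only CONN_MAX_AGE matters for this reproduction):

    # settings.py (illustrative values)
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'mydb',        # hypothetical database name
            'CONN_MAX_AGE': 1,     # persistent connections expire after one second
        }
    }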
Expected results: In step 4, Channels reconnects.
Actual results: In step 4, Channels tries to use the expired connection.
Obvious fix: make @database_sync_to_async call close_old_connections() before calling the inner function.
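A minimal sketch of that fix, assuming the decorator is built on asgiref's SyncToAsync and that its thread_handler hook (which runs the wrapped function in the worker thread) is the right interception point; the class name DatabaseSyncToAsync is illustrative:

    from asgiref.sync import SyncToAsync
    from django.db import close_old_connections

    class DatabaseSyncToAsync(SyncToAsync):
        """SyncToAsync variant that reaps expired connections around each call."""

        def thread_handler(self, loop, *args, **kwargs):
            # Close connections that are broken or have outlived CONN_MAX_AGE
            # before running the wrapped function, and again afterwards so a
            # stale connection never lingers in the worker thread.
            close_old_connections()
            try:
                return super().thread_handler(loop, *args, **kwargs)
            finally:
                close_old_connections()

    database_sync_to_async = DatabaseSyncToAsync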
To reproduce:
    import asyncio
    import os
    import unittest

    from channels.db import database_sync_to_async
    from django.db import connection, close_old_connections
    from django.conf import settings

    if settings.DATABASES['default']['CONN_MAX_AGE'] != 1:
        raise AssertionError(
            "Please set settings.DATABASES['default']['CONN_MAX_AGE'] to 1. "
            'This test needs the connection to die.'
        )

    os.environ['ASGI_THREADS'] = '1'
    if os.environ.get('ASGI_THREADS') != '1':
        raise AssertionError(
            'Please set the environment variable ASGI_THREADS=1. '
            'This test depends on Channels running two queries on the same '
            'database connection.'
        )

    @database_sync_to_async
    def ping_database():
        # Intended behavior: close_old_connections() before running this code
        # close_old_connections()
        with connection.cursor() as cursor:
            cursor.execute('SELECT 1')

    class BugTest(unittest.TestCase):
        def test_break_db(self):
            async def go():
                print('Pinging database...')
                await ping_database()
                input(
                    'Now, go destroy the database connection somehow '
                    '(for instance, by dropping it on the server side) '
                    'and press Enter:'
                )
                print('Pinging database again...')
                await ping_database()
                print('Whew -- connection is still alive')

            loop = asyncio.get_event_loop()
            loop.run_until_complete(go())
… deleting the connection (Postgres, in my case) when prompted:
> SELECT pg_terminate_backend((SELECT pid FROM pg_stat_activity WHERE query = 'SELECT 1'));
… and after pressing Enter in the test, I see:
System check identified no issues (0 silenced).
Pinging database...
Now, go destroy the database connection somehow (for instance, by dropping it on the server side) and press Enter:
Pinging database again...
E
======================================================================
ERROR: test_break_db (server.tests.test_bug.BugTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.6/site-packages/django/db/backends/utils.py", line 62, in execute
return self.cursor.execute(sql)
psycopg2.OperationalError: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
...
(If I uncomment close_old_connections() in the database function, the test passes.)
OS and runtime environment: Linux, Postgres 10, Channels 2.1.6, Django 1.11.17, psycopg2 2.7.1.
This test is a bit of a hack – sorry about that. But I trust the bug and solution are clear.

@blueyed It is indeed Django's job to reap old connections, and it does so before and after every request handler is invoked.
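For context, Django wires that reaping to the request lifecycle via signals; paraphrasing Django's django.db module, where close_old_connections closes any connection that is errored or older than its CONN_MAX_AGE:

    from django.core import signals
    from django.db import connections

    def close_old_connections(**kwargs):
        # Drop connections that are unusable or have exceeded CONN_MAX_AGE.
        for conn in connections.all():
            conn.close_if_unusable_or_obsolete()

    # Reap before and after every request.
    signals.request_started.connect(close_old_connections)
    signals.request_finished.connect(close_old_connections)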
@database_sync_to_async() lets Channels users access the database while skipping the regular request-handler routine. Therefore, Channels needs to mimic Django's behavior itself.

@blueyed I'm open to things, I'm just not sure what "try" means…? Can I install a Channels version based on that issue/pull request?
Edit: Okay, so I cloned your repository (https://github.com/blueyed/channels/tree/sync-db), modified setup.py for the daphne dependencies (daphne 2.3 needs asgiref 3.0+), and after waiting for the connection to expire, I still get the same message.