Using dependency injection to get SQLAlchemy session can lead to deadlock
First check
- I added a very descriptive title to this issue.
- I used the GitHub search to find a similar issue and didn’t find it.
- I searched the FastAPI documentation, with the integrated search.
- I already searched in Google “How to X in FastAPI” and didn’t find any information.
- I already read and followed all the tutorial in the docs and didn’t find an answer.
- I already checked if it is not related to FastAPI but to Pydantic.
- I already checked if it is not related to FastAPI but to Swagger UI.
- I already checked if it is not related to FastAPI but to ReDoc.
- After submitting this, I commit to one of:
- Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.
- I already hit the “watch” button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
- Implement a Pull Request for a confirmed bug.
I’ve noticed that when using dependency injection with SQLAlchemy, a large number of concurrent requests can leave the app in a deadlocked state. This is especially noticeable with a small SQLAlchemy connection pool size (relative to the FastAPI thread pool size). Below is a self-contained example (you might have to tweak the pool size and the request body’s sleep length but this should be a good starting point).
app.py:

```python
"""
Setup: pip install fastapi sqlalchemy uvicorn
Run: python app.py
"""
import time

import uvicorn
from fastapi import Depends, FastAPI, Request
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker
from sqlalchemy.pool import QueuePool

# SQLAlchemy setup
engine = create_engine(
    'sqlite:///test.db',
    connect_args={'check_same_thread': False},
    poolclass=QueuePool,
    pool_size=4,
    max_overflow=0,
    pool_timeout=None,  # Wait forever for a connection
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# FastAPI
app = FastAPI()

def get_db(request: Request):
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get('/')
def index(db: Session = Depends(get_db)):
    # Some blocking work
    _ = db.execute('select 1')
    time.sleep(1)
    return {'hello': 'world'}

# Run
if __name__ == '__main__':
    uvicorn.run('app:app', reload=True, host='0.0.0.0', port=80)
```
When running the above with 100 concurrent requests (I used locust), I noticed that only around 5 requests are served, and then the app freezes and is unable to serve any more requests. Below is the locustfile.
locustfile.py:

```python
"""
Setup: pip install locust
Run: Save as locustfile.py and run locust in a terminal. Open
http://localhost:8089 and run with 100 users, spawn rate 100, and
host http://localhost.
"""
from locust import HttpUser, task

class User(HttpUser):
    @task
    def index(self):
        self.client.get("/")
```
I suspect the following is happening. (Note that `SessionLocal()` is lazy, so `db = SessionLocal()` will return immediately even if no connections are available.)

1. The first N requests come in (where N >= thread pool size). Their `get_db` dependencies run and yield, and we start executing their path operation functions. At this point, the entire thread pool is full. Only `pool_size` (4) requests are able to get a connection, and the remaining requests wait (in their path operation functions).
2. The path operation functions that were able to get a connection return, opening up `pool_size` (4) spots in the thread pool. Because dependencies and requests run in separate threads, the `finally` blocks for these requests' `get_db` dependencies have not run yet, so the connections for these requests have not returned to the SQLAlchemy pool.
3. More requests come in, and like step 1, their `get_db` dependencies run, and we start executing their path operation functions. No connections have returned to the SQLAlchemy pool, so these requests wait. At this point, the entire thread pool is full, and every thread is waiting for a connection.
4. For the requests that finished in step 2, we try to schedule the `finally` blocks for their `get_db` dependencies in a new thread so we can free the connections, but all of the threads are busy, so we end up waiting.
5. None of the threads will ever finish because they are waiting for a connection, and no connections will be released because the thread pool is full, leaving the app in a deadlocked state.
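The steps above can be simulated in plain Python without FastAPI or SQLAlchemy. In this sketch (all names are hypothetical stand-ins), a one-permit semaphore plays the role of the SQLAlchemy connection pool and a two-worker executor plays the role of the thread pool; crucially, the "connection release" is submitted back to the same executor, mirroring how the dependency's `finally` block runs in a separate thread:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor, wait

# Toy model of the deadlock: a 1-connection "pool" shared by a
# 2-worker "thread pool".
pool = threading.Semaphore(1)                  # stands in for the SQLAlchemy pool
executor = ThreadPoolExecutor(max_workers=2)   # stands in for the FastAPI thread pool

def endpoint():
    pool.acquire()                # like db.execute: blocks if the pool is empty
    time.sleep(0.1)               # simulated blocking work (the example's time.sleep)
    # The dependency's finally block runs in a *different* worker, so the
    # release is submitted back to the same executor instead of running here.
    executor.submit(pool.release)

futures = [executor.submit(endpoint) for _ in range(3)]
done, not_done = wait(futures, timeout=2)
# One request finished; the other two are blocked in pool.acquire(), and the
# queued pool.release task can never run: a simulated deadlock.
print(len(done), len(not_done))

# Manually release the "connection" so the interpreter can exit cleanly.
pool.release()
executor.shutdown(wait=True)
```

After two seconds, only one of the three requests has completed; the remaining two are stuck waiting for a connection that the queued release task can never return.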
This doesn’t really seem like a bug in FastAPI or in SQLAlchemy, but it suggests that we should not use dependency injection like this when using synchronous database libraries. The only workaround I’ve found for this is to use a context manager to handle the session in the endpoint itself instead of injecting the database session directly.
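A minimal sketch of that workaround, with the session managed by a context manager inside the endpoint itself so acquisition and release happen in the same thread. The `FakeSession` class here is a hypothetical stand-in so the sketch runs without a database; in the real app it would be the `sessionmaker()` product from above:

```python
from contextlib import contextmanager

# Hypothetical stand-in for SessionLocal so the sketch runs without a
# database; in the real app this would be sessionmaker(bind=engine).
class FakeSession:
    def __init__(self):
        self.closed = False
    def execute(self, statement):
        return statement
    def close(self):
        self.closed = True

SessionLocal = FakeSession

@contextmanager
def db_session():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()  # runs in the same thread as the endpoint body

# The path operation manages the session itself instead of using Depends:
def index():
    with db_session() as db:
        _ = db.execute('select 1')
    return {'hello': 'world'}

result = index()
print(result)
```

Because the `finally` block executes in the same thread as the endpoint, the connection is guaranteed to be returned to the pool before the thread is handed back to the pool of workers.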
Another thing I’ve noticed is that changing `get_db` to be an async function prevents deadlock (as does using the middleware approach), but only if the endpoint does not have a `response_model`. If it has a `response_model`, then the app will still lock up. I believe this is because if `response_model` is defined, then when we run `serialize_response`, `field` will be non-None, and we will attempt to run `field.validate` in a separate thread. If the thread pool is full with requests waiting for connections, we won’t be able to serialize the response and won’t be able to close the database connection. Maybe we could serialize the response in the same thread as the path operation function; I’m not sure what the benefit of serializing in a separate thread is.
There is similar discussion in https://github.com/tiangolo/full-stack-fastapi-postgresql/issues/104 and many others came to the conclusion that using a context manager is the right approach, but nothing really came of it. If others can validate that my suspicion is correct, then maybe we should change the docs to recommend using a context manager within the endpoint itself until a better solution is available.
Environment
- OS: macOS Big Sur (11.3)
- FastAPI Version: 0.63.0
- SQLAlchemy Version: 1.4.13
- Python version: 3.9.4
Issue Analytics
- Created: 2 years ago
- Reactions: 18
- Comments: 23 (8 by maintainers)
Top GitHub Comments
Hey all. I’ve been digging into this as well and I don’t think this is a FastAPI issue per se. There are a few things going on that seem to be leading to the issue.
tl;dr

Dependencies and path operations defined as plain functions (`def` only) are run in an `anyio` threadpool. `db.execute` is a blocking call, causing all the workers in the pool to block in the path operation function. This prevents dependency generators from invoking their `finally` block, thereby preventing SQLAlchemy connections from being released.

Workarounds?
The suggested workaround to use a context manager within your path operation is by far the easiest solution. You’re scoping the SQLAlchemy session lifecycle within the same coroutine, guaranteeing that connections can be acquired and released within that coroutine. This prevents the resource contention described in the “deep dive” section.
While I haven’t tried it yet… it may be worth using the SQLAlchemy 1.4 `asyncio` beta. Part of the core issue here is related to `db.execute` being a blocking call. If SQLAlchemy natively supports async operations, it may resolve the issue.

Deep dive
First, it should be noted that `db = SessionLocal()` is a non-blocking operation. Prior to the deadlock, all N requests will be able to create a session object and yield it to the path operation.

`db.execute` is a blocking operation. Per the SQLAlchemy docs, the session requests a connection from the connection pool once queries are issued. In the example code above, the SQLAlchemy connection pool is of size 4. This means that 4 requests will be able to check out a connection, while (N-4) requests will block on this line waiting for a connection to become available. (Side note: it’s good practice to define connection timeouts on your `engine` object to avoid waiting interminably for a connection, though that alone won’t solve the problem.)

Also keep in mind that neither the dependency function nor the path operation are defined as coroutines. Both FastAPI and Starlette want everything to run asynchronously, so these functions are invoked in a threadpool. See the Starlette code, FastAPI doc 1, and FastAPI doc 2 for more information.
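The side note about connection timeouts could look like the following (a hedged sketch; `pool_timeout` is the SQLAlchemy 1.4 parameter name, and the URL and pool sizes are just the example's values):

```python
from sqlalchemy import create_engine

# With a finite pool_timeout, an exhausted pool raises
# sqlalchemy.exc.TimeoutError after 30 seconds instead of waiting
# forever (the example's pool_timeout=None).
engine = create_engine(
    'sqlite:///test.db',
    pool_size=4,
    max_overflow=0,
    pool_timeout=30,
)
```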
Here is where things break. `anyio` manages its own threadpool of ~40 workers, each with its own job queue. There are (N-4) workers blocked waiting for a SQLAlchemy connection to be relinquished. The 4 path operations that secured a connection can complete. FastAPI will then attempt to invoke the `__exit__` method of your dependency generator to clean up the session (which implicitly checks the connection back into the connection pool). However, the dependency generator is not a coroutine, so it’s passed off to the threadpool.

And here is our deadlock. All `anyio` worker threads are blocked waiting for SQLAlchemy connections. The code to release the connections is blocked waiting for a worker to become available.

But what about native coroutines?
While the root issue with the example code is related to the use of `anyio` thread pools, we have observed the deadlock when using native coroutines in both path operations and dependency generators. This case is a little more straightforward… (Note: the FastAPI documentation already recommends against this pattern.)

As noted in the earlier section, `db.execute` is a blocking operation, as it implicitly requests a connection from the SQLAlchemy connection pool. However, the connection can’t be retrieved because the connection pool has been exhausted. SQLAlchemy blocks waiting for a connection to become available.

Herein lies the issue: the event loop doesn’t have the power to interrupt this coroutine; there is no `await` identifying where the coroutine can be interrupted to yield control back to the event loop. The event loop is now fully blocked.

I’d still argue this isn’t really a FastAPI issue. The coroutine is executing a blocking call; whether it’s SQLAlchemy or another third-party library, blocking calls pose this risk.
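The frozen event loop can be demonstrated in a few lines. In this sketch, `time.sleep` stands in for a synchronous `db.execute`, and a "heartbeat" coroutine counts how often the loop gets control; while the blocking coroutine runs, the heartbeat stops entirely:

```python
import asyncio
import time

# Count how often the event loop gets control.
async def heartbeat(beats):
    while True:
        beats.append(time.monotonic())
        await asyncio.sleep(0.01)

async def blocking_endpoint():
    time.sleep(0.2)  # no await here, so the loop cannot switch away

async def main():
    beats = []
    task = asyncio.create_task(heartbeat(beats))
    await asyncio.sleep(0.05)         # heartbeat runs freely
    before = len(beats)
    await blocking_endpoint()         # loop frozen for 0.2 s
    during = len(beats) - before      # no beats landed while blocked
    task.cancel()
    return before, during

before, during = asyncio.run(main())
print(before, during)
```

The heartbeat ticks several times during the first 0.05 s, then not once during the 0.2 s blocking call: exactly the starvation that lets an exhausted connection pool freeze every request on the loop.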
However, I would argue the FastAPI documentation needs to be updated. The “Dependencies with yield” documentation and the “FastAPI & SQLAlchemy” example can both lead to deadlocks.
We’re seeing this issue too.
I’m working on digging into the bug, but @Blue9 seems to be correct: the locus of the problem is the path operation code being run in a different thread than the dependency’s cleanup.
Temporary workarounds are to use a context manager, or to use an async function with `sqlalchemy.ext.asyncio`; both ensure everything happens in the same thread, and you do not have to do both. I would highly suggest to the FastAPI maintainers that they add a note in the documentation and on https://github.com/tiangolo/full-stack-fastapi-postgresql that there is a potential issue, as right now it is not production ready for a load greater than 30 or so concurrent users.