[QUESTION] Unexpected behaviour of BackgroundTask in AWS Lambda
First check
- I used the GitHub search to find a similar issue and didn’t find it.
- I searched the FastAPI documentation, with the integrated search.
- I already searched in Google “How to X in FastAPI” and didn’t find any information.
Description
I’m running FastAPI on AWS Lambda using the Mangum ASGI adapter, as below, where `handler` is the entry point for the API endpoints specified in `serverless.yml`.
```python
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()
handler = Mangum(app)
```
I have recently implemented a BackgroundTask on one of the route endpoints, as documented in FastAPI, which generates a CSV file, uploads it to S3, and then sends an email to the user. The setup works fine locally using uvicorn (bypassing Mangum): it returns a quick HTTP response while the background task takes its own sweet time and does its work.
However, when deployed on AWS Lambda, the request doesn’t complete until the background task is also completed in the same request which increases the response latency and defeats the whole purpose of using a BackgroundTask.
What could be going wrong here?
Additional context
- Python: 3.8
- FastAPI: 0.54
Issue Analytics
- State: Closed
- Created 3 years ago
- Comments: 7 (1 by maintainers)
@renkasiyas The way Lambda works is that once the request is completed, the Lambda returns a response and is ready to handle other requests, so any background tasks you queue asynchronously are lost. I am using SQS to push those background tasks, and they are handled by a different Lambda function.
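A minimal sketch of the producer side of that SQS pattern. The queue URL environment variable, task names, and payload shape are all assumptions for illustration, not details from the thread:

```python
import json
import os

def build_task_message(task_type: str, payload: dict) -> str:
    # Serialize the background job so a separate worker Lambda,
    # subscribed to the queue, can pick it up and run it.
    return json.dumps({"task": task_type, "payload": payload})

def enqueue_background_task(task_type: str, payload: dict) -> None:
    # Called from the request handler instead of BackgroundTasks.
    # boto3 is imported lazily so the module loads without AWS credentials.
    import boto3

    sqs = boto3.client("sqs")
    sqs.send_message(
        QueueUrl=os.environ["TASK_QUEUE_URL"],  # hypothetical env var
        MessageBody=build_task_message(task_type, payload),
    )
```

This keeps the API response fast: the request handler only sends a small message, and the slow work happens in a separate invocation.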
Thanks for the help here @phy25 ! 👏 🙇
Thanks for reporting back and closing the issue @anil-grexit 👍
Yeah, as I understand it, function services like Lambda start the app only for the duration of the request and kill it right after.
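For completeness, the consuming side of the SQS workaround mentioned earlier (a separate worker Lambda subscribed to the queue) might look like this. The handler name, event shape beyond the standard SQS `Records` format, and dispatch logic are illustrative assumptions:

```python
import json

def worker_handler(event, context=None):
    # Entry point for a second Lambda triggered by SQS. Each record
    # carries one serialized background task from the API Lambda.
    processed = []
    for record in event["Records"]:
        task = json.loads(record["body"])
        # Dispatch on task type; the real CSV/S3/email work goes here.
        processed.append(task["task"])
    return {"processed": processed}
```

Because this Lambda is invoked independently of the HTTP request, long-running work here no longer adds to the API's response latency.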