FastAPI gets terminated when a child multiprocessing process is terminated
Describe the bug
Create a multiprocessing Process and start it. Right after the child process is terminated, FastAPI itself (the parent) terminates as well.
To Reproduce
Start command: /usr/local/bin/uvicorn worker.stts_api:app --host 127.0.0.1 --port 8445
- Create a file with:
import multiprocessing

from fastapi import FastAPI

import task  # the reporter's module exposing the long-running task.run

app = FastAPI()
proc = None  # kept at module level so the abort endpoint can reach the child

@app.post('/task/run')
def task_run(task_config: TaskOptionBody):  # TaskOptionBody: the reporter's request model
    global proc
    proc = multiprocessing.Process(
        target=task.run,
        args=(xxxx,))  # args elided in the original report
    proc.start()
    return task_id  # task_id / result_OK below are placeholders from the report

@app.get('/task/abort')
def task_abort(task_id: str):
    proc.terminate()  # terminating the child also brings down the uvicorn parent
    return result_OK
- Call task_run and, while the child process is still alive, trigger task_abort (an illustrative client sequence follows below).
- After the child process is terminated, the parent (FastAPI/uvicorn) terminates as well.
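For reference, a hypothetical client sequence to drive the reproduction, assuming the uvicorn start command above; the empty JSON body and the task_id handling are illustrative, not part of the original report:

import requests

BASE = "http://127.0.0.1:8445"

# Kick off the long-running task; the reporter's endpoint returns a task id.
task_id = requests.post(f"{BASE}/task/run", json={}).json()

# Abort while the child is still alive. After this call the uvicorn parent
# process also exits, which is the behaviour being reported.
requests.get(f"{BASE}/task/abort", params={"task_id": task_id})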
Expected behavior
The parent process should not be terminated when the child process is terminated.
Environment
- OS: Linux
- FastAPI version: 0.54.1
- Python version: 3.8.2
Additional context
I tried the same code with Flask under gunicorn, and the parent was never terminated.
Issue Analytics
- Created: 3 years ago
- Reactions: 6
- Comments: 23 (12 by maintainers)
Top Results From Across the Web
- How to do multiprocessing in FastAPI (Stack Overflow): an example that performs minimal task tracking, assuming one instance of the application is running.
- Troubleshooting usage of Python's multiprocessing module in a FastAPI app
- torch.multiprocessing (PyTorch 1.13 documentation): a wrapper around the native multiprocessing module; if the child process does not terminate, the process termination will go unnoticed.
- Background Tasks (FastAPI documentation): "useful for operations that need to happen after a request, ... you can return a response of 'Accepted' (HTTP 202) and ..." (a minimal sketch follows this list).
- Python Multiprocessing graceful shutdown in the proper order: "First the worker() process is stopped, then the result_queue() process ... that the child processes do not get terminated by these signals."
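For the Background Tasks result above, a minimal sketch of FastAPI's BackgroundTasks API (the write_log function and endpoint name are illustrative); note that background tasks run inside the server process, so they are an alternative to, not a reproduction of, the multiprocessing setup reported here.

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_log(message: str):
    # Illustrative stand-in for work done after the response is sent.
    with open("log.txt", "a") as f:
        f.write(message + "\n")

@app.post("/notify", status_code=202)
def notify(background_tasks: BackgroundTasks):
    # Schedule the task and return "Accepted" immediately; the task runs
    # after the response, in the same server process.
    background_tasks.add_task(write_log, "notification sent")
    return {"status": "accepted"}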
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi, I have looked into this situation and came to the following conclusions:
1. You cannot set signal handlers from anywhere other than the main thread.
2. The task function is executed in a ThreadPoolExecutor, so, as said above, you cannot change signal handlers inside it.
(Edit: the second and third conclusions are not true; the real problem was found and is described below.)
But it is still possible to solve this problem (without changing FastAPI or uvicorn): change the multiprocessing start_method to "spawn", and your child process will be clean (without inherited signal handlers, thread pools and other state). It works for me (Python 3.7, macOS 10.15.5).
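A minimal sketch of that workaround, keeping the shape of the reproduction above; the long_task target and the in-memory procs registry are illustrative, not part of the original comment:

import multiprocessing
import time

from fastapi import FastAPI

app = FastAPI()
# A "spawn" context starts the child in a fresh interpreter, so it does not
# inherit uvicorn's signal handlers, thread pools, or other parent state.
ctx = multiprocessing.get_context("spawn")
procs = {}  # task_id -> Process

def long_task(task_id: str):
    time.sleep(30)  # stand-in for the real long-running work

@app.post("/task/run")
def task_run(task_id: str):
    proc = ctx.Process(target=long_task, args=(task_id,))
    proc.start()
    procs[task_id] = proc
    return {"task_id": task_id}

@app.get("/task/abort")
def task_abort(task_id: str):
    procs.pop(task_id).terminate()  # with "spawn", the parent keeps running
    return {"status": "aborted"}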
@victorphoenix3 It seems like your process needs to have some long-running code. Try adding "time.sleep(30)" and aborting within that window.
I tried your code and there was no issue (because the subprocess had already terminated…?):
1
21742
INFO:     127.0.0.1:52778 - "POST /task/run HTTP/1.1" 200 OK
INFO:     127.0.0.1:52780 - "GET /task/abort?pid=21742 HTTP/1.1" 200 OK
But after adding the 30-second sleep, the issue appears:
1
21982
INFO:     127.0.0.1:52802 - "POST /task/run HTTP/1.1" 200 OK
INFO:     127.0.0.1:52804 - "GET /task/abort?pid=21982 HTTP/1.1" 200 OK
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [21973]
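In other words, the reproduction only triggers when the child is still alive at abort time. A hypothetical target illustrating the suggestion above:

import time

def run(task_config):
    # Stay alive long enough that /task/abort is called while the child is
    # still running; terminating it then also shuts down the uvicorn parent.
    time.sleep(30)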