Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

FastApi is not running every request on same endpoint in separate Thread

See original GitHub issue

So, I think I understand all the async def and def stuff. I have this piece of code, and I am running it with uvicorn main:app:

import time
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def root():
    print("Hitted Root")
    time.sleep(10)
    return {"message": "Hello World"}


@app.get("/hi")
def root_hi():
    print("Hitted Root Hi")
    time.sleep(10)

If I visit /hi and / at the same time, the print statements appear instantly, and each request finishes after approximately 10 seconds, which must mean they start at the same time in different threads.

However, if I open two requests to /hi, the first one ends and only then does the second one start, i.e. I see the print statement from the first one and, 10 seconds later, the print statement from the second one, which must mean they are not running on different threads.

I want to know why that is the case, and whether this is the default behaviour: requests to different endpoints run in different threads, but requests to the same endpoint run one after the other. I also wonder if there is a way to make requests to the same endpoint run in different threads, at the same time, without using multiple uvicorn workers.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

2 reactions
raphaelauv commented, Jun 11, 2021

"I think I understand all async def and def stuff"

apparently no
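What the comment is alluding to: FastAPI runs path operations declared with plain def in an external threadpool, so blocking code there does not stall the event loop, while async def operations run on the loop itself. That dispatch can be modeled with the standard library alone; in this sketch, asyncio.to_thread stands in for Starlette's threadpool, and the handler name and 0.2-second sleep are illustrative:

```python
import asyncio
import threading
import time

def sync_handler(name: str) -> str:
    # A blocking call here only blocks this worker thread, just as a
    # blocking call inside a plain "def" endpoint only blocks its
    # threadpool worker, not the event loop.
    time.sleep(0.2)
    return f"{name} ran on {threading.current_thread().name}"

async def serve_two():
    # Dispatch two "requests" to the same handler, the way Starlette
    # hands plain "def" endpoints off to a threadpool.
    start = time.perf_counter()
    results = await asyncio.gather(
        asyncio.to_thread(sync_handler, "req-1"),
        asyncio.to_thread(sync_handler, "req-2"),
    )
    return time.perf_counter() - start, results

elapsed, results = asyncio.run(serve_two())
print(results)                   # typically two distinct worker threads
print(f"total: {elapsed:.2f}s")  # ~0.2 s: the two sleeps overlapped
```

asyncio.to_thread is available from Python 3.9 onward; on older versions, loop.run_in_executor with the default executor does the same job.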

1 reaction
David-Lor commented, Jun 16, 2021

@nilansaha How are you performing the requests? I found some time ago that (some) browsers (Firefox in my case) seem to avoid performing simultaneous requests to the same endpoint (or localhost?).

Maybe try this code to perform N concurrent requests using Python; I’d say your example will work as expected once you try it:

import threading
import requests

N_REQUESTS = 5

def req():
    requests.get("http://localhost:8000/hi")

threads = [threading.Thread(target=req, daemon=True) for _ in range(N_REQUESTS)]
for th in threads:
    th.start()
for th in threads:
    th.join()

Output from the FastAPI server:

INFO:     Started server process [33905]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
INFO:     127.0.0.1:52508 - "GET /hi HTTP/1.1" 200 OK
INFO:     127.0.0.1:52506 - "GET /hi HTTP/1.1" 200 OK
INFO:     127.0.0.1:52510 - "GET /hi HTTP/1.1" 200 OK
INFO:     127.0.0.1:52504 - "GET /hi HTTP/1.1" 200 OK
INFO:     127.0.0.1:52512 - "GET /hi HTTP/1.1" 200 OK
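For environments without the requests library, a stdlib-only variant of the same load test can use concurrent.futures. The URL and request count below mirror the example above; run_concurrently is a helper name made up for this sketch:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

N_REQUESTS = 5
URL = "http://localhost:8000/hi"  # the example server from above

def fetch(url: str) -> int:
    # Perform one GET request and return the HTTP status code.
    with urlopen(url) as resp:
        return resp.status

def run_concurrently(fn, arg, n: int) -> list:
    # Run fn(arg) n times on parallel threads and collect the results.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(fn, [arg] * n))

# With the server running, run_concurrently(fetch, URL, N_REQUESTS)
# should return five 200s after ~10 s total, not ~50 s, since the
# plain "def" endpoint handlers run on separate threadpool workers.
```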