
Run async operations on separate threads

See original GitHub issue

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn’t find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google “How to X in FastAPI” and didn’t find any information.
  • I already read and followed all the tutorial in the docs and didn’t find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

# some_router.py
from fastapi import APIRouter

router = APIRouter()

@router.get("/")
async def get_some_things():
    return await some_long_taking_method()  # takes ~15 seconds to complete

Description

Hey, I wanted to ask a question that has been buzzing around my head these days. It’s about FastAPI and its behavior with threads and async/non-async functions. As you can see in the sample code, I have a router with an async endpoint that runs an operation that takes, for example, 15 seconds. After doing some research I found this: “Thus, def (sync) routes run in a separate thread from a threadpool, or, in other words, the server processes the requests concurrently, whereas async def routes run on the main (single) thread, i.e., the server processes the requests sequentially”. So, to summarize, what I currently have is an endpoint that takes some time to process, and if 2 users hit it at the same time, one has to wait until the other finishes up… and that’s with only 2 users. Is there any way to wrap these endpoints to run on another thread? I tried loads of things that people suggested but none of them worked, so if anyone can give me a hand with it I would really appreciate it! Thanks in advance.
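
One commonly suggested pattern for exactly this situation (not taken from the issue itself, just a minimal sketch with illustrative names) is to express the slow work as a plain synchronous function and offload it with run_in_threadpool, which FastAPI re-exports from Starlette, so the event loop stays free:

import time

from fastapi import APIRouter
from fastapi.concurrency import run_in_threadpool  # re-exported from Starlette

router = APIRouter()


def some_long_taking_method() -> dict:
    # stand-in for the real blocking work (e.g. a slow library call)
    time.sleep(15)
    return {"status": "done"}


@router.get("/")
async def get_some_things():
    # Awaiting run_in_threadpool runs the sync function on a worker thread,
    # so other requests keep being served while this one waits.
    return await run_in_threadpool(some_long_taking_method)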

Operating System

Windows

Operating System Details

No response

FastAPI Version

0.79

Python Version

3.9.13

Additional Context

No response

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 18 (7 by maintainers)

Top GitHub Comments

3 reactions
JarroVGIT commented, Sep 8, 2022

To start off, this issue makes me proud to be part of this community! Great help from all over the place, that is really cool to see!

I had a brief conversation on LinkedIn with @gonzacastro, and I now understand what he is trying to achieve. I created an example that demonstrates that FastAPI can still serve new requests while other requests are still waiting for the 3rd party API to respond.

import asyncio
import random
import time

from fastapi import FastAPI, Request

app = FastAPI()

# keep a count of requests, makes reading the log easier.
app.state.request_no = 0


async def get_response_from_external_api(url: str, request_no) -> tuple[str, int, float]:
    # here you would put something like httpx to make async calls to your endpoint.
    # to simulate this takes a while, we will sleep here. And the return value is the
    # url, the time it should take (sleeptime) and the time it did take.
    sleep_time = random.randint(1, 5)
    print(f"({request_no}) Calling {url}, should take {sleep_time} seconds...")
    start = time.time()
    await asyncio.sleep(sleep_time)
    end = time.time()
    print(f"({request_no}) Done calling {url}, took {end-start} seconds...")
    return (url, sleep_time, end - start)


@app.get("/get_response")
async def get_response(r: Request):
    # Using app.state because it will change with every incoming request
    r.app.state.request_no += 1
    request_no = r.app.state.request_no

    # Construct the URLs
    urls = [f"url.com/api/{i}" for i in range(1, 5)]
    print(f"({request_no}) Request has come in.")

    start = time.time()
    result = await asyncio.gather(
        *[get_response_from_external_api(url, request_no) for url in urls]
    )

    end = time.time()
    print(f"({request_no}) Request has been processed.")
    return result


@app.get("/")
async def root():
    return {"hello": "world"}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)

I added some verbosity for clarity. If I call the get_response endpoint 3 times in rapid succession, the logs look like this:

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
(1) Request has come in.
(1) Calling url.com/api/1, should take 3 seconds...
(1) Calling url.com/api/2, should take 1 seconds...
(1) Calling url.com/api/3, should take 3 seconds...
(1) Calling url.com/api/4, should take 5 seconds...
(2) Request has come in.
(2) Calling url.com/api/1, should take 2 seconds...
(2) Calling url.com/api/2, should take 2 seconds...
(2) Calling url.com/api/3, should take 2 seconds...
(2) Calling url.com/api/4, should take 2 seconds...
(3) Request has come in.
(3) Calling url.com/api/1, should take 5 seconds...
(3) Calling url.com/api/2, should take 4 seconds...
(3) Calling url.com/api/3, should take 1 seconds...
(3) Calling url.com/api/4, should take 3 seconds...
(1) Done calling url.com/api/2, took 1.0014400482177734 seconds...
(3) Done calling url.com/api/3, took 1.0007548332214355 seconds...
(2) Done calling url.com/api/1, took 2.0016140937805176 seconds...
(2) Done calling url.com/api/2, took 2.0016257762908936 seconds...
(2) Done calling url.com/api/3, took 2.0016050338745117 seconds...
(2) Done calling url.com/api/4, took 2.0015769004821777 seconds...
(2) Request has been processed.
INFO:     127.0.0.1:56387 - "GET /get_response HTTP/1.1" 200 OK
(1) Done calling url.com/api/1, took 3.0016939640045166 seconds...
(1) Done calling url.com/api/3, took 3.0017738342285156 seconds...
(3) Done calling url.com/api/4, took 3.001483201980591 seconds...
(3) Done calling url.com/api/2, took 4.001469850540161 seconds...
(1) Done calling url.com/api/4, took 5.001488208770752 seconds...
(1) Request has been processed.
INFO:     127.0.0.1:56386 - "GET /get_response HTTP/1.1" 200 OK
(3) Done calling url.com/api/1, took 5.001416921615601 seconds...
(3) Request has been processed.
INFO:     127.0.0.1:56388 - "GET /get_response HTTP/1.1" 200 OK

Note how request 1 took longer than request 2: although request 2 came in later than request 1, it returned its response earlier.

You just have to change the asyncio.sleep() into an asynchronous API call (use httpx) and you’re good to go. Hope this answers your question!
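
For reference, a minimal sketch of that swap, assuming the external service is reachable over HTTP and using httpx; the URL scheme, timeout, and return shape here are illustrative, not from the original example:

import time

import httpx


async def get_response_from_external_api(url: str, request_no) -> tuple[str, float]:
    # Same shape as the helper above, but with a real async HTTP call
    # instead of asyncio.sleep(). The URL is illustrative.
    start = time.time()
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://{url}", timeout=30.0)
    end = time.time()
    print(f"({request_no}) Done calling {url}, took {end - start} seconds...")
    return (url, end - start)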

2 reactions
JarroVGIT commented, Sep 6, 2022

@gonzacastro My apologies, I hadn’t had my morning coffee yet. I meant that when you have synchronous blocking code in an async call stack, it will block the event loop (not the thread, as I said earlier). That is, in my opinion, a design flaw of the software. You can either (both options are sketched after this list):

  • Use synchronous code (and a sync path operation function), so FastAPI will run it in a separate thread, and Python will yield control back when possible (e.g. when waiting for IO), or;
  • Use asynchronous code (and an async path operation function), so FastAPI will run it in the event loop, and your code will yield control back when possible.
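
A minimal sketch of those two options, using time.sleep and asyncio.sleep as stand-ins for the real IO-bound work (the endpoint paths and names are illustrative):

import asyncio
import time

from fastapi import FastAPI

app = FastAPI()


@app.get("/sync-style")
def sync_endpoint():
    # Option 1: a plain `def` path operation. FastAPI runs it in a worker
    # thread from the threadpool, so this blocking call does not stall
    # the event loop.
    time.sleep(15)  # stand-in for blocking IO
    return {"style": "sync"}


@app.get("/async-style")
async def async_endpoint():
    # Option 2: an `async def` path operation. It runs on the event loop,
    # so the slow work must itself be awaitable and non-blocking.
    await asyncio.sleep(15)  # stand-in for awaitable IO
    return {"style": "async"}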

If your code (and I mean the 15-second-runtime code) is not IO bound but CPU bound (e.g. you are performing some crazy calculation), then again I would recommend reconsidering your design and making the entire request-response loop more asynchronous. With that, I mean going from request -> calculation -> return response to request -> put work in a queue and return a response, so your client can check later on whether the result is available. However, if the 15-second work is IO bound, you should either make the IO-bound work async, or use sync all the way. Mixing them both in an IO-heavy situation (at least in FastAPI) is a bad idea and should be avoided.
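
A minimal sketch of that request -> enqueue -> poll pattern, using FastAPI’s BackgroundTasks and an in-memory dict as a stand-in job store; the endpoint paths, the jobs dict, and compute_heavy_result are illustrative, not from the original discussion:

import uuid

from fastapi import BackgroundTasks, FastAPI, HTTPException

app = FastAPI()
jobs: dict[str, dict] = {}  # in-memory stand-in for a real job store / queue


def compute_heavy_result(job_id: str) -> None:
    # stand-in for the CPU-bound 15-second calculation
    jobs[job_id] = {"status": "done", "result": sum(i * i for i in range(10_000_000))}


@app.post("/jobs")
async def create_job(background_tasks: BackgroundTasks):
    # Return immediately; the work runs after the response has been sent.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending"}
    background_tasks.add_task(compute_heavy_result, job_id)
    return {"job_id": job_id}


@app.get("/jobs/{job_id}")
async def get_job(job_id: str):
    if job_id not in jobs:
        raise HTTPException(status_code=404, detail="Unknown job")
    return jobs[job_id]

For genuinely CPU-bound work, a separate worker process (for example a task queue such as Celery or RQ) keeps the web process responsive, but the shape of the API stays the same: enqueue, return a job id, and poll for the result.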

Read more comments on GitHub >

