Performance regression of resolving dependencies
First Check
- I added a very descriptive title to this issue.
- I used the GitHub search to find a similar issue and didn’t find it.
- I searched the FastAPI documentation, with the integrated search.
- I already searched in Google “How to X in FastAPI” and didn’t find any information.
- I already read and followed all the tutorials in the docs and didn’t find an answer.
- I already checked if it is not related to FastAPI but to Pydantic.
- I already checked if it is not related to FastAPI but to Swagger UI.
- I already checked if it is not related to FastAPI but to ReDoc.
Commit to Help
- I commit to help with one of those options 👆
Example Code
from fastapi import FastAPI, Depends
from fastapi.testclient import TestClient
app = FastAPI()
Dep0 = Depends(lambda: "dep0")
Dep1 = Depends(lambda: "dep1")
Dep2 = Depends(lambda: "dep2")
Dep3 = Depends(lambda: "dep3")
Dep4 = Depends(lambda: "dep4")
Dep5 = Depends(lambda: "dep5")
Dep6 = Depends(lambda: "dep6")
Dep7 = Depends(lambda: "dep7")
Dep8 = Depends(lambda: "dep8")
Dep9 = Depends(lambda: "dep9")
@app.get("/")
async def read_main(dep0: str = Dep0,
                    dep1: str = Dep1,
                    dep2: str = Dep2,
                    dep3: str = Dep3,
                    dep4: str = Dep4,
                    dep5: str = Dep5,
                    dep6: str = Dep6,
                    dep7: str = Dep7,
                    dep8: str = Dep8,
                    dep9: str = Dep9):
    return {"msg": "Hello World"}

client = TestClient(app)

def test_read_main():
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"msg": "Hello World"}

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test_read_main()", globals=globals(), number=3000))
# 0.68.2: 12.487527622
# 0.69.0: 21.643492927
# 0.70.0: 21.014488872
Description
Updating to FastAPI 0.69.0 introduced a noticeable performance regression when using dependencies. The runtime of the sync version with 10 dependencies, as shown in the example code, increased by around 70%; the runtime of the async version increased by around 30%. This was likely introduced by the Starlette upgrade to 0.15 and its switch to AnyIO. As a workaround, dependencies can be switched to async callables, which carry a smaller performance hit. Are there any plans to optimize the sync path or to document this behavior?
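For illustration, a minimal sketch of the async workaround mentioned above (the function names are hypothetical; the lambdas from the example are replaced with async def functions so they are awaited on the event loop instead of being dispatched to the thread pool):

from fastapi import Depends, FastAPI

app = FastAPI()

# Hypothetical async variants of the lambda dependencies from the example.
# An async dependency is awaited directly on the event loop, so it skips
# the per-call dispatch to the AnyIO worker thread pool.
async def get_dep0() -> str:
    return "dep0"

async def get_dep1() -> str:
    return "dep1"

@app.get("/")
async def read_main(dep0: str = Depends(get_dep0),
                    dep1: str = Depends(get_dep1)):
    return {"msg": "Hello World"}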
Operating System
macOS
Operating System Details
No response
FastAPI Version
0.69.0
Python Version
Python 3.9.7
Additional Context
No response
Top GitHub Comments
We had some discussion on AnyIO’s gitter regarding this. I have no idea how to link to a gitter message, so I’ll summarize here.
I made a gist comparing anyio to asyncio’s performance: https://gist.github.com/adriangb/4769659899abd24f5d184332a2cdbee8
I’d guess the use case you present is even more pathological @crea since not only is async stuff being done, but there’s also thread pools and stuff involved.
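For reference, a rough micro-benchmark in the spirit of that gist (not the gist itself; the 10,000-iteration count and names are illustrative) comparing dispatch of a trivial sync function to a worker thread via plain asyncio versus via AnyIO:

import asyncio
import time

import anyio
from anyio import to_thread

N = 10_000

def noop() -> str:
    # Stands in for a trivial sync dependency that does no I/O.
    return "dep"

async def bench_asyncio() -> float:
    start = time.perf_counter()
    for _ in range(N):
        await asyncio.to_thread(noop)   # Python 3.9+
    return time.perf_counter() - start

async def bench_anyio() -> float:
    start = time.perf_counter()
    for _ in range(N):
        await to_thread.run_sync(noop)
    return time.perf_counter() - start

if __name__ == "__main__":
    print("asyncio.to_thread:       ", asyncio.run(bench_asyncio()))
    print("anyio.to_thread.run_sync:", anyio.run(bench_anyio))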
Quoting @agronholm:
The TLDR is that AnyIO does considerably more stuff, so for small workloads it has quite a bit of overhead. For something like sending an HTTP request, the overhead is negligible. In your test @crea, I suspect that if you add a sleep(1e-4) or something in your dependencies, the difference will disappear.

Regarding my work @Kludex mentioned, I have a proposal in #3641, but that may be a bit ambitious, so I think FastAPI’s best concrete bet is #3902 (it doesn’t directly address the performance regression you are noticing - in fact it would probably make things worse for the toy use case above, but it does improve performance of the DI system overall).
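To make the sleep(1e-4) suggestion concrete, one of the lambda dependencies from the example could be swapped for something like this (a hypothetical tweak, not code from the issue):

import time

from fastapi import Depends

def dep0_with_work() -> str:
    # ~0.1 ms of blocking "work"; once each dependency actually blocks,
    # the thread-dispatch overhead stops dominating the measurement.
    time.sleep(1e-4)
    return "dep0"

Dep0 = Depends(dep0_with_work)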
One thing that I hadn’t noticed until now is that you are using TestClient. I think TestClient isn’t appropriate to benchmark with (it’s doing a bunch of complex stuff internally to juggle event loops, lifespans, etc.). I re-wrote your example using httpx instead: https://gist.github.com/adriangb/9615851b30ea107f0d2860f45a56f523#file-fastapi_bench-py
I think the findings are pretty much the same, but at least we know it’s not the TestClient that’s slower now.
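A rough sketch of that kind of benchmark (not the linked gist; it assumes the example app is saved as app.py and served separately with uvicorn app:app --port 8000):

import timeit

import httpx

def bench(n: int = 3000) -> float:
    # Measure n GET requests against a real ASGI server instead of TestClient.
    with httpx.Client(base_url="http://127.0.0.1:8000") as client:
        return timeit.timeit(lambda: client.get("/"), number=n)

if __name__ == "__main__":
    print(bench())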
I think it’d be nice if FastAPI at least gave you the option to disable moving sync dependencies to threads. I’m guessing that’s a big part of the performance hit you’re seeing (moving a sync function that does no IO to a thread is bad, and AnyIO only exacerbated the problem because it has more overhead than asyncio). I benchmarked this using Xpresso (which does give you that option) and got 4.4 sec (same gist).
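For context, the dispatch difference being discussed boils down to roughly this pattern (a simplified illustration, not FastAPI’s actual source): plain def dependencies are handed to the AnyIO worker thread pool, while async def dependencies are awaited inline.

import inspect

from anyio import to_thread

async def call_dependency(dep):
    # Simplified illustration of the sync-vs-async split under discussion.
    if inspect.iscoroutinefunction(dep):
        return await dep()                  # stays on the event loop
    return await to_thread.run_sync(dep)    # pays the thread-dispatch cost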