Custom `CapacityLimiter`
See original GitHub issue

Starlette is a web framework that supports both async and sync functions. Sync code runs in a threadpool, which contains a maximum of 40 threads. The concern in this issue is that those threads are shared between endpoint handlers and background tasks.
Assume we have a simple application:
```python
from time import sleep

from starlette.applications import Starlette
from starlette.background import BackgroundTasks
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

num = 0

def count_sleep() -> None:
    global num
    num += 1
    print(f"Running number {num}.")
    sleep(10)

def endpoint(request: Request) -> JSONResponse:
    tasks = BackgroundTasks()
    tasks.add_task(count_sleep)
    return JSONResponse({"message": "Hello, world!"}, background=tasks)

app = Starlette(routes=[Route("/", endpoint)])
```
Running it with uvicorn:

```shell
uvicorn main:app
```
And performing some requests (using httpie):

```shell
for run in {1..100}; do
  http :8000 &
done
```
We can observe that:

- We can see `Running number 40.`
- Wait 10 seconds…
- We can see `Running number 80.`
- Wait 10 seconds…
- We can see `Running number 100.`
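The batching above (40, then 80, then 100) follows from the fixed size of the threadpool. As an illustrative stand-in for Starlette's anyio-backed threadpool (the pool size matches, but the task timing is shortened), a plain `ThreadPoolExecutor` shows the same behavior: with 100 queued tasks, at most 40 run at once and the rest wait for a free thread.

```python
# Illustrative sketch (not Starlette itself): a fixed pool of 40 worker
# threads processes 100 queued tasks, so at most 40 run concurrently --
# the remaining tasks wait, which is why the counts arrive in batches.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 40      # matches the default thread count mentioned above
TASK_COUNT = 100
active = 0          # tasks currently running
peak = 0            # highest concurrency observed
lock = threading.Lock()

def task() -> None:
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # stand-in for the 10 s sleep in the example
    with lock:
        active -= 1

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    for _ in range(TASK_COUNT):
        pool.submit(task)

print(f"peak concurrency: {peak}")  # never exceeds POOL_SIZE
```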
I’m just bringing this up, so people are aware.
@agronholm proposed on Gitter that we create a separate `CapacityLimiter` dedicated only to handling the application (i.e. `request_response()`). This means that `n` threads (depending on the number of tokens we choose) would be reserved for `request_response()`.
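A minimal sketch of that idea, using only the stdlib: request handlers go through their own limiter (an `asyncio.Semaphore` standing in for anyio's `CapacityLimiter`), while background tasks would use the shared pool directly. All names and token counts here are illustrative, not Starlette's actual API.

```python
# Sketch: a limiter dedicated to request handlers, so background tasks
# cannot exhaust the threads reserved for request_response().
# asyncio.Semaphore stands in for anyio's CapacityLimiter.
import asyncio
from concurrent.futures import ThreadPoolExecutor

REQUEST_TOKENS = 3                          # hypothetical token count
executor = ThreadPoolExecutor(max_workers=40)

async def run_request_handler(limiter: asyncio.Semaphore, func, *args):
    # Only request handlers acquire the dedicated limiter; background
    # tasks would submit to the executor without touching it.
    loop = asyncio.get_running_loop()
    async with limiter:
        return await loop.run_in_executor(executor, func, *args)

def handler() -> str:
    return "Hello, world!"

async def main() -> list:
    limiter = asyncio.Semaphore(REQUEST_TOKENS)
    return await asyncio.gather(
        *(run_request_handler(limiter, handler) for _ in range(5))
    )

results = asyncio.run(main())
print(results)
```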
Issue Analytics

- Created a year ago
- Comments: 14 (12 by maintainers)
Top GitHub Comments
We had a similar situation in Gradio, and resolved it via this kind of approach. Wanted to share to support the issue.
I’m going to write here how to change the default `CapacityLimiter`, as it may be relevant…

Right now, you can modify the number of `total_tokens` on the default `CapacityLimiter`. Using the same application as described above, and performing the same query as mentioned, you will NOT see the batched behavior described earlier: there is no waiting time.