
Custom `CapacityLimiter`

See original GitHub issue

Starlette is a web framework that supports both async and sync endpoints. Sync code runs in a thread pool.

By default, that thread pool is capped at 40 threads:

https://github.com/agronholm/anyio/blob/4f3a8056a8b14dbe43c95039a0d731ede1083cb7/src/anyio/_backends/_asyncio.py#L2071-L2077

The concern in this issue is that those threads are shared between endpoint handlers and background tasks.

Assume we have a simple application:

from time import sleep

from starlette.applications import Starlette
from starlette.background import BackgroundTasks
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

num = 0


def count_sleep():
    global num
    num += 1
    print(f"Running number {num}.")
    sleep(10)


def endpoint(request: Request) -> JSONResponse:
    tasks = BackgroundTasks()
    tasks.add_task(count_sleep)
    return JSONResponse({"message": "Hello, world!"}, background=tasks)


app = Starlette(routes=[Route("/", endpoint)])

Running it with uvicorn:

uvicorn main:app

And performing some requests (using httpie):

for run in {1..100}; do
  http :8000 &
done

We can observe that:

  1. We see "Running number 40."
  2. Wait 10 seconds…
  3. We see "Running number 80."
  4. Wait 10 seconds…
  5. We see "Running number 100."
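The batching follows directly from the 40-thread cap: 100 ten-second tasks drain in three waves of 40, 40, and 20. A minimal stand-in (a plain ThreadPoolExecutor rather than anyio's pool, with a much shorter sleep) shows the same ceiling:

```python
import threading
from concurrent.futures import ThreadPoolExecutor
from time import sleep

lock = threading.Lock()
current = 0
peak = 0


def task(_: int) -> None:
    global current, peak
    with lock:
        current += 1
        peak = max(peak, current)
    sleep(0.05)  # stands in for the 10-second sleep
    with lock:
        current -= 1


# 100 tasks, but never more than 40 of them running at the same time.
with ThreadPoolExecutor(max_workers=40) as pool:
    list(pool.map(task, range(100)))

print("peak concurrency:", peak)
```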

I’m just bringing this up so that people are aware.

@agronholm proposed on Gitter that we create a separate CapacityLimiter dedicated solely to handling the application (i.e. request_response()). This way, n threads (depending on the number of tokens we choose) would be reserved for request_response().

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 14 (12 by maintainers)

Top GitHub Comments

1 reaction
FarukOzderim commented, Aug 8, 2022

We had a similar situation in Gradio and resolved it with this kind of approach. Sharing here to support the issue.

1 reaction
Kludex commented, Jul 8, 2022

I’m going to write down here how to change the default CapacityLimiter, as it may be relevant…

Right now, you can modify total_tokens on the default CapacityLimiter. Let’s use the same application as described above:

import anyio
from time import sleep

from starlette.applications import Starlette
from starlette.background import BackgroundTasks
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

num = 0


def count_sleep():
    global num
    num += 1
    print(f"Running number {num}.")
    sleep(10)


def endpoint(request: Request) -> JSONResponse:
    tasks = BackgroundTasks()
    tasks.add_task(count_sleep)
    return JSONResponse({"message": "Hello, world!"}, background=tasks)

# THIS IS THE ADDITION
async def startup():
    limiter = anyio.to_thread.current_default_thread_limiter()
    limiter.total_tokens = 100


app = Starlette(routes=[Route("/", endpoint)], on_startup=[startup])

You can perform the same query as mentioned:

for run in {1..100}; do
  http :8000 &
done

This time, you are NOT going to see the earlier behavior of:

  • "Running number 40."
  • Wait 10 seconds…
  • "Running number 80."
  • Wait 10 seconds…
  • "Running number 100."

The behavior now is:

  • "Running number 100."

No waiting time: with 100 tokens, all 100 background tasks run at once.


