Requests hang when running quart with uvicorn
Checklist
- The bug is reproducible against the latest release and/or master.
- There are no similar issues or pull requests to fix it yet.
Describe the bug
I ran some performance tests of a Quart app, comparing Hypercorn against Gunicorn with the Uvicorn worker, and found an issue with Gunicorn + the Uvicorn worker (and with plain Uvicorn as well). The code of the simple Quart app:
```python
from quart import Quart, request

app = Quart(__name__)

@app.route('/', methods=['POST'])
async def hello():
    data = await request.get_data()
    return data
```
Requests seem to hang when I run this Quart app with Uvicorn. The QPS was very low and I got a lot of timeout errors. It looks like the Quart app only receives the first batch of requests; after that it no longer receives anything from Uvicorn. Everything works fine when running it with Hypercorn.
Meanwhile, if you change `return data` to `return data, 200, {'Connection': 'close'}`, which disables socket reuse, the QPS goes up. It also means you cannot take advantage of keep-alive connections.
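For reference, this is the same minimal app with only the return line changed, as described above:

```python
from quart import Quart, request

app = Quart(__name__)

@app.route('/', methods=['POST'])
async def hello():
    data = await request.get_data()
    # Sending an explicit 'Connection: close' header disables keep-alive,
    # so every request gets a fresh socket.
    return data, 200, {'Connection': 'close'}
```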
Here are the numbers:
| ASGI Server | Connections | Workers | Requests/sec | P50 (ms) | P95 (ms) | P99 (ms) | Remarks |
|---|---|---|---|---|---|---|---|
| hypercorn | 512 | 4 | 2640.69 | 144.83 | 381.60 | 431.00 | |
| gunicorn with uvicorn worker | 512 | 4 | 7673.80 | 64.77 | 105.91 | 122.54 | 'Connection': 'close' |
| gunicorn with uvicorn worker | 512 | 4 | 11.77 | 72.89 | 103.14 | 105.12 | HTTP 408 count: 184 (total 707) |
I used the wrk tool to run the test:

```
wrk -c 512 -d 1m -t 16 -s payload.lua http://127.0.0.1:8080
```

with the following payload.lua:

```lua
wrk.method = "POST"
wrk.headers["Content-Type"] = "application/json"
wrk.body = [[
{"Inputs":[
{"Text":"They have been given more opportunities to influence the formation and activities of the legislative and executive bodiies.","ModeOverride":"Proactive"}
],"RequestVersion":2}
]]
```
To reproduce
Run the above example code with

```
gunicorn -b 0.0.0.0:8080 -w 4 -k uvicorn.workers.UvicornWorker example:app
```

or

```
uvicorn --host 0.0.0.0 --port 8080 --workers 4 --log-level debug example:app
```
Expected behavior
A QPS of 8000 or more is expected.
Actual behavior
Requests hang when using keep-alive HTTP connections.
Debugging material
Requests hang after the first batch.
```
INFO: 127.0.0.1:46572 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:46554 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:46528 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:45868 - "POST / HTTP/1.1" 408 Request Timeout
INFO: 127.0.0.1:45826 - "POST / HTTP/1.1" 408 Request Timeout
INFO: 127.0.0.1:45902 - "POST / HTTP/1.1" 408 Request Timeout
```
Environment
- OS: Ubuntu 18.04.4 LTS
- Python: 3.7.9
- Uvicorn: Running uvicorn 0.12.2 with CPython 3.7.9 on Linux
Additional context
- issue to the quart: https://github.com/pgjones/quart/issues/113
- previous keep alive handling issue: https://github.com/encode/uvicorn/issues/241
So far, I haven't seen the same issue with other ASGI apps, e.g. Starlette or a plain minimal ASGI app, so it may be a compatibility problem between Quart and Uvicorn.
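For comparison, a simple raw ASGI app of the kind mentioned above might look roughly like this (a sketch, not the exact app used in the tests); it echoes the POST body back and stops reading events once the body is complete:

```python
async def app(scope, receive, send):
    # Minimal ASGI app that echoes the request body back.
    assert scope["type"] == "http"

    body = b""
    while True:
        message = await receive()
        if message["type"] == "http.request":
            body += message.get("body", b"")
            if not message.get("more_body", False):
                break  # full body received; stop awaiting further events
        elif message["type"] == "http.disconnect":
            return

    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})
```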
Top GitHub Comments
@euri10 If you add a `break` into the `handle_messages` function, the issue will go away, like the sketch below.
There may be some race condition in the disconnect logic of uvicorn that prevents us from ever reaching the `elif message["type"] == "http.disconnect":` branch.
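For illustration, the change being discussed amounts to breaking out of the ASGI receive loop once the request body is complete, instead of waiting for an `http.disconnect` event. A rough sketch of such a loop, assuming a Quart-like `handle_messages` (not Quart's actual source; the body-buffer calls are illustrative):

```python
async def handle_messages(self, request, receive):
    # Consume ASGI events until the full request body has arrived.
    while True:
        message = await receive()
        if message["type"] == "http.request":
            request.body.append(message.get("body", b""))  # illustrative body buffer
            if not message.get("more_body", False):
                request.body.set_complete()
                # Proposed workaround: stop awaiting receive() here, otherwise
                # the task may block and never observe the next keep-alive request.
                break
        elif message["type"] == "http.disconnect":
            return
```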
Since it's related to the above code, I still use this channel.
Sure.