http.response.start+body
Hi everyone.
Separate calls like the following add a lot of latency:
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
    await send(
        {
            "type": "http.response.start",
            "status": self.status_code,
            "headers": self.raw_headers,
        }
    )
    await send({"type": "http.response.body", "body": self.body})
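For context, a minimal runnable version of the standard two-message response looks like the sketch below. The `PlainTextResponse` class and the stub `send`/`receive` callables are illustrative; the `Scope`/`Receive`/`Send` aliases follow the usual ASGI typing conventions:

```python
import asyncio
from typing import Any, Awaitable, Callable, MutableMapping

# Standard ASGI typing conventions (as used by asgiref / Starlette).
Scope = MutableMapping[str, Any]
Message = MutableMapping[str, Any]
Receive = Callable[[], Awaitable[Message]]
Send = Callable[[Message], Awaitable[None]]


class PlainTextResponse:
    """Minimal response that sends the two standard ASGI messages."""

    def __init__(self, body: bytes, status_code: int = 200) -> None:
        self.body = body
        self.status_code = status_code
        self.raw_headers = [(b"content-type", b"text/plain")]

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )
        await send({"type": "http.response.body", "body": self.body})


async def main() -> list:
    sent: list = []

    async def send(message: Message) -> None:
        # Stub transport: just record what the app sent.
        sent.append(message)

    async def receive() -> Message:
        return {"type": "http.request"}

    response = PlainTextResponse(b"hello")
    await response({"type": "http"}, receive, send)
    return sent


messages = asyncio.run(main())
print([m["type"] for m in messages])
# → ['http.response.start', 'http.response.body']
```

Every complete response passes through `send` twice, which is exactly the overhead the proposal targets.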
Reducing the number of calls in the chain removes the extra awaits and significantly improves performance:
def __call__(self, scope: Scope, receive: Receive, send: Send) -> Awaitable[Any]:
    return send(
        {
            "type": "http.response.start+body",
            "status": self.status_code,
            "headers": self.raw_headers,
            "body": self.body,
        }
    )
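Note that "http.response.start+body" is not part of the ASGI spec; it is the extension this issue proposes. One way a server (or a compatibility shim) could support it without changing its existing pipeline is to split the combined message back into the two standard ones. The `split_combined_message` helper below is a hypothetical sketch of that idea:

```python
from typing import Any, MutableMapping

Message = MutableMapping[str, Any]


def split_combined_message(message: Message) -> list:
    """Expand the hypothetical combined message into the two standard
    ASGI messages; pass every other message through unchanged."""
    if message["type"] != "http.response.start+body":
        return [message]
    start: Message = {
        "type": "http.response.start",
        "status": message["status"],
        "headers": message.get("headers", []),
    }
    body: Message = {
        "type": "http.response.body",
        "body": message.get("body", b""),
        # No more_body flag: the combined form implies a complete response.
    }
    return [start, body]


out = split_combined_message(
    {"type": "http.response.start+body", "status": 200, "headers": [], "body": b"hi"}
)
print([m["type"] for m in out])
# → ['http.response.start', 'http.response.body']
```

With a shim like this, apps that opt into the combined message stay interoperable with servers that only understand the standard protocol.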
Please give this some attention.
Issue Analytics
- State:
- Created 2 years ago
- Comments:6 (2 by maintainers)

I don’t see what this is really gaining for the async case - if you want ultimate response performance, you probably shouldn’t be using ASGI in the first place. The latency it adds should be minimal with most server implementations, too, especially if you’re already handling a lot of concurrent connections.
Unless there’s a good argument to take this on other than “it’s slightly more efficient”, I will have to say no to this one.
@andrewgodwin In a superfast app like Falcon, the difference is about 10% in RPS.
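The cost being argued about can be isolated with a micro-benchmark that awaits a no-op `send` either twice or once per response. This sketch only measures Python's await machinery (no real I/O), and the absolute numbers are machine-dependent:

```python
import asyncio
import time


async def send(message: dict) -> None:
    # Stub transport: the cost measured here is only the await machinery.
    pass


async def respond_two_messages() -> None:
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"hello"})


async def respond_one_message() -> None:
    await send(
        {
            "type": "http.response.start+body",
            "status": 200,
            "headers": [],
            "body": b"hello",
        }
    )


def benchmark(coro_factory, n: int = 100_000) -> float:
    async def run() -> None:
        for _ in range(n):
            await coro_factory()

    start = time.perf_counter()
    asyncio.run(run())
    return time.perf_counter() - start


two = benchmark(respond_two_messages)
one = benchmark(respond_one_message)
print(f"two messages: {two:.3f}s, one message: {one:.3f}s")
```

In a real server the relative gain shrinks as actual socket writes dominate, which is consistent with the ~10% RPS figure being visible mainly in very fast frameworks.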