Client-streaming RPC backpressure is broken on Kestrel
See original GitHub issue. Related to #104. Demo code: https://github.com/Avilad/grpc-dotnet-flow-control-test.
When a client streams many small messages to a grpc-dotnet server, and the server consumes these messages more slowly than they are produced, the client's WriteAsync() calls block as expected. However, for each MoveNext() call the server makes, WriteAsync() completes more than once before blocking again. This (presumably) causes a buildup of messages in Kestrel's buffers, eventually leading to a "Request body too large" exception.
When the Grpc.Core server is used instead, backpressure works as expected, allowing on average one WriteAsync() call for each MoveNext() call.
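The expected behavior comes from HTTP/2 flow control, which acts like a bounded buffer between writer and reader: once the window is full, further writes block until the reader drains data. A minimal conceptual sketch, modeling the window as a bounded Go channel (the function and message names here are illustrative, not taken from the demo repo):

```go
package main

import "fmt"

// produceConsume models HTTP/2 flow control as a bounded buffer:
// the producer's send blocks whenever the window is full, so it can
// never run more than windowSize messages ahead of the consumer.
func produceConsume(n, windowSize int) []string {
	window := make(chan string, windowSize)
	consumed := make(chan []string)

	go func() {
		var got []string
		for msg := range window { // each receive frees one window slot
			got = append(got, msg)
		}
		consumed <- got
	}()

	for i := 0; i < n; i++ {
		window <- fmt.Sprintf("msg-%d", i) // blocks while the window is full
	}
	close(window)
	return <-consumed
}

func main() {
	for _, msg := range produceConsume(3, 1) {
		fmt.Println("consumed:", msg)
	}
}
```

With a window of one message, the producer can never run more than one message ahead of the consumer, which is roughly the one-WriteAsync()-per-MoveNext() behavior the issue reports seeing with Grpc.Core but not with Kestrel.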
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 14 (9 by maintainers)
Top Results From Across the Web
How does gRPC client-streaming flow control work in go?
There will be backpressure. Messages are only successfully sent when there's enough flow control window for them; otherwise SendMsg() will block ...

Seamless back-pressure handling in gRPC-Kotlin
Back-pressure occurs when the producer is faster than the consumer. If the producer ignores it and doesn't slow down, excessive buffering ...

Troubleshoot gRPC on .NET
Mismatch between client and service SSL/TLS configuration; call a gRPC service with an untrusted/invalid certificate; call insecure gRPC ...

How does grpc streaming work?
My understanding is that the sender of the stream of messages does not wait for the consumer to "pull" it, but rather it...

Performance best practices with gRPC
Only gRPC calls can be load balanced between endpoints. Once a streaming gRPC call is established, all messages sent over the stream go...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
FIXED
@Avilad seriously, great job at debugging/diagnosing this issue!