Low throughput with large messages and high latency (25ms RTT)
Greetings!
I am having performance issues receiving/sending large messages (~100 KB to 100 MB) over a connection with 25 ms RTT. The transfer rate is ~2 MB/s, but serving static files of the same size from the same Kestrel instance reaches >25 MB/s (tested in Chrome with a stopwatch). When testing on localhost, I see ~200 MB/s.
I see the issue both with large unary calls and with streaming (breaking the payload into chunks between 10 KB and 10 MB). Poking around in Wireshark, it looks like the large static downloads dramatically increase the TCP window to compensate for the latency during the transfer, while the gRPC transfers don't. I assume this is the issue. I have not been able to find any settings that change this.
Any ideas? Thanks!
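As a rough sanity check (my reasoning, not something confirmed in the issue): with window-based flow control the sender can have at most one window of data in flight per round trip, so throughput is capped at roughly window / RTT. If the effective HTTP/2 window stays near the spec default of 65,535 bytes, that bound at 25 ms RTT lands right around the rate I'm seeing:

// Back-of-the-envelope bound: throughput <= flow-control window / RTT.
// Assumes the effective window stays near the HTTP/2 spec default (65,535 bytes).
const double windowBytes = 65_535;
const double rttSeconds = 0.025;                  // 25 ms round trip
var maxBytesPerSecond = windowBytes / rttSeconds; // ~2.6 MB/s
Console.WriteLine($"{maxBytesPerSecond / 1_000_000:F1} MB/s upper bound");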
The Service
// Class-level cache for the payload; declared here so the snippet is complete.
private byte[] _bigchunk;

public override Task<DataAccessResponse> TestDownload(DataAccessRequest request, ServerCallContext context)
{
    // Lazily allocate a 100 MB buffer and return it as a single unary response.
    _bigchunk ??= new byte[100_000_000];
    var response = new DataAccessResponse
    {
        DataChunk = ByteString.CopyFrom(_bigchunk)
    };
    return Task.FromResult(response);
}
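For the streaming tests mentioned above, the handler looked roughly like the sketch below. The server-streaming RPC name TestDownloadStream and the 1 MB chunk size are placeholders for illustration; the actual .proto is not shown here.

// Sketch of a chunked server-streaming variant (RPC name assumed, not in the .proto above).
public override async Task TestDownloadStream(DataAccessRequest request,
    IServerStreamWriter<DataAccessResponse> responseStream, ServerCallContext context)
{
    _bigchunk ??= new byte[100_000_000];
    const int chunkSize = 1_000_000; // I tried chunk sizes between 10 KB and 10 MB

    for (var offset = 0; offset < _bigchunk.Length; offset += chunkSize)
    {
        var length = Math.Min(chunkSize, _bigchunk.Length - offset);
        await responseStream.WriteAsync(new DataAccessResponse
        {
            DataChunk = ByteString.CopyFrom(_bigchunk, offset, length)
        });
    }
}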
Server Setup
services.AddGrpc(grpc =>
{
    grpc.MaxReceiveMessageSize = 100_000_000;
});

webBuilder.UseKestrel(options =>
{
    // I have used the defaults and played around with these settings - no difference
    options.Limits.Http2.InitialConnectionWindowSize = int.MaxValue;
    options.Limits.Http2.InitialStreamWindowSize = int.MaxValue;
    options.Limits.Http2.MaxFrameSize = 16_777_215;
    options.AllowSynchronousIO = true;
});
Dependencies
<PackageReference Include="Google.Protobuf" Version="3.13.0" />
<PackageReference Include="Grpc.AspNetCore.Server" Version="2.35.0" />
<PackageReference Include="Grpc.Net.Common" Version="2.35.0" />
<PackageReference Include="Grpc.Net.ClientFactory" Version="2.35.0" />
<PackageReference Include="Grpc.Core.Api" Version="2.35.0" />
.NET 5.0
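The client side also has to allow a 100 MB message, since the channel defaults to a 4 MB receive limit. My channel setup looks roughly like this; the address and the generated DataAccess.DataAccessClient type name are placeholders because the .proto is not included:

// Client-side sketch (requires using Grpc.Net.Client;). Names noted above are assumptions.
var channel = GrpcChannel.ForAddress("https://example-server:5001", new GrpcChannelOptions
{
    MaxReceiveMessageSize = 100_000_000 // default is 4 MB, too small for the test payload
});
var client = new DataAccess.DataAccessClient(channel);
var response = await client.TestDownloadAsync(new DataAccessRequest());
Console.WriteLine($"Received {response.DataChunk.Length} bytes");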
Issue Analytics
- Created 3 years ago
- Comments: 8 (5 by maintainers)
Top GitHub Comments
Panthers unite!
Thanks for the help. I have workarounds until this is fixed, hopefully in 6.0.