UNAVAILABLE: GOAWAY closed buffered stream. HTTP/2 error code: NO_ERROR
See original GitHub issue

What version of gRPC-Java are you using?
1.37.0
What is your environment?
Amazon ECS, with amazoncorretto:11 (JDK) as the build and run image.
What did you expect to see?
Normal inter-service communication with no errors on client and server.
What did you see instead?
Errors with the message UNAVAILABLE: GOAWAY closed buffered stream. HTTP/2 error code: NO_ERROR appearing in the gRPC client at regular intervals.
Steps to reproduce the bug
- Create a gRPC server
- Create a client for this server that calls it through a blocking stub (a minimal client sketch follows this list)
- If the server-side method takes a long time to execute, the above error starts appearing in the client logs roughly every minute
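For reference, a minimal sketch of the client side described in these steps, assuming the generated DemoGrpc blocking stub visible in the stack trace; the target address and the HasAccessRequest message type are placeholders, not taken from the original report:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GrpcClientSketch {
  public static void main(String[] args) {
    // Placeholder target; the real service runs on Amazon ECS.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("demo-server.local", 50051)
        .usePlaintext()
        .build();

    // DemoGrpc and hasAccess(...) come from the stack trace; HasAccessRequest
    // is a hypothetical request message used only for illustration.
    DemoGrpc.DemoBlockingStub stub = DemoGrpc.newBlockingStub(channel);
    HasAccessRequest request = HasAccessRequest.newBuilder().build();

    // With a slow server-side handler and many concurrent calls like this one,
    // the client starts logging the GOAWAY error described above.
    stub.hasAccess(request);

    channel.shutdown();
  }
}
```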
Additional info: we don’t see these errors when the stub method does little processing and just returns dummy data. The error rate increases in the following scenarios:
- When we increase the RPM or the number of concurrent calls to the server
- When the RPC method takes longer to process
Error message: UNAVAILABLE: GOAWAY closed buffered stream. HTTP/2 error code: NO_ERROR
Stack trace:
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
at com.demo.gpc.DemoGrpc$DemoBlockingStub.hasAccess(DemoGrpc.java:209)
at com.demo.service.GrpcClient.hasAccess(GrpcClient.java:27)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Issue Analytics
- Created: 2 years ago
- Comments: 5 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
A “buffered” stream is one that wasn’t sent yet because of the server’s MAX_CONCURRENT_STREAMS setting. It seems likely the proxy you are using cycles connections after they reach a certain age. RPCs are eagerly assigned to transports, so when such a connection is killed, the buffered RPCs are killed with it. (A sketch of where that server-side limit is configured follows below.)
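As context for the setting mentioned above, a sketch of where that limit lives on the server side: in grpc-java’s Netty transport, maxConcurrentCallsPerConnection is the knob that clients see as HTTP/2 MAX_CONCURRENT_STREAMS. The port, limit, and DemoServiceImpl are placeholders for illustration:

```java
import java.io.IOException;

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class DemoServerSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    // maxConcurrentCallsPerConnection caps concurrent calls per connection; in the
    // Netty transport this is advertised to clients as MAX_CONCURRENT_STREAMS.
    // The port, limit, and DemoServiceImpl are placeholders for illustration.
    Server server = NettyServerBuilder.forPort(50051)
        .maxConcurrentCallsPerConnection(100)
        .addService(new DemoServiceImpl())
        .build()
        .start();
    server.awaitTermination();
  }
}
```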
For milder cases, enabling retries can address this, since gRPC performs transparent retries. Unfortunately, retries are incompatible with stats-keeping at the moment. gRPC will also transparently retry only once in this sort of case, though we are considering removing that limit and making it unlimited.
Because the client is hitting the server’s MAX_CONCURRENT_STREAMS limit, there will be extra latency as gRPC waits for in-flight RPCs to complete before starting new ones. You may consider creating multiple Channels or using a load-balancing policy like round_robin that will create multiple connections (sketched below).
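A hedged sketch of those two mitigations on the client channel, assuming the same placeholder target as in the earlier client sketch; note that round_robin only creates multiple connections when the name resolver returns more than one address:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ChannelConfigSketch {
  // Sketch only, not a drop-in fix:
  //  - enableRetry() turns on gRPC's retry support (transparent retry included)
  //  - defaultLoadBalancingPolicy("round_robin") spreads RPCs across subchannels,
  //    which helps only if the resolver returns multiple addresses
  public static ManagedChannel buildChannel() {
    return ManagedChannelBuilder
        .forAddress("demo-server.local", 50051) // placeholder target
        .enableRetry()
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();
  }
}
```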
Closing as this is long-standing expected behavior, but watch #8269 for the solution. If I missed something, comment and we can reopen.