[grpc-js] Error: 13 INTERNAL: Received RST_STREAM with code 0
Problem description
The grpc-js client is throwing the following exception under high load:
ERROR Error: 13 INTERNAL: Received RST_STREAM with code 0
at Object.callErrorFromStatus (/repro/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/repro/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
at Object.onReceiveStatus (/repro/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
at Object.onReceiveStatus (/repro/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
at Http2CallStream.outputStatus (/repro/node_modules/@grpc/grpc-js/build/src/call-stream.js:117:74)
at Http2CallStream.maybeOutputStatus (/repro/node_modules/@grpc/grpc-js/build/src/call-stream.js:156:22)
at Http2CallStream.endCall (/repro/node_modules/@grpc/grpc-js/build/src/call-stream.js:142:18)
at ClientHttp2Stream.<anonymous> (/repro/node_modules/@grpc/grpc-js/build/src/call-stream.js:420:22)
at ClientHttp2Stream.emit (events.js:314:20)
at emitCloseNT (internal/streams/destroy.js:81:10) {
code: 13,
details: 'Received RST_STREAM with code 0',
metadata: Metadata { internalRepr: Map(0) {}, options: {} }
}
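For context, RST_STREAM code 0 is the HTTP/2 NO_ERROR code, so the server (or an intermediary such as a proxy) is resetting the stream without sending a gRPC status, and grpc-js surfaces that as status 13 INTERNAL. A minimal sketch of where this error reaches application code; the proto file, EchoService, and address are assumptions for illustration, not from the original report:

```js
// Hedged sketch: a unary call that can fail with the error above.
// 'echo.proto', EchoService, and localhost:50051 are illustrative
// assumptions; they are not part of the original report.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const proto = grpc.loadPackageDefinition(protoLoader.loadSync('echo.proto'));
const client = new proto.EchoService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

client.echo({ message: 'ping' }, (err, response) => {
  if (err) {
    // err.code is a grpc.status value; 13 is grpc.status.INTERNAL, which is
    // how grpc-js reports a stream reset without a proper gRPC status.
    if (err.code === grpc.status.INTERNAL) {
      console.error('stream was reset:', err.details);
    }
    return;
  }
  console.log(response);
});
```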
Reproduction steps
I have a repro repository in the works and will update with the link.
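Until that link lands, here is a minimal sketch of the kind of load test that tends to surface the error: many concurrent unary calls against a plain grpc-js server. The proto file, service name, and address are illustrative assumptions, not the actual repro.

```js
// Hedged load sketch: fire many concurrent unary calls and count how many
// fail with the RST_STREAM error. Proto file, service, and address are
// illustrative assumptions, not the actual repro.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const proto = grpc.loadPackageDefinition(protoLoader.loadSync('echo.proto'));
const client = new proto.EchoService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

let failures = 0;
const calls = Array.from({ length: 10000 }, (_, i) =>
  new Promise((resolve) => {
    client.echo({ message: `msg-${i}` }, (err) => {
      if (err && /RST_STREAM/.test(err.details || '')) failures += 1;
      resolve();
    });
  })
);

Promise.all(calls).then(() => {
  console.log(`${failures} calls failed with RST_STREAM`);
});
```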
Environment
- OS name, version and architecture: macOS Big Sur v11.0 [20A5343j]
- Node version: v14.7.0 (the Node version seems irrelevant; this also reproduces with v10.22.0 and v12.18.3)
- Node installation method: nvs
- If applicable, compiler version: n/a
- Package name and version: grpc-js v1.1.3
Additional context
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 16
- Comments: 18 (4 by maintainers)
Top Results From Across the Web

JS SDK v2.0.16 Error: 13 INTERNAL: received RST_STREAM ...
At the moment, there are three endpoints which aren't working very well and result in an RST_STREAM error which the SDK doesn't handle...

grpc/grpc - Gitter
Anyone have experience w/ mutual TLS in node.js? I'm getting Received RST_STREAM with code 2 (Internal server error) which I think means auth...

Received RST_STREAM with code 0 - Bountysource
[grpc-js] Error: 13 INTERNAL: Received RST_STREAM with code 0.

avoid RST on half-close? - Google Groups
It looks like Envoy resets bidirectional gRPC streams when only the client side of the stream closes. ... INTERNAL, Received RST_STREAM with error...

grpc/grpc-node @grpc/grpc-js@1.1.2 on GitHub
... Add the string "Internal server error" to the commonly reported error "Received RST_STREAM with code 2" to clarify what causes that error...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I am seeing this as well. I am running a TypeScript React app with a Node.js server. The server is throwing this error fairly regularly, something like once an hour. I also get this similar error:
@firebase/firestore: Firestore (7.19.1): Connection GRPC stream error. Code: 14 Message: 14 UNAVAILABLE: Stream refused by server
but about 20% as frequently. I had a crash on my site on Monday where I was not able to read or write the database, and I am trying to figure out if it's related.
These are the three errors of this kind that I am receiving:
@firebase/firestore: Firestore (7.19.1): Connection GRPC stream error. Code: 13 Message: 13 INTERNAL: Received RST_STREAM with code 2
@firebase/firestore: Firestore (7.19.1): Connection GRPC stream error. Code: 14 Message: 14 UNAVAILABLE: Stream refused by server
@firebase/firestore: Firestore (7.19.1): Connection GRPC stream error. Code: 13 Message: 13 INTERNAL: Received RST_STREAM with code 0
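The Firestore SDK logs these and generally re-establishes the stream on its own, but for code that calls grpc-js directly and hits codes 13/14 like the ones above, a simple retry wrapper for transient resets might look like the following sketch (withRetries and makeCall are illustrative names, not from this thread):

```js
// Hedged sketch: retry a call when the failure looks like a transient
// transport reset (13 INTERNAL with RST_STREAM details, or 14 UNAVAILABLE).
// withRetries and makeCall are illustrative names, not from this thread.
const grpc = require('@grpc/grpc-js');

const RETRYABLE = new Set([grpc.status.INTERNAL, grpc.status.UNAVAILABLE]);

async function withRetries(makeCall, attempts = 3) {
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await makeCall(); // one attempt; must return a Promise
    } catch (err) {
      const transient =
        RETRYABLE.has(err.code) &&
        (err.code !== grpc.status.INTERNAL ||
          /RST_STREAM/.test(err.details || ''));
      if (!transient || i === attempts - 1) throw err;
      // simple exponential backoff before the next attempt
      await new Promise((r) => setTimeout(r, 100 * 2 ** i));
    }
  }
}
```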
If it helps, we had these errors when running on GKE (Google Cloud's Kubernetes engine) under high load and the resulting memory pressure. Essentially, our upper memory limit was too low, which caused the workloads to use more memory than their limits, which in turn put memory pressure on the nodes, which in turn made everything fall apart…
In our specific case, we had a PubSub listener with surges at times.
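For that scenario, capping how much a Pub/Sub subscriber pulls at once can help keep memory inside the container limit. A sketch using the @google-cloud/pubsub flow control options; the subscription name, the limits, and the handle() function are assumptions for illustration:

```js
// Hedged sketch: cap concurrent Pub/Sub delivery so a surge cannot balloon
// memory past the container limit. 'my-subscription', the limits, and
// handle() are illustrative assumptions.
const { PubSub } = require('@google-cloud/pubsub');

const subscription = new PubSub().subscription('my-subscription', {
  flowControl: {
    maxMessages: 50,             // at most 50 unacked messages in flight
    maxBytes: 50 * 1024 * 1024,  // and at most ~50 MiB of message data
  },
});

subscription.on('message', async (message) => {
  try {
    await handle(message); // application-specific processing (placeholder)
    message.ack();
  } catch (e) {
    message.nack(); // let Pub/Sub redeliver later instead of buffering here
  }
});
```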