
Half-closed without a request

See original GitHub issue

gRPC version: 1.40.0

Environment: a client and server communicating over localhost.

I have a simple client and server, where the server does nothing besides responding to an incoming RPC:

  @Override
  public void concat(PingRequest request, StreamObserver<PingResponse> responseObserver) {
    responseObserver.onNext(PingResponse.newBuilder().build());
    responseObserver.onCompleted();
  }

The client sends 100 RPCs with a 100 ms deadline in parallel over a single channel that is in a READY state, and shuts down after all RPCs have been sent.
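The shape of that client can be mimicked with plain java.util.concurrent primitives (a toy model, not gRPC API: the scheduled cancel stands in for grpc-java's withDeadlineAfter plus the resulting RST_STREAM, and all names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeadlineRace {

    /** Fires nCalls fake "RPCs" in parallel, cancelling each one deadlineMillis
     *  after submission; returns {completed, deadlineExceeded}. */
    static int[] run(int nCalls, long deadlineMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nCalls);
        ScheduledExecutorService deadlines = Executors.newSingleThreadScheduledExecutor();
        List<Future<String>> calls = new ArrayList<>();
        for (int i = 0; i < nCalls; i++) {
            // Every 4th call models a "server" that is still warming up and
            // has not pulled the request off the transport yet.
            final boolean slow = i % 4 == 0;
            Future<String> call = pool.submit(() -> {
                if (slow) Thread.sleep(1_000);
                return "OK";
            });
            // Deadline expiry cancels the call, the moral equivalent of the
            // client's OUTBOUND RST_STREAM errorCode=8 (CANCEL) in the log below.
            deadlines.schedule(() -> call.cancel(true), deadlineMillis, TimeUnit.MILLISECONDS);
            calls.add(call);
        }
        int ok = 0, deadlineExceeded = 0;
        for (Future<String> call : calls) {
            try { call.get(); ok++; }
            catch (CancellationException e) { deadlineExceeded++; }
        }
        pool.shutdownNow();
        deadlines.shutdownNow();
        return new int[] { ok, deadlineExceeded };
    }

    public static void main(String[] args) throws Exception {
        int[] r = run(16, 100);
        System.out.println("ok=" + r[0] + " deadline_exceeded=" + r[1]);
    }
}
```

The interesting case is the slow task: the deadline fires before the "server" has even started reading, which is exactly the window in which the errors below show up.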

Server side:
13:49:11.197  INFO [ault-executor-5] com.example.Main             : trace-seq: 97 - ServerCall.Listener.close with status code: INTERNAL, desc: Half-closed without a request

Client side:
13:49:10.881 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND HEADERS: streamId=145 headers=GrpcHttp2OutboundHeaders[:authority: 127.0.0.1:5993, :path: /example.test.v1.PingService/Ping, :method: POST, :scheme: http, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.40.0, trace-seq: 97, grpc-accept-encoding: gzip, grpc-trace-bin: AAB33fg6kgpaWIFFOW9S6A41Af/EVN8t8akZAgA, grpc-timeout: 27946551n] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false

13:49:10.979 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND DATA: streamId=145 padding=0 endStream=true length=16 bytes=000000000b0a04666f6f6f1203626172

13:49:11.004 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND RST_STREAM: streamId=145 errorCode=8

(trace-seq: 97 is metadata that gets sent along with each RPC)

Right after the JVM has started, we get a bunch of RPCs with status: INTERNAL, desc: Half-closed without a request on the server side, while the client reports the expected DEADLINE_EXCEEDED. So this happens before any JIT compilation or lazy class loading has warmed the server up.

Re-running the client (keeping the server running) does not produce those errors, and we can see that only onCancel gets called when the client cancels an RPC, instead of the close(Status.INTERNAL) -> onHalfClose -> onCancel sequence that we saw before.

I guess that gRPC can't do much about the slow JVM startup, but is INTERNAL, desc: Half-closed without a request expected in those cases?

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 16 (12 by maintainers)

Top GitHub Comments

1 reaction
ejona86 commented on Feb 3, 2022

There’s no compression in the reproduction. So I think this might be it:

  1. Transport receives full request
  2. In the successful cases the application request()s a message, so delivery to the application begins. In the failure case, this is too slow and doesn’t happen
  3. Transport receives RST_STREAM, which closes the deframer with an option to throw away any messages. https://github.com/grpc/grpc-java/blob/74b054b6b107f0f60dcedfe5278fd1cee597d0c7/core/src/main/java/io/grpc/internal/AbstractServerStream.java#L286
  4. The deframer cleans up, but it hadn’t yet started processing a message, so nextFrame == null so hasPartialMessage == false. https://github.com/grpc/grpc-java/blob/7308d920346e2e1cae640ed1f7e4dfad1b032df8/core/src/main/java/io/grpc/internal/MessageDeframer.java#L216
  5. That makes AbstractServerStream think this is a graceful end, so it inserts halfClosed(), even though it didn’t deliver the message. https://github.com/grpc/grpc-java/blob/74b054b6b107f0f60dcedfe5278fd1cee597d0c7/core/src/main/java/io/grpc/internal/AbstractServerStream.java#L231

Could the fix be as easy as avoiding the entire endOfStream case if immediateCloseRequested == true? Looks like it. I haven’t run the reproduction yet to check.
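The five steps, together with the proposed immediateCloseRequested guard, can be sketched as a toy model (plain Java with simplified, made-up names; this illustrates the logic described above, it is not the actual grpc-java implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy model of the server-side deframer close logic described above: shows why
 *  an RST_STREAM arriving before the application calls request() can be
 *  misreported as a graceful half-close. */
public class DeframerCloseModel {
    enum Outcome { HALF_CLOSED, DEALLOCATED }   // what the server stream reports up

    static final class Stream {
        final Deque<byte[]> unprocessed = new ArrayDeque<>(); // frames received, not yet requested
        byte[] nextFrame;                // non-null once message processing has started
        boolean endOfStreamSeen;         // the endStream flag already arrived from the client
        boolean immediateCloseRequested; // set when RST_STREAM forces the deframer closed

        void receive(byte[] frame, boolean endStream) {   // step 1: full request arrives
            unprocessed.add(frame);
            endOfStreamSeen |= endStream;
        }

        void request() {                                  // step 2: app asks for a message
            nextFrame = unprocessed.poll();
        }

        Outcome rstStream(boolean fixed) {                // steps 3-5
            immediateCloseRequested = true;
            boolean hasPartialMessage = nextFrame != null; // step 4: null => false
            if (fixed && immediateCloseRequested) {
                // proposed fix: a forced close is never a graceful end-of-stream
                return Outcome.DEALLOCATED;
            }
            if (endOfStreamSeen && !hasPartialMessage) {
                // step 5: bogus halfClosed() -> "Half-closed without a request"
                return Outcome.HALF_CLOSED;
            }
            return Outcome.DEALLOCATED;
        }
    }

    public static void main(String[] args) {
        Stream bug = new Stream();
        bug.receive(new byte[16], true);  // full request + endStream arrive together,
                                          // but the app is too slow to request()
        System.out.println("without fix: " + bug.rstStream(false));

        Stream fixed = new Stream();
        fixed.receive(new byte[16], true);
        System.out.println("with fix:    " + fixed.rstStream(true));
    }
}
```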

@tommyulfsparre, the reproduction looks to have been very helpful here.

1 reaction
ejona86 commented on Feb 3, 2022

The current implementation is categorizing it as a server error.

@irock, INTERNAL is not a “server error.” It’s a “things are fundamentally broken.” The INTERNAL usage on that line is appropriate, because the client believes the RPC is client-streaming instead of unary, so the client and server appear not to agree on the schema.

It is a bug, though, that that line is executed at all. But the “wrong status” part of it doesn’t matter too much, because the client already cancelled and so won’t see it.
