Half-closed without a request
gRPC version: 1.40.0
Environment: A client and server communicating over localhost.
I have a simple client and a server that does nothing besides responding to an incoming RPC:
@Override
public void concat(PingRequest request, StreamObserver<PingResponse> responseObserver) {
responseObserver.onNext(PingResponse.newBuilder().build());
responseObserver.onCompleted();
}
The client sends 100 RPCs with a 100ms deadline in parallel over a single channel that is in a READY state, and shuts down after all RPCs have been sent.
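For reference, a minimal sketch of such a client, assuming generated stub, message, and method names (PingServiceGrpc, PingRequest, ping()) based on the /example.test.v1.PingService/Ping path in the logs below; this is not the reporter's actual code:

import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import example.test.v1.PingRequest;
import example.test.v1.PingServiceGrpc;

public class PingClient {
  public static void main(String[] args) throws InterruptedException {
    // Single plaintext channel to the local server, as in the reproduction.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("127.0.0.1", 5993)
        .usePlaintext()
        .build();

    PingServiceGrpc.PingServiceFutureStub stub = PingServiceGrpc.newFutureStub(channel);

    // Fire 100 RPCs in parallel, each with a 100ms deadline.
    for (int i = 0; i < 100; i++) {
      stub.withDeadlineAfter(100, TimeUnit.MILLISECONDS)
          .ping(PingRequest.newBuilder().build());
    }

    // Shut down the channel after all RPCs have been sent.
    channel.shutdown();
    channel.awaitTermination(5, TimeUnit.SECONDS);
  }
}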
Server side:
13:49:11.197 INFO [ault-executor-5] com.example.Main : trace-seq: 97 - ServerCall.Listener.close with status code: INTERNAL, desc: Half-closed without a request
Client side:
13:49:10.881 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND HEADERS: streamId=145 headers=GrpcHttp2OutboundHeaders[:authority: 127.0.0.1:5993, :path: /example.test.v1.PingService/Ping, :method: POST, :scheme: http, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.40.0, trace-seq: 97, grpc-accept-encoding: gzip, grpc-trace-bin: AAB33fg6kgpaWIFFOW9S6A41Af/EVN8t8akZAgA, grpc-timeout: 27946551n] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
13:49:10.979 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND DATA: streamId=145 padding=0 endStream=true length=16 bytes=000000000b0a04666f6f6f1203626172
13:49:11.004 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x3141a0f1, L:/127.0.0.1:50244 - R:/127.0.0.1:5993] OUTBOUND RST_STREAM: streamId=145 errorCode=8
(trace-seq: 97 is metadata that gets sent along with each RPC.)
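The report doesn't show how that header is attached; one common way to do it in grpc-java is a header-attaching interceptor, sketched here with a hypothetical helper name:

import io.grpc.Metadata;
import io.grpc.stub.AbstractStub;
import io.grpc.stub.MetadataUtils;

public class TraceSeqStubs {
  // Hypothetical helper: returns a stub that sends "trace-seq: <seq>" with each RPC it issues.
  static <S extends AbstractStub<S>> S withTraceSeq(S stub, int seq) {
    Metadata headers = new Metadata();
    Metadata.Key<String> traceSeqKey =
        Metadata.Key.of("trace-seq", Metadata.ASCII_STRING_MARSHALLER);
    headers.put(traceSeqKey, Integer.toString(seq));
    return stub.withInterceptors(MetadataUtils.newAttachHeadersInterceptor(headers));
  }
}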
Right after the JVM has started (i.e. before any JIT compilation or lazy class loading has happened), we get a bunch of RPCs with status: INTERNAL, desc: Half-closed without a request on the server side, while the client reports the expected DEADLINE_EXCEEDED.
Re-running the client (keeping the server running) does not produce those errors, and we can see that only onCancel gets called when the client cancels an RPC, instead of the close(Status.INTERNAL) -> onHalfClose -> onCancel sequence that we saw before.
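That callback sequence was observed with logging around the server call; a rough sketch of an interceptor that would surface it (class name and log format are assumptions, not the reporter's actual setup):

import io.grpc.ForwardingServerCall.SimpleForwardingServerCall;
import io.grpc.ForwardingServerCallListener.SimpleForwardingServerCallListener;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import io.grpc.Status;

// Logs ServerCall.close(...) plus the listener callbacks so the
// close(INTERNAL) -> onHalfClose -> onCancel sequence becomes visible.
public class CallLifecycleLoggingInterceptor implements ServerInterceptor {
  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
    ServerCall<ReqT, RespT> loggingCall = new SimpleForwardingServerCall<ReqT, RespT>(call) {
      @Override
      public void close(Status status, Metadata trailers) {
        System.out.println(
            "ServerCall.close: " + status.getCode() + " - " + status.getDescription());
        super.close(status, trailers);
      }
    };
    return new SimpleForwardingServerCallListener<ReqT>(next.startCall(loggingCall, headers)) {
      @Override
      public void onHalfClose() {
        System.out.println("ServerCall.Listener.onHalfClose");
        super.onHalfClose();
      }

      @Override
      public void onCancel() {
        System.out.println("ServerCall.Listener.onCancel");
        super.onCancel();
      }
    };
  }
}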
I guess that gRPC can’t do much about the slow JVM startup, but is INTERNAL, desc: Half-closed without a request expected in those cases?
Top GitHub Comments
There’s no compression in the reproduction. So I think this might be it: the server request()s a message, which is what starts delivery to the application. In the failure case this is too slow and doesn’t happen, so nextFrame == null and therefore hasPartialMessage == false. https://github.com/grpc/grpc-java/blob/7308d920346e2e1cae640ed1f7e4dfad1b032df8/core/src/main/java/io/grpc/internal/MessageDeframer.java#L216
Could the fix be as easy as avoiding the entire endOfStream case if immediateCloseRequested == true? Looks like it. I haven’t run the reproduction yet to check.
@tommyulfsparre, the reproduction looks to have been very helpful here.
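Purely as a toy model of the condition being discussed, using the names from the comment above (endOfStream, nextFrame, hasPartialMessage, immediateCloseRequested); this is not the real io.grpc.internal.MessageDeframer code:

public class HalfCloseSketch {

  // Toy model only: should the deframer's end-of-stream handling report a
  // clean half-close to the server layer?
  static boolean reportCleanHalfClose(
      boolean endOfStream, Object nextFrame, boolean immediateCloseRequested) {
    // In the failing case the application was too slow to request() a message,
    // so nothing was pulled into nextFrame and hasPartialMessage comes out
    // false even though the whole request arrived on the wire.
    boolean hasPartialMessage = nextFrame != null;
    // Proposed guard: skip the endOfStream case entirely when an immediate
    // close was requested (i.e. the call was already cancelled).
    return endOfStream && !immediateCloseRequested && !hasPartialMessage;
  }

  public static void main(String[] args) {
    // Failing case from the issue: half-close seen, no message delivered yet.
    // Without the guard this looks like a clean half-close, which the server
    // then turns into INTERNAL "Half-closed without a request".
    System.out.println(reportCleanHalfClose(true, null, false)); // true
    // With the proposed guard, a cancelled call would not take that branch.
    System.out.println(reportCleanHalfClose(true, null, true));  // false
  }
}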
@irock, INTERNAL is not a “server error.” It’s a “things are fundamentally broken” status. The INTERNAL usage on that line is appropriate, because the client appears to believe the RPC is client-streaming instead of unary, so the client and server seem not to agree on the schema.
It is a bug, though, that the line is executed at all. But the “wrong status” part of it doesn’t matter too much, because the client has already cancelled and so won’t see it.