gRPC OOME and NPE in simple JMH benchmark
See original GitHub issue. First I posted here and put everything in this gist.
The problem is that memory usage grows very high and sooner or later causes an OOME, and there is also a strange NPE:
Exception in thread "grpc-default-executor-68" java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:228)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:204)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:132)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:157)
at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:93)
at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:66)
at io.grpc.internal.MessageFramer.writeKnownLength(MessageFramer.java:182)
at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:135)
at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:125)
at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:165)
at io.grpc.internal.AbstractServerStream.writeMessage(AbstractServerStream.java:108)
at io.grpc.internal.ServerImpl$ServerCallImpl.sendMessage(ServerImpl.java:496)
at io.grpc.stub.ServerCalls$ResponseObserver.onNext(ServerCalls.java:241)
at play.bench.BenchGRPC$CounterImpl$1.onNext(BenchGRPC.java:194)
at play.bench.BenchGRPC$CounterImpl$1.onNext(BenchGRPC.java:191)
at io.grpc.stub.ServerCalls$2$1.onMessage(ServerCalls.java:191)
at io.grpc.internal.ServerImpl$ServerCallImpl$ServerStreamListenerImpl.messageRead(ServerImpl.java:546)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1.run(ServerImpl.java:417)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
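The "Direct buffer memory" OutOfMemoryError above means Netty's pooled allocator exhausted the JVM's off-heap cap, which is controlled by -XX:MaxDirectMemorySize (and defaults to roughly the max heap size). One way to reproduce the failure faster is to lower that cap when running the benchmark; a sketch, where bench.jar is a placeholder for the actual classpath and the main class name is taken from the stack trace:

```shell
# Cap direct memory low so unbounded outbound buffering fails fast.
# bench.jar is a placeholder; play.bench.BenchGRPC is the class from the trace.
java -XX:MaxDirectMemorySize=64m -cp bench.jar play.bench.BenchGRPC
```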
io.grpc.StatusRuntimeException: CANCELLED
at io.grpc.Status.asRuntimeException(Status.java:430)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:266)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:320)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2@312546d9
java.lang.NullPointerException
at io.netty.buffer.PoolChunk.initBufWithSubpage(PoolChunk.java:378)
at io.netty.buffer.PoolChunk.initBufWithSubpage(PoolChunk.java:369)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:194)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:132)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:157)
at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:93)
at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:66)
at io.grpc.internal.MessageFramer.writeKnownLength(MessageFramer.java:182)
at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:135)
at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:125)
at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:165)
at io.grpc.internal.AbstractServerStream.writeMessage(AbstractServerStream.java:108)
at io.grpc.internal.ServerImpl$ServerCallImpl.sendMessage(ServerImpl.java:496)
at io.grpc.stub.ServerCalls$ResponseObserver.onNext(ServerCalls.java:241)
at play.bench.BenchGRPCOOME$CounterImpl.inc(BenchGRPCOOME.java:150)
at play.bench.CounterServerGrpc$1.invoke(CounterServerGrpc.java:171)
at play.bench.CounterServerGrpc$1.invoke(CounterServerGrpc.java:166)
at io.grpc.stub.ServerCalls$1$1.onHalfClose(ServerCalls.java:154)
at io.grpc.internal.ServerImpl$ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerImpl.java:562)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2.run(ServerImpl.java:432)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Issue Analytics
- Created 8 years ago
- Reactions:1
- Comments:12 (6 by maintainers)
Top Results From Across the Web
gRPC OOME and NPE in simple JMH benchmark
The problem is that StreamObserver.onNext does not block, so there is no push-back when you write too much. There is an open issue....
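That diagnosis is the heart of the issue: StreamObserver.onNext returns immediately, so a caller in a tight loop queues messages into Netty's outbound direct buffers faster than the network drains them, and nothing bounds the backlog. A stdlib-only analogy of what push-back buys (the bounded BlockingQueue stands in for the transport's outbound buffer; this is an illustration of the flow-control idea, not the grpc-java API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Stdlib-only analogy for flow control: the bounded queue plays the role
 * of the transport's outbound buffer, and put() blocking when the queue
 * is full is the "push-back" that a plain StreamObserver.onNext() lacks.
 */
public class PushBackDemo {

    /** Sends {@code messages} items to a deliberately slow consumer and
     *  returns the largest number of items ever buffered at once. */
    public static int run(int messages, int capacity) throws InterruptedException {
        BlockingQueue<Integer> outbound = new ArrayBlockingQueue<>(capacity);
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < messages; i++) {
                    Thread.sleep(1);      // slow reader, like a saturated network
                    outbound.take();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        int maxBuffered = 0;
        for (int i = 0; i < messages; i++) {
            outbound.put(i);              // blocks when full: memory stays bounded
            maxBuffered = Math.max(maxBuffered, outbound.size());
        }
        consumer.join();
        return maxBuffered;
    }

    public static void main(String[] args) throws InterruptedException {
        // With an unbounded queue the producer would race ahead of the
        // 1 ms-per-message consumer and buffer nearly everything at once.
        System.out.println("max buffered: " + run(200, 16));
    }
}
```

Without the bound, the producer's backlog grows with producer speed rather than consumer speed, which is exactly the unbounded direct-buffer growth seen in the benchmark.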
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
With 7902017 you can now easily observe flow control when sending on the server side.
We will give it a spin. Thanks for the update.
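Following up on the point about observing flow control on the server side: in grpc-java, the observer passed to a server-streaming handler can be cast to ServerCallStreamObserver, which exposes isReady() and setOnReadyHandler(...). A sketch of the pattern, not compiled here; Request, Reply, TOTAL, and the counting logic are hypothetical stand-ins for the benchmark's generated types:

```java
// Hypothetical server-streaming handler that respects transport readiness.
// Request/Reply stand in for the benchmark's generated protobuf types.
static final int TOTAL = 1_000_000;

public void count(Request req, StreamObserver<Reply> rawObserver) {
    ServerCallStreamObserver<Reply> observer =
        (ServerCallStreamObserver<Reply>) rawObserver;
    AtomicInteger next = new AtomicInteger();
    AtomicBoolean completed = new AtomicBoolean();
    Runnable drain = () -> {
        // Only write while the outbound buffer has room; otherwise stop
        // and wait to be re-invoked by the on-ready handler.
        while (observer.isReady() && next.get() < TOTAL) {
            observer.onNext(Reply.newBuilder().setValue(next.getAndIncrement()).build());
        }
        if (next.get() >= TOTAL && completed.compareAndSet(false, true)) {
            observer.onCompleted();
        }
    };
    observer.setOnReadyHandler(drain);
    drain.run();
}
```

The key design point is that writes stop as soon as isReady() turns false, so the backlog in Netty's direct buffers stays bounded instead of growing until the OOME above.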