If a request contains a string millions of characters long, and such requests are sent at a very high rate, can that cause an io.netty.util.internal.OutOfDirectMemoryError exception?
BTW, I have tried many times but have not found any LEAK log.
Expected behavior
Netty service runs steadily without reporting OOM exceptions.
Actual behavior
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 788529159, max: 804257792)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:742)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:697)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:758)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:734)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:215)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:777)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:387)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:387)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
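For context, the max in the error above (804257792 bytes = 767 MiB) is Netty's direct-memory cap, which by default tracks the JVM's -XX:MaxDirectMemorySize. When diagnosing this kind of failure, a few JVM/Netty flags are commonly useful (flag names as in Netty 4.1; the values below are placeholders, not recommendations for this workload):

```shell
# Raise the JVM-wide direct-memory ceiling
-XX:MaxDirectMemorySize=1g

# Or set Netty's own cap in bytes (Netty falls back to the JVM limit otherwise)
-Dio.netty.maxDirectMemory=1073741824

# Sample every allocated buffer for leaks while debugging
# (expensive; not for production use)
-Dio.netty.leakDetection.level=paranoid
```

Note that raising the limit only delays the failure if the root cause is unbounded buffering; paranoid leak detection helps confirm whether buffers are actually leaking or merely accumulating faster than they are released.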
Steps to reproduce
Send many requests, each containing a string millions of characters long, at a very high rate.
Minimal yet complete reproducer code (or URL to code)
Netty version
4.1.47.Final
JVM version (e.g. java -version)
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)
OS version (e.g. uname -a)
Linux x86_64 GNU/Linux
Issue Analytics
- State:
- Created 3 years ago
- Comments: 13 (6 by maintainers)
Top GitHub Comments
Closing this
Hi, @normanmaurer. I am glad to tell you that this problem has been solved by limiting the number of active channels. As mentioned by @johnou, Netty was reading data faster than our back-end program could process it, so direct memory could not be released in time, resulting in OOM exceptions. Thank you both for following up on this issue, thank you very much!
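The fix described in the comment above (capping the number of active channels so direct buffers are released before new ones pile up) can be sketched with a small helper. This is a hypothetical illustration, not code from the issue: the class name and the wiring are assumptions. In a Netty server, tryRegister() would be called from a handler's channelActive() and unregister() from channelInactive(); when tryRegister() returns false, the handler closes the new channel instead of accepting it.

```java
import java.util.concurrent.Semaphore;

// Hypothetical helper: caps the number of concurrently active channels.
// One shared instance would be installed in every channel's pipeline.
class ConnectionLimiter {
    private final Semaphore permits;

    ConnectionLimiter(int maxActiveChannels) {
        this.permits = new Semaphore(maxActiveChannels);
    }

    /** Returns true if the channel may stay open, false if the limit is reached. */
    boolean tryRegister() {
        return permits.tryAcquire();
    }

    /** Must be called exactly once when a registered channel closes. */
    void unregister() {
        permits.release();
    }
}
```

An alternative (or complementary) approach is flow control on the read side: call channel.config().setAutoRead(false) when the backlog grows and re-enable reads once it drains, so Netty stops allocating receive buffers faster than they can be consumed.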