
NettyClientWorkerThread cpu 100%

See original GitHub issue

Issue

It's a strange issue. The Tomcat application runs normally for a few hours, until three NettyClientWorkerThread threads start using 100% CPU. The jstack output is below. There is no memory error, but the PooledByteBuf.deallocate path does not seem to be working correctly.

I have reserved 1 GB of memory for Netty direct memory and am watching whether there is any improvement.

Any idea will be helpful, thank you.
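
As a side note on that 1 GB reservation, here is a minimal, hypothetical sketch (the flag values are assumptions, not taken from this issue) of how such a direct-memory budget is usually applied and then verified from code:

import io.netty.util.internal.PlatformDependent;

public class DirectMemoryCheck {
    public static void main(String[] args) {
        // Assumed flags for a ~1 GiB direct-memory budget (not quoted from the issue):
        //   -XX:MaxDirectMemorySize=1g             JDK-wide direct-buffer limit
        //   -Dio.netty.maxDirectMemory=1073741824  Netty's own limit, read by PlatformDependent
        System.out.println("max direct memory:  " + PlatformDependent.maxDirectMemory());
        System.out.println("used direct memory: " + PlatformDependent.usedDirectMemory());
    }
}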

jstack

"NettyClientWorkerThread_2" #15648 prio=5 os_prio=0 tid=0x0000ffff78104000 nid=0x67ef runnable [0x0000fffe4d1fb000]
   java.lang.Thread.State: RUNNABLE
	at io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueue.poll(BaseMpscLinkedArrayQueue.java:340)
	at io.netty.util.internal.shaded.org.jctools.queues.MpscChunkedArrayQueue.poll(MpscChunkedArrayQueue.java:43)
	at io.netty.util.Recycler$LocalPool.claim(Recycler.java:262)
	at io.netty.util.Recycler.get(Recycler.java:158)
	at io.netty.util.internal.ObjectPool$RecyclerObjectPool.get(ObjectPool.java:84)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.newEntry(PoolThreadCache.java:454)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.add(PoolThreadCache.java:358)
	at io.netty.buffer.PoolThreadCache.add(PoolThreadCache.java:187)
	at io.netty.buffer.PoolArena.free(PoolArena.java:227)
	at io.netty.buffer.PooledByteBuf.deallocate(PooledByteBuf.java:171)
	at io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
	at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:285)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1371)
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1283)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61)
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370)
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:748)
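
To make the hot frames above easier to follow, here is a small, hypothetical sketch (not code from the issue) of the release path they belong to: releasing a pooled direct buffer runs AbstractReferenceCountedByteBuf.release → PooledByteBuf.deallocate → PoolArena.free, and on a Netty event-loop thread with a thread-local cache this continues into PoolThreadCache.add and the Recycler/jctools queue seen spinning at 100% CPU.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledReleasePath {
    public static void main(String[] args) {
        // Allocate a pooled direct buffer, much as SslHandler/ByteToMessageDecoder do internally.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(16 * 1024);
        buf.writeBytes(new byte[1024]);
        // release() drops the reference count to zero and triggers PooledByteBuf.deallocate;
        // inside an event loop this is the same chain that appears in the jstack above.
        boolean deallocated = buf.release();
        System.out.println("deallocated = " + deallocated);
    }
}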

Netty version

netty-all 4.1.72.Final

JVM version (e.g. java -version)

openjdk 1.8.0_212

OS version

Docker container, 4 CPUs / 4 GB RAM, CentOS 7

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 32 (26 by maintainers)

Top GitHub Comments

1 reaction
normanmaurer commented, Jan 7, 2022

A workaround was merged via https://github.com/netty/netty/pull/11972. That said, I still suspect it's a JDK bug which would be fixed by upgrading your JDK @hxnan
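
The merged workaround itself lives in the linked pull request; separately, a commonly mentioned mitigation (an assumption on my part, not advice given in this thread) is to switch off the Recycler's per-thread pooling entirely, so that buffer release no longer goes through the jctools queue at all. A minimal sketch, assuming the property is set before any Netty class is loaded:

public final class DisableRecycler {
    public static void main(String[] args) {
        // A capacity of 0 disables Netty's object recycling, trading a little extra
        // allocation for bypassing the Recycler/MpscChunkedArrayQueue path entirely.
        // Equivalent JVM flag: -Dio.netty.recycler.maxCapacityPerThread=0
        System.setProperty("io.netty.recycler.maxCapacityPerThread", "0");
        // ... bootstrap the Netty client / application as usual ...
    }
}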

Read more comments on GitHub >

Top Results From Across the Web

Redisson netty thread consumes ~100% CPU with ... - GitHub
The issue being faced is that redisson netty thread consumes ~100% CPU and when a redis operation is performed we get RedisTimeoutException.

New I/O server worker threads consuming 100% CPU - Netty ...
We have a message middleware based on Netty in place which basically works as a http proxy. It's running on Windows 2003, 1...

Netty threads eats all of CPU - Elasticsearch - Elastic Discuss
We use transport client to connect ES server in our application. We noticed that our application eats 100% of available CPU.

Occasionally Ktor utilizes 100% CPU without any load after ...
KTOR-1082 100% CPU usage on "Ktor-apache-client" threads after upgrading from 1.3.2 to ... Exception in thread "ktor-cio-dispatcher-worker-3" java.lang.

How Is Netty Used to Write a High-Performance Distributed ...
epollWait returns a bug of 100% CPU usage caused by empty polling. ... Each worker thread has a selector, meaning each worker has...
