io.netty.util.internal.OutOfDirectMemoryError
Howdy,
We have about 2000 IoT devices connected to our ThingsBoard service via MQTT. Each device posts a string of about 3000 characters every three minutes. We also have a cron job that pushes shared attributes to all the devices in a loop. Additionally, we send RPCs to devices at random times, and sometimes I trigger an RPC to all the devices, so there can be around 2000 RPCs running concurrently.
The following error happens every 4 or 5 days, and I have to restart thingsboard.service to work around it.
2019-06-11 00:22:04,467 [nioEventLoopGroup-5-1] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt423227] Processing connect msg for client: 32_IFHEQVKJJZHUET2ELEFA_20190116020078AB5DB-DISKSN!
2019-06-11 00:22:04,466 [nioEventLoopGroup-5-9] ERROR o.t.s.t.mqtt.MqttTransportHandler - [mqtt423393] Unexpected Exception
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 4177526784, max: 4189454336)
	at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
	at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:214)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
	at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
2019-06-11 00:22:04,467 [nioEventLoopGroup-5-1] WARN o.t.s.t.mqtt.MqttTransportHandler - peer not authenticated
2019-06-11 00:22:04,469 [nioEventLoopGroup-5-4] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt423361] Processing connect msg for client: 32_ME2TQMRUGM2TCNI_SCRW18122702G248861FD1F-DISKSN!
2019-06-11 00:22:04,469 [nioEventLoopGroup-5-4] WARN o.t.s.t.mqtt.MqttTransportHandler - peer not authenticated
2019-06-11 00:22:04,469 [nioEventLoopGroup-5-1] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt423229] Processing connect msg for client: 18230088H1010F!
2019-06-11 00:22:04,469 [nioEventLoopGroup-5-1] WARN o.t.s.t.mqtt.MqttTransportHandler - peer not authenticated
2019-06-11 00:22:04,472 [nioEventLoopGroup-5-1] INFO o.t.s.t.mqtt.MqttTransportHandler - [mqtt423228] Processing connect msg for client: 32_IFHEQVKJJZHUET2ELEFA_I49031H005538A6E991-DISKSN!
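For context, the `used`/`max` values in the error are Netty's direct-memory counters: the pooled allocator has exhausted its roughly 4 GB direct-memory budget, so the next 16 MB chunk allocation fails. While waiting for a proper fix, something like the sketch below (illustrative only, not ThingsBoard code; the class name and the one-minute interval are made up) can be run inside the same JVM to see how close the counters get to the limit over time.

```java
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;
import io.netty.util.internal.PlatformDependent;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical diagnostic helper: periodically logs Netty's direct-memory
 * counters so growth toward the limit seen in the error above can be
 * observed before the OutOfDirectMemoryError is thrown.
 */
public class DirectMemoryMonitor {

    public static void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Memory currently held by the default pooled allocator (supported metrics API).
            PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
            long pooledDirect = metric.usedDirectMemory();
            // Netty's global counters: these are the "used" and "max" numbers in the error message.
            long trackedDirect = PlatformDependent.usedDirectMemory();
            long maxDirect = PlatformDependent.maxDirectMemory();
            System.out.printf("direct memory: pooled=%d tracked=%d max=%d%n",
                    pooledDirect, trackedDirect, maxDirect);
        }, 0, 1, TimeUnit.MINUTES);
    }
}
```

Note that `PlatformDependent` lives in Netty's internal package, so those two calls are best-effort diagnostics; the allocator metric is the supported API.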
I forgot to mention that we are using v2.0.3 + PostgreSQL (on another server, B) + Redis (on another server, C), and the ThingsBoard server has 16 GB of memory and 2 CPU cores. I have just upgraded to the latest v2.3.1, but the memory usage still grows over time. Please kindly let me know if there is a fix or workaround for it.
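In case it helps anyone chasing a similar steady growth: Netty ships a buffer-leak detector that reports which handlers fail to release `ByteBuf`s. A minimal sketch of turning it on from code is below (illustrative only; the class name is made up, and the same effect is available by passing `-Dio.netty.leakDetection.level=paranoid` to the JVM instead).

```java
import io.netty.util.ResourceLeakDetector;

/**
 * Hypothetical bootstrap hook: enable Netty's buffer-leak detector while
 * reproducing the memory growth. PARANOID samples every allocation and has a
 * noticeable performance cost; ADVANCED is a cheaper middle ground.
 */
public class LeakDetectionSetup {
    public static void enable() {
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
    }
}
```

With the detector enabled, leaked buffers show up as "LEAK:" warnings in the log together with the allocation/access records, which helps narrow down whether the direct memory is retained by the transport itself or elsewhere.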
After 3 days of running, I can confirm that memory usage is super stable! Thanks a lot for your fix, @ashvayka!
I would like to close the issue as I can't reproduce the OOM anymore. Thanks a lot!