Excessive CPU use in idle mode
The Pulsar 2.9.1 Docker container uses ~10–20% CPU while idle. A CPU profile of the idle process shows most of that time spent in Netty epoll event loops:
81.0% - 2,384 s io.netty.util.concurrent.FastThreadLocalRunnable.run
  80.7% - 2,376 s io.netty.util.internal.ThreadExecutorMap$2.run
    80.7% - 2,376 s io.netty.util.concurrent.SingleThreadEventExecutor$4.run
      80.7% - 2,376 s io.netty.channel.epoll.EpollEventLoop.run
        47.5% - 1,398 s io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange
          47.5% - 1,398 s io.netty.channel.epoll.Native.epollWait
            47.5% - 1,398 s io.netty.channel.epoll.Native.epollWait
              47.5% - 1,398 s io.netty.channel.epoll.Native.epollWait
        33.2% - 978 s io.netty.channel.epoll.EpollEventLoop.epollWait
          33.2% - 978 s io.netty.channel.epoll.Native.epollWait
            33.2% - 978 s io.netty.channel.epoll.Native.epollWait0
        0.0% - 17,667 µs io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks
      0.0% - 9,033 µs io.netty.channel.nio.NioEventLoop.run
0.2% - 6,377 ms java.util.concurrent.ThreadPoolExecutor$Worker.run
0.0% - 1,118 ms io.netty.util.HashedWheelTimer$Worker.run
16.6% - 489 s io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run
  16.6% - 489 s io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run
    16.6% - 489 s io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run
      16.6% - 489 s io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run
        9.5% - 279 s io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.epollWait
        7.1% - 209 s io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange
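A comparable profile can be captured without a full profiler attach; the following is only a sketch using JDK Flight Recorder, assuming the broker runs as PID 1 inside the container (as the top output below shows), that the image's JDK ships jcmd/Flight Recorder, and with <container-id> as a placeholder for the running container:

# record 60 s of runtime data from the broker process (PID 1 in the container)
docker exec <container-id> jcmd 1 JFR.start duration=60s filename=/tmp/idle-cpu.jfr
# once the recording finishes, copy it out and inspect it in JDK Mission Control
docker cp <container-id>:/tmp/idle-cpu.jfr .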
To Reproduce
Steps to reproduce the behavior:
- Run Pulsar standalone in Docker:
docker run -d -p 6650:6650 -p 8080:8080 apachepulsar/pulsar:2.9.1 bin/pulsar standalone
- Add a Postgres source (not sure if this is relevant):
pulsar-admin source localrun --source-config-file pg.yaml
pg.yaml:
tenant: "public"
namespace: "default"
name: "pg-source"
topicName: "pg-topic"
archive: "/mnt/pulsar-io-debezium-postgres-2.9.1.nar"
parallelism: 1
configs:
  plugin.name: "pgoutput"
  database.hostname: "postgres"
  database.port: "5432"
  database.user: "postgres"
  database.password: "postgres"
  database.dbname: "postgres"
  database.server.name: "postgres"
  schema.whitelist: "public"
  database.history.pulsar.service.url: "pulsar://localhost:6650"
- Wait a couple of minutes.
- Notice 10–20% CPU use in the java process inside the container (one way to confirm this from the host is sketched below).
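A quick host-side check (not part of the original report, just a sketch) is docker stats, which reports per-container CPU without entering the container; <container-id> is a placeholder:

# one-shot CPU/memory snapshot for the Pulsar container
docker stats --no-stream <container-id>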
Running top inside the container:

  PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
    1 root  20   0   10.5g   1.1g  34452 S  14.0  9.1  3:27.74 java
  711 root  20   0    4244   3528   2988 S   0.0  0.0  0:00.04 bash
  724 root  20   0    6124   3288   2760 R   0.0  0.0  0:00.00 top
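To confirm that the busy threads are the Netty epoll event loops seen in the profile above, per-thread CPU can also be checked inside the container; a sketch, assuming a procps-style top is available in the image:

# -H lists individual threads, -p 1 restricts the view to the broker process
docker exec -it <container-id> top -H -p 1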
Desktop (please complete the following information):
- OS: Windows 10 x64, with Docker running on WSL2 (Linux x64)
Logs: Couldn’t see anything wrong in the logs. See attached: pulsar.log
Issue Analytics
- Created 2 years ago
- Comments: 12 (9 by maintainers)
Top GitHub Comments
Did some tests, not much improvement from managedLedgerCacheEvictionFrequency=1000 and PULSAR_GC=-XX:+UseG1GC. Running with --no-functions-worker --no-stream-storage together brings ~15% CPU usage down to ~4%. Around 2/3 of that improvement seems to come from --no-stream-storage alone. Still, it is not ideal that an idle server uses 4% CPU; it drains the battery…

This might be related to https://twitter.com/normanmaurer/status/1499778839563718664. The fix is scheduled for the Netty 4.1.76.Final release.
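Putting the mitigations mentioned in these comments together, a standalone invocation could look like the sketch below. The --no-functions-worker/--no-stream-storage flags and the PULSAR_GC value come straight from the comments above; the assumption that the image's conf/pulsar_env.sh honours a pre-set PULSAR_GC environment variable, and the exact combination of options, are untested here:

docker run -d -p 6650:6650 -p 8080:8080 \
  -e PULSAR_GC="-XX:+UseG1GC" \
  apachepulsar/pulsar:2.9.1 \
  bin/pulsar standalone --no-functions-worker --no-stream-storage
# managedLedgerCacheEvictionFrequency=1000 would additionally require editing
# conf/standalone.conf in the image, or mounting a modified copy of it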