Possible file descriptor leak due to #782
- docker-plugin 1.1.9
- docker-java-api 3.1.5
- jenkins 2.204.6
- docker 18.06.3-ce
- docker connection via tcp://
- slave connection via ssh
Since updating to the plugin versions above, we have experienced a file descriptor leak (which in turn causes the NoClassDefFoundError for com/github/dockerjava/netty/WebTarget seen in #782).
The stack trace below (captured from syslog on host0125, Apr 3 10:27:27) does not necessarily point to the leak itself, merely to the place where the descriptor limit was first reached:
```
java.io.IOException: Too many open files
	at java.base/sun.nio.ch.EPoll.create(Native Method)
	at java.base/sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:79)
	at java.base/sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:166)
Caused: io.netty.channel.ChannelException: failed to open a new selector
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:168)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:142)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:127)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:36)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
Caused: java.lang.IllegalStateException: failed to create a child event loop
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:77)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:59)
	at io.jenkins.docker.client.NettyDockerCmdExecFactory$InetSocketInitializer.init(NettyDockerCmdExecFactory.java:310)
	at io.jenkins.docker.client.NettyDockerCmdExecFactory.init(NettyDockerCmdExecFactory.java:233)
	at com.github.dockerjava.core.DockerClientImpl.withDockerCmdExecFactory(DockerClientImpl.java:193)
	at com.github.dockerjava.core.DockerClientBuilder.build(DockerClientBuilder.java:45)
	at io.jenkins.docker.client.DockerAPI.makeClient(DockerAPI.java:249)
	at io.jenkins.docker.client.DockerAPI.getOrMakeClient(DockerAPI.java:199)
	at io.jenkins.docker.client.DockerAPI.getClient(DockerAPI.java:168)
	at io.jenkins.docker.client.DockerAPI.getClient(DockerAPI.java:151)
	at com.nirima.jenkins.plugins.docker.DockerCloud.countContainersInDocker(DockerCloud.java:613)
	at com.nirima.jenkins.plugins.docker.DockerCloud.canAddProvisionedSlave(DockerCloud.java:632)
	at com.nirima.jenkins.plugins.docker.DockerCloud.provision(DockerCloud.java:352)
	at hudson.slaves.NodeProvisioner$StandardStrategyImpl.apply(NodeProvisioner.java:725)
	at hudson.slaves.NodeProvisioner.update(NodeProvisioner.java:332)
	at hudson.slaves.NodeProvisioner.access$900(NodeProvisioner.java:63)
	at hudson.slaves.NodeProvisioner$NodeProvisionerInvoker.doRun(NodeProvisioner.java:819)
	at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:70)
	at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:58)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
```
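For anyone trying to confirm this kind of leak, one way is to sample the JVM's own descriptor counts over time, e.g. from the Jenkins script console. A minimal sketch (not part of the plugin; the class name FdUsage is made up here), using the com.sun.management extension of the OperatingSystemMXBean, which is available on Unix-like hosts running HotSpot/OpenJDK-based JVMs:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

// Minimal sketch: print open vs. maximum file descriptors for this JVM.
// UnixOperatingSystemMXBean is a com.sun.management extension, so the
// counts are only available on Unix-like platforms.
public class FdUsage {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            System.out.printf("open fds: %d / max fds: %d%n",
                    unixOs.getOpenFileDescriptorCount(),
                    unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("Not a Unix JVM; fd counts unavailable.");
        }
    }
}
```

Sampling these counts after each provisioning attempt (or running `ls /proc/<jenkins-pid>/fd | wc -l` on the host) should show whether descriptors climb steadily toward the ulimit whenever connection attempts fail.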

I have some doubt that those places are in the docker-plugin. The docker-plugin caches the DockerClient connections that it successfully creates … but if it doesn't manage to obtain a connection in the first place, then it has nothing it can free up or dispose of. I suspect that the issue is buried deep inside the docker-java library (and may even be fixed in later releases of that library, although we're now rather wary of switching to different versions of that…).

By all means take a look through the code and let me know if you can spot any easy improvements (and, ideally, submit a PR), but I think that what we've got "at this end" is as good as we can reasonably get.
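To illustrate the failure mode being suspected here: if a factory allocates a Netty event-loop group (which opens selector file descriptors, as in the trace above) and then throws before handing a client back, the caller never receives an object it could dispose of, so the descriptors leak unless the factory cleans up its own failure path. This is a hedged sketch of that pattern, not the actual docker-java code; LeakProneClientFactory and connect() are hypothetical:

```java
import io.netty.channel.nio.NioEventLoopGroup;

// Hypothetical factory sketching the suspected leak pattern. This is NOT
// the docker-java implementation, just an illustration of how native
// resources can leak when initialization fails partway through.
public class LeakProneClientFactory {

    public Object init() throws Exception {
        // Each NioEventLoopGroup opens epoll selectors, i.e. file descriptors.
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            return connect(group); // if this throws, 'group' must not leak
        } catch (Exception e) {
            // Without this shutdown, the group's selectors would stay open
            // after every failed connection attempt; the caller has no
            // reference to them, so nothing else can release them.
            group.shutdownGracefully();
            throw e;
        }
    }

    // Hypothetical connect step standing in for the real handshake.
    private Object connect(NioEventLoopGroup group) throws Exception {
        throw new java.io.IOException("simulated connect failure");
    }
}
```

If the leak in docker-java follows this shape, each failed provisioning cycle would burn a few descriptors until the EPoll.create call in the trace above finally fails.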
OK, I’ll merge it; it’ll be fixed in the next release.