ConflictException from docker-java when starting many containers at once
See original GitHub issue
-  docker-plugin version you use: docker-plugin-1.1.4
-  jenkins version you use: 2.107.1
-  docker engine version you use: Version = swarm/1.2.8, API Version = 1.22, Docker 17.09.0-ce
- stack trace / logs / any technical details that could help diagnose this issue
We are using the docker-plugin for basically all builds in a fairly large Jenkins installation (~70k builds / day). We infrequently (~5/day) see the following in the Jenkins log, followed by an automatic 5-minute “disabling” of a docker template by the plugin:
May 09, 2018 3:46:03 PM com.nirima.jenkins.plugins.docker.DockerCloud$1 run
SEVERE: Error in provisioning; template='DockerTemplate{configVersion=2, labelString='old-sl sl swarm_node sl_swarm_node sl_always_online sl_swarm_node_latest', connector=io.jenkins.docker.connector.DockerComputerSSHConnector@36cee2cb, remoteFs='/home/jenkins', instanceCap=500, mode=EXCLUSIVE, retentionStrategy=com.nirima.jenkins.plugins.docker.strategy.DockerOnceRetentionStrategy@3ec64559, dockerTemplateBase=DockerTemplateBase{image='taas-docker-public.artifactory.swg-devops.com/swarm/dk-jenkins-slave:latest', pullCredentialsId='public.shared.artifactory.docker.registry.username.password', registry=null, dockerCommand='./setup_slave.sh', hostname='', dnsHosts=[9.9.9.9], network='', volumes=[], volumesFrom2=[], environment=[], bindPorts='', bindAllPorts=false, memoryLimit=null, memorySwap=null, cpuShares=null, privileged=true, tty=false, macAddress='null', extraHosts=[]}, removeVolumes=true, pullStrategy=PULL_NEVER, nodeProperties=[], disabled=BySystem,1 ms,4 min 59 sec,Template provisioning failed.}' for cloud='taas-internal-swarm'
com.github.dockerjava.api.exception.ConflictException: Conflict: The name 25bf9974eba5b0 is already assigned. You have to delete (or rename) that container to be able to assign 25bf9974eba5b0 to a container again.
	at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:107)
	at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:33)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	at java.lang.Thread.run(Thread.java:748)
I suspect the problem lies in how the plugin chooses a name for each container: https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplate.java#L492-L495. The container names are derived by hashing System.nanoTime(), but per https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime-- the resolution of System.nanoTime() is only guaranteed to be “at least as good as that of currentTimeMillis(),” so two provisioning attempts that run close together may observe the same timestamp and collide on the generated name.
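To illustrate the suspected failure mode, here is a minimal sketch (not the plugin's actual code; the method names `nanoTimeName` and `randomName` are hypothetical) contrasting a timestamp-derived name, which can repeat when two threads see the same clock value, with a name drawn from a cryptographically strong RNG, which makes collisions vanishingly unlikely regardless of clock resolution:

```java
import java.security.SecureRandom;
import java.util.HashSet;
import java.util.Set;

public class ContainerNames {
    // Hypothetical reconstruction of the timestamp-based scheme: two
    // concurrent callers (or a coarse nanoTime clock) can produce the
    // same hash and therefore the same container name.
    static String nanoTimeName() {
        return Long.toHexString(Long.hashCode(System.nanoTime()) & 0xffffffffL);
    }

    // Alternative: 64 random bits from SecureRandom. Collision probability
    // over N names is roughly N^2 / 2^65, negligible at realistic rates.
    private static final SecureRandom RNG = new SecureRandom();

    static String randomName() {
        byte[] bytes = new byte[8];
        RNG.nextBytes(bytes);
        StringBuilder sb = new StringBuilder(16);
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        Set<String> names = new HashSet<>();
        for (int i = 0; i < 1000; i++) names.add(randomName());
        System.out.println(names.size()); // distinct names generated
    }
}
```

The point of the contrast: any name scheme keyed only on wall-clock or monotonic time inherits the clock's resolution as its collision boundary, whereas a random scheme's collision risk depends only on the number of bits drawn.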
Issue Analytics
- State:
- Created 5 years ago
- Comments: 9 (7 by maintainers)

I’ve updated PR #651 so that the 5-minute back-off period is now configurable.
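The configurable back-off can be sketched as follows. This is a minimal illustration under my own assumptions, not the plugin's actual implementation; the class and method names here are hypothetical. After a provisioning failure, the template is disabled for a caller-chosen window instead of a hard-coded 5 minutes:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of a configurable back-off window for a
// docker template: a failure disables provisioning until now + window.
class TemplateBackOff {
    private final Duration window;
    private Instant disabledUntil = Instant.MIN;

    TemplateBackOff(Duration window) {
        this.window = window;
    }

    // Record a provisioning failure at the given instant.
    void recordFailure(Instant now) {
        disabledUntil = now.plus(window);
    }

    // True while the template is still inside its back-off window.
    boolean isDisabled(Instant now) {
        return now.isBefore(disabledUntil);
    }
}
```

Making the window a constructor parameter means operators who hit transient errors frequently (as in this issue) can shorten the penalty without patching the plugin.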
The code changes in #651 have been merged, so this will be FITR.