Container.status returns 'created' for 'exited' container
pip freeze output:
appdirs==1.4.3
argh==0.26.2
click==6.7
coverage==4.1
discover==0.4.0
docker==2.1.0
docker-pycreds==0.2.1
extras==1.0.0
fixtures==3.0.0
flake8==3.3.0
-e git+http://github.com/waynr/jankman@ac6c6d26e88b424e92e328c4da4040f43a4e11e5#egg=jankman
-e git+http://github.com/openstack-infra/jenkins-job-builder@161473d1b731b0fd84abf99f8f158383d21c83a0#egg=jenkins_job_builder
Jinja2==2.9.5
linecache2==1.0.0
MarkupSafe==1.0
mccabe==0.6.1
multi-key-dict==2.0.3
nose==1.3.7
packaging==16.8
pathtools==0.1.2
pbr==2.0.0
pluggy==0.3.1
py==1.4.33
pycodestyle==2.3.1
pyflakes==1.5.0
pyparsing==2.2.0
python-jenkins==0.4.14
python-mimeparse==1.6.0
PyYAML==3.12
requests==2.13.0
six==1.10.0
stevedore==1.20.0
testtools==2.2.0
tox==2.3.1
traceback2==1.4.0
unittest2==1.1.0
virtualenv==15.1.0
watchdog==0.8.3
websocket-client==0.40.0
Python version
Python 3.5.2
docker version
Client:
Version: 17.03.0-ce
API version: 1.24 (downgraded from 1.26)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:59:18 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24 (minimum version )
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Experimental: false
docker info
Containers: 26
Running: 10
Paused: 0
Stopped: 16
Images: 27
Server Version: 1.12.3
Storage Driver: devicemapper
Pool Name: docker-253:2-67128345-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 40.55 GB
Data Space Total: 107.4 GB
Data Space Available: 66.82 GB
Metadata Space Used: 59.68 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.088 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-12-01)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay null host bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary:
containerd version:
runc version:
init version:
Security Options:
seccomp
Kernel Version: 3.10.0-327.10.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 251.7 GiB
Name: meow
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
the problem
I am writing some test fixtures using docker-py. The Docker image I am running exits within seconds of starting because the command fails. In my test fixture I have a loop that attempts to wait until the service running in the container is ready to begin accepting queries. One of the things this loop does on each pass is check the Container.status property of the container. No matter what, it always returns created. In my logic, created is an acceptable state for the Container object to be in, since the created state seems to always precede the running state. I know the container is actually in the exited state because I ran docker inspect -f '{{.State.Status}}', which returns exited.

My short-term workaround is going to be to simply call docker inspect -f '{{.State.Status}}' in my loop, but I thought it would be a good idea to file an issue here as well. Actually, in retrospect, I probably should have tried docker 2.2.0 before filing this issue…
Top GitHub Comments
Yes, the reload method was missing in our docs, that's now been fixed: http://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.reload

As to the rationale for caching, not all environments are low-latency, and server requests can be expensive, which is why it makes sense to me to give the developer control over when they want to retrieve new data from the remote with an explicit call. I hope that makes sense!
@shin- sure, that makes sense, thanks for updating the docs. I wonder if it would make sense to create a subclass of Container for the cache (or non-cache) use case. (just thinking out loud)
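For reference, the reload-based approach the maintainer describes looks roughly like this; wait_for_status is a hypothetical helper written for this sketch, not part of docker-py:

```python
import time


def wait_for_status(container, target="running", timeout=30.0, interval=0.5):
    """Poll until the container reaches the target status, exits, or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # reload() re-fetches the container's attrs from the daemon,
        # so the cached .status property reflects the current state.
        container.reload()
        if container.status == target:
            return True
        if container.status == "exited":
            return False
        time.sleep(interval)
    return False
```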