Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

How to Enable the GPU in local Docker run

See original GitHub issue

I have the problem that I can’t enable the GPU when I run the Docker container. I am using an NVIDIA P100.

nvidia-docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -it …

This is how I call call_variants:

(time /opt/deepvariant/bin/call_variants \
    --outfile "${CALL_VARIANTS_OUTPUT}" \
    --examples "${EXAMPLES}" \
    --checkpoint "${MODEL}" \
    --execution_hardware accelerator
) >"${LOG_DIR}/call_variants.log" 2>&1

The output from the call_variants log:

2018-06-23 22:47:42.743518: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
WARNING: Logging before flag parsing goes to stderr.
I0623 22:47:43.324297 140315677046528 call_variants.py:329] Initializing model from /dv2/models/DeepVariant-inception_v3-0.6.0+cl-191676894.data-wes_standard/model.ckpt
INFO:tensorflow:Restoring parameters from /dv2/models/DeepVariant-inception_v3-0.6.0+cl-191676894.data-wes_standard/model.ckpt
I0623 22:47:44.415543 140315677046528 tf_logging.py:82] Restoring parameters from /dv2/models/DeepVariant-inception_v3-0.6.0+cl-191676894.data-wes_standard/model.ckpt
Traceback (most recent call last):
  File "/tmp/Bazel.runfiles_dEDnzG/runfiles/com_google_deepvariant/deepvariant/call_variants.py", line 388, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/tmp/Bazel.runfiles_dEDnzG/runfiles/com_google_deepvariant/deepvariant/call_variants.py", line 379, in main
    batch_size=FLAGS.batch_size)
  File "/tmp/Bazel.runfiles_dEDnzG/runfiles/com_google_deepvariant/deepvariant/call_variants.py", line 335, in call_variants
    'execution_hardware is set to accelerator, but no accelerator '
main.ExecutionHardwareError: execution_hardware is set to accelerator, but no accelerator was found

real    0m6.241s
user    0m6.872s
sys     0m2.256s

When I run the container and check for the GPU with nvidia-smi, it works. Here is the output:

root@4811225a908b:/# nvidia-smi
Sat Jun 23 22:53:46 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------|
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE…    Off  | 00000000:00:05.0 Off |                    0 |
| N/A   29C    P0    29W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
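Since nvidia-smi works inside the container, a useful next check is whether TensorFlow itself can enumerate the GPU: the driver being mapped in is not enough if the container lacks the CUDA libraries that ship with the GPU image. A minimal sketch using the TF 1.x `device_lib` API (matching the Python 2.7 / TensorFlow versions in the traceback above), to be run inside the container:

```python
# Check whether this TensorFlow build can enumerate a GPU device.
# Uses the TF 1.x device_lib API matching the versions in the log above;
# run this inside the deepvariant_gpu container.
from tensorflow.python.client import device_lib

devices = [d.name for d in device_lib.list_local_devices()]
print(devices)
# On a working GPU build this list contains a GPU entry (e.g. '/gpu:0'
# or '/device:GPU:0') in addition to '/cpu:0'.
```

If only `/cpu:0` shows up, call_variants will raise exactly the ExecutionHardwareError above, because `--execution_hardware accelerator` makes it fail hard when no accelerator is visible.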

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments:7 (2 by maintainers)

Top GitHub Comments

1 reaction
JoelDaon commented, Sep 5, 2018

Yes, it worked for me. Thanks a lot for the support!

1 reaction
pichuan commented, Aug 24, 2018

Hi @JoelDaon, were you able to run this? What I found recently is that I actually needed to install nvidia-docker in addition to the GPU driver. I documented it for myself here: https://gist.github.com/pichuan/6465d5f7ab56dd15a8f0d5f4d2763724
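Before trying DeepVariant itself, a quick smoke test confirms that nvidia-docker forwards the GPU into a container at all. The image tag here is an assumption; pick any CUDA base image compatible with your driver (396.26 in the output above):

```shell
# Smoke test: run nvidia-smi inside a minimal CUDA container.
# nvidia/cuda:9.0-base is an assumed example tag; choose one that
# matches the host driver version.
sudo nvidia-docker run --rm nvidia/cuda:9.0-base nvidia-smi
```

If this prints the same GPU table as on the host, the runtime is wired up correctly and any remaining failure is inside the DeepVariant invocation itself.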

Once you have nvidia-docker, you’ll run something like:

( time sudo nvidia-docker run \
    -v /home/${USER}:/home/${USER} \
    gcr.io/deepvariant-docker/deepvariant_gpu:"${BIN_VERSION}" \
    /opt/deepvariant/bin/call_variants \
    --outfile "${CALL_VARIANTS_OUTPUT}" \
    --examples "${EXAMPLES}" \
    --checkpoint "${MODEL}"
) >"${LOG_DIR}/call_variants.log" 2>&1

I’d love to hear whether you’re able to get it to work or not. Thank you!!

Read more comments on GitHub >

Top Results From Across the Web

  • Using GPU from a docker container? - cuda - Stack Overflow
    Environment · Install nvidia driver and cuda on your host · Install Docker · Find your...
  • Using Your GPU in a Docker Container - Roboflow Blog
    Exposing GPU Drivers to Docker using the NVIDIA Toolkit ... The best approach is to use the NVIDIA Container Toolkit. The NVIDIA Container...
  • Enabling GPU access with Compose - Docker Documentation
    Compose services can define GPU device reservations if the Docker host contains such devices and the Docker Daemon...
  • How to access the GPU using Docker - Scaleway
    Connect to your GPU Instance via SSH. · Choose a Docker image from the containers shipped with your GPU Instance. · Use the...
  • Docker - NVIDIA Documentation Center
    Use dockerd to add the nvidia runtime: $ sudo dockerd --add-runtime=nvidia=/usr/bin/nvidia-container- ...
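The `dockerd --add-runtime` flag mentioned in the last result can also be made persistent in /etc/docker/daemon.json. This fragment is the standard nvidia-container-runtime registration; the binary path is the usual default and should be treated as an assumption for your host:

```
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

After restarting the daemon (e.g. `sudo systemctl restart docker`), `docker run --runtime=nvidia …` works without invoking dockerd by hand.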
