[Feature Request]: allow to specify which gpu to use
See original GitHub issue
As far as I can see, there is no such option. To clarify, this would be useful when running a script on a machine with more than one GPU; a flag such as --gpu 1 or --device cuda:1 would be of great help!
Thanks for all the work!
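For illustration, here is a minimal sketch of what such a flag could look like in a PyTorch-based script (the --device argument and the main() function are hypothetical, not part of the project's current CLI):

```python
import argparse
import torch

def main():
    parser = argparse.ArgumentParser()
    # Hypothetical flag: accepts e.g. "cuda:0", "cuda:1", or "cpu"
    parser.add_argument("--device", type=str, default="cuda:0",
                        help="device to run on, e.g. cuda:1 or cpu")
    args = parser.parse_args()

    # Fall back to CPU if no CUDA device is available
    if args.device.startswith("cuda") and not torch.cuda.is_available():
        args.device = "cpu"

    device = torch.device(args.device)
    model = torch.nn.Linear(10, 10).to(device)   # place the model on the chosen device
    x = torch.randn(4, 10, device=device)
    print("running on:", model(x).device)

if __name__ == "__main__":
    main()
```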
Issue Analytics
- State:
- Created 4 years ago
- Comments: 6 (5 by maintainers)
Top GitHub Comments
The GPU ID assignment differs from environment to environment, so we did not provide such an option (we used to have one, though). The safe way is to select GPUs through CUDA_VISIBLE_DEVICES, e.g.,
CUDA_VISIBLE_DEVICES=0,1,2 ./run.sh --ngpu 3
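For context, CUDA_VISIBLE_DEVICES filters which physical GPUs the process can see, and the visible devices are re-numbered starting from 0, so code inside the script keeps addressing cuda:0, cuda:1, ... regardless of the physical IDs. A minimal sketch, assuming PyTorch on a CUDA-capable machine:

```python
import os
import torch

# Restrict this process to physical GPU 1; must be set before the first CUDA call.
# Inside the process, that GPU is then addressed as cuda:0.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "1")

if torch.cuda.is_available():
    print("visible GPUs:", torch.cuda.device_count())      # -> 1
    print("device 0 is:", torch.cuda.get_device_name(0))   # -> the physical GPU 1
else:
    print("No CUDA device visible to this process")
```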
Oh, this is my fault and must be a bug: it just passes args.ngpu to train() as None, so the program will simply abort.