[tune] Option --no-cuda is misleading in mnist_pytorch.py example
See original GitHub issue
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Ray installed from (source or binary): binary
- Ray version: 0.6.2
- Python version: 3.5
- Exact command to reproduce:
python3 mnist_pytorch.py
Describe the problem
The argument parser's --no-cuda option, which is False by default, is misleading because resources_per_trial does not allocate a GPU, so the trial runs on the CPU even though the flag suggests CUDA will be used.
Thanks
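To make the mismatch concrete, here is a minimal hypothetical sketch (not the example's actual code; it uses the current tune.run API rather than the 0.6-era run_experiments, and train_mnist is a simplified stand-in for the example's training function):

```python
import argparse

import torch
from ray import tune


def train_mnist(config):
    # Stand-in for the example's training function: it decides on its own
    # whether CUDA is usable, independently of what Tune allocated.
    use_cuda = not config["no_cuda"] and torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
    print("training on", device)  # in a CPU-only trial this prints "cpu"


parser = argparse.ArgumentParser()
# --no-cuda defaults to False, which reads as "CUDA will be used"...
parser.add_argument("--no-cuda", action="store_true", default=False)
args = parser.parse_args()

# ...but a trial only gets a GPU if resources_per_trial requests one.
# With no "gpu" entry, the trial sees no GPU and quietly runs on the CPU,
# which is what makes the default value of --no-cuda misleading.
tune.run(
    train_mnist,
    config={"no_cuda": args.no_cuda},
    resources_per_trial={"cpu": 1},  # note: no "gpu": 1 here
)
```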
Issue Analytics
- State:
- Created 5 years ago
- Comments:5 (5 by maintainers)
Top Results From Across the Web

MNIST PyTorch Example — Ray 2.2.0
Original Code here: # https://github.com/pytorch/examples/blob/master/mnist/main.py import os import argparse from filelock import FileLock import torch ...

mnist-pytorch - Databricks documentation
Distributed deep learning training using PyTorch with HorovodRunner for MNIST. This notebook illustrates the use of HorovodRunner for distributed training ...

Hyperparameter tuning with Ray Tune - PyTorch
Lastly, the batch size is a choice between 2, 4, 8, and 16. At each trial, Ray Tune will now randomly sample a...

MNIST Training using PyTorch - Amazon SageMaker Examples
The mnist.py script provides all the code we need for training and hosting a SageMaker model (model_fn function to load a model)...

Tune PyTorch Model on MNIST - AutoGluon
AutoGluon is a framework agnostic HPO toolkit, which is compatible with any training code written in python. The PyTorch code used in this...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
If you leave “gpu”: 1, the example will not run if Ray does not detect a GPU.
Thanks, Richard
On Tue, Jan 29, 2019 at 2:37 AM vfdev notifications@github.com wrote:
@richardliaw yes, but this is handled inside the trial code: https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L62 so we can safely leave "gpu": 1.
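Putting the two comments together, a rough sketch of the trade-off they describe (again a simplified stand-in using the current tune.run API, not the example's actual code):

```python
import torch
from ray import tune


def train_mnist(config):
    # The check vfdev points to: the trial falls back to CPU on its own if
    # CUDA is unavailable, so the training code itself never needs the GPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("training on", device)


# Richard's point: with "gpu": 1 requested, Tune will not run the trial on a
# machine where Ray has not detected a GPU.
tune.run(train_mnist, resources_per_trial={"cpu": 1, "gpu": 1})
```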