
[tune] Option --no-cuda is misleading in mnist_pytorch.py example

See original GitHub issue

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • Ray installed from (source or binary): binary
  • Ray version: 0.6.2
  • Python version: 3.5
  • Exact command to reproduce:
python3 mnist_pytorch.py

Describe the problem

The argument parser's --no-cuda option, which defaults to False, is misleading because resources_per_trial does not actually allocate a GPU.

https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L45-L49

https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L178-L180
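To make the mismatch concrete, here is a minimal sketch of the two pieces in play (the flag follows the issue's description; the exact contents of the resource dict are illustrative, not copied from the example):

```python
import argparse

# Sketch of the example's CLI flag, as described in the issue.
parser = argparse.ArgumentParser(description="PyTorch MNIST Example")
parser.add_argument("--no-cuda", action="store_true", default=False,
                    help="disables CUDA training")
args = parser.parse_args([])  # default invocation, as in the reproduce step

# --no-cuda defaults to False, i.e. CUDA training is nominally enabled...
print(args.no_cuda)  # -> False

# ...yet the trial's resource request (shape illustrative) contains no
# "gpu" entry, so Ray Tune never schedules a GPU for the trial.
resources_per_trial = {"cpu": 1}
print(resources_per_trial.get("gpu", 0))  # -> 0
```

This is the inconsistency the issue points at: the flag implies CUDA is on by default, while the scheduler is never asked for a GPU.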

Thanks

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
richardliaw commented, Jan 29, 2019

If you leave "gpu": 1, the example will not run if Ray does not detect a GPU.

Thanks, Richard


0 reactions
vfdev-5 commented, Jan 29, 2019

I think maybe do something like "gpu": int(not args.no_cuda), since you don't want to set it like that when CUDA is disabled.

@richardliaw yes, but this is handled inside the trial code: https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L62 so we can safely leave "gpu": 1.
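The two options discussed in this exchange can be sketched as follows (a minimal illustration using only stdlib argparse; the torch check is shown as a comment since whether a GPU exists depends on the machine):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-cuda", action="store_true", default=False,
                    help="disables CUDA training")
args = parser.parse_args(["--no-cuda"])  # e.g. the user disables CUDA

# Option A (the int(not args.no_cuda) suggestion): tie the resource
# request to the flag, so Tune requests no GPU when CUDA is disabled.
resources_per_trial = {"gpu": int(not args.no_cuda)}
print(resources_per_trial)  # -> {'gpu': 0}

# Option B (vfdev-5's point): keep "gpu": 1 and gate CUDA use inside the
# trial itself, roughly as the linked L62 of the example does:
#   args.cuda = not args.no_cuda and torch.cuda.is_available()
```

Option A keeps the resource request honest at scheduling time; option B relies on the trial falling back to CPU, which only works if Tune can still place the trial when a GPU is requested but absent.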
