
Native automatic mixed precision support

See original GitHub issue

Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
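
For reference, the core pattern from the linked amp_examples notes pairs an autocast context for the forward pass with a GradScaler for the backward pass. A minimal sketch — the model, optimizer, and data below are placeholders, not anything from this issue:

```python
import torch
import torch.nn as nn

# Placeholder model/optimizer/data, just to make the sketch runnable.
device = "cuda"
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    # autocast chooses fp16 or fp32 per op, for the forward pass only.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)
    # Scale the loss so small fp16 gradients don't underflow to zero.
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # unscales grads; skips the step on inf/NaN
    scaler.update()         # recalibrates the scale factor
```

scaler.step skips the optimizer step whenever the unscaled gradients contain infs/NaNs, which is how the dynamic loss scale recalibrates itself without user intervention.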

Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don't even know if it can be hacked to handle double backward/gradient penalty, others…). torch.cuda.amp fixes all of these; the interface is more flexible and intuitive, and the tighter integration with PyTorch brings more future optimizations into scope.
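
The double-backward case in particular has a documented recipe in the amp_examples notes: take scaled gradients with create_graph=True, unscale them manually, then form the penalty under autocast. A hedged sketch of that recipe — the tiny model and data here are stand-ins:

```python
import torch
import torch.nn as nn

# Stand-in model and data for illustration.
device = "cuda"
model = nn.Linear(64, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(16, 64, device=device)
targets = torch.randn(16, 1, device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = nn.functional.mse_loss(model(inputs), targets)

# First backward: scaled gradients with a graph attached, so the
# penalty term can itself be differentiated.
scaled_grads = torch.autograd.grad(scaler.scale(loss),
                                   model.parameters(),
                                   create_graph=True)

# Unscale manually before forming the penalty.
inv_scale = 1.0 / scaler.get_scale()
grads = [g * inv_scale for g in scaled_grads]

with torch.cuda.amp.autocast():
    grad_norm = sum(g.pow(2).sum() for g in grads).sqrt()
    loss = loss + grad_norm

# Second backward on the combined loss, then the usual scaler steps.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```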

I think the torch.cuda.amp API is a good fit for a higher-level library because its style is more functional (as in, it doesn’t statefully alter anything outside itself). The necessary torch.cuda.amp calls don’t have silent/weird effects elsewhere.
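
To make that concrete: autocast is an ordinary context manager, so its effect on op dtypes is strictly scoped, and nothing leaks past the with block. A small illustration, assuming a CUDA device and arbitrary tensors:

```python
import torch

a = torch.randn(8, 8, device="cuda")
b = torch.randn(8, 8, device="cuda")

with torch.cuda.amp.autocast():
    c = a @ b  # matmuls run in float16 inside the context
    assert c.dtype is torch.float16
    with torch.cuda.amp.autocast(enabled=False):
        d = a @ b  # locally opted out: back to float32
        assert d.dtype is torch.float32

e = a @ b      # outside the context, nothing lingers
assert e.dtype is torch.float32
```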

If you want to talk about adding torch.cuda.amp to Ignite, with an eye towards it becoming the future-proof source of mixed precision, message me on PyTorch Slack anytime. I pinged you there as well, but I'm not sure if you monitor it habitually.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 4
  • Comments: 6

Top GitHub Comments

1 reaction
mcarilli commented, Apr 23, 2020

Debugged the CycleGAN example you sent; the problem appears unrelated to either Amp or your CycleGAN script: https://github.com/pytorch/pytorch/issues/37157

1 reaction
vfdev-5 commented, Apr 4, 2020

To close this issue, we can provide a Colab notebook based on Training Cycle-GAN on Horses to Zebras.

cc @ykumards

Read more comments on GitHub >

Top Results From Across the Web

  • Introducing native PyTorch automatic mixed precision for ...
    An open source machine learning framework that accelerates the path from research prototyping to production deployment.
  • Automatic Mixed Precision for Deep Learning
    With Automatic Mixed Precision, we've realized a 50% speedup in TensorFlow-based ASR model training without loss of accuracy via a minimal code change....
  • PyTorch's Native Automatic Mixed Precision Enables ...
    Both TensorFlow and PyTorch enable mixed precision training. Now, PyTorch introduced native automatic mixed precision training.
  • Automatic Mixed Precision Training in PyTorch - YouTube
    Learn how to use mixed-precision to accelerate your deep learning (DL) training.
  • Using automatic mixed precision training with PyTorch 1.6
    In this Tips N Tricks video I show you how to use automatic mixed precision training (#amp) with #pytorch 1.6 to...
