
Native Amp Support

See original GitHub issue

Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html

Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don’t even know if it can be hacked to handle double backward/gradient penalty, others…). torch.cuda.amp fixes all of these, the interface is more flexible and intuitive, and the tighter integration brings more future performance optimizations into scope.
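For reference, the core pattern those docs describe is just an autocast context plus a GradScaler. This is a minimal sketch against PyTorch >= 1.6 with placeholder model and data, not Lightning code:

import torch
from torch.cuda.amp import autocast, GradScaler

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Linear(16, 4).to(device)        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler(enabled=(device == 'cuda'))  # scaler is a no-op on CPU

for step in range(3):                            # stand-in for a data loader
    x = torch.randn(8, 16, device=device)
    y = torch.randn(8, 4, device=device)
    optimizer.zero_grad()
    with autocast(enabled=(device == 'cuda')):   # eligible ops run in fp16
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()                # scale loss to avoid fp16 underflow
    scaler.step(optimizer)                       # unscales grads, then steps
    scaler.update()                              # adjusts the scale factor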

If you want to talk about adding torch.cuda.amp to Lightning, with an eye towards it becoming the true source of mixed precision and replacing Apex, message me on the PyTorch Slack anytime. I pinged you there as well, but I’m not sure if you monitor it habitually.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
mcarilli commented, Apr 4, 2020

Hmm, I don’t know the Lightning codebase at all, aside from the interface. It would take me longer than early next week to be sure I was making the right changes in the right places. The torch.__version__ string is more complex than a plain number, though, so I’d use something like:

import torch

# Parse the major/minor fields of torch.__version__ (which may carry
# suffixes like "1.6.0a0+...", so only the first two fields are compared).
TORCH_MAJOR = int(torch.__version__.split('.')[0])
TORCH_MINOR = int(torch.__version__.split('.')[1])
version_ge_16 = (TORCH_MAJOR > 1) or (TORCH_MAJOR == 1 and TORCH_MINOR >= 6)
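For illustration, a flag like that could then gate which backend gets used; the name below is hypothetical, not Lightning's actual internals:

# Hypothetical gate: prefer the native backend on torch >= 1.6,
# otherwise fall back to apex.amp.
amp_backend = 'native' if version_ge_16 else 'apex'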
1 reaction
mcarilli commented, Apr 2, 2020

I think the torch.cuda.amp API is a much better fit for Lightning because its style is more functional (functional as in, it doesn’t statefully alter anything outside itself). The necessary torch.cuda.amp calls could be contained entirely within trainer.fit() without any silent/weird effects elsewhere.
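As a sketch of that containment (fit, model, and loader below are hypothetical placeholders, not Lightning's API), every amp call and all amp state can live inside a single function:

import torch
from torch.cuda.amp import autocast, GradScaler

def fit(model, loader, optimizer, use_amp=True):
    # The scaler is the only amp state, and it is local to this function;
    # nothing outside fit() is monkey-patched or statefully altered.
    scaler = GradScaler(enabled=use_amp)
    for x, y in loader:
        optimizer.zero_grad()
        with autocast(enabled=use_amp):
            loss = torch.nn.functional.mse_loss(model(x), y)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()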

Read more comments on GitHub >
