Native Amp Support
Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don’t even know if it can be hacked to handle double backward/gradient penalty, others…). torch.cuda.amp fixes all of these, the interface is more flexible and intuitive, and the tighter integration brings more future performance optimizations into scope.
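For reference, the double backward/gradient penalty case follows the recipe in the amp_examples notes linked above. Roughly, with a toy model, optimizer, and data used only to keep the sketch self-contained:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Toy setup so the snippet stands alone; any model/optimizer works the same way.
device = "cuda"
model = torch.nn.Linear(16, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler()

inputs = torch.randn(8, 16, device=device)
targets = torch.randn(8, 1, device=device)

optimizer.zero_grad()
with autocast():
    loss = torch.nn.functional.mse_loss(model(inputs), targets)

# First backward: scaled grads with create_graph=True so they can be differentiated again.
scaled_grads = torch.autograd.grad(scaler.scale(loss), model.parameters(), create_graph=True)

# The grads carry the loss scale; un-scale them before forming the penalty term.
inv_scale = 1.0 / scaler.get_scale()
with autocast():
    grad_norm = torch.sqrt(sum((g * inv_scale).pow(2).sum() for g in scaled_grads))
    total_loss = loss + grad_norm

# Second (double) backward through the penalty, then the usual scaled step.
scaler.scale(total_loss).backward()
scaler.step(optimizer)
scaler.update()
```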
If you want to talk about adding torch.cuda.amp to Lightning, with an eye towards it becoming the true source of mixed precision and replacing Apex, message me on the PyTorch Slack anytime. I pinged you there as well, but I’m not sure if you monitor it habitually.

Hmm, I don’t know the Lightning codebase at all, aside from the interface. It would take me longer than early next week to be sure I was making the right changes in the right places. The version is a more complex string, though, so I’d use something like:
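For illustration only (this is not the original snippet), a check along those lines might parse the leading major.minor numbers out of torch.__version__, since master builds report something like 1.6.0a0 plus a commit suffix rather than a plain X.Y.Z. The helper name and the 1.6 cutoff here are assumptions:

```python
import re
import torch

def native_amp_available() -> bool:
    # Hypothetical helper: torch.__version__ on master builds looks like
    # "1.6.0a0+<commit>", so compare only the leading major.minor numbers.
    match = re.match(r"(\d+)\.(\d+)", torch.__version__)
    if match is None:
        # Unparseable version string: fall back to a capability check.
        return hasattr(torch.cuda, "amp") and hasattr(torch.cuda.amp, "autocast")
    major, minor = (int(g) for g in match.groups())
    return (major, minor) >= (1, 6)
```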
I think the torch.cuda.amp API is a much better fit for Lightning because its style is more functional (functional as in, it doesn’t statefully alter anything outside itself). The necessary torch.cuda.amp calls could be contained entirely within trainer.fit() without any silent/weird effects elsewhere.
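As a rough sketch of what that containment could look like (the class and flag names below are made up, not Lightning’s actual internals), the autocast context and the GradScaler live entirely inside the fit loop and nothing outside it gets patched:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

class MiniTrainer:
    """Hypothetical trainer sketch: all native-amp state is local to fit()."""

    def __init__(self, use_native_amp: bool = True):
        self.use_native_amp = use_native_amp

    def fit(self, model, optimizer, train_loader, epochs: int = 1):
        # The GradScaler is created here and never escapes this method; with
        # enabled=False both it and autocast become no-ops, so the same loop
        # serves the full-precision path too.
        scaler = GradScaler(enabled=self.use_native_amp)
        for _ in range(epochs):
            for inputs, targets in train_loader:
                optimizer.zero_grad()
                with autocast(enabled=self.use_native_amp):
                    loss = torch.nn.functional.mse_loss(model(inputs), targets)
                scaler.scale(loss).backward()
                scaler.step(optimizer)  # unscales grads, skips the step on inf/nan
                scaler.update()
```

Turning mixed precision off is just a flag flip, since GradScaler(enabled=False) and autocast(enabled=False) are documented no-ops, so the same code path can serve both cases.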