Native automatic mixed precision support
Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don't even know if it can be hacked to handle double backward/gradient penalty, others…). torch.cuda.amp fixes all of these, the interface is more flexible and intuitive, and the tighter integration with PyTorch brings more future optimizations into scope.
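To make the interface concrete, here is a minimal training-loop sketch following the pattern in the linked examples note; the model, optimizer, loss, and data are placeholders, only the `autocast`/`GradScaler` calls matter:

```python
import torch

# Placeholder model/optimizer/data; only the amp calls are the point here.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

for _ in range(10):
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # ops inside run in mixed precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()     # backward on the scaled loss
    scaler.step(optimizer)            # unscales grads, skips the step if they contain inf/nan
    scaler.update()                   # adjusts the scale factor for the next iteration
```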
I think the torch.cuda.amp API is a good fit for a higher-level library because its style is more functional (as in, it doesn't statefully alter anything outside itself). The necessary torch.cuda.amp calls don't have silent/weird effects elsewhere.
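Because everything is an explicit call, cases like double backward/gradient penalty compose without hidden state. A sketch of the gradient-penalty pattern from the amp examples note, assuming placeholder `model`, `loss_fn`, `loader`, and `optimizer`:

```python
scaler = torch.cuda.amp.GradScaler()

for x, y in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x), y)

    # Scaled grads w.r.t. the parameters, keeping the graph for double backward.
    scaled_grads = torch.autograd.grad(scaler.scale(loss), model.parameters(),
                                       create_graph=True)
    inv_scale = 1.0 / scaler.get_scale()
    grads = [g * inv_scale for g in scaled_grads]  # unscale before computing the penalty

    with torch.cuda.amp.autocast():
        grad_norm = torch.stack([g.pow(2).sum() for g in grads]).sum().sqrt()
        loss = loss + grad_norm

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```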
If you want to talk about adding torch.cuda.amp to Ignite, with an eye towards it becoming the future-proof source of mixed precision, message me on the PyTorch Slack anytime. I pinged you there as well, but I'm not sure if you monitor it habitually.
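One possible shape of such an integration, purely as an illustration and not the eventual Ignite API, is to put the autocast/GradScaler calls inside the Engine's process function; `model`, `optimizer`, and `criterion` below are assumed placeholders:

```python
import torch
from ignite.engine import Engine

scaler = torch.cuda.amp.GradScaler()

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y = batch[0].cuda(), batch[1].cuda()
    with torch.cuda.amp.autocast():
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

trainer = Engine(train_step)
# trainer.run(train_loader, max_epochs=...)
```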
Top GitHub Comments
Debugged the CycleGAN example you sent; the problem appears unrelated to either Amp or your CycleGAN script: https://github.com/pytorch/pytorch/issues/37157
To close this issue, we can provide a Colab notebook based on Training Cycle-GAN on Horses to Zebras.
cc @ykumards