Replace uses of `apex.amp` with PyTorch's amp in references
Some reference scripts added support for AMP via APEX a couple of years ago; see https://github.com/pytorch/vision/blob/4bf608633f9299aeda61b08ae126961acaadec22/references/classification/train.py#L17 for one example. PyTorch now natively supports AMP in the `torch.cuda.amp` namespace, so we should use PyTorch's AMP instead of apex for simplicity.
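For concreteness, here is a minimal sketch of the change, assuming a CUDA device; the model, optimizer, criterion, and data below are throwaway placeholders rather than the actual reference-script code:

```python
import torch
from torch import nn

# Placeholder setup so the sketch runs on its own; the real reference
# scripts build their models, optimizers, and loaders elsewhere.
device = "cuda"  # torch.cuda.amp requires a CUDA device
model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
data_loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(2)]

# Before (apex), the scripts did roughly:
#   from apex import amp
#   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
#   ...
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()

# After: torch.cuda.amp.autocast handles the mixed-precision casting and
# GradScaler handles the loss scaling that apex.amp used to provide.
scaler = torch.cuda.amp.GradScaler()
for images, targets in data_loader:
    images, targets = images.to(device), targets.to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        output = model(images)
        loss = criterion(output, targets)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales grads, skips step on inf/nan
    scaler.update()                # adjusts the scale factor for next iter
```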
The replacement needs to happen in the following scripts (a sketch of a common opt-in pattern follows the list):
- Classification
- Detection #4933
- Segmentation #4994
- Video Classification
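Each script could expose the new code path behind a flag, along the lines of the sketch below; the `--amp` flag name and the argparse wiring here are assumptions for illustration, not the exact torchvision change:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
# Hypothetical flag name; the scripts previously gated apex behind their
# own flag, and the exact replacement flag is up to the PR.
parser.add_argument("--amp", action="store_true",
                    help="use torch.cuda.amp for mixed-precision training")
args = parser.parse_args()

# Create one GradScaler up front and share it with the train loop; when
# AMP is off, scaler stays None and the loop falls back to plain fp32.
scaler = torch.cuda.amp.GradScaler() if args.amp else None

def backward_and_step(loss, optimizer):
    if scaler is not None:
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    else:
        loss.backward()
        optimizer.step()
```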
cc @datumbox
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@prabhat00155 I don’t think it’s a problem to change the name. We don’t have any models that actually use it for training and BC on the references is not our primary focus anyway. My view is that it’s fine to update it.
We won't need to install apex anymore; everything will be handled by PyTorch now. We only used apex for AMP in the reference scripts.
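As a quick sanity check of that claim (assuming PyTorch >= 1.6, where the full `torch.cuda.amp` API first shipped), the AMP utilities import straight from core with no extra package:

```python
import torch
from torch.cuda.amp import GradScaler, autocast  # ships with PyTorch itself

print(torch.__version__)     # any >= 1.6 build includes torch.cuda.amp
print(GradScaler, autocast)  # no `pip install apex` required
```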