Invoked 'with amp.scale_loss`, but internal Amp state has not been initialized
Thank you for publishing this source code.

I am trying to retrain QueryDet-PyTorch on MS-COCO, but I run into the following error:
```
RuntimeError: Invoked 'with amp.scale_loss`, but internal Amp state has not been initialized. model, optimizer = amp.initialize(model, optimizer, opt_level=...) must be called before `with amp.scale_loss`.
```
Amp state is only initialized when `comm.get_world_size() > 1`. I guess you haven't handled the case where `comm.get_world_size() == 1`.
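For context, here is a minimal sketch of the pattern being described. The trainer below is hypothetical (QueryDet-PyTorch builds on detectron2, so the `comm` helpers are assumed); the point is only where `amp.initialize` sits relative to the world-size check:

```python
import torch
from apex import amp
from detectron2.utils import comm  # assumed: detectron2-style distributed helpers

def build_trainer(model, optimizer, opt_level="O1"):
    # Amp is initialized only on the multi-GPU path...
    if comm.get_world_size() > 1:
        model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[comm.get_local_rank()]
        )
    return model, optimizer

def training_step(model, optimizer, data):
    loss = model(data)  # assume the model returns a scalar loss
    optimizer.zero_grad()
    # ...but scale_loss is used unconditionally, so a single-GPU run
    # reaches this line with internal Amp state uninitialized -> RuntimeError.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```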
Issue Analytics
- Created: a year ago
- Comments: 5 (1 by maintainers)
Top GitHub Comments
You can just move `model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)` above the `comm.get_world_size() > 1` condition.
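A minimal sketch of that change, under the same hypothetical trainer as above:

```python
import torch
from apex import amp
from detectron2.utils import comm  # assumed: detectron2-style distributed helpers

def build_trainer(model, optimizer, opt_level="O1"):
    # Initialize Amp unconditionally, before the world-size branch,
    # so single-GPU runs also get valid internal Amp state.
    model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
    if comm.get_world_size() > 1:
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[comm.get_local_rank()]
        )
    return model, optimizer
```

With Amp initialized on both paths, `with amp.scale_loss(loss, optimizer)` then works for `comm.get_world_size() == 1` as well.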
Was anyone able to solve this issue for a single GPU?