
Invoked `with amp.scale_loss`, but internal Amp state has not been initialized


Thank you for publishing this source code.

I am trying to retrain QueryDet-PyTorch on MS-COCO, but I run into the following error:

    RuntimeError: Invoked `with amp.scale_loss`, but internal Amp state has not been initialized. model, optimizer = amp.initialize(model, optimizer, opt_level=...) must be called before `with amp.scale_loss`.

Amp state is only initialized when comm.get_world_size() > 1. I guess the comm.get_world_size() == 1 (single-GPU) case has not been handled.
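
For context, here is a minimal sketch of how the error surfaces (assuming NVIDIA apex is installed; the toy model and optimizer are placeholders, not QueryDet's actual ones):

    import torch
    from apex import amp  # NVIDIA apex

    model = torch.nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = model(torch.randn(8, 4)).sum()

    # amp.initialize(model, optimizer, opt_level="O1") was never called,
    # so Amp's internal state does not exist yet and scale_loss raises
    # the RuntimeError quoted above.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()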

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
sotoy commented, Aug 31, 2022

You can just move model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level) above the comm.get_world_size() > 1 condition.
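
Concretely, the guarded setup in the training script can be rearranged along these lines (a sketch, not the repo's exact code; comm is detectron2's utils.comm as referenced in the issue, and the placeholder model, optimizer, and opt_level="O1" are assumptions):

    import torch
    import detectron2.utils.comm as comm
    from apex import amp
    from torch.nn.parallel import DistributedDataParallel

    model = torch.nn.Linear(4, 2).cuda()  # placeholder for the QueryDet model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Fix: initialize Amp unconditionally, so single-GPU runs
    # (comm.get_world_size() == 1) get a valid internal Amp state too.
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    # Only the DDP wrapping stays behind the multi-GPU check.
    if comm.get_world_size() > 1:
        model = DistributedDataParallel(
            model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
        )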

0 reactions
bhargavipatel commented, Aug 10, 2022

Was anyone able to solve this issue for 1 GPU?


Top Results From Across the Web

  • raise RuntimeError("Invoked 'with amp.scale_loss`, but ...
    RuntimeError: Invoked 'with amp.scale_loss`, but internal Amp state has not been initialized. model, optimizer = amp.initialize(model, ...

  • Source code for apex.amp.handle
    ... RuntimeError("Invoked 'with amp.scale_loss`, but internal Amp state has not been initialized. " "model, optimizer = amp.initialize(model, optimizer, ...

  • Automatic Mixed Precision package - torch.amp - PyTorch
    torch.amp provides convenience methods for mixed precision, where some operations use ... If unscale_() is not called explicitly, gradients will be unscaled ...

  • Automatic Mixed Precision Using PyTorch - Paperspace Blog
    In this overview of Automatic Mixed Precision (AMP) training with ... Faster training of deep neural networks has been achieved via the ...

  • Tools for Easy Mixed-Precision Training in PyTorch
    However, using FP32 for all operations is not ... but what Amp essentially does on a call to amp.init() is insert monkey patches ...
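
As the PyTorch documentation result above notes, the library's native AMP (torch.cuda.amp) has since replaced apex.amp and needs no initialize() call at all. A minimal training-loop sketch, with a placeholder model and synthetic data:

    import torch

    model = torch.nn.Linear(4, 2).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():  # forward pass runs in mixed precision
            loss = model(torch.randn(8, 4, device="cuda")).sum()
        scaler.scale(loss).backward()    # plays the role of amp.scale_loss
        scaler.step(optimizer)           # unscales gradients, then steps
        scaler.update()                  # adjusts the loss scale for the next step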
