'AmpOptimizerState' object has no attribute 'all_fp32_params'
I initialized the model and optimizer with amp:
```python
from apex import amp
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau

optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters()), args.lr)
scheduler = ReduceLROnPlateau(optimizer, patience=1, factor=0.1, verbose=True, mode='max')
model, optimizer = amp.initialize(model, optimizer, opt_level="O1", verbosity=0)
```
Then I tried to add a group of params to the optimizer before training:
```python
optimizer.add_param_group({'params': unfreezed_params, 'lr': lr})
```
and got the following error:

```
AttributeError: 'AmpOptimizerState' object has no attribute 'all_fp32_params'
```
System info: Ubuntu 18.04, CUDA 10.0.130, PyTorch 1.1.0
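For reference, the failure can be condensed into a self-contained sketch; the toy modules and learning rates below are invented for illustration, and apex plus a CUDA device are assumed:

```python
# Self-contained reproduction sketch. Assumes apex is installed and a CUDA
# device is available; the toy modules and learning rates are made up.
from torch import nn
from torch.optim import Adam
from apex import amp

model = nn.Linear(10, 2).cuda()
later_layer = nn.Linear(10, 2).cuda()   # parameters to be added before training
optimizer = Adam(model.parameters(), lr=1e-3)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1", verbosity=0)

# Raises the AttributeError on affected apex versions, because amp's internal
# stash (which holds all_fp32_params, among others) has not been built yet:
optimizer.add_param_group({'params': later_layer.parameters(), 'lr': 1e-4})
```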
@thuwyh A quick workaround is to do the following:
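The snippet itself is missing here; judging from the explanation below, which names `optimizer.zero_grad()` as one of the calls that performs the lazy initialization, a likely reconstruction is:

```python
# Reconstructed workaround -- the original snippet did not survive in this
# copy of the thread. Trigger amp's lazy stash initialization before adding
# the new param group:
optimizer.zero_grad()
optimizer.add_param_group({'params': unfreezed_params, 'lr': lr})
```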
I think I know what's going on. Amp relies on an internal stash that has knowledge of the parameters. Typically this is silently lazy-initialized by the backward context manager (`amp.scale_loss`) or by an initial call to `optimizer.zero_grad()`.

I anticipated that people would only want to add param groups later on in training, after many steps had been taken and the lazy stash initialization had already happened, so that is what the tests cover. I will make sure `add_param_group` also triggers initialization if necessary.
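For illustration, the fix being described amounts to a guard like the sketch below. This shows only the shape of the pattern, not apex's actual internals; the names `LazyInitOptimizerWrapper`, `_stash_initialized`, and `_lazy_init_stash` are invented:

```python
# Illustrative pattern only -- not apex's real code. add_param_group runs
# the same lazy initialization that backward()/zero_grad() would otherwise
# perform, so it becomes safe to call before the first training step.
class LazyInitOptimizerWrapper:
    def __init__(self, inner_optimizer):
        self.inner = inner_optimizer
        self._stash_initialized = False   # invented flag name

    def _lazy_init_stash(self):
        # In apex, this step would build the fp16/fp32 parameter lists
        # (e.g. the all_fp32_params attribute the error complains about).
        self._stash_initialized = True

    def zero_grad(self):
        if not self._stash_initialized:
            self._lazy_init_stash()
        self.inner.zero_grad()

    def add_param_group(self, group):
        if not self._stash_initialized:
            self._lazy_init_stash()       # the missing guard from this issue
        self.inner.add_param_group(group)
```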