I get the following error when starting my training:
Traceback (most recent call last):
File "tr_baseline.py", line 75, in <module>
optimizer = Ranger21(params=model.parameters(), lr=learning_rate)
File "/mnt/Drive1/florian/msblob/Ranger21/ranger21/ranger21.py", line 179, in __init__
self.total_iterations = num_epochs * num_batches_per_epoch
TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType'
initializing ranger with:
# ranger:
optimizer = Ranger21(params=model.parameters(), lr=learning_rate)
Top GitHub Comments
Well, have you tried it as shown below?
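A minimal sketch of what the suggested call likely looks like, assuming the constructor accepts num_epochs and num_batches_per_epoch keyword arguments (both names appear in the traceback at ranger21.py line 179 and evidently default to None when omitted; train_loader is a hypothetical name for your training DataLoader):

# Hypothetical fix, not the exact snippet from the thread: pass the
# schedule-length arguments so Ranger21 can compute total_iterations.
num_epochs = 100                           # assumption: your planned number of epochs
num_batches_per_epoch = len(train_loader)  # batches in one pass over the training data

optimizer = Ranger21(
    params=model.parameters(),
    lr=learning_rate,
    num_epochs=num_epochs,
    num_batches_per_epoch=num_batches_per_epoch,
)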
One further question: I have a training setup where I use multiple training data loaders with different batch lengths… is it possible to apply Ranger21 in this context?
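Not an answer from the thread, but one possible approach, under the assumption that Ranger21 only uses num_epochs * num_batches_per_epoch to size its internal schedule: pass the combined number of optimizer steps you actually take per epoch, e.g. the sum of the loader lengths. The loader names below are placeholders for your own DataLoaders.

# Hypothetical sketch for the multi-dataloader case (assumes one optimizer.step()
# per batch across all loaders, and that every loader has a defined length).
loaders = [loader_a, loader_b]                      # your training DataLoaders (assumed names)
batches_per_epoch = sum(len(dl) for dl in loaders)  # total steps in one epoch
num_epochs = 100                                    # assumed epoch count, as above

optimizer = Ranger21(
    params=model.parameters(),
    lr=learning_rate,
    num_epochs=num_epochs,
    num_batches_per_epoch=batches_per_epoch,
)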