[Question] Update learning rate of model
Question
I am trying to update the learning rate of a model. As far as I understand, it's possible to change parameters using set_parameters, but maybe I am wrong.
import gym
from stable_baselines3 import A2C
# Create a new environment
env = gym.make('CartPole-v1')
# Create a new model with learning rate 0.00001
model = A2C(
policy='MlpPolicy',
env=env,
verbose=1,
learning_rate=0.00001,
)
# Get the current parameters of the model
params = model.get_parameters()
# Print the initial learning rate
print("INITIAL LR: {}".format(params['policy.optimizer']['param_groups'][0]['lr']))
# THIS PRINTS
# INITIAL LR: 1e-05
# Change the learning rate
params['policy.optimizer']['param_groups'][0]['lr'] = 0.000005
# Set the parameters on the model
model.set_parameters(params, exact_match=True)
new_params = model.get_parameters()
# Print the new learning rate
print("NEW LR: {}".format(new_params['policy.optimizer']['param_groups'][0]['lr']))
# THIS PRINTS
# NEW LR: 5e-06
# Start training
model.learn(total_timesteps=1000)
However, the training output tells me it's still using learning rate 1e-05:
------------------------------------
| rollout/ | |
| ep_len_mean | 22.2 |
| ep_rew_mean | 22.2 |
| time/ | |
| fps | 1050 |
| iterations | 100 |
| time_elapsed | 0 |
| total_timesteps | 500 |
| train/ | |
| entropy_loss | -0.693 |
| explained_variance | -0.0593 |
| learning_rate | 1e-05 |
| n_updates | 99 |
| policy_loss | 2.15 |
| value_loss | 11.6 |
------------------------------------
Checklist
- I have read the documentation (required)
- I have checked that there is no similar issue in the repo (required)
Issue Analytics
- State:
- Created 2 years ago
- Comments: 5 (1 by maintainers)
Top GitHub Comments
@Miffyli thank you very much, the custom_objects parameter is exactly what I was looking for.
For anybody who needs the solution:
Hello, if you want to change the learning rate with a custom schedule that doesn't depend on the remaining number of timesteps, you need to update lr_schedule and learning_rate: