
[Question] Update learning rate of model

See original GitHub issue

Question

I am trying to update the learning rate of a model. As far as I understand, it's possible to change parameters using set_parameters, but maybe I am wrong.

import gym
from stable_baselines3 import A2C

# Create a new environment
env = gym.make('CartPole-v1')

# Create a new model with learning rate 0.00001
model = A2C(
    policy='MlpPolicy',
    env=env,
    verbose=1,
    learning_rate=0.00001,
)

# Get the current parameters of the model
params = model.get_parameters()

# Print the initial learning rate
print("INITIAL LR: {}".format(params['policy.optimizer']['param_groups'][0]['lr']))

# THIS PRINTS
# INITIAL LR: 1e-05

# Change the learning rate
params['policy.optimizer']['param_groups'][0]['lr'] = 0.000005

# Set the parameters on the model
model.set_parameters(params, exact_match=True)

new_params = model.get_parameters()
# Print the new learning rate
print("NEW LR: {}".format(new_params['policy.optimizer']['param_groups'][0]['lr']))

# THIS PRINTS
# NEW LR: 5e-06

# Start training
model.learn(total_timesteps=1000)

However, the training output tells me it's still using learning rate 1e-05:

------------------------------------
| rollout/              |          |
|    ep_len_mean        | 22.2     |
|    ep_rew_mean        | 22.2     |
| time/                 |          |
|    fps                | 1050     |
|    iterations         | 100      |
|    time_elapsed       | 0        |
|    total_timesteps    | 500      |
| train/                |          |
|    entropy_loss       | -0.693   |
|    explained_variance | -0.0593  |
|    learning_rate      | 1e-05    |
|    n_updates          | 99       |
|    policy_loss        | 2.15     |
|    value_loss         | 11.6     |
------------------------------------
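For context on why the edited value does not stick: before each gradient update, stable-baselines3 re-applies the learning-rate schedule to the optimizer, so any lr written into the optimizer state through set_parameters is overwritten at the next update. A simplified sketch of the mechanism (not the exact library code):

# Sketch: what SB3 does before every gradient update.
# The schedule, not the stored optimizer state, decides the current lr.
def _update_learning_rate(self, optimizer):
    # _current_progress_remaining goes from 1.0 (start) to 0.0 (end)
    new_lr = self.lr_schedule(self._current_progress_remaining)
    for param_group in optimizer.param_groups:
        param_group["lr"] = new_lr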

Checklist

  • I have read the documentation (required)
  • I have checked that there is no similar issue in the repo (required)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

3 reactions
berrygoudswaard commented, Jan 24, 2022

@Miffyli thank you very much, the custom_objects parameter is exactly what I was looking for.

For anybody who needs the solution:

# Desired new learning rate (e.g. the 5e-6 from the question)
learning_rate = 5e-6
custom_objects = {'learning_rate': learning_rate}
model = A2C.load('model.zip', custom_objects=custom_objects)
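To confirm the override took effect, one can inspect the rebuilt schedule (a quick sketch; lr_schedule is recreated from learning_rate when the model is loaded):

# lr_schedule maps progress_remaining (1.0 at the start of training) to the
# current learning rate; for a constant rate it returns the overridden value
print(model.lr_schedule(1.0))  # -> 5e-06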

0 reactions
araffin commented, Feb 22, 2022

Hello, if you want to change the learning rate with a custom schedule that doesn't depend on the remaining number of timesteps, you need to update both lr_schedule and learning_rate:

from stable_baselines3 import A2C

model = A2C("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(1000)

# Update lr_schedule, which is called to determine current learning rate
# here a constant learning rate
model.lr_schedule = lambda _: 0.0001
# Update `learning_rate` too in case we want to save/load the model
# (cf. remark below)
model.learning_rate = lambda _: 0.0001

model.learn(1000)

# Can be done at load time
# here we update learning rate because the schedule
# is recreated from `learning_rate` at load time
model.save("a2c_cartpole")
model = A2C.load("a2c_cartpole", env=model.get_env(), learning_rate=lambda _: 0.003)
model.learn(1000)
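For completeness (an illustrative sketch, not from the original thread): a schedule that does depend on training progress can be passed directly as learning_rate, since SB3 calls it with progress_remaining going from 1.0 down to 0.0:

from stable_baselines3 import A2C

def linear_schedule(initial_lr):
    # progress_remaining goes from 1.0 (start of training) to 0.0 (end)
    def schedule(progress_remaining):
        return progress_remaining * initial_lr
    return schedule

# Learning rate decays linearly from 0.001 to 0 over training
model = A2C("MlpPolicy", "CartPole-v1", learning_rate=linear_schedule(0.001), verbose=1)
model.learn(10_000)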
