
"lr_schedule" option ignored using torch framework and PPO algorithm

See original GitHub issue

Ray version and other system information (Python version, TensorFlow version, OS):

What is the problem?

Setting the hyperparameter “lr_schedule” has no effect when PyTorch is used as the backend framework with the PPO learning algorithm.

Reproduction (REQUIRED)

import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG

# Common config: a fixed lr plus an lr_schedule that should override it.
config = DEFAULT_CONFIG.copy()
config.update({
    "env": "CartPole-v0",
    "num_workers": 0,
    "lr": 1.0e-5,
    "lr_schedule": [
        [0, 1.0e-6],
        [1, 1.0e-7],
    ],
})

ray.init()

# Run the same experiment with the TF and the torch backend and compare
# the learning rate reported in the learner stats.
for use_pytorch in [False, True]:
    config["use_pytorch"] = use_pytorch
    agent = PPOTrainer(config, "CartPole-v0")
    for _ in range(2):
        result = agent.train()
        print(f"use_pytorch: {use_pytorch} - Current learning rate: "
              f"{result['info']['learner']['default_policy']['cur_lr']}")
  • I have verified my script runs in a clean environment and reproduces the issue.
  • I have verified the issue also occurs with the latest wheels.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
duburcqa commented, May 23, 2020

Nice, thank you! The learning rate seen with tensorflow comes from a conversion from float32 to float64 that must be done somewhere. If you want to check:

import numpy as np

# 1.0e-5 and 1.0e-7 cannot be represented exactly in float32, so casting
# them back to float64 no longer yields the original values.
print(np.float64(np.float32(1.0e-5)))
print(np.float64(np.float32(1.0e-7)))
0 reactions
janblumenkamp commented, May 23, 2020

Hmmm I made some more experiments and I am not convinced that the lr is actually properly updated… Is it possible that the learning rate in cur_lr is different from the actual learning rate?
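
One way to sanity-check this is to read the learning rate directly from the torch optimizer instead of the reported stats. The sketch below continues from the reproduction script and assumes the torch policy keeps its optimizer(s) in a _optimizers attribute; that attribute name is an assumption about RLlib internals and may differ between Ray versions, so treat this as a rough illustration rather than the exact API.

# Hypothetical check: compare the reported cur_lr with the lr actually
# stored in the torch optimizer's param_groups.
agent = PPOTrainer(config, "CartPole-v0")
result = agent.train()
reported_lr = result["info"]["learner"]["default_policy"]["cur_lr"]

policy = agent.get_policy()
# NOTE: "_optimizers" is an assumption about the torch policy's internals
# and may not exist (or may be named differently) in your Ray version.
optimizer_lrs = [
    group["lr"]
    for opt in getattr(policy, "_optimizers", [])
    for group in opt.param_groups
]
print("reported cur_lr:", reported_lr)
print("optimizer param_group lrs:", optimizer_lrs)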

Read more comments on GitHub >

Top Results From Across the Web

  • Algorithms — Ray 2.2.0 - the Ray documentation
    Algorithm. Frameworks ... PPO. tf + torch. Yes +parametric ... APPO is not always more efficient; it is often better to use standard...
  • Proximal Policy Optimization — Spinning Up documentation
    PPO is an on-policy algorithm. PPO can be used for environments with either discrete or continuous action spaces. The Spinning Up implementation of...
  • Proximal Policy Optimization (PPO) is Easy With PyTorch
    Proximal Policy Optimization is an advanced actor critic algorithm designed to improve performance by constraining updates to our actor ...
  • Lessons from Implementing 12 Deep RL Algorithms in TF and ...
    Internally, this tells RLlib to try to use the torch version of a policy for ... system throughput (ignoring learning) across a few...
  • How to print the adjusting learning rate in Pytorch?
    While I use torch.optim.Adam and exponential decay_lr in my PPO algorithm: ... Then I print the lr in my epoch dynamically with:
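
As a general reference for the last result above: outside of RLlib, the current learning rate of a plain PyTorch optimizer can be read from its param_groups, which is the usual way to print a decaying lr. A minimal, self-contained sketch (not tied to the issue above; the tiny model and gamma value are arbitrary placeholders):

import torch

# Minimal sketch: Adam + exponential decay, printing the lr each "epoch".
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0e-5)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(3):
    # ... run one epoch of training here ...
    scheduler.step()
    print(f"epoch {epoch}: lr = {optimizer.param_groups[0]['lr']}")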
