
[question] VecNormalize with hyperparameter tuning

See original GitHub issue

Hi, I’m trying to do some hyperparameter tuning, but I’m getting this error: ValueError("Trying to set venv of already initialized VecNormalize wrapper."). How am I supposed to use the normalization statistics from the training phase during validation without saving them to a file during every trial?

import optuna
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecNormalize

def objective_fn(trial):
    model_params = optimize_ppo(trial)

    train_env, validation_env = initialize_envs()
    norm_env = VecNormalize(train_env, norm_obs=True, norm_reward=True, training=True)

    model = PPO(policy,
                norm_env,
                device=device,
                **model_params)

    train_maxlen = len(train_env.get_attr('df')[0].index) - 1
    try:
        model.learn(train_maxlen)
    except Exception as error:
        print(error)
        raise optuna.structs.TrialPruned()

    # This is the line that raises the ValueError: the wrapper's venv
    # has already been set during training and cannot be replaced.
    norm_env.set_venv(validation_env)
    norm_env.training = False
    norm_env.norm_reward = False

    mean_reward, _ = evaluate_policy(model, norm_env, n_eval_episodes=5)

    if mean_reward == 0:
        raise optuna.structs.TrialPruned()

    return -mean_reward

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6

Top GitHub Comments

2 reactions
araffin commented, Jun 15, 2021

How am I supposed to use the normalization statistics from the training phase during validation without saving it to file during every trial?

You will find your answer here 😉: https://github.com/DLR-RM/stable-baselines3/issues/473 (we have a sync_envs_normalization helper for that).

https://github.com/DLR-RM/stable-baselines3/blob/75b6f3b3b0f207456d9dcac2c6e86e8e2a22115f/stable_baselines3/common/vec_env/__init__.py#L59-L72

0 reactions
aleksanderhan commented, Jun 15, 2021

Thank you both very much!

Read more comments on GitHub >

Top Results From Across the Web

What does the hyperparameter "normalize" refer to in PPO? #64
e.g. I see A2C tunes the normalize_advantage parameter, but that's not a hyperparameter for PPO. PPO has a boolean to normalize_image, but don't ......
Read more >
Reinforcement Learning Tips and Tricks - Stable Baselines
Read about RL and Stable Baselines · Do quantitative experiments and hyperparameter tuning if needed · Evaluate the performance using a separate test...
Read more >
HyperParameter Tuning, Batch Normalization - Kaggle
This problem is known as covariant shift where there is a shift in the data ... Hyperparameter tuning: methods to search for optimal...
Read more >
Hyperparameter tuning - GeeksforGeeks
The aim of this article is to explore various strategies to tune hyperparameters for Machine learning models.
Read more >
03_hyperparameter-tuning-batch-normalization-and ...
In this video, I want to share with you some guidelines, some tips for how to systematically organize your hyperparameter tuning process, ...
Read more >
