
hyperparameter optimization with customised environment

See original GitHub issue

Hi, my first question is: is it possible to run hyperparameter optimization using a customised environment? If so, what should the file structure be for train.py to recognize the environment? I have tried the following structure:

│   train.py
│   setup.py
│
└───gym_environment/
    │   envs/
    │   __init__.py

where __init__.py registers the environment:

    from gym.envs.registration import register
    register(
        id='FullFilterEnv-v0',
        entry_point='gym_environment.envs:FullFilterEnv',
        max_episode_steps=10,
    )
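
(For reference, a minimal setup.py that makes gym_environment installable as a package could look like the sketch below; its exact contents are an assumption for illustration, not part of the original report.)

    # Hypothetical minimal setup.py for the gym_environment package (assumed, not from the issue)
    from setuptools import setup, find_packages

    setup(
        name="gym_environment",
        version="0.0.1",
        packages=find_packages(),
        install_requires=["gym"],
    )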

Then I run:

    python train.py --algo td3 --env FullFilterEnv-v0 -n 50000 -optimize --n-trials 1000 --n-jobs 2 --sampler random --pruner median

But the following error pops up:

    ValueError: FullFilterEnv-v0 not found in gym registry, you maybe meant AntBulletEnv-v0?

Should I put the env into the yml file in hyperparams/? Perhaps a good example in the documentation would be helpful.

Thanks a lot!

System Info

  • stable-baselines3 0.8.0a0

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
araffin commented, Jun 25, 2020

run hyperparameter optimization using customised environment?

Yes, it is totally possible; in fact, we are already doing that for the pybullet envs. The best approach is to create a Python package and import it in utils/import_envs.py. Otherwise, you can use --gym-packages package_name, see https://github.com/DLR-RM/rl-baselines3-zoo#minigrid-envs
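
For illustration, the import in utils/import_envs.py could mirror how the existing env packages are imported there; the lines below are a sketch that assumes the package from the question is named gym_environment:

    # Sketch of an addition to utils/import_envs.py (assumed, not from the issue).
    # Importing the package runs the register() call in gym_environment/__init__.py,
    # so FullFilterEnv-v0 becomes visible in the gym registry.
    try:
        import gym_environment  # noqa: F401
    except ImportError:
        gym_environment = None

Alternatively, if the package is installed (e.g. with pip install -e .), passing --gym-packages gym_environment to train.py should have the same effect without touching utils/import_envs.py.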

Should I put the env into the yml file in hyperparams/

Yes, you should do that too.
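
For example, a minimal entry for the custom env in hyperparams/td3.yml could look like the sketch below; the keys follow the style of the other entries in hyperparams/, and the values are illustrative placeholders, not recommendations:

    # Hypothetical entry in hyperparams/td3.yml (values are placeholders)
    FullFilterEnv-v0:
      n_timesteps: !!float 5e4
      policy: 'MlpPolicy'

When running with -optimize, this entry serves as the base configuration and the sampled hyperparameters are applied on top of it.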

Related: https://github.com/araffin/rl-baselines-zoo/issues/29

0 reactions
araffin commented, Oct 10, 2020

Closing this as the original question was answered.
