How to use Optuna for custom environments
This isn't a bug or anything like that, but I wonder if anyone could point me in the right direction.
One can do this:
python train.py --algo ppo2 --env MountainCar-v0 -n 50000 -optimize --n-trials 1000 --n-jobs 2 --sampler random --pruner median
But when you’ve created a custom environment…
env = DummyVecEnv([lambda: RunEnv(...)])
model = A2C(CnnPolicy, env).learn(total_timesteps)
… how can I pass in the Optuna parameters, or is it even possible?
Of course I could turn it into a properly registered Gym environment, but that's a bit clunky.
Grateful for any feedback.
Kind regards
I spent a while trying to get the zoo to work with my custom env, but it kept freezing during training. Finally I found this (non-zoo) simple approach, which worked for me with TF 1.15.0 and stable-baselines 2.10.0.
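The link to that approach isn't preserved here, but a minimal sketch of the general idea, calling Optuna directly instead of going through the zoo, might look like this (untested; RunEnv, the policy choice, and the search ranges below are placeholders, not the original script):

import optuna
from stable_baselines import A2C
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.common.evaluation import evaluate_policy

def objective(trial):
    # Let Optuna sample the hyperparameters to tune
    gamma = trial.suggest_float("gamma", 0.9, 0.9999, log=True)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)

    env = DummyVecEnv([lambda: RunEnv(...)])  # your custom env (placeholder)
    model = A2C("MlpPolicy", env, gamma=gamma, learning_rate=learning_rate, verbose=0)
    model.learn(total_timesteps=50000)

    # Optuna maximizes whatever the objective returns
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=10)
    return mean_reward

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.RandomSampler())
study.optimize(objective, n_trials=100)
print(study.best_params)

Each trial trains a fresh model with the sampled hyperparameters and returns the mean evaluation reward for Optuna to maximize, so no zoo machinery or env registration is needed.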
Have you considered registering your env instead?
Cf. the docs: https://github.com/openai/gym/wiki/Environments
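For completeness, a minimal registration sketch; the id and module path here are hypothetical, so adapt them to your own package:

from gym.envs.registration import register

register(
    id="RunEnv-v0",                        # ids must follow the Name-v<N> pattern
    entry_point="my_package.envs:RunEnv",  # "module.path:ClassName" of your env
    max_episode_steps=1000,
)

Once the module calling register() has been imported by the training process, the env can be passed to the zoo's train.py by its id like any built-in one.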