For those who are struggling to find positions for many optimized parameters
How do you find which position in the model each auto-named optimized parameter corresponds to?
During tuning, Hyperas prints some output that helps you locate each parameter.
First, check the definition of the get_space function in the "Hyperas search space" section of the tuning output. You will find something like this:
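The example output did not survive on this page; below is a hedged reconstruction of what the generated get_space function typically looks like. Hyperas names each parameter after its enclosing layer, appending `_1`, `_2`, … for repeats; the exact entries depend on your model. The hyperopt expressions are written as strings here so the sketch runs without hyperopt installed:

```python
# Reconstructed sketch of Hyperas's ">>> Hyperas search space:" output.
# In the real output each value is a hyperopt expression, e.g.
#   'Dropout': hp.uniform('Dropout', 0, 1)
# Strings stand in for those expressions so this runs without hyperopt.

def get_space():
    return {
        'Dropout': "hp.uniform('Dropout', 0, 1)",
        'Dense': "hp.choice('Dense', [256, 512, 1024])",
        'Activation': "hp.choice('Activation', ['relu', 'sigmoid'])",
        'Dropout_1': "hp.uniform('Dropout_1', 0, 1)",
    }
```

The key names in this dict are exactly the names that appear in the final best-parameters printout.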
This tells you the name assigned to each parameter being tuned.
Second, check the "Resulting replaced keras model" section of the tuning output. It will look something like this:
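The example here was also lost, so the following is a hedged reconstruction of the "Resulting replaced keras model" output. In the real output it is your own model-building function with every `{{...}}` template replaced by a lookup into the space dict; here the Keras layers are stubbed as (name, value) pairs so the sketch is self-contained, and the function name `create_model` is illustrative:

```python
# Reconstructed sketch: each {{...}} template has been replaced by a
# space[...] lookup. Keras calls are stubbed as tuples so this runs
# without Keras installed.

def create_model(space):
    layers = []
    layers.append(('Dense', 512))
    layers.append(('Dropout', space['Dropout']))    # was {{uniform(0, 1)}}
    layers.append(('Dense', space['Dense']))        # was {{choice([256, 512, 1024])}}
    layers.append(('Dropout', space['Dropout_1']))  # was {{uniform(0, 1)}}
    return layers

# Filling in the optimized parameters is then just a matter of reading
# which space[...] key appears at each position:
best = {'Dropout': 0.16, 'Dense': 512, 'Dropout_1': 0.12}
model = create_model(best)
```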
You will see that some of your code has been replaced by space['Dropout'], space['Dropout_1'], and so on. These are the positions that correspond to the optimized parameters, so at this point it is easy to fill in the optimized values.
Hope this helps.
Issue Analytics
- State:
- Created 6 years ago
- Comments: 5
Top GitHub Comments
I have the following parameters in my model:

```python
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout({{uniform(0, 1)}}))
model.add(Dense({{choice([256, 512, 1024])}}))
model.add(Activation({{choice(['relu', 'sigmoid'])}}))
model.add(Dropout({{uniform(0, 1)}}))
```
After Hyperas has finished finding the optimal hyperparameters, it prints the following:

```python
{'Activation': 1, 'Activation_1': 1, 'Dense': 2, 'Dense_1': 0,
 'Dropout': 0.1602501347478713, 'Dropout_1': 0.11729755246044238,
 'Dropout_2': 1, 'Dropout_3': 0.41266207281071243,
 'add': 1, 'batch_size': 1, 'optimizer': 1}
```
I follow the output up to 'Dropout_1'; after that I don't understand 'Dropout_2': 1 or 'add': 1. I only added three Dense layers (with the first Dense layer, which takes the input, being fixed) plus the output layer, yet I am getting Dropout entries for four layers, with 'Dropout_2' being 1. I am probably missing some caveat, so I hope someone can take a look and help me out.
Thanks a lot in advance.
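For output like the dict above, note that choice-style parameters (Dense, Activation, optimizer, ...) come back as indices into their option lists, while uniform-style parameters (the Dropout rates) come back as actual values. A hedged sketch of resolving indices to values; the option lists mirror the model in the comment above and the helper name `resolve` is illustrative:

```python
# Map Hyperas best-run entries back to concrete values.
# Keys listed in choice_options are treated as indices into the option
# list; everything else is passed through unchanged. Suffixed names
# (Dense_1, Activation_1, ...) each need their own entry.
choice_options = {
    'Dense': [256, 512, 1024],
    'Activation': ['relu', 'sigmoid'],
}

def resolve(best_run, choice_options):
    return {
        key: choice_options[key][value] if key in choice_options else value
        for key, value in best_run.items()
    }

best_run = {'Activation': 1, 'Dense': 2, 'Dropout': 0.1602501347478713}
print(resolve(best_run, choice_options))
# → {'Activation': 'sigmoid', 'Dense': 1024, 'Dropout': 0.1602501347478713}
```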
If you want optim.minimize to print the values instead of the indices of the best parameters, use eval_space=True as an extra argument.