Inactive hyperparameters having an effect on BayesianOptimization
See original GitHub issue

Correct me if I'm wrong, but (and I see this is a recurring issue) since the tuner seems to tune hyperparameters regardless of whether they're active, wouldn't that affect how Bayesian optimization works? The probability model is built over the inactive hyperparameters too, so they influence how the tuner attributes the final score to each setting (sorry for my poor wording).
In any case, my current code looks like this:
import tensorflow as tf

# create_dataset, trainS, testS, and loss are defined elsewhere in my script.
def model_builder(hp):
    tf.keras.backend.clear_session()
    hp_timesteps = hp.Int('timesteps', min_value=4, max_value=30, step=1)
    hp_optimizer = hp.Choice('optimizer', values=['adam', 'rmsprop', 'adamax', 'nadam'])
    hp_layers = hp.Int('num_layers', min_value=1, max_value=4, step=1)

    # The window length is itself a hyperparameter, so the datasets are
    # rebuilt for every trial.
    x_train, y_train = create_dataset(trainS, trainS.Aggregate, hp_timesteps)
    x_test, y_test = create_dataset(testS, testS.Aggregate, hp_timesteps)

    if hp_layers == 1:
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.LSTM(hp.Int('hp_units_single', min_value=8, max_value=128, step=8)))
        model.add(tf.keras.layers.Dense(units=1))
        model.compile(loss=loss, optimizer=hp_optimizer)
        return model

    if hp_layers > 1:
        model = tf.keras.Sequential()
        for i in range(hp_layers - 1):
            model.add(tf.keras.layers.LSTM(hp.Int(f'hp_units_{i}', min_value=8, max_value=128, step=8), return_sequences=True))
        model.add(tf.keras.layers.LSTM(hp.Int('hp_units_final', min_value=8, max_value=128, step=8)))
        model.add(tf.keras.layers.Dense(units=1))
        model.compile(loss=loss, optimizer=hp_optimizer)
        return model
Is there any way to restrict the tuner from attempting to pass values to inactive hyperparameters?
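One option worth trying (my suggestion, not something confirmed in this thread): Keras Tuner exposes hp.conditional_scope(parent_name, parent_values), which registers a hyperparameter as conditional on a parent value, so the oracle can record it as inactive in trials where the condition does not hold. Below is a minimal sketch of the builder rewritten that way, assuming that API; it reuses hp_units_final for the single-layer case instead of a separate hp_units_single, and dataset creation is omitted for brevity.

    import tensorflow as tf

    def model_builder_conditional(hp):
        tf.keras.backend.clear_session()
        hp_layers = hp.Int('num_layers', min_value=1, max_value=4, step=1)
        model = tf.keras.Sequential()
        for i in range(hp_layers - 1):
            # hp_units_i is declared conditional on num_layers being at
            # least i + 2, i.e. on this stacked layer actually existing.
            with hp.conditional_scope('num_layers', list(range(i + 2, 5))):
                units = hp.Int(f'hp_units_{i}', min_value=8, max_value=128, step=8)
            model.add(tf.keras.layers.LSTM(units, return_sequences=True))
        # The last LSTM layer exists at every depth, so it is unconditional.
        model.add(tf.keras.layers.LSTM(
            hp.Int('hp_units_final', min_value=8, max_value=128, step=8)))
        model.add(tf.keras.layers.Dense(units=1))
        model.compile(loss=loss,  # `loss` defined elsewhere, as in the snippet above
                      optimizer=hp.Choice('optimizer',
                                          values=['adam', 'rmsprop', 'adamax', 'nadam']))
        return model

Whether the BayesianOptimization oracle then fully ignores the inactive values in its surrogate model is a separate question, but at least the activity information is recorded per trial.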
Issue Analytics
- Created: 3 years ago
- Comments: 7 (3 by maintainers)
Top Results From Across the Web

Bayesian Optimization for Conditional ... - ResearchGate
The comparison of an active condition with an inactive condition is defined as being false, returning a zero kernel value (hence no shared ...). A sketch of this idea appears just after these results.

How Hyperparameter Tuning Works - Amazon SageMaker
Amazon SageMaker hyperparameter tuning uses either a Bayesian or a random search strategy to find the best values for hyperparameters.

Overview of hyperparameter tuning | Vertex AI - Google Cloud
If the hyperparameter is shared, the tuning job uses what it has learned from LINEAR_REGRESSION and DNN trials to tune the learning rate. ...

Bayesian Multi-objective Hyperparameter Optimization for ...
In Parsa et al. (2019b), we used a single objective hyperparameter Bayesian optimization to optimize performance of spiking neuromorphic systems ...

A Conceptual Explanation of Bayesian Hyperparameter ...
If you said below 200 estimators, then you already have the idea of Bayesian optimization! We want to focus on the most promising ...
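The zero-kernel idea in the first result above is the standard fix for exactly the problem raised in this issue: the surrogate model only lets two trials share information about a hyperparameter when it was active in both. A hypothetical sketch follows; the function name and the plain RBF form are illustrative, not taken from the paper.

    import numpy as np

    def conditional_rbf(x1, active1, x2, active2, length_scale=1.0):
        # Covariance between two configurations in a conditional space.
        # x1/x2 are hyperparameter vectors; active1/active2 are boolean masks.
        k = 1.0
        for a, b, act_a, act_b in zip(x1, x2, active1, active2):
            if act_a and act_b:
                # Both active: ordinary RBF similarity on this dimension.
                k *= np.exp(-((a - b) ** 2) / (2.0 * length_scale ** 2))
            elif act_a != act_b:
                # Active vs. inactive is "defined as being false": the whole
                # kernel value is zero, so the pair shares no information.
                return 0.0
            # Inactive in both: the dimension is ignored (multiplies by 1).
        return k

Under a kernel like this, a trial in which some hp_units_i was inactive simply cannot pull the surrogate's predictions around in that direction, which is the behaviour the question above is asking for.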
Top GitHub Comments
@KareemAlSaudi-RUG There’s a pretty recent paper that suggests nothing really significantly beats a good random-search variant yet.
Interesting. Any recommendation, then, between RandomSearch and Hyperband? I assume that in the paper you linked, "random search with early stopping" refers to Hyperband?
Thanks
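For reference, since the comments above weigh the two tuners: a minimal sketch of how they are typically instantiated in Keras Tuner. The objective, budgets, and directory names are placeholders, not values from this thread, and the import path assumes the current keras_tuner package (older releases used kerastuner).

    import keras_tuner as kt

    # Plain random search: samples `max_trials` configurations independently.
    random_tuner = kt.RandomSearch(
        model_builder,
        objective='val_loss',
        max_trials=50,               # placeholder search budget
        directory='tuning',
        project_name='lstm_random')

    # Hyperband: random sampling combined with successive-halving early
    # stopping, i.e. the "random search with early stopping" family the
    # comment asks about.
    hyperband_tuner = kt.Hyperband(
        model_builder,
        objective='val_loss',
        max_epochs=30,               # placeholder per-trial epoch cap
        factor=3,
        directory='tuning',
        project_name='lstm_hyperband')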