
BayesianOptimization Tuner doesn't return the model with the best val_loss and "Best val_loss So Far" increases sometimes

See original GitHub issue

The BayesianOptimization tuner doesn’t return the model with the best val_loss, and the reported “Best val_loss So Far” sometimes increases. Am I misunderstanding how the tuner works? Shouldn’t the “Best val_loss So Far” value never increase?
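For reference, here is a minimal sketch of the kind of tuner setup that produces a log like the one below. The model architecture and the search-space bounds are illustrative guesses; only the hyperparameter names (lstm_reg, lstm_units, learning_rate) and the val_loss objective are taken from the log.

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # Hyperparameter names mirror the log; the ranges are assumptions.
    reg = hp.Choice("lstm_reg", values=[0.0, 1e-4, 1e-3, 1e-2])
    units = hp.Int("lstm_units", min_value=32, max_value=512, step=32)
    lr = hp.Float("learning_rate", min_value=1e-4, max_value=5e-2, sampling="log")
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, kernel_regularizer=tf.keras.regularizers.l2(reg)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="mse", metrics=["mean_absolute_error"])
    return model

tuner = kt.BayesianOptimization(
    build_model,
    objective="val_loss",
    max_trials=100,
)
# x_train, y_train, x_val, y_val are assumed to be prepared elsewhere.
tuner.search(x_train, y_train, epochs=200,
             validation_data=(x_val, y_val), verbose=2)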

Here’s an excerpt of the log (I’ve replaced the omitted training epochs with ...):

Search: Running Trial #62

Hyperparameter    |Value             |Best Value So Far 
lstm_reg          |0.01              |0                 
lstm_units        |384               |416               
learning_rate     |0.01741           |0.00062759        

Epoch 1/200
58/58 - 8s - loss: 5.8378 - mean_absolute_error: 0.8131 - val_loss: 2.1253 - val_mean_absolute_error: 0.6561
...
Epoch 26/200
58/58 - 5s - loss: 0.4074 - mean_absolute_error: 0.4579 - val_loss: 0.8352 - val_mean_absolute_error: 0.5948
Trial 62 Complete [00h 02m 37s]
val_loss: 0.5230200886726379

Best val_loss So Far: 0.396116703748703
Total elapsed time: 04h 32m 29s

Search: Running Trial #63

Hyperparameter    |Value             |Best Value So Far 
lstm_reg          |0.001             |0                 
lstm_units        |288               |416               
learning_rate     |0.00073415        |0.00062759        

Epoch 1/200
58/58 - 5s - loss: 0.8142 - mean_absolute_error: 0.6041 - val_loss: 0.8935 - val_mean_absolute_error: 0.5796
...
Epoch 45/200
58/58 - 5s - loss: 0.1761 - mean_absolute_error: 0.2561 - val_loss: 0.8256 - val_mean_absolute_error: 0.6804
Trial 63 Complete [00h 04m 04s]
val_loss: 0.527589738368988

Best val_loss So Far: 0.396116703748703
Total elapsed time: 04h 36m 34s

Search: Running Trial #64

Hyperparameter    |Value             |Best Value So Far 
lstm_reg          |0.01              |0                 
lstm_units        |384               |416               
learning_rate     |0.00011261        |0.00062759        

Epoch 1/200
58/58 - 6s - loss: 4.1151 - mean_absolute_error: 0.6866 - val_loss: 3.3185 - val_mean_absolute_error: 0.4880
...
Epoch 94/200
58/58 - 6s - loss: 0.3712 - mean_absolute_error: 0.3964 - val_loss: 0.7933 - val_mean_absolute_error: 0.5781
Trial 64 Complete [00h 09m 06s]
val_loss: 0.6574578285217285

Best val_loss So Far: 0.43126755952835083
Total elapsed time: 04h 45m 40s

Search: Running Trial #65

Hyperparameter    |Value             |Best Value So Far 
lstm_reg          |0.0001            |0                 
lstm_units        |480               |256               
learning_rate     |0.010597          |0.05              

Epoch 1/200
58/58 - 6s - loss: 1.1511 - mean_absolute_error: 0.7090 - val_loss: 1.1972 - val_mean_absolute_error: 0.6724
...

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
douira commented, Sep 7, 2021

Thanks for finding this issue in the code! Now I don’t have to worry that something is being calculated wrong. Fixing the printing is a good idea!

1 reaction
haifeng-jin commented, Sep 7, 2021

@douira After inspecting the code, this is exactly what is happening. It is using the average of multiple executions as the objective value.

We should fix the printing. The best value should never increase.
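A hedged sketch, not from the thread: given a finished tuner like the one sketched above, the objective values the oracle actually stores can be queried directly, independent of the progress printout.

# Best trial as ranked by the stored objective (the average over executions,
# per the explanation above), not by the printed "Best val_loss So Far".
best_trial = tuner.oracle.get_best_trials(num_trials=1)[0]
print("stored best val_loss:", best_trial.score)

# Reloads the checkpointed weights and hyperparameters of the best trial.
best_model = tuner.get_best_models(num_models=1)[0]
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)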


Top Results From Across the Web

  • Keras BayesianOptimization tuner makes the "best val_loss ...
    It saves the model weights of the model at the epoch with the lowest (best) val_loss. After tuning, the tuner method get_best_models...
  • Keras Tuner: Lessons Learned From Tuning Hyperparameters ...
    Keras Tuner did an incredible job finding the best set of model parameters, showing a twofold increase in metric growth; we, as engineers,...
  • TensorFlow 2: With Keras Tuner: RandomSearch, Hyperband ...
    This article will explore the options available in Keras Tuner for hyperparameter optimization with example TensorFlow 2 code for CIFAR100...
  • Hyperparameter tuning with Keras Tuner - The TensorFlow Blog
    Keras Tuner is an easy-to-use, distributable hyperparameter optimization framework that solves the pain points of performing hyperparameter...
  • Improve your model performance with Bayesian Optimization ...
    If you have started using ML for your projects or simply for fun you might have realized how challenging the task of tuning...
