In the lightgbm_tuner_simple.py example, early stopping is not working properly.
For example, I got the following log by executing lightgbm_tuner_simple.py.
# First trial
[1] valid_0's binary_logloss: 0.581604 valid_1's binary_logloss: 0.587863
...
[43] valid_0's binary_logloss: 0.0236828 valid_1's binary_logloss: 0.145822
...
[143] valid_0's binary_logloss: 4.13863e-05 valid_1's binary_logloss: 0.265754
Early stopping, best iteration is:
[43] valid_0's binary_logloss: 0.0236828 valid_1's binary_logloss: 0.145822
# Early stopping works fine in first trial.
# Second trial
[1] valid_0's binary_logloss: 0.580784 valid_1's binary_logloss: 0.586189
[2] valid_0's binary_logloss: 0.514544 valid_1's binary_logloss: 0.524775
...
[43] valid_0's binary_logloss: 0.0207709 valid_1's binary_logloss: 0.149757
...
[143] valid_0's binary_logloss: 3.01071e-05 valid_1's binary_logloss: 0.318618
Early stopping, best iteration is:
[43] valid_0's binary_logloss: 0.0236828 valid_1's binary_logloss: 0.145822
# Early stopping does not work correctly in the second trial: it reports the result of the first trial.
I think this happens because the early_stopping() function creates a closure, and the variables captured in that closure are shared between trials. If I use the early_stopping_rounds parameter instead of the early_stopping callback, early stopping works properly, even though the following warning is displayed:
UserWarning: 'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.
_log_warning("'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. "
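The shared-closure hypothesis can be illustrated without LightGBM at all. The sketch below is a hypothetical minimal model, not LightGBM's actual implementation: make_early_stopping and train are made-up names, and the callback keeps its best score in closure state, so reusing one instance across two "trials" makes the second trial report the first trial's result.

```python
def make_early_stopping(stopping_rounds):
    """Simplified model of an early-stopping callback: the best score
    lives in the enclosing scope (a closure), so it persists for as
    long as this particular callback instance is reused."""
    state = {"best_score": float("inf"), "best_iter": -1}

    def callback(iteration, score):
        if score < state["best_score"]:
            state["best_score"] = score
            state["best_iter"] = iteration
        # Stop once `stopping_rounds` iterations pass without improvement.
        return iteration - state["best_iter"] >= stopping_rounds

    callback.state = state  # expose the closure state for inspection
    return callback


def train(scores, callback):
    """Toy training loop: feed per-iteration scores to the callback."""
    for i, s in enumerate(scores):
        if callback(i, s):
            break
    return callback.state["best_iter"], callback.state["best_score"]


# Buggy pattern: ONE callback instance reused across two trials.
shared = make_early_stopping(stopping_rounds=2)
print(train([0.50, 0.30, 0.40, 0.45], shared))  # (1, 0.3)  -- correct
print(train([0.60, 0.35, 0.50, 0.55], shared))  # (1, 0.3)  -- stale: trial 1's result

# Fix: build a fresh callback (fresh closure state) for every trial.
fresh = make_early_stopping(stopping_rounds=2)
print(train([0.60, 0.35, 0.50, 0.55], fresh))   # (1, 0.35) -- correct
```

This mirrors the log above: the second trial's "best iteration" is frozen at the first trial's values because the second trial never beats the stale best score left in the closure.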
Environment
- Optuna version: 2.10.0
- Python version: 3.8.18
- OS: Ubuntu 20.04.2
Issue Analytics
- Created: 2 years ago
- Comments: 11
Thank you for reporting the bug! Indeed, I could reproduce the same behaviour with the latest LightGBM on a Colab notebook. I think this issue is related to optuna.integration.lightgbm, not the example, so I'll transfer this issue to the optuna/optuna repo. Code I used:
This issue was closed automatically because it had not seen any recent activity. If you want to discuss it, you can reopen it freely.