
[Feature] Hook to implement early stopping

See original GitHub issue

Short Question Description

I would like to add a hook that lets the user implement their own stopping strategy. Is this of interest to you? How would I go about implementing the hook myself?

Context Information

First off, I really like the project and I’m very much impressed with what you have accomplished. The autoML engine works extremely well for our use cases.

Different datasets require different training lengths. In some cases, I noticed that the autoML engine finds the best configuration in a matter of seconds, whereas for others, longer training times do improve performance. Without knowing the dataset in advance, it is hard to pick the right training time - we are trying to minimize computation time while keeping the same model performance.

This could be done by providing a hook that is called after every newly trained model. I would then compare the new model’s performance against the best one so far, and make a heuristic decision on whether to continue training. The time_left_for_this_task would still be the maximum training time - the hook would thus implement a form of early stopping.
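
As a rough illustration of the heuristic described above, here is a minimal sketch of such a stopping callable. The class name, its signature, and the way it would be wired into the search are assumptions made for illustration, not an existing API: it simply tracks the best score seen so far and signals a stop once a configurable number of newly trained models fail to improve on it.

```python
class PatienceStopper:
    """Hypothetical early-stopping heuristic: stop the search once
    `patience` consecutive models fail to improve on the best score."""

    def __init__(self, patience: int = 10, min_improvement: float = 1e-4):
        self.patience = patience
        self.min_improvement = min_improvement
        self.best_score = float("-inf")
        self.models_since_improvement = 0

    def __call__(self, new_score: float) -> bool:
        """Called after each newly trained model; returns True to keep searching."""
        if new_score > self.best_score + self.min_improvement:
            self.best_score = new_score
            self.models_since_improvement = 0
        else:
            self.models_since_improvement += 1
        return self.models_since_improvement < self.patience


# Usage inside whatever hook the library ends up exposing:
stopper = PatienceStopper(patience=3)
for score in [0.71, 0.74, 0.74, 0.73, 0.72, 0.74]:  # scores of successive models
    if not stopper(score):
        break  # stop early; time_left_for_this_task remains the hard upper bound
```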

Similar Work

I did not find a similar example/tutorial in the documentation, nor a similar GitHub Issue.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 3
  • Comments: 11 (7 by maintainers)

Top GitHub Comments

1 reaction
mfeurer commented, Aug 20, 2021

While functionally it’s no different from the current parameter to pass callbacks, it’s a lot clearer as an entry point to using this functionality.

Wouldn’t it maybe suffice to rename the current argument and improve its documentation?
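
Purely to visualize the trade-off being discussed here (a generic callback list versus a dedicated, self-describing argument for the same hook), the toy sketch below contrasts the two calling styles. `DummyAutoML`, `trial_callbacks`, and `early_stopping_callback` are all made-up names, not the library’s real parameters.

```python
class DummyAutoML:
    """Stand-in estimator, only to contrast the two calling styles."""

    def __init__(self, time_left_for_this_task: int,
                 trial_callbacks=None, early_stopping_callback=None):
        self.time_left_for_this_task = time_left_for_this_task
        # A dedicated argument is functionally one more entry in a generic
        # callback list; the difference is discoverability and documentation.
        self.callbacks = list(trial_callbacks or [])
        if early_stopping_callback is not None:
            self.callbacks.append(early_stopping_callback)


def keep_going(score: float) -> bool:
    """Trivial stand-in for a real stopping heuristic."""
    return score < 0.99


# Generic "pass callbacks" style:
a = DummyAutoML(time_left_for_this_task=3600, trial_callbacks=[keep_going])

# Dedicated, renamed entry point being weighed against it:
b = DummyAutoML(time_left_for_this_task=3600, early_stopping_callback=keep_going)
```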

0 reactions
eddiebergman commented, Nov 17, 2021

Raising this in a new issue so we can clearly state what needs to be done on our end. Please see #1304

Read more comments on GitHub.

Top Results From Across the Web

Migrate early stopping | TensorFlow Core
In TensorFlow 2, you can implement early stopping in a custom training loop if you're not training and evaluating with the built-in Keras... (a minimal Keras sketch follows this list)

How to build an Early Stopping Hook · Issue #531 - GitHub
Early Stopping is a useful mechanism, already integrated in several libraries and frameworks, which can help when training several models for...

Implement early stopping in tf.estimator.DNNRegressor using ...
Here is an EarlyStoppingHook sample implementation: import numpy as np import tensorflow as tf import logging from tensorflow.python.training...

Use Early Stopping to halt the training of neural networks at ...
Early stopping is a strategy that lets you specify an arbitrarily large number of training epochs and stop training once the model...

Stop Training Jobs Early - Amazon SageMaker
Stop the training jobs that a hyperparameter tuning job launches early when they are not improving significantly as measured by the objective metric...
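
For comparison with the TensorFlow/Keras results listed above, here is a minimal sketch using the standard `tf.keras.callbacks.EarlyStopping` callback on dummy data. This is the built-in Keras route the first result contrasts with a custom training loop, not the hook requested in this issue; the model and data are arbitrary placeholders.

```python
import numpy as np
import tensorflow as tf

# Toy regression model on random data, just to demonstrate the callback.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # metric to watch
    patience=5,                  # epochs without improvement before stopping
    restore_best_weights=True,   # roll back to the best weights seen
)

x, y = np.random.rand(200, 10), np.random.rand(200, 1)
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```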
