Optional logging of validation loss (and other metrics) in KerasModel
See original GitHub issue

KerasModel (and TensorGraph) currently don't support periodic logging of validation loss.
Would it be a good idea to have this in the fit_generator and fit APIs?
This would be an optional argument, with the modified API looking something like:

    def fit(self, dataset, nb_epoch=10, max_checkpoints_to_keep=5,
            checkpoint_interval=1000, deterministic=False, restore=False,
            submodel=None, val_dataset=None, eval_interval=1000, **kwargs):
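To make the proposal concrete, here is a minimal sketch of how a fit() loop could honor the proposed val_dataset and eval_interval arguments. These parameters do not exist in the current API, and ToyModel is a stand-in for KerasModel, not real DeepChem code:

```python
class ToyModel:
    """Stand-in for KerasModel: loss shrinks a little each training step."""
    def __init__(self):
        self.loss = 1.0

    def train_on_batch(self, batch):
        self.loss *= 0.99  # pretend gradient step

    def evaluate(self, val_dataset):
        return self.loss  # pretend validation-loss computation


def fit(model, dataset, nb_epoch=1, val_dataset=None, eval_interval=1000):
    """Train, and if val_dataset is given, record validation loss every
    eval_interval steps (sketch of the proposed behavior)."""
    val_losses = []
    step = 0
    for _ in range(nb_epoch):
        for batch in dataset:
            model.train_on_batch(batch)
            step += 1
            if val_dataset is not None and step % eval_interval == 0:
                val_losses.append((step, model.evaluate(val_dataset)))
    return val_losses
```

Returning the (step, loss) history makes it easy for callers to plot the curve or feed it into an early-stopping check.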
This would make it easier to implement things like early stopping, which was used in the ChemNet Transfer Learning paper, and would be generally useful as well.
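With a logged validation-loss history, early stopping reduces to a small check. The following is an illustrative rule (patience and min_delta are conventional names borrowed from Keras EarlyStopping, not part of any DeepChem API):

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True if validation loss has not improved (by more than
    min_delta) over the last `patience` recorded evaluations."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    best_recent = min(val_losses[-patience:])
    return best_recent > best_before - min_delta
```

The caller would append each periodic validation loss to a list and break out of the training loop once should_stop() returns True.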
Issue Analytics
- State:
- Created 4 years ago
- Reactions: 1
- Comments: 9 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Agreed, this would be very useful. We should give some thought to how this should work. For fit(), it could work to just provide another dataset to use for validation. For fit_generator() that isn't necessarily possible. After all, one of the purposes of that method is to support models that take multiple inputs and therefore require more than just the X array from a dataset.

You also mentioned the possibility of tracking other metrics, which would also be useful. So perhaps we can have a unified mechanism that supports all of those things.
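One way around the fit_generator() limitation is to accept a validation generator rather than a validation dataset: each yielded batch already carries the full set of model inputs, so multi-input models are covered. A minimal sketch, with illustrative names (evaluate_generator and loss_fn are not existing DeepChem functions):

```python
def evaluate_generator(loss_fn, val_generator):
    """Average loss_fn over all batches produced by val_generator.

    val_generator is a callable returning an iterator of (inputs, labels)
    pairs, mirroring the generator style used by fit_generator()."""
    total, n = 0.0, 0
    for inputs, labels in val_generator():
        total += loss_fn(inputs, labels)
        n += 1
    return total / n
```

Because validation batches flow through the same generator interface as training batches, the same mechanism could also compute arbitrary user-supplied metrics, not just loss.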
I borrowed the BaseLogger term from Keras Callbacks, but it was motivated by the "after every step" statement you invoked: using the same idea as Keras, this callback would compute the loss at every step and then display it every certain number of iterations. This would be used by default in every model. It can also compute some metrics on the training set, if needed.
In the case of Keras (https://github.com/keras-team/keras/blob/master/keras/callbacks.py), the EarlyStopping callback keeps track of the best weights if a toggle is turned on. So having one class should be sufficient.
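Combining the two responsibilities in one class might look like the sketch below: periodic validation-loss logging plus best-weights tracking, analogous to Keras EarlyStopping(restore_best_weights=True). Every name here (ValidationCallback, on_step, compute_loss) is hypothetical, not an existing DeepChem or Keras API:

```python
import copy


class ValidationCallback:
    """Logs validation loss every `interval` steps and, if save_best is
    on, remembers the weights from the best evaluation seen so far."""

    def __init__(self, compute_loss, interval=1000, save_best=True):
        self.compute_loss = compute_loss  # callable returning current val loss
        self.interval = interval
        self.save_best = save_best
        self.best_loss = float("inf")
        self.best_weights = None
        self.history = []  # list of (step, loss) pairs

    def on_step(self, step, weights):
        """Called by the training loop after each step."""
        if step % self.interval != 0:
            return
        loss = self.compute_loss()
        self.history.append((step, loss))
        if self.save_best and loss < self.best_loss:
            self.best_loss = loss
            self.best_weights = copy.deepcopy(weights)
```

After training, the loop could restore self.best_weights into the model, mirroring the restore_best_weights toggle in Keras.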