Custom configuration of how/where to save the best model with ModelCheckpoint
See original GitHub issue

Consider the case where a user would like to integrate `ModelCheckpoint` with a package for experiment tracking, e.g. mlflow, polyaxon, etc. In such a case, logs, model weights, etc. can be stored on cloud storage, e.g.

`exp_tracking.log_artifact(filepath)`

By default, `ModelCheckpoint` saves the model to the provided `dirname` path. The idea is to provide the flexibility to execute custom code when the model is saved, so that it can be stored anywhere we would like.

What do you think?
cc @elanmart
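For illustration only, here is a minimal sketch of the kind of hook this request asks for, written as a plain ignite event handler rather than an extension of `ModelCheckpoint` itself. It uses mlflow as the example tracking backend; `attach_best_model_logger` and `score_function` are made-up names for this sketch, not part of ignite.

```python
import os
import tempfile

import mlflow
import torch
from ignite.engine import Events


def attach_best_model_logger(trainer, model, score_function):
    """Save the model whenever the score improves and push the file to MLflow."""
    best_score = {"value": None}
    save_dir = tempfile.mkdtemp()

    @trainer.on(Events.EPOCH_COMPLETED)
    def _save_best(engine):
        score = score_function(engine)
        if best_score["value"] is None or score > best_score["value"]:
            best_score["value"] = score
            filepath = os.path.join(save_dir, "best_model.pt")
            torch.save(model.state_dict(), filepath)
            # The "custom code on save" step this issue asks for:
            # ship the checkpoint to the experiment tracking backend.
            mlflow.log_artifact(filepath)
```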
Issue Analytics
- State:
- Created 5 years ago
- Comments: 6 (3 by maintainers)
Top Results From Across the Web
- Keras Callbacks and How to Save Your Model from Overtraining: In this article, you will learn how to use the ModelCheckpoint callback in Keras to save the best version of your model during ...
- tf.keras.callbacks.ModelCheckpoint | TensorFlow v2.11.0: Whether only weights are saved, or the whole model is saved. Note: If you get WARNING:tensorflow:Can save best model only with <name> available, ...
- Saving best model in keras - Stack Overflow: EarlyStopping and ModelCheckpoint is what you need from Keras documentation. You should set save_best_only=True in ModelCheckpoint.
- Save Best Model using Checkpoint and Callbacks - YouTube: Model Checkpoint in Tensorflow | Save Best Model using Checkpoint and ... TensorFlow Tutorial 14 - Callbacks with Keras and Writing Custom ...
- How to Checkpoint Deep Learning Models in Keras: Checkpointing is set up to save the network weights only when there is an ... Checkpoint the weights for best model on validation ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@Bibonaut thanks for sharing the code! Looks nice! We can think about putting it into the `contrib` module.

This sounds nice. I also did some hacking and just want to share the code, in case it can be useful for anyone of you. In my case, I needed a custom save method. I didn't want to use `torch.save()`, because my own model class is still under development and I want to keep compatibility between all its versions. My save method simply saves the hyperparameters and weights, from which the class is recreated when it is loaded. I inherited my ModelSaver class from `ignite.handlers.ModelCheckpoint` and overloaded the `_internal_save` method. There are two more little features: the model is also saved on exception and when training is completed. Sorry for the incomplete documentation, but at the moment I have only little time. Long story short… here is my code: