Saving internal states of optimizers for successive calls to model.fit
See original GitHub issue

I might be wrong, but I think that every call to model.fit
resets the accumulated states (such as momentum, etc.) in the optimizers. I think it would be great if the optimizers could essentially pick up where they left off training, or at least have the option of doing so. I don’t think this would be difficult to do, but I’d like to have some feedback before trying to make this work.
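To see why this matters, here is a minimal pure-Python sketch (illustrative only, not Keras internals; all names and hyperparameter values are assumptions) showing that SGD with momentum accumulates a velocity term, so resetting it between training runs changes the next update:

```python
# Why resetting optimizer state between "fit" calls changes training:
# SGD with momentum keeps a per-parameter velocity that accumulates.

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update; returns new weight and velocity."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# First training run: two steps on a constant gradient.
w, v = 1.0, 0.0
for _ in range(2):
    w, v = sgd_momentum_step(w, grad=1.0, velocity=v)
print(v)  # accumulated velocity (about -0.19)

# If the next run resets the velocity to 0.0, the first update there
# is just -lr * grad; if the velocity is preserved, momentum carries
# over and the update is larger.
w_reset, _ = sgd_momentum_step(w, grad=1.0, velocity=0.0)
w_kept, _ = sgd_momentum_step(w, grad=1.0, velocity=v)
print(w_reset != w_kept)  # True: the accumulated state matters
```

The same reasoning applies to any stateful optimizer (momentum, RMSprop, Adam): the weights alone do not determine the next update.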
Issue Analytics
- State:
- Created 8 years ago
- Comments: 5 (3 by maintainers)
Top Results From Across the Web
Customize what happens in Model.fit | TensorFlow Core
You can do this whether you're building Sequential models, Functional API models, or subclassed models. Let's see how that works.
Save and load model optimizer state - python - Stack Overflow
However, I need a method for saving and loading the states of the optimizers of my trainer models. It seems as though keras...
Training & evaluation with the built-in methods - Keras
Introduction. This guide covers training, evaluation, and prediction (inference) models when using built-in APIs for training & validation ...
Does keras' model.fit() remember learning rate when called ...
Provided that you are in the same scope, will remember not only the learning rate but the current state of all tensor, hyper...
How to Save and Load Your Keras Deep Learning Model
Model weights; Model architecture; Model compilation details (loss and metrics); Model optimizer state. This means that you can load and use ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
https://github.com/fchollet/keras/issues/454
How is optimizer state different from weights? I mean, if I could save the weights, wouldn't I be able to resume training from just that?
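To illustrate the distinction the comment asks about: adaptive optimizers such as Adam carry per-parameter state beyond the weight itself. The sketch below is a simplified pure-Python Adam step (not the Keras implementation; names and values are illustrative), showing that restoring only the weight and restarting with fresh moment estimates gives a different next update than resuming with the accumulated state:

```python
# Adam keeps, for each parameter, first/second moment estimates
# (m, v) and a step counter t, in addition to the weight itself.
import math

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad        # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, (m, v, t)

# Train for three steps with varying gradients.
w, state = 1.0, (0.0, 0.0, 0)
for g in (1.0, -0.2, 0.3):
    w, state = adam_step(w, grad=g, state=state)

# Restoring only the weight w (fresh state) vs. resuming with the
# saved (m, v, t) yields different updates for the same gradient.
w_fresh, _ = adam_step(w, grad=0.5, state=(0.0, 0.0, 0))
w_resumed, _ = adam_step(w, grad=0.5, state=state)
print(w_fresh != w_resumed)  # True
```

So saving the weights alone is not enough to resume training exactly; the optimizer's slot variables must be saved too, which is why whole-model saving (which includes optimizer state) behaves differently from saving weights only.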