Support training restoration
Subject of the feature
Currently, we are using model.load_weights
to start the supervised training from non-random weights.
However, if the intention is to resume a stopped run, this is not ideal, because the optimizer state is not saved in the checkpoint.
The following piece of code can be helpful: https://github.com/tensorflow/tensorflow/issues/27861#issuecomment-487455939
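The approach in the linked comment can be sketched as follows. This is a minimal, hedged example assuming TF2 with `tf.keras`; the model, optimizer settings, and checkpoint directory are placeholders, not taken from the project:

```python
import tensorflow as tf

# Minimal sketch: checkpoint the optimizer together with the model, so a
# resumed run keeps Adam's moment estimates. The model and paths here are
# illustrative placeholders.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# tf.train.Checkpoint tracks both objects; model.save_weights alone would
# not capture the optimizer state.
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, "/tmp/train_ckpts", max_to_keep=3)

save_path = manager.save()  # call periodically during training

# To resume: rebuild the model and optimizer the same way, then restore.
ckpt.restore(manager.latest_checkpoint)
```

Note that `tf.train.Checkpoint.restore` is deferred: optimizer slot variables that do not exist yet are filled in once they are created by the first training step.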
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5 (2 by maintainers)
Top GitHub Comments
@YipengHu
If you do not have the optimizer state (beta, gamma in Adam, for example), then after reloading the checkpoint the first gradient update might take a large, harmful step, leading to inferior performance.
With the optimizer state, you can continue training as if it had never stopped. This piece of code has been tested in other projects, so it is already verified.
Training for one epoch resolves the "optimizer weights not loaded" warning.
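The reason one epoch helps is that Keras creates optimizer slot variables lazily, on the first gradient update; until then there is nothing for restored optimizer weights to attach to. A minimal sketch of this, with a hypothetical toy model and random data rather than the project's actual pipeline:

```python
import numpy as np
import tensorflow as tf

# Toy model and data, purely illustrative.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# One short training call builds the optimizer's slot variables, so a
# subsequent restore of optimizer weights no longer warns that they
# could not be matched.
history = model.fit(x, y, epochs=1, verbose=0)
```

After this warm-up step, restoring a checkpoint that includes optimizer state proceeds without the warning.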