Resuming Training
See original GitHub issue

My model and optimizer are prepared using `.prepare()`; my scheduler is left without "preparation," as was done in `nlp_example.py`. If I want to resume training, do I need to use the `.prepare()` function again after loading the previously saved model? Or would I need to use the `.prepare()` function only if I loaded the trained model before calling `.prepare()`?
Issue Analytics
- State:
- Created 2 years ago
- Comments: 14 (5 by maintainers)
Top GitHub Comments
Hi @sgugger, I found an `accelerator.gather()` issue related to the saving method in distributed evaluation. If I save with `accelerator.save(wrapped_model)` and, for loading, call `prepare` and then load `wrapped_model_ckpt`, just as you said before on this issue page, then when I added `accelerator.gather()` (the same code as in `cv_example.py`) to my eval function, I got stuck at the second iteration: no errors, no exits, just stuck (I use tqdm; the progress bar didn't move). So I followed your official documentation and saved the unwrapped model first, and everything went well. Maybe it's not an issue, since saving the unwrapped model is recommended anyway, just FYI. 😃
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.