Training clarification
Hi @Cyanogenoid!
First of all, many thanks for sharing your great work!
I'm currently training the model with `python train.py`. I see that training alternates between the train and val sets, as you mention in the repo. The lines below are what I see on the command line:
train E000: 100% 1698/1698 [17:03<00:00, 1.66it/s, acc=0.4774, loss=1.9324]
and
val E000: 100% 838/838 [03:40<00:00, 3.80it/s, acc=0.5126, loss=1.7152]
What do the numbers 1698 and 838 refer to?
Thank you so much!
Issue Analytics
- State:
- Created 5 years ago
- Comments:6 (3 by maintainers)
Top GitHub Comments
Hi @Cyanogenoid! Sorry for the late response. Sure, I'm closing the issue. I ended up decreasing the batch size from 256 to 128; after that, the program ran a bit faster on my machine. Thank you!
Are things good now? Can this issue be closed?
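For context on the numbers asked about above: 1698 and 838 are, in all likelihood, the iteration counts shown by the tqdm progress bar, i.e. the number of mini-batches per epoch on the train and val splits. With a PyTorch-style DataLoader that keeps the last partial batch, this is simply ceil(dataset size / batch size). A minimal sketch (the sample counts in the example are hypothetical, back-computed from the batch size mentioned in the thread, not taken from the repo):

```python
import math

def batches_per_epoch(num_samples: int, batch_size: int, drop_last: bool = False) -> int:
    """Number of mini-batches a DataLoader-style iterator yields per epoch."""
    if drop_last:
        # Partial final batch is discarded.
        return num_samples // batch_size
    # Partial final batch is kept, so round up.
    return math.ceil(num_samples / batch_size)

# Hypothetical: with batch size 256, 1698 batches would correspond to
# 1698 * 256 = 434,688 training samples (the real split size depends on the dataset).
print(batches_per_epoch(434_688, 256))  # -> 1698
print(batches_per_epoch(434_688, 128))  # -> 3396
```

This also explains why halving the batch size made each epoch show twice as many iterations while keeping the same amount of data per epoch.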