A version of Trainer that calculates the eval metrics (e.g. accuracy) on the training set as well
🚀 Feature request
The Trainer class only calculates accuracy (assuming that it is one of the eval metrics) on the evaluation dataset. However, I would like these metrics to be computed on the training set as well.
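One manual workaround, shown as a minimal sketch below, is to call evaluate() on the training data yourself. This assumes `trainer` is an already-configured transformers.Trainer with a compute_metrics function and that `train_dataset` is the dataset it was trained on; the metric_key_prefix argument of Trainer.evaluate() (available in recent versions) relabels the resulting keys:

```python
# Minimal sketch: assumes `trainer` is a configured transformers.Trainer
# with compute_metrics set, and `train_dataset` is its training data.
eval_metrics = trainer.evaluate()  # keys like eval_accuracy, eval_loss, ...

# evaluate() accepts any dataset; metric_key_prefix renames the keys so
# the results are reported as train_accuracy, train_loss, etc.
train_metrics = trainer.evaluate(
    eval_dataset=train_dataset,
    metric_key_prefix="train",
)
```

The downside is that this runs only when you call it, not automatically at every evaluation step during training.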
Note that TensorFlow automatically calculates accuracy on the training set.
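For comparison, here is a minimal, self-contained Keras sketch (with made-up random data, purely for illustration): any metric passed to compile() is reported on the training batches during fit(), alongside the validation metrics.

```python
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration.
x = np.random.rand(128, 4).astype("float32")
y = np.random.randint(0, 2, size=128)

model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="softmax")])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# fit() prints both `accuracy` (training) and `val_accuracy` each epoch.
model.fit(x, y, validation_split=0.25, epochs=2)
```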
Motivation
Training set accuracy is crucial for some applications and investigations in ML; for example, comparing training and evaluation accuracy is the standard way to diagnose overfitting versus underfitting.
Your contribution
I want to make sure I am not reinventing the wheel before subclassing Trainer myself (one possible shape of such a subclass is sketched below).
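For reference, here is a hedged sketch of what such a subclass could look like; it is not an official API, and the class name TrainAndEvalTrainer is made up. It overrides evaluate() to additionally run the standard evaluation loop over self.train_dataset with a "train" metric prefix, so training-set metrics are computed whenever evaluation is triggered during training:

```python
from transformers import Trainer


class TrainAndEvalTrainer(Trainer):
    """Hypothetical subclass: reports eval metrics on the training set too."""

    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # Run the standard evaluation loop over the training data first;
        # super().evaluate() applies compute_metrics and logs the results
        # under train_* keys because of metric_key_prefix.
        train_metrics = super().evaluate(
            eval_dataset=self.train_dataset,
            ignore_keys=ignore_keys,
            metric_key_prefix="train",
        )
        # Then evaluate on the validation set as usual (eval_* keys).
        eval_metrics = super().evaluate(
            eval_dataset=eval_dataset,
            ignore_keys=ignore_keys,
            metric_key_prefix=metric_key_prefix,
        )
        return {**train_metrics, **eval_metrics}
```

One design caveat: evaluating over the full training set at every evaluation step can be expensive; passing a fixed subsample of the training data instead would give the same signal at a fraction of the cost.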
What I mean is, I have no idea where this comes from, since there is no randomness invoked during evaluation.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.