How to get test predictions (and other non-scalar metrics)?
❓ Questions and Help
What is your question?
If I have a trained model and I want to test it using Trainer.test(), how do I get the actual predictions of the model on the test set? I tried logging the predictions and writing a Callback to retrieve the logs at the end of testing, but it seems I can only log scalar Tensors in the dictionary returned by my model’s test_end().
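For concreteness, here is a minimal sketch of the kind of approach described above, assuming the older PyTorch Lightning API in which the module implements test_end(); the class name MyModel and the output keys test_loss and preds are placeholders, not part of the original question:

```python
import torch
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    # ... __init__, forward, training_step, test_step, etc. ...

    def test_end(self, outputs):
        # Aggregate the per-batch dicts returned by test_step().
        avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
        preds = torch.cat([x["preds"] for x in outputs])  # non-scalar tensor

        return {
            "log": {"test_loss": avg_loss},  # scalar values log fine
            # "log": {"preds": preds},       # a non-scalar tensor here is where it breaks down
        }
```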
That’s not the same as logging; that was not clear in your original question. You will have to collect your predictions in test_step in an attribute such as self.predictions (a list, for example). Then, after you call trainer.test(), you can access model.predictions in your notebook. What do you think?

I know this is an old question, but I think this question & answer could serve as a workaround.
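A minimal sketch of that suggestion, assuming a recent PyTorch Lightning version (where self.log and trainer.test(dataloaders=...) are available); the class LitModel, the attribute name self.predictions, and test_loader are placeholders for your own model and data:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    """Illustrative module; replace with your own architecture."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)
        self.predictions = []  # collect predictions here instead of logging them

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def test_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x).argmax(dim=-1)
        self.predictions.append(preds.cpu())                # store the non-scalar output
        self.log("test_acc", (preds == y).float().mean())   # scalars can still be logged


model = LitModel()  # or load your trained checkpoint
trainer = pl.Trainer()
trainer.test(model, dataloaders=test_loader)  # test_loader: your test DataLoader
all_preds = torch.cat(model.predictions)      # the full predictions, outside the logging system
```

Note that with a distributed strategy each process only collects its own shard of predictions, so this simple pattern is best suited to single-device testing.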