Errors about evaluation metric
See original GitHub issue.

In my dataset config I set

evaluation = dict(interval=1, metric='mAP', save_best='mAP')

After the model trained for one epoch, the first evaluation failed with:

KeyError: 'metric mAP is not supported'

I am not using the original mmdetection but a modified copy of the mmdetection folder. How can I resolve this error? Thank you.
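The error itself comes from a metric-validation check inside the dataset's evaluate() method. Below is an illustrative sketch (not the exact mmdetection source; the function signature and allowed_metrics tuple are assumptions for demonstration) of how such a check rejects a metric name the dataset does not support:

```python
# Sketch: how a dataset's evaluate() typically validates metric names.
# 'allowed_metrics' here is a hypothetical example list, not the real one.
def evaluate(metric='bbox', allowed_metrics=('bbox', 'segm', 'proposal')):
    metrics = [metric] if isinstance(metric, str) else metric
    for m in metrics:
        if m not in allowed_metrics:
            # This is the line that produces the KeyError seen above
            raise KeyError(f'metric {m} is not supported')
    # ... actual evaluation would follow here
```

If the metric named in the config is not in the dataset class's allowed list, the first evaluation raises the KeyError, regardless of how training itself went.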
Issue Analytics
- State:
- Created 2 years ago
- Comments: 6 (3 by maintainers)
Which dataset did you use? If you used CocoDataset, the metric should be ‘bbox’.
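Following that advice, the fix is to change the metric name in the evaluation config. A minimal sketch, assuming CocoDataset and that the rest of the config is unchanged ('mAP' is typically accepted by VOC-style datasets, not CocoDataset):

```python
# Corrected evaluation config for CocoDataset: use 'bbox' instead of 'mAP'.
# save_best is updated to match so checkpointing tracks the same metric.
evaluation = dict(interval=1, metric='bbox', save_best='bbox')
```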
I will try and do this. Thank you so much!