Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Errors about evaluation metric

See original GitHub issue

In my dataset config I set evaluation = dict(interval=1, metric='mAP', save_best='mAP'). The model trained for one epoch, and at the first evaluation it failed with KeyError: 'metric mAP is not supported'.

I am not using the original mmdetection repository but a modified copy of it. How can I resolve this error? Thank you.
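The error comes from the dataset's evaluate method rejecting the metric name before evaluation starts. Here is a minimal sketch of that kind of check, assuming validation logic like mmdetection's CocoDataset.evaluate; the allowed names and the returned keys are illustrative, and a modified dataset class may define a different list:

```python
# Sketch of the metric validation that produces this KeyError.
# The allowed list mirrors what CocoDataset accepts; a custom or
# modified dataset may accept different names (e.g. VOC-style
# datasets accept 'mAP').
def evaluate(results, metric='bbox'):
    allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast']
    metrics = metric if isinstance(metric, list) else [metric]
    for m in metrics:
        if m not in allowed_metrics:
            raise KeyError(f'metric {m} is not supported')
    # ... the actual COCO-style evaluation would run here ...
    return {f'{m}_mAP': 0.0 for m in metrics}
```

So the KeyError simply means the metric string in the config does not match any name the dataset class knows about.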

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
RangiLyu commented, Jan 10, 2022

Which dataset did you use? If you used CocoDataset, the metric should be ‘bbox’.
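Following that suggestion, the evaluation line in the config would change to something like the sketch below. The save_best value is an assumption: 'auto' tells mmcv's EvalHook to track the first metric the dataset reports; a modified mmdetection fork may expect an explicit key such as 'bbox_mAP' instead, so check what your fork actually returns:

```python
# Suggested config change for CocoDataset: 'bbox' instead of 'mAP'.
# save_best='auto' lets the evaluation hook pick the first reported
# metric; an explicit key like 'bbox_mAP' can also be used.
evaluation = dict(interval=1, metric='bbox', save_best='auto')
```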

0 reactions
dr-GitHub-account commented, Jan 10, 2022

I will try that. Thank you so much!

Read more comments on GitHub

Top Results From Across the Web

Error Metrics: How to Evaluate Your Forecasting Models - Jedox
This is done by calculating suitable error metrics. An error metric is a way to quantify the performance of a model and provides...

The evaluation metrics and error analysis in ML projects
This blog post explored the idea of setting evaluation metrics and performing the error analysis. The evaluation metric can give us a better...

11 Evaluation Metrics Data Scientists Should Be Familiar with
#1 — RMSE (Root Mean Squared Error) · #2 — RMSLE (Root Mean Squared Logarithmic Error) · #3 — MAE (Mean Absolute Error)...

Evaluation Metric for Regression Models - Analytics Vidhya
This evaluation metric quantifies the overall bias and captures the average bias in the prediction. It is almost similar to MAE, the only ...

7 Important model evaluation error metrics that everyone ...
You build a model. Get feedback from metrics, make improvements and continue until you achieve a desirable accuracy. Evaluation metrics explain the performance ...
