
AP is calculated as 0 despite exact bbox match

See original GitHub issue

I’m having a problem where even if I copy my detection results directly from the ground truth (adding a confidence score of 1.0), certain bounding boxes give an AP of 0 and bring down the mAP unnecessarily.

For example, try adding a 1.txt to detection-results containing:

dog 1.0 206 394 173 405

and a 1.txt to ground-truth containing:

dog 206 394 173 405

You’ll see that it’ll return a mAP of zero, despite the bboxes being exactly the same.
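As the comments below explain, the coordinates are in <left> <top> <right> <bottom> order, so the box 206 394 173 405 has left (206) greater than right (173). A minimal IoU sketch in Python (not the repository's own code, assuming the usual pixel-inclusive convention) shows why such a degenerate box can never match anything:

```python
# Minimal IoU sketch (not the repository's code); boxes are
# (left, top, right, bottom) with pixel-inclusive widths/heights.
def iou(box_a, box_b):
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])

    inter_w = right - left + 1
    inter_h = bottom - top + 1
    if inter_w <= 0 or inter_h <= 0:
        # A box with left > right (or top > bottom) always lands here,
        # even when compared against an identical copy of itself.
        return 0.0

    inter = inter_w * inter_h
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    return inter / (area_a + area_b - inter)

box = (206, 394, 173, 405)  # left=206 > right=173: degenerate
print(iou(box, box))        # 0.0, so the "exact match" never counts
```

With an IoU of 0, the detection is counted as a false positive and its ground truth as a missed box, so the class AP (and with it the mAP) collapses to zero.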

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5

Top GitHub Comments

2 reactions
toddwyl commented, May 6, 2019

I found my mistake. I generated the txt files incorrectly and made the top greater than the bottom; the format is <left> <top> <right> <bottom>, and I forgot to check that bottom >= top and right >= left. Your mistake is making the left greater than the right. By the way, where is the official cocoapi to calculate mAP? Thanks.
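A quick way to catch this class of mistake is to validate the txt files before evaluating. Below is a hypothetical check_boxes helper (not part of the repository), assuming the line formats used above: <class> <left> <top> <right> <bottom> for ground truth and <class> <confidence> <left> <top> <right> <bottom> for detections:

```python
from pathlib import Path

# Hypothetical sanity check (not part of the repo): flag any line whose
# box has left > right or top > bottom. Assumes class names contain no
# spaces.
def check_boxes(folder, has_confidence):
    offset = 2 if has_confidence else 1  # skip <class> (and <confidence>)
    for txt in sorted(Path(folder).glob("*.txt")):
        for n, line in enumerate(txt.read_text().splitlines(), start=1):
            parts = line.split()
            left, top, right, bottom = map(float, parts[offset:offset + 4])
            if left > right or top > bottom:
                print(f"{txt}:{n}: degenerate box {left} {top} {right} {bottom}")

check_boxes("ground-truth", has_confidence=False)
check_boxes("detection-results", has_confidence=True)
```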

0 reactions
jiteshm17 commented, Jun 19, 2019

I have the ground truths and detections in the form x_min, y_min, x_max, y_max. Can I use the order x_min, y_max, x_max, y_min to match the <left> <top> <right> <bottom> order mentioned in the txt file, or should I change it to something else?
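Assuming standard image coordinates (origin at the top-left, with y increasing downward), <left> <top> <right> <bottom> corresponds directly to x_min y_min x_max y_max; writing x_min, y_max, x_max, y_min instead would make top greater than bottom and trigger exactly the zero-AP failure described above. A minimal sketch:

```python
# Sketch of the mapping, assuming standard image coordinates
# (origin at top-left, y increasing downward). Example values only.
x_min, y_min, x_max, y_max = 173, 394, 206, 405

# <left> <top> <right> <bottom> is simply x_min y_min x_max y_max:
left, top, right, bottom = x_min, y_min, x_max, y_max
print(f"dog {left} {top} {right} {bottom}")  # a valid ground-truth line
```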
