
Where can I find the meaning of the evaluation metrics?


How should I understand the following four evaluation headers?

① Car AP@0.70, 0.70, 0.70
② Car AP_R40@0.70, 0.70, 0.70
③ Car AP@0.70, 0.50, 0.50
④ Car AP_R40@0.70, 0.50, 0.50

The first header (Car AP@0.70, 0.70, 0.70) and the third (Car AP@0.70, 0.50, 0.50) both begin with the same 0.70, yet the 3d AP of the former is 89.3476 while that of the latter is 96.2342.

```
2021-01-12 23:10:45,241 INFO *************** EPOCH 8369 EVALUATION *****************
2021-01-12 23:17:41,700 INFO *************** Performance of EPOCH 8369 *****************
2021-01-12 23:17:41,713 INFO Generate label finished(sec_per_example: 0.1105 second).
2021-01-12 23:17:41,713 INFO recall_roi_0.3: 0.968447
2021-01-12 23:17:41,713 INFO recall_rcnn_0.3: 0.968561
2021-01-12 23:17:41,713 INFO recall_roi_0.5: 0.928466
2021-01-12 23:17:41,713 INFO recall_rcnn_0.5: 0.934389
2021-01-12 23:17:41,713 INFO recall_roi_0.7: 0.717394
2021-01-12 23:17:41,713 INFO recall_rcnn_0.7: 0.759483
2021-01-12 23:17:41,716 INFO Average predicted number of objects(3769 samples): 9.230
2021-01-12 23:18:04,213 INFO Car AP@0.70, 0.70, 0.70:
bbox AP:96.2470, 89.4992, 89.2430
bev  AP:90.0894, 87.9004, 87.4072
3d   AP:89.3476, 83.6901, 78.7028
aos  AP:96.22, 89.39, 89.07
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:98.2662, 94.4210, 92.2765
bev  AP:93.0239, 90.3255, 88.5319
3d   AP:92.1047, 84.3605, 82.4830
aos  AP:98.25, 94.26, 92.07
Car AP@0.70, 0.50, 0.50:
bbox AP:96.2470, 89.4992, 89.2430
bev  AP:96.2810, 89.4982, 89.2886
3d   AP:96.2342, 89.4774, 89.2535
aos  AP:96.22, 89.39, 89.07
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:98.2662, 94.4210, 92.2765
bev  AP:98.2607, 94.5896, 94.4319
3d   AP:98.2422, 94.5277, 94.3272
aos  AP:98.25, 94.26, 92.07
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:73.1477, 68.0799, 64.3542
bev  AP:65.1821, 59.4169, 54.5101
3d   AP:63.1230, 54.8428, 51.7816
aos  AP:67.84, 62.49, 58.73
Pedestrian AP_R40@0.50, 0.50, 0.50:
bbox AP:73.6837, 68.2715, 64.3622
bev  AP:65.9365, 58.5166, 54.1258
3d   AP:62.7110, 54.4902, 49.8798
aos  AP:67.82, 62.17, 58.07
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:73.1477, 68.0799, 64.3542
bev  AP:76.2555, 71.8445, 69.4931
3d   AP:76.2398, 71.8001, 69.4345
aos  AP:67.84, 62.49, 58.73
Pedestrian AP_R40@0.50, 0.25, 0.25:
bbox AP:73.6837, 68.2715, 64.3622
bev  AP:78.2616, 73.1740, 69.9717
3d   AP:78.2458, 73.0349, 69.8725
aos  AP:67.82, 62.17, 58.07
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:96.1222, 81.3613, 76.4936
bev  AP:88.5292, 73.3251, 70.3690
3d   AP:86.0637, 69.4789, 64.5046
aos  AP:95.98, 81.07, 76.17
Cyclist AP_R40@0.50, 0.50, 0.50:
bbox AP:97.1514, 82.4180, 78.2196
bev  AP:93.4584, 74.5322, 70.1025
3d   AP:89.1011, 70.3809, 66.0168
aos  AP:97.04, 82.12, 77.88
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:96.1222, 81.3613, 76.4936
bev  AP:95.0958, 78.2760, 73.3191
3d   AP:95.0958, 78.2670, 73.3121
aos  AP:95.98, 81.07, 76.17
Cyclist AP_R40@0.50, 0.25, 0.25:
bbox AP:97.1514, 82.4180, 78.2196
bev  AP:96.2402, 79.1335, 75.8222
3d   AP:96.2402, 79.1278, 75.7990
aos  AP:97.04, 82.12, 77.88

2021-01-12 23:18:04,217 INFO Result is save to /home/hby/hdd/chenyanbin/OpenPCDet/output/kitti_models/pv_rcnn/default/eval/epoch_8369/val/default
2021-01-12 23:18:04,217 INFO ****************Evaluation done.*****************
```

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 11

Top GitHub Comments

5 reactions
triasamo1 commented, Mar 26, 2021

Hey man,

  • The metrics go like this: AP @ Easy, Moderate, Hard (the three columns in each row).
  • AP@0.70 means Average Precision with an Intersection over Union (IoU) threshold of 70% (predictions overlapping the ground truth by less than 70% IoU are not counted as correct).
  • AP_R40 means Average Precision computed with a 40-point approximation of the precision-recall curve; plain “AP” uses the older 11-point approximation (see the sketch after this list).
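
To make the difference concrete, here is a minimal sketch of the two interpolation schemes, assuming the precision-recall curve has already been computed. The function names are illustrative, not OpenPCDet's actual API:

```python
import numpy as np

def ap_11_point(recall: np.ndarray, precision: np.ndarray) -> float:
    """Interpolated AP sampled at 11 recall points: 0.0, 0.1, ..., 1.0.
    At each point, take the best precision achievable at that recall or higher."""
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

def ap_r40(recall: np.ndarray, precision: np.ndarray) -> float:
    """R40 variant: 40 recall points at 1/40, 2/40, ..., 1.0. Dropping the
    recall = 0 sample and using a denser grid gives a less biased estimate."""
    ap = 0.0
    for r in np.linspace(1.0 / 40.0, 1.0, 40):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 40.0
    return ap

# Toy, monotonically decreasing PR curve just to show the two estimates differ.
recall = np.linspace(0.01, 0.95, 50)
precision = 1.0 - 0.5 * recall
print(f"AP_11:  {ap_11_point(recall, precision):.4f}")
print(f"AP_R40: {ap_r40(recall, precision):.4f}")
```

This is why the AP and AP_R40 rows in the log above report different numbers for the same detections: only the sampling of the curve changes, not the predictions.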
4 reactions
andraspalffy commented, Aug 6, 2021

I think I did answer this. While you might think it uses the same 0.7 threshold, it actually does not; see the thresholds here: https://github.com/open-mmlab/OpenPCDet/blob/0642cf06d0fd84f50cc4c6c01ea28edbc72ea810/pcdet/datasets/kitti/kitti_object_eval_python/eval.py#L639

The headers “Car AP@0.70, 0.70, 0.70:” and “Car AP@0.70, 0.50, 0.50” list one IoU threshold per metric (bbox, bev, 3d). The leading 0.70 is the bbox threshold, so it does not apply to the 3d AP rows; those use the third number (0.70 vs. 0.50). It is confusing indeed.
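
As a reference for those thresholds, the sketch below paraphrases the min_overlaps tables from kitti_object_eval_python/eval.py near the linked line. The values can be cross-checked against the headers in the log above, but treat the exact array layout as approximate:

```python
import numpy as np

# Rows: metric type (bbox, bev, 3d).
# Columns: class (Car, Pedestrian, Cyclist, Van, Person_sitting).
overlap_0_7 = np.array([[0.70, 0.50, 0.50, 0.70, 0.50],   # bbox
                        [0.70, 0.50, 0.50, 0.70, 0.50],   # bev
                        [0.70, 0.50, 0.50, 0.70, 0.50]])  # 3d
overlap_0_5 = np.array([[0.70, 0.50, 0.50, 0.70, 0.50],   # bbox
                        [0.50, 0.25, 0.25, 0.50, 0.25],   # bev
                        [0.50, 0.25, 0.25, 0.50, 0.25]])  # 3d
min_overlaps = np.stack([overlap_0_7, overlap_0_5], axis=0)  # [2, 3, num_classes]

# Each printed header is one column slice, one threshold per metric:
print(min_overlaps[0, :, 0])  # [0.7 0.7 0.7]   -> "Car AP@0.70, 0.70, 0.70"
print(min_overlaps[1, :, 0])  # [0.7 0.5 0.5]   -> "Car AP@0.70, 0.50, 0.50"
print(min_overlaps[1, :, 1])  # [0.5 0.25 0.25] -> "Pedestrian AP@0.50, 0.25, 0.25"
```

So the 3d AP of 96.2342 under “Car AP@0.70, 0.50, 0.50” is computed with a 0.5 IoU threshold, which is why it is higher than the 89.3476 computed at 0.7.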


