
Using evaluate.py - is it functional?

See original GitHub issue

Since I see no mention of it anywhere, I'm wondering whether evaluate.py is actually functional.

First I had to modify it just to get it running, and then, when I tried it on my custom data, I got bad results despite knowing the model performs quite well.

So to test it, I took 20 ground-truth labels, copied them as predictions, and added fake scores between 0.9 and 1.0. I may be misunderstanding mAP (a low number of examples makes the metric behave strangely, but I have "perfect" detections here), yet it seems I should be getting reasonably high scores. For cars, I get:

Car AP@0.70, 0.70, 0.70:
bbox AP:27.2727, 54.5455, 63.6364
bev  AP:0.0000, 0.0000, 0.0000
3d   AP:0.0000, 0.0000, 0.0000
aos  AP:27.27, 54.55, 63.64

The numbers are not even consistent between evaluations. Why 0 for BEV and 3D; is it not running those metrics at all? I don't see anything related to calibration in the eval code, so it shouldn't be an issue with reference frames. I'm lost. Is something wrong with this code?
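For reference, the sanity check described above amounts to handing the evaluator a verbatim copy of the ground truth with a confidence score appended. Below is a minimal sketch of that setup in Python, assuming standard KITTI-format label files; the directory names are placeholders, and only the 0.9–1.0 score range comes from the issue itself.

import os
import random

# Placeholder paths -- adjust to your dataset layout.
GT_DIR = "label_2"        # KITTI ground-truth label files (15 fields per line)
PRED_DIR = "fake_preds"   # output: the same labels with a fake confidence appended

os.makedirs(PRED_DIR, exist_ok=True)

for fname in sorted(os.listdir(GT_DIR)):
    with open(os.path.join(GT_DIR, fname)) as f:
        gt_lines = [ln.strip() for ln in f if ln.strip()]

    pred_lines = []
    for ln in gt_lines:
        fields = ln.split()
        # KITTI ground-truth lines have 15 fields; prediction files expect
        # a 16th "score" field appended at the end.
        score = random.uniform(0.9, 1.0)
        pred_lines.append(" ".join(fields[:15] + [f"{score:.4f}"]))

    with open(os.path.join(PRED_DIR, fname), "w") as f:
        f.write("\n".join(pred_lines) + "\n")

With predictions built this way, one would expect bbox, BEV, and 3D AP to all come out high, which is what makes the zeros in the BEV and 3D rows above suspicious.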

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 14 (1 by maintainers)

Top GitHub Comments

2 reactions
jacoblambert commented, Sep 22, 2020

Since I couldn't solve this in the PCDet repo, I'm resorting to the official eval code. I made a script to convert whatever format you have into KITTI format; then you can simply run the C++ code: https://github.com/jacoblambert/label_to_kitti_format
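For anyone not using that script, here is a rough illustration of how a single KITTI-format prediction line can be assembled by hand. The det dictionary and its keys are hypothetical stand-ins for whatever your source format provides; this is not the code from the linked repository.

# Hypothetical detection record; adapt the keys to your own format.
det = {
    "type": "Car",
    "bbox": [710.4, 144.0, 820.3, 307.9],  # 2D image box: left, top, right, bottom (pixels)
    "dimensions": [1.57, 1.65, 3.35],      # height, width, length (metres)
    "location": [1.84, 1.47, 8.41],        # x, y, z in camera coordinates (metres)
    "rotation_y": 1.62,                    # yaw around the camera Y axis (radians)
    "score": 0.98,
}

# Field order in a KITTI label line: type, truncated, occluded, alpha, bbox (4),
# dimensions (3), location (3), rotation_y, score. Truncated/occluded/alpha are
# often filled with -1 / -1 / -10 placeholders for predictions, but note that a
# placeholder alpha makes the AOS metric meaningless.
line = ("{type} -1 -1 -10 "
        "{b[0]:.2f} {b[1]:.2f} {b[2]:.2f} {b[3]:.2f} "
        "{d[0]:.2f} {d[1]:.2f} {d[2]:.2f} "
        "{l[0]:.2f} {l[1]:.2f} {l[2]:.2f} "
        "{ry:.2f} {s:.4f}").format(
            type=det["type"], b=det["bbox"], d=det["dimensions"],
            l=det["location"], ry=det["rotation_y"], s=det["score"])
print(line)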

1 reaction
jhultman commented, Sep 28, 2020

@jacoblambert I think the problem you mention is the same as the one discussed here and here. The rotated IoU algorithm in the eval library cannot handle the case where the orientations are identical (this is a bug, of course).
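To see the degenerate case concretely, the snippet below cross-checks the BEV overlap of a box with itself using shapely as an independent reference. This is not the eval library's kernel; bev_box_corners and rotated_iou are names made up for this sketch, and shapely is an extra dependency.

import numpy as np
from shapely.geometry import Polygon

def bev_box_corners(x, z, length, width, ry):
    # Corner coordinates of a bird's-eye-view box centred at (x, z),
    # rotated by yaw ry (camera-frame convention).
    corners = np.array([[ length / 2,  width / 2],
                        [ length / 2, -width / 2],
                        [-length / 2, -width / 2],
                        [-length / 2,  width / 2]])
    rot = np.array([[np.cos(ry), -np.sin(ry)],
                    [np.sin(ry),  np.cos(ry)]])
    return corners @ rot.T + np.array([x, z])

def rotated_iou(box_a, box_b):
    poly_a = Polygon(bev_box_corners(*box_a))
    poly_b = Polygon(bev_box_corners(*box_b))
    inter = poly_a.intersection(poly_b).area
    return inter / (poly_a.area + poly_b.area - inter)

# A box compared against an exact copy of itself: the reference IoU is 1.0,
# whereas the buggy rotated-IoU kernel can return 0 when the two orientations
# are exactly equal.
box = (1.84, 8.41, 3.35, 1.65, 1.62)  # x, z, length, width, rotation_y
print(rotated_iou(box, box))          # -> 1.0

# One workaround sometimes suggested: nudge the predicted yaw by a tiny
# epsilon so the polygon-clipping code never sees perfectly parallel edges.
nudged = (*box[:4], box[4] + 1e-4)
print(rotated_iou(box, nudged))       # still ~1.0

If the eval library's kernel disagrees with this reference for boxes with identical orientations, that is the same degenerate case being hit.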

Read more comments on GitHub
