
Simple detection evaluator

See original GitHub issue

❓ Questions and Help

General questions about detectron2.

Thanks for all the great work! I have my own custom detection dataset(s) with a train/validation split, and I would like to run periodic evaluation during training.

I set:

cfg.DATASETS.TEST = ("car_parts/valid",)
cfg.TEST.EVAL_PERIOD = 2000

If I understand correctly, I need to set MetadataCatalog.get(dataset_name).evaluator_type, but I'm not sure what to use as the evaluator. I have my own get_json() method since my data is not in any standard format. Is there a 'Simple detection evaluator'?
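For context, detectron2 evaluators implement a three-method protocol defined by its DatasetEvaluator base class: reset() is called before evaluation, process(inputs, outputs) once per batch, and evaluate() returns a dict of results. Below is a minimal sketch following that protocol in plain Python (no detectron2 import); the counting metric and class name are illustrative, not part of detectron2's API:

```python
# A minimal "simple detection evaluator" following the three-method
# protocol of detectron2's DatasetEvaluator (reset / process / evaluate).
# Plain Python, no detectron2 import; the counting metric is illustrative.

class SimpleDetectionEvaluator:
    """Counts predicted boxes whose confidence exceeds a threshold."""

    def __init__(self, score_thresh=0.5):
        self.score_thresh = score_thresh
        self.reset()

    def reset(self):
        # Called once before evaluation starts.
        self.num_images = 0
        self.num_confident = 0

    def process(self, inputs, outputs):
        # Called once per batch. In detectron2, `outputs` holds Instances
        # objects; here they are modeled as plain dicts with a "scores" list.
        for output in outputs:
            self.num_images += 1
            self.num_confident += sum(
                1 for s in output["scores"] if s >= self.score_thresh
            )

    def evaluate(self):
        # Returns a dict of metric name -> value, as detectron2 expects.
        return {"num_images": self.num_images,
                "confident_detections": self.num_confident}


if __name__ == "__main__":
    ev = SimpleDetectionEvaluator(score_thresh=0.5)
    ev.process(inputs=[{}, {}],
               outputs=[{"scores": [0.9, 0.3]}, {"scores": [0.7]}])
    print(ev.evaluate())  # {'num_images': 2, 'confident_detections': 2}
```

Note that newer detectron2 releases can often use COCOEvaluator directly on a registered custom dataset, since it converts annotations to COCO json internally, which may remove the need for a custom evaluator entirely.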

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 13 (3 by maintainers)

Top GitHub Comments

botcs commented on Oct 21, 2019 (6 reactions)

There are multiple reasons to have a generic set of utility functions for evaluating the same metric:

  • transparency: it is extremely hard to reverse-engineer what happens inside cocoapi when trying to understand the evaluation process. Although this retrospective summary and this medium post are super helpful in this matter, it would be beneficial to have a straightforward implementation of the metric that is easy to see through, modify, and improve.
  • flexibility: currently, the end goal of most state-of-the-art methods is to improve mAP without acknowledging any trade-offs from the viewpoint of other metrics. This can hide shortcomings of an algorithm on custom datasets that mAP is unaware of. Letting people compute multiple metrics in one pass, without reorganizing the data structure for the hundredth time, could help set a new trend.
  • canonical reference: it is extremely frustrating that reported mAP scores can refer to the VOC, COCO, or CityScapes variant of the metric, and many papers (even useful and popular ones) never explicitly name the reference implementation they used. In the rare case where the source is published with trained baselines, it can still be painful to format the I/O pairs for each metric just to find out which one was used. One could ask the authors, of course, but… meh.
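The kind of transparent, easy-to-inspect implementation argued for above can be sketched in a few lines of dependency-free Python. This shows only the core quantities behind mAP, namely box IoU and greedy matching of predictions to ground truth; the box format (x1, y1, x2, y2) and the 0.5 threshold are assumptions, and this is not any reference implementation:

```python
# Dependency-free sketch of the core quantities behind detection metrics:
# box IoU and greedy matching into true/false positives. Boxes are
# (x1, y1, x2, y2) tuples; these choices are assumptions for illustration.

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def match_detections(preds, gts, iou_thresh=0.5):
    """Greedily match predictions (sorted by score) to ground-truth boxes,
    each ground truth used at most once. Returns (TP, FP, FN)."""
    preds = sorted(preds, key=lambda p: -p["score"])
    unmatched = list(range(len(gts)))
    tp = fp = 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for gi in unmatched:
            v = iou(p["box"], gts[gi])
            if v >= best_iou:
                best, best_iou = gi, v
        if best is None:
            fp += 1
        else:
            tp += 1
            unmatched.remove(best)
    return tp, fp, len(unmatched)
```

From TP/FP/FN per score cutoff, precision and recall (and hence a PR curve and its AUC) follow directly, which makes each step of the metric visible and modifiable.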
bconsolvo-zvelo commented on Aug 28, 2020 (4 reactions)

@ppwwyyxx, if I understood correctly, the thing that is missing for a general “simple” evaluator is a DatasetRegistry-to-COCO-json converter. Is that correct?
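The converter discussed here can be sketched without detectron2 at all, assuming detectron2-style dataset dicts (file_name, image_id, height, width, annotations) with absolute XYWH boxes. The output field names follow the COCO annotation format; everything else is an assumption for illustration:

```python
# Hedged sketch of a dataset-dicts-to-COCO converter. Input: a list of
# detectron2-style dataset dicts with absolute XYWH boxes (assumed).
# Output: a COCO-format dict that json.dump could serialize for cocoapi.

def to_coco(dataset_dicts, class_names):
    images, annotations = [], []
    ann_id = 1  # COCO annotation ids are 1-based and globally unique
    for d in dataset_dicts:
        images.append({"id": d["image_id"], "file_name": d["file_name"],
                       "height": d["height"], "width": d["width"]})
        for ann in d.get("annotations", []):
            x, y, w, h = ann["bbox"]
            annotations.append({"id": ann_id, "image_id": d["image_id"],
                                "category_id": ann["category_id"],
                                "bbox": [x, y, w, h], "area": w * h,
                                "iscrowd": ann.get("iscrowd", 0)})
            ann_id += 1
    categories = [{"id": i, "name": n} for i, n in enumerate(class_names)]
    return {"images": images, "annotations": annotations,
            "categories": categories}
```

In a real pipeline the dataset dicts would come from DatasetCatalog.get(dataset_name), and bbox_mode would need checking before assuming XYWH.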

In the meantime, I have started to modularize the mAP evaluation process so that it works with COCO-format json. My plan is to reproduce results from VOC, COCO, and CityScapes with the same toolkit; you can find it here. So far, the precision and recall curve computation is ready.

As soon as I can validate the COCO scores with it, I'll make a PR.

I was hoping to use your evaluator to get a recall curve with my COCO dataset. Just noticed though that it says “AUC evaluation done, precision recall computation is wrong”, so I was a bit hesitant to try to use it. Any updates here? Thanks!
