Simple detection evaluator
See original GitHub issue

❓ Questions and Help
General questions about detectron2.
Thanks for all the great work! I have my own custom detection dataset with a train/validation split, and I would like to run periodic evaluation during training.
I set:
cfg.DATASETS.TEST = ("car_parts/valid",)
cfg.TEST.EVAL_PERIOD = 2000
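For intuition about what `EVAL_PERIOD` does: detectron2's `DefaultTrainer` installs an `EvalHook` that, roughly, runs evaluation every `eval_period` training iterations and once more after the final iteration. A toy, detectron2-free sketch of that schedule (the function name is ours, not detectron2's):

```python
# Toy sketch of how cfg.TEST.EVAL_PERIOD schedules evaluation,
# modeled on detectron2's EvalHook: evaluate every `eval_period`
# iterations, plus a final evaluation at the end of training.
def eval_iterations(max_iter, eval_period):
    """Return the iterations after which evaluation would run."""
    iters = [i for i in range(1, max_iter + 1)
             if eval_period > 0 and i % eval_period == 0]
    if max_iter not in iters:
        iters.append(max_iter)  # always evaluate once at the end
    return iters

print(eval_iterations(10000, 2000))  # -> [2000, 4000, 6000, 8000, 10000]
```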
If I understand correctly, I need to set MetadataCatalog.get(dataset_name).evaluator_type,
but I'm not sure what to use as the evaluator. I have my own get_json()
method, since my data is not in any of the usual formats.
Is there a ‘Simple detection evaluator’?
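There is no built-in evaluator called "simple detection evaluator", but detectron2 evaluators all follow a small `reset()` / `process()` / `evaluate()` interface (`DatasetEvaluator`), so one is easy to write for a custom format. Below is a minimal sketch of that pattern in plain Python, with no detectron2 import, computing recall at an IoU threshold; the class name, greedy matching, and `(x1, y1, x2, y2)` box convention are our assumptions for illustration, and a real subclass would take detectron2's batched `inputs`/`outputs` instead:

```python
# A minimal "simple detection evaluator" sketch. It mirrors the
# reset()/process()/evaluate() shape of detectron2's DatasetEvaluator
# but is self-contained. Boxes are (x1, y1, x2, y2) in absolute pixels.

def iou(a, b):
    """Intersection-over-union of two XYXY boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

class SimpleDetectionEvaluator:
    def __init__(self, iou_thresh=0.5):
        self.iou_thresh = iou_thresh
        self.reset()

    def reset(self):
        self.matched = 0   # ground-truth boxes matched by a prediction
        self.total_gt = 0  # all ground-truth boxes seen so far

    def process(self, gt_boxes, pred_boxes):
        """Greedily match each ground-truth box to an unused prediction."""
        self.total_gt += len(gt_boxes)
        used = set()
        for gt in gt_boxes:
            best, best_iou = None, self.iou_thresh
            for i, pred in enumerate(pred_boxes):
                overlap = iou(gt, pred)
                if i not in used and overlap >= best_iou:
                    best, best_iou = i, overlap
            if best is not None:
                used.add(best)
                self.matched += 1

    def evaluate(self):
        recall = self.matched / self.total_gt if self.total_gt else 0.0
        return {"recall@%.2f" % self.iou_thresh: recall}
```

With detectron2 installed, the equivalent class would subclass `detectron2.evaluation.DatasetEvaluator` and be returned from your trainer's `build_evaluator`, which is what `cfg.TEST.EVAL_PERIOD` calls into.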
Issue Analytics
- State:
- Created 4 years ago
- Reactions: 1
- Comments: 13 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
There are multiple reasons for having a generic set of utility functions for evaluating the same metric:
I was hoping to use your evaluator to get a recall curve with my COCO dataset. Just noticed though that it says “AUC evaluation done, precision recall computation is wrong”, so I was a bit hesitant to try to use it. Any updates here? Thanks!
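For readers who, like the commenter above, mainly want a recall curve: the textbook precision-recall computation from scored detections is short enough to write directly. The sketch below is the standard formulation, not the evaluator questioned in the issue; each detection is a `(score, is_true_positive)` pair and `num_gt` is the number of ground-truth objects:

```python
# Standard precision-recall curve from scored detections.
# detections: list of (confidence_score, is_true_positive) pairs.
# num_gt: total number of ground-truth boxes in the dataset.
def precision_recall_curve(detections, num_gt):
    """Return (precision, recall) lists over detections sorted by
    descending confidence score."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    precision, recall = [], []
    for _score, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision.append(tp / (tp + fp))  # fraction of predictions correct
        recall.append(tp / num_gt)        # fraction of ground truth found
    return precision, recall
```

Average precision (as in COCO-style metrics) is then an integral of this curve, typically after making precision monotonically decreasing in recall.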