
Evaluate existing coco model on own dataset

See original GitHub issue

I want to evaluate a custom dataset with an existing COCO model for reference. I followed the Colab notebook for general prediction after training, but I always get an AP of 0.00, even though I visualized the output and the model does a pretty good job.
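For context, the visualization was roughly along the lines of the Colab tutorial, i.e. something like this (the image path and variable names below are placeholders, not the exact code from the issue):

import cv2
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

# run a single image through the model and draw the predicted boxes
predictor = DefaultPredictor(cfg)
im = cv2.imread("test_image.jpg")  # placeholder path
outputs = predictor(im)

v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TEST[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("prediction.jpg", out.get_image()[:, :, ::-1])

The evaluation itself, however, always comes out as all zeros: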

Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.03s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

OrderedDict([('bbox',
              {'AP': 0.0,
               'AP50': 0.0,
               'AP75': 0.0,
               'APs': 0.0,
               'APm': -100.0,
               'APl': -100.0,
               'AP-1': 0.0,
               'AP-2': 0.0,
               'AP-3': nan,
               'AP-4': 0.0,
               'AP-5': nan,
               'AP-6': nan,
               'AP-7': nan,
                   ...
               'AP-75': nan,
               'AP-76': nan,
               'AP-77': nan,
               'AP-78': nan,
               'AP-79': nan,
               'AP-80': nan})])

My code is as follows. Like the COCO dataset, my dataset has 80 classes. The evaluator does pick up the 3 classes present in the small test dataset, since AP-1, AP-2 and AP-4 have a score of 0.0 while the other classes are nan.

from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader

# build the model and load the pretrained COCO weights
model = build_model(cfg)
DetectionCheckpointer(model).load(weights_path)

evaluator = COCOEvaluator("testset_val", cfg, False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "testset_val")
inference_on_dataset(model, val_loader, evaluator)

Any suggestions on how to evaluate this the right way?

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6

Top GitHub Comments

2 reactions
dhaivat1729 commented, Feb 1, 2020

I had the exact same problem. Whenever you run evaluation, I recommend deleting the contents of the ./output/ directory first. In my case, it turned out that the new json file was not being created, which caused the evaluation to fail. If you make any changes to the code, it’s a good idea to delete the ./output/ directory to ensure that you are saving the correct files.
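In case it helps, clearing it out is just a couple of lines before building the evaluator (the directory name here is simply the one from the snippet above):

import os
import shutil

output_dir = "./output/"
if os.path.isdir(output_dir):
    shutil.rmtree(output_dir)  # drop stale cached prediction/annotation json files
os.makedirs(output_dir, exist_ok=True)

evaluator = COCOEvaluator("testset_val", cfg, False, output_dir=output_dir)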

1 reaction
ppwwyyxx commented, Nov 28, 2019

The category ids in your dataset have to have the same meaning as the ones in COCO for a COCO model to work.
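One rough way to sanity-check this is to compare the metadata detectron2 has registered for the custom dataset against COCO's. The dataset name below comes from the code in the question; the rest is only a sketch, not code from the issue:

from detectron2.data import MetadataCatalog

coco_meta = MetadataCatalog.get("coco_2017_val")
my_meta = MetadataCatalog.get("testset_val")

# class names must line up index-for-index for a pretrained COCO model
print(coco_meta.thing_classes[:5])
print(my_meta.thing_classes[:5])

# for COCO-format datasets the raw category ids are remapped to contiguous ids;
# this mapping has to agree with COCO's as well (the attribute may be absent if
# the dataset was not registered from COCO-format json)
print(getattr(coco_meta, "thing_dataset_id_to_contiguous_id", None))
print(getattr(my_meta, "thing_dataset_id_to_contiguous_id", None))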

Read more comments on GitHub >

Top Results From Across the Web

  • How to work with object detection datasets in COCO format
    A comprehensive guide to defining, loading, exploring, and evaluating object detection datasets in COCO format using FiftyOne.
  • The COCO Dataset: Best Practices for Downloading ...
    How to download, visualize, and explore the COCO dataset or subsets with FiftyOne and add model predictions and evaluate them with ...
  • Tutorial 2: Customize Datasets - MMDetection's documentation!
    The simplest way is to convert your dataset to existing dataset formats (COCO or PASCAL VOC). The annotation json files in COCO format ...
  • An Introduction to the COCO Dataset
    The computer vision research community benchmarks new models and enhancements to existing models to test model performance.
  • 70-Page Report on the COCO Dataset and Object ...
    It facilitates visualization and access to COCO data resources and serves as an evaluation tool for model analysis on COCO. Here's the official ...
