Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Different AP after training

See original GitHub issue

Hi,

I’m running into very strange behavior. I use the same evaluation method during training and afterwards, yet the average-precision results differ when I reload the model and evaluate it. I use the same parameters and the same evaluation dataset in both cases.

Training results:
    cabbage    0.9206
    colza      0.8249
    greensalad 0.9800
    leekonion  0.7145
    redsalad   0.9764
    mAP: 0.8833
Epoch 00033: mAP improved from 0.88021 to 0.88327, saving model to …/snapshots/resnet101_csv.h5

Evaluation results (with …/snapshots/resnet101_csv.h5):
    cabbage    0.7723
    colza      0.6389
    greensalad 0.9561
    leekonion  0.3722
    redsalad   0.9596
    mAP: 0.7398
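As a sanity check, the reported mAP here is just the unweighted mean of the per-class APs, so the numbers are internally consistent:

```python
# Per-class APs from the standalone evaluation above.
aps = {"cabbage": 0.7723, "colza": 0.6389, "greensalad": 0.9561,
       "leekonion": 0.3722, "redsalad": 0.9596}

# mAP = mean over classes.
mAP = sum(aps.values()) / len(aps)
print(round(mAP, 4))  # 0.7398
```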

I tested two different ways of evaluating the model, but both produce the same result.

# create object that stores backbone information
backbone = models.backbone('resnet101')
backbone_retinanet = backbone.retinanet

# model = models.load_model(model_path, backbone_name='resnet101', convert=True)
model = backbone_retinanet(validation_generator.num_classes())
model.load_weights(model_path, by_name=True, skip_mismatch=True)

# make prediction model
from keras_retinanet.models.retinanet import retinanet_bbox
prediction_model = retinanet_bbox(model=model)

# Evaluate
# Evaluate
average_precisions, recalls, precisions, infer_time = evaluate(
    validation_generator,
    prediction_model,
    score_threshold=0.3,
    max_detections=100,
    iou_threshold=0.5)

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
JulienDufour commented, Jul 5, 2018

Thank you very much, hgaiser! I passed over that line several times… and it was so obvious. I’m feeling worse than ever.
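The line in question was an `evaluate(...)` call with a positional argument, which hgaiser diagnoses in the quote below. A minimal sketch of the pitfall, using a stand-in function whose parameter order mirrors keras-retinanet's `evaluate` as I understand it at the time (an assumption; check the signature in your installed version):

```python
# Stand-in with the assumed parameter order of keras-retinanet's evaluate():
# evaluate(generator, model, iou_threshold=0.5, score_threshold=0.05, ...)
def evaluate(generator, model, iou_threshold=0.5, score_threshold=0.05,
             max_detections=100):
    # Only reports what it received, to expose the mix-up.
    return {"iou_threshold": iou_threshold, "score_threshold": score_threshold}

# Positional call: 0.3 silently lands in iou_threshold, not score_threshold.
buggy = evaluate(None, None, 0.3)
print(buggy)  # {'iou_threshold': 0.3, 'score_threshold': 0.05}

# Keyword call: unambiguous, regardless of parameter order.
fixed = evaluate(None, None, score_threshold=0.3)
print(fixed)  # {'iou_threshold': 0.5, 'score_threshold': 0.3}
```

Passing every threshold by keyword makes this entire class of bug impossible, which is exactly the advice in the reply below.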

0 reactions
JerryIndus commented, Jul 26, 2019

There’s your problem then, the second argument is the iou_threshold. It’s better to name the argument, like score_threshold=score_threshold. Then you know for sure you’re setting the right thing.

Excuse me, sorry to bother you, but I’ve run into a problem similar to this one. I split my own dataset into training, validation, and test data, and the mAP is good on the validation data. After I convert the model into an inference model, the mAP is very bad on the test data (and even on the validation data). I don’t know why. Reading this issue (#549), I guess maybe I also need to get the same detections during training and inference, but I can’t understand what you mean. Can you explain it more explicitly?

Read more comments on GitHub >

