
How to resize gt-bbox when using RandomAffine for evaluating COCO val-set

See original GitHub issue

Hi @RangiLyu, I have the following question:

[Background] I am reproducing NanoDet-m-0.5x on top of mmdet2. In the official NanoDet code, you first get a resize warpAffine matrix and use it to resize the image when evaluating on the COCO val set. To align my inference results with the provided checkpoint weights, I must do the same thing.

[My attempt] I use the following config to try to reproduce this:

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(320, 320),
        flip=False,
        transforms=[
            # dict(type='Resize', keep_ratio=False),
            dict(
                type='RandomAffine',
                max_rotate_degree=0,
                max_translate_ratio=0,
                scaling_ratio_range=(1, 1),
                max_shear_degree=0,
                border_val=(0, 0, 0),
                min_bbox_size=0,
                min_area_ratio=0,
                max_aspect_ratio=20000),
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            # dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]

However, I found that although I get the image I wanted, the corresponding gt bboxes are not resized. If I don't use your checkpoint and instead train from scratch, the final COCO mAP is only 13.0. At test time, `bbox_fields` does not exist when RandomAffine runs, so what should I do?
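For what it's worth, the bbox side of a warpAffine-style resize can be sketched in plain NumPy: build a 3x3 resize matrix, then push every box corner through the same matrix used for the image. The function names, the letterbox centering, and the xyxy box layout below are my own assumptions for illustration, not the exact NanoDet or mmdet2 implementation:

```python
import numpy as np

def get_resize_matrix(src_shape, dst_shape, keep_ratio=True):
    """Build a 3x3 affine matrix mapping an image of src_shape (h, w)
    onto a dst_shape (h, w) canvas, optionally keeping aspect ratio
    (letterbox-style, content centered)."""
    src_h, src_w = src_shape
    dst_h, dst_w = dst_shape
    M = np.eye(3, dtype=np.float32)
    if keep_ratio:
        scale = min(dst_w / src_w, dst_h / src_h)
        M[0, 0] = M[1, 1] = scale
        # center the resized content on the destination canvas
        M[0, 2] = (dst_w - src_w * scale) / 2
        M[1, 2] = (dst_h - src_h * scale) / 2
    else:
        M[0, 0] = dst_w / src_w
        M[1, 1] = dst_h / src_h
    return M

def warp_boxes(boxes, M, dst_w, dst_h):
    """Apply a 3x3 affine matrix to (N, 4) xyxy boxes, then clip to
    the destination canvas."""
    n = len(boxes)
    if n == 0:
        return boxes
    # all four corners of every box, in homogeneous coordinates
    corners = np.ones((n * 4, 3), dtype=np.float32)
    corners[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(n * 4, 2)
    corners = corners @ M.T
    corners = (corners[:, :2] / corners[:, 2:3]).reshape(n, 8)
    xs, ys = corners[:, ::2], corners[:, 1::2]
    warped = np.stack([xs.min(1), ys.min(1), xs.max(1), ys.max(1)], axis=1)
    warped[:, [0, 2]] = warped[:, [0, 2]].clip(0, dst_w)
    warped[:, [1, 3]] = warped[:, [1, 3]].clip(0, dst_h)
    return warped
```

Transforming all four corners (rather than just two) keeps `warp_boxes` correct for any affine matrix, including rotation and shear.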

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
RangiLyu commented, Nov 30, 2021

As far as I know, the resize method should not influence the performance much.

0 reactions
Senwang98 commented, Nov 30, 2021

@RangiLyu Sorry to disturb you. My parameter count and GFLOPs are the same as your NanoDet, but the final mAP is 13.0 without data augmentation during training. If I use RandomAffine and PhotoMetricDistortion during training, the final mAP is below 12.5. I have no idea how to reproduce your NanoDet based on mmdet2. If I want to resize test images just like your NanoDet does (getting a resize affine matrix), what should I do?
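One detail that matters for COCO mAP here: if the test image is resized through an affine matrix, predictions come out in network-input coordinates and must be mapped back to the original image before evaluation. A minimal sketch of that inverse mapping, assuming the resize is expressed as an invertible 3x3 matrix `M` and axis-aligned boxes (no rotation, so transforming the two extreme corners suffices); `unwarp_preds` is a hypothetical helper name:

```python
import numpy as np

def unwarp_preds(pred_boxes, M, src_w, src_h):
    """Map predicted (N, 4) xyxy boxes from network-input coordinates
    back to original-image coordinates with the inverse of the 3x3
    resize matrix M, then clip to the original image size.
    Valid for scale/translate matrices (no rotation or shear)."""
    Minv = np.linalg.inv(M)
    n = len(pred_boxes)
    # both corners of every box, in homogeneous coordinates
    pts = np.ones((n * 2, 3), dtype=np.float32)
    pts[:, :2] = pred_boxes[:, :4].reshape(n * 2, 2)
    pts = (pts @ Minv.T)[:, :2].reshape(n, 4)
    pts[:, [0, 2]] = pts[:, [0, 2]].clip(0, src_w)
    pts[:, [1, 3]] = pts[:, [1, 3]].clip(0, src_h)
    return pts
```

If the matrix used at test time were instead recorded in `img_metas` by a custom pipeline transform, the same inversion could run inside the model's post-processing, which is where NanoDet performs this step.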
