
Adding Anchor scales


Hi, I’m adding smaller anchor scales so the model can detect small objects as well as large ones.

I’m changing this line:

args.set_cfgs = ['ANCHOR_SCALES', '[8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']

to

args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']

It works for the training script, but when I run it in demo.py I get the following error.

The error occurs here: fasterRCNN.load_state_dict(checkpoint['model'])

Error message:

fasterRCNN.load_state_dict(checkpoint['model'])
  File "/home/ubuntu/py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for resnet:
    While copying the parameter named "RCNN_rpn.RPN_cls_score.bias", whose dimensions in the model are torch.Size([18]) and whose dimensions in the checkpoint are torch.Size([24]).
    While copying the parameter named "RCNN_rpn.RPN_cls_score.weight", whose dimensions in the model are torch.Size([18, 512, 1, 1]) and whose dimensions in the checkpoint are torch.Size([24, 512, 1, 1]).
    While copying the parameter named "RCNN_rpn.RPN_bbox_pred.bias", whose dimensions in the model are torch.Size([36]) and whose dimensions in the checkpoint are torch.Size([48]).
    While copying the parameter named "RCNN_rpn.RPN_bbox_pred.weight", whose dimensions in the model are torch.Size([36, 512, 1, 1]) and whose dimensions in the checkpoint are torch.Size([48, 512, 1, 1]).

This suggests the wrong architecture is being built for the checkpoint. I suspect something is wrong with the configuration fed to demo.py, but I’m not sure. Has anyone experienced this and found a solution?
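
For reference, a quick way to see exactly which parameters disagree (a minimal sketch, assuming the load_name path and the fasterRCNN object that demo.py already sets up):

import torch

checkpoint = torch.load(load_name)
model_state = fasterRCNN.state_dict()
# Print every parameter whose shape differs between the freshly built model and the checkpoint.
for name, ckpt_param in checkpoint['model'].items():
    if name in model_state and model_state[name].shape != ckpt_param.shape:
        print(name, 'model:', tuple(model_state[name].shape), 'checkpoint:', tuple(ckpt_param.shape))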

Thank you in advance!

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11

Top GitHub Comments

7 reactions
andres-fr commented, May 2, 2019

@kangkang59812 The important thing is that the number of ANCHOR_SCALES times the number of ANCHOR_RATIOS equals the number of anchors your architecture expects. The paper and the default config use 3*3 = 9, but the COCO pretrained model expects 12. So you can more or less “choose” the combination you want, as long as the counts multiply to 12 (CLARIFICATION: ideally you should reproduce the exact set of anchors that was used for training; I don’t know how to reverse-engineer that). The anchor config for the pretrained COCO model seems to be here:

https://github.com/jwyang/faster-rcnn.pytorch/blob/aec4244532cd7affbc1f0cdbed81900ee8cacf3c/trainval_net.py#L166
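
To make the relationship concrete, here is a minimal sketch of how the anchor configuration determines the RPN head sizes (the 18-vs-24 and 36-vs-48 shapes in the error above are exactly 2*A and 4*A for A = 9 vs. A = 12 anchors):

# Minimal sketch: how ANCHOR_SCALES and ANCHOR_RATIOS set the RPN output channels.
anchor_scales = [4, 8, 16, 32]   # COCO-style config
anchor_ratios = [0.5, 1, 2]

num_anchors = len(anchor_scales) * len(anchor_ratios)  # A = 12
cls_score_channels = 2 * num_anchors                   # 24: object vs. background score per anchor
bbox_pred_channels = 4 * num_anchors                   # 48: (dx, dy, dw, dh) per anchor

# With the default [8, 16, 32] scales, A = 9, giving 18 and 36 instead,
# which is exactly the mismatch reported when loading the COCO checkpoint.
print(num_anchors, cls_score_channels, bbox_pred_channels)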

In my case (on the PyTorch 1.0 branch), the demo runs flawlessly with the Pascal pretrained model, but I hit this issue when trying it with COCO, as @CyanideCentral reports. Apart from what I already mentioned, I also had a class mismatch in the demo (21 vs. 81): the demo instantiates the network with pascal_classes, but the COCO pretrained model expects the COCO classes. Replacing the list with the COCO classes (in the official order given at the link), plus __background__ at the beginning, made the demo work with COCO. For the copy-pasters:

import numpy as np

coco_classes = np.asarray([
    "__background__", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",
    "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe",
    "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
    "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
    "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl",
    "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza",
    "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet",
    "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven",
    "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
    "hair drier", "toothbrush"])
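
From there it is just a matter of building the network with this list instead of pascal_classes; a rough sketch, assuming the resnet wrapper that demo.py already imports (adjust to whatever backbone you actually load):

# Rough sketch: instantiate the network with the COCO classes instead of pascal_classes.
fasterRCNN = resnet(coco_classes, 101, pretrained=False, class_agnostic=args.class_agnostic)
fasterRCNN.create_architecture()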

Hope this helps!
Andres

1 reaction
CyanideCentral commented, Jul 1, 2018

This problem appears for models trained on the COCO dataset. I fixed it by changing this line to [4, 8, 16, 32].
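
In other words, the anchor settings that demo.py feeds into the config have to match whatever the checkpoint was trained with; a minimal sketch of the kind of change meant here (the exact line sits in demo.py's per-dataset handling):

# Sketch: make the demo's anchor config match the anchors the COCO checkpoint was trained with.
args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]']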


