ERROR: PyTorch 1.0 test size mismatch

Train on the COCO dataset:
python trainval_net.py --dataset coco --net vgg16 --bs 12 --nw 4 --lr 0.01 --lr_decay_step 1000 --cuda --mGPUs
then test:
python demo.py --net vgg16 --checksession 1 --checkepoch 8 --checkpoint 19543 --cuda --load_dir models --dataset coco
But testing fails with the following error:
load checkpoint models/vgg16/coco/faster_rcnn_1_8_19543.pth
Traceback (most recent call last):
File "demo.py", line 195, in <module>
fasterRCNN.load_state_dict(checkpoint['model'])
File "/home/xuan/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for vgg16:
size mismatch for RCNN_rpn.RPN_cls_score.weight: copying a param with shape torch.Size([24, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 512, 1, 1]).
size mismatch for RCNN_rpn.RPN_cls_score.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([18]).
size mismatch for RCNN_rpn.RPN_bbox_pred.weight: copying a param with shape torch.Size([48, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([36, 512, 1, 1]).
size mismatch for RCNN_rpn.RPN_bbox_pred.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([36]).
size mismatch for RCNN_cls_score.weight: copying a param with shape torch.Size([81, 4096]) from checkpoint, the shape in current model is torch.Size([21, 4096]).
size mismatch for RCNN_cls_score.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([21]).
size mismatch for RCNN_bbox_pred.weight: copying a param with shape torch.Size([324, 4096]) from checkpoint, the shape in current model is torch.Size([84, 4096]).
size mismatch for RCNN_bbox_pred.bias: copying a param with shape torch.Size([324]) from checkpoint, the shape in current model is torch.Size([84]).
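
The mismatched shapes decompose cleanly under the usual Faster R-CNN head layout, which points at the config rather than the checkpoint. A back-of-the-envelope sketch (the layer names come from the traceback; the arithmetic is an assumption about how the heads are sized):

def head_shapes(num_scales, num_ratios, num_classes):
    # RPN heads emit 2 (objectness) and 4 (box delta) values per anchor;
    # RCNN heads emit 1 score and 4 box deltas per class (background included).
    num_anchors = num_scales * num_ratios
    return {
        "RPN_cls_score": 2 * num_anchors,
        "RPN_bbox_pred": 4 * num_anchors,
        "RCNN_cls_score": num_classes,
        "RCNN_bbox_pred": 4 * num_classes,
    }

print(head_shapes(4, 3, 81))  # checkpoint: 4 anchor scales, 81 COCO classes -> 24, 48, 81, 324
print(head_shapes(3, 3, 21))  # demo model: 3 anchor scales, 21 classes -> 18, 36, 21, 84

So the checkpoint was trained with 4 anchor scales and 81 COCO classes, while demo.py built a model with 3 anchor scales and 21 classes.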
Issue Analytics
- Created 4 years ago
- Comments: 7 (2 by maintainers)
Top GitHub Comments
I went ahead and compared the config of my training run with the config of the demo… it turns out some config values were being set before the yaml file was loaded.
See this example: https://github.com/jwyang/faster-rcnn.pytorch/blob/master/test_net.py#L118
Double-check and fix your config so it matches how you trained, and you should be good to go.
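
In other words, the demo has to apply the same anchor settings the training run used before it builds the network. A minimal sketch of that pattern, assuming demo.py can use the same cfg_from_file / cfg_from_list helpers that test_net.py uses (the args fields here are illustrative):

from model.utils.config import cfg, cfg_from_file, cfg_from_list

# COCO training was run with four anchor scales, so the demo must override
# the defaults to rebuild the same RPN heads as the checkpoint.
if args.dataset == "coco":
    args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5, 1, 2]']

if args.cfg_file is not None:
    cfg_from_file(args.cfg_file)
if args.set_cfgs is not None:
    cfg_from_list(args.set_cfgs)  # apply overrides *before* constructing fasterRCNN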
I figured out how to work around it: modify lib/model/utils/config.py (around line 295) to set __C.ANCHOR_SCALES = [4, 8, 16, 32], and then demo.py runs successfully.
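
For reference, that edit would look roughly like this (a sketch of the default in lib/model/utils/config.py; your local line number may differ):

# lib/model/utils/config.py
# Anchor scales for the RPN. The COCO training run above used four scales,
# so the demo's model must be built with the same value to match the checkpoint.
__C.ANCHOR_SCALES = [4, 8, 16, 32]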