How to finetune from pretrained models trained on COCO data with a different number of classes?
Is there a config option to load pretrained COCO models for finetuning? The last layers may have a different number of classes, so those weights should not be loaded.
If I just use faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth, then I get:

While copying the parameter named bbox_head.fc_cls.weight, whose dimensions in the model are torch.Size([21, 1024]) and whose dimensions in the checkpoint are torch.Size([81, 1024]).
Alternatively, how should I modify the pretrained weights so they fit my model?
Thank you!
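One common workaround is "checkpoint surgery": delete the parameters whose shapes depend on the number of classes before loading, then load the rest. The sketch below uses placeholder shape tuples in place of real tensors; with an actual checkpoint you would `torch.load(...)` the file and edit its `state_dict` the same way, then load the result with `strict=False` (the key names below match the error message in the question, but treat the exact filtering as an assumption for your model).

```python
# Minimal sketch: drop the class-dependent head weights from a COCO
# checkpoint so the remaining weights can be loaded into a model with a
# different number of classes. Placeholder shapes stand in for tensors.
state_dict = {
    "backbone.conv1.weight": (64, 3, 7, 7),
    "bbox_head.fc_cls.weight": (81, 1024),  # 80 COCO classes + background
    "bbox_head.fc_cls.bias": (81,),
    "bbox_head.fc_reg.weight": (320, 1024),
    "bbox_head.fc_reg.bias": (320,),
}

# Remove every parameter whose shape depends on num_classes.
for key in list(state_dict):
    if key.startswith(("bbox_head.fc_cls", "bbox_head.fc_reg")):
        del state_dict[key]

print(sorted(state_dict))  # only the shape-compatible weights remain
```

With a real file, save the edited dict back (e.g. `torch.save(checkpoint, "coco_no_head.pth")`) and point your config's pretrained/`load_from` path at it.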
Issue Analytics
- Created: 4 years ago
- Comments: 11 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Thank you so much @spytensor, I've solved the problem this way.
Can you please share your experience with fine tuning like this? What lr schedule did you use? How many epochs?
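For reference, a finetuning setup like this is usually expressed as a small config that inherits the COCO base config and overrides only what changes. The following is a hypothetical sketch in the MMDetection config style; the class count, paths, and schedule values are assumptions, not the settings used in this thread, and the shape-mismatched head weights from the checkpoint still need to be removed first (the error quoted in the question):

```python
# Hypothetical MMDetection-style finetuning config sketch (values assumed).
_base_ = "./faster_rcnn_r50_fpn_1x_coco.py"  # inherit the COCO base config

# Override only the classification head size for the new dataset,
# e.g. 20 classes for Pascal VOC.
model = dict(roi_head=dict(bbox_head=dict(num_classes=20)))

# Start from the COCO weights (after stripping the class-dependent head).
load_from = "faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth"

# A common finetuning choice: a lower learning rate than training from
# scratch, since the backbone is already well initialized.
optimizer = dict(type="SGD", lr=0.0025, momentum=0.9, weight_decay=0.0001)
```

Saved as e.g. `my_finetune_config.py`, such a config would typically be launched with MMDetection's `tools/train.py`. As for epochs, finetuning runs are usually shorter than the base 1x schedule, but the right number depends on dataset size.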