About using pre-trained model from PyTorch
I want to use a pre-trained model from PyTorch to train a Faster R-CNN, and I see:
If you want to use pytorch pre-trained models, please remember to transpose images from BGR to RGB, and also use the same data transformer (minus mean and normalize) as used in pretrained model.
Does this mean I need to do two things: convert the images from BGR to RGB, and apply the same mean/std normalization that the pre-trained model used?
By the way, I didn't see any code that normalizes the images by a stddev value. Is there a configuration option for this, or do I need to add code in lib/roi_data_layer/minibatch.py?
Thank you!
Top GitHub Comments
Yes, the order of operations should be:

```python
im /= 255.            # convert range to [0, 1]
im -= pixel_means     # subtract per-channel mean
im /= pixel_stdens    # divide by per-channel stddev
```
You do not need to add “im = im[:, :, ::-1]”, since im from scipy.misc.imread is already RGB.
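For reference, a minimal self-contained sketch of that order in NumPy. The ImageNet mean/std constants, the `preprocess` helper, and the use of imageio in place of the deprecated scipy.misc.imread are assumptions for illustration, not code from this repo:

```python
import numpy as np
from imageio import imread  # assumption: imageio used instead of the deprecated scipy.misc.imread

# ImageNet per-channel statistics commonly used with torchvision pre-trained models (assumption)
PIXEL_MEANS = np.array([0.485, 0.456, 0.406], dtype=np.float32)
PIXEL_STDS = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path):
    """Load an RGB image and apply the scale -> subtract-mean -> divide-by-std order."""
    im = imread(path).astype(np.float32)  # HxWx3, RGB, values in [0, 255]
    im /= 255.          # convert range to [0, 1]
    im -= PIXEL_MEANS   # subtract per-channel mean
    im /= PIXEL_STDS    # divide by per-channel stddev
    return im
```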
To facilitate the transfer from Caffe models to PyTorch models, I will make some changes to the code. Stay tuned.
Hi, @clavichord93 ,
Yes, to use a PyTorch pre-trained model, you need to make three changes:
1. convert the image from BGR to RGB
2. normalize the image range from [0, 255] to [0, 1]
3. use the transformer below to transform the image data:

```python
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
```
Details can be found here: http://pytorch.org/docs/master/torchvision/models.html
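Putting the three changes together: a minimal sketch, assuming the image arrives as a BGR uint8 NumPy array (e.g. read with OpenCV); the `to_pretrained_input` helper is hypothetical, and ToTensor handles the [0, 255] -> [0, 1] scaling before Normalize runs:

```python
import numpy as np
import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def to_pretrained_input(im_bgr: np.ndarray) -> torch.Tensor:
    """im_bgr: HxWx3 uint8 array in BGR order (e.g. from cv2.imread)."""
    im_rgb = im_bgr[:, :, ::-1].copy()       # 1. BGR -> RGB (copy avoids negative strides)
    tensor = transforms.ToTensor()(im_rgb)   # 2. HWC uint8 [0, 255] -> CHW float [0, 1]
    return normalize(tensor)                 # 3. subtract mean, divide by std
```

The resulting CHW tensor can then be batched and fed to the RGB pre-trained backbone.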