How to train the Inceptionv3 model with high-resolution images (e.g. 512x512, 2048x2048)?
I'd like to train the Inceptionv3 model with high-resolution images like 512x512.
As far as I know, Inceptionv3 is designed to work with variable image sizes.
However, if I train it with images that are not 299x299, it fails with an error like:
size mismatch, m1: [8 x 19200], m2: [768 x 1000]
It looks like the problem is with this line:
`if self.training and self.aux_logits: aux = self.AuxLogits(x)`
Can anybody help me?
Thanks a lot in advance
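
For reference, a minimal sketch of the kind of code that hits this error (assuming a torchvision version whose inception.py still uses fixed-size pooling; newer releases may run fine):

```python
import torch
from torchvision import models

# Inception v3 with the auxiliary classifier enabled (the default when training from scratch).
model = models.inception_v3(aux_logits=True)
model.train()

# A batch of 8 images at 512x512 instead of the expected 299x299.
images = torch.randn(8, 3, 512, 512)

# On torchvision versions that pool with a fixed kernel size, this raises the
# "size mismatch, m1: [8 x 19200], m2: [768 x 1000]" error inside AuxLogits.
outputs = model(images)
```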

I guess this can be closed due to #744.
Hi,
The issue is that we have a fixed avg_pool2d in https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py#L117 instead of a global average pooling. The solution for now is to copy-paste the implementation and modify that line to use F.adaptive_avg_pool2d(x, (1, 1)).
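
To make that concrete, here is a small sketch (not the library's actual code) of why the fixed kernel_size=8 pooling breaks for larger inputs while the adaptive version does not. The 16x16 feature map below stands in for whatever Mixed_7c produces at your resolution:

```python
import torch
import torch.nn.functional as F

feat = torch.randn(8, 2048, 16, 16)   # hypothetical final feature map for a large input
fc = torch.nn.Linear(2048, 1000)      # the classifier expects 2048 features per image

# Fixed 8x8 pooling assumes an 8x8 feature map (a 299x299 input); on this 16x16 map it leaves a 2x2 grid:
fixed = F.avg_pool2d(feat, kernel_size=8)
print(fixed.flatten(1).shape)         # -> [8, 8192], which mismatches fc's 2048 inputs

# Global average pooling collapses any spatial size to 1x1, so the classifier always
# receives 2048 features per image, regardless of the input resolution:
pooled = F.adaptive_avg_pool2d(feat, (1, 1))
logits = fc(pooled.flatten(1))        # -> [8, 1000]
print(logits.shape)
```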