Adjust pred_scales for different input image size?
The comment
https://github.com/dbolya/yolact/issues/242#issuecomment-562907739
mentions we should adjust pred_scales / set max_size if our image is not 550x550 (the backbone input size), mostly to avoid upscaling.
- Should this be done automatically? (Detect image dimensions, or is it applied only if max_size is set?)
- How do I handle non-square (640x480) images?
Thank you
Issue Analytics
- State:
- Created 4 years ago
- Comments: 9 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’ve done the black padding before and it’s not that desirable for datasets with very variable image sizes like COCO, since you lose a lot of pixels that way. In #270 he was specifically trying to overfit onto one image, so it’s not like the network needed that many pixels to classify anyway.
For non-square images right now, you can try adding that black pixel border, but the better implementation that I have on the TODO list is to just have everything at a fixed non-square aspect ratio. Note that I can’t change the size of the image arbitrarily while training because of the way the prototypes create masks (the features expect a consistent image size, so the size has to be fixed at the start).
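For reference, a minimal sketch of that black-border padding using OpenCV; the 550 target matches YOLACT's default max_size, and the function name is purely illustrative, not something from the repo:

```python
import cv2
import numpy as np

def pad_to_square(img: np.ndarray, target: int = 550) -> np.ndarray:
    """Resize so the longer side equals `target`, then pad the rest with black."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(img, (round(w * scale), round(h * scale)))
    new_h, new_w = resized.shape[:2]
    # Pad on the bottom/right so the output is exactly target x target.
    return cv2.copyMakeBorder(resized, 0, target - new_h, 0, target - new_w,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))

# e.g. a 640x480 frame becomes 550x412 after resizing, then 550x550 after padding.
```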
As for whether the changes to the scales should be done automatically, I don’t think so. If you look at the im400 and im700 configs you can see what changes are necessary, and it’s quite simple to extrapolate those changes to your own config. I don’t want to touch the scales automatically because the scales you want depend on your dataset: some datasets tend to have bigger objects and others tend to have smaller ones.
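To make that extrapolation concrete, here is a sketch of the kind of override the im700-style configs apply, written against yolact's data/config.py; the copy() pattern follows the repo's config style, but the exact name and the 700/550 scaling are assumptions to check against the real yolact_im700_config rather than a drop-in replacement:

```python
from data.config import yolact_base_config

# Hypothetical 700px config: grow max_size and scale every anchor size by the
# same ratio the input grew by (700 / 550), leaving everything else alone.
yolact_im700_like_config = yolact_base_config.copy({
    'name': 'yolact_im700_like',
    'max_size': 700,
    'backbone': yolact_base_config.backbone.copy({
        'pred_scales': [[s * 700.0 / 550.0 for s in layer]
                        for layer in yolact_base_config.backbone.pred_scales],
    }),
})
```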
It is a “pre-processing” step, but the size of the input image determines the size of the backbone layers, since P2 for instance is input size // 2, P3 is input size // 4, etc. The issue with changing it after training is that the weights were trained expecting a certain image size, so they probably won’t work on a different image size.
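A quick way to see why: following the halving described above, every pyramid level is a fixed fraction of the input size, so a different max_size gives the network different feature grids than the ones the weights and anchors were trained against. The helper below is illustrative only and simply reproduces that arithmetic:

```python
def pyramid_sizes(input_size, levels=("P2", "P3", "P4", "P5", "P6", "P7")):
    # Each successive level halves the spatial size, per the comment above.
    return {name: input_size // (2 ** (i + 1)) for i, name in enumerate(levels)}

print(pyramid_sizes(550))  # {'P2': 275, 'P3': 137, 'P4': 68, 'P5': 34, 'P6': 17, 'P7': 8}
print(pyramid_sizes(700))  # different grids, so the trained weights and anchors no longer line up
```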