resizing the image instead of using CropOrPad?
See original GitHub issue

Quick question: is it possible to resize the image instead of using CropOrPad, like the torchvision.transforms.Resize transform? I basically want to make use of interpolation methods to produce a new target shape.
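For reference, this is the kind of call the question is comparing against: torchvision's Resize interpolates a 2D image to an arbitrary target shape, without regard to pixel size. A minimal sketch (the file path is a placeholder):

```python
from PIL import Image
from torchvision import transforms

# Resize a 2D image to 224x224 using interpolation (bilinear by default).
img = Image.open('slice.png')  # placeholder path
resized = transforms.Resize((224, 224))(img)
print(resized.size)  # (224, 224)
```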
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Resizing an image means resampling it, if you think about it. You’re reducing the number of samples (pixels/voxels) while keeping the field of view, by reducing the sampling rate (e.g. keeping every other voxel in the image). This is the equivalent of using Resample and specifying a larger target spacing than the original one.
In computer vision, you typically don’t care about the pixel size because it doesn’t really have a meaning in most cases. In medical images, however, there is inherent spatial information attached to each voxel, and that needs to be taken into account when processing the images.
Maybe section 2 of the preprint will help you understand all these issues: TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning.
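Following the equivalence described above, here is a minimal sketch of resizing to a target shape by resampling with TorchIO (the input path and target shape are placeholders): the new spacing is chosen so that the field of view is preserved while the number of voxels matches the target.

```python
import torchio as tio

image = tio.ScalarImage('t1.nii.gz')   # placeholder path
target_shape = (128, 128, 128)         # placeholder target shape

# New spacing = old spacing scaled by old/new shape, so the field of view
# (shape * spacing) stays roughly the same while the voxel count changes.
new_spacing = tuple(
    sp * old / new
    for sp, old, new in zip(image.spacing, image.spatial_shape, target_shape)
)

resampled = tio.Resample(new_spacing)(image)
print(resampled.spatial_shape)         # approximately (128, 128, 128)
```

Because of rounding, the resampled shape can still be off by a voxel; chaining CropOrPad(target_shape) afterwards guarantees the exact output shape.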
Glad to hear that! But beware, you are actually losing a bit of information when using 2 bytes for your float variables in the convolutions instead of 4.
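As a tiny illustration of that loss (not from the original thread), casting a float32 tensor to half precision and back in PyTorch shows the detail that 16-bit floats cannot keep:

```python
import torch

x = torch.tensor([0.5, 300.1234], dtype=torch.float32)
print(x)                 # tensor([  0.5000, 300.1234])
print(x.half().float())  # tensor([  0.5000, 300.0000]) -- float16 steps by 0.25
                         # near 300, so the fractional detail is rounded away
```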