Random cropping with scaling option for super-resolution data augmentation

Is your feature request related to a problem? Please describe. In super-resolution networks, a low-resolution input is upsampled by a neural network, often by an integer factor (e.g. x2, x3, x4). When performing data augmentation, a good approach is random cropping with a fixed size, especially if the input images are larger than what can fit in memory for the network activations.

Describe the solution you’d like To address the random-cropping data augmentation issue, a possible solution would be to sample a random crop window in the low-resolution image and apply the same window, scaled up by the upsampling factor, to the target ground-truth image.

For example, in a x4 upsampling network, cropping a 56 x 56 window in the low-resolution input corresponds to cropping a 224 x 224 window in the target ground truth.

Describe alternatives you’ve considered Writing my own MONAI transform, or manually cropping the data.

Additional context This may be useful for super-resolution, upsampling, or demosaicing networks that typically take a low-resolution input and up-resolve it.

Apologies if this is already addressed by an existing transform; if so, could someone guide me on how to achieve the above with it?
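
To make the request concrete, below is a minimal sketch of the desired pairing behaviour in plain NumPy. This is not an existing MONAI transform; the function name paired_random_crop and its arguments are purely illustrative.

```python
import numpy as np

def paired_random_crop(lowres, highres, lr_patch=56, scale=4, rng=None):
    """Crop matching patches from a low-res image and its high-res ground truth.

    lowres:  channel-first array of shape (C, H, W)
    highres: channel-first array of shape (C, H * scale, W * scale)
    """
    if rng is None:
        rng = np.random.default_rng()
    _, h, w = lowres.shape
    # Sample the crop origin in low-resolution coordinates.
    top = rng.integers(0, h - lr_patch + 1)
    left = rng.integers(0, w - lr_patch + 1)
    lr_crop = lowres[:, top:top + lr_patch, left:left + lr_patch]
    # Apply the same window, scaled by the upsampling factor, to the ground truth.
    hr_patch = lr_patch * scale
    hr_crop = highres[:, top * scale:top * scale + hr_patch,
                      left * scale:left * scale + hr_patch]
    return lr_crop, hr_crop

# Example: a x4 network with 56 x 56 low-res crops yields 224 x 224 ground-truth crops.
lr = np.zeros((3, 128, 128), dtype=np.float32)
hr = np.zeros((3, 512, 512), dtype=np.float32)
lr_crop, hr_crop = paired_random_crop(lr, hr, lr_patch=56, scale=4)
assert lr_crop.shape == (3, 56, 56) and hr_crop.shape == (3, 224, 224)
```

In MONAI this would presumably be expressed as a dictionary-based randomizable transform, so that the low-resolution and ground-truth keys share a single randomized crop origin.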

Issue Analytics

  • State: open
  • Created a year ago
  • Reactions: 1
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
wyli commented, Jul 17, 2022

Not at the moment @masadcv. But as we have the MetaTensor implementation on dev, I hope it could be a universal solution for specifying roi_sizes in terms of physical units instead of numbers of voxels (for example, the hires_scale should be computed from MetaTensor’s pixdim property). Would you still be interested / have the bandwidth to contribute?
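
As a rough sketch of the idea in this comment, with plain NumPy arrays standing in for the MetaTensor metadata (the spacing and ROI values below are made up; only the name hires_scale comes from the comment above):

```python
import numpy as np

# Pixel spacings of the two images, e.g. taken from their metadata (pixdim).
lowres_pixdim = np.array([4.0, 4.0])    # mm per voxel in the low-res input
highres_pixdim = np.array([1.0, 1.0])   # mm per voxel in the ground truth

# ROI specified in physical units (mm) rather than in voxels.
roi_size_mm = np.array([224.0, 224.0])

# Upsampling factor derived from the spacings rather than hard-coded.
hires_scale = lowres_pixdim / highres_pixdim                             # -> [4., 4.]

# Matching crop sizes in voxels for each image.
lowres_roi_voxels = np.round(roi_size_mm / lowres_pixdim).astype(int)    # -> [56, 56]
highres_roi_voxels = np.round(roi_size_mm / highres_pixdim).astype(int)  # -> [224, 224]
```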

1 reaction
masadcv commented, Jul 17, 2022

Hi @wyli, just wanted to check if anyone is working on this. If not, I would like to contribute this transform…

Top Results From Across the Web

Why and How to Implement Random Crop Data Augmentation
Random crop is a data augmentation technique wherein we create a random subset of an original image. This helps our model generalize better…

Rethinking the Random Cropping Data Augmentation Method ...
The traditional random cropping data augmentation method used for target detection first randomly selects a target in the original image before cropping, and…

A survey on Image Data Augmentation for Deep Learning
This is done by randomly cropping 224 × 224 patches from the original images, flipping them horizontally, and changing the intensity of the…

You Only Cut Once: Boosting Data Augmentation with ... - arXiv
We test multiple crop scales (α, 1). As a common practice, neural networks mostly gain such ability from the random resized crop. Applying YOCO…

Data augmentation: A comprehensive survey of modern ...
Generative modeling methods can be used independently as an alternative data synthesis method [212]. They can also be used as a refinement step…
