
🚀 Feature

Add support for transforming images on GPU devices.

Motivation

Some transforms would benefit greatly from hardware acceleration. I suspect the two main sources of improvement would be resampling (Resample, RandomElasticDeformation, RandomAffine, RandomAnisotropy, RandomMotion) and Fourier transforms (RandomMotion, RandomSpike, RandomGhosting).

Most users will want to reserve their GPU for training rather than preprocessing / augmentation, but when there are enough resources available, GPU support for transforms is a nice option to have.

Pitch

Supporting FFT-based transforms is straightforward using the new torch.fft module introduced in PyTorch 1.7, and it seems to help. I have added some support in the fourier branch.

On CPU:

In [1]: import torchio as tio
   ...: t1 = tio.datasets.FPG().t1
   ...: t1.load()
   ...: transform = tio.RandomSpike()
   ...: %timeit transform(t1)
1.35 s ± 3.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

On GPU:

In [1]: import torchio as tio
   ...: t1 = tio.datasets.FPG().t1
   ...: t1.load()
   ...: transform = tio.RandomSpike()
   ...: %timeit transform(t1)
155 ms ± 820 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
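The kind of FFT-based manipulation behind a transform like RandomSpike can be sketched device-agnostically with torch.fft. This is a minimal illustration, not TorchIO's implementation: `add_spike`, the spike position, and the intensity scaling are all hypothetical.

```python
import torch

def add_spike(image: torch.Tensor, position, intensity: float) -> torch.Tensor:
    """Add a single k-space spike artifact to an image tensor.

    Runs on whatever device `image` lives on, since torch.fft
    dispatches to GPU kernels for CUDA tensors.
    """
    spectrum = torch.fft.fftn(image)
    # Boost one k-space coefficient relative to the strongest one.
    spectrum[position] += intensity * spectrum.abs().max()
    return torch.fft.ifftn(spectrum).real

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image = torch.rand(64, 64, 64, device=device)
spiked = add_spike(image, (40, 32, 32), intensity=0.1)
```

Because the same code path runs on CPU and GPU, only the device of the input tensor has to change to get the speedup shown above.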

Supporting resampling is more complex because of the way PyTorch handles coordinates: grid_sample expects normalized coordinates rather than world (scanner) coordinates. Ideally, we would be able to convert a “world” affine transform into a PyTorch one.
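The PyTorch side of that conversion can be sketched with affine_grid and grid_sample. Note the caveat: `theta` here is already in PyTorch's normalized [-1, 1] convention, so the hard part — mapping a world/voxel affine into `theta` — is exactly what still needs to be worked out. The helper name is hypothetical.

```python
import torch
import torch.nn.functional as F

def resample_with_matrix(image: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Resample a 3D volume with a 3x4 affine expressed in PyTorch's
    normalized [-1, 1] coordinate convention (NOT a world affine)."""
    batch = image[None, None]  # grid_sample expects (N, C, D, H, W)
    grid = F.affine_grid(theta[None], batch.shape, align_corners=False)
    return F.grid_sample(batch, grid, align_corners=False)[0, 0]

image = torch.rand(32, 32, 32)
theta = torch.eye(3, 4)  # identity transform in PyTorch conventions
out = resample_with_matrix(image, theta)
```

With matching align_corners settings, the identity matrix reproduces the input, which makes a convenient sanity check while working out the world-to-PyTorch conversion.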

Some discussions about converting to/from PyTorch conventions for affine transformations:

The steps for this transition to happen would be:

  1. Make sure everything works normally with tensors on GPU
  2. Make sure the run time for FFT transforms is improved
  3. Figure out how to resample medical images properly using PyTorch
  4. Make sure the run time for resampling transforms is improved
  5. Check if run time for other transforms gets better as well
  6. Test that everything works as before, on multiple PyTorch versions
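Steps 2, 4 and 5 above could be checked with a small benchmark harness along these lines (a sketch; the toy `blur` stands in for any TorchIO transform, and the synchronization is needed because CUDA kernels run asynchronously):

```python
import time
import torch

def benchmark(fn, x, repeats=5):
    """Return mean seconds per call, synchronizing so GPU work is counted."""
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        fn(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

# Toy FFT-based "transform" used only to exercise the harness.
blur = lambda t: torch.fft.ifftn(torch.fft.fftn(t) * 0.5).real

cpu_time = benchmark(blur, torch.rand(64, 64, 64))
if torch.cuda.is_available():
    gpu_time = benchmark(blur, torch.rand(64, 64, 64, device='cuda'))
```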

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 12 (10 by maintainers)

Top GitHub Comments

fepegar commented on Jul 23, 2021 (2 reactions)

efirdc commented on Dec 29, 2020 (2 reactions)

I think a good reason for doing this would be to support differentiable augmentation. This was done in StyleGAN2-ADA to train GANs using limited data. The augmentations are performed on the generated images before they are input to the discriminator, so they have to be differentiable.
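The differentiable-augmentation use case from the comment above can be sketched with grid_sample, which is differentiable with respect to both the input and the sampling grid. This is an illustrative 2D example in the StyleGAN2-ADA spirit, not code from either project:

```python
import torch
import torch.nn.functional as F

# A batch of "generated" images that gradients must flow back into.
images = torch.rand(4, 1, 32, 32, requires_grad=True)

# Small random rotations, built as (N, 2, 3) affine matrices.
angle = torch.rand(4) * 0.2  # radians
cos, sin = torch.cos(angle), torch.sin(angle)
zeros = torch.zeros_like(angle)
theta = torch.stack([
    torch.stack([cos, -sin, zeros], dim=1),
    torch.stack([sin, cos, zeros], dim=1),
], dim=1)

grid = F.affine_grid(theta, images.shape, align_corners=False)
augmented = F.grid_sample(images, grid, align_corners=False)

# Backpropagation works through the augmentation, as a GAN
# discriminator-side augmentation requires.
augmented.sum().backward()
assert images.grad is not None
```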

