
RandomPerspective and RandomAffine fail when used with pytorch-lightning precision=16

See original GitHub issue

Describe the bug

Hi, I am using the following transform:

```python
train_transforms = torch.nn.Sequential(
    K.augmentation.RandomHorizontalFlip(),
    K.augmentation.RandomPerspective(distortion_scale=0.02),
    K.augmentation.RandomAffine(
        degrees=(-5.0, 5.0),
        translate=(0.02, 0.02),
        scale=(0.9, 1.1),
        shear=(-0.02, 0.02),
        resample="bilinear",
    ),
)
```

pytorch-lightning version: 1.5.4
kornia version: 0.6.1
pytorch version: 1.9.1+cu111

The error is: RuntimeError: expected scalar type float but found c10::Half
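Most likely, the mismatch arises because Lightning's precision=16 runs the training step under torch.cuda.amp.autocast, so some ops inside kornia produce float16 tensors that then meet kornia's float32 internals. A hypothetical standalone illustration (not from the issue; it assumes a CUDA device and the versions above):

```python
import torch
import kornia as K

aug = K.augmentation.RandomAffine(degrees=(-5.0, 5.0))
x = torch.rand(1, 3, 32, 32, device="cuda")

# Lightning's precision=16 wraps the training step in autocast;
# reproducing that context directly may trigger the same dtype mismatch.
with torch.cuda.amp.autocast():
    y = aug(x)  # may raise: RuntimeError: expected scalar type float but found c10::Half
```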

Reproduction steps

1. Create a simple PyTorch Lightning model.
2. Set precision=16 in the Trainer parameters.
3. Apply the kornia augmentations above in the training step; training fails with the RuntimeError (see the sketch below).
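A minimal sketch of those steps, assuming a CUDA device; the model and data below are placeholders, not from the original report:

```python
import torch
import pytorch_lightning as pl
import kornia as K
from torch.utils.data import DataLoader, TensorDataset

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # the augmentations from the report, trimmed to the essentials
        self.aug = torch.nn.Sequential(
            K.augmentation.RandomPerspective(distortion_scale=0.02),
            K.augmentation.RandomAffine(degrees=(-5.0, 5.0)),
        )
        self.net = torch.nn.Conv2d(3, 1, 3)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        x = self.aug(x)  # with precision=16 this runs under autocast and fails
        return self.net(x).mean()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

data = DataLoader(TensorDataset(torch.rand(8, 3, 32, 32)), batch_size=4)
trainer = pl.Trainer(gpus=1, precision=16, max_epochs=1)
trainer.fit(LitModel(), data)
```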

Expected behavior

Support 16-bit precision.
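Until that is supported, one workaround (my suggestion, not from the thread) is to step out of autocast and run the augmentations in float32, handing the result back to AMP afterwards:

```python
import torch

def augment_fp32(aug: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: disable autocast locally and force float32
    # for the augmentation step only.
    with torch.cuda.amp.autocast(enabled=False):
        return aug(x.float())
```

In a training step this would replace the direct call, e.g. `x = augment_fp32(self.aug, x)`; subsequent ops still run under autocast as usual.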

Environment

```bash
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
  • PyTorch Version (e.g., 1.0):
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:


Additional context

No response

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
Borda commented, Dec 3, 2021

I think half precision on CPU is a future PyTorch feature coming in 1.10.

0 reactions
edgarriba commented, Apr 18, 2022

Closing this. Unless you have kornia-specific non-working code, please reopen; otherwise, touch base with the Lightning team in their channels.


Top Results From Across the Web

N-Bit Precision (Intermediate) - PyTorch Lightning
It combines FP32 and lower-bit floating points (such as FP16) to reduce memory footprint and increase performance during model training and evaluation.

RandomAffine - Torchvision main documentation - PyTorch
Random affine transformation of the image keeping center invariant. ... a shear parallel to the x axis in the range (-shear, +shear) will...

Error while creating train transform using torchvision
You can only use scriptable transformations in torch.nn.Sequential ...

PyTorch Lightning
The ultimate PyTorch research framework. Scale your models, without the boilerplate.

How to perform random affine transformation of an image?
RandomAffine() transformation accepts both PIL and tensor images. ... We could use the following steps to perform random affine transform of ...
