NaNs in Gradient of RGB to LAB Transformation
To Reproduce
Steps to reproduce the behavior:
- Apply the `rgb_to_lab` transform to an image.
- Compute the gradient of the sum over the new image's pixels with respect to the original input image.
```python
x_new = kornia.color.rgb_to_lab(x)
grad = torch.autograd.grad(x_new.sum(), x)[0].detach()
```
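A self-contained version of the snippet, for reference; the input shape and value range here are illustrative assumptions, not taken from the original report:

```python
# Minimal sketch of the reproduction, assuming a float RGB image in [0, 1]
# of shape (B, 3, H, W). These particulars are assumptions.
import torch
import kornia

x = torch.rand(1, 3, 32, 32, requires_grad=True)        # random RGB image in [0, 1)
x_new = kornia.color.rgb_to_lab(x)                       # convert to LAB
grad = torch.autograd.grad(x_new.sum(), x)[0].detach()   # gradient w.r.t. the input
```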
Expected behavior
There were no NaNs in the output image `x_new`, so if this transform is differentiable, there should be no NaNs in the gradient.
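The observation can be checked directly; this assumes the `x_new` and `grad` tensors from the reproduction snippet above:

```python
# Per the report: the forward output is NaN-free, but the gradient is not.
print(torch.isnan(x_new).any())  # tensor(False) reported
print(torch.isnan(grad).any())   # tensor(True) reported
```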
Environment
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect

Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti; GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 450.57
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch-colors==0.1.0
[pip3] torch==1.7.1
[pip3] torchvision==0.8.1
[conda] Could not collect
Top GitHub Comments
@cceyda It was the range issue. Sorted, thank you.
@cceyda Any final decision on this?
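As the comments indicate, the NaNs were traced to the input value range. A minimal sketch of that kind of fix, assuming the image was passed in [0, 255] and that `kornia.color.rgb_to_lab` expects RGB values in [0, 1]:

```python
# Sketch of the range fix implied by the comments: rescale an image stored
# in [0, 255] into [0, 1] before calling rgb_to_lab. The [0, 255] starting
# point is an assumption for illustration.
import torch
import kornia

img = torch.rand(1, 3, 32, 32) * 255.0     # e.g. an image loaded in [0, 255]
x = (img / 255.0).clamp_(0.0, 1.0)         # rescale into the expected [0, 1] range
x.requires_grad_(True)

lab = kornia.color.rgb_to_lab(x)
grad = torch.autograd.grad(lab.sum(), x)[0]
print(torch.isnan(grad).any())             # should report no NaNs once the input is in range
```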