Occupancy masks for geometric transforms
Consider an equivariance loss scenario: we have a source image src, and we get view1 = k1(src) and view2 = k2(src). Now we'd like to put a loss on corresponding locations in view1 and view2.
For most geometric augmentations, not all pixel locations in view1 have correspondences in view2, so the loss must be computed only for valid locations, i.e. we should not compute the loss for the black border pixels of the rotated images in https://github.com/kornia/kornia-examples/blob/master/data_augmenation_segmentation.ipynb
That notebook advises using k1(img, k1._params) to repeat the sampled transformation on another input. Is there an easy way of performing the inverse transform given _params?
I think the correct expression for the mask of valid locations in view1 is mask_valid_1 = k1(k2(k2(ones_like(src), k2._params), k2._params, INVERSE=True), k1._params), but one needs an easy way of doing the inverse transform.
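The composition above can be sketched in plain PyTorch (standing in for the kornia augmentations): warp a ones-mask forward through k2, back through the inverse of k2, then forward through k1; wherever the result is still 1, both views see real pixels. The helpers rot_theta and warp and the use of affine_grid/grid_sample are my own illustrative assumptions, not kornia's API.

```python
import math

import torch
import torch.nn.functional as F


def rot_theta(angle_deg: float) -> torch.Tensor:
    """Build a (1, 2, 3) affine matrix for a rotation about the image center."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return torch.tensor([[c, -s, 0.0], [s, c, 0.0]]).unsqueeze(0)


def warp(img: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Warp an image with an affine matrix; out-of-bounds samples become 0."""
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)


src = torch.rand(1, 1, 32, 32)
ones = torch.ones_like(src)

# stand-ins for k1 and k2: rotations by 30 and -45 degrees
t1, t2 = rot_theta(30.0), rot_theta(-45.0)
t2_inv = rot_theta(45.0)  # inverse of a rotation is the opposite rotation

# k1(k2_inverse(k2(ones))): valid-locations mask expressed in view1's frame
mask_valid_1 = warp(warp(warp(ones, t2), t2_inv), t1)
mask_valid_1 = (mask_valid_1 > 0.99).float()  # binarize away bilinear blur
```

The mask is 0 both where view1's own rotation left black borders and where view2 has no source pixels, so an equivariance loss masked by it only compares genuinely corresponding locations.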
(related https://github.com/kornia/kornia/issues/476#issuecomment-833147695)
As a workaround, I’m now manually using return_transform and doing warp_affine with the inverse transforms.
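The workaround boils down to inverting the 2x3 affine matrix that return_transform yields and feeding it back to warp_affine. A minimal sketch of the inversion step in plain PyTorch (the helper name invert_affine is my own; kornia also ships an invert_affine_transform utility that should cover this):

```python
import torch


def invert_affine(M: torch.Tensor) -> torch.Tensor:
    """Invert a batch of (B, 2, 3) affine matrices.

    Lift each matrix to homogeneous 3x3 form, invert, and drop the last row.
    """
    B = M.shape[0]
    bottom = torch.tensor([[0.0, 0.0, 1.0]]).expand(B, 1, 3)
    H = torch.cat([M, bottom], dim=1)      # (B, 3, 3) homogeneous form
    return torch.linalg.inv(H)[:, :2, :]   # back to (B, 2, 3)


# an example transform: ~30-degree rotation plus a translation
M = torch.tensor([[[0.8660, -0.5000, 3.0],
                   [0.5000,  0.8660, -2.0]]])
M_inv = invert_affine(M)
```

Composing M with M_inv in homogeneous form gives the identity, so warping view1 with M_inv maps its valid pixels back onto the src frame.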
Issue Analytics
- Created 2 years ago
- Comments: 12 (3 by maintainers)
Top GitHub Comments
absolutely - adding a return_inverse flag could definitely be quite neat. /cc @ducha-aiki @shijianjian @lferraz thoughts?

Merged in #1013.