Applying Transform Chain to Geometric Objects
**Is your feature request related to a problem? Please describe.** Sometimes we need to apply transforms to geometric objects (point clouds, lines, curves, and meshes), particularly in radiotherapy planning and object detection.
**Describe the solution you'd like** A good solution would be a core library branch that can handle all the geometric transforms while accounting for image metadata such as spacing and size. A geometric transform chain should be callable in the same way as the regular image transforms, something like:

```python
from monai.transforms.geometric import LoadPoints, RotatePoints
```
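To make the idea concrete, here is a minimal sketch of what such a chain could look like. The class names `RotatePoints` and `ComposePoints` are hypothetical (this module does not exist in MONAI); only the Compose-style calling convention is taken from the existing image transforms.

```python
import numpy as np

class RotatePoints:
    """Hypothetical point transform: rotate an [N, 2] array of 2-D points
    about the origin by `angle` radians."""
    def __init__(self, angle):
        c, s = np.cos(angle), np.sin(angle)
        self.matrix = np.array([[c, -s], [s, c]])

    def __call__(self, points):
        return points @ self.matrix.T

class ComposePoints:
    """Apply a sequence of point transforms in order, mirroring the calling
    convention of monai.transforms.Compose for images."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, points):
        for t in self.transforms:
            points = t(points)
        return points

chain = ComposePoints([RotatePoints(np.pi / 2)])
pts = np.array([[1.0, 0.0]])
rotated = chain(pts)  # (1, 0) rotated 90 degrees -> approximately (0, 1)
```

The point here is only the shape of the API: a chain of point transforms should compose and be invoked exactly like a chain of image transforms.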
Ideally, when we have an image-annotation pair, we apply the same set of transforms to both objects. For geometric objects, however, there should be some way of passing messages to the annotation transform chain. For example, when rotating images and annotations, you can apply the rotation to the annotation, but the rotation changes the image size along different axes, and the geometric transform sometimes needs to know about these changes to apply the transform accurately. So there should be a message-passing mechanism between the two transform chains. There is more relevant discussion in #4024.
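A small sketch of the message that needs to pass between the chains, using rotation as the example. Both helper functions are hypothetical, not MONAI API: the image chain computes the new bounding size after rotation, and the point chain needs that size (the "message") to re-centre the rotated points correctly.

```python
import numpy as np

def rotated_size(shape, angle):
    """New (H, W) bounding size of an (H, W) image rotated by `angle` radians."""
    h, w = shape
    c, s = abs(np.cos(angle)), abs(np.sin(angle))
    return (int(round(h * c + w * s)), int(round(h * s + w * c)))

def rotate_points(points, angle, in_shape, out_shape):
    """Rotate [N, 2] (x, y) points about the input image centre, then
    re-centre them in the output image. `out_shape` is the message the
    image transform chain must pass to the point transform chain."""
    c, s = np.cos(angle), np.sin(angle)
    m = np.array([[c, -s], [s, c]])
    in_centre = np.array([in_shape[1], in_shape[0]]) / 2.0
    out_centre = np.array([out_shape[1], out_shape[0]]) / 2.0
    return (points - in_centre) @ m.T + out_centre

shape = (100, 200)                          # (H, W)
new_shape = rotated_size(shape, np.pi / 2)  # axes swap: (200, 100)
pts = np.array([[200.0, 0.0]])              # a corner point, as (x, y)
out = rotate_points(pts, np.pi / 2, shape, new_shape)
```

Without `new_shape`, the point chain would re-centre against the old image size and the annotation would drift off the rotated image, which is exactly the failure mode described above.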
**Describe alternatives you've considered** I have written my own transforms to do all of this, but it is a bit hacky and not standardized. My geometric labels come from open-source tools like labelme, and my code handles labelme annotations (in JSON), but going forward I think we will need support for STL and DICOM RT annotations.
**Additional context** If anyone is working on this, please let me know, as I have been working on it and would like to contribute to this branch.
Issue Analytics
- State:
- Created a year ago
- Reactions: 1
- Comments: 7 (4 by maintainers)
Top GitHub Comments
We really need to make progress on this PR before adding in point transforms (which is definitely on our agenda).
The error I get relates to the spatial shape, because the point array is treated as a 1-D image:

```
ValueError: Unsupported spatial_dims: 1, available options are [2, 3].
```
The `Affine` class currently only transforms image data. What we would want to do is extend it to work with points as well, such that if the input has shape `[2, N]` or `[3, N]` for some array of points, it can apply the transform to the points, treating them as coordinates in space rather than some weird 1-D image. How we detect which sort of data we have is something I was discussing with Richard in relation to the MetaTensor idea of combining metadata with a tensor, which would state the data type.