MONAI transforms can't only support `Tensor` backend
Is your feature request related to a problem? Please describe.
Currently, some transforms only support the `Tensor` backend, which breaks previously numpy-based transform chains. They need to be updated so that every transform supports both the numpy and `Tensor` backends, or only the numpy backend.
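As a rough illustration only (the transform name and clipping operation below are hypothetical, not from the issue), a transform that supports both backends can dispatch on the input type and return the same type it received, so it fits into either a numpy-based or a Tensor-based chain:

```python
import numpy as np
import torch


class ClipToUnit:
    """Hypothetical sketch of a dual-backend transform: the output
    type matches the input type, so no chain is broken."""

    def __call__(self, img):
        if isinstance(img, torch.Tensor):
            return torch.clamp(img, 0.0, 1.0)
        if isinstance(img, np.ndarray):
            return np.clip(img, 0.0, 1.0)
        raise TypeError(f"unsupported image type: {type(img).__name__}")
```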
Issue Analytics
- Created 2 years ago
- Reactions: 1
- Comments: 11 (11 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Ok, sounds reasonable.
My vision is that, if necessary, each transform will convert to the correct image type. That is to say, if the computation requires `torch.Tensor` and the input is numpy, then convert. To reduce the number of conversions, I don't convert back to the original type at the end of the transform. This means that all transforms will need to be able to accept torch or numpy input. By the time I've finished updating all transforms, this will be the case.

However, it seems that since we're currently part-way through the changes, some transforms will be in chains that would have originally received numpy input, but now they're getting torch tensors instead.
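A minimal sketch of that breakage, with a hypothetical numpy-only transform (not taken from the issue): once an upstream transform starts emitting tensors, any numpy-specific API call downstream fails.

```python
import numpy as np
import torch


class NumpyOnlyCast:
    """Hypothetical numpy-only transform from an older chain."""

    def __call__(self, img):
        # numpy-specific API: torch.Tensor has no .astype method
        return img.astype(np.float32)


img = torch.rand(1, 8, 8)  # an upstream transform now outputs a tensor
NumpyOnlyCast()(img)       # AttributeError: 'Tensor' object has no attribute 'astype'
```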
I suppose, as a temporary fix, we could modify all problematic transforms to have something like this at the start of the `__call__` function:
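The snippet itself did not survive in this copy of the page; a minimal sketch under the assumptions above (the class name and the final flip are placeholders for a transform's real torch-based computation) might look like:

```python
import numpy as np
import torch


class SomeTensorTransform:
    """Hypothetical torch-based transform with a temporary numpy shim."""

    def __call__(self, img):
        # Temporary fix: convert numpy input up front, since the
        # computation below requires a torch.Tensor. Per the plan above,
        # the output is deliberately not converted back to numpy.
        if isinstance(img, np.ndarray):
            img = torch.as_tensor(img)
        return torch.flip(img, dims=[-1])  # placeholder torch computation
```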