Improve transform latency and remove excessive np->torch->np calls
Is your feature request related to a problem? Please describe.
It seems that many of the transforms are implemented with torch, but only one or two support outputting the image as a tensor. Instead, I see this sort of pattern:
resized = torch.nn.functional.interpolate(  # type: ignore
    input=torch.as_tensor(np.ascontiguousarray(img), dtype=torch.float).unsqueeze(0),
    size=spatial_size,
    mode=self.mode.value if mode is None else InterpolateMode(mode).value,
    align_corners=self.align_corners if align_corners is None else align_corners,
)
resized = resized.squeeze(0).detach().cpu().numpy()
Here the image is converted to numpy, converted to torch, operated on in torch, detached and forced to the CPU, and converted back to numpy. This translation back and forth happens in many of the transforms, so it incurs a large runtime cost, especially when composing several of them back to back. Making a clear distinction between torch-ready and non-torch transformations would make things cleaner from a user perspective, and would also allow this redundancy to be removed (and a HUGE speedup).
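For illustration, the roundtrip pattern can be contrasted with a torch-native version that never leaves tensors. The helper names below are hypothetical sketches written for this issue, not MONAI's API:

```python
import numpy as np
import torch
import torch.nn.functional as F

def resize_with_roundtrip(img: np.ndarray, spatial_size) -> np.ndarray:
    # Mirrors the pattern above: numpy -> torch -> interpolate -> cpu -> numpy.
    t = torch.as_tensor(np.ascontiguousarray(img), dtype=torch.float).unsqueeze(0)
    out = F.interpolate(t, size=spatial_size, mode="bilinear", align_corners=False)
    return out.squeeze(0).detach().cpu().numpy()

def resize_torch_native(img: torch.Tensor, spatial_size) -> torch.Tensor:
    # Stays in torch end to end: no conversions, no forced device transfer.
    out = F.interpolate(img.float().unsqueeze(0), size=spatial_size,
                        mode="bilinear", align_corners=False)
    return out.squeeze(0)
```

Composed back to back, the second form pays the conversion cost zero times instead of once per transform, and keeps the door open for running the whole chain on a GPU.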
For a further example, the PyTorch tutorials themselves use Pillow's resizing function instead of the torch resize function, because the Pillow function appears to be faster on CPU. The cropping done in MONAI, however, is the worst of both worlds: it converts from a Pillow image to torch, performs the op in torch, and converts back to an image.
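A rough way to check the CPU-speed claim is to time both resize paths directly. This is only an illustrative micro-benchmark; the relative numbers depend on image size, dtype, thread settings, and library builds:

```python
import timeit
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

img_np = (np.random.rand(256, 256) * 255).astype(np.uint8)
pil_img = Image.fromarray(img_np)

def pil_resize():
    # Pillow path, as used by torchvision's CPU transforms.
    return pil_img.resize((512, 512), resample=Image.BILINEAR)

def torch_resize():
    # torch path: uint8 numpy -> float tensor -> interpolate.
    t = torch.as_tensor(img_np, dtype=torch.float32)[None, None]
    return F.interpolate(t, size=(512, 512), mode="bilinear", align_corners=False)

if __name__ == "__main__":
    # Timings are machine-dependent; this only shows how to compare the two.
    print("PIL:  ", timeit.timeit(pil_resize, number=50))
    print("torch:", timeit.timeit(torch_resize, number=50))
```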
Describe the solution you'd like
Add tensor support for inputs and outputs on all torch-based transforms, and adopt Pillow as the first choice for cropping and resizing due to its speed, as is done in torchvision. Further, adding GPU support via torch devices would also increase speed by a large amount. Going from torch input to torch output removes the need to constantly transfer between frameworks and/or devices.
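One possible shape for such a transform is to dispatch on the input type and return the same type, so a torch pipeline stays in torch (and on its device) end to end. `Resize` here is a hypothetical sketch written for this issue, not MONAI's actual class:

```python
from typing import Tuple, Union

import numpy as np
import torch
import torch.nn.functional as F

ArrayOrTensor = Union[np.ndarray, torch.Tensor]

class Resize:
    """Sketch of a dual-interface transform: numpy in -> numpy out,
    tensor in -> tensor out (staying on the tensor's device)."""

    def __init__(self, spatial_size: Tuple[int, int]):
        self.spatial_size = spatial_size

    def __call__(self, img: ArrayOrTensor) -> ArrayOrTensor:
        is_numpy = isinstance(img, np.ndarray)
        # Convert only when the caller handed us numpy; tensors pass through.
        t = torch.as_tensor(np.ascontiguousarray(img)) if is_numpy else img
        out = F.interpolate(t.float().unsqueeze(0), size=self.spatial_size,
                            mode="bilinear", align_corners=False).squeeze(0)
        # Return the same container type we received.
        return out.numpy() if is_numpy else out
```

With this convention, composing many transforms over tensors performs zero numpy roundtrips, and GPU tensors never get yanked back to the CPU mid-pipeline.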
Describe alternatives you've considered
Not using MONAI, or a complete rewrite of the transforms.
Issue Analytics
- Created: 3 years ago
- Reactions: 2
- Comments: 22 (12 by maintainers)
Top GitHub Comments
Mostly addressed by https://github.com/Project-MONAI/MONAI/issues/2231
Sure, I think that would be a good idea and would benefit others as well. I did try using MONAI for one of my work projects (which is why I couldn't share the code), and I would love to use it if I didn't have the bottlenecks I was experiencing. I can try to make some more benchmarks in a bit.
And yes, I recognize that a lot of hard work is going into the project, which is great and part of why I wanted to try it. My intent was not to knock the authors; I pushed the issue because I want myself and others to get the full benefit of using the library.