
Improve transforms latency and remove excessive np->torch->np calls

See original GitHub issue

Is your feature request related to a problem? Please describe.
It seems that many of the transforms are implemented with torch, but only one or two support outputting the image as a tensor. Instead, I see this sort of pattern:

        resized = torch.nn.functional.interpolate(  # type: ignore
            # numpy -> torch: copy to a contiguous buffer, wrap as a tensor, add a batch dim
            input=torch.as_tensor(np.ascontiguousarray(img), dtype=torch.float).unsqueeze(0),
            size=spatial_size,
            mode=self.mode.value if mode is None else InterpolateMode(mode).value,
            align_corners=self.align_corners if align_corners is None else align_corners,
        )
        # torch -> numpy: drop the batch dim, detach from autograd, force to CPU, convert back
        resized = resized.squeeze(0).detach().cpu().numpy()

Here the image is converted to a contiguous numpy array, converted to torch, operated on in torch, detached and forced to the CPU, and converted back to numpy. This round trip happens in many of the transforms, so it incurs a large runtime cost, especially when composing several of them back to back. Making a clear distinction between torch-ready and non-torch transforms would make things cleaner from a user perspective, and would also allow for the removal of this redundancy (and a HUGE speedup).
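
To make the overhead concrete, here is a minimal timing sketch (not from the issue; the shapes, iteration count, and function names are illustrative assumptions) comparing the round-trip pattern above against staying in torch end to end:

    import time

    import numpy as np
    import torch

    img_np = np.random.rand(1, 256, 256).astype(np.float32)  # (C, H, W)
    img_t = torch.as_tensor(img_np)

    def resize_roundtrip(arr: np.ndarray) -> np.ndarray:
        # numpy -> torch -> numpy, mirroring the pattern above
        t = torch.as_tensor(np.ascontiguousarray(arr), dtype=torch.float).unsqueeze(0)
        out = torch.nn.functional.interpolate(
            t, size=(256, 256), mode="bilinear", align_corners=False
        )
        return out.squeeze(0).detach().cpu().numpy()

    def resize_torch(t: torch.Tensor) -> torch.Tensor:
        # the same op, staying in torch the whole time
        return torch.nn.functional.interpolate(
            t.unsqueeze(0), size=(256, 256), mode="bilinear", align_corners=False
        ).squeeze(0)

    # compose the op 100 times, as a transform pipeline would
    start = time.perf_counter()
    arr = img_np
    for _ in range(100):
        arr = resize_roundtrip(arr)
    print(f"np -> torch -> np: {time.perf_counter() - start:.3f}s")

    start = time.perf_counter()
    t = img_t
    for _ in range(100):
        t = resize_torch(t)
    print(f"torch only:        {time.perf_counter() - start:.3f}s")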

As a further example, the PyTorch tutorials themselves use Pillow’s resizing function instead of the torch resize function, because the Pillow function appears to be faster on CPU. The cropping done in monai, however, is the worst of both worlds: it converts from a Pillow image to a torch tensor, performs the op in torch, and converts back to an image.
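
For reference, a minimal sketch of the two paths (the sizes, resampling mode, and variable names are illustrative assumptions, not monai’s code):

    import numpy as np
    import torch
    from PIL import Image

    pil_img = Image.fromarray((np.random.rand(256, 256) * 255).astype(np.uint8))

    # direct path: stay in Pillow
    resized_pil = pil_img.resize((128, 128), resample=Image.BILINEAR)

    # round-trip path: PIL -> torch -> PIL, the "worst of both worlds" pattern
    t = torch.as_tensor(np.asarray(pil_img), dtype=torch.float)[None, None]  # (1, 1, H, W)
    t = torch.nn.functional.interpolate(t, size=(128, 128), mode="bilinear", align_corners=False)
    back_to_pil = Image.fromarray(t.squeeze().clamp(0, 255).to(torch.uint8).numpy())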

Describe the solution you’d like
Add tensor support for inputs and outputs on all torch-based transforms, and adopt Pillow as the first choice for cropping and resizing on CPU for speed, as is done in torchvision. Further, adding GPU support via torch devices would also increase speed by a large amount. Going from torch input to torch output removes the need to constantly move between frameworks and/or devices.
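
A hypothetical sketch of what a tensor-in/tensor-out transform could look like (the class name and signature are assumptions for illustration, not monai’s API); the tensor never leaves torch or its device:

    import torch

    class TorchResize:
        def __init__(self, spatial_size, mode="bilinear", align_corners=False):
            self.spatial_size = spatial_size
            self.mode = mode
            self.align_corners = align_corners

        def __call__(self, img: torch.Tensor) -> torch.Tensor:
            # img: (C, H, W) tensor on any device; the output stays on that device
            return torch.nn.functional.interpolate(
                img.unsqueeze(0),
                size=self.spatial_size,
                mode=self.mode,
                align_corners=self.align_corners,
            ).squeeze(0)

    # usage: compose on GPU (if available) with no numpy round trips
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.rand(1, 256, 256, device=device)
    for transform in [TorchResize((128, 128)), TorchResize((96, 96))]:
        x = transform(x)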

Describe alternatives you’ve considered
Not using monai, or a complete rewrite of the transforms.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 22 (12 by maintainers)

Top GitHub Comments

1 reaction
wyli commented, Nov 11, 2021

1 reaction
ndalton12 commented, Apr 8, 2021

Sure, I think that would be a good idea and would benefit others as well. I did try using monai for one of my work projects (which is why I couldn’t share the code), and I would love to use it if I didn’t have the bottlenecks I was experiencing. I can try to put together some more benchmarks in a bit.

And yes, I recognize that a lot of hard work is going into the project, which is great and part of why I wanted to try it. My intent was not to knock the authors; rather, I pushed the issue because I want myself and others to get the full benefit of using the library.

Read more comments on GitHub >

