Add `AsContiguous` utility transform
Is your feature request related to a problem? Please describe.
Our internal QA team reported a performance issue: running the exact same spleen segmentation program on MONAI 0.7+ is 50% slower than on MONAI 0.5.3, in the same PyTorch 21.10 Docker container. @yiheng-wang-nv and I spent a lot of time investigating, and I finally found that the new code can skip the padding when it is not necessary: https://github.com/Project-MONAI/MONAI/blob/dev/monai/transforms/croppad/array.py#L128 The 0.5.3 code, by contrast, always pads the array, and the pad API can convert a non-contiguous NumPy array into a contiguous one, which makes the following transforms much faster when running with 0.5.3.
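The contiguity side effect is easy to reproduce with plain NumPy (a minimal sketch; the shapes are illustrative, not taken from the spleen pipeline):

```python
import numpy as np

# A strided slice of an array is a view and is typically not contiguous.
vol = np.zeros((2, 96, 96, 96), dtype=np.float32)
view = vol[:, ::2, ::2, ::2]
print(view.flags["C_CONTIGUOUS"])    # False

# np.pad always allocates a fresh output array, which is contiguous.
# This is the side effect that made the always-pad 0.5.3 code path
# speed up the transforms that follow it.
padded = np.pad(view, ((0, 0), (1, 1), (1, 1), (1, 1)))
print(padded.flags["C_CONTIGUOUS"])  # True
```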
We have run into contiguity-related issues and discussions several times, so it would be nice to add an `AsContiguous` transform for both NumPy arrays and PyTorch Tensors. Users could then explicitly add this transform at any point in the transform chain.
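A minimal sketch of what such a transform could look like (the class name follows the proposal, but everything else here is illustrative, not the final MONAI API):

```python
import numpy as np

class AsContiguous:
    """Illustrative sketch: return the input with a contiguous memory
    layout, leaving already-contiguous inputs effectively untouched."""

    def __call__(self, data):
        if isinstance(data, np.ndarray):
            # np.ascontiguousarray avoids a copy when the array is
            # already C-contiguous.
            return np.ascontiguousarray(data)
        # Duck-typed branch for torch.Tensor (Tensor.contiguous() is a
        # no-op on an already-contiguous tensor); checking the attribute
        # lets this sketch run even where torch is not installed.
        if hasattr(data, "contiguous"):
            return data.contiguous()
        raise TypeError(f"unsupported type: {type(data).__name__}")
```

In a dictionary-based pipeline this would presumably get a `d`-suffixed counterpart applied per key, following the usual MONAI convention.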
It would obviously be useful to convert to contiguous arrays before `CacheDataset` caching.
@ericspod @wyli @rijobro What do you guys think?
Issue Analytics
- Created 2 years ago
- Comments: 8 (8 by maintainers)
Top GitHub Comments
Hi @wyli ,
I submitted PR https://github.com/Project-MONAI/MONAI/pull/3614 to optionally support contiguous caching in `CacheDataset`. Verified with the training performance of the spleen segmentation pipeline. Thanks.
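The idea behind the PR can be sketched in a few lines (the function and parameter names here are assumptions for illustration, not the PR's actual code):

```python
import numpy as np

def cache_items(items, as_contiguous=True):
    """Cache deterministic transform outputs once; optionally convert
    them to a contiguous layout at cache time, so the random transforms
    applied every epoch read fast, contiguous memory."""
    cached = []
    for item in items:
        if as_contiguous and isinstance(item, np.ndarray):
            item = np.ascontiguousarray(item)
        cached.append(item)
    return cached
```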
Hi @wyli ,
In the spleen training pipeline, when the cached `image` and `label` are not contiguous, the `RandCropByPosNegLabeld` transform becomes slower, because the NumPy computation here is much slower (5x slower): https://github.com/Project-MONAI/MONAI/blob/dev/monai/transforms/utils.py#L300 I verified that on my local machine, and @yiheng-wang-nv will help double-confirm on other machines soon. Then I will try to implement the `AsContiguous` transform in our next `v0.8.1rc3` tag if you guys don't have other concerns. Thanks.
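The slowdown can be reproduced with a plain `np.nonzero` call, which is the kind of computation at the linked line (the array size is arbitrary and the measured ratio will vary by machine; this only illustrates the effect):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
mask = (rng.random((128, 128, 128)) > 0.99).astype(np.uint8)

# A transposed view has reversed strides and is not contiguous.
noncontig = mask.transpose(2, 1, 0)
contig = np.ascontiguousarray(noncontig)

# Both calls return identical indices, but the strided walk over the
# non-contiguous view is substantially slower.
for name, arr in [("non-contiguous", noncontig), ("contiguous", contig)]:
    t0 = time.perf_counter()
    idx = np.nonzero(arr)
    print(f"{name}: {time.perf_counter() - t0:.4f}s, {idx[0].size} voxels")
```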