CropForegroundd only works for images with the same size and orientation

Describe the bug

The bounding box for the crop is not projected into the space of the target image. This leads to wrong crops if the target image has a different field of view (FoV) or orientation than the image from which the bounding box was taken.
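
For context, a rough sketch of what the dictionary transform effectively does in MONAI 0.7 (my own simplification with a hypothetical function name, not the library source): one bounding box is computed in the voxel space of the source_key image, and the same voxel indices are then applied to every key, so any image with a different affine is cropped in the wrong place.

from monai.transforms.croppad.array import SpatialCrop
from monai.transforms.utils import generate_spatial_bounding_box

def crop_foreground_like(data, keys, source_key, margin=0):
    # bounding box in the *source* image's voxel coordinates (channel-first array)
    box_start, box_end = generate_spatial_bounding_box(data[source_key], margin=margin)
    cropper = SpatialCrop(roi_start=box_start, roi_end=box_end)
    # the same voxel box is applied to every key, ignoring each image's own affine
    return {k: (cropper(v) if k in keys else v) for k, v in data.items()}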

To Reproduce

  1. Download the following two images: seg.nii.gz, thalamic.nii.gz
  2. Run the code below. This assumes that the fix proposed in https://github.com/Project-MONAI/MONAI/issues/3167 is used.
from pathlib import Path

import numpy as np
from monai.transforms.compose import Compose
# Use the fixed transform as proposed in #3167 
from path.to.the.fixed.transform import CropForegroundd
from monai.transforms.io.dictionary import LoadImaged, SaveImaged
from monai.transforms.utility.dictionary import EnsureChannelFirstd, ToTensord

path = Path('scratch/monai_crop_bug')

SEG_KEY = 'seg'
THALA_KEY = 'thalamic'
FILE_KEYS = [SEG_KEY, THALA_KEY]

data = {
    SEG_KEY: path / 'seg.nii.gz',
    THALA_KEY: path / 'thalamic.nii.gz',
}

margin = 20

process = Compose([
    LoadImaged(FILE_KEYS),
    ToTensord(FILE_KEYS),
    EnsureChannelFirstd(FILE_KEYS),
    CropForegroundd(FILE_KEYS, source_key=SEG_KEY, margin=margin),
    SaveImaged(
        FILE_KEYS,
        output_dir='scratch/monai_crop_bug',
        output_postfix=f"crop_{margin}",
        resample=False,
        output_dtype=np.int8,
        separate_folder=False,
    ),
])

results = process([data])
  3. You should get thalamic_crop_20.nii.gz. [Image "wrong crop": the white area is the original extent and the green area is the extent after the crop. It should not have been cropped (it should still show the whole area).]
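
As a quick sanity check (not part of the original report), the differing geometry of the two volumes can be confirmed by printing their shapes and affines with nibabel, which is installed in the environment below:

import nibabel as nib
from pathlib import Path

path = Path('scratch/monai_crop_bug')  # same folder as in the script above
for name in ('seg.nii.gz', 'thalamic.nii.gz'):
    img = nib.load(path / name)
    print(name, img.shape)  # different matrix sizes => different FoV
    print(img.affine)       # different rotations/origins => different orientation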

Expected behavior

The crop works for combinations of images of all sizes and orientations.

The best solution is probably to project the bounding box into the space of the target image. I’m not yet 100% sure how this could be implemented, but I guess it would be something like this (a sketch follows below the list):

  1. Project the bounding box into mm space of the target image using the target image’s affine.
  2. Somehow project the bounding box back into voxel space of the target image. <- I’m unsure how to do this step.
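
A minimal sketch of these two steps (my own illustration with a hypothetical helper name, not MONAI code), assuming both images carry 4x4 NIfTI-style affines: map all eight corners of the box into world (mm) space with the source image’s affine, map them back with the inverse of the target image’s affine, and take the enclosing axis-aligned box.

from itertools import product

import numpy as np

def project_bbox(box_start, box_end, src_affine, dst_affine):
    # all eight corners of the box, in source-image voxel coordinates
    corners = np.array(list(product(*zip(box_start, box_end))), dtype=float)  # (8, 3)
    corners_h = np.hstack([corners, np.ones((len(corners), 1))])              # homogeneous coords
    world = corners_h @ src_affine.T                         # source voxel space -> world (mm)
    dst_vox = (world @ np.linalg.inv(dst_affine).T)[:, :3]   # world (mm) -> target voxel space
    # enclosing axis-aligned box, rounded outwards so no foreground is lost
    new_start = np.floor(dst_vox.min(axis=0)).astype(int)
    new_end = np.ceil(dst_vox.max(axis=0)).astype(int)
    return new_start, new_end

The resulting box would still have to be clipped to the target image’s spatial shape (and the margin applied in a consistent space) before the actual crop.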

Environment

Printing MONAI config…

MONAI version: 0.7.0
Numpy version: 1.19.2
Pytorch version: 1.8.1+cu111
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: bfa054b9c3064628a21f4c35bbe3132964e91f43

Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 3.1.1
scikit-image version: 0.18.0
Pillow version: 8.3.2
Tensorboard version: 2.7.0
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: 4.62.3
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: NOT INSTALLED or UNKNOWN VERSION.
pandas version: 1.2.4
einops version: NOT INSTALLED or UNKNOWN VERSION.
transformers version: NOT INSTALLED or UNKNOWN VERSION.

For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

Printing system config…

System: Windows
Win32 version: ('10', '10.0.19041', 'SP0', '')
Platform: Windows-10-10.0.19041-SP0
Processor: Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
Machine: AMD64
Python version: 3.7.7
Process name: python.exe
Command: ['C:\Users\SebastianPenhouet\AppData\Local\Programs\Python\Python37\python.exe', '-c', 'import monai; monai.config.print_debug_info()']
Open files: [popenfile(path='C:\Windows\System32\de-DE\KernelBase.dll.mui', fd=-1), popenfile(path='C:\Windows\System32\de-DE\kernel32.dll.mui', fd=-1)]
Num physical CPUs: 6
Num logical CPUs: 12
Num usable CPUs: 12
CPU usage (%): [16.3, 11.8, 26.1, 20.3, 12.4, 7.2, 17.6, 16.3, 15.0, 9.2, 12.4, 58.2]
CPU freq. (MHz): 3501
Load avg. in last 1, 5, 15 mins (%): [0.0, 0.0, 0.0]
Disk usage (%): 97.8
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 31.9
Available memory (GB): 18.6
Used memory (GB): 13.3

Printing GPU config…

Num GPUs: 1
Has CUDA: True
CUDA version: 11.1
cuDNN enabled: True
cuDNN version: 8005
Current device: 0
Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_37']
GPU 0 Name: Quadro K2200
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 5
GPU 0 Total memory (GB): 4.0
GPU 0 CUDA capability (maj.min): 5.0

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
ericspod commented, Nov 9, 2021

I think I understand the issue here and the fix, though @rijobro would be best to consult on transforms when he’s back.

0 reactions
Spenhouet commented, Nov 15, 2021

Hi @rijobro,

I don’t think our use case is clear. For our data we need this; there is no way around it. Prior resampling is not possible since the resampled images would not fit into RAM. The individual images are from different areas of the scan and therefore mostly do not overlap.

I would not see this as such a large undertaking. Why not just fix the transforms that are reported? This could also be a community-driven improvement (as with this issue and the implementation I provided). We are also using MONAI extensively, and for most methods this is not an issue. As it currently stands, there are three transforms that would need an adjustment. This does not sound so bad to me (or like a huge commitment).

My suggested change just makes the code more generic and more widely applicable, so I do not see a downside to it. By the way, before something is implemented, please ping me; I believe we made further internal code changes which I could sync here.
