
CropForegroundd does not adjust the affine accordingly

See original GitHub issue

Describe the bug

We are using the CropForegroundd transform on multiple images with different fields of view (FoV). After saving them, the images no longer match (they cannot be overlaid).

I’m unsure whether there are multiple bugs, but the first one I found is this: the CropForegroundd transform apparently does not update the affine.

To Reproduce

  1. Download the following NIfTI file: thalamic.nii.gz

  2. Run the following code:

from pathlib import Path

import numpy as np
from monai.transforms.compose import Compose
from monai.transforms.croppad.dictionary import CropForegroundd
from monai.transforms.io.dictionary import LoadImaged, SaveImaged
from monai.transforms.utility.dictionary import EnsureChannelFirstd, ToTensord

path = Path('.')

THALA_KEY = 'thalamic'
FILE_KEYS = [THALA_KEY]

data = {
    THALA_KEY: path / 'thalamic.nii.gz',
}

process = Compose([
    LoadImaged(FILE_KEYS),
    ToTensord(FILE_KEYS),
    EnsureChannelFirstd(FILE_KEYS),
    CropForegroundd(FILE_KEYS, source_key=THALA_KEY),
    SaveImaged(
        FILE_KEYS,
        output_dir='output',
        output_postfix="crop",
        resample=False,
        output_dtype=np.int16,
        separate_folder=False,
    ),
])

process([data])

This should result in the following file being stored: thalamic_crop.nii.gz

  3. Load both images with MRIcroGL (or your favorite NIfTI viewer).

Expected behavior

We would expect the affine to be adjusted for the crop so that the cropped image still aligns in 3D space. This is not the case: the affine is the same as before the crop.
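For illustration, here is a minimal sketch (plain NumPy, not MONAI code) of the adjustment we would expect: if the crop keeps voxels starting at index `start`, the new image origin should be the world-space position of that voxel, while the rotation/scaling part stays unchanged. `cropped_affine` is a hypothetical helper, not part of any library:

```python
import numpy as np

def cropped_affine(affine: np.ndarray, start) -> np.ndarray:
    """Return the affine of an image cropped so that voxel `start`
    becomes the new index origin [0, 0, 0]."""
    new_affine = affine.copy()
    # The new world-space origin is the world position of voxel `start`
    # under the old affine; the direction/spacing block is unchanged.
    new_affine[:3, 3] = affine[:3, :3] @ np.asarray(start, dtype=float) + affine[:3, 3]
    return new_affine

# Example: 2 mm isotropic spacing, crop starting at voxel (10, 5, 0)
# shifts the origin by (20, 10, 0) mm.
affine = np.diag([2.0, 2.0, 2.0, 1.0])
print(cropped_affine(affine, (10, 5, 0)))
```

With an update like this, the cropped file would overlay correctly on the original in a NIfTI viewer.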

Screenshots

Here you can see the mismatch in 3D-space alignment caused by the missing affine update. White is the original image and red is the cropped image.

3D alignment mismatch

Environment


Printing MONAI config…

MONAI version: 0.7.0
Numpy version: 1.19.2
Pytorch version: 1.8.1+cu111
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: bfa054b9c3064628a21f4c35bbe3132964e91f43

Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 3.1.1
scikit-image version: 0.18.0
Pillow version: 8.3.2
Tensorboard version: 2.7.0
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: 4.62.3
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: NOT INSTALLED or UNKNOWN VERSION.
pandas version: 1.2.4
einops version: NOT INSTALLED or UNKNOWN VERSION.
transformers version: NOT INSTALLED or UNKNOWN VERSION.

For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

Printing system config…

System: Windows
Win32 version: ('10', '10.0.19041', 'SP0', '')
Platform: Windows-10-10.0.19041-SP0
Processor: Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
Machine: AMD64
Python version: 3.7.7
Process name: python.exe
Command: ['C:\Users\SebastianPenhouet\AppData\Local\Programs\Python\Python37\python.exe', '-c', 'import monai; monai.config.print_debug_info()']
Open files: [popenfile(path='C:\Windows\System32\de-DE\KernelBase.dll.mui', fd=-1), popenfile(path='C:\Windows\System32\de-DE\kernel32.dll.mui', fd=-1)]
Num physical CPUs: 6
Num logical CPUs: 12
Num usable CPUs: 12
CPU usage (%): [16.3, 11.8, 26.1, 20.3, 12.4, 7.2, 17.6, 16.3, 15.0, 9.2, 12.4, 58.2]
CPU freq. (MHz): 3501
Load avg. in last 1, 5, 15 mins (%): [0.0, 0.0, 0.0]
Disk usage (%): 97.8
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 31.9
Available memory (GB): 18.6
Used memory (GB): 13.3

Printing GPU config…

Num GPUs: 1
Has CUDA: True
CUDA version: 11.1
cuDNN enabled: True
cuDNN version: 8005
Current device: 0
Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_37']
GPU 0 Name: Quadro K2200
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 5
GPU 0 Total memory (GB): 4.0
GPU 0 CUDA capability (maj.min): 5.0

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 17 (16 by maintainers)

Top GitHub Comments

2 reactions
Constantin-Jehn commented, Apr 8, 2022

Thank you @Spenhouet for bringing up the issue. I am working on slice-to-volume registration from multiple stacks and want to use cropping during preprocessing. But since the metadata is not updated during cropping, the spatial relationship between the stacks is lost.

If you only need it for preprocessing, torchio (https://torchio.readthedocs.io/transforms/preprocessing.html#croporpad) does the job very well.

1 reaction
rijobro commented, Oct 22, 2021

We have the base classes Pad and SpatialCrop. If these were updated to optionally update a given dictionary of metadata, then all crop/pad transforms, whether dictionary- or array-based, would benefit from the updates. This seems to me like the best way forward; what do you guys think?

class Pad(Transform):
    def __init__(
        self,
        to_pad: List[Tuple[int, int]],
        mode: Union[NumpyPadMode, PytorchPadMode, str] = NumpyPadMode.CONSTANT,
        **kwargs,
    ) -> None:
        ...

    def __call__(self, img, mode=None, meta_data: Optional[Dict] = None) -> Union[NdarrayOrTensor, Tuple[NdarrayOrTensor, Dict]]:
        # do the padding
        ...
        if meta_data is not None:
            # update the affine
            ...
            return img, updated_meta
        return img

The problem is that once we have that, we should really have it for all transforms. Otherwise, the affine will be wrong if we apply e.g., RandRotate90d beforehand and the affine isn’t updated accordingly.
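To illustrate the point about rotations: any index-space rearrangement has a matrix that must be folded into the affine. Below is a sketch (plain NumPy, not MONAI code) for a 90-degree rotation in the first two spatial axes, as np.rot90 performs it; `rot90_affine` is a hypothetical helper for this example only:

```python
import numpy as np

def rot90_affine(affine: np.ndarray, shape) -> np.ndarray:
    """Affine of a 3D image after np.rot90 in the first two spatial axes.

    np.rot90 maps new index (i, j, k) to old index (j, w - 1 - i, k),
    where w is the old extent of axis 1. Encoding that map as a
    homogeneous matrix m, the new affine is the old affine composed
    with m (world = affine @ old_index = affine @ m @ new_index).
    """
    w = shape[1]
    m = np.array([
        [0.0, 1.0, 0.0, 0.0],     # old_i = new_j
        [-1.0, 0.0, 0.0, w - 1],  # old_j = (w - 1) - new_i
        [0.0, 0.0, 1.0, 0.0],     # old_k = new_k
        [0.0, 0.0, 0.0, 1.0],
    ])
    return affine @ m

# Rotate a small volume and keep its affine consistent.
img = np.arange(6.0).reshape(2, 3, 1)
affine = np.diag([2.0, 2.0, 2.0, 1.0])
rotated = np.rot90(img, axes=(0, 1))
new_affine = rot90_affine(affine, img.shape)
```

With this bookkeeping, every voxel of `rotated` maps to the same world-space position it had in `img`, which is exactly the invariant that breaks today when the affine is left untouched.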
