
Error in Compose of Array implementation of transforms (Spacing and Orientation)

See original GitHub issue

Describe the bug
Composing the array implementations of the spacing and orientation transforms throws an error. This happens primarily because each of these transforms returns more than one output, which does not work as the input to the next transform in the chain.
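For context, a minimal sketch of the mismatch (assuming MONAI 0.4's array transforms, with a dummy volume and placeholder spacing): Spacing returns a tuple of (resampled data, original affine, new affine), and that whole tuple is what the next transform in the Compose receives.

import numpy as np
from monai.transforms import Spacing

img = np.zeros((1, 64, 64, 64))  # dummy channel-first volume, for illustration only
out = Spacing(pixdim=(1.0, 1.0, 1.0))(img)  # placeholder pixdim
print(type(out), len(out))  # a tuple of length 3: (data, original affine, new affine)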

To Reproduce
Code to reproduce the error (the error occurs in both MONAI 0.3.x and 0.4.x):

import os
import glob
import numpy as np
import torch
import monai
from monai.transforms import *
from monai.utils import first
root_data_path = '<data path>'
resample_spacing = [1.83723155, 1.83723155, 2.42438879]
patch_size=[128, 128, 128]
train_images = sorted(glob.glob(os.path.join(root_data_path, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(root_data_path, 'labelsTr', '*.nii.gz')))
train_image_transform = Compose([
    LoadNifti(),
    Spacing(resample_spacing, mode='bilinear'),
    Orientation(axcodes='RAS'),
    ScaleIntensity(),
    AddChannel(),
    ToTensor(),
])
train_label_transform = Compose([
    LoadNifti(),
    Spacing(resample_spacing, mode='nearest'),
    Orientation(axcodes='RAS'),
    AddChannel(),
    ToTensor(),
])
train_files = [{"image": img, "label": seg} for img, seg in zip(train_images, train_labels)]
ds = monai.data.ArrayDataset(img=train_images, img_transform=train_image_transform,
                             seg=train_labels, seg_transform=train_label_transform)
train_ds = monai.data.GridPatchDataset(ds, patch_size=patch_size)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=2, num_workers=1,
                                           pin_memory=torch.cuda.is_available())
im, seg = first(train_loader)

Expected behavior
This should work without throwing any errors. The same Compose of transforms works with the dictionary implementation but not with the array implementation. The dictionary transforms Spacingd and Orientationd return a single value, while the array transforms Spacing and Orientation return three values, which leads to the error shown in the screenshots below.
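For comparison, a minimal sketch of the dictionary-based equivalent that does compose cleanly (an illustration assuming MONAI 0.4's dictionary transforms and reusing resample_spacing from the reproduction code; this is not the reporter's actual training script):

from monai.transforms import (Compose, LoadImaged, AddChanneld, Spacingd,
                              Orientationd, ScaleIntensityd, ToTensord)

dict_transform = Compose([
    LoadImaged(keys=["image", "label"]),
    AddChanneld(keys=["image", "label"]),  # dictionary transforms also expect channel-first data
    Spacingd(keys=["image", "label"], pixdim=resample_spacing,
             mode=("bilinear", "nearest")),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    ScaleIntensityd(keys=["image"]),
    ToTensord(keys=["image", "label"]),
])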

The array-based pipeline is needed primarily to generate patches for the datasets. I need to feed sliding-window patches even during training, because RandSpatialCrop and RandCropByPosNegLabel are not performing well for my problem. The patch-generation code in the reproduction above is based on the tutorial.

Screenshots
(Screenshots of the resulting error are attached to the original GitHub issue.)

Environment

================================
Printing MONAI config...
================================
MONAI version: 0.4.0
Numpy version: 1.19.4
Pytorch version: 1.7.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1

Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 3.2.1
scikit-image version: NOT INSTALLED or UNKNOWN VERSION.
Pillow version: NOT INSTALLED or UNKNOWN VERSION.
Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: NOT INSTALLED or UNKNOWN VERSION.
ITK version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: NOT INSTALLED or UNKNOWN VERSION.
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: NOT INSTALLED or UNKNOWN VERSION.

For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies


================================
Printing system config...
================================
`psutil` required for `print_system_info`

================================
Printing GPU config...
================================
Num GPUs: 1
Has CUDA: True
CUDA version: 10.2
cuDNN enabled: True
cuDNN version: 7605
Current device: 0
Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75']
Info for GPU: 0
        Name: Quadro RTX 5000
        Is integrated: False
        Is multi GPU board: False
        Multi processor count: 48
        Total memory (GB): 15.7
        Cached memory (GB): 0.0
        Allocated memory (GB): 0.0
        CUDA capability (maj.min): 7.5

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
Nic-Ma commented, Dec 30, 2020

Hi @architraj29 ,

Almost all the transforms in MONAI expect data shape CHWD, so you need to apply AddChannel transform before Spacing.

Thanks.
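For illustration, a hypothetical snippet (not from the thread) of that ordering, applying AddChannel before Spacing so the data is channel-first; resample_spacing is the value from the reproduction code, and the tuple returned by Spacing is unpacked explicitly:

from monai.transforms import LoadImage, AddChannel, Spacing

img = LoadImage(image_only=True)('<image path>')  # HWD array, metadata discarded
img = AddChannel()(img)                           # -> CHWD, as the transforms expect
img, affine, new_affine = Spacing(resample_spacing, mode='bilinear')(img)
# note: without the nifti affine from the loader metadata, Spacing assumes an identity affine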

1 reaction
Nic-Ma commented, Dec 30, 2020

Hi @architraj29 ,

You are right, GridPatchDataset can't support dict data so far; we may need to enhance it later. For the general Dataset, I think you can try the pseudo code below:

import os, glob
from monai.data import Dataset, DataLoader
from monai.transforms import Compose, LoadImaged, AddChanneld, ScaleIntensityd, ToTensord

images = sorted(glob.glob(os.path.join(root_dir, "im*.nii.gz")))
segs = sorted(glob.glob(os.path.join(root_dir, "seg*.nii.gz")))
train_files = [{"img": img, "seg": seg} for img, seg in zip(images[:20], segs[:20])]

train_trans = Compose([
    LoadImaged(keys=["img", "seg"]),
    AddChanneld(keys=["img", "seg"]),
    ScaleIntensityd(keys=["img"]),
    ToTensord(keys=["img", "seg"]),
])
ds = Dataset(data=train_files, transform=train_trans)
dataloader = DataLoader(ds, ...)

If you must use GridPatchDataset with the Spacing & Orientation transforms, you can define a simple function to execute the transforms instead of composing them:

def exec_trans(data):
    img = LoadImage(image_only=True)(data)
    img = AddChannel()(img)
    # Spacing requires a pixdim and returns (data, affine, new_affine)
    img, affine, new_affine = Spacing(pixdim=resample_spacing)(img)
    img = ScaleIntensity()(img)
    return img
...

Thanks.
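Putting these suggestions together, one possible sketch (my own assumption building on the comments above, not code from the thread) of plain per-array transform functions passed to ArrayDataset and GridPatchDataset; as in the snippet above, the nifti affines are not carried through, so Spacing and Orientation fall back to identity affines:

def image_trans(path):
    img = LoadImage(image_only=True)(path)
    img = AddChannel()(img)                                      # transforms expect channel-first data
    img, _, _ = Spacing(resample_spacing, mode='bilinear')(img)  # unpack (data, affine, new_affine)
    img, _, _ = Orientation(axcodes='RAS')(img)
    img = ScaleIntensity()(img)
    return img

def label_trans(path):
    seg = LoadImage(image_only=True)(path)
    seg = AddChannel()(seg)
    seg, _, _ = Spacing(resample_spacing, mode='nearest')(seg)
    seg, _, _ = Orientation(axcodes='RAS')(seg)
    return seg

ds = monai.data.ArrayDataset(img=train_images, img_transform=image_trans,
                             seg=train_labels, seg_transform=label_trans)
train_ds = monai.data.GridPatchDataset(ds, patch_size=patch_size)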



