DICOM loaded with wrong orientation
Describe the bug
Hi, I'm loading some DICOM series from this dataset: https://www.ircad.fr/fr/recherche/3d-ircadb-01-fr/. I load them with LoadImaged. The problem is that the orientation of the loaded data is wrong (see screenshot below).
To Reproduce
1. Download the dataset mentioned above.
2. Run this code (replace ircad_dir with the folder where you downloaded the data):
import os

from monai.data import DataLoader, Dataset
from monai.transforms import (
    AddChanneld,
    Compose,
    LoadImaged,
    Orientationd,
    SaveImaged,
    Spacingd,
)

root_dir = r'..\datas'
ircad_dir = os.path.join(root_dir, 'data_ircad')
train_images_ircad = [os.path.join(ircad_dir, x, 'PATIENT_DICOM') for x in os.listdir(ircad_dir)]
train_labels_ircad = [os.path.join(ircad_dir, x, 'LABELLED_DICOM') for x in os.listdir(ircad_dir)]
data_dict = [{'image': train_images_ircad[0], 'label': train_labels_ircad[0]}]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),                   # read the DICOM series from the folder
    AddChanneld(keys=["image", "label"]),                  # add a channel dimension
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
    Orientationd(keys=["image", "label"], axcodes="RAS"),  # reorient to RAS
    SaveImaged(keys=["image", "label"], output_postfix='_after_trans'),
])

ds = Dataset(data=data_dict, transform=transforms)
dl = DataLoader(ds, batch_size=1, num_workers=0)
for data in dl:
    continue
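To check what orientation MONAI actually applied, one can decode the affines kept in the metadata dictionary. This is a minimal sketch, assuming the default "<key>_meta_dict" naming produced by LoadImaged in this MONAI version; it uses nibabel's aff2axcodes to turn each affine into anatomical axis codes such as ('R', 'A', 'S'):

import nibabel as nib

for data in dl:
    meta = data["image_meta_dict"]
    # original_affine is the affine read from the file; affine is the one
    # after Spacingd/Orientationd have been applied.
    print("file axcodes:   ", nib.orientations.aff2axcodes(meta["original_affine"][0].numpy()))
    print("current axcodes:", nib.orientations.aff2axcodes(meta["affine"][0].numpy()))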
Expected behavior
The data should have the correct orientation.
Screenshots
Note: the ground-truth orientation is obtained by opening the DICOM in 3D Slicer.

Environment
================================
Printing MONAI config...
MONAI version: 0.4.0+127.g380f042
Numpy version: 1.19.2
Pytorch version: 1.7.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 380f042dec9c55ba1a5ab241ae6b1a7b3e2b07fb

Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.18.1
Pillow version: 8.1.1
Tensorboard version: 2.4.1
gdown version: 3.12.2
TorchVision version: 0.8.2
ITK version: 5.1.2
tqdm version: 4.56.0
lmdb version: 1.1.1
psutil version: 5.8.0

For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

================================
Printing system config...
System: Windows
Win32 version: ('10', '10.0.18362', 'SP0', 'Multiprocessor Free')
Win32 edition: Core
Platform: Windows-10-10.0.18362-SP0
Processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
Machine: AMD64
Python version: 3.8.8
Process name: python.exe
Command: ['C:\Users\Camille\anaconda3\envs\kitware\python.exe', 'c:\Users\Camille\Desktop\important\Kitware\liver-segmentation\segmentation\tmp.py']
Open files: [popenfile(path='C:\Program Files\WindowsApps\Microsoft.LanguageExperiencePackfr-FR_18362.35.108.0_neutral__8wekyb3d8bbwe\Windows\System32\fr-FR\kernel32.dll.mui', fd=-1), popenfile(path='C:\Program Files\WindowsApps\Microsoft.LanguageExperiencePackfr-FR_18362.35.108.0_neutral__8wekyb3d8bbwe\Windows\System32\fr-FR\KernelBase.dll.mui', fd=-1)]
Num physical CPUs: 6
Num logical CPUs: 12
Num usable CPUs: 12
CPU usage (%): [9.7, 1.4, 16.7, 1.4, 12.7, 2.8, 9.9, 9.9, 8.5, 1.4, 5.6, 56.9]
CPU freq. (MHz): 2208
Load avg. in last 1, 5, 15 mins (%): [0.0, 0.0, 0.0]
Disk usage (%): 90.1
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 15.9
Available memory (GB): 8.2
Used memory (GB): 7.7

================================
Printing GPU config...
Num GPUs: 1
Has CUDA: True
CUDA version: 10.2
cuDNN enabled: True
cuDNN version: 7605
Current device: 0
Library compiled for CUDA architectures: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'compute_37']
GPU 0 Name: GeForce GTX 1070 with Max-Q Design
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 16
GPU 0 Total memory (GB): 8.0
GPU 0 Cached memory (GB): 0.0
GPU 0 Allocated memory (GB): 0.0
GPU 0 CUDA capability (maj.min): 6.1
Top GitHub Comments
Hi @JumpLK, I will work on a patch for this.
Hi, thank you for your answers. Are you working on the issue, or should I try to make a workaround on my side?
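(For anyone who needs an interim workaround while the patch lands: the volume can be reoriented manually from its affine with nibabel's orientation helpers. This is only a sketch, assuming the original affine read from the DICOM is trustworthy even though the applied orientation is not; it is not the fix the maintainers will ship.)

import nibabel as nib

def reorient_to_ras(volume, affine):
    # Decode the current anatomical orientation from the affine,
    # compute the transform that maps it onto RAS, and apply it
    # to the spatial axes of the array.
    current = nib.orientations.io_orientation(affine)
    target = nib.orientations.axcodes2ornt(("R", "A", "S"))
    transform = nib.orientations.ornt_transform(current, target)
    return nib.orientations.apply_orientation(volume, transform)

# Hypothetical usage on a loaded (H, W, D) array and its original affine:
# img_ras = reorient_to_ras(img, meta["original_affine"][0].numpy())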