cannot load dataset similar to spleen example
I am trying to use unet_training_dict.py on my own set of data. I can generate the random test files the script requires and everything works fine. However, I tried to create a set of 5 CT scans that have the same dimensions as the spleen example (512, 512, z). The spleen data does not load either and gives me the same error as my own data; it happens at the tuple check (and when that check is disabled, the data loader does not work):
Traceback (most recent call last):
File "unet_training_dict.py", line 235, in <module>
main()
File "unet_training_dict.py", line 132, in main
check_data = monai.utils.misc.first(check_loader)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/utils/misc.py", line 41, in first
for i in iterable:
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
return self._process_data(data)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
data.reraise()
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/transforms/utils.py", line 276, in apply_transform
return transform(data)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/transforms/croppad/dictionary.py", line 387, in __call__
self.randomize(label, image)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/transforms/croppad/dictionary.py", line 378, in randomize
self.spatial_size = fall_back_tuple(self.spatial_size, default=label.shape[1:])
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/utils/misc.py", line 140, in fall_back_tuple
user = ensure_tuple_rep(user_provided, ndim)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/utils/misc.py", line 99, in ensure_tuple_rep
raise ValueError(f"sequence must have length {dim}, got length {len(tup)}.")
ValueError: sequence must have length 2, got length 3.
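For reference, the check that raises can be reproduced on its own (illustrative values; I am assuming the script passes a 3-element spatial_size while the label only has two spatial dims left after the channel axis):

```python
from monai.utils.misc import fall_back_tuple

# label.shape[1:] as seen inside the crop transform's randomize()
label_spatial_shape = (512, 512)   # only 2 spatial dims after the channel axis
spatial_size = (96, 96, 96)        # a 3-element crop size

# fall_back_tuple() calls ensure_tuple_rep(), which raises because the provided
# sequence has 3 elements but the label has only 2 spatial dims:
# ValueError: sequence must have length 2, got length 3.
fall_back_tuple(spatial_size, default=label_spatial_shape)
```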
Then I took the spleen_segmentation_3d example, ran it with my own data, and got this:
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/networks/nets/unet.py", line 127, in forward
x = self.model(x)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/networks/layers/simplelayers.py", line 33, in forward
return torch.cat([x, self.submodule(x)], self.cat_dim)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/mayotic/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mayotic/.local/lib/python3.8/site-packages/monai/networks/layers/simplelayers.py", line 33, in forward
return torch.cat([x, self.submodule(x)], self.cat_dim)
RuntimeError: Sizes of tensors must match except in dimension 2. Got 4 and 3 (The offending index is 0)
I'm not sure why UNet is not working. Any guidance on how to properly format my training dataset would be great. Sorry if this is the incorrect place to post this; I could not find any discussion forum.
Also, I am not sure why, but the size shown differs between the example dataset and mine even though x, y are the same 512, 512:
image shape: torch.Size([512, 416, 31]), label shape: torch.Size([512, 416, 31])
image shape: torch.Size([487, 499, 100]), label shape: torch.Size([487, 499, 100])
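A rough divisibility check (my own reasoning, not from the docs; the exact rounding also depends on kernel size and padding):

```python
# The tutorial UNet uses strides=(2, 2, 2, 2), i.e. four stride-2 levels, so
# each spatial dimension is halved four times on the way down and doubled four
# times on the way up. Sizes that are not divisible by 2**4 = 16 cannot
# round-trip cleanly, which is the kind of mismatch torch.cat reports above.
for size in (512, 416, 31, 487, 499, 100):
    print(size, "divisible by 16" if size % 16 == 0 else "NOT divisible by 16")
```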
The CT data is from ImageJ.
Ubuntu 20.04 LTS
nvidia-driver-440
Cuda compilation tools, release 10.1, V10.1.243
MONAI version: 0.2.0
Python version: 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
Numpy version: 1.19.1
Pytorch version: 1.6.0
Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.1.1
scikit-image version: 0.17.2
Pillow version: 7.0.0
Tensorboard version: 2.3.0
Top GitHub Comments
Sorry for not replying sooner. I addressed the issue once all the requirements for UNet were met, chiefly the dimension sizes. In unet_training_dict.py, changing AsChannelFirstd (used for the synthetic data) to AddChanneld addressed some of the issues. I also had to play around with the ROI of the sliding window inference (a rough sketch of the adjusted pipeline is below). Overall, I got things to work. Thanks!
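A sketch of what the adjusted pipeline looked like (simplified and illustrative rather than my exact script; the crop/ROI sizes are placeholders, and newer MONAI versions use LoadImaged instead of LoadNiftid):

```python
from monai.transforms import (
    Compose, LoadNiftid, AddChanneld, ScaleIntensityd,
    RandCropByPosNegLabeld, ToTensord,
)

train_transforms = Compose([
    LoadNiftid(keys=["image", "label"]),
    # AddChanneld prepends a channel axis, giving (1, 512, 512, z) and keeping
    # all three spatial dims. AsChannelFirstd would instead move the last
    # spatial dim into the channel slot, leaving only two spatial dims.
    AddChanneld(keys=["image", "label"]),
    ScaleIntensityd(keys=["image"]),
    # spatial_size now needs 3 elements to match the 3 spatial dims, and each
    # element should also satisfy the UNet divisibility requirement.
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(96, 96, 16), pos=1, neg=1, num_samples=4,
    ),
    ToTensord(keys=["image", "label"]),
])
```

The sliding-window ROI has to respect the same constraints, e.g. something like `sliding_window_inference(val_images, roi_size=(96, 96, 16), sw_batch_size=4, predictor=model)`.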
MONAI is an amazing project and framework. We, at the Dental Software Foundation, are looking at machine learning and UNet segmentation to detect/segment the inferior alveolar nerve. MONAI has made our research easier!
@talmazov Hi, I tried to run your final version of the code and ran into the problem below:
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/monai/transforms/transform.py", line 48, in apply_transform
return transform(data)
File "/usr/local/lib/python3.7/dist-packages/monai/transforms/io/dictionary.py", line 103, in __call__
d = dict(data)
ValueError: dictionary update sequence element #0 has length 1; 2 is required
I have a training set of 100 images of size 512*512, and I wonder how to resize my data to fit the requirements of the code. Thanks.
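From the traceback, it looks like the items reaching the dictionary-based transforms are plain paths rather than dictionaries; my understanding is that the loader expects each dataset entry to already be a dict, roughly like this (an illustrative sketch, not my actual code):

```python
import glob

images = sorted(glob.glob("data/images/*.nii.gz"))
labels = sorted(glob.glob("data/labels/*.nii.gz"))
# Each item must be a dict so that LoadImaged's `d = dict(data)` succeeds.
data_dicts = [{"image": img, "label": seg} for img, seg in zip(images, labels)]
```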