Aggregator returns an error when the tensors in the batch are integer tensors.
🐛Bug
The aggregator raises an error when the tensors in the batch are integer tensors and get_output_tensor() is invoked in 'average' mode.
To reproduce
```python
import torch
import torchio as tio
from torch.utils.data import DataLoader

# create target image
int_im = tio.ScalarImage(tensor=(torch.rand(1, 256, 256, 256) * 65535).type(torch.int32))

# create sampler
subject = tio.Subject(im=int_im)
grid_sampler = tio.GridSampler(subject, patch_overlap=20, patch_size=128)
inference_dl = DataLoader(grid_sampler, batch_size=4)
aggregator = tio.GridAggregator(grid_sampler, 'average')

# sample and rebuild tensor
for i, mb in enumerate(inference_dl):
    aggregator.add_batch(mb['im'][tio.DATA], mb[tio.LOCATION])
o = aggregator.get_output_tensor()
```
A simple fix in aggregator.py: https://github.com/alabamagan/torchio/commit/084caaca5c9b41bcb99aa734e46a2eee43c5a9da
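For context, a minimal sketch of the kind of change that sidesteps the error inside GridAggregator.get_output_tensor() in 'average' mode; the variable names come from the traceback below, and the dtype bookkeeping is an assumption, not the literal content of the linked commit:

```python
# Sketch only: divide in floating point, then restore the original integer dtype.
# self._output_tensor and self._avgmask_tensor appear in the traceback below;
# the cast-back step is assumed, not copied from the commit.
if self.overlap_mode == 'average':
    output = torch.true_divide(self._output_tensor, self._avgmask_tensor)
    if not torch.is_floating_point(self._output_tensor):
        output = output.round().type(self._output_tensor.dtype)
else:
    output = self._output_tensor
```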
Expected behavior
The output integer data array should be reconstructed correctly.
Actual behavior
An error is raised:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-6-90a464355da2> in <module>()
      8 for i, mb in enumerate(inference_dl):
      9     aggregator.add_batch(mb['image'][tio.DATA], mb[tio.LOCATION])
---> 10 o = aggregator.get_output_tensor()

~/Toolkits/Anaconda2/envs/py3_cuda10/lib/python3.7/site-packages/torchio/data/inference/aggregator.py in get_output_tensor(self)
    163             self._output_tensor = self._output_tensor.type(torch.int32)
    164         if self.overlap_mode == 'average':
--> 165             output = self._output_tensor / self._avgmask_tensor
    166         else:
    167             output = self._output_tensor

RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
```
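The failure is simply PyTorch 1.6 refusing to apply `/` to two integer tensors. A standalone illustration (not TorchIO code) of the error and the replacement suggested by the message; as noted in the comments below, newer PyTorch releases perform true division here instead:

```python
import torch

summed = torch.tensor([3, 4, 6], dtype=torch.int32)   # accumulated patch values
counts = torch.tensor([2, 2, 2], dtype=torch.int32)   # number of overlapping patches

# summed / counts                       # RuntimeError on PyTorch 1.6 for integer tensors
print(torch.true_divide(summed, counts))  # tensor([1.5000, 2.0000, 3.0000])
```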
System info
Platform: Linux-4.15.0-140-generic-x86_64-with-debian-stretch-sid
TorchIO: 0.18.37
PyTorch: 1.6.0
NumPy: 1.19.1
Python: 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
Top GitHub Comments
I didn’t notice at first because I was working on integer tensors, but I think the change in be4bf4d to using `true_divide` might lead to a loss of precision if the averaged image patches are meant to be float values. It could have a big impact if the averaged patches are normalized to the range 0-1. It might be worth revisiting.

In short, I am working on index arrays and deformation tensors, so they take the form of integer tensors during slicing, but I can perform manual casting as long as the pipeline is not broken. I think you can close this issue, given that newer versions of PyTorch don’t have this problem.
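As an illustration of the manual casting mentioned above, a sketch that carries an integer index map through 'average' aggregation by feeding the sampler float data and rounding back afterwards (the tensor and variable names are hypothetical, not from the issue):

```python
import torch
import torchio as tio
from torch.utils.data import DataLoader

# Hypothetical integer index map; cast to float so the aggregator averages safely.
index_tensor = torch.randint(0, 1000, (1, 256, 256, 256))
subject = tio.Subject(im=tio.ScalarImage(tensor=index_tensor.float()))

grid_sampler = tio.GridSampler(subject, patch_size=128, patch_overlap=20)
aggregator = tio.GridAggregator(grid_sampler, 'average')

for mb in DataLoader(grid_sampler, batch_size=4):
    aggregator.add_batch(mb['im'][tio.DATA], mb[tio.LOCATION])

# Round and cast back so the reconstructed map is integer again.
restored = aggregator.get_output_tensor().round().long()
```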