ValueError: Expected y_min for bbox (tensor(0.4942), tensor(1.1944), tensor(0.5233), tensor(1.1998), tensor(45, dtype=torch.int32)) to be in the range [0.0, 1.0], got 1.1944444179534912.
🐛 Bug
I just tried to use one of the bounding-box transformations, RandomSizedBBoxSafeCrop(width=640, height=640, erosion_rate=0.2), but I end up with the following error:
ValueError: Expected y_min for bbox (tensor(0.4942), tensor(1.1944), tensor(0.5233), tensor(1.1998), tensor(45, dtype=torch.int32)) to be in the range [0.0, 1.0], got 1.1944444179534912.
To Reproduce
Steps to reproduce the behavior:
1. transforms = [RandomSizedBBoxSafeCrop(width=640, height=640, erosion_rate=0.2)]
augmentor = Compose(transforms, bbox_params=BboxParams(format='pascal_voc'))
- Input boxes:
[ [1138, 152, 1203, 175, 19], [1253, 1348, 1312, 2491, 23], [1309, 0, 1652, 3263, 23], [1231, 0, 1252, 949, 23], [539, 293, 566, 342, 44], [565, 373, 580, 420, 44], [668, 2000, 685, 2018, 44], … ]
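Several of the boxes above (e.g. y_max values of 2491 and 3263) extend past the image border, which is what trips Albumentations' validity check once the pascal_voc coordinates are normalized (the traceback below shows y values > 1.0). A minimal sketch of a pre-filter that clips boxes to the image bounds before calling the pipeline; the helper name and the image size used in the example are illustrative, not from the issue:

```python
def clip_boxes_pascal_voc(boxes, img_w, img_h):
    """Clip [x_min, y_min, x_max, y_max, label] boxes to the image,
    dropping any box that becomes empty after clipping."""
    clipped = []
    for x_min, y_min, x_max, y_max, label in boxes:
        x_min = max(0, min(x_min, img_w))
        x_max = max(0, min(x_max, img_w))
        y_min = max(0, min(y_min, img_h))
        y_max = max(0, min(y_max, img_h))
        if x_max > x_min and y_max > y_min:
            clipped.append([x_min, y_min, x_max, y_max, label])
    return clipped

# Hypothetical 1920x1080 image: the second box lies entirely below the
# bottom edge (y_min = 1348 > 1080), so it is dropped.
boxes = [[1138, 152, 1203, 175, 19], [1253, 1348, 1312, 2491, 23]]
print(clip_boxes_pascal_voc(boxes, 1920, 1080))
```

Running the cleaned boxes through Compose should then pass the check_bbox validation.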
Original Traceback (most recent call last):
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/keerti/PycharmProjects/ml-prototype/src/prototype/modules/dataset/Mapillary/mapillary_dataset.py", line 117, in __getitem__
    return self.load_dataset(image_path=img_path, mask_path=lbl_path, bbox_data=bbox_data_init)
  File "/home/keerti/PycharmProjects/ml-prototype/src/prototype/modules/dataset/dataset_pipeline.py", line 190, in load_dataset
    data = self.pipeline(**data)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/core/composition.py", line 207, in __call__
    p.preprocess(data)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/core/utils.py", line 85, in preprocess
    data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/core/utils.py", line 93, in check_and_convert
    return self.convert_to_albumentations(data, rows, cols)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/augmentations/bbox_utils.py", line 51, in convert_to_albumentations
    return convert_bboxes_to_albumentations(data, self.params.format, rows, cols, check_validity=True)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/augmentations/bbox_utils.py", line 311, in convert_bboxes_to_albumentations
    return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/augmentations/bbox_utils.py", line 311, in <listcomp>
    return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/augmentations/bbox_utils.py", line 259, in convert_bbox_to_albumentations
    check_bbox(bbox)
  File "/home/keerti/anaconda3/envs/research_framework/lib/python3.10/site-packages/albumentations/augmentations/bbox_utils.py", line 344, in check_bbox
    raise ValueError(
ValueError: Expected y_min for bbox (tensor(0.4942), tensor(1.1944), tensor(0.5233), tensor(1.1998), tensor(45, dtype=torch.int32)) to be in the range [0.0, 1.0], got 1.1944444179534912.
Environment
- Albumentations version : >=1.1
- Python version : 3.10.4
- OS : Linux
Issue Analytics
- Created: a year ago
- Comments: 8
@invincible-28 You have to check whether your input bounding boxes lie within the image and discard the ones that fall outside the image boundary. For example, once the coordinates are normalized to the image size, you could check:
0 <= x_min <= 1 and 0 <= y_min <= 1 and 0 <= x_max <= 1 and 0 <= y_max <= 1
If this holds, the box is inside the image. Another possible reason is that some bounding boxes are very small; those can be filtered out as well.
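As a sketch of the small-box filter the comment alludes to (the helper name and the area threshold are illustrative, not from the issue), boxes in pascal_voc pixel coordinates can be dropped by area; Albumentations can also do this during augmentation via the min_area and min_visibility arguments of BboxParams:

```python
def filter_small_boxes(boxes, min_area=16):
    """Keep only pascal_voc boxes [x_min, y_min, x_max, y_max, label]
    whose pixel area is at least min_area."""
    return [b for b in boxes if (b[2] - b[0]) * (b[3] - b[1]) >= min_area]

boxes = [
    [539, 293, 566, 342, 44],    # 27 * 49 = 1323 px^2 -> kept
    [668, 2000, 671, 2002, 44],  # 3 * 2 = 6 px^2 -> dropped
]
print(filter_small_boxes(boxes))
```

Tune the threshold to your data; too aggressive a filter discards legitimate small objects.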
I hope this helps
Okay.