
anchor_labeler: batch_label_anchors issue with single bounding box

See original GitHub issue

Hi @rwightman

Thank you so much for making this repo.

I am currently experiencing an issue when calling batch_label_anchors while the ground-truth bounding box list has only 1 bbox. I'm not sure what might have caused it; I'm wondering if you can take a look. Thanks in advance 😃

Issue: anchor_labeler.batch_label_anchors() raises an index-out-of-range error. Trace attached.

Setup anchor:

# Build a D0 config, model, anchors, and labeler
model_config = get_efficientdet_config('tf_efficientdet_d0')
model = EfficientDet(model_config, pretrained_backbone=True)
model_config.num_classes = 1
model_config.image_size = 512

anchors = Anchors(
    model_config.min_level, model_config.max_level,
    model_config.num_scales, model_config.aspect_ratios,
    model_config.anchor_scale, model_config.image_size,
)

anchor_labeler = AnchorLabeler(anchors, model_config.num_classes, match_threshold=0.5)

Reproduce:

tb = torch.tensor([[468., 353., 52., 386.5]])
tb = tb.int().float()
tlbl = torch.tensor([1.])
cls_targets, box_targets, num_positives = anchor_labeler.batch_label_anchors(1, [tb], [tlbl])

Trace:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-45-e8bceaf11fb2> in <module>
----> 1 cls_targets, box_targets,num_positives = anchor_labeler.batch_label_anchors(1,[tb],[tlbl])

/opt/conda/envs/fastai2/lib/python3.7/site-packages/effdet/anchors.py in batch_label_anchors(self, batch_size, gt_boxes, gt_classes)
    394             # cls_weights, box_weights are not used
    395             cls_targets, _, box_targets, _, matches = self.target_assigner.assign(
--> 396                 anchor_box_list, BoxList(gt_boxes[i]), gt_classes[i])
    397 
    398             # class labels start from 1 and the background class = -1

/opt/conda/envs/fastai2/lib/python3.7/site-packages/effdet/object_detection/target_assigner.py in assign(self, anchors, groundtruth_boxes, groundtruth_labels, groundtruth_weights)
    144         match_quality_matrix = self._similarity_calc.compare(groundtruth_boxes, anchors)
    145         match = self._matcher.match(match_quality_matrix)
--> 146         reg_targets = self._create_regression_targets(anchors, groundtruth_boxes, match)
    147         cls_targets = self._create_classification_targets(groundtruth_labels, match)
    148         reg_weights = self._create_regression_weights(match, groundtruth_weights)

/opt/conda/envs/fastai2/lib/python3.7/site-packages/effdet/object_detection/target_assigner.py in _create_regression_targets(self, anchors, groundtruth_boxes, match)
    167         zero_box = torch.zeros(4, device=device)
    168         matched_gt_boxes = match.gather_based_on_match(
--> 169             groundtruth_boxes.boxes(), unmatched_value=zero_box, ignored_value=zero_box)
    170         matched_gt_boxlist = box_list.BoxList(matched_gt_boxes)
    171         if groundtruth_boxes.has_field(self._keypoints_field_name):

/opt/conda/envs/fastai2/lib/python3.7/site-packages/effdet/object_detection/matcher.py in gather_based_on_match(self, input_tensor, unmatched_value, ignored_value)
    171         input_tensor = torch.cat([ss, input_tensor], dim=0)
    172         gather_indices = torch.clamp(self.match_results + 2, min=0)
--> 173         gathered_tensor = torch.index_select(input_tensor, 0, gather_indices)
    174         return gathered_tensor

IndexError: index out of range in self

Here are some values:
ipdb> p input_tensor
tensor([[  0.,   0.,   0.,   0.],
        [  0.,   0.,   0.,   0.],
        [468., 353.,  52., 386.]])

ipdb> p gather_indices.shape
torch.Size([49104])

ipdb> p gather_indices
tensor([    1,     1,     1,  ...,     1,     1, 24554])

ipdb> p self.match_results
tensor([   -1,    -1,    -1,  ...,    -1,    -1, 24552])
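The failure in the last frame can be reproduced in isolation: torch.index_select requires every index to address a valid row of its input, but gather_indices here contains 24554 while input_tensor has only 3 rows (the two sentinel rows the matcher prepends, plus the single ground-truth box). A minimal standalone sketch, independent of effdet:

```python
import torch

# Standalone illustration (not effdet code) of the final traceback frame:
# index_select demands every index be a valid row of the input, but the
# matcher produced an index far beyond the 3 available rows.
input_tensor = torch.zeros(3, 4)
gather_indices = torch.tensor([1, 1, 24554])
try:
    torch.index_select(input_tensor, 0, gather_indices)
except IndexError as err:
    print(err)  # index out of range in self
```

The suspicious part is that 24554 looks like an anchor index rather than a ground-truth box index, which points at the matching step rather than the gather itself.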

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 23 (10 by maintainers)

Top GitHub Comments

4 reactions
rwightman commented, Oct 2, 2020

@bguan @lgvaz the target bbox tensors passed to the bench by my code are shape [B, N, 4], where B is the batch size and N is the number of detection instances. Classes are [B, N] for scalar class targets. The bench currently iterates over the batch dimension and passes bbox [N, 4] and class targets [N] to the anchor labeler. For the boxes, you cannot strip the leading N dimension even in the case where there is one box (or none).

As per the TF implementation, I fix the size of N for all samples regardless of the number of actual boxes. This is done in my collate fn for efficiency. So by default my training routine passes one bbox tensor and one class tensor for the whole batch, of shape [B, 100, 4] and [B, 100], since 100 is the default max number of instances for COCO. I have tried setting N to 1 for my batch tensors in the past and it worked fine.

For the fixed N instances, I currently pad the unused instances with zeros. However, in taking another pass over the original code recently, I noticed it looks like the original pads boxes and classes out with -1, which ends up as -2 for the classes after subtracting the background offset; -2 is treated as a different sentinel value than -1 (background) by the loss/anchor/match code. I've had good results as is, but I'm trying runs with -1 padding to see if it has an impact.

Making the targets a list of tensors with differing N, as in your example, is something I've never tried. I can't say it wouldn't or shouldn't work for the anchor/matching code. An empty dim [] tensor certainly won't work; (0, 4) might. I never considered this approach because it seems less efficient: you'd be moving more tensors to the GPU separately, and there's less latency moving them in fewer transfers.

Either way, you definitely cannot pass boxes that don’t have a leading N dimension.
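The fixed-N convention described above can be sketched as a simple collate-style padding step. This is a hedged illustration, not effdet API: pad_targets and MAX_INSTANCES are made-up names, and the -1 pad value follows the original TF behavior discussed above rather than the current zero padding.

```python
import torch

# Sketch of the fixed-N target padding described above; pad_targets and
# MAX_INSTANCES are illustrative names, not part of the effdet API.
MAX_INSTANCES = 100  # default max instance count for COCO

def pad_targets(boxes, labels, max_instances=MAX_INSTANCES, pad_value=-1.0):
    """Pad one image's [n, 4] boxes and [n] labels to a fixed [max_instances, ...]."""
    n = boxes.shape[0]
    padded_boxes = torch.full((max_instances, 4), pad_value)
    padded_labels = torch.full((max_instances,), pad_value)
    padded_boxes[:n] = boxes
    padded_labels[:n] = labels
    return padded_boxes, padded_labels

# Even a single-box image keeps its leading N dimension after padding:
tb = torch.tensor([[468., 353., 52., 386.5]])
tlbl = torch.tensor([1.])
pb, pl = pad_targets(tb, tlbl)
print(pb.shape, pl.shape)  # torch.Size([100, 4]) torch.Size([100])
```

Stacking the per-image results then yields the [B, 100, 4] and [B, 100] batch tensors the bench expects.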

1 reaction
Chris-hughes10 commented, Jul 24, 2020

Hi @rwightman, by single bounding box I mean an Nx4 tensor with one ground-truth bbox per image. When I used an image with two ground-truth boxes, I didn't hit the problem.
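The shape distinction at issue, sketched quickly (illustrative only, using the box from the repro above):

```python
import torch

# A single ground-truth box must stay a [1, 4] tensor; squeezing it to [4]
# strips the leading N dimension the anchor labeler expects to iterate over.
one_box = torch.tensor([[468., 353., 52., 386.5]])  # shape [1, 4]: one box, still 2-D
no_n_dim = one_box.squeeze(0)                       # shape [4]: N dimension stripped
print(one_box.shape, no_n_dim.shape)
```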

