[BUG] Incorrect spatial location for Yolo detections
Describe the bug
The bounding box of the YoloSpatialDetectionNetwork does not overlap the location of the detected object, resulting in incorrect distance estimates, especially on the z (depth) axis.
To Reproduce
Steps to reproduce the behavior:
- Just run the ‘RGB&TinyYolo with Spatial Data’ example from your website here:
- Set the bounding box scale factor to one, i.e. `setBoundingBoxScaleFactor(1)`. (You can leave it as it is, but the problem is easier to observe with a larger bounding box; see the sketch after this list.)
- Place an object (a bottle, for example) that is identified by the neural network in front of the camera and observe how the bounding box in the depth image DOES NOT align with the actual object detected by the neural network.
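For reference, a minimal sketch of how the spatial detection node in that example can be configured for this reproduction. The blob path and Yolo-specific settings are placeholders (the real example downloads a Tiny YOLO blob); the relevant call is `setBoundingBoxScaleFactor`:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Spatial detection node used by the 'RGB & TinyYolo with Spatial Data' example.
spatialDetectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
spatialDetectionNetwork.setBlobPath("tiny-yolo-v4.blob")  # placeholder path
spatialDetectionNetwork.setConfidenceThreshold(0.5)

# Fraction of the detection bounding box that is averaged on the depth frame
# to compute the (x, y, z) estimate. 1.0 uses the full box, which makes the
# misalignment described above easier to see.
spatialDetectionNetwork.setBoundingBoxScaleFactor(1.0)
spatialDetectionNetwork.setDepthLowerThreshold(100)   # mm
spatialDetectionNetwork.setDepthUpperThreshold(5000)  # mm
```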
Expected behavior
The bounding box surrounding the object in the NN image should surround the same object in the depth image.
Screenshots
In the GIF attached below, you can see that the roundabout sign is correctly detected by the NN and framed in a bounding box, but in the depth image the bounding box sits to the right of the object, barely capturing part of its left margin. This leads to an incorrect depth estimate that depends on the background behind the object.
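For anyone reproducing this: the depth-side box shown in the GIF comes from the node's `boundingBoxMapping` output, which the example overlays on the colorized depth frame. Roughly, the overlay looks like this (function and variable names are illustrative, following the structure of the linked example):

```python
import cv2

def draw_depth_rois(depthFrameColor, boundingBoxMapping):
    # 'boundingBoxMapping' is a SpatialLocationCalculatorConfig message from the
    # node's boundingBoxMapping output; 'depthFrameColor' is the colorized depth frame.
    for roiData in boundingBoxMapping.getConfigData():
        # Normalized ROI -> pixel coordinates on the depth frame.
        roi = roiData.roi.denormalize(depthFrameColor.shape[1], depthFrameColor.shape[0])
        topLeft = roi.topLeft()
        bottomRight = roi.bottomRight()
        # This rectangle marks the region actually averaged for the (x, y, z) estimate;
        # in the GIF it lands to the right of the detected sign.
        cv2.rectangle(depthFrameColor,
                      (int(topLeft.x), int(topLeft.y)),
                      (int(bottomRight.x), int(bottomRight.y)),
                      (255, 255, 255), 1)
    return depthFrameColor
```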
@ZeicBeniamin We recently enabled this in the develop branch. See here: https://github.com/luxonis/depthai-python/blob/develop/examples/SpatialDetection/spatial_tiny_yolo.py#L85

If the issue has already been solved in another branch/commit (though I doubt it), please paste a link here. Or if you have any suggestions as to where modifications should be made, I would greatly appreciate it, since I need to get this feature working as fast as possible. Thank you!
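If the referenced line enables aligning the depth map to the RGB camera (an assumption here; confirm against the linked file), the relevant configuration would look roughly like the sketch below. Aligning depth to the RGB perspective is what makes the detection box (computed on the RGB frame) and the depth ROI coincide:

```python
import depthai as dai

pipeline = dai.Pipeline()

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)

monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Assumed fix: align the depth map to the RGB camera's viewpoint so that
# bounding boxes from the color frame map onto the correct depth pixels.
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
```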