
What is the best way to integrate Custom Detection + Tracking + Depth/Spatial Location Calculation

See original GitHub issue

Hello, I am currently trying to use YOLOv5 with multiple-object tracking (specifically SORT), retrieve the spatial coordinates of each tracked object, and then save the associated depth frame.

I have used some of the code from https://github.com/luxonis/depthai-experiments/tree/master/gen2-yolov5 for decoding the detections from YOLOv5 and then passed that to my MOT to get bounding boxes associated with the tracked objects. All of this so far has worked well. I have used the depthai_demo.py code and inserted my own snippets to make it work, and I can verify it via the previews.

Then, I’ve tried using SpatialLocationCalculator, similar to how it’s done in https://docs.luxonis.com/projects/api/en/latest/samples/SpatialDetection/spatial_location_calculator/#spatial-location-calculator . However, there seems to be a frame or two of lag between enqueueing the ROIs and when I can actually read those same ROIs back from the “spatialData” queue.
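For context, the node wiring such a setup typically uses looks roughly like this. This is a pipeline-configuration sketch only (stream names are placeholders, and it needs a connected OAK device to actually run), assuming the standard gen2 DepthAI API:

```python
import depthai as dai

pipeline = dai.Pipeline()

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
slc = pipeline.create(dai.node.SpatialLocationCalculator)

monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)
stereo.depth.link(slc.inputDepth)

# Host -> device queue for per-frame ROI configs (one ROI per tracked box).
xinCfg = pipeline.create(dai.node.XLinkIn)
xinCfg.setStreamName("slc_cfg")  # placeholder stream name
xinCfg.out.link(slc.inputConfig)

# Device -> host queue carrying the computed spatial locations.
xoutData = pipeline.create(dai.node.XLinkOut)
xoutData.setStreamName("spatialData")
slc.out.link(xoutData.input)
```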

I have tried setting slc.setWaitForConfigInput(True) on the SpatialLocationCalculator and reading with a blocking spatialDataQueue.get()

That fixes the ordering of received messages, but it seems to cause issues with reading new depth frames, which I can see in the depth preview window and also in the final print statement, where TOTAL FPS usually shows the color FPS much higher than depth or depthRaw. The color stream looks unchanged, though. It seems that setWaitForConfigInput(True) also pauses the depth stream while it waits for a config input… is this expected behavior?
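One alternative to blocking with setWaitForConfigInput(True) is to tolerate the one-to-two-frame lag and pair results back up on the host: DepthAI messages carry sequence numbers (getSequenceNum()), so the ROIs submitted for a given depth frame can be matched against the spatialData message that arrives later. A minimal bookkeeping sketch (the class name and structure are mine, not DepthAI API):

```python
from collections import OrderedDict

class SpatialResultMatcher:
    """Pair asynchronously returned spatial data with the ROIs that
    requested them, keyed by depth-frame sequence number, instead of
    blocking the pipeline with setWaitForConfigInput(True)."""

    def __init__(self, max_pending=8):
        self.pending = OrderedDict()  # seq -> list of submitted ROIs
        self.max_pending = max_pending

    def submit(self, seq, rois):
        # Remember which ROIs were enqueued for this depth frame.
        self.pending[seq] = rois
        while len(self.pending) > self.max_pending:
            self.pending.popitem(last=False)  # drop stale entries

    def match(self, seq, spatial_data):
        # Called when spatialData for frame `seq` arrives a frame or two
        # later; returns (roi, location) pairs, or None if seq is unknown.
        rois = self.pending.pop(seq, None)
        if rois is None:
            return None
        return list(zip(rois, spatial_data))
```

On each loop iteration the host would submit() the ROIs together with the depth frame's sequence number, and match() whatever spatialData messages have arrived, without ever stalling the depth stream.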

Also, is there a better way for me to do this?

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments:11

Top GitHub Comments

2 reactions
szabi-luxonis commented, Nov 6, 2021

@AlexanderJGomez

does this mean that for that example the bounding boxes we see are not actually properly aligned?

Yes.

The bounding boxes obtained from the preview output of the RGB camera are not aligned properly with the depth frame, because depth is aligned to the rectified-right perspective (the right stereo camera) by default. The only alignment that is done is scaling the bounding box, taking into consideration the FOV difference between the RGB and right stereo cameras. To obtain proper alignment you need to use the setDepthAlign and setIspScale options, as mentioned before; these align the depth frame to the RGB camera's perspective, so the bounding boxes will have proper alignment.
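To make the "only alignment is scaling for the FOV difference" point concrete, here is an illustrative host-side version of that scaling: a normalized bounding box is grown or shrunk about the image centre by the ratio of the two cameras' half-FOV tangents. The function name and the FOV values used below are my assumptions for illustration, not calibration data; the actual fix remains setDepthAlign/setIspScale on the device.

```python
import math

def scale_bbox_for_fov(bbox, src_hfov_deg, dst_hfov_deg):
    """Rescale a normalized (xmin, ymin, xmax, ymax) bounding box about
    the image centre to compensate for a horizontal-FOV difference
    between two cameras (simplified: one ratio applied to both axes)."""
    # How much narrower/wider the destination view is than the source.
    ratio = (math.tan(math.radians(src_hfov_deg) / 2)
             / math.tan(math.radians(dst_hfov_deg) / 2))
    xmin, ymin, xmax, ymax = bbox
    cx, cy = 0.5, 0.5  # normalized image centre
    return (cx + (xmin - cx) * ratio, cy + (ymin - cy) * ratio,
            cx + (xmax - cx) * ratio, cy + (ymax - cy) * ratio)
```

With a wider destination FOV the ratio is below 1, so a box drawn on the narrower RGB view shrinks toward the centre when carried over to the wider stereo view.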

1 reaction
szabi-luxonis commented, Nov 15, 2021

@AlexanderJGomez calc_spatials uses the mean value in the ROI, while the on-device SLC uses the average. Here is the equivalent of the SLC algorithm on host: https://github.com/luxonis/depthai-experiments/blob/master/gen2-spi/spatial-location-calculator/main.py#L150
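The linked host-side equivalent boils down to: discard depth values outside a valid range, average what remains in the ROI, and project the ROI centre through the camera's horizontal FOV to get X/Y/Z. A simplified sketch of that idea (the function name, default HFOV, and threshold values are my assumptions, not taken from the linked file):

```python
import math
import numpy as np

def calc_spatials(depth_frame, roi, hfov_deg=71.9,
                  thresh_low=200, thresh_high=30000):
    """Host-side sketch of the on-device SpatialLocationCalculator:
    average the in-range depth values inside a pixel ROI, then project
    the ROI centre into (X, Y, Z) in the same units as the depth map."""
    xmin, ymin, xmax, ymax = roi
    patch = depth_frame[ymin:ymax, xmin:xmax]
    valid = patch[(patch >= thresh_low) & (patch <= thresh_high)]
    if valid.size == 0:
        return None  # no usable depth in this ROI
    z = float(np.mean(valid))  # average depth in the ROI

    # Angle of the ROI centre from the optical axis, via the HFOV.
    h, w = depth_frame.shape
    cx = (xmin + xmax) / 2 - w / 2
    cy = (ymin + ymax) / 2 - h / 2
    tan_half_hfov = math.tan(math.radians(hfov_deg) / 2)
    angle_x = math.atan(tan_half_hfov * cx / (w / 2))
    angle_y = math.atan(tan_half_hfov * cy / (w / 2))
    return (z * math.tan(angle_x), -z * math.tan(angle_y), z)
```

An ROI centred on the optical axis should report X ≈ 0, Y ≈ 0 and Z equal to the ROI's average depth.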

Read more comments on GitHub >

