Depth Calculator Node in Gen2
Start with the why:
In the Gen1 pipeline builder of DepthAI, one of the core hard-coded functionalities is the capability to get the depth of an object found by a neural-network object detector. The algorithm averages over the center of the object's bounding box, with the size of this averaging area selectable via the API (the padding_factor). This can be seen below, where the blue box marks the region the depth is averaged over:
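The padded-center averaging described above can be sketched in a few lines. This is a minimal numpy illustration, not DepthAI's actual firmware code; the function name and the exact padding semantics (shrinking each edge of the box by `padding_factor` of its dimension) are assumptions for the sketch:

```python
import numpy as np

def average_depth_in_roi(depth_frame, bbox, padding_factor=0.3):
    """Average depth over the padded center of a detection bounding box.

    depth_frame: HxW array of depth values in millimeters (0 = no stereo match).
    bbox: (xmin, ymin, xmax, ymax) in pixels.
    padding_factor: fraction of each box dimension trimmed from every edge,
                    Gen1-style, so only the center region is averaged.
    """
    xmin, ymin, xmax, ymax = bbox
    w, h = xmax - xmin, ymax - ymin
    # Shrink the box toward its center; only this inner (blue) region is used.
    x0 = int(xmin + w * padding_factor)
    x1 = int(xmax - w * padding_factor)
    y0 = int(ymin + h * padding_factor)
    y1 = int(ymax - h * padding_factor)
    roi = depth_frame[y0:y1, x0:x1]
    valid = roi[roi > 0]  # drop invalid (zero-depth) pixels before averaging
    return float(valid.mean()) if valid.size else 0.0
```

Averaging only the shrunken center makes the estimate more robust to background pixels that leak into the corners of the detector's bounding box.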

DepthAI then reprojects this depth, using the camera intrinsics, to the object's location in physical space: X, Y, and Z in meters, relative to the center of the right grayscale camera.
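The reprojection is the standard pinhole back-projection. A small sketch, assuming the intrinsics are given as focal lengths (fx, fy) and principal point (cx, cy) of the right mono camera; the function name is hypothetical:

```python
def reproject_to_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into 3D camera space.

    fx, fy, cx, cy: intrinsics of the right mono camera, in pixels.
    Returns (X, Y, Z) in meters, relative to that camera's optical center.
    """
    z = depth_mm / 1000.0      # millimeters -> meters
    x = (u - cx) * z / fx      # invert the pinhole model: u = fx * X/Z + cx
    y = (v - cy) * z / fy      # and v = fy * Y/Z + cy
    return x, y, z
```

A pixel at the principal point thus maps to X = Y = 0 with Z equal to the measured depth, and pixels further from the center map to proportionally larger lateral offsets.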
This functionality, however, sits at a hard-coded position in the Gen1 pipeline, so it is only useful if that placement suits the end application. Exposing it as a node in the Gen2 Pipeline Builder (#136) allows it to be used in all sorts of other permutations: running additional CV/AI functions before computing the object location (say, to improve location accuracy), using it with object tracking instead of an object detector, or feeding it some other region of interest (ROI) produced not by an object detector but by some other node (or series of nodes).
Move to the how:
Leverage the Gen2 Pipeline Builder (#136) architecture to build a depth calculator node, which will work with the stereo node's output to generate a depth and 3D coordinates (XYZ, in meters) for a given passed-in ROI.
Move to the what:
A node that takes an ROI and calculates the depth of that ROI.
For the initial version of this node, let's just have it run the same algorithm as Gen1, i.e. the padding_factor approach. Later, we can expose other options for the node (for example, more sophisticated depth-edge detection with averaging only inside that region, or the capability to take a semantic-segmentation (pixel-map) input), but for now let's ignore this and save it for later.
Issue Analytics
- Created 3 years ago
- Comments: 16
Here is an example for tiny-yolo-v3 and v4: https://github.com/luxonis/depthai-python/pull/163
Install the following depthai library:
Download blob from here: https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/network/tiny-yolo-v4_openvino_2021.2_6shave.blob Or tiny-yolo-v3 : https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/network/tiny-yolo-v3_openvino_2021.2_6shave.blob
Run as:
python3 26_3_spatial_tiny_yolo.py <path_to>/tiny-yolo-v4_openvino_2021.2_6shave.blob

Yes, it will work by combining the recently added example 26/28 with example 22_tiny_yolo_v3_device_side_decoding.py.