Projection of Depth map into point cloud
Start with the why:
The depth map that we currently obtain from the DepthAI API is an image that contains a depth value at each pixel of the matrix/image.
Projecting this depth map into a point cloud allows us to exploit point cloud operations like registration, subsampling, reconstruction, etc.
For the initial implementation, this step is done on the host side instead of on the Myriad X.
It is important to eventually do this on the Myriad X itself, since that unlocks the above-mentioned operations to be carried out on the Myriad X and reduces the load on the host CPU.
The how:
The depth map is expressed relative to the right camera of the stereo pair. Using that camera's intrinsic parameters, each pixel (u, v) can be back-projected to a normalized ray (x_k, y_k, 1) in the camera reference frame, where x_k = (u - c_x) / f_x and y_k = (v - c_y) / f_y. Multiplying this ray by the z value from the depth map gives the (x, y, z) of each point captured by the stereo camera.
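A minimal NumPy sketch of this back-projection (an illustration of the math above, not the DepthAI API; the intrinsic matrix `K` and metric depth units are assumptions standing in for the device's calibration output):

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project an (H, W) depth map (z in meters) through a 3x3 intrinsic matrix K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    # Pixel grid: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Normalized ray (x_k, y_k, 1) scaled by z gives camera-frame coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    points = np.stack((x, y, depth), axis=-1).reshape(-1, 3)
    # Discard pixels with no valid depth
    return points[points[:, 2] > 0]
```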
The what:
Support point cloud projection on the DepthAI itself, with options like (see the host-side sketch after this list):
- sub-sampling
- registration
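A host-side sketch of these two options using Open3D (an assumption chosen for illustration; the on-device Myriad X implementation would differ). The two clouds below are synthetic stand-ins for consecutive projected depth frames:

```python
import numpy as np
import open3d as o3d

def to_o3d(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    return pcd

# Synthetic stand-ins for point clouds projected from two consecutive depth frames
rng = np.random.default_rng(0)
points_a = rng.uniform(-1.0, 1.0, size=(50_000, 3))
points_b = points_a + np.array([0.02, 0.0, 0.01])   # known rigid shift

# Sub-sampling: voxel-grid downsample to thin the dense stereo cloud
src = to_o3d(points_a).voxel_down_sample(voxel_size=0.05)
tgt = to_o3d(points_b).voxel_down_sample(voxel_size=0.05)

# Registration: point-to-point ICP estimates the rigid transform between frames
result = o3d.pipelines.registration.registration_icp(src, tgt, 0.1)  # 0.1 m correspondence threshold
print(result.transformation)  # 4x4 matrix; its translation should approximate the shift above
```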
Added RGB Point Cloud Visualizer with depth information here.
Created a host-based example for RGB alignment using Gen2. Code can be found here. P.S.: Follow the instructions in the example carefully on how to modify the calibration information manually, since the calibration API is not available in Gen2 yet.
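For reference, a minimal colored-point-cloud sketch with Open3D (not the linked visualizer or example; the intrinsics and frames below are synthetic placeholders for the device's calibrated, RGB-aligned output):

```python
import numpy as np
import open3d as o3d

# Hypothetical intrinsics; replace with the device calibration
w, h = 1280, 720
fx = fy = 860.0
cx, cy = w / 2.0, h / 2.0

# Synthetic RGB frame already aligned to the depth map, and depth in millimeters
color = np.zeros((h, w, 3), dtype=np.uint8)
color[..., 1] = 180
depth_mm = np.full((h, w), 1500, dtype=np.uint16)

rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(color),
    o3d.geometry.Image(depth_mm),
    depth_scale=1000.0,              # millimeters -> meters
    depth_trunc=4.0,                 # drop points farther than 4 m
    convert_rgb_to_intensity=False)  # keep RGB colors

intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])  # opens an interactive viewer window
```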