RGB Alignment with Depth
Start with the why:
In cases where color is required for neural inference to work properly -and- the inference results need to be perfectly aligned with the depth data, the RGB and depth streams must be aligned with each other.
A canonical example is performing semantic segmentation of color defects and needing to know their physical location. In this case, color-based neural inference is needed (per-pixel, since the network is a semantic segmenter), and the depth information needs to be aligned with it per-pixel.
Move to the how:
The Myriad X already has the capability to perform the transform that aligns one camera's output to another. What is needed is a system for performing the calibration that determines this transform matrix, including at the differing resolutions of the color camera and the grayscale cameras (which are the source of the depth map).
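Conceptually, this transform is a reprojection: each depth pixel is unprojected into 3D with the right camera's intrinsics, moved into the RGB camera's coordinate frame with the calibrated rotation and translation, and projected back with the RGB intrinsics. The NumPy sketch below illustrates that math; the function name `align_depth_to_rgb` and the nearest-pixel scatter (no occlusion handling) are illustrative simplifications, not the on-device implementation:

```python
import numpy as np

def align_depth_to_rgb(depth, K_right, K_rgb, R, t, rgb_shape):
    """Reproject a depth map from the right-camera frame into the RGB frame.

    depth:     HxW depth map (same units as t, e.g. millimeters)
    K_right:   3x3 intrinsics of the right grayscale camera
    K_rgb:     3x3 intrinsics of the RGB camera
    R, t:      rotation (3x3) and translation (3,) from right camera to RGB camera
    rgb_shape: (height, width) of the RGB image
    """
    h, w = depth.shape
    # Homogeneous pixel grid of the depth map
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN

    # Unproject each pixel to a 3D point in the right-camera frame
    d = depth.reshape(1, -1).astype(np.float64)
    pts = (np.linalg.inv(K_right) @ pix) * d

    # Move into the RGB-camera frame, then project with the RGB intrinsics
    proj = K_rgb @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    valid = (z > 0) & (d[0] > 0)

    u2 = np.round(proj[0, valid] / z[valid]).astype(int)
    v2 = np.round(proj[1, valid] / z[valid]).astype(int)

    # Scatter depths into an RGB-sized map (nearest pixel, no occlusion handling)
    out = np.zeros(rgb_shape, dtype=np.float64)
    inb = (u2 >= 0) & (u2 < rgb_shape[1]) & (v2 >= 0) & (v2 < rgb_shape[0])
    out[v2[inb], u2[inb]] = z[valid][inb]
    return out
```

Note that the depth values must share units with the translation vector, and the intrinsics must match the resolution each frame was captured at, which is exactly why calibration at differing resolutions matters here.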
Move to the what:
Provide a system for DepthAI users to calibrate the RGB camera against the right grayscale camera, and a mechanism to apply this alignment.
And for units with onboard cameras, improve our calibration system to perform this RGB-right calibration in the factory for all future production.
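If such a factory RGB-right calibration is stored on the device, users would be able to read it back through the depthai-python calibration API. A minimal sketch, assuming the 2.x `readCalibration()` interface; the socket names and resolution here are illustrative:

```python
import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()

    # RGB intrinsics, scaled to the resolution the pipeline will run at
    K_rgb = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 1920, 1080)

    # 4x4 transform from the right mono camera to the RGB camera
    right_to_rgb = calib.getCameraExtrinsics(
        dai.CameraBoardSocket.RIGHT, dai.CameraBoardSocket.RGB
    )
```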
An initial version of this was just released in https://github.com/luxonis/depthai-python and https://github.com/luxonis/depthai-core (2.4+).
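A minimal sketch of enabling the released alignment in depthai-python: the key call is `StereoDepth.setDepthAlign`, which registers the depth output to the RGB camera's viewpoint (the stream name and queue sizes here are arbitrary):

```python
import depthai as dai

pipeline = dai.Pipeline()

mono_left = pipeline.createMonoCamera()
mono_right = pipeline.createMonoCamera()
stereo = pipeline.createStereoDepth()
xout = pipeline.createXLinkOut()

mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Align the depth output to the RGB camera's perspective,
# using the on-device RGB-right calibration
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    depth_frame = q.get().getFrame()  # depth map registered to the RGB view
```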
Thank You ☺️