Extremely confusing API
- What is the difference between `createMonoCamera` and `createColorCamera`? Aren't they both referring to the same monocular cameras on the left and the right? If it's just the grayscale output that makes the difference, having a separate name such as `MonoCamera` is confusing, because the camera remains the same and only the frames are converted from RGB to grayscale.
- The IDE warns that it "cannot find reference to `MonoCameraProperties` in `depthai.py`". This makes it hard to navigate the API to discover the available settings, so I have to search the online documentation for every setting that depthai supports. This makes development hard.
- The code is too verbose. For instance, `depth_preview.py` has the following code:
```python
# Define a source - two mono (grayscale) cameras
left = pipeline.createMonoCamera()
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right = pipeline.createMonoCamera()
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
# Create a node that will produce the depth map
# (using disparity output as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)
depth.setOutputDepth(False)
```
Wouldn't it be easier to create a depth node with a single line and retrieve the left and right frames as follows?

```python
cam.retrieve_image(left_matrix, View.LEFT)    # left RGB
cam.retrieve_image(right_matrix, View.RIGHT)  # right RGB
cam.retrieve_depth()                          # 16-bit depth image
```
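In the meantime, the repetition in the verbose snippet can be factored out on the user's side without any library change. This is only a sketch: `create_mono` is a hypothetical helper of our own, not part of depthai, and it assumes the `pipeline`/`dai` objects shown in the snippet above.

```python
def create_mono(pipeline, socket, resolution):
    """Create and configure a MonoCamera node in one call.

    `pipeline`, `socket`, and `resolution` are the same objects the
    verbose snippet passes in (e.g. dai.CameraBoardSocket.LEFT and
    dai.MonoCameraProperties.SensorResolution.THE_400_P).
    """
    cam = pipeline.createMonoCamera()
    cam.setResolution(resolution)
    cam.setBoardSocket(socket)
    return cam

# Usage (assuming the depthai names from the snippet above):
# left  = create_mono(pipeline, dai.CameraBoardSocket.LEFT,
#                     dai.MonoCameraProperties.SensorResolution.THE_400_P)
# right = create_mono(pipeline, dai.CameraBoardSocket.RIGHT,
#                     dai.MonoCameraProperties.SensorResolution.THE_400_P)
```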
- Please add a simple example that retrieves 1. the left RGB frame, 2. the right RGB frame, 3. the depth image, and 4. the distance at a mouse click. I see an example with Open3D, but having too many dependencies makes it hard to integrate.
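On the "distance at a mouse click" point, the per-pixel math itself needs no extra dependencies: given camera intrinsics (focal lengths `fx`, `fy` and principal point `cx`, `cy`), a depth value at a clicked pixel back-projects to a metric XYZ point with the standard pinhole model. A sketch — the intrinsic values below are made-up placeholders, not real OAK calibration data:

```python
import math

def pixel_to_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth `depth_mm` to camera-space XYZ (mm)."""
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

def distance_mm(u, v, depth_mm, fx, fy, cx, cy):
    """Euclidean distance from the camera center to the clicked point (mm)."""
    x, y, z = pixel_to_xyz(u, v, depth_mm, fx, fy, cx, cy)
    return math.sqrt(x * x + y * y + z * z)

# Placeholder intrinsics for a 640x400 image (NOT real calibration values):
FX = FY = 450.0
CX, CY = 320.0, 200.0

# A click exactly at the principal point just returns the depth itself:
# distance_mm(320, 200, 1000, FX, FY, CX, CY) -> 1000.0
```

In practice the mouse coordinates would come from something like an OpenCV mouse callback, and the real intrinsics from the device's calibration.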
Issue Analytics
- State:
- Created 2 years ago
- Comments: 10 (2 by maintainers)
Hello @Zumbalamambo, I'm currently working on exactly this for another project. I will make a demo that shows how to achieve this and add it to `depthai-experiments`, hopefully this week. I will circle back here when I push it :) The code that converts an ROI to XYZ positions (you could just provide a single pixel) on the host is here; note that the color and depth frames are aligned.
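The idea behind such host-side ROI-to-XYZ code can be sketched as follows. This mirrors the general approach (ignore invalid zero depths, take a robust depth estimate over the region, back-project the region center with pinhole intrinsics), not the exact implementation in `depthai-experiments`; all names here are our own.

```python
from statistics import median

def roi_to_xyz(depth_frame, roi, fx, fy, cx, cy):
    """Estimate the camera-space XYZ position (mm) of a region of interest.

    `depth_frame` is a 2D array of per-pixel depths in mm (aligned with
    the color frame); `roi` is (xmin, ymin, xmax, ymax) in pixels.
    Zero depths are treated as invalid and ignored; the median of the
    remaining values gives a robust depth, which is back-projected at
    the ROI center. Returns None if the ROI has no valid depth.
    """
    xmin, ymin, xmax, ymax = roi
    depths = [depth_frame[y][x]
              for y in range(ymin, ymax)
              for x in range(xmin, xmax)
              if depth_frame[y][x] > 0]
    if not depths:
        return None
    z = median(depths)
    u, v = (xmin + xmax) / 2, (ymin + ymax) / 2
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

The median makes the estimate resilient to the speckle and holes typical of stereo depth maps, compared to averaging raw values.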
Thanks, Erik
Hello @Zumbalamambo, which `depthai` package are you using? Did you follow the installation instructions? If so, could you please provide the error that is returned and the script you are using? We would be more than happy to take a look.

It is not merged into main yet, because it needs the calibration to be read from gen1, manually stored on the host, and fed to the device. If that is not what you are looking for, please correct me and I can add it to my TODO of experiments/demos to make :) And apologies for the late reply!