Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Why is depth output not always available in my pipeline?

See original GitHub issue

I am currently feeding saved left- and right-channel videos into the pipeline to produce depth output, but it doesn’t work reliably. For example, it may produce no depth images for quite a long period, then work quite well, and it may or may not drop some frames in between. If I use the blocking get() call, it gets stuck quite quickly. With tryGet() it keeps running, but the output is so jumpy that it isn’t usable. How can I go about troubleshooting this problem? My code:

#!/usr/bin/env python3

from pathlib import Path
import sys
import cv2
import depthai as dai
import numpy as np
import time
import datetime
import os

# Get argument first
monoLPath = str(Path("./mono1_1.mp4").resolve().absolute())
monoRPath = str(Path("./mono2_1.mp4").resolve().absolute())

if len(sys.argv) > 2:
    monoLPath = sys.argv[1]
    monoRPath = sys.argv[2]

# Start defining a pipeline
pipeline = dai.Pipeline()

# Create xLink input to which host will send frames from the video file
xinLFrame = pipeline.createXLinkIn()
xinLFrame.setStreamName("inLeftFrame")

xinRFrame = pipeline.createXLinkIn()
xinRFrame.setStreamName("inRightFrame")

# Create a node that will produce the depth map
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)
depth.setOutputDepth(True)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_3x3  # For depth filtering
depth.setMedianFilter(median)
depth.setInputResolution(1280, 720)

'''
If one or more of the additional depth modes (lrcheck, extended, subpixel)
are enabled, then:
 - depth output is FP16. TODO enable U16.
 - median filtering is disabled on device. TODO enable.
 - with subpixel, either depth or disparity has valid data.
Otherwise, depth output is U16 (mm) and median is functional.
But like on Gen1, either depth or disparity has valid data. TODO enable both.
'''
# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)

xinLFrame.out.link(depth.left)
xinRFrame.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("depth")
depth.depth.link(xout.input)

startTime = time.monotonic()

# Pipeline is defined, now we can connect to the device
with dai.Device(pipeline) as device:
    # Start pipeline
    device.startPipeline()

    # Input queue will be used to send video frames to the device.
    qInL = device.getInputQueue(name="inLeftFrame")
    qInR = device.getInputQueue(name="inRightFrame")

    # Output queue will be used to get the disparity frames from the outputs defined above
    q = device.getOutputQueue(name="depth", maxSize=10, blocking=False)

    # Resize and convert from interleaved (HWC) to planar (CHW) layout for ImgFrame
    def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
        return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

    capL = cv2.VideoCapture(monoLPath)
    capR = cv2.VideoCapture(monoRPath)

    timestamp_ms = 0
    frame_interval_ms = 33
    count = 0
    countFPS = 0
    fps = 0.0  # shown in the overlay; initialized so the first frames don't raise a NameError
    color = (255, 0, 0)

    while capL.isOpened():
        read_L_correctly, frameL = capL.read()
        if not read_L_correctly:
            break

        count += 1  # advance the frame counter (one per video frame)
        countFPS += 1
        current_time = time.monotonic()
        if (current_time - startTime) > 1:
            fps = countFPS / (current_time - startTime)
            countFPS = 0  # reset the per-interval frame counter
            startTime = current_time

        # capL.set(1, count)  # skip to the next

        if capR.isOpened():
            read_R_correctly, frameR = capR.read()
            if not read_R_correctly:
                break

            # capR.set(1, count)

            # Timestamp both frames identically so the StereoDepth node can
            # match up the left/right pair on the device
            tstamp = datetime.timedelta(seconds=timestamp_ms // 1000,
                                        milliseconds=timestamp_ms % 1000)
            imgL = dai.ImgFrame()
            imgL.setData(to_planar(frameL, (1280, 720)))
            imgL.setTimestamp(tstamp)
            imgL.setWidth(1280)
            imgL.setHeight(720)
            qInL.send(imgL)
            if timestamp_ms == 0:  # Send twice for first iteration
                qInL.send(imgL)

            imgR = dai.ImgFrame()
            imgR.setData(to_planar(frameR, (1280, 720)))
            imgR.setTimestamp(tstamp)
            imgR.setWidth(1280)
            imgR.setHeight(720)
            qInR.send(imgR)
            # if timestamp_ms == 0:  # Send twice for first iteration
            #    qInR.send(imgR)

            print("Sent frames.", 'timestamp_ms:', timestamp_ms)

            timestamp_ms += frame_interval_ms

            # Optional delay between iterations (host-driven pipeline)
            time.sleep(frame_interval_ms / 1000)

            inDepth = q.tryGet()  # non-blocking; returns None if no new frame has arrived yet
            if inDepth is None:
                print("  no depth output")
                continue

            frame0 = inDepth.getFrame()
            frame = cv2.normalize(frame0, None, 0, 255, cv2.NORM_MINMAX)
            frame = np.uint8(frame)

            # Available color maps: https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html
            frame = cv2.applyColorMap(frame, cv2.COLORMAP_JET)

            cv2.putText(frame, "Fps: {:.2f}".format(
                fps), (2, frame.shape[0] - 4), cv2.FONT_HERSHEY_TRIPLEX, 0.4, color)

            # frame is ready to be shown
            cv2.imshow("depth", frame)
            # cv2.imshow("left", frameL)

            key = cv2.waitKey(1)
            if key == ord('q'):
                break
            elif key == ord('p'):
                # save to files
                cv2.imwrite('depth3x3_{:d}.png'.format(count), frame)
                with open('depth3x3_{:d}.npy'.format(count), 'wb') as f:
                    np.save(f, frame0)

    capL.release()
    capR.release()
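The stalls and jumpiness described in the question come from the two extremes of the queue API: get() blocks indefinitely if the device never produces a frame, while tryGet() returns None immediately whenever nothing is waiting. A middle ground is to poll tryGet() against a bounded deadline. The sketch below is illustrative rather than from the thread: wait_for_frame is a hypothetical helper (not part of the depthai API), and the 0.5 s budget is an assumption; only the documented tryGet() call is used against the queue.

import time

def wait_for_frame(queue, timeout_s=0.5, poll_s=0.005):
    # Hypothetical helper: poll a DepthAI output queue until a packet arrives
    # or the deadline passes. Returns the packet, or None on timeout.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        packet = queue.tryGet()  # non-blocking; None if no new data yet
        if packet is not None:
            return packet
        time.sleep(poll_s)
    return None

Replacing inDepth = q.tryGet() in the loop above with inDepth = wait_for_frame(q) keeps the display from printing "no depth output" on every transient hiccup, while still bailing out if the device genuinely stops producing depth.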

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 18

Top GitHub Comments

1 reaction
szabi-luxonis commented, Jul 13, 2021

@laorient Sorry for the delay on this issue. You can check out this branch/codebase; it supports recording and replaying frames, which should fit your needs. As far as I understand, it was fairly well tested and used by a customer.

https://github.com/luxonis/depthai-experiments/tree/replay_memory_fix/gen2-replay

1 reaction
laorient commented, Jun 27, 2021

How does it look coming from the cameras directly?

I used one of the sample apps (I forget which one) that captures both channels and saves them to files. I checked the length and frame count of both and they are equal, and I opened them individually and verified they are valid.
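For anyone reproducing that check, here is a quick way to compare the two recordings using standard OpenCV properties (a minimal sketch; the file names are the ones from the question):

import cv2

# Both recordings should report the same frame count and frame rate; otherwise
# the stereo pair drifts out of sync during replay.
for path in ("mono1_1.mp4", "mono2_1.mp4"):
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        print(f"{path}: failed to open")
        continue
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    print(f"{path}: {frames} frames at {fps:.2f} fps")
    cap.release()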


Top Results From Across the Web

Capturing Photos with Depth | Apple Developer Documentation
Enabling depth capture on a dual camera locks the zoom factor of both the wide and telephoto cameras. Choose Settings. Once your photo...

Pipelines - Hugging Face
This depth estimation pipeline can currently be loaded from pipeline() using the following task identifier: "depth-estimation". See the list of available...

Device - DepthAI documentation - Luxonis
On all of our devices there's a powerful Robotics Vision Core (RVC). ... tryGet() for non-blocking # Send a message to the device...

Depth Test - OpenGL Wiki
The fragment's output depth value may be tested against the depth of the ... no depth buffer, then the depth test behaves as...

Customize pipeline configuration - GitLab Docs
On the left sidebar, select Settings > CI/CD. Expand General pipelines. In the CI/CD configuration file field, enter the filename. If the file...
