
Image visualization while acquiring


I need to reconstruct precisely the time point corresponding to each individual frame acquired in a movie. To do this, I record the timestamps as shown in one of your samples (grabchunkimage.py). While acquiring, I also want to display a small fraction of the frames (say, one every 100 ms) to see what is happening in real time. However, since I read the frames from the buffer one after the other, and the buffer fills faster than it can be read out, the displayed images belong to an outdated time point in the past. Is there a way to grab the latest image for display without removing it from the buffer, so that I can still save all images?

I hope that the question is clear. Here is my current program:

from pypylon import pylon
from pypylon import genicam
import os
import cv2
import numpy as np
import time
from skimage.io import imsave as TiffWriter


def convert_time(t0, t):
    conversion_factor = 1e6  # camera timestamps are in ns; convert to ms
    tout = round((t - t0) / conversion_factor, 2)
    return tout


def save_tiff(name_out, movie):
    print(f'Exporting Tiff file to: {name_out}')
    frame_out = movie[:, :, 0]
    TiffWriter(name_out, frame_out, plugin='tifffile', metadata={'axes': 'CTZYX'}, bigtiff=True)
    # Note: do not name the loop variable "time" -- it would shadow the time module.
    for frame_idx in range(movie.shape[2] - 1):
        TiffWriter(name_out, movie[:, :, frame_idx + 1], plugin='tifffile', metadata={'axes': 'CTZYX'}, bigtiff=True, append=True)
    print('movie saved')


# Example of a device specific handler for image events.
class SampleImageEventHandler(pylon.ImageEventHandler):
    def OnImageGrabbed(self, camera, grabResult):
        # The chunk data is attached to the grab result and can be accessed anywhere.

        # Native parameter access:
        # When using the device specific grab results the chunk data can be accessed
        # via the members of the grab result data.
        if genicam.IsReadable(grabResult.ChunkTimestamp):
            print("OnImageGrabbed: TimeStamp (Result) accessed via result member: ", grabResult.ChunkTimestamp.Value)


# Number of images to be grabbed.
countOfImagesToGrab = 150
Time_frames = np.zeros(countOfImagesToGrab, dtype=float)  # np.float was removed in NumPy 1.24
#### REPLACE PATH FOR YOUR CASE #########
path_root = 'C:\\Users\\atreides\\Desktop\\Pylon'
log = open(os.path.join(path_root, 'TimeStamp.txt'), 'w')
img2 = pylon.PylonImage()  # holder for the image
# The JPEG format that is used here supports adjusting the image
# quality (100 -> best quality, 0 -> poor quality).
ipo = pylon.ImagePersistenceOptions()
quality = 100
ipo.SetQuality(quality)

# The exit code of the sample application.
exitCode = 0

try:
    # Only look for cameras supported by Camera_t
    info = pylon.DeviceInfo()
    info.SetDeviceClass("BaslerUsb")

    # Create an instant camera object with the first found camera device that matches the specified device class.
    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice(info))
    di = camera.GetDeviceInfo()
    print(di.GetDeviceClass())

    # Print the model name of the camera.
    print("Using device ", camera.GetDeviceInfo().GetModelName())

    # Register an image event handler that accesses the chunk data.
    #camera.RegisterImageEventHandler(SampleImageEventHandler(), pylon.RegistrationMode_Append, pylon.Cleanup_Delete)

    # Open the camera.
    camera.Open()
    camera.MaxNumBuffer = 150
    camera.Gain = 15
    camera.AcquisitionFrameRateEnable = True
    camera.AcquisitionFrameRate = 50
    camera.ExposureTime = 12000
    Width = camera.Width.Value
    Height = camera.Height.Value
    stack = np.zeros((Height, Width, countOfImagesToGrab), dtype='uint16')
    # A GenICam node map is required for accessing chunk data. That's why a small node map is required for each grab result.
    # Creating a node map can be time consuming, because node maps are created by parsing an XML description file.
    # The node maps are usually created dynamically when StartGrabbing() is called.
    # To avoid a delay caused by node map creation in StartGrabbing() you have the option to create
    # a static pool of node maps once before grabbing.
    camera.StaticChunkNodeMapPoolSize = camera.MaxNumBuffer.GetValue()

    # Enable chunks in general.
    if genicam.IsWritable(camera.ChunkModeActive):
        camera.ChunkModeActive = True
    else:
        raise pylon.RUNTIME_EXCEPTION("The camera doesn't support chunk features")

    # Enable time stamp chunks.
    camera.ChunkSelector = "Timestamp"
    camera.ChunkEnable = True

    # Enable CRC checksum chunks.
    camera.ChunkSelector = "PayloadCRC16"
    camera.ChunkEnable = True

    # Start the grabbing of countOfImagesToGrab images.
    # The camera device is parameterized with a default configuration which
    # sets up free-running continuous acquisition.

    counter = 0
    t = time.time()
    cv2.namedWindow('Live Image', cv2.WINDOW_NORMAL)  # used to bring to frontmost cv2
    scale = 0.5
    dim = (round(Width*scale), round(Height*scale))
    # Camera.StopGrabbing() is called automatically by the RetrieveResult() method
    # when countOfImagesToGrab images have been retrieved.
    camera.StartGrabbingMax(countOfImagesToGrab)
    while camera.IsGrabbing():
        counter += 1
        print('Frame #: ' + str(counter))

        # Wait for an image and then retrieve it. A timeout of 50 ms is used.
        # RetrieveResult calls the image event handler's OnImageGrabbed method.
        grabResult = camera.RetrieveResult(50, pylon.TimeoutHandling_ThrowException)
        img2.AttachGrabResultBuffer(grabResult)
        Frame_data = img2.GetArray()
        stack[:, :, counter-1] = Frame_data

        elapsed = time.time() - t
        if elapsed >= 0.1:
            t = time.time()
            grabResult = camera.RetrieveResult(50, pylon.TimeoutHandling_ThrowException)
            Frame_display = grabResult.GetArray()
            resized_image = cv2.resize(Frame_display, dim, interpolation=cv2.INTER_AREA)
            cv2.imshow('Live Image', cv2.equalizeHist(resized_image))
            cv2.waitKey(1)
        
        # The result data is automatically filled with received chunk data.
        # (Note:  This is not the case when using the low-level API)

        # Check to see if a buffer containing chunk data has been received.
        if pylon.PayloadType_ChunkData != grabResult.PayloadType:
            raise pylon.RUNTIME_EXCEPTION("Unexpected payload type received.")

        # Since we have activated the CRC Checksum feature, we can check
        # the integrity of the buffer first.
        # Note: Enabling the CRC Checksum feature is not a prerequisite for using
        # chunks. Chunks can also be handled when the CRC Checksum feature is deactivated.
        #if grabResult.HasCRC() and not grabResult.CheckCRC():
        #    raise pylon.RUNTIME_EXCEPTION("Image was damaged!")

        # Access the chunk data attached to the result.
        # Before accessing the chunk data, you should check to see
        # if the chunk is readable. When it is readable, the buffer
        # contains the requested chunk data.
        if genicam.IsReadable(grabResult.ChunkTimestamp):
            if counter == 1:
                t0 = grabResult.ChunkTimestamp.Value
            time_in_msec = convert_time(t0, grabResult.ChunkTimestamp.Value)
            print("TimeStamp (Result): ", time_in_msec)
            Time_frames[counter-1] = time_in_msec
            print("TimeStamp (Result): ", grabResult.ChunkTimestamp.Value)

        img2.Release()


    # Disable chunk mode.
    camera.ChunkModeActive = False
    camera.Close()
    for time_point in Time_frames:
        log.write(str(time_point) + '\n')
    log.close()
    largest_time_jump = np.max(Time_frames[1:] - Time_frames[0:-1])
    movie_name = os.path.join(path_root, "Recording.tif")
    save_tiff(movie_name, stack)
    cv2.destroyAllWindows()
except genicam.GenericException as e:
    # Error handling.
    print("An exception occurred.", e.GetDescription())
    exitCode = 1
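One way to sidestep the lagging display without touching the save path is to reuse the frame that was just retrieved for saving, instead of calling RetrieveResult a second time (which pops another, older frame off the queue). Here is a minimal sketch of that pattern, with the camera and display calls replaced by stand-ins so it runs standalone (`fake_grab` is a hypothetical placeholder for `camera.RetrieveResult(...).GetArray()`):

```python
import time
import numpy as np

def fake_grab(i, h=4, w=4):
    # Hypothetical stand-in for camera.RetrieveResult(...).GetArray()
    return np.full((h, w), i, dtype=np.uint16)

count = 10
stack = np.zeros((count, 4, 4), dtype=np.uint16)
displayed = []
t = time.monotonic()
for i in range(count):
    frame = fake_grab(i)
    stack[i] = frame  # every frame is saved, none are skipped
    if time.monotonic() - t >= 0.005:
        t = time.monotonic()
        # Show the frame we already have instead of retrieving another one;
        # in the real program this is where cv2.imshow(...) would go.
        displayed.append(i)
    time.sleep(0.002)

assert (stack[:, 0, 0] == np.arange(count)).all()  # nothing was dropped
```

In the original loop this amounts to passing the already-attached `Frame_data` array to the display branch instead of calling `camera.RetrieveResult` again, so the preview always shows the most recently saved frame.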

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7

Top GitHub Comments

1 reaction
Micro-Sandworms commented, Dec 10, 2021

Hi Thies, you were exactly right. I implemented your suggestion, and I do not need multithreading to get a good response from the camera. Thank you.

0 reactions
thiesmoeller commented, Dec 9, 2021

From looking at your code, I think I found the root cause of your performance problems:

stack = np.zeros((Height, Width, countOfImagesToGrab), dtype='uint16')

This will result in extremely inefficient memory accesses.

The ndarray should be created with the axis that has the lowest “change rate” on the left and the axis with the highest “change rate” on the right:

(countOfImagesToGrab, Height, Width)

The layout in your code means that every write or read of an image touches pixels that are countOfImagesToGrab elements apart in memory, which defeats most hardware optimizations such as caching and prefetching.
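The difference is easy to check in NumPy itself: with the frame index last, each image is a strided view across the whole array, while with the frame index first, each image is one contiguous block of memory. A small illustration (the shapes here are arbitrary):

```python
import numpy as np

n, h, w = 50, 480, 640

# Layout from the question: frame index last -> each image is a strided view.
stack_last = np.zeros((h, w, n), dtype=np.uint16)
print(stack_last[:, :, 10].flags['C_CONTIGUOUS'])  # False: neighbouring pixels are n elements apart

# Suggested layout: frame index first -> each image is one contiguous block.
stack_first = np.zeros((n, h, w), dtype=np.uint16)
print(stack_first[10].flags['C_CONTIGUOUS'])  # True: one linear, cache-friendly write
```

With the `(countOfImagesToGrab, Height, Width)` layout, the loop body becomes `stack[counter - 1] = Frame_data`, which writes the whole frame into one contiguous region.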
