
Distinguish moving object vs stationary object

See original GitHub issue

Hi ya. Giving frigate a try and am having problems getting it up and running. I’m installing the amd64 version in Docker on my Linux server. Here’s the docker run command I’m using:

root@omv:/sharedfolders/appdata/frigate# docker run --rm --name frigate --privileged -v /dev/bus/usb:/dev/bus/usb -v /sharedfolders/appdata/frigate/:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5000:5000 -e FRIGATE_RTSP_PASSWORD='redacted' blakeblackshear/frigate:stable-amd64

and here’s my config.yml file:

# Optional: port for http server (default: shown below)
web_port: 5000

# Optional: detectors configuration
# USB Coral devices will be auto detected with CPU fallback
detectors:
  # Required: name of the detector
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below) and 'cpu'.
    type: edgetpu
    # Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
    device: usb
    
# Required: mqtt configuration
mqtt:
  # Required: host name
  host: 192.168.0.25
  topic_prefix: frigate


# Optional: Global configuration for saving clips
save_clips:
  # Optional: Maximum length of time to retain video during long events. (default: shown below)
  # NOTE: If an object is being tracked for longer than this amount of time, the cache
  #       will begin to expire and the resulting clip will be the last x seconds of the event.
  max_seconds: 300
  # Optional: Location to save event clips. (default: shown below)
  clips_dir: /config/clips
  # Optional: Location to save cache files for creating clips. (default: shown below)
  # NOTE: To reduce wear on SSDs and SD cards, use a tmpfs volume.
  cache_dir: /config/cache

# Optional: Global object filters for all cameras.
# NOTE: can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
  # Optional: filters to reduce false positives for specific object types
  filters:
    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: max_int)
      max_area: 100000
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      threshold: 0.85

# Required: configuration section for cameras
cameras:
  # Required: name of the camera
  front:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: Source passed to ffmpeg after the -i parameter.
      # NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}
      input: rtsp://admin:{FRIGATE_RTSP_PASSWORD}@192.168.0.26/livestream/11?action=play&media=video_audio_data
      # Optional: camera specific global args (default: inherit)
      global_args:
      # Optional: camera specific hwaccel args (default: inherit)
      hwaccel_args:
      # Optional: camera specific input args (default: inherit)
      input_args:
      # Optional: camera specific output args (default: inherit)
      output_args:
    
    # Optional: desired fps for your camera
    # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
    #       Frigate will attempt to autodetect if not specified.
    fps: 5


    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: camera specific mqtt settings
    mqtt:
      # Optional: crop the camera frame to the detection region of the object (default: False)
      crop_to_region: True
      # Optional: resize the image before publishing over mqtt
      snapshot_height: 300

    # Optional: Configuration for the snapshots in the debug view and mqtt
    snapshots:
      # Optional: print a timestamp on the snapshots (default: shown below)
      show_timestamp: True
      # Optional: draw zones on the debug mjpeg feed (default: shown below)
      draw_zones: False
      # Optional: draw bounding boxes on the mqtt snapshots (default: shown below)
      draw_bounding_boxes: True

    # Optional: Camera level object filters config. If defined, this is used instead of the global config.
    objects:
      track:
        - person
        - car
      filters:
        person:
          min_area: 5000
          max_area: 100000
          min_score: 0.5
          threshold: 0.85

and finally, here’s the error:

Fontconfig error: Cannot load default config file
On connect called
Traceback (most recent call last):
  File "detect_objects.py", line 441, in <module>
    main()
  File "detect_objects.py", line 202, in main
    ffmpeg_output_args = ["-r", str(config.get('fps'))] + ffmpeg_output_args
TypeError: can only concatenate list (not "NoneType") to list

I did try removing portions of the optional items in the YAML config file in hopes of getting things to work, but I was not able to. Can anyone tell me what the problem might be? I see that the error is for ‘fps’ and I do have it configured for the one camera that I’m trying. Is it not seeing this value?

Thanks for your help!
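For what it’s worth, the TypeError in the traceback is consistent with `ffmpeg_output_args` being `None` rather than a list: in YAML, a key left with no value (like the blank `output_args:` line under `ffmpeg` in the config above) parses to `None`. A minimal sketch of the failure, assuming the parsed config is a plain dict (the fallback shown is illustrative, not Frigate’s actual defaults):

```python
# Simulate the parsed YAML: a blank "output_args:" key becomes None.
config = {"fps": 5, "output_args": None}

ffmpeg_output_args = config.get("output_args")
# The failing line in detect_objects.py is effectively:
#   ["-r", str(config.get("fps"))] + ffmpeg_output_args
# which raises: TypeError: can only concatenate list (not "NoneType") to list

# Removing the blank key entirely (so the camera inherits defaults) avoids
# this; as a sketch, guarding against None with a hypothetical fallback:
default_output_args = ["-f", "rawvideo"]  # placeholder, not Frigate's real defaults
args = ["-r", str(config.get("fps"))] + (ffmpeg_output_args or default_output_args)
print(args)  # ['-r', '5', '-f', 'rawvideo']
```

In other words, deleting the empty `global_args:`/`hwaccel_args:`/`input_args:`/`output_args:` lines (instead of leaving them blank) may be enough to get past this error.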

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 13 (4 by maintainers)

Top GitHub Comments

3 reactions
blakeblackshear commented, Dec 13, 2020

Should be easy enough. I have all the information I need to distinguish a moving object from a stationary object.

2 reactions
noelhibbard commented, Jun 28, 2021

Just wanted to give a +1 on this. I have a zone for my driveway and then wrote an app that monitors the MQTT events and then sends the thumbnail to a Telegram group. I only send the Telegrams if it’s in specific zones so I don’t get constant alerts on my phone. I typically park my cars all the way up the driveway in a masked area where I exclude car detection. This works well but on occasion I park outside of the mask and on nights like that I get constant notification every time there is motion. I’d love to see someway of filtering static objects. I’m sure this is easier said than done though.
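The zone-filtered notification flow described above can be sketched as a small predicate applied to each Frigate MQTT event before forwarding it to Telegram. This is a hypothetical sketch, not the commenter’s actual app: the payload fields (`after`, `label`, `entered_zones`) follow Frigate’s event schema, but verify them against your Frigate version, and the paho-mqtt wiring in the comment is only indicative.

```python
import json

# Zones that should trigger a notification (assumption for illustration).
ALLOWED_ZONES = {"driveway"}

def should_notify(payload: str) -> bool:
    """Return True only for a car detected in one of the allowed zones."""
    event = json.loads(payload)
    after = event.get("after", {})
    in_zone = bool(set(after.get("entered_zones", [])) & ALLOWED_ZONES)
    return after.get("label") == "car" and in_zone

# Rough wiring with paho-mqtt (sketch only):
#   client.subscribe("frigate/events")
#   in on_message: if should_notify(msg.payload): send_telegram_thumbnail(...)

print(should_notify('{"after": {"label": "car", "entered_zones": ["driveway"]}}'))
```

Note this only filters by zone; it does not by itself tell a parked car from a moving one. Until Frigate distinguishes stationary objects natively, one workaround might be comparing an object’s bounding box across successive events and suppressing notifications when it hasn’t moved.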


