Processed events with no matches produce a lot of missed images
See original GitHub issue

@erikarenhill, curious to hear your thoughts on this as well.
I’ve been testing some of the recent code changes with untrained detectors. One thing I’ve noticed is that if SAVE_UNKNOWN is set to true, a lot of images are produced for each event: every retry per detector can save an image whenever a person is found. For example, if I’m running compreface and deepstack with the snapshot and latest retries both set to 10, then in theory I could end up with 40 images saved for a single event.
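To make that arithmetic concrete, here is a quick sketch; the numbers mirror the example above and the variable names are purely illustrative, not Double Take's actual configuration keys.

```ts
// Worst-case count of saved unknown images for one event.
// 2 detectors (compreface, deepstack), 10 snapshot retries, 10 latest retries.
const detectorCount = 2;      // illustrative value, not a real config key
const snapshotRetries = 10;
const latestRetries = 10;

// With SAVE_UNKNOWN enabled, every retry of every image type can save
// one image per detector when a person is found.
const worstCase = detectorCount * (snapshotRetries + latestRetries);
console.log(worstCase); // 40
```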
Maybe we could just take a couple of the best results from the unknown images and display those on the UI, instead of saving out every single one.
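A minimal sketch of that idea, assuming a hypothetical UnknownResult shape and pickBestUnknowns helper rather than Double Take's actual types or API:

```ts
// Keep only the N highest-confidence unknown results instead of saving all of them.
interface UnknownResult {
  detector: string;   // e.g. "compreface" or "deepstack"
  confidence: number; // detector-reported confidence for the face box
  filename: string;   // image that would be written to the unknown folder
}

function pickBestUnknowns(results: UnknownResult[], keep = 2): UnknownResult[] {
  return [...results]
    .sort((a, b) => b.confidence - a.confidence) // highest confidence first
    .slice(0, keep);                             // persist/display only the best few
}

// Example: only the two strongest results would be shown on the UI.
const kept = pickBestUnknowns([
  { detector: "compreface", confidence: 0.91, filename: "a.jpg" },
  { detector: "deepstack", confidence: 0.62, filename: "b.jpg" },
  { detector: "compreface", confidence: 0.45, filename: "c.jpg" },
]);
console.log(kept.map((r) => r.filename)); // ["a.jpg", "b.jpg"]
```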
Issue Analytics

- State:
- Created: 2 years ago
- Reactions: 1
- Comments: 6 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I went ahead and made the changes I mentioned above in beta. Rather than each detector pulling images from Frigate on its own, the image is pulled once and then passed to each detector. This allows me to reduce the number of images that are written. Hopefully this helps with the Frigate API issues too.
This also sets up the code a little better to handle the MQTT snapshot issue you brought up - #18
Maybe instead of having each detector pull the image, I write the image first and put that image through each detector. It could take a little longer, but it would reduce the number of images in general and the number of Frigate API hits.
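A rough sketch of the flow described in these comments: pull each image from Frigate once, write it once, then fan it out to every detector. The function names, detector interface, and file path are assumptions for illustration (not Double Take's actual code), and it presumes Node 18+ for the global fetch; the snapshot URL follows Frigate's events API.

```ts
import { promises as fs } from "fs";

// Hypothetical detector interface; real detectors (CompreFace, DeepStack)
// would call their own HTTP APIs here.
interface Detector {
  name: string;
  detect(image: Buffer): Promise<{ match: boolean; confidence: number }>;
}

// One Frigate API hit per attempt, shared by every detector.
async function fetchSnapshot(frigateUrl: string, eventId: string): Promise<Buffer> {
  const res = await fetch(`${frigateUrl}/api/events/${eventId}/snapshot.jpg`);
  return Buffer.from(await res.arrayBuffer());
}

async function processAttempt(frigateUrl: string, eventId: string, detectors: Detector[]) {
  const image = await fetchSnapshot(frigateUrl, eventId);

  // Write the image a single time instead of once per detector.
  await fs.writeFile(`/tmp/${eventId}.jpg`, image);

  // Every detector analyzes the same buffer.
  return Promise.all(
    detectors.map(async (d) => ({ detector: d.name, ...(await d.detect(image)) })),
  );
}
```

Because the fetch and the write happen once per attempt, the number of Frigate API hits and written files scales with the retry count alone rather than retries × detectors.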