No video output for custom classes.
Hello, I am trying to use FastMOT to detect custom classes, but there is no detection or tracking output when I run the app.py script. It simply gives me the same input video, resized, with the text “visible: 0” in the top left corner.
I run:
python3 app.py --input_uri custom/v1_second_tree.mp4 --mot --output_uri results.mp4
And get output:

I’ve read issue #30, and I have Driver Version: 471.11 (on Ubuntu 20.04). I’ve also tried setting “computes=52” in the Makefile, since I am using a GTX 970, which has a compute capability of 5.2.
It is important to point out that when I run "docker run … ", a message appears:

I think the GPU is available inside the Docker container, because when I execute “nvidia-smi” and “/usr/local/cuda/bin/nvcc --version”, they output this:
Also, when I run app.py, I can see that GPU resources are being used and the GPU temperature rising.
Regarding the setup to track custom classes, I simply followed the guide:
- Successfully trained YOLOv4-p5 using the AlexeyAB darknet framework;
- Converted the trained model to .onnx using the provided script “/scripts/yolo2onnx.py”;
- Disabled fast-reid (according to #35 (comment));
- Subclassed YOLO for my custom model (see the sketch after this list);

- Changed the class labels to the single class I want to detect (see the sketch after this list);

- And changed mot.json accordingly (see the sketch after this list).

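For anyone following the same steps, here is roughly what the YOLO subclass change looks like. The paths, input size, and anchors below are placeholders rather than my exact values, and the attribute names follow the FastMOT README, so double-check them against your version of the code:

```python
# In fastmot/models/yolo.py -- illustrative values only, not my exact model.
from pathlib import Path

class YOLOv4Custom(YOLO):  # hypothetical name; any subclass of YOLO works
    ENGINE_PATH = Path(__file__).parent / 'custom.trt'   # TensorRT engine, built on first run
    MODEL_PATH = Path(__file__).parent / 'custom.onnx'   # the converted ONNX model
    NUM_CLASSES = 1                                       # single custom class
    INPUT_SHAPE = (3, 512, 512)                           # must match the training input size
    LAYER_FACTORS = [8, 16, 32]                           # output stride of each YOLO layer
    SCALES = [1.2, 1.1, 1.05]                             # scale_x_y values from the .cfg
    ANCHORS = [[11, 22, 24, 60, 37, 90],                  # anchors from the .cfg,
               [54, 146, 77, 163, 122, 142],              # grouped per YOLO layer
               [139, 300, 201, 212, 265, 259]]
```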
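The label change is just as small. Assuming the label map in fastmot/models/label.py is a plain tuple (the variable name may differ in your version), it becomes something like:

```python
# In fastmot/models/label.py -- replace the COCO labels with the custom class.
LABEL_MAP = (
    'tree',   # placeholder name for the single custom class
)
```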
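And for mot.json, the relevant part is pointing the detector at the single class index. The key names below are from my config and may vary slightly between FastMOT versions, so treat this as illustrative:

```json
{
    "mot": {
        "detector_type": "YOLO",
        "yolo_detector": {
            "class_ids": [0]
        }
    }
}
```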
Any idea what might be happening? Thank you so much again for your project and kind support.
Issue Analytics
- State:
- Created 2 years ago
- Comments: 16 (8 by maintainers)

This happens for all scaled YOLOv4 models, because these are usually trained with letterbox preprocessing. Will push a fix today.
Should be fixed now.
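For context on the letterbox point: letterbox preprocessing scales the image to fit the network input while keeping the aspect ratio, then pads the remainder, so a model trained that way sees different geometry than a plain resize. A minimal illustration of the idea (not FastMOT’s actual preprocessing code):

```python
import cv2
import numpy as np

def letterbox(img, new_shape=(512, 512), pad_value=114):
    """Resize while keeping aspect ratio, then pad to new_shape (h, w)."""
    h, w = img.shape[:2]
    new_h, new_w = new_shape
    scale = min(new_w / w, new_h / h)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    # Fill the canvas with a constant gray value, then paste the resized image centered.
    canvas = np.full((new_h, new_w, 3), pad_value, dtype=img.dtype)
    top = (new_h - resized.shape[0]) // 2
    left = (new_w - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas
```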
For your question, you need to count confirmed tracks. You can count the unique IDs in mot.visible_tracks at every frame and accumulate them with a data structure like a set. Or you can just output a MOT Challenge log with the -l option and count the number of unique IDs in the log.
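A rough sketch of the first approach; the attribute name trk_id and the exact shape of the frame loop are assumptions based on how app.py drives the tracker, not quoted from the project, so adapt them to your version:

```python
# Accumulate unique confirmed-track IDs over the whole video.
# `stream` and `mot` are set up the same way app.py does; `trk_id` is the
# assumed track ID attribute and may be named differently in your version.
unique_ids = set()
while True:
    frame = stream.read()
    if frame is None:
        break
    mot.step(frame)
    # visible_tracks holds the confirmed tracks for the current frame
    unique_ids.update(track.trk_id for track in mot.visible_tracks)

print('unique objects seen:', len(unique_ids))
```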