Evaluation on coco2017 (5000 images) is extremely slow
❓ Questions and Help
Hi, I found that evaluation on coco2017 with 5000 images is extremely slow.
I haven’t finished the evaluation process yet, but it seems that this would take about 3 hours to complete.
This is the command I used on 1 GPU:
python tools/test_net.py --config-file 'configs/e2e_mask_rcnn_R_50_FPN_1x.yaml' TEST.IMS_PER_BATCH 4
I also found that the GPU usage is zero, which is quite weird.
I did not change the parameter MODEL.ROI_HEADS.DETECTIONS_PER_IMG. Could you help me figure it out?
Top GitHub Comments
If you are using 'configs/e2e_mask_rcnn_R_50_FPN_1x.yaml' for testing without specifying a new MODEL.WEIGHT, you are running inference with an untrained model, so you will get very bad detection results. It will also be very slow, because there is some post-processing after the CNN inference, for example thresholding to filter out low-confidence predictions; since you did not train the model, the thresholding does not work properly.
If you just want to test detection, you can use the scripts in config/caffe2; in that case the program will automatically download the trained detection model from the Facebook server and run the test. Otherwise, you need to train the model first, then use your trained model for inference by adding MODEL.WEIGHT YOURMODEL to your testing command.
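For reference, a minimal sketch of the adjusted test command, assuming a hypothetical trained checkpoint at output/model_final.pth (the path is an example, not part of the original discussion):
python tools/test_net.py --config-file 'configs/e2e_mask_rcnn_R_50_FPN_1x.yaml' MODEL.WEIGHT output/model_final.pth TEST.IMS_PER_BATCH 4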
I have been using WEIGHT: "https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_R_50_FPN_1x.pth" as my weights, and it takes less than 20 minutes to run inference on all 5000 images. I don't have the exact time because I am running on AzureML and have to build the libraries each time I run my tests. I am also running on a single GPU with TEST.IMS_PER_BATCH: 10. I hope this helps.
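Equivalently, the released weights can be referenced from the config YAML instead of the command line; a minimal sketch, assuming the usual maskrcnn-benchmark config layout with WEIGHT under MODEL and IMS_PER_BATCH under TEST:
MODEL:
  WEIGHT: "https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_R_50_FPN_1x.pth"
TEST:
  IMS_PER_BATCH: 10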