Why is inference slower than Detectron?
❓ Questions and Help
According to the inference speeds reported by maskrcnn-benchmark and Detectron, Mask R-CNN with an R-101-FPN backbone is about 25% slower in maskrcnn-benchmark than in Detectron (0.15384 s vs. 0.119 s per image). Moreover, the V100 is supposed to be about 20% faster than the P100, and according to the results reported in the TensorMask paper, Mask R-CNN with an R-101-FPN backbone runs at 90 ms on a V100.
What may be the reason?
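For anyone comparing numbers like these themselves, below is a minimal sketch of how per-image GPU latency is typically measured in PyTorch, with warmup and explicit CUDA synchronization (kernel launches are asynchronous, so naive timing looks much faster than reality). It uses torchvision's Mask R-CNN (R-50-FPN; torchvision does not ship the R-101-FPN variant discussed here) purely as a stand-in, and the input size and iteration counts are arbitrary assumptions, not the harness either repo actually uses:

```python
import time
import torch
import torchvision

# Stand-in model (assumption: torchvision's R-50-FPN variant);
# trained weights are irrelevant for timing.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
model.eval().cuda()

# Detection models take a list of 3xHxW float tensors in [0, 1].
images = [torch.rand(3, 800, 1333, device="cuda")]

with torch.no_grad():
    # Warmup: lets cuDNN pick algorithms and amortizes lazy CUDA init.
    for _ in range(10):
        model(images)

    torch.cuda.synchronize()  # kernels launch asynchronously; sync before timing
    start = time.perf_counter()
    n_iters = 50
    for _ in range(n_iters):
        model(images)
    torch.cuda.synchronize()  # wait for the last kernels to finish
    elapsed = time.perf_counter() - start

print(f"{elapsed / n_iters * 1000:.1f} ms / image")
```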
Issue Analytics
- Created: 4 years ago
- Comments: 7 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@fmassa, I think the inference times in MODEL_ZOO are not accurate either. The current speed of maskrcnn-benchmark is actually 15-20% faster than it was. I think it is related to this update: https://github.com/pytorch/pytorch/pull/13420. For example, the …
@chengyangfu Definitely, the indexing speedup brings a noticeable improvement to inference, and a bit to testing as well.
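For context on that exchange: pytorch/pytorch#13420 is the PyTorch change the commenters attribute the speedup to, and it concerns advanced (integer) tensor indexing, an operation detection heads use heavily when gathering proposals. Below is a minimal, hypothetical microbenchmark of that operation; the tensor shapes and iteration count are arbitrary choices for illustration:

```python
import time
import torch

# Gather 50k rows out of a 100k x 256 tensor by integer index,
# the kind of advanced-indexing op the linked PR sped up on CUDA.
x = torch.randn(100_000, 256, device="cuda")
idx = torch.randint(0, x.size(0), (50_000,), device="cuda")

for _ in range(10):  # warmup
    _ = x[idx]

torch.cuda.synchronize()
start = time.perf_counter()
n_iters = 1000
for _ in range(n_iters):
    _ = x[idx]
torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / n_iters * 1e6:.1f} us per gather")
```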