
Why is inference slower than Detectron?


❓ Questions and Help

According to the reported inference speeds of maskrcnn-benchmark and Detectron, Mask R-CNN with an R-101-FPN backbone is about 25% slower in maskrcnn-benchmark (0.15384 s vs 0.119 s per image). Moreover, the V100 is supposed to be about 20% faster than the P100, and according to the results reported in TensorMask, Mask R-CNN with an R-101-FPN backbone runs at 90 ms on a V100.

What may be the reason?
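One thing worth ruling out before comparing such numbers: GPU inference in PyTorch is asynchronous, so naive wall-clock timing can over- or under-count depending on where the clock starts and stops. Below is a minimal timing sketch (a hypothetical helper, not the repo's official benchmark script), assuming a PyTorch `model` and a preprocessed `images` input:

```python
import time
import torch

@torch.no_grad()
def time_inference(model, images, warmup=10, iters=50):
    # Hypothetical helper: returns average per-forward latency in seconds.
    model.eval()
    for _ in range(warmup):        # warm-up runs: cudnn autotuning, allocator caching
        model(images)
    torch.cuda.synchronize()       # ensure all queued kernels finish before timing starts
    start = time.time()
    for _ in range(iters):
        model(images)
    torch.cuda.synchronize()       # include the async CUDA work in the measurement
    return (time.time() - start) / iters
```

Without the `torch.cuda.synchronize()` calls, the loop only measures kernel launch time, which is one common source of inconsistent cross-framework comparisons.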

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 7 (5 by maintainers)

Top GitHub Comments

3 reactions
chengyangfu commented, Apr 6, 2019

@fmassa, I think the inference times in MODEL_ZOO are not accurate either. maskrcnn-benchmark is actually 15–20% faster now than it was, which I think is related to this update: https://github.com/pytorch/pytorch/pull/13420. For example:

Model (Det)   MODEL_ZOO number   Re-evaluated on 1080Ti
R-50-FPN      126 ms             93 ms
R-101-FPN     143 ms             116 ms
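That PR (pytorch/pytorch#13420) sped up advanced indexing on CUDA tensors, an operation Mask R-CNN's box/mask post-processing uses heavily. A quick, hypothetical micro-benchmark to see what indexing costs on a given PyTorch build (the shapes are illustrative):

```python
import time
import torch

x = torch.randn(100_000, 256, device="cuda")                 # ~100 MB of features
idx = torch.randint(0, x.size(0), (50_000,), device="cuda")  # random row indices

for _ in range(10):                # warm-up, excludes first-call overhead
    x[idx]

torch.cuda.synchronize()
start = time.time()
for _ in range(100):
    y = x[idx]                     # advanced (gather-style) indexing on the GPU
torch.cuda.synchronize()
print(f"{(time.time() - start) / 100 * 1e3:.3f} ms per indexing call")
```

Running this on PyTorch builds from before and after the PR would show whether the model-zoo numbers predate the speedup.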
0 reactions
fmassa commented, Apr 6, 2019

@chengyangfu definitely, the indexing speedup improves inference quite a bit, and testing a bit as well.


Top Results From Across the Web

  • Detectron2 Speed up inference instance segmentation — inference takes about 3 seconds per image on GPU; how can it be made faster?
  • How to speed up detection in Detectron2, by Anuja Ihare — one way to increase FPS is to lower the input resolution: the lower the image resolution, the faster the inference (see the config sketch after this list).
  • Why is TF significantly slower than PyTorch in inference? — PyTorch takes about 3 ms for inference whereas TF takes 120–150 ms.
  • Inference on CPU for detectron2 — a model trained on GPU with Mask R-CNN and a ResNet-101 backbone, using facebookresearch/detectron2.
  • Object Detection from 9 FPS to 650 FPS — it's not that Python is by definition much slower than C++; rather, doing inference in C++ makes it much easier to control exactly...
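To make the resolution advice from the second result concrete: in Detectron2, test-time input size is controlled by the `INPUT.MIN_SIZE_TEST` / `INPUT.MAX_SIZE_TEST` config keys (defaults 800/1333). A minimal sketch; the model choice, the 512 px value, and the `input.jpg` path are illustrative:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")

# Lower the test-time resolution: the default resizes the shorter side to 800 px;
# 512 px trades some accuracy for a noticeably faster forward pass.
cfg.INPUT.MIN_SIZE_TEST = 512
cfg.INPUT.MAX_SIZE_TEST = 853

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # BGR image, as DefaultPredictor expects
```

Lowering `MIN_SIZE_TEST` shrinks every feature map in the FPN, so the speedup compounds through the network, at the cost of some accuracy on small objects.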
