Caffe C++ demo speed is too slow
Hi, first of all, thank you for your great project.
I converted the PyTorch model to Caffe and then reimplemented it in C++. The accuracy is pretty good, but I ran into a speed problem.
```cpp
Caffe::set_mode(Caffe::GPU);
...
double t1 = static_cast<double>(cv::getTickCount());
net_->Forward();
double t2 = static_cast<double>(cv::getTickCount());
double inferenceTime = (t2 - t1) / cv::getTickFrequency() * 1000;  // ms
std::cout << "model inference time is: " << inferenceTime << " ms" << std::endl;
...
```
On my GPU (a 1080 Ti), each inference takes about 30 ms. Could you please tell me the reason or a potential fix? Thanks a lot.
Issue Analytics
- Created 4 years ago
- Comments: 8 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@Daniil-Osokin So kind of you! I finally got the Caffe model working with DepthwiseConvolution, and inference now takes about 10 ms.
By the way, you can only replace Convolution with DepthwiseConvolution when dilation=1; otherwise the result will be a little strange.
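For reference, a sketch of what such a layer swap can look like in the deploy prototxt. The layer names, blob names, and parameter values here are illustrative, and the `DepthwiseConvolution` type assumes a Caffe fork that provides that layer:

```prototxt
layer {
  name: "conv_dw"                 # illustrative name
  type: "DepthwiseConvolution"    # was: "Convolution"
  bottom: "input_blob"
  top: "conv_dw"
  convolution_param {
    num_output: 64    # depthwise: equals the number of input channels
    group: 64         # one filter group per channel
    kernel_size: 3
    stride: 1
    pad: 1
    dilation: 1       # the swap is only valid with dilation = 1
  }
}
```

The weights are unchanged by the swap; only the layer type differs, so the dedicated depthwise kernel runs instead of the generic grouped convolution.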
Great that it works!