Very Slow Inference on CPU
Hi, I found that the model is very slow on CPU when using the pretrained weights "checkpoint_iter_370000.pth". I have attached the code below. I tested several scenarios and summarize the results here:

- GPU with pretrained weights: 0.007 sec
- GPU without pretrained weights: 0.007 sec
- CPU with pretrained weights: 2.829 sec
- CPU without pretrained weights: 0.376 sec

Could you kindly explain why inference on CPU with the pretrained weights is so slow?
```python
import time

import torch

from models.with_mobilenet import PoseEstimationWithMobileNet
from modules.load_state import load_state

device = torch.device('cpu')
model_Mobilenet = PoseEstimationWithMobileNet().to(device)
checkpoint = torch.load('checkpoint_iter_370000.pth',
                        map_location=lambda storage, loc: storage)
load_state(model_Mobilenet, checkpoint)
model_Mobilenet.eval()

# Dummy batch of two 368x368 RGB images
dummy_input = torch.randn(2, 3, 368, 368).to(device)

since = time.time()
with torch.no_grad():  # no autograd bookkeeping needed for inference
    stages_output = model_Mobilenet(dummy_input)
PAF_Mobilenet, Heatmap_Mobilenet = stages_output[-1], stages_output[-2]
print('Mobilenet inference time is {:2.3f} seconds'.format(time.time() - since))
```
Issue Analytics
- State:
- Created 4 years ago
- Comments: 6 (4 by maintainers)
Daniil, yes, it is reproduced with a real image. I rounded the pretrained weight parameters to 4 decimal places, and the inference speed is back to 0.3 sec. I didn't find any accuracy drop. For your information, I'm not sure whether you can reproduce this scenario. I wonder if there is any theoretical explanation for this?
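For reference, a minimal sketch of the rounding workaround described above, assuming the checkpoint is a flat state dict of tensors (the real `checkpoint_iter_370000.pth` may be structured differently). Rounding to 4 decimals zeroes out any subnormal values (everything below ~1e-38 rounds to exactly 0), which would explain why it restores CPU speed without a measurable accuracy drop:

```python
import torch

def round_state_dict(state_dict, decimals=4):
    """Round every float tensor to the given number of decimal places.

    Any subnormal value rounds to exactly zero, removing the slow
    denormal-arithmetic path on CPU.
    """
    scale = 10.0 ** decimals
    return {
        k: torch.round(v * scale) / scale if torch.is_floating_point(v) else v
        for k, v in state_dict.items()
    }

# Synthetic example: the subnormal weight becomes exactly zero,
# while a normal-range weight keeps ~4 decimals of precision.
sd = {'w': torch.tensor([1e-42, 0.12345])}
rounded = round_state_dict(sd)
```

One could apply this to the loaded checkpoint before `load_state` and re-save it with `torch.save` to make the fix permanent.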
This can be closed.