Using the ONNX model with OpenCV
Hi 😉 I'm trying to use the lightweight-human-pose model on Colab:
!wget 'https://drive.google.com/uc?export=dowload&id=1T2Kq01WXzPMrQdnEOUEiVBhwouW8Pka5' -O pose.onnx
net = cv2.dnn.readNet("pose.onnx")
# load a 256x256 image and send it through the net
net.dumpToFile("pose_dot.txt")
!dot pose_dot.txt -Tpng -opose_dot.png
and the last layer:
That looks weird. 38 heatmaps? Shouldn't it have 19 heatmaps and 2x19 PAF maps (= 57)? Or, if it's meant to be used for single persons, 19 heatmaps only?
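The arithmetic behind those numbers, as a quick sketch: 19 heatmap channels (18 keypoints plus background) and 2 per-limb vector-field channels for each of the 19 PAF pairs. The channel ordering below (heatmaps first, then PAFs) is an assumption for illustration, not taken from the exported model:

```python
import numpy as np

KPTS = 19                 # heatmap channels: 18 keypoints + background
PAF_CHANNELS = 2 * KPTS   # one (x, y) vector field per PAF pair -> 38

# A combined output would have 57 channels; the dumped graph shows only 38,
# which matches the PAF count alone.
combined = KPTS + PAF_CHANNELS
print(combined)  # 57

# Dummy tensor shaped like a combined (batch, channels, h, w) output,
# split into the two halves a decoder would expect.
out = np.zeros((1, combined, 32, 32), dtype=np.float32)
heatmaps = out[:, :KPTS]   # first 19 channels
pafs = out[:, KPTS:]       # remaining 38 channels
print(heatmaps.shape, pafs.shape)
```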
I looked at the model: https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/blob/55093417d383c25a0f2454eba352198c944290d1/models/with_mobilenet.py#L90
and it should have returned [heatmaps, pafs] here: https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/blob/55093417d383c25a0f2454eba352198c944290d1/models/with_mobilenet.py#L86
but it looks to me like only the pafs are returned from the forward pass in OpenCV.
The newly added OpenCV keypoints model tries to loop over all of them, and the results look quite bad 😭
Issue Analytics
- State:
- Created: 4 years ago
- Reactions: 1
- Comments: 5 (2 by maintainers)
Top GitHub Comments
The current model (exported from your script) luckily has heatmaps and pafs in separate outputs, so it can be used as is (the dnn user can choose).
Thanks for all your support; the issue moved on to OpenCV, so I'll close this one.
I believe the commit author should check, then reconvert the model with the right output (heatmaps, not pafs) and upload a proper model. If you want, you can do it yourself: just remove the last pafs output in the model (return stages_output[:-1]) and remove the last output name from the conversion script correspondingly.
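To illustrate what that one-line change does, here is a sketch with a hypothetical stand-in list (the real `stages_output` in with_mobilenet.py holds tensors appended as [heatmaps, pafs] per refinement stage; the names below are made up for clarity):

```python
# Hypothetical stand-in for the list built in forward(): each refinement
# stage appends its heatmaps tensor, then its pafs tensor.
stages_output = ["stage0_heatmaps", "stage0_pafs",
                 "stage1_heatmaps", "stage1_pafs"]

# The suggested fix, return stages_output[:-1], drops the trailing pafs,
# so the last exported output becomes the final-stage heatmaps.
trimmed = stages_output[:-1]
print(trimmed[-1])  # stage1_heatmaps
```

After this change, the conversion script's list of output names must be shortened by one to match, or the export will fail.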