Converting torchvision-trained models for use with DeepStream
Hi,
This might not be completely relevant here, but I have a ResNet-50 architecture that was fetched from torchvision and then fine-tuned on our own dataset. torch2trt is able to generate a trt_model as instructed, and I want to use the generated engines with DeepStream (DS). The generated engine file works and is deserialized successfully when used with DS. However, I noticed that I was not getting any classification results. I manually inspected the final layer output, and it showed that the model outputs raw scores instead of probabilities, while the DeepStream nvinfer plugin expects to work only with probabilities. The torchvision models do not have a final softmax layer and are trained using nn.CrossEntropyLoss().
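A minimal check of this (using a stock torchvision ResNet-50 with random weights as a stand-in for our fine-tuned model) looks like:

```python
import torch
import torchvision

# A plain torchvision ResNet-50 returns raw class scores (logits),
# not probabilities; they only sum to 1 after an explicit softmax.
model = torchvision.models.resnet50().eval()
x = torch.ones((1, 3, 224, 224))

with torch.no_grad():
    logits = model(x)                     # raw scores; rows do not sum to 1
    probs = torch.softmax(logits, dim=1)  # probabilities; rows sum to ~1

print(logits.sum(dim=1), probs.sum(dim=1))
```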
I tried to add a softmax layer to our model manually using:
model.add_module('softmax', torch.nn.Softmax(1))
and then export the model to model_trt using:
model_trt = torch2trt(model, [x])
I was once again able to generate an engine file; however, that engine file also outputs scores instead of probabilities. Is there a correct way to add a softmax layer so that torch2trt is able to work with it and generate an appropriate engine file? Or am I limited to post-processing the output myself? I am not really too familiar with TensorRT, so if this is relevant, any help would be appreciated.
Regards, Salman
Issue Analytics
- Created: 3 years ago
- Comments: 6 (1 by maintainers)
Top GitHub Comments
Hi Salman,
Thanks for reaching out!
I think you’re on the right track, but I don’t think that model.add_module changes the execution behavior of the resnet50 module. Can you try wrapping the model in a Sequential module and adding softmax? For example:
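A minimal sketch along those lines (the stock ResNet-50 and the 1x3x224x224 input are stand-ins for the fine-tuned model and its real input shape):

```python
import torch
import torchvision
from torch2trt import torch2trt

# Put the softmax inside the module that gets converted, so it becomes
# part of the generated TensorRT engine. A stock ResNet-50 stands in for
# the fine-tuned model from the issue.
model = torchvision.models.resnet50().eval().cuda()

model_with_softmax = torch.nn.Sequential(
    model,
    torch.nn.Softmax(dim=1),  # logits -> class probabilities
).eval().cuda()

# example input with the inference-time shape (1x3x224x224 assumed)
x = torch.ones((1, 3, 224, 224)).cuda()

model_trt = torch2trt(model_with_softmax, [x])
```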
Please let me know if this works or you run into any issues.
Best, John
I followed the PyTorch SSD example tutorial to create a .pth model: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
By the way, the tutorial does not cover the steps to export to an engine for DeepStream. I exported to ONNX, but it is not working, maybe because of the custom bbox parsing; the issue is that no bbox detections are shown while running a test video example. There is no example for a DeepStream app.