Output from TRT model is different from PyTorch
Hi, I see a big difference between the output values of the TRT model after conversion and those of the original PyTorch model. The network I am converting is not complex, and torch2trt does not report any error during conversion. The network is linked here.
I do the conversion using these lines:
x = torch.ones((1, 3, 109, 193)).cuda()  # torch2trt expects example input tensors, not a bare shape list
pnet_trt = torch2trt(pnet, [x], fp16_mode=False)
and use these lines to measure the difference; the maximum absolute error is around 0.7:
output = pnet(x)          # PyTorch output (assumed; not shown in the original issue)
output_trt = pnet_trt(x)  # TensorRT output (assumed; not shown in the original issue)
print(output.flatten()[0:10])
print(output_trt.flatten()[0:10])
print('max error: %f' % float(torch.max(torch.abs(output - output_trt))))
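Not part of the original issue, but a tolerance-based check is often more informative than the raw max error; a minimal sketch reusing output and output_trt from above, assuming the model returns a single tensor:
# compare with explicit absolute/relative tolerances instead of only the max error
close = torch.allclose(output, output_trt, rtol=1e-2, atol=1e-2)
print('outputs match within tolerance:', close)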
I am testing on a Jetson Nano with JetPack 4.4 (CUDA 10.2, cuDNN 8.0, TensorRT 7.1.0) and PyTorch 1.5.0.
Thank you.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@yptheangel TensorRT operates directly on the data buffer (cf. the links I posted earlier: https://github.com/NVIDIA-AI-IOT/torch2trt/issues/220#issuecomment-569949961). This means it works directly on the memory and assumes the tensor data is stored contiguously. If it is not, the TensorRT model will still read the data at that location and may therefore get effectively random data as input.
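A minimal sketch of what that means in practice, reusing the names from the snippets above; the .contiguous() call is the only addition:
# TensorRT reads the raw buffer, so make sure the input is contiguous in memory
if not x.is_contiguous():
    x = x.contiguous()  # copies the data into a contiguous buffer
pnet_trt = torch2trt(pnet, [x], fp16_mode=False)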
I also found out that in some cases the model needs to be in evaluation mode before the torch2trt conversion; otherwise there are slight changes in the TensorRT outputs.
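For example (a sketch, assuming pnet is defined as above): calling .eval() fixes BatchNorm layers to their running statistics and disables Dropout, so the converted engine matches inference-mode behaviour:
pnet = pnet.eval().cuda()  # switch to inference behaviour before conversion
pnet_trt = torch2trt(pnet, [x], fp16_mode=False)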
OK, thanks for your reply.