
On which GPU did you test the CenterPoint TensorRT engine?

See original GitHub issue

Hi @CarkusL, I am using TensorRT 7.2.3.4 on a V100. I find the latency is almost twice as high as reported. Could you share your specific environment settings?

&&&& RUNNING TensorRT.sample_onnx_centerpoint # ./centerpoint
[09/15/2021-10:37:46] [I] Building and running a GPU inference engine for CenterPoint
----------------------------------------------------------------
Input filename:   ../data/centerpoint/pointpillars_trt.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.7
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[09/15/2021-10:37:47] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/15/2021-10:37:47] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: ScatterND. Attempting to import as plugin.
[09/15/2021-10:37:47] [I] [TRT] builtin_op_importers.cpp:3771: Searching for plugin: ScatterND, plugin_version: 1, plugin_namespace: 
[09/15/2021-10:37:47] [I] [TRT] builtin_op_importers.cpp:3788: Successfully created plugin: ScatterND
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:47] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/15/2021-10:37:48] [W] [TRT] TensorRT was linked against cuDNN 8.1.0 but loaded cuDNN 8.0.3
[09/15/2021-10:37:53] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[09/15/2021-10:38:05] [I] [TRT] Detected 2 inputs and 42 output network tensors.
[09/15/2021-10:38:05] [W] [TRT] TensorRT was linked against cuDNN 8.1.0 but loaded cuDNN 8.0.3
[09/15/2021-10:38:05] [I] getNbInputs: 2 

[09/15/2021-10:38:05] [I] getNbOutputs: 42 

[09/15/2021-10:38:05] [I] getNbOutputs Name: 594 

[09/15/2021-10:38:05] [W] [TRT] TensorRT was linked against cuDNN 8.1.0 but loaded cuDNN 8.0.3
filePath[idx]: ../data/centerpoint//points/0a0d6b8c2e884134a3b48df43d54c36a.bin
[09/15/2021-10:38:05] [I] [INFO] pointNum : 278272
[09/15/2021-10:38:05] [I] PreProcess Time: 13.3244 ms
[09/15/2021-10:38:05] [I] inferenceDuration Time: 13.3018 ms
[09/15/2021-10:38:05] [I] PostProcessDuration Time: 7.13283 ms
&&&& PASSED TensorRT.sample_onnx_centerpoint # ./centerpoint
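The PreProcess / inference / PostProcess timings in the log above appear to be single-run wall-clock measurements. When comparing latency across GPUs, it is worth warming the engine up and aggregating over many iterations, since the first executions after engine build are typically slower. Below is a minimal, hedged timing-harness sketch (the actual sample is C++; `run_inference` here is a placeholder for one engine execution, e.g. an `enqueueV2` call followed by a stream synchronize):

```python
import time
import statistics

def benchmark(run_inference, warmup=10, iters=100):
    """Time a callable with warmup and report mean/p50/p99 latency in ms.

    `run_inference` is a stand-in for one full engine execution; in the
    C++ sample this would be enqueue + stream synchronization so the
    clock stops only after the GPU work is actually done.
    """
    for _ in range(warmup):
        run_inference()                      # discard cold-start runs
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p50_ms": samples[len(samples) // 2],
        "p99_ms": samples[min(iters - 1, int(iters * 0.99))],
    }

if __name__ == "__main__":
    # Dummy workload in place of a real TensorRT execution.
    print(benchmark(lambda: sum(range(10000))))
```

Reporting a percentile rather than a single run also makes numbers from different machines easier to compare, since one-off scheduling hiccups no longer dominate.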

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
HaohaoNJU commented, Dec 21, 2021

@CarkusL Thanks for your great work. I wrote a new project based on your code, where the pre-processing and post-processing computations are done in CUDA; it runs much faster.
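The pre-processing that benefits from a CUDA port is essentially per-point grid assignment, which is embarrassingly parallel. As a rough illustration only (grid extents and voxel size below are assumptions, not the project's actual config), a PointPillars-style voxelization step looks like this in NumPy:

```python
import numpy as np

def voxelize_bev(points, x_range=(-51.2, 51.2), y_range=(-51.2, 51.2),
                 voxel_size=0.2):
    """Assign each LiDAR point to a bird's-eye-view grid cell.

    Simplified sketch of PointPillars-style pre-processing; ranges and
    voxel size are hypothetical. Each point is handled independently,
    which is why moving this loop into a CUDA kernel (one thread per
    point) removes it from the CPU pre-processing time.
    """
    pts = np.asarray(points, dtype=np.float32)
    in_range = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
                (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]))
    pts = pts[in_range]
    ix = ((pts[:, 0] - x_range[0]) / voxel_size).astype(np.int32)
    iy = ((pts[:, 1] - y_range[0]) / voxel_size).astype(np.int32)
    return np.stack([ix, iy], axis=1)

# Two points near the origin fall into neighbouring center cells.
cells = voxelize_bev([[0.0, 0.0, 1.0], [0.1, -0.1, 0.5]])
print(cells)
```

With ~278k points per sweep (as in the log above), a per-point CUDA kernel turns this from a serial CPU loop into a few microseconds of GPU work, which is consistent with the speedup the linked project reports.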

Here is the code : https://github.com/Abraham423/CenterPointTensorRT.git

2 reactions
xavidzo commented, Oct 12, 2021

Hi @CarkusL, can you give us the inference time of your TensorRT implementation for batch_size = 1, including pre-processing and post-processing?

