
TensorRT Backend Installation Error


Description

Hello,

I am trying to install the TensorRT backend. First, I cloned this repo: https://github.com/triton-inference-server/tensorrt_backend and then followed the build steps.

cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install -DTRITON_BACKEND_REPO_TAG=r21.07 -DTRITON_CORE_REPO_TAG=r21.07 -DTRITON_COMMON_REPO_TAG=r21.07 …

When I run the `make install` command, I get this error:

[ 60%] Building CXX object CMakeFiles/triton-tensorrt-backend.dir/src/tensorrt.cc.o
/home/user/Downloads/tritonserver2.12.0-jetpack4.6/backends/tensorrt_backend/src/tensorrt.cc:36:10: fatal error: triton/common/nvtx.h: No such file or directory
 #include "triton/common/nvtx.h"
          ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMakeFiles/triton-tensorrt-backend.dir/build.make:81: recipe for target 'CMakeFiles/triton-tensorrt-backend.dir/src/tensorrt.cc.o' failed
make[2]: *** [CMakeFiles/triton-tensorrt-backend.dir/src/tensorrt.cc.o] Error 1
CMakeFiles/Makefile2:165: recipe for target 'CMakeFiles/triton-tensorrt-backend.dir/all' failed
make[1]: *** [CMakeFiles/triton-tensorrt-backend.dir/all] Error 2
Makefile:148: recipe for target 'all' failed

What should I do?

I think I have to pass the TensorRT path to the cmake command, but I couldn't find it. I am using JetPack 4.6, and TensorRT 8.0.1.6 is installed on my Jetson.

Where can I find the TensorRT path? And what else should I do to install the TensorRT backend?
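Since JetPack installs TensorRT from Debian packages, one way to locate it is to query dpkg and search the filesystem. A minimal sketch (the paths mentioned in the comments are the typical JetPack multiarch locations, not confirmed by this thread; verify on your own device):

```shell
# List the TensorRT packages JetPack installed
dpkg -l | grep -i tensorrt

# Locate the headers and libraries; on JetPack these usually land in
# /usr/include/aarch64-linux-gnu and /usr/lib/aarch64-linux-gnu
find /usr -name "NvInfer.h" 2>/dev/null
find /usr -name "libnvinfer.so*" 2>/dev/null
```

The directories these commands print are what you would hand to cmake if a TensorRT include/library path were actually required.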

Thanks

Triton Information

What version of Triton are you using? r21.07
Are you using the Triton container or did you build it yourself? I built it myself.

To Reproduce

Steps to reproduce the behavior: see the commands above.


Issue Analytics

  • State:closed
  • Created 2 years ago
  • Comments:6 (3 by maintainers)

Top GitHub Comments

1 reaction
zhenxingsh commented, Nov 24, 2021

I think your tensorrt_backend checkout is not on the r21.07 branch; you should keep its branch consistent with the common, backend, and core repos.

0 reactions
CoderHam commented, Nov 30, 2021

@zhenxingsh is right. @sarperkilic, please fix the branch names to be consistent with the version you are building for.
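The advice above can be sketched as a consistent checkout and configure. This is a hedged sketch only: the repo URL and `-DTRITON_*_REPO_TAG` flags come from the question itself, the explicit `git clone -b r21.07` is the assumed way to get a matching backend branch, and the original cmake command was truncated ("…"), so additional flags may be needed on your setup:

```shell
# Check out the backend on the same release branch as the other Triton repos
git clone -b r21.07 https://github.com/triton-inference-server/tensorrt_backend
cd tensorrt_backend && mkdir build && cd build

# Pass the same r21.07 tag for every Triton repo so headers such as
# triton/common/nvtx.h are fetched from matching sources
cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install \
      -DTRITON_BACKEND_REPO_TAG=r21.07 \
      -DTRITON_CORE_REPO_TAG=r21.07 \
      -DTRITON_COMMON_REPO_TAG=r21.07 ..
make install
```

The key point is that all four pieces (the backend checkout plus the three repo tags) name the same release; a mismatch is what produces the missing-header error in the question.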


Top Results From Across the Web

  • onnx-tensorrt install / test failure - NVIDIA Developer Forums
    All CUDA, CuDNN and tensorrt tests have passed. This is the failed test output: python3 onnx_backend_test.py OnnxBackendRealModelTest Traceback ...
  • TensorRT Support — mmdeploy 0.10.0 documentation
    We strongly suggest you install TensorRT through tar file. After installation, you'd ... Error Cannot found TensorRT headers or Cannot found TensorRT libs....
  • Serving a Torch-TensorRT model with Triton - PyTorch
    Let's discuss step-by-step, the process of optimizing a model with Torch-TensorRT, deploying it on Triton Inference Server, and building a client to query ......
  • Install TensorFlow with pip
    Note: The error message "Status: device kernel image is invalid" indicates that the TensorFlow package does not contain PTX for your architecture. You...
  • Triton Inference Server: The Basics and a Quick Tutorial
    TensorRT Models; TorchScript Models; Triton Client Libraries; Tutorial: Install and Run Triton; 1. Install Triton Docker Image; 2. Create Your Model Repository ...
