
Triton python backend build failed (main branch)

See original GitHub issue

Description

The Triton build fails if I try to build without CUDA.

./build.py --cmake-dir=$(pwd)/build --build-dir=/tmp/citritonbuild \
    --enable-logging --enable-stats --enable-tracing --enable-metrics \
    --filesystem=azure_storage --endpoint=http --endpoint=grpc \
    --repo-tag=common:main --repo-tag=core:main --repo-tag=backend:main \
    --repo-tag=thirdparty:main \
    --backend=ensemble --backend=identity:main --backend=repeat:main \
    --backend=tensorflow2:main --backend=python:main \
    --repoagent=checksum:main

/tmp/tritonbuild/python/src/python.cc: In member function ‘TRITONSERVER_Error* triton::backend::python::ModelInstanceState::GetInputTensor(uint32_t, triton::backend::python::Tensor*, TRITONBACKEND_Request*, std::vector<TRITONBACKEND_Response*>&)’:
/tmp/tritonbuild/python/src/python.cc:1429:7: error: ‘cudaSetDevice’ was not declared in this scope
 1429 |       cudaSetDevice(src_memory_type_id);
      |       ^~~~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1430:7: error: ‘cudaError_t’ was not declared in this scope
 1430 |       cudaError_t err = cudaIpcGetMemHandle(
      |       ^~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1432:11: error: ‘err’ was not declared in this scope; did you mean ‘erf’?
 1432 |       if (err != cudaSuccess) {
      |           ^~~
      |           erf
/tmp/tritonbuild/python/src/python.cc:1432:18: error: ‘cudaSuccess’ was not declared in this scope
 1432 |       if (err != cudaSuccess) {
      |                  ^~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1435:55: error: ‘cudaGetErrorName’ was not declared in this scope
 1435 |                                           std::string(cudaGetErrorName(err)))
      |                                                       ^~~~~~~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1439:7: error: ‘CUdeviceptr’ was not declared in this scope
 1439 |       CUdeviceptr start_address;
      |       ^~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1440:7: error: ‘CUresult’ was not declared in this scope
 1440 |       CUresult cuda_err = cuPointerGetAttribute(
      |       ^~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1443:11: error: ‘cuda_err’ was not declared in this scope
 1443 |       if (cuda_err != CUDA_SUCCESS) {
      |           ^~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1443:23: error: ‘CUDA_SUCCESS’ was not declared in this scope; did you mean ‘EXIT_SUCCESS’?
 1443 |       if (cuda_err != CUDA_SUCCESS) {
      |                       ^~~~~~~~~~~~
      |                       EXIT_SUCCESS
/tmp/tritonbuild/python/src/python.cc:1445:9: error: ‘cuGetErrorString’ was not declared in this scope
 1445 |         cuGetErrorString(cuda_err, &error_string);
      |         ^~~~~~~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1453:45: error: ‘start_address’ was not declared in this scope
 1453 |                     reinterpret_cast<char*>(start_address);
      |                                             ^~~~~~~~~~~~~
/tmp/tritonbuild/python/src/python.cc:1454:7: error: ‘gpu_tensors_map_’ was not declared in this scope
 1454 |       gpu_tensors_map_.insert(
      |       ^~~~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/triton-python-backend.dir/build.make:82: CMakeFiles/triton-python-backend.dir/src/python.cc.o] Error 1
make[2]: Leaving directory '/tmp/tritonbuild/python/build'
make[1]: *** [CMakeFiles/Makefile2:239: CMakeFiles/triton-python-backend.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 97%] Linking CXX executable triton_python_backend_stub
/usr/bin/cmake -E cmake_link_script CMakeFiles/triton-python-backend-stub.dir/link.txt --verbose=0
make[2]: Leaving directory '/tmp/tritonbuild/python/build'
[ 97%] Built target triton-python-backend-stub
make[1]: Leaving directory '/tmp/tritonbuild/python/build'
make: *** [Makefile:149: all] Error 2
version 2.13.0dev
default repo-tag: main
backend "ensemble" at tag/branch "main"
backend "identity" at tag/branch "main"
backend "repeat" at tag/branch "main"
backend "tensorflow2" at tag/branch "main"
backend "python" at tag/branch "main"
repoagent "checksum" at tag/branch "main"
Building Triton Inference Server
component "common" at tag/branch "main"
component "core" at tag/branch "main"
component "backend" at tag/branch "main"
component "thirdparty" at tag/branch "main"
error: make install failed
error: docker run tritonserver_builder failed
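
All of the undeclared symbols in the log are CUDA runtime and driver APIs (cudaSetDevice, cudaIpcGetMemHandle, cuPointerGetAttribute, ...), and the build command above does not pass --enable-gpu, so no CUDA headers or libraries are available while compiling the python backend. The likely cause is GPU-only code in python.cc that is compiled unconditionally rather than behind the TRITON_ENABLE_GPU define used for GPU-specific code in Triton backends. The sketch below is illustrative only, not the patch that eventually landed; the helper name and its arguments are invented for the example.

#ifdef TRITON_ENABLE_GPU
#include <cuda_runtime_api.h>
#endif  // TRITON_ENABLE_GPU

#include <string>

// Hypothetical helper, for illustration only: share a device buffer over CUDA
// IPC. Returns an empty string on success, or an error message on failure.
std::string
ShareGpuBuffer(void* data, int device_id)
{
#ifdef TRITON_ENABLE_GPU
  // GPU path: compiled only when the build defines TRITON_ENABLE_GPU.
  cudaSetDevice(device_id);
  cudaIpcMemHandle_t handle;
  cudaError_t err = cudaIpcGetMemHandle(&handle, data);
  if (err != cudaSuccess) {
    return std::string("failed to get CUDA IPC handle: ") +
           cudaGetErrorName(err);
  }
  return "";
#else
  // CPU-only build: the compiler never sees the CUDA calls above, so errors
  // like "'cudaSetDevice' was not declared in this scope" cannot occur.
  (void)data;
  (void)device_id;
  return "GPU tensors are not supported in a CPU-only build";
#endif  // TRITON_ENABLE_GPU
}

With no such guard, a CPU-only build fails exactly as shown above, because neither cuda_runtime_api.h nor any of the CUDA symbols exist in that configuration.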

Triton Information

What version of Triton are you using? Main branch.

Are you using the Triton container or did you build it yourself? I tried to build it myself.

To Reproduce

Steps to reproduce the behavior: run the build command above.

Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well). N/A

Expected behavior

The build succeeds.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
CoderHam commented on Jul 27, 2021

Generally the main branch is shippable and tested regularly, but some dev changes may cause unexpected issues in the build.

0 reactions
Tabrizian commented on Aug 12, 2021

@NonStatic2014 This has been fixed now. Feel free to re-open if you are still having any issues.
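
The log also shows undeclared CUDA driver-API symbols (CUdeviceptr, cuPointerGetAttribute, cuGetErrorString, CUDA_SUCCESS), so the fix presumably guards that path in the same way. Below is a minimal sketch of the driver-API side under the same TRITON_ENABLE_GPU assumption; the helper name and signature are invented for the example.

#ifdef TRITON_ENABLE_GPU
#include <cuda.h>
#endif  // TRITON_ENABLE_GPU

#include <string>

// Hypothetical helper, for illustration only: find the start address of the
// CUDA allocation containing 'ptr'. Returns nullptr and sets 'error' when the
// lookup fails or when the build has no GPU support.
void*
GetAllocationStart(const void* ptr, std::string* error)
{
#ifdef TRITON_ENABLE_GPU
  CUdeviceptr start_address = 0;
  CUresult cuda_err = cuPointerGetAttribute(
      &start_address, CU_POINTER_ATTRIBUTE_RANGE_START_ADDR,
      reinterpret_cast<CUdeviceptr>(ptr));
  if (cuda_err != CUDA_SUCCESS) {
    const char* error_string = nullptr;
    cuGetErrorString(cuda_err, &error_string);
    *error = (error_string != nullptr) ? error_string
                                       : "unknown CUDA driver error";
    return nullptr;
  }
  return reinterpret_cast<void*>(start_address);
#else
  // CPU-only build: no driver-API types or calls are compiled at all.
  (void)ptr;
  *error = "CUDA driver API is not available in a CPU-only build";
  return nullptr;
#endif  // TRITON_ENABLE_GPU
}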

