
onnxruntime: `--no-container-build` not honored

See original GitHub issue

Description

When building with the --no-container-build flag and --backend=onnxruntime, the build will still try to build onnxruntime with Docker.

[ 46%] Building ONNX Runtime
../tools/gen_ort_dockerfile.py --ort-build-config="Release" --triton-container="nvcr.io/nvidia/tritonserver:22.07-py3-min" --ort-version="1.12.0" --trt-version="" --onnx-tensorrt-tag="" --output=Dockerfile.ort
docker build --cache-from=tritonserver_onnxruntime --cache-from=tritonserver_onnxruntime_cache0 --cache-from=tritonserver_onnxruntime_cache1 -t tritonserver_onnxruntime -f ./Dockerfile.ort /build/citritonbuild/onnxruntime
make[2]: docker: Command not found
make[2]: Leaving directory '/build/citritonbuild/onnxruntime/build'
make[2]: *** [CMakeFiles/ort_target.dir/build.make:74: onnxruntime/lib/libonnxruntime.so] Error 127
make[1]: *** [CMakeFiles/Makefile2:145: CMakeFiles/ort_target.dir/all] Error 2
make[1]: Leaving directory '/build/citritonbuild/onnxruntime/build'
make: *** [Makefile:136: all] Error 2
Building Triton Inference Server
platform linux
machine x86_64
version 2.24.0
build dir /build/citritonbuild
install dir /tritonserver
cmake dir /build
default repo-tag: r22.07
container version 22.07
upstream container version 22.07
endpoint "http"
endpoint "grpc"
backend "onnxruntime" at tag/branch "r22.07"
backend "onnxruntime" CMake override "-DTRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF"
backend "onnxruntime" CMake override "-DTRITON_ONNXRUNTIME_DOCKER_BUILD=OFF"
component "common" at tag/branch "r22.07"
component "core" at tag/branch "r22.07"
component "backend" at tag/branch "r22.07"
component "thirdparty" at tag/branch "r22.07"
error: build failed
The command '/bin/sh -c ./build.py 	-v 	-j1 	--no-container-build 	--enable-logging 	--cmake-dir=$(pwd) 	--build-dir=$(pwd)/citritonbuild 	--install-dir=/tritonserver 	--endpoint=http 	--endpoint=grpc 	--backend=onnxruntime 	--override-backend-cmake-arg=onnxruntime:TRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF 	--override-backend-cmake-arg=onnxruntime:TRITON_ONNXRUNTIME_DOCKER_BUILD=OFF' returned a non-zero code: 1

Triton Information

2.24.0

Are you using the Triton container or did you build it yourself?

I am building it myself.

To Reproduce

./build.py \
	-v \
	-j1 \
	--no-container-build \
	--enable-logging \
	--cmake-dir=$(pwd) \
	--build-dir=$(pwd)/citritonbuild \
	--install-dir=/tritonserver \
	--endpoint=http \
	--endpoint=grpc \
	--backend=onnxruntime \
	--override-backend-cmake-arg=onnxruntime:TRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF \
	--override-backend-cmake-arg=onnxruntime:TRITON_ONNXRUNTIME_DOCKER_BUILD=OFF

Expected behavior

I expect the backend to build without trying to run Docker, as requested by the build flags.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
tanmayv25 commented, Aug 5, 2022

Yes. You can take a look at this issue, which covers the same problem: https://github.com/triton-inference-server/onnxruntime_backend/issues/65

You can build the rest of Triton using build.py, then copy in the ORT backend built without Docker.
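
For reference, a rough sketch of that workaround for r22.07, assuming the onnxruntime_backend exposes the TRITON_ONNXRUNTIME_INCLUDE_PATHS and TRITON_ONNXRUNTIME_LIB_PATHS CMake variables discussed in issue #65. The install prefix, paths, and exact install layout below are illustrative, not verified:

# Prerequisite: a host-side ONNX Runtime build or release, e.g. unpacked to
# /opt/onnxruntime (hypothetical prefix) with include/ and lib/ inside it.
ORT_PREFIX=/opt/onnxruntime

# Build the onnxruntime backend directly with CMake, pointing it at the
# pre-built ONNX Runtime so no Docker build is attempted.
git clone -b r22.07 https://github.com/triton-inference-server/onnxruntime_backend.git
cd onnxruntime_backend
mkdir build && cd build
cmake .. \
  -DCMAKE_INSTALL_PREFIX=$(pwd)/install \
  -DTRITON_BACKEND_REPO_TAG=r22.07 \
  -DTRITON_CORE_REPO_TAG=r22.07 \
  -DTRITON_COMMON_REPO_TAG=r22.07 \
  -DTRITON_ONNXRUNTIME_DOCKER_BUILD=OFF \
  -DTRITON_ENABLE_ONNXRUNTIME_OPENVINO=OFF \
  -DTRITON_ONNXRUNTIME_INCLUDE_PATHS=${ORT_PREFIX}/include \
  -DTRITON_ONNXRUNTIME_LIB_PATHS=${ORT_PREFIX}/lib
make -j"$(nproc)" install

# Copy the resulting backend into the Triton install produced by build.py
# (run build.py without --backend=onnxruntime so it never reaches the Docker
# step). Whether libonnxruntime.so* must also be copied alongside
# libtriton_onnxruntime.so may vary; check the backend README for your branch.
cp -r install/backends/onnxruntime /tritonserver/backends/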

0 reactions
krishung5 commented, Sep 9, 2022

Closing this issue due to lack of activity. Please re-open it if you would like to follow up.
