
Triton with Python backend: not using Python execution env *.tar.gz file

See original GitHub issue

Hello. I am using Triton with the Python backend.

  • I followed this issue: https://github.com/triton-inference-server/server/issues/3189
  • This is my config.pbtxt file:

backend: "python"
....
parameters: {
  key: "EXECUTION_ENV_PATH",
  value: { string_value: "/home/gioipv/workspaces/ekyc_glasses/triton/model_repo2/model1/test2.tar.gz" }
}
  • When I run:

docker run --gpus=1 --shm-size=5G -p8111:8111 -p8222:8222 -p8333:8333 --rm -v /home/gioipv/workspaces/ekyc_glasses/triton/model_repo2:/models --name tritonserver nvcr.io/nvidia/tritonserver:20.11-py3 tritonserver --model-repository=/models --log-verbose 20

  • it raises an error:

ModuleNotFoundError: No module named 'librosa'

  • I checked my logs; they don't show the line:

Using Python execution env ***.tar.gz

Could you please help me with this …
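For reference, the test2.tar.gz that EXECUTION_ENV_PATH points at is normally a conda-pack archive of a conda environment containing the model's Python dependencies. A minimal sketch of building one (conda, the python=3.8 pin, and the env name model1-env are assumptions, not taken from the issue):

# Sketch: pack a conda env that includes the module the error says is missing.
conda create -y -n model1-env python=3.8   # assumed Python version
conda activate model1-env
pip install librosa conda-pack             # librosa is the missing module above
conda-pack -n model1-env -o test2.tar.gz   # archive referenced by EXECUTION_ENV_PATH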

Triton Information: Triton Docker image, 20.11 release

To Reproduce: Follow this issue: https://github.com/triton-inference-server/server/issues/3189

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

2 reactions
ghost commented, Aug 11, 2021

@gioipv You should point EXECUTION_ENV_PATH at the path inside the container:

parameters: {
  key: "EXECUTION_ENV_PATH",
  value: { string_value: "/models/model1/test2.tar.gz" }
}

because you mounted /home/gioipv/workspaces/ekyc_glasses/triton/model_repo2 at /models inside the Docker container.
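In other words, config.pbtxt must use the container-side path created by the -v mount, not the host path. A quick sanity check (the docker exec command is a suggestion, not from the thread):

# Host path:      /home/gioipv/workspaces/ekyc_glasses/triton/model_repo2/model1/test2.tar.gz
# Container path: /models/model1/test2.tar.gz   <- what EXECUTION_ENV_PATH should use
docker exec tritonserver ls /models/model1/test2.tar.gz   # confirms the archive is visible inside the container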

1 reaction
Tabrizian commented, Aug 12, 2021

I think you should update the GPU driver version. The version of the Python backend and the server must match.
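For context: the 21.06.1 release note quoted below says conda execution environments were only added to the Python backend in that release, so the 20.11 image used above may simply ignore EXECUTION_ENV_PATH. A sketch of retrying on a newer image whose server and Python backend versions match (the 21.06 tag is illustrative):

docker pull nvcr.io/nvidia/tritonserver:21.06-py3
docker run --gpus=1 --shm-size=5G --rm \
  -v /home/gioipv/workspaces/ekyc_glasses/triton/model_repo2:/models \
  nvcr.io/nvidia/tritonserver:21.06-py3 \
  tritonserver --model-repository=/models --log-verbose 20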

Read more comments on GitHub

Top Results From Across the Web

Cannot start Triton inference server with Python backend stub ...
I built a custom Python 3.9 execution environment stub and tar file according to the instructions here (both steps 1 and 2), ...
Using Triton for production deployment of TensorRT models
NVIDIA Triton Inference Server is an open source solution created for fast and scalable deployment of deep learning inference in production.
py-triton - PyPI
Triton - Kinesis Data Pipeline. ... Triton Project Python Utility code for building a Data Pipeline with AWS Kinesis. ... or the config...
Triton Inference Server Release 21.06.1
The Python backend now allows the use of conda to create a unique execution environment for your Python model.
Security Xray Scan Knife Detection - Seeed Wiki
tgz. The tar file here contains the Triton server executable and shared libraries including the C++ and Python client libraries and examples. For...
