
Dockerfile in building Triton

See original GitHub issue
RUN apt-get -yqq update && apt-get -yqq install libgl1-mesa-glx

I think this is a common problem. So, should we add the line above to the build.py file of the repo?
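For context: libgl1-mesa-glx provides libGL.so.1, which opencv-python loads at import time, so installs like the one above are the usual fix when import cv2 fails inside the container. A quick sanity check (a sketch, assuming opencv-python is already installed in the image):

python3 -c "import cv2; print(cv2.__version__)"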

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

3 reactions
csvance commented, Nov 18, 2021

@gioipv If you install OpenCV through pip, you can install opencv-python-headless instead to avoid the libGL dependency entirely. conda-pack works just fine with pip packages as well.
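A minimal sketch of that route, assuming conda and conda-pack are available (the environment name and Python version are illustrative):

# Build an env around the headless OpenCV wheel, then pack it so the
# Triton Python backend can use it as a custom execution environment.
conda create -y -n cv-env python=3.8
conda run -n cv-env pip install opencv-python-headless
conda pack -n cv-env -o cv-env.tar.gz

The packed archive can then be referenced from the model configuration (the Python backend's EXECUTION_ENV_PATH parameter) instead of installing OS-level GL libraries into the Triton image.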

Conda's OpenCV also uses OpenMP for multithreading, which doesn't play nicely with many computing paradigms without a ton of extra configuration and debugging. The pip packaging of OpenCV uses vanilla pthreads, which are much easier to work with and play more nicely with other tools.
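If an OpenMP-based OpenCV does end up in the image anyway, capping its thread pool with the standard OpenMP environment variable is a common first mitigation (illustrative, not Triton-specific):

export OMP_NUM_THREADS=1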

1 reaction
Tabrizian commented, Aug 20, 2021

It looks like this is more a problem with the way conda packages the dependencies. We can't change the Triton container to include dependencies for Python packages. I think the only workaround for now would be to create another Docker container that contains the dependencies required for your Python environment.
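A minimal sketch of that workaround, assuming the stock NGC image (the tag is illustrative; match it to your Triton release):

# Dockerfile: layer the Python packages' OS dependencies on top of the
# released Triton image instead of patching Triton's own build.
FROM nvcr.io/nvidia/tritonserver:21.08-py3
RUN apt-get -yqq update && \
    apt-get -yqq install libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

Build it with docker build -t tritonserver-cv . and run this image in place of the stock tritonserver image.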


Top Results From Across the Web

  • server/build.md at main · triton-inference-server/server - GitHub
    The easiest way to build Triton is to use Docker. The result of the build will be a Docker image called tritonserver that...
  • Building NVIDIA Triton Inference Server from Scratch for ...
    I have created my own Docker image of it. I have built this only for the TensorFlow backend. If you want it for other...
  • Triton Inference Server Release 21.08
    The Triton Inference Server Docker image contains the inference server executable and related shared libraries in /opt/tritonserver.
  • Work with Docker containers - Documentation
    Our focus is on making the Triton Elastic Docker Host the best place to run your Docker images; building Docker images using the...
  • Serving a Torch-TensorRT model with Triton - PyTorch
    Let's first pull the NGC PyTorch Docker container. You may need to create an account and get the ... Step 3: Building a...
