
Installation of Triton Server with helm chart

See original GitHub issue

Description

Hi,

I am trying to install the Triton Inference Server on a Kubernetes cluster. I am getting the following error:

helm install --generate-name .

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext
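For context: in the Kubernetes API, `fsGroup` is a field of the pod-level `securityContext` (`PodSecurityContext`), not of the container-level `SecurityContext`, which is why validation rejects it. A minimal sketch of the likely fix in the chart's deployment.yaml (field placement only; the group ID shown is hypothetical):

```yaml
spec:
  template:
    spec:
      securityContext:       # pod level: PodSecurityContext accepts fsGroup
        fsGroup: 1000        # hypothetical group ID
      containers:
        - name: triton-inference-server
          securityContext:   # container level: fsGroup is not valid here
            runAsUser: 1000  # hypothetical; container-only fields stay here
```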

Triton Information

Using the master branch; the deployment.yaml appears to be the same on the most recent branches.

Expected behavior

The install currently fails; I expect it to create a Deployment on Kubernetes.

Thank you

Svetlana

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:16 (5 by maintainers)

Top GitHub Comments

2 reactions
xiaofei-du commented, Aug 26, 2020

You don’t need to use the Triton metrics, or Prometheus, if you don’t want to.

I managed to get the Triton Inference Server to run on Kubernetes 1.18. Maybe it’s worth updating the helm yaml files. I also added configuration in deployment.yaml to be able to use a local /model-repository, which is really useful if you don’t want to add your models to gs.

Hi, my k8s is 1.17 and I use a local model repository, but it failed. Can you tell me in detail what you did? Thanks!

You can use hostPath to mount your local model repository into your pod:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
      - name: triton-volume
        hostPath:
          # Directory location on host
          path: {{ .Values.image.modelRepositoryPath }}
          type: Directory
      containers:
          ...
          volumeMounts:
          - mountPath: /models
            name: triton-volume
            ...
          args: ["tritonserver", "--model-store=/models"]
          ...

Just be aware that hostPath is generally not recommended (see Kubernetes Storage Lingo 101), since it breaks the k8s principle of workload portability.
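If portability matters for your cluster, a PersistentVolume/PersistentVolumeClaim pair is the more portable alternative to hostPath. A minimal sketch, assuming an NFS-backed volume; the server address, export path, and size here are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: triton-model-pv
spec:
  capacity:
    storage: 10Gi           # hypothetical size
  accessModes: ["ReadOnlyMany"]
  nfs:
    server: 10.0.0.2        # hypothetical NFS server
    path: /exports/models   # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: triton-model-pvc
spec:
  accessModes: ["ReadOnlyMany"]
  storageClassName: ""      # bind to the pre-created PV above
  resources:
    requests:
      storage: 10Gi
```

The deployment's volume entry would then use `persistentVolumeClaim: { claimName: triton-model-pvc }` instead of `hostPath`.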

0 reactions
svetlana41 commented, Feb 8, 2021

Can you let me know the configuration in deployment.yaml needed to use a local model repository?

Hi,

I have this in ‘values.yaml’ with the model repository path:

# values.yaml
replicaCount: 1

image:
  imageName: nvcr.io/nvidia/tritonserver:20.03-py3
  pullPolicy: IfNotPresent
  modelRepositoryPath: /models
  numGpus: 1

service:
  type: LoadBalancer
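Since the path lives in values.yaml, it can presumably also be overridden at install time without editing the chart. A hypothetical invocation (release name and path are examples):

```shell
helm install triton-server . \
  --set image.modelRepositoryPath=/home/user/model_repository
```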

And this in deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "triton-inference-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "triton-inference-server.name" . }}
    chart: {{ template "triton-inference-server.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "triton-inference-server.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "triton-inference-server.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.imageName }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

          resources:
            limits:
              nvidia.com/gpu: {{ .Values.image.numGpus }}

          args: ["trtserver", "--model-store={{ .Values.image.modelRepositoryPath }}"]

          ...
      volumes:
      - name: tritonvol
        hostPath:
          path: /home/to/model_repository

This is a partial file.

Svetlana
