
pvc problem with custom inferenceservice

See original GitHub issue

/kind bug

What steps did you take and what happened:

I’m trying to deploy a ‘custom’ InferenceService (just using the base tensorflow/serving image). While I am able to declare the volumeMounts within spec/default/predictor/custom/container, there does not seem to be a place to specify the volumes themselves. Looking at the base InferenceService spec, only default and canary seem to be valid at the top level of the spec. Here I’m using a volume I know to work when creating a standalone pod (and it works in general with a tensorflow predictor when referenced with ‘storageUri: pvc://…’).

apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: nlu-confidence-exam
spec:
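  # NOTE: 'volumes' is not a recognized field in the v1alpha2 InferenceService
  # spec, so the webhook rejects the volumeMount below as having no matching volume.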
  volumes:
    - name: azure-file-volume
      persistentVolumeClaim:
        claimName: azurefile
  default:
    predictor:
      custom:
        container:
          image: tensorflow/serving
          env:
            - name: MODEL_NAME
              value: "nlu-confidence-exam"
            - name: MODEL_BASE_PATH
              value: "/data/nlu-models/confidence/exam"
          ports:
            - containerPort: 8500
          volumeMounts:
          - mountPath: /data
            name: azure-file-volume
            readOnly: true

What did you expect to happen: I expected the volume to be mounted, but instead I get this error:

Error from server: error when creating "nlu-confidence-exam.yaml": admission webhook "inferenceservice.kfserving-webhook-server.validator" denied the request: Custom container validation error: volumeMount has no matching volume: volumeMounts[0].name

Anything else you would like to add:

Environment: Running with tag 0.2.1 on top of KF 0.7 base install.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 9 (6 by maintainers)

Top GitHub Comments

6 reactions
decewei commented, Jun 6, 2020

(Quoting @yuzisun’s answer below.)

Is there a way to write to the PVC? The PVC mounted here is read-only, but I can confirm this method works when read-only access is all you need.

1 reaction
yuzisun commented, Nov 22, 2019

Can you try this? There is a well-known environment variable, STORAGE_URI, for custom containers, which KFServing understands and uses to create the PVC volume.

apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: nlu-confidence-exam
spec:
  default:
    predictor:
      custom:
        container:
          image: tensorflow/serving
          env:
            - name: MODEL_NAME
              value: "nlu-confidence-exam"
            - name: MODEL_BASE_PATH
              value: "/mnt/models"
            - name: STORAGE_URI
              value: pvc://{PVC_NAME}/export
          ports:
            - containerPort: 8500
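
For context on why this works: when the InferenceService is admitted, the KFServing controller recognizes the pvc:// scheme in STORAGE_URI and injects the PVC volume and a read-only mount into the generated predictor pod itself. That is why there is no volumes field to set in the InferenceService spec, and why MODEL_BASE_PATH points at /mnt/models. As a rough sketch (illustrative only; the volume name here is an assumption, and the controller’s actual output may differ), the injected stanza is roughly equivalent to what the original manifest tried to write by hand:

# Illustrative sketch only, not the literal generated spec: approximately
# what the controller adds to the predictor pod for
# STORAGE_URI=pvc://{PVC_NAME}/export. The volume name is assumed.
spec:
  volumes:
    - name: kfserving-pvc-source          # assumed, controller-chosen name
      persistentVolumeClaim:
        claimName: {PVC_NAME}
  containers:
    - name: user-container
      volumeMounts:
        - name: kfserving-pvc-source
          mountPath: /mnt/models          # hence MODEL_BASE_PATH=/mnt/models
          readOnly: true                  # hence the read-only mount decewei observed

Once the predictor pod is running, kubectl describe pod on it will show the injected volume and mount.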

Top Results From Across the Web

  • pvc problem with custom inferenceservice · Issue #569 - GitHub
    I'm trying to deploy a 'custom' inferenceservice (just using the base tensorflow/serving image). while i am able to declare the volumeMounts ...
  • PVC - KServe Documentation Website
    This doc shows how to store a model in PVC and create InferenceService with a saved model on PVC. Create PV and PVC....
  • Deploying an Existing Model Within Your Namespace Using a ...
    Issues This KB Resolves Serving a model using an InferenceService ... type for /mnt/pvc/model/ and failed InferenceService predictor pod.
  • PVC - 《KServe v0.7 Documentation》 - 书栈网 · BookStack
    This doc shows how to store a model in PVC and create InferenceService with a saved model on PVC. Create PV and PVC....
  • samples - Go Packages
    Deploy InferenceService with Predictor. KFServing provides a simple Kubernetes CRD to allow deploying single or multiple trained models onto ...
