MLServer not working with deployment options with Ambassador, seldon-core

I have been trying to deploy a microservice we developed to use with MLServer. We previously deployed it with seldon-core, and we are using Ambassador as well.

The seldon_deployment.yaml file is given below:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: enmltranslation 
spec:
  protocol: kfserving
  annotations:
    project_name: enmltranslation
    deployment_version: v0.1.0
    seldon.io/rest-timeout: '60000'
    seldon.io/rest-connection-timeout: '60000'
    seldon.io/grpc-read-timeout: '60000'
    seldon.io/ambassador-config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: enmltrans_mapping_v0.1.0
      prefix: /microservices/nlp/enmltrans/v0/getpredictions
      service: enmltranslation-pod.default:8000
      rewrite: /nlp/enml/predictions

  predictors:
    - name: default
      graph:
        name: manglish-model 
        type: MODEL
      componentSpecs:
        - spec:
            containers:
              - name: manglish-model  
                image: manglishdummy:v0.1.0 
                ports:
                  - containerPort: 8080
                    name: http
                    protocol: TCP
                  - containerPort: 8081
                    name: grpc
                    protocol: TCP

When accessing the URL via Ambassador, we get a 503 HTTP error indicating the service is unavailable.

Update (April 1, 2022)

I was able to bring up a normal Kubernetes deployment by following a deployment.yaml similar to the one provided in the tutorial. However, Ambassador support for MLServer does not seem to be working at the moment.
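
For reference, a plain Deployment plus Service along those lines might look roughly like the sketch below. The image and container names are taken from the SeldonDeployment above; the Service name, labels, and port mapping are assumptions for illustration (MLServer listens on 8080 for HTTP and 8081 for gRPC by default):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: enmltranslation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: enmltranslation
  template:
    metadata:
      labels:
        app: enmltranslation
    spec:
      containers:
        - name: manglish-model
          image: manglishdummy:v0.1.0
          ports:
            - containerPort: 8080  # MLServer HTTP (default)
            - containerPort: 8081  # MLServer gRPC (default)
---
apiVersion: v1
kind: Service
metadata:
  name: enmltranslation-svc   # hypothetical name
spec:
  selector:
    app: enmltranslation
  ports:
    - name: http
      port: 8000        # the port the Ambassador Mapping above targets
      targetPort: 8080  # forward to MLServer’s HTTP port
    - name: grpc
      port: 8081
      targetPort: 8081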

Update (April 20, 2022)

With the help of @adriangonz’s solution of bypassing the executor in seldon-core, we are now able to customize the URL with Ambassador. But is it possible to do this without bypassing the executor, so that we can still use seldon-core’s graph functionality?

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
adriangonz commented, Apr 22, 2022

Hey @kurianbenoy-sentient,

On the question about the URL rewrite, I’m not an expert on Ambassador, but the Ambassador community may be better placed to answer that one.

On the second one, if you disable the executor you will lose the graph functionality on that particular model. The problem with custom endpoints is that the model’s requests and responses become a black box, so there’s no way for the Seldon Core executor to know how to chain them, and propagate them through the graph.
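
For illustration, a chained graph that depends on the executor might look like the hypothetical sketch below (the transformer node is made up to show the chaining; it is not part of the deployment in this issue). The executor receives each request, calls the transformer, and feeds its output to the model, which it can only do when it can parse the payloads at every step:

predictors:
  - name: default
    graph:
      name: preprocess-transformer   # hypothetical preprocessing step
      type: TRANSFORMER
      children:
        - name: manglish-model
          type: MODEL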

Given that the remaining issues seem to be related to other projects (i.e. Ambassador and Seldon Core), I’ll be closing this ticket. Feel free to open a new one if you find any other MLServer issue or if you have any extra questions.

1 reaction
adriangonz commented, Apr 8, 2022

Hey @kurianbenoy-sentient, you can find more info on bypassing the executor in the Seldon Core docs:

https://docs.seldon.io/projects/seldon-core/en/latest/graph/svcorch.html#bypass-service-orchestrator-version-0-5-0

I’d try that first, removing the Ambassador annotation, mainly to avoid side effects.
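
Based on the linked docs, the bypass itself is an annotation on the predictor, roughly as in this minimal sketch (note that, per the docs, this only works for single-node graphs):

predictors:
  - name: default
    annotations:
      seldon.io/no-engine: "true"   # bypass the service orchestrator (executor)
    graph:
      name: manglish-model
      type: MODEL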
