
Multi stage model inference pipelines

See original GitHub issue

/kind feature

In many situations, an InferenceService is really a composition of many small services. To deploy such a composite service with KFServing today, we have to write a server that wires the small services together and deploy it as a custom model. Whenever the wiring changes, we have to modify the server code and rebuild the image. If KFServing could express the wiring directly in the deployment YAML, it would save a great deal of developer time.

Describe the solution you’d like [A clear and concise description of what you want to happen.]

                                                            +-----------------+
                                                            |   big-service   |
                                                            |                 |
                                                        +--->                 +-----+
          +---------------+        +----------------+   |   |    step 3-1/4   |     |      +-----------------+
          |  big-service  |        | big-service    |   |   |                 |     |      |    big-service  |
          |               |        |                |   |   +-----------------+     |      |                 |
          |               +------->+                +---+                           +----->+                 |
          |   step 1/4    |        |    step 2/4    |   |   +-----------------+     |      |                 |
          |               |        |                |   |   |   big-service   |     |      |    step 4/4     |
          +---------------+        +----------------+   |   |                 |     |      |                 |
                                                        +--->                 +-----+      +-----------------+
                                                            |   step 3-2/4    |
                                                            |                 |
                                                            +-----------------+
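Today, the wiring in the diagram above has to be implemented by hand inside a custom connector server. A minimal sketch of that wiring logic, with hypothetical stand-in functions in place of calls to the real step services (names and behavior are purely illustrative):

```python
# Sketch of the pipeline in the diagram: step 1 -> step 2 ->
# (step 3-1 and step 3-2 in parallel) -> step 4 (fan-in).
# Each "step" stands in for a request to a separate model service;
# plain functions are used here so the wiring itself is easy to see.
from concurrent.futures import ThreadPoolExecutor

def step1(x): return x + 1          # e.g. preprocessing model
def step2(x): return x * 2          # e.g. feature extraction
def step3_1(x): return x - 3        # parallel branch A
def step3_2(x): return x + 3        # parallel branch B
def step4(a, b): return a + b       # aggregation / fan-in

def run_pipeline(x):
    x = step2(step1(x))
    # fan out to the two step-3 services concurrently
    with ThreadPoolExecutor(max_workers=2) as pool:
        branch_a = pool.submit(step3_1, x)
        branch_b = pool.submit(step3_2, x)
    return step4(branch_a.result(), branch_b.result())

print(run_pipeline(0))  # 0 -> 1 -> 2 -> (-1, 5) -> 4
```

Changing any arrow in the diagram means changing `run_pipeline` and rebuilding the image, which is exactly the pain point the issue describes.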

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
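The requested alternative, expressing the wiring in the deployment YAML, might look something like the sketch below. This is purely hypothetical: the apiVersion, kind, and field names are invented for illustration and were not a real KFServing API at the time of the issue.

```yaml
# Hypothetical only: a declarative pipeline spec naming the step
# services from the diagram and how they connect.
apiVersion: serving.example.org/v1alpha1   # hypothetical group/version
kind: InferencePipeline                    # hypothetical kind
metadata:
  name: big-service
spec:
  steps:
    - name: step-1
      next: [step-2]
    - name: step-2
      next: [step-3-1, step-3-2]   # fan out
    - name: step-3-1
      next: [step-4]
    - name: step-3-2
      next: [step-4]               # fan in at step 4
    - name: step-4
```

Rewiring the pipeline would then be a YAML edit rather than a code change and image rebuild.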

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 17 (11 by maintainers)

Top GitHub Comments

1 reaction
Nagarajj commented, Sep 10, 2021

@yuzisun @cliveseldon what is the plan for Inference Graph? It is on the roadmap, but I don’t see anything concrete. It would be good to get some clarity. Thanks.

1 reaction
Iamlovingit commented, May 27, 2020

@yuzisun Thanks for your reply. Some pattern-recognition methods are split into several steps. For example, face recognition:

  1. find the face area
  2. compute the features of each face
  3. match the features against a face database

Usually, steps 1 and 2 are two different models: the output of step 1 is the input of step 2.
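The detect → embed → match chain described above can be sketched as follows. The functions are hypothetical stand-ins for the two models and the database lookup; the point is only that model 1’s output feeds model 2, and model 2’s output is matched against a database.

```python
# Toy sketch of a three-step face-recognition pipeline. A real system
# would call two separate model services; here plain functions show
# how the outputs chain together.

def detect_faces(image):
    # model 1: return face crops (here: fixed-size slices of a toy "image")
    return [image[i:i + 2] for i in range(0, len(image), 2)]

def embed_face(face):
    # model 2: compute a feature "vector" (here: a single number)
    return sum(face)

def match(embedding, database):
    # step 3: nearest neighbour in the face database
    return min(database, key=lambda name: abs(database[name] - embedding))

def recognise(image, database):
    return [match(embed_face(face), database) for face in detect_faces(image)]

db = {"alice": 3, "bob": 10}
print(recognise([1, 2, 5, 6], db))  # embeddings [3, 11] -> ['alice', 'bob']
```

With today’s KFServing, this chain has to live inside one custom container, even though steps 1 and 2 are independently deployable models.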
Read more comments on GitHub >

Top Results From Across the Web

Using Amazon SageMaker inference pipelines with multi ...
In this step, you train multiple models, one for each location. Start by accessing the built-in linear learner algorithm: from sagemaker.amazon.
Read more >
Multi stage model inference pipelines · Issue #846 - GitHub
Multi stage model inference pipelines #846 ... In many situations, inferenceservice is contained many small services.
Read more >
Inference Pipeline with Scikit-learn and Linear Learner
In the following notebook, we will demonstrate how you can build your ML Pipeline leveraging the Sagemaker Scikit-learn container and SageMaker Linear Learner ......
Read more >
Multi-model pipelines - Apache Beam
Composing multiple RunInference transforms within a single DAG makes it possible to build a pipeline that consists of multiple ML models. In ...
Read more >
ML inference in Dataflow pipelines | Google Cloud Blog
Multi-model inference pipelines ... Before we outline the pattern, let's look at the various stages of making a call to an inference...
Read more >
