Multi stage model inference pipelines
See original GitHub issue (label: kind/feature)
In many situations, an InferenceService is composed of many small services. To deploy this big service with KFServing today, we have to write a server that connects the small services together and deploy the whole thing as a custom model. Whenever the connections change, we have to modify the server code and rebuild the image. If KFServing could express these connections directly in the deployment YAML, it would save a lot of developer time. A sketch of the manual workaround follows below.
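As a purely illustrative example of that workaround, the glue below is the kind of code that currently has to be written by hand and baked into a custom predictor image just to chain two of the small services. The service names and cluster-local URLs are placeholders invented for this sketch; only the v1 ":predict" path convention is taken from KFServing.

# Hypothetical glue code that today lives inside a custom predictor image.
# The step names and URLs below are placeholders for this issue's example.
import requests

STEP_1 = "http://big-service-step1.default.svc.cluster.local/v1/models/step1:predict"
STEP_2 = "http://big-service-step2.default.svc.cluster.local/v1/models/step2:predict"

def call(url, instances):
    """POST a v1 prediction request to one of the small services."""
    resp = requests.post(url, json={"instances": instances})
    resp.raise_for_status()
    return resp.json()["predictions"]

def predict(instances):
    """Hard-coded chaining: changing the wiring means editing this file and
    rebuilding the image, which is exactly the pain point described above."""
    out = call(STEP_1, instances)
    out = call(STEP_2, out)
    return out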
Describe the solution you’d like
+---------------+   +---------------+        +---------------+
|  big-service  |   |  big-service  |   +--->|  big-service  +---+
|               |   |               |   |    |  step 3-1/4   |   |   +---------------+
|   step 1/4    +-->+   step 2/4    +---+    +---------------+   +-->+  big-service  |
|               |   |               |   |    +---------------+   |   |               |
+---------------+   +---------------+   +--->|  big-service  +---+   |   step 4/4    |
                                             |  step 3-2/4   |       |               |
                                             +---------------+       +---------------+
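To make the request concrete, the wiring in the diagram could be declared as data, so that rewiring only means editing the deployment spec rather than rebuilding an image. Below is a minimal sketch in plain Python, assuming invented field names ("url", "after") and a toy executor; none of this is an existing KFServing API, it only illustrates the shape of the feature.

# Hypothetical illustration only: the diagram above expressed as a graph of
# steps, plus a tiny executor that walks it. Rewiring the pipeline then means
# editing this data (e.g. a block in the deploy YAML) instead of editing code.
import requests

PIPELINE = {
    "step1":   {"url": "http://step1/v1/models/step1:predict",     "after": []},
    "step2":   {"url": "http://step2/v1/models/step2:predict",     "after": ["step1"]},
    "step3-1": {"url": "http://step3-1/v1/models/step3-1:predict", "after": ["step2"]},
    "step3-2": {"url": "http://step3-2/v1/models/step3-2:predict", "after": ["step2"]},
    "step4":   {"url": "http://step4/v1/models/step4:predict",     "after": ["step3-1", "step3-2"]},
}

def run(pipeline, instances):
    """Execute steps in dependency order; each step receives the outputs of
    the steps named in its "after" list (roots receive the original request)."""
    results = {}
    pending = dict(pipeline)
    while pending:
        progressed = False
        for name, spec in list(pending.items()):
            if all(dep in results for dep in spec["after"]):
                payload = instances if not spec["after"] else [results[d] for d in spec["after"]]
                resp = requests.post(spec["url"], json={"instances": payload})
                resp.raise_for_status()
                results[name] = resp.json()["predictions"]
                del pending[name]
                progressed = True
        if not progressed:
            raise ValueError("pipeline graph has a cycle or references a missing step")
    return results["step4"]

A YAML rendering of the same structure inside the InferenceService spec is essentially what this issue asks for; a production version would also need sensible payload-combination rules and would call independent branches such as step 3-1 and step 3-2 concurrently.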
Anything else you would like to add:
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 2
- Comments: 17 (11 by maintainers)
Top Results From Across the Web
- Using Amazon SageMaker inference pipelines with multi ...
  "In this step, you train multiple models, one for each location. Start by accessing the built-in linear learner algorithm: from sagemaker.amazon. ..."
- Multi stage model inference pipelines · Issue #846 - GitHub
  "Multi stage model inference pipelines #846 ... In many situations, inferenceservice is contained many small services."
- Inference Pipeline with Scikit-learn and Linear Learner
  "In the following notebook, we will demonstrate how you can build your ML Pipeline leveraging the Sagemaker Scikit-learn container and SageMaker Linear Learner ..."
- Multi-model pipelines - Apache Beam
  "Composing multiple RunInference transforms within a single DAG makes it possible to build a pipeline that consists of multiple ML models. ..."
- ML inference in Dataflow pipelines | Google Cloud Blog
  "Multi-model inference pipelines ... Before we outline the pattern, let's look at the various stages of making a call to an inference ..."
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@yuzisun @cliveseldon What is the plan for Inference Graph? It is on the roadmap, but I don't see anything concrete. It would be good to get some clarity. Thanks.
@yuzisun Thanks for your reply. Some pattern-recognition methods are split into several steps. For example, face recognition:
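As an illustration of that kind of split, here is a minimal sketch assuming a hypothetical face-recognition pipeline made of three separate services: detection, per-face embedding, and matching. The stage names and endpoints are placeholders, not taken from the issue.

# Hypothetical multi-step face-recognition pipeline: detect faces, compute an
# embedding per face, then match embeddings against a gallery. Each stage is
# assumed to run as its own small service; the URLs below are placeholders,
# and today each wiring like this needs its own hand-written image.
import requests

DETECT = "http://face-detect/v1/models/detect:predict"
EMBED  = "http://face-embed/v1/models/embed:predict"
MATCH  = "http://face-match/v1/models/match:predict"

def predict_one(url, instance):
    """Call a single downstream step using the v1 ":predict" convention."""
    resp = requests.post(url, json={"instances": [instance]})
    resp.raise_for_status()
    return resp.json()["predictions"]

def recognize(image_b64):
    """detect -> embed (per detected face) -> match, chained by hand."""
    faces = predict_one(DETECT, {"image": image_b64})            # face crops / boxes
    embeddings = [predict_one(EMBED, face)[0] for face in faces]
    return predict_one(MATCH, {"embeddings": embeddings})        # identities / scores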