Documentation: Loading pipelines for inference
Reading about saving and loading, I find it hard to understand how to save and load a model in order to use it for inference. In particular, it's not clear to me how the setup() phase relates to saving and loading.
This page gives an overview of the lifecycle of a model. It implies that setup() occurs after load; however, the setup method does not seem to be called anywhere except when fitting a model.
It would be nice to have documentation on how the lifecycle works for inference. For example:
- Am I supposed to call setup() manually, after load? If so:
  - How do I recursively set up a pipeline without implementing it myself?
  - Is setup supposed to be run on a loaded state? This requires care, as one could easily overwrite loaded state in setup. This should be documented.
- Am I supposed to run setup() before load?
  - This seems unlikely looking at the flowchart.
  - It does not work with the load() method in ExecutionContext, which returns a new instance.
- Am I supposed to not run setup() at all before inference?
  - I'm suspecting this is the idea (a rough sketch of what that flow might look like follows this list).
  - It would be nice to have documentation on this, with an example. It has implications on how saving, loading, and setup all need to be written.
  - I can't make sense of the flowchart if this is the case.
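For concreteness, here is a rough sketch of what I suspect option 3 would look like. The Identity step, the caching_folder path, the full_dump=True flag, and the exact save/load signatures are just my guesses and may differ between Neuraxle versions:

```python
from neuraxle.base import ExecutionContext, Identity
from neuraxle.pipeline import Pipeline

CACHE_DIR = "caching_folder"  # illustrative folder name

# Training time: fit() triggers setup() internally on every step, then persist.
pipeline = Pipeline([Identity()])  # Identity() stands in for real steps
pipeline = pipeline.fit([[0.0], [1.0]], [0, 1])
pipeline.save(ExecutionContext(CACHE_DIR), full_dump=True)

# Inference time: load the already-set-up pipeline and transform directly,
# without ever calling setup() manually.
loaded = ExecutionContext(CACHE_DIR).load(pipeline.get_name())
predictions = loaded.transform([[2.0]])
print(predictions)
```

If that is indeed the intended flow, having it spelled out like this in the docs would already answer most of my questions.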
Hello Joel!
Here is some useful information regarding your question.
Side note: I think the flowchart may be getting a bit old.
Regarding your three options, the third one is the intended usage: setup() is expected to be called only through pipeline fit calls. From there, you have a couple of options.
Overall, I agree with you that setup is poorly documented and might need to be revisited eventually.
Feel free to ask more questions if you have any; I'll be glad to help you. Cheers!
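One illustrative (unofficial) pattern for the concern about overwriting loaded state: make setup() idempotent, so that if setup() does run again on a loaded step, it only initializes what was not restored. The NonFittableMixin base, the setup(context) signature, and the is_initialized flag below are assumptions that may vary between Neuraxle versions:

```python
from neuraxle.base import BaseStep, ExecutionContext, NonFittableMixin


class HeavyResourceStep(NonFittableMixin, BaseStep):
    """Hypothetical step that builds an expensive resource in setup()."""

    def __init__(self):
        BaseStep.__init__(self)
        NonFittableMixin.__init__(self)
        self.resource = None

    def setup(self, context: ExecutionContext = None) -> 'HeavyResourceStep':
        # Some Neuraxle versions call setup() with no arguments, others pass an
        # ExecutionContext; the default of None covers both (assumption).
        # Only build the resource if it was not already restored by load(),
        # so a later fit() on a loaded pipeline cannot clobber it.
        if self.resource is None:
            self.resource = {"weights": [0.0]}  # placeholder for real initialization
        self.is_initialized = True
        return self

    def transform(self, data_inputs):
        # The resource is usable whether it came from setup() or from load().
        return data_inputs
```

This keeps fit-time initialization and load-time restoration from stepping on each other without changing how the pipeline is saved.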
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs in the next 180 days. Thank you for your contributions.