Serving a model
Is your feature request related to a problem? Please describe.
Serving a trained model in production.

Describe the solution you’d like
I’d like to understand how to interface with TensorFlow Serving.

Describe alternatives you’ve considered
I’m able to save and load a model, but I’m not sure how to restore and serve it with TensorFlow.
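For context, the save/load round trip mentioned above looks roughly like this. A minimal sketch assuming finetune’s Classifier interface; the data variables are placeholders, not part of the library.

```python
# Sketch of the save/load round trip, assuming finetune's Classifier API.
# train_texts, train_labels, and test_texts are placeholder variables.
from finetune import Classifier

model = Classifier()
model.fit(train_texts, train_labels)    # fine-tune on labeled examples
model.save("model.jl")                  # serialize weights + config to disk

restored = Classifier.load("model.jl")  # restore in a fresh process
predictions = restored.predict(test_texts)
```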

Hi @dimidd, thanks for the feature request.
At the moment the development branch does not support exposing the model through TensorFlow Serving. However, we’re in the middle of a large refactor (#148) that should migrate finetune onto the TensorFlow Estimator API. I can’t make any promises, but since TensorFlow Serving has explicit support for the Estimator framework, it seems likely that exposing the functionality needed to make finetune work with TensorFlow Serving will be straightforward. We’ll keep you posted via this ticket.
–Madison
Thanks! I’ll dive into the code.
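For anyone landing on this issue later, here is a rough sketch of the Estimator-to-TensorFlow-Serving path the comment above alludes to. This is generic TF 1.x Estimator usage, not finetune’s API: the `estimator` object, the feature name, the input shape, and the export path are all assumptions for illustration.

```python
# Generic TF 1.x Estimator export for TensorFlow Serving; NOT finetune's API.
# `estimator`, "input_ids", the shape, and the paths are illustrative only.
import tensorflow as tf

def serving_input_receiver_fn():
    # Hypothetical raw input: batches of token ids padded to length 512.
    input_ids = tf.placeholder(dtype=tf.int64, shape=[None, 512], name="input_ids")
    return tf.estimator.export.ServingInputReceiver(
        features={"input_ids": input_ids},          # what the model_fn consumes
        receiver_tensors={"input_ids": input_ids},  # what clients send over the wire
    )

# Writes a timestamped SavedModel directory that tensorflow_model_server
# can load directly, e.g.:
#   tensorflow_model_server --model_name=finetune \
#       --model_base_path=/tmp/finetune_export --rest_api_port=8501
export_dir = estimator.export_savedmodel(
    export_dir_base="/tmp/finetune_export",
    serving_input_receiver_fn=serving_input_receiver_fn,
)
```

Once exported, clients can hit the REST endpoint (`POST /v1/models/finetune:predict`) with the receiver tensors as JSON.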