Make TorchServe multi-framework
We've assumed so far that TorchServe can only work with PyTorch eager mode or TorchScripted models, but our current handler is general enough to make it possible to support ONNX models.
The idea is a hack one of our partners mentioned that involves:
- Adding `onnx` as a dependency in the Dockerfile or requirements.txt
- Loading the `onnx` model in the `initialize` handler
- Making an inference in the `inference` handler
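The steps above can be sketched as a custom handler. This is a hypothetical outline, not the implementation the issue produced: the class name, the `model.onnx` filename, and the assumption that input data arrives as a preprocessed NumPy batch are all illustrative. It follows TorchServe's custom-handler convention of `initialize(context)` and `handle(data, context)` entry points.

```python
import os


class ONNXHandler:
    """Minimal sketch of a TorchServe custom handler backed by onnxruntime."""

    def __init__(self):
        self.session = None
        self.initialized = False

    def initialize(self, context):
        # onnxruntime must be declared in requirements.txt (or the Dockerfile)
        # so the worker can import it; imported lazily here so the module
        # loads even where onnxruntime is absent.
        import onnxruntime as ort

        model_dir = context.system_properties.get("model_dir")
        model_path = os.path.join(model_dir, "model.onnx")  # assumed filename
        self.session = ort.InferenceSession(model_path)
        self.initialized = True

    def inference(self, data):
        # Feed the batch to the model's first input and return all outputs.
        input_name = self.session.get_inputs()[0].name
        return self.session.run(None, {input_name: data})

    def handle(self, data, context):
        if not self.initialized:
            self.initialize(context)
        return self.inference(data)
```

Preprocessing and postprocessing are omitted; in practice they would mirror whatever the equivalent PyTorch handler does, since only the model-loading and inference calls change.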
It may not be the best way to serve ONNX models, but it lets people avoid needing a different serving infrastructure for each type of model.
This is a good level 3-4 bootcamp task. The goal would be to:
- Get a PyTorch model like ResNet-18
- Export it using the ONNX exporter
- Run an inference with it in an ONNX handler and submit it as an example in this repo
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 2
- Comments: 7 (4 by maintainers)
Top GitHub Comments
This was just merged; it will be featured in the next release today.
Hello @msaroufim
Thanks for your initiative! Would love to see TorchServe serving ONNX "out-of-the-box". Any feedback on these benchmarks?