ONNX support?
Hi, I really like this work!
Given the faster inference it offers, have you considered adding support functions, like the example below, for compiling a SetFitTrainer model
into the ONNX format for production use?
If that sounds promising, I will be happy to make this feature work!
Example:
# Train
trainer.train()
# Compile to onnx
onnx_path = "path/to/store/compiled/model.onnx"
trainer.to_onnx(onnx_path, **onnx_related_kwargs)
Issue Analytics
- State:
- Created a year ago
- Comments: 15 (12 by maintainers)
Top Results From Across the Web
ONNX | Home
ONNX makes it easier to access hardware optimizations. Use ONNX-compatible runtimes and libraries designed to maximize performance across hardware.
onnx/onnx: Open standard for machine learning interoperability
ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production.
ONNX models - Microsoft Learn
Windows Machine Learning supports models in the Open Neural Network Exchange (ONNX) format. ONNX is an open format for ML models.
ONNX Runtime | Home
Support for a variety of frameworks, operating systems and hardware.
ONNX: Easily Exchange Deep Learning Models
ONNX (Open Neural Network Exchange Format) is a format designed to represent any type of Machine Learning and Deep Learning model.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’m not sure if this is helpful, but I was working on deploying some of these models using ONNX, and this is what I came up with so far. If others are looking for a place to start, here is some code that will convert the base model and the head so you can run them separately. I haven’t been able to merge them into one graph yet, but hopefully it’s a start while we wait for #8 😃.
@AnshulP10, please take a look at the PR we’ve been working on: #156. @kgourgou pointed out that the above script has some things you need to modify for some models; this PR hopefully addresses those concerns. The PR includes a function called export_onnx which should do what you want. Let me know if you still have trouble.