
Feature Request: Optimization for large models

See original GitHub issue

Hello,

I have a very large model (>2 GB) that I would like to optimize using this library. Unfortunately, I cannot shrink the model below 2 GB, so when I run the optimization I hit protobuf's hard limit: "onnx.ModelProto exceeds maximum protobuf size of 2GB". Is there a way to use external data (use_external_data), or some other "trick" like in the infer_shapes case mentioned here: https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md?

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 9 (1 by maintainers)

Top GitHub Comments

2 reactions
JulesBelveze commented, Apr 11, 2022

+1

0 reactions
HSQ79815 commented, Aug 1, 2022

@michaelroyzen you can find more information in this PR. Previously, large models (>2 GB) were loaded with onnx.load(..., load_external_data=True) and saved with onnx.save(..., save_as_external_data=True), but those functions were implemented in Python. We have implemented C++ load and save functions that support large models.

Read more comments on GitHub >

Top Results From Across the Web

  • How to Optimize a Deep Learning Model | by Zachary Warnes
    Hyperparameter optimization is a critical part of deep learning. Just selecting a model is not enough to achieve exceptional performance.
  • Best Tools for Model Tuning and Hyperparameter Optimization
    Some of its Bayesian optimization algorithms for hyperparameter tuning are TPE, GP Tuner, Metis Tuner, BOHB, and more. Here are the steps you...
  • Optimization story: Bloom inference - Hugging Face
    This article gives you the behind-the-scenes of how we made an efficient inference server that powers BLOOM.
  • Best practices for performance and cost optimization for ...
    If you're loading large modules (for example, TensorFlow Hub models) within the worker nodes in Dataflow, consider increasing the size of the ...
  • Large-Scale Optimization of Hierarchical Features for Saliency ...
    We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency.
