
Pruning model speed up error

See original GitHub issue

After using the pruning API on the model, when I try to perform the speedup, it throws this error:

TypeError: forward() missing 1 required positional argument: 'input'

Traceback (most recent call last):
  m_speedup.speedup_model()
  File "/pruning/lib/python3.6/site-packages/nni/compression/pytorch/speedup/compressor.py", line 503, in speedup_model
    self.infer_modules_masks()
  File "/pruning/Python-test/lib/python3.6/site-packages/nni/compression/pytorch/speedup/compressor.py", line 349, in infer_modules_masks
    self.update_direct_sparsity(curnode)
  File "/pruning/Python-test/lib/python3.6/site-packages/nni/compression/pytorch/speedup/compressor.py", line 219, in update_direct_sparsity
    state_dict=copy.deepcopy(module.state_dict()), batch_dim=self.batch_dim)
  File "/pruning/Python-test/lib/python3.6/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 80, in __init__
    self.output = self.module(*dummy_input)
  File "/pruning/Python-test/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
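The traceback shows the speedup pass calling each module with the traced dummy input (`self.module(*dummy_input)`); when that input does not match the module's `forward()` signature, Python raises exactly this `TypeError`. A minimal plain-Python sketch of the mechanism (`ToyModule` is a hypothetical stand-in that mimics `torch.nn.Module`'s call dispatch, not NNI's actual code):

```python
# ToyModule mimics torch.nn.Module's __call__ -> forward dispatch, so we can
# reproduce the "missing positional argument" failure without torch installed.

class ToyModule:
    def __call__(self, *args, **kwargs):
        # Same shape as torch's _call_impl: forwards whatever it received.
        return self.forward(*args, **kwargs)

    def forward(self, input):
        return [x * 2 for x in input]

m = ToyModule()

# The speedup pass calls each module with the traced dummy input.
# If that dummy input is empty, forward() receives no positional args:
try:
    m()          # mimics self.module(*dummy_input) with dummy_input == ()
except TypeError as e:
    print(e)     # message names the missing 'input' argument

# Supplying an input of the shape forward() expects makes the call succeed:
print(m([1, 2, 3]))   # [2, 4, 6]
```

In the real issue the fix is usually to make the `dummy_input` handed to the speedup API match the model's `forward()` signature (e.g. a tuple with one entry per positional argument).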

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 14 (7 by maintainers)

Top GitHub Comments

1 reaction
hitesh-hitu commented, Mar 25, 2022

Yes, I’ll provide the logs soon, please keep this issue open.

0 reactions
J-shang commented, Jun 16, 2022

Yes, the dummy input is used for tracing the model graph. Please refer to the usage of example_inputs in torch.jit.trace: https://pytorch.org/docs/stable/generated/torch.jit.trace.html
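As a sketch of that tracing convention (the `AddMul` module below is a made-up two-input example, not code from the issue): `torch.jit.trace` unpacks `example_inputs` as `model(*example_inputs)`, so a module whose `forward()` takes several arguments needs a tuple of tensors, and the `dummy_input` passed to NNI's speedup follows the same rule.

```python
import torch
import torch.nn as nn

# Hypothetical two-input module: its forward() signature is what the
# example_inputs / dummy_input must match.
class AddMul(nn.Module):
    def forward(self, a, b):
        return a * 2 + b

model = AddMul()

# One tuple entry per positional argument of forward(); trace calls
# model(*example_inputs) internally.
example_inputs = (torch.randn(1, 4), torch.randn(1, 4))
traced = torch.jit.trace(model, example_inputs)

# The traced module is called the same way as the eager one.
out = traced(torch.ones(1, 4), torch.ones(1, 4))
print(out)   # tensor of 3.0s, shape (1, 4)
```

Passing a single bare tensor here (or to the speedup's dummy input) when `forward()` expects two arguments produces exactly the "missing 1 required positional argument" error from the report.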


Top Results From Across the Web

SPDY: Accurate Pruning with Speedup Guarantees
Specifically, SPDY determines good layer-wise error scores via local search, which assesses the quality of profiles determined through our DP algorithm by how …
Pruning deep neural networks to make them fast and small
If we prune too much at once, the network might be damaged so much it won't be able to recover. So in practice...
Speedup Model with Mask - Neural Network Intelligence
Masks can be used to check model performance of a specific pruning (or sparsity), but there is no real speedup. Since model speedup...
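The mask-versus-speedup distinction above can be illustrated with a plain-Python sketch (the 3×2 weight matrix and channel mask are invented for illustration): masked weights are zero but still stored and multiplied, while speedup physically removes the pruned rows.

```python
weight = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 output channels x 2 inputs
mask   = [1, 0, 1]                               # channel 1 is pruned

# Masked model: same shape, same number of multiply-adds -> no real speedup.
masked = [[w * m for w in row] for row, m in zip(weight, mask)]
print(len(masked), "rows kept after masking")    # 3 rows kept after masking

# After speedup: pruned rows are physically removed -> a genuinely smaller layer.
pruned = [row for row, m in zip(weight, mask) if m]
print(len(pruned), "rows kept after speedup")    # 2 rows kept after speedup
```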
Why Not Prune Your Neural Network? - Cross Validated
So the result of pruning is often a neural network that is smaller, but no faster and has worse performance. In many cases,...
Model Compression via Pruning - Towards Data Science
In short, pruning eliminates the weights with low magnitude (That does not contribute much to the final model performance).
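A minimal plain-Python illustration of that magnitude-pruning idea (the weight values and the 0.1 threshold are arbitrary, not taken from any real model): weights whose absolute value falls below the threshold are zeroed out.

```python
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
threshold = 0.1

# Keep a weight only if its magnitude clears the threshold.
mask = [1 if abs(w) >= threshold else 0 for w in weights]
pruned = [w if m else 0.0 for w, m in zip(weights, mask)]

print(pruned)   # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
print(f"sparsity: {mask.count(0) / len(mask):.2f}")   # sparsity: 0.50
```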
