Revamping our Quantized models docs

See original GitHub issue

We’re rewriting our models docs to make them clearer and simpler, and to properly document the upcoming multi-weight API. This issue is about adding docs for the Quantized classification models.

Our latest new docs are currently here (this link is likely outdated by the time you look at it, but it doesn’t matter; the skeleton is there). We created a separate section that will eventually be merged into the main one. We have documented a few models, but most of them are still missing; the models that still need docs are listed below. If you’d like to participate, please comment below with a message saying “I’m working on XYZ”, where XYZ is a model, so that others don’t pick the same one as you. To keep things simple, please submit one PR per model, but feel free to contribute more than one model.

How to write docs for a model

Note: below are detailed instructions. This makes it look more complicated than it actually is. Don’t be scared!

A great place to start is to look at the changes in this PR that documents SqueezeNet. You’ll need to do exactly the same for your model:

  • Create a new .rst file in https://github.com/pytorch/vision/tree/main/docs/source/models. The file should look like this, with a link to the original paper, and a list of the corresponding model builders. It should also mention the base model class and link to the .py file where it is defined.
  • Update the list in https://github.com/pytorch/vision/blame/main/docs/source/models_new.rst to link to this new file (without the .rst suffix). Please keep the list alphabetically sorted.
  • Update the docstring of each new model builder, similarly to this one.
    • There is a 1:1 mapping between a model builder and a Weights enum. For example, the docstring of squeezenet1_0 makes direct references to SqueezeNet1_0_Weights. For Quantized models there might be more than one Weights enum for a single model builder; check the link above!
    • Don’t forget the autoclass directive in the docstring. This will auto-generate documentation for the Weights enums. You don’t need to understand how this is done, but if you’re curious, it’s done here.
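
As a rough sketch of what such a page tends to look like (the model name, paper link, module paths, and builder names below are placeholders for illustration, not taken from the issue or the repo), a new .rst file under docs/source/models might contain:

```rst
Quantized SomeNet
=================

.. currentmodule:: torchvision.models.quantization

The Quantized SomeNet model is based on the `SomeNet: A Hypothetical
Architecture <https://example.com/paper>`_ paper.

Model builders
--------------

The following model builders can be used to instantiate a quantized SomeNet
model, with or without pre-trained weights. All the model builders internally
rely on the ``torchvision.models.quantization.somenet.QuantizableSomeNet``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/quantization/somenet.py>`_
for more details about this class.

.. autosummary::
    :toctree: generated/
    :template: function.rst

    somenet1_0
```

The docstring of each builder listed in the autosummary then carries the autoclass directive for its Weights enum(s), which is what generates the per-weight documentation.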

To build the docs locally, please look at our contributing guide. You won’t need to worry about the gallery example, so always use make html-noplot instead of make html to save time.

Please don’t hesitate to ping us if you need any help / guidance or if you have any questions!


Quantized models that need docs are:

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 15 (15 by maintainers)

Top GitHub Comments

9 reactions
datumbox commented, May 13, 2022

@frgfm This is the open-source version of musical chairs: ping more contributors than tickets and see what happens. 😄 😆

In all seriousness, thanks for responding and no worries. We are very lucky to have you all supporting us like this. We have many more improvements in the pipeline coming up that we could use your help with. Thanks for the support!

6 reactions
datumbox commented, May 17, 2022

Wow, I think we are done! Thanks a lot for the help!
