
New Model Architectures - Implementation and Documentation Details


🚀 The feature

When adding a new model architecture, there are design/implementation details and documentation requirements that need to be taken into account. This issue intends to track those details in a dynamic manner, as they may change over time.

Motivation, pitch

New Model Architectures - Implementation Details

Model development and training steps

When developing a new model, there are several details that must not be missed:

  • Implement a model factory function for each of the model variants

  • In the module constructor, pass layer constructors instead of instances for configurable layers such as norm and activation, and log the API usage with _log_api_usage_once(self)

  • Fuse layers together with existing common blocks where possible; for example, consecutive conv, bn, and activation layers can be replaced by ConvNormActivation

  • Define __all__ at the beginning of the model file to expose the model factory functions; import the model's public APIs (e.g. factory methods) in torchvision/models/__init__.py

  • Create the model builder using the new API and add it to the prototype area. Here is an example of how to do this. The new API requires adding more information about the weights, such as the preprocessing transforms necessary for using the model, meta-data about the model, etc.

  • Make sure you write tests for the model itself (see _check_input_backprop and _model_params in test/test_models.py) and for any new operators, transforms, or important functions that you introduce

  • The new model should be torch scriptable (using torch.jit.script)

  • The new model should be fx compatible (using torch.fx.symbolic_trace)

Note that this list is not exhaustive; there are also details related to code quality etc., but those are rules that apply to all PRs (see Contributing to TorchVision).

Once the model is implemented, you need to train the model using the reference scripts. For example, in order to train a classification resnet18 model you would:

  1. go to references/classification

  2. run the train command (for example torchrun --nproc_per_node=8 train.py --model resnet18)

After training the model, select the best checkpoint and estimate its accuracy with a batch size of 1 on a single GPU. This gives us more reliable accuracy measurements and avoids variance introduced by batch padding (read here for more details).
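Concretely, the train-then-evaluate flow might look like the sketch below. The flag names are taken from torchvision's reference classification script at the time of writing and may differ in your checkout; check `python train.py --help` first.

```shell
cd references/classification

# Multi-GPU training run for the new model
torchrun --nproc_per_node=8 train.py --model resnet18

# Single-GPU evaluation of the best checkpoint with batch size 1
torchrun --nproc_per_node=1 train.py --model resnet18 \
    --test-only --batch-size 1 --resume best_checkpoint.pth
```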

Finally, run the model test to generate the expected model files for testing, and include those generated files in the PR as well:

EXPECTTEST_ACCEPT=1 pytest test/test_models.py -k {model_name}

Documentation and Pytorch Hub

  • docs/source/models.rst:

    • add the model to the corresponding section (classification/detection/video etc.)

    • describe how to construct the model variants (with and without pre-trained weights)

    • add model metrics and reference to the original paper

  • hubconf.py:

  • README.md under the reference script folder:

    • command(s) to train the model
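For the hubconf.py item above: torch.hub looks for a top-level `dependencies` list and module-level callable entrypoints in a hubconf.py at the repository root. A minimal sketch, with a hypothetical entrypoint (in torchvision, the real hubconf.py simply re-exports the model factory functions):

```python
# hubconf.py (sketch) -- lives at the repository root.
dependencies = ["torch"]

from torch import nn


def tiny_net(pretrained: bool = False, **kwargs) -> nn.Module:
    """Hub entrypoint (hypothetical name). In torchvision this would
    import and return the corresponding model factory function."""
    # Stand-in model so the sketch is self-contained; a real entrypoint
    # would construct the real model and load weights when pretrained=True.
    return nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU())
```

Users would then load the model with torch.hub.load("org/repo", "tiny_net").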

Alternatives

No response

Additional context

No response

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 9
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

4 reactions
jdsgomes commented, Feb 7, 2022

That is a good point, and it has been discussed previously with @datumbox and also during this PR review. In short, I think it is fair to say that there are no strong feelings either way, but there were two main arguments for keeping it in a ticket for now: first, we didn’t want to make the contribution guidelines too long; second, the content can change. So I would still favour keeping it here for a while, and if it seems stable enough we can move it to a .md file.

3 reactions
datumbox commented, Feb 4, 2022

I’ve pinned the issue for now but we should consider making use of the Wiki pages, which are better suited for this kind of content. PyTorch core uses them extensively, so it might be worth aligning.
