[Roadmap WIP] Standardize and increase coverage for TorchBench
Motivation
TorchBench is a collection of open-source benchmarks used to evaluate PyTorch performance. It provides a standardized API for benchmark drivers, covering both evaluation (eager/JIT) and training. TorchBench includes many popular models, making it convenient for users to debug and profile them.
To standardize performance evaluation and increase coverage, TorchBench can be enhanced in the following three aspects on CPU:
- Fit for typical user scenarios
- Well integrate new features of PyTorch
- Increase benchmark coverage
Detailed proposal
Fit for typical user scenarios (especially in test.py/test_bench.py)
Add CPU runtime configuration options to the launch script:
- Add core binding option
- Add gomp/iomp option
- Add memory allocator option
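A minimal sketch of what such a launch wrapper could look like. All flags, library paths, and the test name are illustrative assumptions, not the actual TorchBench launch script; it assumes numactl, Intel OpenMP, and jemalloc are installed at the paths shown.

```shell
#!/usr/bin/env bash
# Illustrative launch wrapper; paths and test names are assumptions.

# Core binding: pin the workload (and its memory) to NUMA node 0.
BIND="numactl --cpunodebind=0 --membind=0"

# OpenMP runtime: use Intel OpenMP (iomp) instead of GNU OpenMP (gomp).
export OMP_NUM_THREADS="$(nproc)"
export KMP_AFFINITY="granularity=fine,compact,1,0"   # iomp-specific affinity
export LD_PRELOAD="/opt/intel/lib/libiomp5.so"       # assumed install path

# Memory allocator: preload jemalloc ahead of glibc malloc.
export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so:${LD_PRELOAD}"

$BIND python test_bench.py -k "test_eval[resnet50-cpu-eager]"
```

Exposing these as script options (rather than requiring users to set environment variables by hand) is what makes the scenarios reproducible across machines.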
Support performance metrics in test.py/test_bench.py:
- Add throughput: samples / total time
- Add latency: total time / samples
- Add an FPS-like report
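The two metrics are inverses of each other. A sketch of the computation (the function name is hypothetical, not a TorchBench API):

```python
def throughput_and_latency(total_time_s: float, num_samples: int):
    """Return (throughput in samples/s, per-sample latency in s)."""
    throughput = num_samples / total_time_s   # samples / total time
    latency = total_time_s / num_samples      # total time / samples
    return throughput, latency

# Example: 256 samples processed in 2.0 seconds.
tput, lat = throughput_and_latency(2.0, 256)
print(f"throughput: {tput:.1f} samples/s, latency: {lat * 1000:.2f} ms/sample")
# → throughput: 128.0 samples/s, latency: 7.81 ms/sample
```

For batched inference the throughput number (or its FPS equivalent for vision models) is usually the headline metric, while per-sample latency matters for real-time serving scenarios.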
Well integrate new features of PyTorch
- Enable bf16 datatype support both for inference and training
- Fully support channels_last both for inference and training
- Extend the compiler option to support Dynamo
- Support JIT tracing and cover more models with JIT support
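A sketch of how these features combine on CPU, assuming PyTorch ≥ 2.0; the model and shapes are stand-ins, not TorchBench models, and the Dynamo call uses the lightweight "eager" backend (the default backend is Inductor). JIT tracing would be the analogous torch.jit.trace path.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
model = model.to(memory_format=torch.channels_last)  # channels_last weights

x = torch.randn(1, 3, 32, 32).to(memory_format=torch.channels_last)

# torch.compile engages the Dynamo front end; bf16 inference via CPU autocast.
compiled = torch.compile(model, backend="eager")
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled(x)

print(out.dtype)                                               # torch.bfloat16
print(out.is_contiguous(memory_format=torch.channels_last))    # True
```

Convolution is on the autocast bf16 op list and propagates the channels_last layout, so both features can be verified directly on the output tensor.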
Increase benchmark coverage
Increase model coverage
- Add popular models from the community (e.g., RNN-T)
- Add models requested by real customers (e.g., Multi-Band MelGAN, ViT, and Wav2vec)
- Fix models that do not yet run on CPU (e.g., DALLE2_pytorch, moco, pytorch_struct, tacotron2, timm_efficientdet, vision_maskrcnn)
Port OpBench to TorchBench
- Increase OpBench coverage
- Complete support for dtypes, memory formats, and in-place variants of ops
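One way such a sweep could be sketched with torch.utils.benchmark; the op (ReLU, with its in-place variant), shapes, and iteration count are illustrative choices, not the OpBench configuration.

```python
import torch
import torch.utils.benchmark as benchmark

results = []
for dtype in (torch.float32, torch.bfloat16):
    for memory_format in (torch.contiguous_format, torch.channels_last):
        x = torch.randn(8, 16, 32, 32, dtype=dtype).to(memory_format=memory_format)
        # Sweep the out-of-place and in-place variants of the same op.
        for stmt in ("torch.relu(x)", "x.relu_()"):
            t = benchmark.Timer(stmt=stmt, globals={"x": x, "torch": torch})
            m = t.timeit(100)  # Measurement; times are in seconds
            results.append((str(dtype), str(memory_format), stmt, m.median))

for row in results:
    print(row)
```

Covering the full (dtype × memory format × in-place) grid per op is exactly what surfaces layout- or dtype-specific regressions that a single-configuration benchmark would miss.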
Issue Analytics
- State:
- Created 10 months ago
- Reactions: 3
- Comments: 6 (3 by maintainers)
@chuanqi129 We plan to deliver the first CPU userbenchmark within a month; it will be about the stability of CPU latency across all TorchBench models. I suggest Intel work on their own userbenchmark (a new one).
I created a userbenchmark doc here: https://github.com/pytorch/benchmark/pull/1328
Thanks @xuzhao9 for the update, I will check it.