
[Roadmap WIP] Standardize and increase coverage for TorchBench


Motivation

TorchBench is a collection of open-source benchmarks used to evaluate PyTorch performance. It provides a standardized API for benchmark drivers, covering both evaluation (eager/JIT) and training. Many popular models are included in TorchBench, which makes it convenient for users to debug and profile them.

In order to standardize performance evaluation and increase coverage, TorchBench can be enhanced on CPU in the following three aspects:

  • Fit for typical user scenarios
  • Integrate new PyTorch features
  • Increase benchmark coverage

Detailed proposal

Fit for typical user scenarios (especially in test.py/test_bench.py)

Add CPU runtime configuration options to the launch script for users (a sketch of how these knobs could be wired is shown after this list):

  • Add a core binding option
  • Add a gomp/iomp (GNU vs. Intel OpenMP) option
  • Add a memory allocator option
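
As an illustration of what such a launch-script wrapper could look like, the sketch below (flag names and library paths are assumptions, not the actual TorchBench CLI) sets the usual CPU knobs through environment variables and numactl before spawning the benchmark command:

```python
# cpu_launch.py -- hypothetical wrapper; flag names and library paths are
# illustrative and not part of TorchBench.
import argparse
import os
import subprocess

parser = argparse.ArgumentParser(description="CPU runtime wrapper (sketch)")
parser.add_argument("--cores", default="0-7", help="physical core range, e.g. 0-7")
parser.add_argument("--omp", choices=["gomp", "iomp"], default="gomp")
parser.add_argument("--allocator", choices=["default", "jemalloc", "tcmalloc"],
                    default="default")
parser.add_argument("cmd", nargs=argparse.REMAINDER,
                    help="benchmark command, e.g. python test_bench.py ...")
args = parser.parse_args()

start, end = (int(c) for c in args.cores.split("-"))
env = os.environ.copy()
env["OMP_NUM_THREADS"] = str(end - start + 1)

preload = []
if args.omp == "iomp":
    # Intel OpenMP: preload libiomp5 and pin threads via KMP_AFFINITY.
    preload.append("/usr/lib/libiomp5.so")     # path is an assumption
    env["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
else:
    # GNU OpenMP: pin threads to the requested core range.
    env["GOMP_CPU_AFFINITY"] = args.cores
if args.allocator == "jemalloc":
    preload.append("/usr/lib/libjemalloc.so")  # path is an assumption
elif args.allocator == "tcmalloc":
    preload.append("/usr/lib/libtcmalloc.so")  # path is an assumption
if preload:
    env["LD_PRELOAD"] = ":".join(preload)

# Core binding: pin the process and its memory with numactl, then run the benchmark.
subprocess.run(["numactl", f"--physcpubind={args.cores}", "--membind=0", *args.cmd],
               env=env, check=True)
```

It would be invoked along the lines of `python cpu_launch.py --cores 0-27 --omp iomp --allocator jemalloc -- python test_bench.py ...`.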

Support performance metrics in test.py/test_bench.py (see the sketch after this list):

  • Add throughput: samples / total time
  • Add latency: total time / samples
  • Add an fps-style report
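
A minimal sketch of how these metrics could be derived from a timed loop in test_bench.py (the model, batch size, and iteration count are placeholders, not the actual harness):

```python
# Metric computation sketch -- the model and loop are placeholders.
import time
import torch

model = torch.nn.Linear(512, 512).eval()
batch = torch.randn(32, 512)
batch_size, num_iters, warmup = batch.shape[0], 100, 10

with torch.no_grad():
    for _ in range(warmup):          # warm up so one-time costs don't skew timing
        model(batch)
    start = time.perf_counter()
    for _ in range(num_iters):
        model(batch)
    total_time = time.perf_counter() - start

samples = num_iters * batch_size
throughput = samples / total_time    # samples per second
latency = total_time / num_iters     # seconds per iteration (batch)
fps = throughput                     # for vision models, samples/s doubles as an fps-style number
print(f"throughput={throughput:.1f} samples/s, latency={latency * 1e3:.2f} ms/iter, fps={fps:.1f}")
```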

Integrate new PyTorch features

  • Enable bf16 datatype support for both inference and training
  • Fully support channels_last for both inference and training
  • Extend the compiler option to support Dynamo
  • Support JIT tracing and cover more models with JIT support (an illustrative sketch of these feature paths follows this list)
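
An illustrative sketch of the four feature paths, assuming PyTorch 2.x APIs (torch.autocast on CPU, channels_last, torch.compile for Dynamo, and torch.jit.trace); how TorchBench actually exposes these options may differ:

```python
import torch

# Placeholder model and input standing in for a TorchBench model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
).eval()
x = torch.randn(8, 3, 224, 224)

# channels_last memory format for both the weights and the input.
model = model.to(memory_format=torch.channels_last)
x = x.to(memory_format=torch.channels_last)

# bf16 inference via autocast on CPU.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    model(x)

# Dynamo path via torch.compile (PyTorch 2.x).
compiled = torch.compile(model)
with torch.no_grad():
    compiled(x)

# JIT tracing path.
traced = torch.jit.trace(model, x)
with torch.no_grad():
    traced(x)
```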

Increase benchmark coverage

Increase model coverage

  • Add popular models from the community (e.g., RNN-T)
  • Add models from real customers (Multi-Band MelGAN, ViT, and Wav2vec)
  • Fix models that are not yet implemented on CPU (e.g., DALLE2_pytorch, moco, pytorch_struct, tacotron2, timm_efficientdet, vision_maskrcnn)

Port OpBench to TorchBench

  • Increase OpBench coverage
  • Complete support for dtypes, memory formats, and in-place variants of ops (see the sketch after this list)
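
As a rough illustration of the kind of sweep this implies, the sketch below benchmarks a single op across dtypes, memory formats, and in-place variants with torch.utils.benchmark (it is not the actual OpBench code):

```python
# Op-level sweep sketch using torch.utils.benchmark; not the actual OpBench code.
import itertools
import torch
import torch.utils.benchmark as benchmark

results = []
for dtype, channels_last, inplace in itertools.product(
        [torch.float32, torch.bfloat16], [False, True], [False, True]):
    x = torch.randn(32, 64, 56, 56, dtype=dtype)
    if channels_last:
        x = x.to(memory_format=torch.channels_last)
    stmt = "x.relu_()" if inplace else "torch.relu(x)"
    timer = benchmark.Timer(
        stmt=stmt,
        globals={"x": x, "torch": torch},
        label="relu",
        sub_label=f"{dtype}, channels_last={channels_last}, inplace={inplace}",
    )
    results.append(timer.blocked_autorange(min_run_time=1.0))

benchmark.Compare(results).print()
```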


Top GitHub Comments

xuzhao9 commented on Nov 29, 2022 (1 reaction)

@chuanqi129 We plan to deliver the first CPU userbenchmark within a month; it will focus on the stability of CPU latency across all TorchBench models. I suggest Intel work on its own (new) userbenchmark.

I created a userbenchmark doc here: https://github.com/pytorch/benchmark/pull/1328

chuanqi129 commented on Nov 30, 2022 (0 reactions)

Thanks @xuzhao9 for the update, I will check it.
