Provide tiny wrapper over pytorch ThroughputBenchmark
See original GitHub issue

🚀 Feature
PyTorch's utils module has provided ThroughputBenchmark since 1.2.0:
>>> from torch.utils import ThroughputBenchmark
>>> bench = ThroughputBenchmark(my_module)
>>> # Pre-populate benchmark's data set with the inputs
>>> for input in inputs:
...     # Both args and kwargs work, same as any PyTorch Module / ScriptModule
...     bench.add_input(input[0], x2=input[1])
>>> # Inputs supplied above are randomly used during the execution
>>> stats = bench.benchmark(
...     num_calling_threads=4,
...     num_warmup_iters=100,
...     num_iters=1000,
... )
>>> print("Avg latency (ms): {}".format(stats.latency_avg_ms))
>>> print("Number of iterations: {}".format(stats.num_iters))
It would be interesting to provide a tiny wrapper over this to simplify usage with ignite.
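To make the idea concrete, here is a minimal sketch of what such a helper might look like. The name benchmark_model and its signature are hypothetical and not part of ignite; it simply pre-populates a ThroughputBenchmark from a data loader of (input, target) batches and runs it:

# Hypothetical sketch only: benchmark_model is not an existing ignite helper.
import torch
import torch.nn as nn
from torch.utils import ThroughputBenchmark


def benchmark_model(model, data_loader, num_calling_threads=4,
                    num_warmup_iters=100, num_iters=1000):
    # Pre-populate the benchmark's data set; targets are ignored because
    # only the forward pass is timed.
    bench = ThroughputBenchmark(model)
    for x, _ in data_loader:
        bench.add_input(x)
    return bench.benchmark(
        num_calling_threads=num_calling_threads,
        num_warmup_iters=num_warmup_iters,
        num_iters=num_iters,
    )


# Example usage with a toy model and synthetic batches
model = nn.Linear(10, 2)
data = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(8)]
stats = benchmark_model(model, data)
print("Avg latency (ms):", stats.latency_avg_ms)
print("Number of iterations:", stats.num_iters)

An actual wrapper would also have to decide how kwargs inputs, device placement, and ScriptModules are handled, which is what the discussion in the comments below revolves around.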
Issue Analytics
- State:
- Created 4 years ago
- Comments: 11 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I've used a custom event_filter (mostly because I just want to get more familiar with the code). I hope it is fine that I am not including the docstrings and the input tests here, to keep the output short and because we are still discussing the design. (These are my first contributions to OS projects, sorry for asking trivial questions sometimes.)
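For readers unfamiliar with the pattern, attaching a handler with a custom event_filter in ignite looks roughly like the snippet below; the filter itself is just a placeholder, since the commenter's actual filter is not shown in the issue:

# Illustrative only: the filter below is a placeholder, not the commenter's code.
from ignite.engine import Engine, Events


def train_step(engine, batch):
    # Dummy processing function; a real one would run the model on the batch.
    return batch


trainer = Engine(train_step)


def every_hundredth(engine, event):
    # For ITERATION_COMPLETED, `event` is the iteration count; fire every 100 iterations.
    return event % 100 == 0


@trainer.on(Events.ITERATION_COMPLETED(event_filter=every_hundredth))
def log_progress(engine):
    print("iteration", engine.state.iteration)


trainer.run(list(range(500)), max_epochs=1)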
Hi, I would like to contribute. 😃 But the documentation of throughput_benchmark is rather short. For example, I can't figure out what the point of x2=input[1] is. Looking at the documentation, it doesn't seem like anything is done with this value. Looking at the C/C++ binding (which I have very little experience with), https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/throughput_benchmark.cpp#L108, I don't quite follow whether x2 is used or not. I would assume that the label of the data point is just discarded in the benchmark?

But aside from the internal workings, how would you like to structure the wrapper? What value should the ignite wrapper provide? Should it automatically move the model to a given device, similar to create_supervised_*? Should it optionally create a JIT trace? Or should it just attach to an engine with mostly the same code?

Thanks, Kai
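To make those design questions concrete, one option (device placement plus an optional JIT trace) could look roughly like this; the function name and signature are made up for illustration and are not a proposed final API:

# Rough sketch of one design option; create_throughput_benchmark is hypothetical.
import torch
from torch.utils import ThroughputBenchmark


def create_throughput_benchmark(model, data_loader, device=None, jit_trace=False):
    # Optionally move the model to the target device, like create_supervised_* does.
    if device is not None:
        model = model.to(device)
    # Collect the inputs first; targets from (input, target) batches are unused.
    batches = []
    for x, _ in data_loader:
        batches.append(x.to(device) if device is not None else x)
    if jit_trace:
        # Optionally trace with the first batch so the benchmark runs a ScriptModule.
        model = torch.jit.trace(model, batches[0])
    bench = ThroughputBenchmark(model)
    for x in batches:
        bench.add_input(x)
    return bench

Attaching to an Engine instead would mostly mean collecting the inputs from an ITERATION_COMPLETED handler rather than iterating over the data loader directly.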