Performance Profile Log
In working out how best to tackle testing for #500, I think the best solution would be to add and maintain a performance profile log in the repo, similar to a coverage report.
Tests could be written with `timeit`, `cProfile`, `memory_profiler`, etc., and perhaps we could write something like a `pytest` fixture that would generate an `instaviz` report, or similar? I'm not sure how to implement this yet (perhaps I would need to create a separate PyPI package or `pytest` plug-in the project would use?). Please let me know your thoughts.
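To make the idea concrete, here is a minimal sketch of what such a fixture might look like. It is purely illustrative: the fixture name, the use of `time.perf_counter`, and the JSON log path are all assumptions, not an existing API.

```python
# Hypothetical sketch of a timing fixture -- names and log format are assumptions.
import json
import time
from pathlib import Path

import pytest

LOG_PATH = Path("performance_profile.json")  # hypothetical log location


@pytest.fixture
def profile_log(request):
    """Time the body of a test and append the result to a JSON log."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start

    entries = json.loads(LOG_PATH.read_text()) if LOG_PATH.exists() else {}
    entries[request.node.nodeid] = elapsed
    LOG_PATH.write_text(json.dumps(entries, indent=2))


def test_sphere_creation(profile_log):
    # Example use: whatever pyvista call we want to track goes in the test body.
    import pyvista
    pyvista.Sphere()
```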
I think a log makes the most sense because:
- The alternative (in my mind) would be maintaining old copies of code to test against to demonstrate PRs introduce performance improvements, which seems silly and unmaintainable, and
- It may be desirable in the future to add a PR that reduces performance to some degree for a big benefit in readability or usability, so rather than force future PRs to meet or exceed a certain level of performance, we could reference the log and use it as a guide (similar to how coverage reports are used as a guide, but aren't gospel).
What are your thoughts?
@akaszynski @MatthewFlamm @banesullivan I'm unaware of any such Python packages or plug-ins that would implement this. If you have any suggestions, that would be greatly appreciated! I think this could be a great addition!
Goals:
(@pyvista/developers: Please update/correct as needed)
- Verifying specific planned PRs are performance improvements
  - Create workflow for marking a PR as a performance improvement and require testing against `pyvista:main` in `pyvista/pyvista-benchmarks`
  - Add to `CONTRIBUTING.md` documentation on how to perform a performance improvement test and reference it in the PR for reviewers
- Keep a rough idea of library performance and catch unexpected regressions
  - Create `pyvista/pyvista-benchmarks` repo
  - Add benchmark scripts (a rough starting point is sketched after this list)
  - Identify version of VTK to test against
  - Identify hardware to use for test isolation and consistency
  - Create …
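As a starting point for the benchmark scripts, here is a rough sketch of a standalone timing script. The chosen operations (`pyvista.Sphere()` and `compute_normals()`) and repeat counts are only placeholder assumptions; the actual set of operations worth tracking would be up to the maintainers.

```python
# Rough sketch of a standalone benchmark script -- the operations and repeat
# counts are placeholder assumptions, not an agreed-upon benchmark suite.
import timeit

import pyvista

MESH = pyvista.Sphere(theta_resolution=120, phi_resolution=120)


def bench_sphere_creation():
    pyvista.Sphere(theta_resolution=120, phi_resolution=120)


def bench_compute_normals():
    MESH.compute_normals()


if __name__ == "__main__":
    for func in (bench_sphere_creation, bench_compute_normals):
        # Take the best of several repeats to reduce noise from the machine.
        best = min(timeit.repeat(func, number=10, repeat=5))
        print(f"{func.__name__}: {best / 10:.4f} s per call")
```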
Top GitHub Comments
Just as an idea: I personally like to use https://github.com/airspeed-velocity/asv for benchmarking. It is the benchmark suite used, e.g., by NumPy and SciPy (an example for SciPy from Pauli Virtanen can be seen at https://pv.github.io/scipy-bench/).
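For reference, asv benchmarks are plain Python classes whose `time_*` methods are timed automatically, with `setup()` excluded from the measurement, and results from `asv run`/`asv publish` could then serve as the performance log itself. A minimal, hypothetical example of what a pyvista benchmark file could look like (the benchmarked operation is only an illustrative assumption):

```python
# benchmarks/benchmarks.py -- hypothetical asv benchmark file; the benchmarked
# operation (StructuredGrid construction) is only an illustrative assumption.
import numpy as np

import pyvista


class TimeStructuredGrid:
    def setup(self):
        # setup() runs before each timing and is excluded from the measurement.
        rng = np.linspace(0, 1, 100)
        self.x, self.y, self.z = np.meshgrid(rng, rng, rng)

    def time_structured_grid_creation(self):
        pyvista.StructuredGrid(self.x, self.y, self.z)
```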
That is brilliant @adeak, I wish I had thought of that. Doing this, plus the fact that …, would solve the first problem.
Completely agreed. We could get away with doing performance regression testing on a limited, but regular basis.
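One way to act on the log once it exists, sketched very loosely below, is to compare fresh timings against the stored baseline and flag anything that slowed down beyond a tolerance. The log format, file names, and 20% threshold here are all assumptions, not an agreed convention.

```python
# Loose sketch of a regression check against a stored log -- the JSON format,
# file names, and 20% tolerance are all assumptions, not an agreed convention.
import json
from pathlib import Path

TOLERANCE = 1.20  # flag anything more than 20% slower than the baseline


def find_regressions(baseline_path: Path, current_path: Path) -> dict:
    """Return {benchmark name: slowdown ratio} for entries exceeding TOLERANCE."""
    baseline = json.loads(baseline_path.read_text())
    current = json.loads(current_path.read_text())
    return {
        name: current[name] / baseline[name]
        for name in baseline.keys() & current.keys()
        if current[name] > baseline[name] * TOLERANCE
    }


if __name__ == "__main__":
    slowdowns = find_regressions(Path("baseline.json"), Path("current.json"))
    for name, ratio in slowdowns.items():
        print(f"{name} is {ratio:.2f}x slower than baseline")
```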