A basic CI asv check would be useful
Is your feature request related to a problem? Please describe.
Currently there’s no automated check to ensure that changes to the benchmarks are valid. For example, in #1443 and #1445, I think currently the best way to be sure that the new benchmarks are valid (i.e. they won’t error when being run in the nightly job) is to check out the PR locally and try it out manually. It would be nice if this were done automatically somehow.
An automated benchmark check would also prevent us from forgetting to update the benchmarks when we make breaking changes to pvlib itself.
Describe the solution you’d like
A new GitHub Actions workflow that builds the asv environments and executes the benchmarks at least once to ensure validity. Note that I’m not suggesting that we actually use the timing results for anything: the goal is to verify that the benchmarks execute without error, not to detect performance regressions. The latter will still be the nightly VM’s responsibility.
Describe alternatives you’ve considered
Running the benchmarks in earnest for PRs would also solve this, but that is still a complicated problem that I don’t want to take on at this point. I think this small step in that direction makes more sense for now.
Additional context
`asv run --quick` seems to do what I want (ref):

> Do a “quick” run, where each benchmark function is run only once. This is useful to find basic errors in the benchmark functions faster. The results are unlikely to be useful, and thus are not saved.
`--strict` is probably also useful here, although see https://github.com/airspeed-velocity/asv/issues/1199
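Putting the above together, a workflow along these lines could do the check. This is only a sketch: the filename, triggers, Python version, and `benchmarks` working directory are assumptions for illustration, not pvlib's actual configuration.

```yaml
# Hypothetical .github/workflows/asv-check.yml -- a sketch, not the real workflow.
name: asv-check

on:
  pull_request:

jobs:
  quick-benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # asv resolves commits, so the full git history is needed
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install asv
        run: pip install asv
      - name: Run each benchmark once to check for errors
        working-directory: benchmarks
        # --quick runs each benchmark function once and discards the timings;
        # --show-stderr surfaces tracebacks from benchmarks that fail.
        # HEAD^! is asv's range syntax for "just this commit".
        run: asv run --quick --show-stderr HEAD^!
```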
Top GitHub Comments
I have added `--show-stderr` in workflow.yml, but it didn’t show the error traceback in the summary. I am a little confused by the `sed` command, and I will learn about it later. I have tried to omit `--python=same`; however, it then shows `Unknown branch master in configuration`, and I didn’t find anything about the branch in `asv.conf`.

Thanks @roger-lcc, this looks great! A couple comments:
- Could you use `--show-stderr` to show information about failed benchmarks? Right now it just prints the name of the failed benchmark, which is fine, but some more information (like an error traceback or something) would be nice.
- Consider omitting `--python=same` so that the environments specified in the configuration file are used. That would make it so that you don’t have to `pip install ephem` and `numba` in the workflow file too.

Is it time to open a PR containing that workflow file? It would be good to get this in place before merging #1443 and #1445.