fail_under setting with precision is not working
Summary
I have the `[report]` option `precision` set to 2 and `fail_under` set to 97.47, and my test coverage total is reading as 97.47, but I'm getting a failure message and a failure code (exit code 2).
Expected vs actual result
Expected: test coverage passes
Actual: FAIL Required test coverage of 97.47% not reached. Total coverage: 97.47%
I even tried modifying `fail_under` to 97.469, in which case I got this even more nonsensical message:
FAIL Required test coverage of 97.469% not reached. Total coverage: 97.47%
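A minimal sketch of the arithmetic that would produce this, assuming the total is rounded only for display while `fail_under` is compared against the unrounded value (the 77-of-79-statements figure is hypothetical, not taken from the repository linked below):

```python
# Hypothetical example: 77 of 79 statements covered (illustrative numbers only).
covered, statements = 77, 79
actual_pct = 100.0 * covered / statements   # 97.46835...
displayed_pct = round(actual_pct, 2)        # 97.47 with precision = 2

fail_under = 97.47
if actual_pct >= fail_under:
    print("PASS")
else:
    # The displayed total equals the threshold, yet the unrounded comparison fails.
    print(f"FAIL Required test coverage of {fail_under}% not reached. "
          f"Total coverage: {displayed_pct:.2f}%")
```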
Reproducer
Versions
Python 3.7.5
pipenv, version 2018.11.26
pytest version 5.4.1
pytest-cov 2.8.1
Config
# .coveragerc
[report]
fail_under = 97.47
precision = 2
skip_covered = true
show_missing = true
Code
https://github.com/votingworks/arlo/pull/447/commits/89c50e43216963f06af6e4c5104b67fd33e4ff36
Top GitHub Comments
Unrelated to this bug, but as I've been working with test coverage more, I've realized it would be more useful to be able to set a threshold on the actual number of missed lines instead of a percentage.
I am introducing test coverage to a repo that didn't have it before, so I'm trying to lock in the coverage at its current state so I don't regress (until I have time to invest in covering all the remaining bits). The problem with using a percentage is that whenever I write new code, it changes the percentage. Even if all the new code is covered, the percentage increases, so I'll have to update the fail_under threshold with each PR.
If I could lock in the actual number of uncovered lines, then it would be a much more useful baseline to compare to when I add new code.
Wondering if you have thoughts on this. If useful, I could open up a new issue to discuss.
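As a rough sketch of how that baseline could be enforced today (this is not a pytest-cov feature, just a workaround built on `coverage json`; the baseline of 120 missed lines is an arbitrary example):

```python
# Workaround sketch: fail if the total number of missed lines grows past a baseline.
# Run `coverage json` first so coverage.json exists; its "totals" section includes
# a missing_lines count in recent coverage.py releases.
import json
import sys

BASELINE_MISSED = 120  # arbitrary example baseline to lock in

with open("coverage.json") as f:
    totals = json.load(f)["totals"]

missed = totals["missing_lines"]
if missed > BASELINE_MISSED:
    sys.exit(f"FAIL: {missed} missed lines exceeds the baseline of {BASELINE_MISSED}")
print(f"OK: {missed} missed lines (baseline {BASELINE_MISSED})")
```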
If you are seeing this issue, can you increase the reporting precision to see what the actual coverage value is? For example, if the total coverage is 93.18757, it will be reported to two decimal places as 93.19, but the actual value is less than 93.19.
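Besides raising `precision` in the `[report]` section, the unrounded total can also be printed through coverage.py's Python API; a small diagnostic sketch, assuming a `.coverage` data file already exists (e.g. after a `pytest --cov` run):

```python
# Diagnostic sketch: print the exact total percentage that fail_under is compared against.
import io
import coverage

cov = coverage.Coverage()
cov.load()                               # read the existing .coverage data file
total = cov.report(file=io.StringIO())   # report() returns the total as a float
print(f"exact total: {total!r}")
```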