
fail_under setting with precision is not working


Summary

I have [report] precision set to 2 and fail_under set to 97.47. My total test coverage reads as 97.47, but I'm getting a failure message and a failing exit code (exit code 2).

Expected vs actual result

Expected: test coverage passes
Actual: FAIL Required test coverage of 97.47% not reached. Total coverage: 97.47%

I even tried modifying fail_under to 97.469, in which case I got this even more nonsensical message:

FAIL Required test coverage of 97.469% not reached. Total coverage: 97.47%
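
A plausible explanation, consistent with the maintainer comment further down, is that the report rounds the total for display while fail_under is compared against the un-rounded value, so a raw total just below the threshold still prints as 97.47%. A minimal sketch of that arithmetic, using 97.4652 as an assumed raw total:

# rounding_sketch.py - illustrates the display-vs-comparison mismatch (raw total is assumed)
raw_total = 97.4652                # what coverage.py might actually compute
displayed = f"{raw_total:.2f}"     # with precision = 2 the report prints "97.47"

print(displayed)                   # 97.47
print(raw_total >= 97.47)          # False -> failure with fail_under = 97.47
print(raw_total >= 97.469)         # False -> failure even with fail_under = 97.469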

Reproducer

Versions

Output of relevant packages: pip list, python --version, pytest --version, etc.

Make sure you include the complete output of tox if you use it (it will show the versions of various things).

Python 3.7.5
pipenv, version 2018.11.26
pytest version 5.4.1
pytest-cov 2.8.1

Config

Include your tox.ini, pytest.ini, .coveragerc, setup.cfg or any relevant configuration.

# .coveragerc
[report]
fail_under = 97.47
precision = 2
skip_covered = true
show_missing = true

Code

Link to your repository, gist, pastebin or just paste raw code that illustrates the issue.

If you paste raw code, make sure you quote it, e.g.:

https://github.com/votingworks/arlo/pull/447/commits/89c50e43216963f06af6e4c5104b67fd33e4ff36

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 6

Top GitHub Comments

1 reaction
jonahkagan commented, Apr 24, 2020

Unrelated to this bug, but as I've been working with test coverage more, I've realized it would be more useful to be able to set a threshold on the actual number of missed lines instead of a percentage.

I am introducing test coverage to a repo that didn't have it before, so I'm trying to lock in the coverage at its current state so I don't regress (until I have time to invest in covering all the remaining bits). The problem with using a percentage is that whenever I write new code, it changes the percentage. Even if all the new code is covered, the percentage increases, so I'll have to update the fail_under threshold with each PR.

If I could lock in the actual number of uncovered lines, then it would be a much more useful baseline to compare to when I add new code.

Wondering if you have thoughts on this. If useful, I could open up a new issue to discuss.
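
For what it's worth, that kind of baseline can be approximated today with the coverage API rather than pytest-cov's fail_under. A minimal sketch, assuming a .coverage data file from a previous pytest --cov run and a made-up baseline number:

# check_missed_lines.py - hedged sketch: fail the build if missed lines exceed a baseline
import sys
import coverage

BASELINE_MISSED = 120          # hypothetical: set this to the repo's current missed-line count

cov = coverage.Coverage()
cov.load()                     # reads the .coverage data file from the last test run

missed = 0
for filename in cov.get_data().measured_files():
    # analysis2 returns (filename, statements, excluded, missing, formatted_missing)
    _, _, _, missing, _ = cov.analysis2(filename)
    missed += len(missing)

print(f"Missed lines: {missed} (baseline {BASELINE_MISSED})")
sys.exit(1 if missed > BASELINE_MISSED else 0)

Run it after pytest --cov, and lower BASELINE_MISSED as coverage improves.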

0 reactions
nedbat commented, Dec 13, 2020

If you are seeing this issue, can you increase the reporting precision to see what the actual coverage value is? For example, if the total coverage is 93.18757, it will be reported to two decimal places as 93.19, but the actual value is less than 93.189.
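
For reference, that check could be done by temporarily raising the precision in the .coveragerc shown above (the value 5 here is arbitrary):

# .coveragerc (temporary, just to surface the raw total)
[report]
precision = 5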

