
Make `--strict`/`--strict-markers`/`xfail_strict` consistent


(This is a follow-up to https://twitter.com/codewithanthony/status/1263130307848335360.)

While reading the docs about markers, I came across mentions of the --strict/--strict-markers CLI arguments and an example of adding them via addopts. It was not clear whether they are the same, so I added both. Since some options can be specified both as a config setting and as a CLI option, I first tried setting them outside of addopts (it didn’t work). I then recalled that there’s also xfail_strict and compared the behaviors: xfail_strict turns out to be config-only, while --strict/--strict-markers are CLI-only. Later I learned that --strict == --strict-markers.
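
For illustration, here is a minimal configuration sketch of the situation described above (the option names are real pytest options; the registered marker name `example_marker` is just a placeholder): the strictness flags are CLI-only, so the usual workaround is to pass them through addopts, whereas xfail_strict is a plain ini option.

```ini
# pytest.ini -- minimal sketch of the current split between CLI and config
[pytest]
# --strict-markers (and its older alias --strict) is a CLI flag only,
# so the common workaround is to pass it through addopts:
addopts = --strict-markers

# xfail_strict, by contrast, is a genuine ini option (config-only):
xfail_strict = true

# markers must be registered here for --strict-markers to accept them;
# "example_marker" is just a placeholder name:
markers =
    example_marker: an example custom marker
```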

It’s not obvious why they exist in these specific forms, but it seems it would be beneficial to address this inconsistency and perhaps add a few more clarifications to the docs.

WDYT?


Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

2 reactions
nicoddemus commented, Jul 23, 2020

Here’s the long story. 😁

--strict and --strict-markers are aliases (the latter is more recent), and both are independent of xfail_strict.

--strict-markers: makes unknown markers raise errors. This was historically named --strict, which is confusing because the name is too broad while the option only applies to markers. The --strict alias remains for backward compatibility only (so we should definitely hunt down and change all instances of --strict to --strict-markers).
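
As a sketch of what this means in practice (the marker and test names below are hypothetical, and `slow` is assumed to be registered under `markers =` in the ini file): a typo’d or otherwise unregistered marker is only a warning by default, but becomes an error with the flag.

```python
# test_marker_typo.py -- hypothetical example
import pytest


@pytest.mark.slow  # assumed to be registered in pytest.ini; fine either way
def test_with_registered_marker():
    assert True


@pytest.mark.sloow  # typo: by default only a PytestUnknownMarkWarning,
def test_with_typoed_marker():  # with --strict-markers collection errors out
    assert True
```

Running `pytest --strict-markers` (or adding the flag to `addopts`) fails collection with an error pointing at the unknown `sloow` marker instead of silently emitting a warning.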

xfail_strict: this changes the default value of the strict parameter of pytest.mark.xfail.

Historically (nobody remembers exactly when, I think), xfail changed its behavior: it used to fail the test suite if the test passed (the assumption being that if a test you expect to fail suddenly passes, something is wrong and you should notice), but now an xfail-marked test that passes is merely shown as X (XPASS) in the terminal and does not fail the test suite.
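
A minimal sketch of that default behavior (the test name is hypothetical): an xfail-marked test that passes is only reported as xpassed, and the run still exits successfully.

```python
# test_xfail_default.py -- hypothetical example of the lenient default
import pytest


@pytest.mark.xfail(reason="we expect this to fail, but it actually passes")
def test_unexpectedly_passes():
    assert 1 + 1 == 2  # passes, so pytest reports it as xpassed (X), not as a failure
```

With neither strict=True nor xfail_strict set, running this file reports 1 xpassed and exits with code 0.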

We believe this was changed so that pytest.mark.xfail could be used to mark “flaky” tests, which fail on occasion, without breaking the test suite (but again, I’m not sure).

Later on, some users correctly pointed out that they have true “xfail” tests, meaning that if the test passes, the test suite should fail (for example, they test against a known bug in a third-party library that makes a certain test fail; if an upgrade causes the test to pass, they want to know about it loudly, so the test suite fails accordingly). To allow for that use case, we added the strict parameter to pytest.mark.xfail, defaulting to False (again, for backward compatibility). Using @pytest.mark.xfail(strict=True) gives you that behavior.
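
A sketch of that use case (the function, bug, and test names are all hypothetical, with a small stub standing in for the third-party code): strict=True makes the suite fail loudly once the upstream bug is fixed.

```python
# test_known_bug.py -- hypothetical "true xfail" against a simulated upstream bug
import pytest


def third_party_parse_version(text):
    # Stand-in for a third-party function with a known (simulated) bug:
    # it drops the patch component of the version string.
    major, minor, _patch = text.split(".")
    return (int(major), int(minor))


@pytest.mark.xfail(
    strict=True,
    reason="hypothetical upstream bug: patch component is dropped",
)
def test_version_includes_patch():
    # Fails today because of the simulated bug. If an "upgrade" fixes it,
    # the test passes and strict=True turns that XPASS into a hard failure,
    # so the suite goes red and we notice the bug is gone.
    assert third_party_parse_version("1.2.3") == (1, 2, 3)
```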

The xfail_strict ini option changes the default value of the mark’s strict parameter, so users can apply their preference project-wide instead of having to repeat it in every xfail mark.
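
A minimal sketch of applying that project-wide, with a hypothetical per-test opt-out: the ini option flips the default, and an explicit strict=False restores the lenient behavior for a single test.

```ini
# pytest.ini -- make strict xfail the project-wide default
[pytest]
xfail_strict = true
```

```python
# test_flaky.py -- hypothetical opt-out for a single flaky test
import random

import pytest


@pytest.mark.xfail(strict=False, reason="flaky external dependency (hypothetical)")
def test_sometimes_flaky():
    # With xfail_strict = true above, an unexpected pass would normally fail
    # the suite; strict=False keeps the lenient behavior for just this test.
    assert random.random() < 0.5
```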


pytest is a very mature and stable project and has grown its options/docs/features organically over the years. This sometimes leads to options not being as clear as they should be, as well as a ton of backward-compatibility concerns.

That’s why these days we are very careful about introducing new APIs: we know how long we will end up supporting them, and that we have to keep backward compatibility in place.

Again, thanks for bringing this up; this is another area where we could use some improvement. 👍

1 reaction
nicoddemus commented, May 22, 2020

Hi @DahlitzFlorian,

Thanks for the offer to work on this, we appreciate it a lot!

Should we add a short note about this to the documentation, the preference for --strict-markers, and that --strict might be extended in the future?

Probably it is worth adding a historical note somewhere, but there are no plans to extend --strict in the future (I believe it would only lead to more confusion).

Feel free to open a PR and we can discuss the fine details over there. 👍
