Unable to simulate TDD with pytest
See original GitHub issue

I'm not sure how it is with previous versions, but with Python 3.7.3 and pytest 5.0.1, tests are run in alphabetical order rather than in the order they appear in the test file. As I understand it, Exercism's explicit pedagogical aim is to simulate TDD, so it seems pretty important that the tests run in the order they appear in the file, such that `pytest -x` stops at the simplest of the failing test cases, i.e., the case a test-driven developer would have written first.
As it stands now, the first failing test is often for an edge case that isn’t really useful until the program’s basic functionality is up and running.
In the minitest files for the Ruby track, if I remember correctly, `# skip` lines are included for all but the first test, and the learner is instructed to uncomment the skip for the next test once they get a given test to pass.
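For comparison, the same pattern in the Python track's unittest-based test files might look like the sketch below. The exercise, its `hello` function, and the skip message are invented for illustration; `unittest.skip` is the standard-library counterpart of pytest's skip marker:

```python
import unittest


def hello():
    # Placeholder solution for the hypothetical exercise.
    return "Hello, World!"


class HelloWorldTest(unittest.TestCase):
    # The first test is active, so `pytest -x` starts here.
    def test_say_hi(self):
        self.assertEqual(hello(), "Hello, World!")

    # Every later test starts out skipped; the learner removes the
    # decorator once the previous test passes, TDD-style.
    @unittest.skip("Remove this decorator after test_say_hi passes")
    def test_say_hi_again(self):
        self.assertEqual(hello(), "Hello, World!")
```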
I would imagine there's a more elegant fix for this than simply going back through all the track's test files and adding hundreds of `# @pytest.mark.skip` decorators, but I've yet to figure out what it is. Maybe there's some way to configure pytest to run the tests in the order they appear in the test file that I'm just unaware of? I've spent a couple of hours looking into this so far, to no avail.
Issue Analytics
- Created 4 years ago
- Comments: 15 (12 by maintainers)
Top GitHub Comments
And that's helpful; however, I'm of the mind that we should close this issue as wontfix because of that lack of ordering guarantees in the upstream data. The canonical data repo is currently locked to major changes, and it remains to be seen precisely how it will be incorporated into V3, so I don't think we can meaningfully move on this right now, except by introducing something like that logic, which, without those same guarantees, really just slows down the tests to provide a false sense of order.
Does anyone strongly disagree with that?
In favor of wontfix.