Feature request: run tests and write results as expected output
See original GitHub issue.

The most annoying thing about doctests is having to manually update them all when something changes the repr of your outputs, or a random seed, etc. If there was an xdoctest mode which wrote the output instead of comparing it, I’d switch over tomorrow 😀
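To make the pain point concrete, here is a minimal hypothetical example (the function and its doctest are invented for illustration, not taken from any real project): a doctest that pins an exact repr breaks as soon as that repr changes, even when the behavior is still correct, and the expected output has to be edited by hand.

```python
def describe(name):
    """
    Example:
        >>> describe('widget')
        {'name': 'widget', 'count': 0}
    """
    # If a new key is later added to this dict (or the repr of any value
    # changes), every doctest that pins the exact repr has to be updated
    # by hand, even though nothing is actually wrong.
    return {'name': name, 'count': 0}
```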
FYI, I’m warming up to this idea. I’ve experienced several cases where it would have been useful, and I might implement it. I’ll comment if I start on it, so if anyone else wants to have a go at it, feel free to get to it before I do.
So my feeling on intentional updates is that it’s a normal part of code maintenance. If you update the code in a way that modifies behavior or adds an additional attribute, it seems reasonable that part of that update would also involve making changes to tests. Either that, or tests could be written in such a way that they only check that expected components of the output exist and allow new components to pass through. I’ve certainly encountered this problem before, but I haven’t felt a dire need to automate this part of the development cycle.
I’m worried about two cases:

1. Incorrect output blindly being put into the code. Part of the reason I like manually updating “want” strings when things change is that it lets me, as a maintainer, gut-check whether the change is reasonable. Saying “all my tests are correct, document the output” is a pretty unsafe thing to do.
2. Superfluous output ending up in the code. I recently hit a case that demonstrates this point.
My doctest failed because I had changed the output from integers to floats, so it wanted a bunch of `.00` at the end of each number. However, I had also added extra loop-reporting output because the function was taking a long time for larger inputs. Therefore, if I had auto-changed the “want” string to whatever the doctest “got”, I would have had a mess: the loop-reporting output, timestamp and all, baked into the “want” string instead of the clean numeric output I actually wanted to document. (The doctest would also have failed again on the next run, because a timestamp appeared in the “want” string.)
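A hypothetical reconstruction of that situation (not the actual code from the issue) might look like the following; the point is that a blind auto-update would copy the progress line, timestamp and all, into the expected output.

```python
def totals(rows):
    """
    Example:
        >>> totals([[1, 2], [3, 4]])
        [3, 7]
    """
    # Imagine this function is later changed to (a) return floats and
    # (b) print a timestamped progress line because it is slow on large
    # inputs. Regenerating the expected output from whatever the doctest
    # "got" would then produce something like:
    #
    #     >>> totals([[1, 2], [3, 4]])
    #     [1623456789] processing 2 rows
    #     [3.0, 7.0]
    #
    # which bakes the progress line into the test and still fails on the
    # next run, because the timestamp never matches.
    return [sum(row) for row in rows]
```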
Note that xdoctest’s got/want check only verifies the trailing output lines, so the `...` isn’t really needed. That being said, it’s a reasonable use case, and code that writes code is something I’m generally into. I’m not against giving the user tools to shoot themselves in the foot, although I would want to clearly document that something like this has that capability. I don’t think I would want to implement this myself, but I’d probably accept a PR if you wanted to take a shot at it.
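To illustrate the trailing-line behavior described above (a hypothetical example, assuming the check really does compare only the final output lines): the doctest below documents only the last line of what gets printed, which, per that description, should be enough without a leading `...`.

```python
def summarize(values):
    """
    Example:
        >>> summarize([1, 2, 3])
        total: 6
    """
    # The first printed line is not part of the expected output above; if
    # only trailing output lines are compared, the doctest would still pass
    # without an explicit "..." line covering it.
    print(f'processing {len(values)} items')
    print(f'total: {sum(values)}')
```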
Each DocTest object (defined in `xdoctest.doctest_example.DocTest`) knows the file (`self.modpath`) and line number (`self.lineno`) that the doctest begins on, and contains a list of `xdoctest.doctest_part.DoctestPart` objects (`self._parts`). Each DoctestPart contains `exec_lines` and `want_lines`, so it should be possible to determine the start and ending line number of every “want” statement you would like to replace. The best way to do this would likely be to add some code in `xdoctest.runner.doctest_module` which, after all doctests have finished running, looks at the “failed” doctests (see `run_summary['failed']`), loops through them to build a list of the files, line numbers, and “want” outputs for each doctest, and then goes through those files / line numbers in reverse order and updates the text in the files. This new feature should have a CLI flag in `xdoctest.__main__` that defaults to False, and the functionality should happen after `_print_summary_report` is called.
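A rough, hypothetical sketch of that rewrite step is below. The helper names (`collect_want_edits`, `apply_edits`) and the `got_text_for_part` callback are invented for illustration, the line-offset arithmetic ignores prompt prefixes, indentation, and blank lines, and how the captured “got” text is actually obtained from xdoctest is left as an assumption; the sketch only illustrates the grouping-by-file and reverse-order rewriting described above.

```python
def collect_want_edits(failed_doctests, got_text_for_part):
    """Build (path, start, end, new_lines) tuples describing every "want"
    block that should be replaced by the output the doctest actually got.

    `failed_doctests` is assumed to be the DocTest objects from
    run_summary['failed']; `got_text_for_part` is a hypothetical callback
    returning the captured output lines for one DoctestPart (or None).
    """
    edits = []
    for doctest in failed_doctests:
        # Treat doctest.lineno as a 0-based index into the file for
        # simplicity; a real implementation would account for 1-based line
        # numbers, ">>> " prompts, and docstring indentation.
        offset = doctest.lineno
        for part in doctest._parts:
            start = offset + len(part.exec_lines)
            end = start + len(part.want_lines)
            got_lines = got_text_for_part(doctest, part)
            if got_lines is not None:
                edits.append((doctest.modpath, start, end, got_lines))
            offset = end
    return edits


def apply_edits(edits):
    """Rewrite the files bottom-up so earlier edits don't shift the line
    numbers of edits that come later in the same file."""
    by_path = {}
    for path, start, end, new_lines in edits:
        by_path.setdefault(path, []).append((start, end, new_lines))
    for path, file_edits in by_path.items():
        with open(path, 'r') as f:
            lines = f.readlines()
        # Apply edits from the bottom of the file upward.
        for start, end, new_lines in sorted(file_edits, key=lambda e: e[0], reverse=True):
            lines[start:end] = [line + '\n' for line in new_lines]
        with open(path, 'w') as f:
            f.writelines(lines)
```

Wiring this up would then amount to calling something like these helpers from `xdoctest.runner.doctest_module` after `_print_summary_report`, guarded by the new opt-in CLI flag.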