Expect test to fail (xfail style)
Feature Proposal
The ability to mark a test as expected to fail, perhaps with the following syntax:
describe('Test isCool function works', () => {
  test('jeff is the best', () => {
    expect(isCool("jeff")).toBe(true);
  });

  test.xfail("for some reason I'm not coming out as cool...", () => {
    expect(isCool("will")).toBe(true);
  });
});
And suggested output:
Test Suites: 1 failed, 5 passed, 6 total
Tests: 1 failed, 16 passed, 2 expected failures, 2 unexpected passes, 21 total
Snapshots: 0 total
Time: 2.627s, estimated 4s
Ran all test suites.
Motivation
Many testing frameworks allow you to mark a test as an expected failure. This can be very useful for long-term TDD and for verifying that known bugs are still bugs, without implying that the broken behaviour is intended.
Example
In the above output, an engineer should be surprised (and pleased) by the 2 unexpected passes, and should change those tests from test.xfail back to plain test if their commit caused the fixes (see the sketch below). Otherwise they can leave them as is for an engineer who understands those tests to mark them as resolved.
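For instance, once a commit fixes the underlying bug, the placeholder would be promoted back to a regular test. A minimal sketch using the proposed syntax and the hypothetical isCool function from above:

// Before the fix: documents the known bug without turning the suite red.
test.xfail("for some reason I'm not coming out as cool...", () => {
  expect(isCool("will")).toBe(true);
});

// After the fix lands: the same assertion, now enforced as a regular test.
test("will is cool now", () => {
  expect(isCool("will")).toBe(true);
});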
Pitch
I feel this is a core feature, as it's a fundamentally new result status for tests.
See some great comments from an older issue #8317
Top GitHub Comments
+1 on this. Sometimes we have frontend functionality which needs to be hard-disabled due to a backend/API bug. Often this is code which is tested and works, but the data isn't reliable enough to show it to customers. After stubbing the code out we can use it.skip to disable these tests, but there is nothing which forces them to be re-enabled after the upstream services are working and the code is re-enabled (someone unaware of the test may leave the it.skip in - after all, "the tests are passing"). An xfail-like interface would start failing after the feature was re-enabled in this case, ensuring that the test is also turned on in conjunction.
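To illustrate the failure mode described in that comment (the test name and renderBalanceWidget are hypothetical stand-ins, not code from the issue):

// The skipped test never runs, so nothing ever fails to remind anyone
// to re-enable it once the upstream API bug is fixed and the feature
// is turned back on.
it.skip('shows the account balance widget', () => {
  expect(renderBalanceWidget().visible).toBe(true);
});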
I just came here to make a similar request; I was going to call it test.quarantine, but the functionality is the same. I have a test that is currently broken, and I accept that, but I would like to know if something happens to make it pass. So ideally this would make a failing test pass until it would actually pass, at which point it should fail.
updated: I previously added some workarounds, none of which were very good. I've since come up with a "better" one (although it's pretty unpleasant code). I'll leave it here in case it's useful to anyone.
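A rough sketch of one way such a workaround can look (this is not the commenter's original code, which is not shown on this page; the xfail helper name and isCool are illustrative assumptions):

// Run the body, swallow an expected failure, and fail loudly on an
// unexpected pass (XPASS) so the placeholder cannot be forgotten.
const xfail = (name, fn) =>
  test(name, async () => {
    try {
      await fn();
    } catch {
      return; // expected failure: the known bug is still present
    }
    throw new Error(`xfail test "${name}" passed unexpectedly (XPASS)`);
  });

// used as:
xfail("for some reason I'm not coming out as cool...", () => {
  expect(isCool("will")).toBe(true);
});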