
Expect test to fail (xfail style)

See original GitHub issue

🚀 Feature Proposal

The ability to mark a test as expected to fail, perhaps with the following syntax:

describe('Test isCool function works', () => {
  test('jeff is the best', () => {
    expect(isCool("jeff")).toBe(true);
  });

  test.xfail("for some reason I'm not coming out as cool...", () => {
    expect(isCool("will")).toBe(true);
  });
});

And suggested output:

Test Suites: 1 failed, 5 passed, 6 total
Tests:       1 failed, 16 passed, 2 expected failures, 2 unexpected passes, 21 total
Snapshots:   0 total
Time:        2.627s, estimated 4s
Ran all test suites.

Motivation

Many testing frameworks allow you to mark a test as an expected failure. This can be very useful for long-term TDD and for verifying that known bugs are still bugs, without implying that the behaviour is intended.

Example

In the above output, an engineer should be surprised (and pleased) by the 2 unexpected passes, and should change those tests from test.xfail back to test if their commit caused the fixes. Otherwise they can leave them as-is for an engineer who understands those tests to mark them as resolved.
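Concretely, promoting an unexpected pass would be a one-line change. A sketch using the proposed (not yet existing) test.xfail syntax and the isCool example from above:

// While the bug is open: marked as an expected failure
test.xfail("for some reason I'm not coming out as cool...", () => {
  expect(isCool("will")).toBe(true);
});

// After a commit fixes isCool and the run reports an unexpected pass:
// promote the expected failure back to a regular test
test("will is coming out as cool", () => {
  expect(isCool("will")).toBe(true);
});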

Pitch

I feel this is a core feature as it's a fundamentally new result status for tests.

See some great comments on an older issue, #8317.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 96
  • Comments: 5

Top GitHub Comments

13 reactions
snoozbuster commented, Jun 9, 2021

+1 on this. Sometimes we have frontend functionality which needs to be hard-disabled due to a backend/API bug. Often this is code which is tested and works, but the data isn't reliable enough to show it to customers. After stubbing the code out we can use it.skip to disable these tests, but nothing forces them to be re-enabled once the upstream services are working and the code is turned back on (someone unaware of the test may leave the it.skip in - after all, "the tests are passing"). An xfail-like interface would start failing once the feature was re-enabled, ensuring that the test is turned back on along with it.
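A minimal sketch of the difference being described, using a hypothetical widget test (renderWidget is illustrative, and it.xfail is the proposed rather than an existing Jest API):

// it.skip never fails, so it can silently outlive the workaround:
// once the upstream bug is fixed, nothing reminds anyone to re-enable it.
it.skip('shows the widget', () => {
  expect(renderWidget().visible).toBe(true);
});

// An xfail-style test inverts the result: while the feature is stubbed
// out it "passes" as an expected failure, and once the feature is
// re-enabled it reports an unexpected pass that must be acted on.
it.xfail('shows the widget', () => {
  expect(renderWidget().visible).toBe(true);
});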

1 reaction
dgkimpton commented, Nov 30, 2021

I just came here to make a similar request; I was going to call it test.quarantine, but the functionality is the same. I have a test that is currently broken, and I accept that, but I would like to know if something happens to make it pass. So ideally this would make a failing test pass until it would actually pass, at which point it should fail.

Updated: I previously added some workarounds, none of which were very good. I've since come up with a "better" one (although it's pretty unpleasant code).

// Run the test body immediately, at collection time: if it throws,
// register a todo placeholder; if it passes, register a test that
// fails, flagging the "quarantine escape".
test.quarantine = function (description, func) {
  describe('in quarantine', () => {
    try {
      func()
      // The body passed, which is unexpected for a quarantined test.
      test(description, () => {
        const e = new Error('[' + description + '] was expected to fail, but it passed')
        e.name = 'Quarantine Escape'
        throw e
      })
    } catch (e) {
      // The body failed as expected; record it as work still to do.
      test.todo('fix ' + description)
    }
  })
}

used as

  test.quarantine('testA', () => {
    expect(false).toBe(true)
  })
  test.quarantine('testB', () => {
    expect(false).toBe(false)
  })

which shows

in quarantine
    × testB (1 ms)
    ✎ todo fix testA

I'll leave this here in case it's useful to anyone.
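One caveat with the snippet above: func() is invoked synchronously at collection time, so an async test body (one that returns a promise) slips past the try/catch. A variant that defers the check into the test body itself avoids this; a minimal sketch, not from the thread:

// Register one test whose result is inverted: a throwing (or rejecting)
// body counts as the expected failure; a clean run is reported as a failure.
const xfail = (description, func) => {
  test('[xfail] ' + description, async () => {
    try {
      await func() // handles sync and async bodies alike
    } catch (e) {
      return // still failing, as expected
    }
    throw new Error('[' + description + '] was expected to fail, but it passed')
  })
}

Because the inversion happens inside an ordinary test body, async tests and test timeouts behave as usual.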

