Mark tests that test overall behaviour that other tests expand on in more detail
It should be possible to mark a test as covering overall behaviour that other tests then check in more specific detail. In other words, we should be able to let Jest know that if a certain test fails, a bunch of other tests are expected to fail too and aren't really relevant.
We could do this either with a way to mark a test in a suite as important, or some way of marking a bunch of tests as dependent on another test.
Imagine this abstract scenario where we have a `foo()` that returns `1` and is then changed to return `"string"`.
```js
// Old implementation
function foo() {
  return 1;
}

// New implementation
function foo() {
  return "string";
}

describe('foo()', () => {
  it('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });

  it('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });

  it('should be a finite number', () => {
    expect(foo()).toBeLessThan(Infinity);
    expect(foo()).toBeGreaterThan(-Infinity);
  });

  it('should not be a fraction', () => {
    expect(Number.isInteger(foo())).toBe(true);
  });
});
```
Every one of these tests is going to fail because `foo()` now returns a string instead of a number. But `should be a number` is the only test that matters here: the moment `should be a number` fails, every other test is irrelevant. For example, `should be a finite number` telling you that a string is not an infinite number does nothing to help with the real problem, that a string is not a number, which the first test already covers.
However, when Jest runs a test suite like this, it outputs all four failures to the console, each with a long diff of details. The only failure that actually tells us what we need to know is the first one. In a complex application, where the more detailed tests produce very specific errors, the failures printed at the end just make it harder to work out what actually broke unless we scroll all the way back up.
If we were able to mark a test in some way as a parent of, a dependency of, more important than, … other tests, then Jest could try to improve the test results in this case. It could ignore the other tests entirely, or report that they failed without printing the detailed diffs for tests we already expect to fail, so the developer can go straight to figuring out why the parent test failed.
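Purely as an illustration of what such a marker could look like, here is a sketch; `it.prerequisite` is a made-up name, not an existing Jest API:

```js
describe('foo()', () => {
  // Hypothetical marker: if this test fails, Jest would skip (or summarize
  // without diffs) the remaining tests in this describe block.
  it.prerequisite('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });

  // These would be reported as expected failures, with no detailed output,
  // whenever the prerequisite above fails.
  it('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });

  it('should not be a fraction', () => {
    expect(Number.isInteger(foo())).toBe(true);
  });
});
```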
As a practical example, I am writing tests for a Redux store that represents a form template. Instead of hardcoding the structure of this template in my test suite, I use a snapshot test to make sure the `newTemplate()` action behaves correctly, and then I use `newTemplate()` to generate an initial template. The other actions (the ones that just modify the template) are then tested by using `newTemplate()` to create the empty template that those modification actions operate on.
This keeps my tests sane; however, if `newTemplate()` is broken, I know every other test is going to fail. Their failures are all irrelevant, since the real failure is in `newTemplate()`, and that is what I have to fix rather than the individual modification actions.
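A minimal sketch of that pattern, assuming an illustrative reducer and action names that are not from the original project:

```js
import { templateReducer, newTemplate, renameTemplate } from './templateStore';

describe('template store', () => {
  // Snapshot test guards the structure that newTemplate() produces.
  it('newTemplate() creates the expected initial template', () => {
    expect(templateReducer(undefined, newTemplate())).toMatchSnapshot();
  });

  // Every other test builds its starting state via newTemplate(),
  // so all of them fail whenever newTemplate() is broken.
  it('renameTemplate() updates the template name', () => {
    const initial = templateReducer(undefined, newTemplate());
    const renamed = templateReducer(initial, renameTemplate('Survey'));
    expect(renamed.name).toBe('Survey');
  });
});
```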
Top GitHub Comments
I’m starting to like this idea:
It gives lots of flexibility in designing the test suite. And it reads well.
I’m more of a fan of just “all other tests in the same and descendant scopes are skipped” than being able to point to some tests directly from other tests.