Proposal: enable spec globbing, enforce isolated renderer processes for each spec
What users want
- To be able to run multiple specs files
How we could do this
- Add glob support to `cypress run`, e.g.:
  `cypress run --spec cypress/integration/login/**/*`
Doesn’t sound that hard, but there are many potential areas this will affect.
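As a rough sketch of what glob expansion could look like under the hood (a real implementation would lean on an existing glob library; the simplified matcher below only supports `*` and `**/` and is purely illustrative):

```javascript
// Illustrative sketch of expanding a --spec glob into a list of spec files.
// Not Cypress's actual implementation.
function globToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*\*\//g, '§')              // placeholder for "**/"
    .replace(/\*/g, '[^/]*')              // * matches within one path segment
    .replace(/§/g, '(?:.*/)?');           // **/ matches zero or more directories
  return new RegExp(`^${escaped}$`);
}

function expandSpecGlob(pattern, files) {
  const re = globToRegExp(pattern);
  return files.filter((f) => re.test(f));
}

const files = [
  'cypress/integration/login/login_spec.js',
  'cypress/integration/login/sso/okta_spec.js',
  'cypress/integration/signup/signup_spec.js',
];

// Matches both login specs, but not the signup spec.
const matched = expandSpecGlob('cypress/integration/login/**/*', files);
```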
What I’m proposing
- Enable multiple specs, but force each spec to run in its own isolated renderer process
Why is this necessary?
The default behavior is bad
As of today, we automatically “merge” and group multiple spec files by using a magical route called: __all
.
This means that when you iterate on a spec by itself, it’s isolated because it’s the only spec file served. But when you run them all together, they are no longer isolated. This can cause unforeseen issues to arise when running all the specs together that are difficult, if not impossible, to reproduce.
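The state-leakage problem can be boiled down to a tiny sketch (this is not Cypress internals, just an illustration of shared vs. fresh environments):

```javascript
// Two "spec files": one leaks state into its environment, the other
// assumes a clean slate.
const specA = (env) => { env.user = 'admin'; };   // leaks a global
const specB = (env) => env.user === undefined;    // passes only on a clean slate

// Merged (__all-style): both specs share one environment.
const shared = {};
specA(shared);
const passesWhenMerged = specB(shared);   // false — specA's leak breaks specB

// Isolated: each spec gets its own fresh environment.
specA({});
const passesWhenIsolated = specB({});     // true
```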
Users don’t use the Test Runner and the CLI as intended
We constantly see users running all their specs from the Test Runner.
The Test Runner is not designed for this use case. It’s designed to run a single or handful of specs at a time as you iterate and build your tests. The Test Runner is specialized for debuggability while the CLI is designed for running all of your tests.
When the Test Runner runs, it’s constantly creating snapshots, holding references in memory, etc. These quickly add up and your tests end up going slower and slower and oftentimes crash the entire renderer process. This is a design decision, not a bug. When running from the Test Runner it’s specialized to enable users to interact with the tests’ commands. Running from the Test Runner never “ends” or exits.
This is much different than the CLI. Running from the CLI prevents you from interacting with anything. The CLI will run “to completion” and exit whereas the Test Runner never actually ends. Because of these differences, when running from the CLI, Cypress makes several internal performance optimizations to prevent memory exhaustion and to run as fast as possible.
Watch mode is disabled automatically on ‘All Tests’
Users expect “Run All Tests” to reload when you change a spec file. File watching is a CPU-intensive task, and because we’ve seen users write hundreds of spec files, we decided not to always do this. This goes back to the theory that you shouldn’t be using this mode to run all your tests.
Solving the Test Runner problem
One possible solution is to count the number of tests you’re running in the Test Runner and, past a threshold, automatically disable the built-in debuggability features and run the same way the CLI does. We’d need to show a warning message to the user indicating we’re “switching off” these features and that you won’t be able to interact with any of the tests.
This may be pretty annoying, but at least it will communicate the intent of the Test Runner well. This option could be exposed via `cypress.json`, such as `{ interactabilityThreshold: 5 }`. Users could then set this really high or disable it to get the old functionality back.
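If the option were adopted, the `cypress.json` entry might look like the following (note that `interactabilityThreshold` is only a proposed name in this issue, not a shipped option):

```json
{
  "interactabilityThreshold": 5
}
```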
The Downsides
Running All the Tests
By enforcing specs to run in their own isolated process, we instantly have a huge problem for the Test Runner.
For instance there is a:
- ‘Run All Tests’ button
- A seeded `example_spec` that contains wayyyyy too many tests and does not represent a normal use case.
This could be solved by removing the `Run All Tests` button, but users would likely complain about this and it’s unexpected.
Why can every other testing tool run all tests but Cypress can’t?
The `example_spec` could be solved by splitting up the kitchen sink tests into a series of files:
- `querying_spec`
- `stubbing_spec`
- `network_spec`
- etc, etc, etc
Splitting up the `example_spec` into smaller files is an easy win, but then we’re back to the problem above: “How do I run all my tests?”
We could kick the can on this, and continue to let users run all of their tests from the Test Runner. Unfortunately, if we introduce a “warning message” this would continue to communicate confusing intent.
Why is it warning me like I’m doing something wrong when all I’ve done is click the first button I’ve seen!
Another option is when running “All Tests” from the Test Runner we iterate and highlight each spec but then kill the process when switching to the new spec. This kind of defeats the purpose of the GUI as you would not be able to interact (and failures would instantly go away). Not a good option.
The Upsides (why this would help all users)
Forcing each spec to run in its own isolated renderer would do a few very important things:
- It would prevent any global leakage from one spec to the next
- It would automatically restart the browser / renderer process in between each spec
- Memory would automatically be purged and we’d never have to worry about memory leaks
- The browser would likely crash much less often
- This positions users to better maintain and manage a growing suite of tests
- Introducing parallelization and load balancing becomes painlessly simple
- Users would overall see more consistent test results
Parallelization
Forcing specs to run isolated is actually the same thing as performing parallelization by spinning up multiple docker containers or instances and then dividing all of the spec files between them.
When this happens you still have all the same problems as what we listed above:
- You will need to stitch together multiple external reports
- You will receive multiple isolated videos
- You will receive multiple `stdout` outputs
Because users are already doing this (manually) it means they’re overcoming these problems, or they aren’t really problems to begin with - which is why I’m okay with making this the default behavior.
This problem is further mitigated by the way the Dashboard Service works. It can automatically associate groups of independent specs to the same run. This means all the data will be aggregated in the same place making it appear as if the run happened on a single machine.
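Stitching per-spec results back into one run-level summary is a simple aggregation, sketched below (the report shape and numbers are illustrative, not Cypress’s actual report format):

```javascript
// Hypothetical per-spec reports produced by isolated runs.
const specReports = [
  { spec: 'login_spec.js', tests: 12, failures: 0, duration: 34000 },
  { spec: 'signup_spec.js', tests: 8, failures: 1, duration: 21000 },
  { spec: 'network_spec.js', tests: 15, failures: 2, duration: 47000 },
];

// Fold the independent reports into one aggregate, as a dashboard-style
// service would when associating isolated specs with a single run.
function aggregate(reports) {
  return reports.reduce(
    (acc, r) => ({
      tests: acc.tests + r.tests,
      failures: acc.failures + r.failures,
      duration: acc.duration + r.duration,
    }),
    { tests: 0, failures: 0, duration: 0 }
  );
}

const summary = aggregate(specReports);
```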
Load Balancing
Splitting up the specs enables us to easily load balance parallelization.
Instead of manually balancing by machine like this:
- Machine A gets Spec 1,2,3
- Machine B gets Spec 4,5
- Machine C gets Spec 6,7,8,9
- Machine D gets Spec 10,11
Each machine would run at the same time and simply ask “for the next spec file”.
That means the work completes as fast as possible without needing to manually balance or tweak anything.
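The pull-based approach can be sketched as a shared work queue (machine names and the round-robin simulation of “whichever machine is idle asks next” are illustrative):

```javascript
// Shared queue of spec files; machines pull work instead of being
// pre-assigned fixed chunks.
const queue = [
  'spec_1.js', 'spec_2.js', 'spec_3.js', 'spec_4.js',
  'spec_5.js', 'spec_6.js', 'spec_7.js',
];
const machines = { A: [], B: [], C: [] };

// Simulate each idle machine asking "for the next spec file" in turn.
const names = Object.keys(machines);
let i = 0;
while (queue.length > 0) {
  const machine = names[i % names.length];
  machines[machine].push(queue.shift());
  i += 1;
}
```

No manual balancing is needed: however unevenly the specs' durations are distributed, each machine simply keeps pulling until the queue is empty.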
Load balancing is something our Dashboard Service is already doing on internal projects.
Failure Isolation
Another benefit is that by splitting up the specs and taking multiple videos, you have a much easier time focusing on the real thing you care about - failures.
Instead of having a 30+min run, your videos would only be the duration of each spec file. Much better and easier to work with.
More consistent results
Splitting out spec files will absolutely without a doubt yield better, more consistent results. Browsers aren’t perfect and Cypress pushes them to their absolute limit. Garbage collection does not work in our favor - letting the browser decide when to GC oftentimes leads it to eating up huge chunks of “primed” memory and there is no way to force it to release.
We’ve been running isolated specs internally (oftentimes thousands of tests across dozens of files) and it has removed nearly all flake. Previously this flake only cropped up when running all the specs at once, and it was incredibly difficult to debug and took a huge amount of effort.
Better video names
Videos would no longer have a random name. Instead they could be named after the spec file: cypress/videos/login_spec.js.mp4
New Challenges
The default `spec` reporter and the additional Cypress messages would run once for each of the N specs you have. Instead of receiving a single “report”, Cypress would essentially iterate through each spec file, starting with the “Tests Running” message all the way through to “Tests Finished”. Internally we’d keep state with the number of failures and still exit with the total aggregate.
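The proposed reporting loop can be sketched like this (`runSpec` is a stand-in for actually launching an isolated renderer and running its report cycle; the spec names and failure counts are illustrative):

```javascript
// Hypothetical failure counts each isolated spec run would produce.
const specs = {
  'login_spec.js': 0,
  'signup_spec.js': 2,
  'network_spec.js': 1,
};

function runSpec(name) {
  // In the real flow, "Tests Running" ... "Tests Finished" would be
  // printed around each spec here.
  return specs[name];
}

// Iterate spec by spec, keeping the aggregate failure count as state.
let totalFailures = 0;
for (const name of Object.keys(specs)) {
  totalFailures += runSpec(name);
}
// The process would then exit with the total aggregate (e.g. by setting
// process.exitCode from totalFailures).
```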
External Reporters
This would unfortunately create a problem for external / 3rd party reporters. Instead of receiving a single report, it would generate a report for each spec file. While this sounds bad it’s not that different than what already happens when you parallelize a run.
Issue Analytics
- State:
- Created 6 years ago
- Reactions: 81
- Comments: 14 (10 by maintainers)
@brian-mann “One of our next big feature releases will have spec load balancing / parallelization to help you do just that.” — is there a way to track the implementation of this? As mentioned previously in this thread, parallelizing through docker & glob is the preferred method right now, but we are wondering whether to spend time doing this, or wait until Cypress has support for it. Do you know what this feature might look like as well?
@ryan-mulrooney https://docs.cypress.io/guides/guides/command-line.html#Run-tests-specifying-a-glob-of-where-to-look-for-test-files
Spec globbing has nothing to do with running concurrently. Trying to run concurrent e2e tests is not recommended and doesn’t work the way you really expect it to. Browsers will not act the same if they are not in focus. It’s better to parallelize specs at the operating system level in CI (usually with docker containers).
One of our next big feature releases will have spec load balancing / parallelization to help you do just that.