The `legacy-unit-tests-saucelabs` CI job is flaky
See original GitHub issue

The `legacy-unit-tests-saucelabs` CI job started to become quite flaky recently (no exact date, but roughly a few weeks ago). Based on today's (04/29/2021) 10 commits to the master branch, that CI job failed 3 times (a 30% failure rate). Here is a link to the most recent failure. Curious if there is anything we can do (like reducing the number of concurrent connections, etc.), or is this on the Saucelabs side?
// cc @josephperrott @devversion
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 10 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This is great! I guess that would also allow us to do retries on failures (with Bazel)? If yes, that’d really improve the situation.
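On the retries question: Bazel does support automatic re-runs of failing tests via the `--flaky_test_attempts` flag. A sketch of a `.bazelrc` entry; the value `3` is illustrative:

```
# .bazelrc fragment (sketch): retry failing tests up to 3 times before
# Bazel reports them as failed. Using `--flaky_test_attempts=default`
# instead would retry only targets explicitly tagged "flaky".
test --flaky_test_attempts=3
```

With this in place, a transient Saucelabs failure would only fail the CI job if it reproduced on every attempt.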
We have a couple of monitoring jobs that we run nightly that also use Saucelabs:
https://github.com/angular/angular/blob/master/.circleci/config.yml#L348-L393
I’m curious if it might be ok to run `legacy-unit-tests-saucelabs` jobs nightly too (instead of running for each PR)…
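A nightly run like the monitoring jobs linked above would be expressed as a CircleCI scheduled workflow. A hedged sketch; the workflow name is an assumption, and the real config would reference the repo's actual job definition:

```yaml
# .circleci/config.yml fragment (sketch): run the job once a day on
# master via a cron trigger, instead of on every PR.
workflows:
  saucelabs_nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"   # midnight UTC, daily
          filters:
            branches:
              only:
                - master
    jobs:
      - legacy-unit-tests-saucelabs
```

The job would also need to be removed from the per-PR workflow for this to reduce PR flakiness.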