Run flaky tests separately
We have a handful of flaky tests which, when they fail, require rerunning the whole Travis job.
Such a rerun executes the entire test suite, which is very inefficient.
Instead, we could mark the flaky tests and run them separately, so the flaky-test job is faster to rerun and a job failure is easier to diagnose (see the sketch after the list below).
Examples of flaky tests:
- com.lightbend.lagom.scaladsl.testkit.ServiceTestSpec (also javadsl)
- com.lightbend.lagom.internal.cluster.ClusterDistributionSpec
- com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraClusteredPersistentEntitySpec (also javadsl)
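To make the split concrete, here is a minimal sketch of how flaky tests could be marked with a ScalaTest tag so CI can include or exclude them per job. It assumes ScalaTest 3.1+; the `FlakyTest` tag name and the spec are illustrative, not Lagom's actual code.

```scala
import org.scalatest.Tag
import org.scalatest.flatspec.AnyFlatSpec

// Marker tag for tests known to be flaky (the tag name is an illustrative choice).
object FlakyTest extends Tag("com.lightbend.lagom.FlakyTest")

class ExampleServiceSpec extends AnyFlatSpec {

  // Tagged tests can be excluded from the main CI job and rerun in a
  // dedicated, much smaller flaky-test job.
  "a clustered service" should "eventually form a cluster" taggedAs FlakyTest in {
    assert(1 + 1 == 2) // placeholder for the real, timing-sensitive assertion
  }

  "a stable unit" should "always pass" in {
    assert("lagom".reverse == "mogal")
  }
}
```

With a tag like this in place, the main job could run `sbt "testOnly * -- -l com.lightbend.lagom.FlakyTest"` and the flaky job `sbt "testOnly * -- -n com.lightbend.lagom.FlakyTest"`; `-l` (exclude) and `-n` (include) are standard ScalaTest Runner arguments, so only the tag itself needs to be added to the test code.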
Issue Analytics
- State: closed
- Created 5 years ago
- Comments: 8 (8 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Moving forward, I suggest we create one issue for each failing test class (or category of tests) and ping that issue whenever it fails again. That way we can get a feeling for how often each one fails.
Collecting (or linking to) logs is also very important for being able to fix test failures.
Closing this ticket in favor of the more specific tickets.