ci: Performance test failing randomly
Meltano Version: 2.5.0
Python Version: 3.10
Bug scope: Other
Operating System: Windows
Description:
The test tests/meltano/cli/test_cli.py::TestLargeConfigProject::test_list_config_performance is failing intermittently for certain combinations of Python version and operating system.
Source: https://github.com/meltano/meltano/runs/8078349477?check_suite_focus=true#step:11:687
Code: No response
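For orientation, the failing test asserts on wall-clock time, a pattern that is inherently sensitive to CI runner load. The sketch below illustrates only the general shape of such a test, not the actual test body (the issue's Code field is empty); the command, fixture, and 10-second tolerance are all assumptions.

```python
import subprocess
import time


def test_list_config_performance(large_config_project):
    """Illustrative only: time a CLI invocation against a fixed bound."""
    start = time.perf_counter()
    subprocess.run(
        ["meltano", "config", "meltano", "list"],
        cwd=large_config_project,  # hypothetical fixture: a pre-built large project
        check=True,
        capture_output=True,
    )
    elapsed = time.perf_counter() - start
    # A fixed wall-clock bound is what makes the test flaky: a slow or
    # noisy CI VM can exceed it even when nothing has regressed.
    assert elapsed < 10.0  # hypothetical tolerance in seconds
```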
Issue Analytics:
- Created: a year ago
- Comments: 6 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@aaronsteers A simple solution is to bump the tolerance.
Long term, I wonder whether this is the best way to run benchmarks. I looked around a bit while researching https://github.com/meltano/meltano/issues/6613, https://github.com/meltano/sdk/pull/887, and https://github.com/meltano/sdk/discussions/906.
It seems like we need something like https://github.com/marketplace/actions/continuous-benchmark, which simply alerts us when run time exceeds a certain threshold and lets us keep a history of performance in key components.
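For context on that suggestion: the continuous-benchmark action can consume pytest-benchmark JSON output and alert when a run regresses past a threshold. A minimal sketch of what the benchmark side could look like, assuming the pytest-benchmark plugin and a hypothetical helper (this is not how Meltano's suite is currently wired):

```python
# Requires the pytest-benchmark plugin: pip install pytest-benchmark
# Run with `pytest --benchmark-json=output.json`; that JSON file is what
# the continuous-benchmark action ingests via its pytest tool option.
import subprocess


def run_list_config():
    # Hypothetical helper: the command under measurement.
    subprocess.run(
        ["meltano", "config", "meltano", "list"],
        check=True,
        capture_output=True,
    )


def test_list_config_benchmark(benchmark):
    # `benchmark` is the pytest-benchmark fixture; it runs the callable
    # repeatedly and records timing statistics instead of asserting a
    # hard wall-clock bound, so CI noise shows up as data, not failures.
    benchmark(run_list_config)
```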
That’s essentially what this test is. The Windows VMs occasionally perform worse, and we can’t control that or mock them into performing better.
But I agree that resolving the immediate issue takes precedence, and increasing the timeout is the fastest/simplest way to do so. I’ll open a PR.
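The interim fix could be as small as widening the bound, possibly only on the platform that needs it. A sketch under assumptions (the constant name and values are hypothetical; the actual change is whatever the PR mentioned above lands):

```python
import sys

# Hypothetical names and values; the real test's tolerance differs.
BASE_TOLERANCE_SECONDS = 10.0

# Widen the bound on Windows, where the CI VMs are observed to run slower.
TOLERANCE_SECONDS = (
    BASE_TOLERANCE_SECONDS * 2 if sys.platform == "win32" else BASE_TOLERANCE_SECONDS
)
```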