Automated test runs show a trend of running longer with each consecutive run
See the original GitHub issue. Running the benchmark for a single test produces results that increase over time.
Notice the trend in CPU time for the following benchmark runs: it increases steadily from one run to the next. This was checked on two different machines.
yarn selenium --count 10 --framework react-v16.1.0-keyed --benchmark 05_
{
  "framework": "react-v16.1.0-keyed",
  "benchmark": "05_swap1k",
  "type": "cpu",
  "min": 143.35,
  "max": 165.883,
  "mean": 150.12820000000002,
  "median": 147.577,
  "geometricMean": 149.9899010135873,
  "standardDeviation": 6.5547284276314635,
  "values": [
    143.35,
    144.187,
    145.593,
    146.429,
    147.074,
    148.08,
    150.784,
    153.298,
    156.604,
    165.883
  ]
}
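For context, summary fields like these can be derived directly from the raw samples. The sketch below is illustrative only: the `summarize` function and `BenchmarkStats` interface are made-up names, not the benchmark runner's actual API, and it assumes a population standard deviation, which appears to match the figures above. It also shows why a reported "values" array that has been sorted will always read as increasing.

```ts
// Illustrative sketch only: not code from webdriver-ts/src/benchmarkRunner.ts.
interface BenchmarkStats {
  min: number;
  max: number;
  mean: number;
  median: number;
  geometricMean: number;
  standardDeviation: number;
  values: number[];
}

function summarize(samples: number[]): BenchmarkStats {
  // Sort a copy for min/max/median. Note: if this sorted copy is also what
  // gets reported as "values", the list is non-decreasing by construction.
  const values = [...samples].sort((a, b) => a - b);
  const n = values.length;
  const mean = values.reduce((sum, v) => sum + v, 0) / n;
  const median =
    n % 2 === 1 ? values[(n - 1) / 2] : (values[n / 2 - 1] + values[n / 2]) / 2;
  // Geometric mean computed via the mean of logs.
  const geometricMean = Math.exp(values.reduce((sum, v) => sum + Math.log(v), 0) / n);
  // Population standard deviation (divide by n, not n - 1).
  const standardDeviation = Math.sqrt(
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / n
  );
  return { min: values[0], max: values[n - 1], mean, median, geometricMean, standardDeviation, values };
}
```

Feeding the ten react-v16.1.0-keyed samples above into such a function reproduces the reported median of 147.577 and a standard deviation of about 6.555.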
yarn selenium --count 10 --framework vue-v2.5.3-keyed --benchmark 05_
{
  "framework": "vue-v2.5.3-keyed",
  "benchmark": "05_swap1k",
  "type": "cpu",
  "min": 51.671,
  "max": 75.257,
  "mean": 66.8957,
  "median": 67.283,
  "geometricMean": 66.51271760851245,
  "standardDeviation": 6.908922000572883,
  "values": [
    51.671,
    58.62,
    65.352,
    66.945,
    66.992,
    67.574,
    68.764,
    72.681,
    75.101,
    75.257
  ]
}
yarn selenium --count 100 --framework aurelia-v1.1.5-non-keyed --benchmark 05_
{
  "framework": "aurelia-v1.1.5-non-keyed",
  "benchmark": "05_swap1k",
  "type": "cpu",
  "min": 11.667,
  "max": 44.557,
  "mean": 20.461249999999996,
  "median": 20.791,
  "geometricMean": 20.10100657907201,
  "standardDeviation": 3.9583844592838626,
  "values": [
    11.667,
    11.716,
    11.723,
    12.501,
    13.068,
    13.268,
    13.992,
    14.8,
    15.779,
    25.318,
    .... some lines skipped
    25.566,
    26.284,
    27.331,
    44.557
  ]
}
Issue Analytics
- Created 5 years ago
- Comments: 8 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’m more concerned about the partial update case… where most of the time is spent in layout… when the browser recalculates all the cell widths for 10k rows. And it mostly depends on whether the framework can replace all the content at once… or one by one. We’ve experimented a bit and were able to drop a 160ms timing to 10ms… for aurelia, that is.
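As a rough illustration of the "replace all the content at once" idea (a generic DOM sketch, not the actual aurelia change referenced above), building the rows into a detached DocumentFragment and swapping them in with a single mutation gives the browser one layout pass instead of one per row:

```ts
// Generic illustration of batching a 10k-row swap into one DOM mutation.
// Not taken from the benchmark or from aurelia.
function replaceRowsAtOnce(tbody: HTMLTableSectionElement, rows: string[][]): void {
  const fragment = document.createDocumentFragment();
  for (const cells of rows) {
    const tr = document.createElement("tr");
    for (const text of cells) {
      const td = document.createElement("td");
      td.textContent = text;
      tr.appendChild(td);
    }
    fragment.appendChild(tr); // detached: no style/layout work happens here
  }
  // Single mutation of the live table => the browser recalculates cell
  // widths once for the whole body instead of once per appended row.
  tbody.replaceChildren(fragment);
}
```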
Thanks for your report. That would be really bad. I'll take a closer look at it, but I'm pretty sure it's just that all results are sorted (n.b. your results are all strictly increasing) due to the following line: https://github.com/krausest/js-framework-benchmark/blob/master/webdriver-ts/src/benchmarkRunner.ts#L328
I'll take a closer look this evening to make sure there are indeed no further mistakes.
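The sorting explanation is easy to see with a toy example (the "run order" below is invented; the numbers are a subset of the react-v16.1.0-keyed samples above):

```ts
// If results are sorted before they are written out, the reported list is
// non-decreasing by construction and looks like a per-run upward trend,
// regardless of the order the runs actually happened in.
const runOrder = [148.08, 143.35, 156.604, 145.593, 150.784]; // hypothetical raw order
const reported = [...runOrder].sort((a, b) => a - b);
console.log(runOrder); // no monotone trend across consecutive runs
console.log(reported); // [143.35, 145.593, 148.08, 150.784, 156.604] => strictly increasing
```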