
How does this benchmark get consistent results?

See original GitHub issue

I was wondering which parts of this benchmark make the results consistent? Is it the forked child process? The chromedriver?

Asking because I set up a js-framework-benchmark lite, which is quite variable at the moment. It just uses Puppeteer because it seemed simpler for my purpose.

The idea was to have this benchmark run on every release, compare each version against vanilla JS and React for now, and have it nicely plotted to make the most dramatic visuals 😄

The code is very minimal and matches most of the basic tests in this benchmark; the only problem is that the standard deviation can be quite large sometimes. (Feel free to use the code in any way you like.)
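Not part of the original issue, but since the complaint is a large standard deviation, here is a minimal sketch of how run-to-run variability might be quantified. The `stats` helper and the sample timings are hypothetical, not the actual benchmark code:

```javascript
// Hypothetical sketch: summarize a set of benchmark timings (ms) with the
// sample mean and sample standard deviation (Bessel-corrected, n - 1).
function stats(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance =
    samples.reduce((a, x) => a + (x - mean) ** 2, 0) / (n - 1);
  return { mean, stddev: Math.sqrt(variance) };
}

// Fabricated example: one outlier run (150 ms) inflates the spread a lot.
const { mean, stddev } = stats([102, 98, 150, 101, 99]);
console.log(mean, stddev); // mean 110, stddev ≈ 22.4
```

A common sanity check is the relative spread (`stddev / mean`): if a single outlier pushes it past a few percent, the run is probably dominated by driver latency or background noise rather than the framework under test.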

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

krausest commented, Aug 28, 2019

I’m on holiday right now, so I could only take a short look at your code. Is my assumption correct that you are trying to measure the duration of a benchmark by taking the difference between two Performance.getMetrics().Timestamp values from the test driver client?

If that’s right, you’re depending on Puppeteer’s latency for calls to Chrome. The js-framework-benchmark measures duration right from Chrome’s timeline and thus avoids any dependency on test driver latency (and, by the way, extracting the timeline events is a major source of complexity…). Maybe a smaller issue: is there a guarantee that Chrome has finished painting and compositing before the benchmark’s XPath condition is fulfilled? That’s another important principle of my benchmark: it measures duration from the initial click event to the end of the paint event, not only the duration of the JavaScript event handler.
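To illustrate the principle described above, here is a minimal sketch of extracting a duration from Chrome trace events: from the initial click's EventDispatch to the end of the last Paint, so driver round-trip latency never enters the measurement. The event names and fields (`name`, `ts`, `dur`, `args.data.type`) follow Chrome's trace event format, but treat this as a hypothetical helper, not the benchmark's actual extraction code:

```javascript
// Hypothetical sketch: compute a benchmark duration from Chrome trace events
// (e.g. recorded via Puppeteer's page.tracing.start()/stop()), measuring from
// the click's EventDispatch to the end of the last subsequent Paint event.
function durationFromTrace(events) {
  // Trace timestamps ('ts') and durations ('dur') are in microseconds.
  const click = events.find(
    (e) => e.name === 'EventDispatch' && e.args?.data?.type === 'click'
  );
  if (!click) throw new Error('no click EventDispatch in trace');

  // End of the last paint that starts at or after the click.
  let paintEnd = -Infinity;
  for (const e of events) {
    if (e.name === 'Paint' && e.ts >= click.ts) {
      paintEnd = Math.max(paintEnd, e.ts + (e.dur || 0));
    }
  }
  if (paintEnd === -Infinity) throw new Error('no Paint after click');

  return (paintEnd - click.ts) / 1000; // milliseconds
}

// Fabricated example trace: click at 1000µs, JS work, paint ends at 25000µs.
const events = [
  { name: 'EventDispatch', ts: 1000, dur: 500, args: { data: { type: 'click' } } },
  { name: 'FunctionCall', ts: 1600, dur: 20000 },
  { name: 'Paint', ts: 22000, dur: 3000 },
];
console.log(durationFromTrace(events)); // 24 (ms)
```

Note how the FunctionCall (the JavaScript event handler) ends well before the Paint does; measuring only the handler, or diffing driver-side timestamps, would report a different and less meaningful number.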

krausest commented, Sep 3, 2019

Good to hear this helped. I think I can close this issue now.

Read more comments on GitHub >

