How does this benchmark get consistent results?
I was wondering which parts of this benchmark make the results so consistent. Is it the forked child process? The chromedriver?
I'm asking because I set up a js-framework-benchmark lite, which is quite variable at the moment. It can be seen at https://luwes.github.io/sinuous/bench/
It just uses Puppeteer because it seemed simpler for my purpose.
The idea was to have this bench run on every release, compare each version to Vanilla JS and React for now, and have it nicely plotted out so the differences are as visible as possible 😄
The code is very minimal and matches most of the basic tests in this benchmark; the only problem is that the standard deviation can be quite large sometimes.
https://github.com/luwes/sinuous/tree/master/bench (Feel free to use the code in any way you like)
Top GitHub Comments
I’m on holidays right now, so I could only take a short look at your code. Is my assumption correct that you are trying to measure the duration of a benchmark as the difference between two Performance.getMetrics().Timestamp values (https://github.com/luwes/sinuous/blob/master/bench/utils.js#L33-L40), taken from the test driver client?
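In rough form, that pattern looks something like the sketch below. This is a hypothetical simplification, not the actual utils.js code; the selectors and the "done" condition are made up.

```js
const puppeteer = require('puppeteer');

// Hypothetical simplification of the Performance.getMetrics() pattern:
// both metrics() samples travel over the DevTools protocol, so the measured
// duration includes Puppeteer's round-trip latency on top of the page work.
async function measureWithMetrics(page, clickSelector, doneSelector) {
  const before = await page.metrics();        // Timestamp is in seconds
  await page.click(clickSelector);            // trigger the benchmark action
  await page.waitForSelector(doneSelector);   // assumed "finished" condition
  const after = await page.metrics();
  return (after.Timestamp - before.Timestamp) * 1000; // ms, latency included
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://luwes.github.io/sinuous/bench/');
  // '#run' and 'tbody tr' are placeholders, not the real bench selectors.
  console.log(await measureWithMetrics(page, '#run', 'tbody tr'), 'ms');
  await browser.close();
})();
```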
If that’s right, you’re depending on Puppeteer’s latency for calls to Chrome. The js-framework-benchmark measures duration right from Chrome’s timeline and thus avoids any dependency on test driver latency (and BTW extracting the timeline events is a major source of complexity…).

Maybe a smaller issue: is there a guarantee that Chrome has finished painting and compositing before the benchmark’s xpath condition is fulfilled? That’s another important principle of my benchmark: it measures duration from the initial click event to the end of the paint event, not just the duration of the JavaScript event handler.
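The timeline approach looks roughly like the following sketch. It is a hedged approximation, not the actual js-framework-benchmark code; in particular, the 1-second settle timeout stands in for its much more careful synchronization.

```js
// Rough sketch of the timeline-based idea (not the js-framework-benchmark
// implementation): record a Chrome trace around the interaction and compute
// the duration from the click's EventDispatch event to the end of the last
// Paint event, so test-driver latency never enters the number.
async function measureFromTrace(page, clickSelector) {
  await page.tracing.start({
    categories: ['devtools.timeline', 'disabled-by-default-devtools.timeline'],
  });
  await page.click(clickSelector);
  await new Promise((r) => setTimeout(r, 1000)); // crude settle time, an assumption
  const buffer = await page.tracing.stop();

  const { traceEvents } = JSON.parse(buffer.toString());
  const click = traceEvents.find(
    (e) => e.name === 'EventDispatch' && e.args?.data?.type === 'click'
  );
  const paints = traceEvents.filter((e) => e.name === 'Paint' && e.ts > click.ts);
  const last = paints[paints.length - 1];
  // Trace timestamps and durations are in microseconds.
  return (last.ts + (last.dur || 0) - click.ts) / 1000;
}
```

Because both endpoints come from Chrome's own trace, Puppeteer round trips and a "done" condition that fires before painting finishes no longer skew the result.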
Good to hear this helped. I think I can close this issue now.