How does this benchmark get consistent results?
I was wondering which parts of this benchmark make the results so consistent. Is it the forked child process? The chromedriver?
Asking because I set up a js-framework-benchmark lite, which is quite variable at the moment. It can be seen at https://luwes.github.io/sinuous/bench/
It just uses Puppeteer because it seemed simpler for my purpose.
The idea was to have this bench run on every release, compare each version against Vanilla JS and React for now, and plot the results nicely so any drastic changes are immediately visible 😄
The code is very minimal and matches most of the basic tests in this benchmark; the only problem is that the standardDeviation can be quite large sometimes.
https://github.com/luwes/sinuous/tree/master/bench (Feel free to use the code in any way you like)
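For reference, the standard deviation over sampled durations can be computed in a few lines. This is a generic sketch, not the actual code in `bench/utils.js`; the helper names `mean` and `standardDeviation` are illustrative:

```javascript
// Generic sketch (illustrative names, not the actual bench/utils.js code):
// compute the mean and population standard deviation of sampled durations.
function mean(samples) {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

function standardDeviation(samples) {
  const m = mean(samples);
  const variance =
    samples.reduce((sum, x) => sum + (x - m) ** 2, 0) / samples.length;
  return Math.sqrt(variance);
}

// A standard deviation that is large relative to the mean signals noisy runs.
const durations = [120, 118, 131, 119, 122]; // ms, made-up example values
console.log(mean(durations).toFixed(1));              // average duration
console.log(standardDeviation(durations).toFixed(1)); // spread across runs
```

Dividing the standard deviation by the mean (the coefficient of variation) gives a scale-free number that is easy to compare across tests of very different absolute durations.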
- Created 4 years ago
- Comments: 6 (6 by maintainers)
Top GitHub Comments
I’m on holidays right now, so I could only take a short look at your code. Is my assumption correct that you are trying to measure the duration of a benchmark as the difference between two Performance.getMetrics().Timestamp values (https://github.com/luwes/sinuous/blob/master/bench/utils.js#L33-L40) taken from the test driver client?
Good to hear this helped. I think I can close this issue now.