Improvements to browser frame-rate during Dataflow evaluation
Hey there, long-time library user, first-time issue raiser!
I've recently been looking at improving the performance of dashboards that render multiple Vega charts with large amounts of data. I found a way to improve the FPS of Vega when rendering multiple charts from 0 to 5.5, and I thought I'd post it here for discussion.
Problem
When multiple Vega charts are created at the same time, Vega never "takes a break" to let other browser events be triggered and handled. Once vega-embed is called, it stays in a tight evaluation loop until it has rendered the chart completely. This can cause the webpage to "hang" or feel non-responsive, since the browser isn't able to respond to any user input events.
I've created an example code sandbox https://vxdyv.csb.app/ (src) that demonstrates the issue:
- Click "Render Charts".
- Observe that the CSS-animated spinner ceases to rotate, as Vega is doing a lot of uninterrupted data processing and canvas rendering, so there's no chance for a DOM render.
Potential Solution *
The change I found, which improves the FPS from 0 to 5-6 (with almost negligible impact on rendering time), is to wrap Dataflow.prototype.evaluate with requestIdleCallback.
You can see this for yourself in the same demo I linked above, by toggling the "Apply Patch" checkbox before clicking "Render Charts".
The code change I made is to https://github.com/vega/vega/blob/master/packages/vega-dataflow/src/dataflow/run.js#L28
```js
export async function evaluate(encode, prerun, postrun) {
  // Wait for an idle period before running the dataflow evaluation,
  // giving the browser a chance to handle input and paint first.
  await new Promise(resolve => requestIdleCallback(resolve));
  ...
}
```
* requestIdleCallback isn't supported in Safari or IE, so it's not a silver bullet yet.
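For those browsers, a feature-detected fallback could keep the patch working. Below is a minimal sketch (my own, not part of the patch): the setTimeout path only approximates the API shape, since it yields to the event loop but cannot measure real idle time; the fixed 50 ms budget mirrors the spec's maximum deadline.

```javascript
// Minimal sketch of a requestIdleCallback fallback for browsers that
// lack it (Safari, IE). The setTimeout path yields to the event loop
// but does not do true idle scheduling.
const ric =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : cb =>
        setTimeout(
          () =>
            cb({
              didTimeout: false,
              // Fixed 50ms budget (the spec's cap); a real implementation
              // measures the remaining slack before the next frame.
              timeRemaining: () => 50
            }),
          1
        );

// The same yield point used in the evaluate() patch, now cross-browser.
function yieldToBrowser() {
  return new Promise(resolve => ric(resolve));
}
```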
Other solutions?
In theory, I'd have imagined that Vega would do some sort of streaming or work chunking, so that it could schedule work intermittently without continually blocking the thread. Originally, I was ambitiously looking for the one place where Vega processes the scenegraph before rendering it using canvas/SVG, but I wasn't familiar enough with Vega's codebase and how everything fits together to track that down (my conclusion was that the view/dataflow graph are fairly closely coupled, and that processing the data separately, e.g. in worker threads, is a much larger piece of work).
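For illustration, the kind of work chunking described above could look something like the following. This is a generic sketch, not tied to Vega's internals; processInChunks and the chunk size are hypothetical names.

```javascript
// Generic sketch of cooperative work chunking: process a slice of items,
// yield back to the browser so input/paint can happen, then resume.
// Falls back to setTimeout where requestIdleCallback is unavailable.
const schedule =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : cb => setTimeout(() => cb(null), 0);

function processInChunks(items, processItem, chunkSize = 1000) {
  return new Promise(resolve => {
    let i = 0;
    function runChunk(deadline) {
      const end = Math.min(i + chunkSize, items.length);
      // Stop early if the idle budget is exhausted (when a deadline exists).
      while (i < end && (!deadline || deadline.timeRemaining() > 0)) {
        processItem(items[i++]);
      }
      if (i < items.length) {
        schedule(runChunk); // yield, then continue where we left off
      } else {
        resolve();
      }
    }
    schedule(runChunk);
  });
}
```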
Would love to hear any suggestions or feedback you have on how we can improve the framerate when rendering charts with large amounts of data. I know you're already looking at improving performance in https://github.com/vega/vega/issues/2619 and I'm happy to contribute however I can. In the absence of moving the work entirely off the main thread using worker threads or some other approach, I think measuring not only how long charts take to render but also the average FPS during the render will be super important going forward.
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 3
- Comments: 7 (5 by maintainers)
Top GitHub Comments
Hey sorry for the delay!
Unfortunately requestAnimationFrame and setTimeout don't work, as they have significantly different semantics to requestIdleCallback.
- Multiple calls to requestAnimationFrame() will schedule all work right before the next browser repaint.
- Similarly, multiple calls to setTimeout() will schedule all work to run after (at least) X milliseconds.
- requestIdleCallback() will instead maintain a work queue that waits for the browser to be "idle" before executing the next callback in its list.

What this means is that, compared to the other two functions, multiple Vega charts scheduling callbacks will have their rendering throttled by how busy the browser is (instead of simply delaying their rendering by an interval and then all running at once).

If anything, the main intent behind this ticket was to draw some attention to the issue we were having when rendering multiple large charts at the same time. I'd love to find a way for Vega to more incrementally process data without impacting latency-critical events such as animation and input response on large inputs, but I recognise it's a really hard problem and appreciate all the work you do!
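To make the throttling behaviour above concrete, here is a hedged sketch of a shared queue in which each scheduled callback drains exactly one task, so N charts scheduled at once run one per idle period rather than back-to-back in a single long block. The enqueue/drain names are hypothetical, not Vega API.

```javascript
// Sketch of a shared work queue: each scheduled callback runs exactly one
// queued task, so concurrent charts are throttled by browser idleness
// instead of all evaluating in one uninterrupted block.
const schedule =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : cb => setTimeout(cb, 0);

const queue = [];
let draining = false;

function enqueue(task) {
  return new Promise(resolve => {
    queue.push(() => resolve(task()));
    if (!draining) {
      draining = true;
      schedule(drain);
    }
  });
}

function drain() {
  const next = queue.shift();
  if (next) next();
  if (queue.length > 0) {
    schedule(drain); // one task per scheduled callback
  } else {
    draining = false;
  }
}
```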
@jheer might be interested in your idea. Iāll reopen and let him close.