
Bolt build is very slow

See original GitHub issue

I’m opening this as a meta-issue to track the holistic problem that the Bolt design system is seeing with very slow build times. I’d like to provide tools to help make it faster, and in doing so hopefully improve the performance for all users with similar use cases.

Analysis

At @sghoweri’s suggestion, I’ve been testing performance on the test/sass-compile-test branch, with the following results:

  • LibSass with a monolithic entrypoint file: about 40s for initial compilation and rebuilds, no matter what file was changed.
  • LibSass with many different entrypoints combined via Webpack: about 17s for initial compilation, 17s for rebuilds when @bolt/core/styles/index.scss is modified, and 1s for rebuilds when an individual component is modified.
  • Dart Sass with a monolithic entrypoint file: about 47s for initial compilation and rebuilds, no matter what file was changed.
  • Dart Sass with many different entrypoints combined via Webpack: about 47s for initial compilation, 47s for rebuilds when @bolt/core/styles/index.scss is modified, and 1s for rebuilds when an individual component is modified.

Note: when compiling with Dart Sass, I’m using my own branch as well as a local version of Dart Sass with a fix for https://github.com/sass/dart-sass/issues/811. I’m compiling with Fibers enabled to trigger the much-faster synchronous code path.
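
For context, here's a minimal sketch of what the multi-entrypoint setup looks like on the Webpack side. The entry names and paths are illustrative rather than Bolt's actual config, and it assumes a recent sass-loader that supports the implementation and sassOptions options, plus the fibers package:

    // webpack.config.ts (sketch only; entry names and paths are illustrative)
    import * as path from 'path';
    import * as sass from 'sass'; // Dart Sass, pure-JS distribution
    import type { Configuration } from 'webpack';
    // fibers exports a single class via module.exports, hence require():
    const Fiber = require('fibers');

    const config: Configuration = {
      // One entrypoint per component, so editing a single component only
      // forces that component's Sass to recompile on a rebuild.
      entry: {
        core: './packages/core/styles/index.scss',
        button: './packages/button/src/button.scss',
        card: './packages/card/src/card.scss',
      },
      module: {
        rules: [
          {
            test: /\.scss$/,
            // CSS extraction plugin omitted for brevity.
            use: [
              'css-loader',
              {
                loader: 'sass-loader',
                options: {
                  implementation: sass, // use Dart Sass instead of Node Sass
                  sassOptions: {
                    fiber: Fiber, // the much-faster synchronous code path
                    includePaths: ['node_modules'],
                  },
                },
              },
            ],
          },
        ],
      },
      output: { path: path.resolve(__dirname, 'dist') },
    };

    export default config;

The monolithic variant is the same config with a single entry pointing at one index file that imports everything.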

It’s not surprising that Dart Sass is slower than LibSass for monolithic compilations, since pure JS is always going to be somewhat slower than C++, but it is surprising that LibSass benefits from multiple entrypoints while Dart Sass does not. @mgreter or @xzyfer, do you have any insight into why that could be? Is LibSass doing some sort of caching across compilations, or is it able to run multiple compilations in parallel?

I then attached a profiler to the Dart Sass compilation to see if I could determine where it's spending all that time. By far the biggest cost, about 40% of the total compilation time, is resolving @imports. Most of that is spent waiting for filesystem calls to determine exactly which files exist. The remaining time goes to mostly bread-and-butter interpreter work, with a slight emphasis on the built-in map manipulation functions.
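
If anyone wants to reproduce this kind of measurement, one way to do it is to run a small compile script under Node's built-in CPU profiler. This is only a sketch of the general approach, not necessarily the exact tooling used here:

    // profile-compile.ts: compile the monolithic entrypoint once so a profiler
    // can observe it. Run the compiled script with, for example:
    //   node --cpu-prof --cpu-prof-dir=./profiles profile-compile.js
    // and open the resulting .cpuprofile file in Chrome DevTools.
    import * as sass from 'sass';

    const start = Date.now();
    const result = sass.renderSync({
      file: 'docs-site/sass-compile-test.scss',
      includePaths: ['node_modules'],
    });
    console.log(`compiled ${result.css.length} bytes in ${Date.now() - start}ms`);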

Command-Line Compilation

As an experiment, I also set up a version of the repo where the monolithic entrypoint can be compiled from the command-line. Compiling this with the native-code Dart Sass (using sass -I node_modules docs-site/sass-compile-test.scss > /dev/null) takes about 11s, although of course it has no caching across compilations so incremental compilations would be much more expensive.

Interestingly, SassC takes about 19s for the same compilation, which is also much faster than the monolithic compilation when driven via Webpack. It’s not clear to me what’s causing this major discrepancy… the command-line run comments out the export-data() function, but commenting it out in the Webpack run doesn’t substantially increase its performance. It’s possible that some of it is just performance improvements to LibSass itself between the version available through Node Sass (3.5.5) and the version I was testing with (3.6.1-9-gc713).

When profiling the Dart VM compilation, it looks like it’s spending vastly less time (about 4.5% of the total compilation time) checking the filesystem. I think this is because Dart Sass’s import semantics, especially in the presence of importers, are subtly different from the JavaScript API’s in a way that allows it to cache the vast majority of lookups.

Possible Solutions

Note: any solution we come up with should avoid substantially regressing the single-component-recompilation case.

Embedded Dart Sass

This is likely to be by far the easiest solution. Dart Sass is currently easiest to use from JS as a pure-JS package, but as mentioned above JS as a language imposes a considerable amount of overhead. We’re planning on launching an embedded mode that will run the Dart VM as a subprocess (https://github.com/sass/dart-sass/issues/248), which should substantially improve performance relative to the pure JS version. It’s hard to say exactly how much benefit this would provide (especially because it depends on which precise importer and caching semantics we decide on), but my guess is it would at least make Dart Sass’s performance competitive with LibSass’s.
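
Until that exists, a rough way to approximate the benefit from JS is to shell out to the standalone Dart VM sass executable, the same binary used in the command-line experiment above. This loses the in-process API and any cross-compilation caching, so treat it only as a ballpark sketch; it assumes the sass binary is on PATH and reuses the include path from the CLI experiment:

    // Sketch: spawn the standalone Dart Sass CLI (Dart VM) as a subprocess.
    import { execFile } from 'child_process';
    import { promisify } from 'util';

    const execFileAsync = promisify(execFile);

    async function compileWithDartVm(entrypoint: string): Promise<string> {
      const { stdout } = await execFileAsync(
        'sass',
        ['-I', 'node_modules', '--no-source-map', entrypoint],
        { maxBuffer: 64 * 1024 * 1024 }, // compiled CSS can be large
      );
      return stdout; // the compiled CSS
    }

    compileWithDartVm('docs-site/sass-compile-test.scss')
      .then((css) => console.log(`compiled ${css.length} characters of CSS`))
      .catch((error) => console.error(error));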

Better Caching Semantics

As I mentioned earlier, Dart Sass running in JS library mode doesn’t cache its import resolution within a single compilation. This is necessary to maintain strict compatibility with Node Sass, but it doesn’t have to be locked in place forever. As part of https://github.com/sass/sass/issues/2509, we should look into defining a new set of semantics (like those in native Dart Sass) that are more amenable to caching.
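
To illustrate the sort of caching those semantics would permit, here's a sketch of a userland importer (via the JS API's existing importer option) that memoizes resolution results for the duration of a build. The resolution logic is deliberately simplified, and the whole approach assumes the filesystem doesn't change mid-build, which is exactly the assumption strict Node Sass compatibility doesn't let us make:

    // Sketch: memoize import resolution, assuming a stable filesystem for the
    // duration of the build. The resolution rules here are greatly simplified;
    // the real ones (partials, extensions, index files, load paths) do more.
    import * as fs from 'fs';
    import * as path from 'path';
    import * as sass from 'sass';

    const resolutionCache = new Map<string, string | null>();

    function resolveOnce(url: string, prevDir: string): string | null {
      const key = `${prevDir}\u0000${url}`;
      if (resolutionCache.has(key)) return resolutionCache.get(key)!;

      const candidates = [
        path.resolve(prevDir, `${url}.scss`),
        path.resolve(prevDir, path.dirname(url), `_${path.basename(url)}.scss`),
        path.resolve('node_modules', `${url}.scss`),
      ];
      const found = candidates.find((candidate) => fs.existsSync(candidate)) ?? null;
      resolutionCache.set(key, found);
      return found;
    }

    sass.render(
      {
        file: 'docs-site/sass-compile-test.scss',
        importer(url, prev, done) {
          const resolved = resolveOnce(url, path.dirname(prev));
          // Returning null asks Sass to fall back to its default resolution.
          done(resolved ? { file: resolved } : null);
        },
      },
      (err, result) => {
        if (err) throw err;
        console.log(`compiled ${result!.css.length} bytes`);
      },
    );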

Module System

One of the features of the new module system is ensuring that a given file is only loaded once. How much this will help depends on how much the current setup is importing the same files multiple times, though.

Cross-Compilation Caching

The current compilation setup compiles many different entrypoints and then uses Webpack to combine them. This has the benefit of allowing Webpack to avoid unnecessary recompilation when an individual component is modified, but it currently means that Sass (or at least Dart Sass) doesn’t share any state across compilations of each separate entrypoint.

In general, it's not safe for Sass to assume that separate compilations have anything in common; the entire filesystem could have changed between two calls to render(). But when Webpack kicks off a batch of compilations, it knows they're all expected to work against the same filesystem state. Sass could provide some API, perhaps a Compiler object, that assumes nothing changes across multiple compilations, so it can share cached import resolutions between them.

We could even go a step further and provide the ability for the Compiler to be informed when changes do happen, so that the cache can be invalidated only as much as necessary. Dart Sass already has support for this internally for --watch mode; we’d just need to provide an API for it. I’m not sure if Webpack exposes this information, though—maybe @evilebottnawi can provide insight here.
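
To make the shape of that concrete, here's a purely hypothetical sketch. Neither Compiler nor createCompiler exists in any released Sass JS API, and the actual caching is elided because it would live inside the implementation; the placeholder below just wraps renderSync so the sketch type-checks and runs:

    import * as sass from 'sass';

    interface CompileResult {
      css: Buffer;
    }

    // Hypothetical API shape only.
    interface Compiler {
      compile(file: string): CompileResult;   // may assume a stable filesystem
      invalidate(changedFile: string): void;  // watcher-driven cache eviction
      dispose(): void;
    }

    function createCompiler(loadPaths: string[]): Compiler {
      // Placeholder: the real value would come from Sass keeping its resolved-
      // import cache (keyed by URL and base directory) inside this object.
      return {
        compile: (file) => sass.renderSync({ file, includePaths: loadPaths }),
        invalidate: (_changedFile) => { /* evict cache entries for this file */ },
        dispose: () => { /* release any resources held by the compiler */ },
      };
    }

    // Usage from a bundler plugin: one long-lived Compiler per watch session.
    const compiler = createCompiler(['node_modules']);

    function rebuild(entrypoints: string[], changedFiles: string[]) {
      for (const file of changedFiles) compiler.invalidate(file);
      return entrypoints.map((entry) => compiler.compile(entry));
    }

The design point is that the Compiler, not each render() call, owns the cache, so Webpack's per-entrypoint compilations within one build can all share it.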

Loaded Module Caching

This is the furthest-reaching possibility, but also the one that could bring monolithic compilation close to the speed of per-component compilation when a single component is modified. The module system defines a clear notion of a module's loaded state, and we could cache this state across compilations and avoid even evaluating a module again once it's loaded.

The major complexity here is that loading a module can have side effects, including changing the state of another loaded module. We’d need to have some way of marking modules—as well as anything downstream from them—as uncachable when this happens. But uncachable modules are likely to be a small minority, so this should still provide considerable benefits.
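
Purely as an illustration of the bookkeeping involved (all of these names are hypothetical; the real loaded-module representation lives inside the Sass implementation), the cache and its dependency-aware invalidation might look roughly like this:

    // Illustrative only: track which modules are safe to reuse, and drop a
    // module plus everything downstream of it when something changes.
    interface LoadedModule {
      canonicalUrl: string;
      dependencies: Set<string>; // canonical URLs this module loaded
      hasSideEffects: boolean;   // e.g. it mutated state in another module
    }

    const moduleCache = new Map<string, LoadedModule>();

    // Only side-effect-free modules are safe to reuse across compilations.
    function store(mod: LoadedModule): void {
      if (!mod.hasSideEffects) moduleCache.set(mod.canonicalUrl, mod);
    }

    // When a file changes (or a module turns out to be tainted), remove it and
    // recursively remove every cached module that depends on it.
    function invalidate(canonicalUrl: string): void {
      if (!moduleCache.delete(canonicalUrl)) return;
      for (const [url, mod] of [...moduleCache]) {
        if (mod.dependencies.has(canonicalUrl)) invalidate(url);
      }
    }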

Issue Analytics

  • State: open
  • Created 4 years ago
  • Comments: 32 (5 by maintainers)

Top GitHub Comments

1 reaction
mgreter commented, Sep 9, 2019

Another update: I had quite some fun with the MSVC profiler and got the Bolt runtime down even further 🚀 I also used a trick to get a "warm cache" by training the MSVC compiler with the Bolt bench run. I'm not sure that's entirely fair, since I don't really know how or which Dart Sass I'm executing for comparison. Nonetheless, the resulting executable produces the same output in less time 😄 It seems to give around 20% of free performance on MSVC, biased toward the Bolt use case. We might need to see how we can use this for release binaries; GCC should support this too.

[screenshot]

  • dart-sass: 8.90s
  • psass (mingw): 1.21s
  • sassc (MSVC): 1.08s
  • sassc (trained): 0.87s

Overall that's at least a 20x improvement over the current LibSass master, and up to 10x faster than dart-sass, as far as I can measure. And yes, there are still a few edges left to optimize, but it now comes down to micro-benchmarking. Anyway, I think this is already pretty impressive 🐢 🐇

1 reaction
nex3 commented, Aug 29, 2019

You have to run yarn install too, I think.

Read more comments on GitHub >
