Tailwind CLI slow down / memory leak
What version of Tailwind CSS are you using?
v3.0.22
What build tool (or framework if it abstracts the build tool) are you using?
None
What version of Node.js are you using?
v17.0.1
What browser are you using?
N/A
What operating system are you using?
Windows 10
Reproduction URL
https://github.com/FallDownTheSystem/tailwind-cli-slowdown
Describe your issue
Saving a file in the root folder triggers a rebuild by the Tailwind CLI watcher; if that happens while a rebuild is still in progress, I think some kind of memory leak occurs.
The reproduction requires saving a file very rapidly to showcase it, but on larger projects it can happen naturally, since the build times are longer to begin with.
I’ll paste the reproduction steps and explanation I added to the README.md of the minimal reproduction demo here. I’ve also attached a video that showcases the behavior.
https://github.com/FallDownTheSystem/tailwind-cli-slowdown
- npm install
- npm run watch
- Spam save ./folder/nonroot.aspx or ./folder/nonroot2.aspx (on Windows you can hold down Ctrl+S to rapidly save the file)
- Spam save ./root.aspx for a long while
- Try to spam save one of the nonroot.aspx files again
The CLI now gets “stuck”: rebuild steps are added to the promise chain faster than it can process them, making the chain longer and longer. Once you stop spamming save, the chain will unwind and all the rebuilds will complete. But now each time you attempt to save, the process allocates a larger chunk of memory than it did originally.
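To make the pattern concrete, here is a minimal sketch (not the actual Tailwind CLI source) of a watcher handler that chains every change event onto one shared promise; the rebuild function and timings are made up, but the shape matches the behavior described above.

```js
// Minimal illustration only - not the Tailwind CLI implementation.
let chain = Promise.resolve();
let pending = 0;

// Stand-in for a full Tailwind rebuild (slow, allocates memory).
function rebuild(file) {
  return new Promise((resolve) => setTimeout(resolve, 500, file));
}

// Called by the file watcher on every change event.
function onChange(file) {
  pending += 1;
  console.log(`queued rebuild for ${file}, chain length: ${pending}`);

  // Every save appends another rebuild to the shared chain. If saves arrive
  // faster than rebuilds finish, the chain only grows, and each queued step
  // keeps its closures (and whatever they capture) alive until it runs.
  chain = chain
    .then(() => rebuild(file))
    .then(() => {
      pending -= 1;
      console.log(`finished rebuild for ${file}, chain length: ${pending}`);
    });
}

// Simulate "spam saving": a change event every 50 ms while each rebuild takes 500 ms.
const timer = setInterval(() => onChange('root.aspx'), 50);
setTimeout(() => clearInterval(timer), 3000);
```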
This is even more evident if you spam save the tailwind.config.js file. The rebuild after a config change takes even longer and seems to reserve much more memory.
After a while, the memory will be released, but subsequent saves of the nonroot.aspx files will cause much larger chunks of memory to be allocated, and the build times will have increased by an order of magnitude.
At the extreme, this will lead to an out-of-memory exception and the node process will crash.
This bug seems to happen only when you edit one of the files in the root folder, and it is more evident on larger projects, where the build times are longer to begin with and the memory ‘leak’ therefore becomes apparent faster.
This is harder to reproduce, but from experience I would argue that this memory ‘leak’ often happens when you attempt to save a file while a rebuild is still in progress. In a larger project, my watcher node process will crash several times a day due to out-of-memory exceptions.
The repository also includes a modified-cli.js that I used for debugging purposes. The modified Tailwind CLI adds logging for when the watcher runs the on-change handler and for when the promise chain is increased or decreased.
https://user-images.githubusercontent.com/8807171/153777233-54acb464-d31f-4cab-8163-5f035060b85a.mp4
What cannot be seen on the video is the memory usage, which at its peak got up to 4 GB.
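For anyone else trying to observe this, one simple way to watch the memory over time, assuming you are running a locally patched copy of the CLI like the modified-cli.js above (or any Node script), is to log process.memoryUsage() on an interval:

```js
// Hypothetical instrumentation: log resident set size and heap usage every
// few seconds so the growth between rebuilds is visible in the console.
const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`[memory] rss: ${toMB(rss)} MB, heapUsed: ${toMB(heapUsed)} MB`);
}, 5000).unref(); // unref() so the timer alone doesn't keep the process alive
```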
Issue Analytics
- Created: 2 years ago
- Reactions: 1
- Comments: 14 (1 by maintainers)
We also run the CLI in watch mode on Windows and notice it occasionally running out of memory and crashing Node. It’s infrequent enough that I haven’t bothered to report it previously, so perhaps the problem is more widespread than might be inferred from the number of GH issues.
We also see a gradual increase in duplicate content output to the .css file over many compiles, which we clean up by stopping the CLI and restarting it - this forces a clean build. Clearly there is some kind of state that can hang around in the CLI between compiles in some circumstances. Unfortunately it’s completely impractical for me to provide a repro URL for this.
It turns out we were essentially doubling the rule cache (not quite, but close enough) instead of just adding the few entries that needed to be added. This can result in a significant slowdown in fairly specific situations.
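For readers following along, the difference is roughly the one sketched below. This is a generic illustration of the growth pattern, not Tailwind’s actual cache code; the names and numbers are made up.

```js
// Generic illustration of the growth pattern, not Tailwind's implementation.
// "Doubling": each rebuild re-adds the whole existing cache plus the new rules.
// "Add-only": each rebuild adds just the rules generated by that rebuild.
function simulate(rebuilds, newRulesPerRebuild, reAddWholeCache) {
  let cache = [];
  for (let i = 0; i < rebuilds; i++) {
    const newRules = Array.from(
      { length: newRulesPerRebuild },
      (_, j) => `rule-${i}-${j}`
    );
    cache = reAddWholeCache
      ? [...cache, ...cache, ...newRules] // roughly doubles every rebuild
      : [...cache, ...newRules];          // grows by a few entries per rebuild
  }
  return cache.length;
}

console.log('doubling:', simulate(15, 5, true));  // 163835 entries - exponential
console.log('add-only:', simulate(15, 5, false)); // 75 entries - linear
```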
I’m hoping this has fixed a good portion of the issue here. Can some of you please give our insiders build a test and see if it helps out at all? I’m hopeful it’ll have some positive impact but if it isn’t sufficient I’ll reopen this issue.
Thanks for all the help and bearing with us on this one. ✨