Leak on Linux?
I've been troubleshooting a memory leak in https://github.com/asilvas/node-image-steam (it processes millions of images every day). I originally thought the leak was in my own project, but after a number of heap dump checks I determined it wasn't a leak in V8.
To break the problem down to its simplest parts, I recorded the traffic serially so it can be replayed in a pure sharp script:
https://gist.github.com/asilvas/474112440535051f2608223c8dc2fcdf
```sh
npm i sharp request
curl https://gist.githubusercontent.com/asilvas/474112440535051f2608223c8dc2fcdf/raw/be4e593c6820c0246acf2dc9604012653d71c353/sharp.js > sharp.js
curl https://gist.githubusercontent.com/asilvas/474112440535051f2608223c8dc2fcdf/raw/be4e593c6820c0246acf2dc9604012653d71c353/sharp.log > sharp.log
node sharp.js http://img1.wsimg.com/isteam sharp.log
```
The script downloads the images on the fly, which avoids the FS caching that would otherwise bloat memory usage, and forwards the recorded instructions (in sharp.log) directly to sharp, one at a time.
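For anyone who doesn't want to open the gist, the replay boils down to something like the sketch below (the names and the resize call are illustrative, not the gist's exact code): each image is fetched into a Buffer and handed straight to sharp, so nothing ever touches the filesystem.

```js
const request = require('request');
const sharp = require('sharp');

// Fetch one image into memory and run the recorded sharp instructions on it.
// `ops` stands in for whatever the corresponding sharp.log entry contains.
function processOne(url, ops, cb) {
  request({ url, encoding: null }, (err, res, body) => { // body is a Buffer
    if (err) return cb(err);
    sharp(body)
      .resize(ops.width, ops.height) // apply the recorded operation(s)
      .toBuffer(cb);                 // output stays in memory, one image at a time
  });
}
```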
Memory usage climbs past 500MB within a few minutes (at least on Docker+CentOS) and seems to eventually plateau; on some systems I've seen over 2GB. Processing only a single image at a time should keep memory usage fairly flat. Have you seen this before? Any ideas? I wasn't aware of anything sharp/vips was doing that should be triggering Linux's file caching.
Edit: While memory usage on Mac is still higher than I'd expect for a single image processed at a time (~160MB after a couple hundred images), it's nowhere near as high as on Linux, and it seems to plateau quickly. So it appears to be a Linux-only issue. Docker is also involved, so I'm not ruling that out either.
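Before pointing at the allocator, it's worth ruling out libvips' own operation cache and thread pool, both of which sharp exposes as utility functions. A minimal check (the values here are only examples, not a recommendation):

```js
const sharp = require('sharp');

sharp.cache(false);            // disable libvips' operation cache entirely
sharp.concurrency(1);          // limit libvips to a single worker thread
console.log(sharp.counters()); // { queue, process } - tasks currently in flight
```

If memory still grows with the cache disabled and concurrency pinned to one, libvips isn't holding on to pixel data and the allocator becomes the prime suspect.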

For anyone else who ends up here - I can concur that changing the memory allocator resolves the leak. For whatever reason, I could not reproduce the leaks when developing on my Debian Linux machine, but when running in a Docker container I was seeing memory usage increase significantly with each upload. I fixed it by adding the following lines to my Debian-based Dockerfile:
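A sketch of the typical change, assuming a Debian bullseye base image (the package name and library path vary by release and architecture): install jemalloc and preload it so node, and libvips underneath it, allocate through jemalloc instead of glibc malloc.

```dockerfile
# Install jemalloc; on older Debian releases the package is libjemalloc1.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libjemalloc2 \
    && rm -rf /var/lib/apt/lists/*
# Preload it for every process in the container; path is the bullseye/amd64 location.
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
```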
UPDATE: One year later, I finally had time to rewrite the system core and took the opportunity to inject jemalloc in the same deploy. It really did the job and fixed the memory issues. Hooray! For anyone wondering how to do that on Heroku, use the jemalloc buildpack.
I disabled jemalloc and memory usage went through the roof again, so we can be certain that the memory allocator makes all the difference here. We can finally downgrade our dynos!
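If you want to confirm which allocator a process actually picked up, one quick check on Linux (the path below assumes the Debian/amd64 jemalloc location used above; adjust for your system) is to look for the library in the process's memory maps:

```sh
# Run the replay under jemalloc and confirm the library is actually mapped in.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node sharp.js http://img1.wsimg.com/isteam sharp.log &
grep -m1 jemalloc /proc/$!/maps && echo "jemalloc is loaded"
```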