Most efficient way to generate thumbnails of an image (many sizes, many formats)?
What’s the best way to generate thumbnails of various sizes and formats for a given image as efficiently as possible?
Specifically, let’s say I have a very large source image (say… 4096 x 4096), “A”, that I’d like to generate 1024w, 512w, and 128w thumbnails for, in both WEBP and PNG formats. Obviously, I could do something like this:
const src = sharp('A');
for (let size of [1024, 512, 128]) {
  src.resize(size).toFile(`A_${size}.webp`);
  src.resize(size).toFile(`A_${size}.png`);
}
… but (if I’m reading the sharp source right) that will result in a full scan of the source image for each toFile() call, right?
A more efficient way to do this (I believe) would be to reduce the size of the image at each step, so not as much data needs to be processed each time, like so:
let src = sharp('A');
for (let size of [1024, 512, 128]) {
  // Replace src with a downsampled version on each pass.
  // Using lossless WEBP here to avoid compounding compression artifacts.
  // ISSUE: requires outputting to WEBP, then immediately reparsing. 😢
  const sizeSource = sharp(await src.resize(size).toFormat('webp', {lossless: true}).toBuffer());
  sizeSource.toFile(`A_${size}.webp`);
  sizeSource.toFile(`A_${size}.png`);
  src = sizeSource;
}
This is faster, but generating an intermediate WEBP image and immediately reparsing it is (I presume) still pretty inefficient.
So… is there a better way to do this?
My understanding of how sharp works, based on reading the source, is that calls like resize() just set an internal option that only takes effect when an output format is requested, at which time the input image is scanned and processed. But this means (I think?) that there isn’t a simple way of telling sharp to “apply” settings so that it (internally) has a smaller, more efficiently processed version of the image. So… yeah… I’m wondering if there’s a better way. E.g. something like src.resize(size).apply() …
Does this make sense?
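For concreteness, here’s a rough, untested sketch of the kind of thing I’m imagining: “apply” the resize by rendering to a raw pixel buffer (via .raw() and toBuffer({ resolveWithObject: true })) and wrapping that buffer in a new sharp instance, instead of round-tripping through WEBP. The wiring here is just my guess at how existing sharp APIs could be combined; I haven’t verified it’s actually faster.
const sharp = require('sharp');

async function makeThumbnails() {
  let src = sharp('A');
  for (const size of [1024, 512, 128]) {
    // "Apply" the resize once: render to raw pixels, then wrap the raw
    // buffer in a new sharp instance that feeds this pass and the next.
    const { data, info } = await src
      .resize(size)
      .raw()
      .toBuffer({ resolveWithObject: true });
    const sizeSource = sharp(data, {
      raw: { width: info.width, height: info.height, channels: info.channels },
    });
    // clone() so each output gets its own pipeline from the shared raw input.
    await Promise.all([
      sizeSource.clone().toFile(`A_${size}.webp`),
      sizeSource.clone().toFile(`A_${size}.png`),
    ]);
    src = sizeSource;
  }
}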
I was, and thank you for reminding me to respond. Here’s what I ended up going with. This bought us a ~25-30% perf improvement. If you see any obvious mistakes or improvements, please let me know. (Also, feel free to close, and thank you for your help!)
I’ll need to do some performance analysis on this, but my best guess is that WebP encoding may be the bottleneck here rather than resizing, so perhaps loop by format rather than size.
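Something along these lines, perhaps (an untested sketch; it relies on clone() so each output gets its own settings from the shared input, and on sharp inferring the output format from the file extension):
const sharp = require('sharp');

const src = sharp('A');
for (const format of ['webp', 'png']) {
  for (const size of [1024, 512, 128]) {
    // Reuse the shared input; clone() keeps each output's settings separate.
    src.clone().resize(size).toFile(`A_${size}.${format}`);
  }
}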
You can debug libvips’ cache by setting the VIPS_TRACE environment variable.