How to properly validate the file size of the upload?
I’ve spent about an hour trying to find resources on this, but couldn’t find anything that actually explains how to validate a file’s size before uploading it to S3 or wherever. All of the file upload guides/tutorials conveniently skip the issue of file size validation.
So, how would I properly validate the size to make sure an absurdly big file isn’t being POSTed? Sure, I know that I can limit the max file size for this entire lib, but that isn’t granular enough: some mutations might need to cap the size at 1 MB, while others might need to allow up to 100 MB.
My only idea right now is to take the stream returned by createReadStream and read it through to see how big it is, and then, if it’s OK, call createReadStream again to get a fresh stream to pass to S3.
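Roughly something like this (just a sketch of the idea; it assumes graphql-upload, where createReadStream() can apparently be called more than once because fs-capacitor buffers the upload to disk, and MAX_BYTES, s3client, bucket and key are placeholders):

```js
const { createReadStream } = await file;

// First pass: count the bytes without keeping them around.
let size = 0;
for await (const chunk of createReadStream()) {
  size += chunk.length;
}
if (size > MAX_BYTES) {
  throw new Error(`File exceeds ${MAX_BYTES} bytes`);
}

// Second pass: a fresh stream for the actual upload.
await s3client
  .upload({ Body: createReadStream(), Bucket: bucket, Key: key })
  .promise();
```

The obvious downside is that the whole file gets read twice.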
Another relevant question: If I find the file size to be too big, how can I tell this package to clean up the huge file that’s been written to the temp directory?
Thanks in advance
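(On the cleanup question, as far as I can tell: graphql-upload buffers uploads to disk via fs-capacitor, and the temp file is removed once the request has ended and every read stream created from it has been destroyed. So on a failed validation it should be enough to destroy your stream; a sketch, with size, MAX_BYTES and stream as placeholders:)

```js
if (size > MAX_BYTES) {
  // Releasing our handle lets fs-capacitor clean up its temp file
  // once the request has ended.
  stream.destroy();
  throw new Error('File too large');
}
```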
Improved my example so that the stream is uploaded and validated at the same time. Not sure if I need to destroy all the streams at the end, but I did it anyway to be safe.
Edit: There’s an even better suggestion here: https://github.com/mike-marcacci/fs-capacitor/issues/27#issuecomment-631570106. Instead of two PassThrough streams, you can create a special SizeValidatorStream, pipe the original stream through it, and then pass the validator stream into the S3 client.
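A minimal sketch of such a SizeValidatorStream (my own take, not code from the linked issue): a Transform stream that counts bytes as they pass through and errors out as soon as the limit is exceeded, which destroys the pipeline and aborts whatever is consuming it.

```js
const { Transform } = require('stream');

class SizeValidatorStream extends Transform {
  constructor(maxBytes) {
    super();
    this.maxBytes = maxBytes;
    this.bytesSeen = 0;
  }

  _transform(chunk, encoding, callback) {
    this.bytesSeen += chunk.length;
    if (this.bytesSeen > this.maxBytes) {
      // Passing an error to the callback destroys the stream, which in
      // turn fails the upload that is consuming it.
      callback(new Error(`File exceeds the ${this.maxBytes} byte limit`));
      return;
    }
    callback(null, chunk);
  }
}
```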
@fabis94 or somebody else: why do you need uploadStream? Can’t you just use const result = await s3client.upload({ Body: validationStream, Bucket: X, Key: Y }).promise(); ?
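For reference, that direct approach would look something like this (assuming the SizeValidatorStream sketched above and an AWS SDK v2 client; bucket and key are placeholders):

```js
const { createReadStream } = await file;
const validationStream = createReadStream().pipe(
  new SizeValidatorStream(1024 * 1024) // e.g. a 1 MB limit for this mutation
);

try {
  const result = await s3client
    .upload({ Body: validationStream, Bucket: bucket, Key: key })
    .promise();
  console.log('Uploaded to', result.Location);
} catch (err) {
  // Reached both for S3 errors and for the "file too large" error
  // emitted mid-stream by the validator.
  console.error(err.message);
}
```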