Make large files great again
I was in the process of stress testing this app before making the switch, but I seem to have run into some problems with large files.
I don’t expect to upload files this large, but I tried uploading a 23 GB mkv file through my admin ShareX config. This went directly to my publicip:9999, with no nginx in between. Monitoring RAM and CPU, the CPU seemed to spike a few times throughout the upload, but RAM stayed steady at ~230 MB total usage and wouldn’t go up, which is great. I watched the file grow in size in /uploads and started timing how long it would take from the upload finishing to it giving me the URL. It sat there, upload done but still waiting to return a URL, for about a minute and thirty seconds. After that the upload started over from the beginning, although in my uploads folder the old file remained and a new file was being written for this new transfer.
I cancelled it at that point, since it would probably just keep looping. I tested beforehand with a 1 GB file and it worked all right (there was some delay between the upload finishing and the URL being generated). In my config I have
maxSize: '150000MB', and
the other main options are pretty much at their defaults.
Is there a way I could give you helpful debug logs for these transfers? I’m worried I might be hitting a bottleneck in Node’s architecture.
- Created 3 years ago
- Comments: 10 (5 by maintainers)
Top GitHub Comments
Just a suggestion: have you considered using a multi-writer when uploading? Rather than treating uploading and hashing as two separate steps, they can be done at the same time. This is how kipp handles file uploads.
Frankly, I hadn’t given that any thought before this issue. I also thought of something like that while this was going on, but I was sure Multer (the current lib we use to parse multipart data) didn’t have a stream-based API to hook into. I just gave it another look, and indeed the stream-based API is only in RC versions at the moment. I could probably give that a try as-is, but eh, dunno. There are also some solutions that involve writing our own Multer storage engine, such as this one. Probably a better choice for now.
There are a few Blake3 implementations here: https://www.npmjs.com/search?q=blake3
As @camjac251 suggested, it might also be a good idea to use blake3. md5 should never, ever be used anymore.
Aight, that sounds good to me as well.
This is incredible. Thank you for adding this. Once the 23 GB file finished uploading, it gave me the link almost instantly. So much faster now.