Memory exhaustion issues when deploying a large site
I'm trying to deploy a pretty large (35 GB) Gatsby site to S3 for the first time via gatsby-plugin-s3 deploy
with default settings, and I'm getting numerous memory exhaustion errors such as:
# Fatal error in , line 0
# API fatal error handler returned after process out of memory on the background thread
And:
class FastBuffer extends Uint8Array {}
^
RangeError: Array buffer allocation failed
at new ArrayBuffer (<anonymous>)
at new Uint8Array (<anonymous>)
at new FastBuffer (internal/buffer.js:788:1)
at createUnsafeBuffer (buffer.js:111:12)
at allocate (buffer.js:322:10)
at Function.allocUnsafe (buffer.js:285:10)
at allocNewPool (internal/fs/streams.js:36:19)
at ReadStream._read (internal/fs/streams.js:141:5)
at ReadStream.Readable.read (_stream_readable.js:457:10)
at ManagedUpload.fillStream (G:\dev\dunedinsound-gatsby\node_modules\aws-sdk\lib\s3\managed_upload.js:420:25)
And:
Error: write ECONNRESET
- stream_base_commons.js:67 WriteWrap.onWriteComplete [as oncomplete]
internal/stream_base_commons.js:67:16
And (possibly unrelated):
RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
- s3.js:585 Request.extractError
[dunedinsound-gatsby]/[aws-sdk]/lib/services/s3.js:585:35
- sequential_executor.js:106 Request.callListeners
[dunedinsound-gatsby]/[aws-sdk]/lib/sequential_executor.js:106:20
- sequential_executor.js:78 Request.emit
[dunedinsound-gatsby]/[aws-sdk]/lib/sequential_executor.js:78:10
- request.js:683 Request.emit
[dunedinsound-gatsby]/[aws-sdk]/lib/request.js:683:14
- request.js:22 Request.transition
[dunedinsound-gatsby]/[aws-sdk]/lib/request.js:22:10
- state_machine.js:14 AcceptorStateMachine.runTo
[dunedinsound-gatsby]/[aws-sdk]/lib/state_machine.js:14:12
- state_machine.js:26
[dunedinsound-gatsby]/[aws-sdk]/lib/state_machine.js:26:10
- request.js:38 Request.<anonymous>
[dunedinsound-gatsby]/[aws-sdk]/lib/request.js:38:9
- request.js:685 Request.<anonymous>
[dunedinsound-gatsby]/[aws-sdk]/lib/request.js:685:12
- sequential_executor.js:116 Request.callListeners
[dunedinsound-gatsby]/[aws-sdk]/lib/sequential_executor.js:116:18
This is on Windows 10 with Node 11 64-bit and 16 GB of memory.
@FraserThompson Good point. We should probably just pass the buffer var to ManagedUpload instead of creating a new read stream. Can you give that a try?
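A minimal sketch of what that could look like, assuming the file contents are already in a buffer from the hashing step (the names and the standalone upload function here are hypothetical, not the plugin's actual code):

```js
const AWS = require('aws-sdk');
const fs = require('fs');
const crypto = require('crypto');

const s3 = new AWS.S3();

// Hypothetical sketch: read the file once, then reuse that buffer for both the
// hash and the upload body, rather than opening a second read stream.
async function uploadWithBuffer(bucket, key, filePath) {
  const buffer = await fs.promises.readFile(filePath);
  const hash = crypto.createHash('md5').update(buffer).digest('hex');

  const upload = new AWS.S3.ManagedUpload({
    service: s3,
    params: { Bucket: bucket, Key: key, Body: buffer, Metadata: { hash } },
  });
  return upload.promise();
}
```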
Regardless, I'm still not sure what's causing the memory leak here; at no point are we loading the entire file into memory or anything like that. EDIT: readFile: "Asynchronously reads the entire contents of a file." Whoops. Maybe we need to find a better way to create the hash.
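One rough sketch of an alternative (assuming an MD5 hash is what's needed for comparison): stream the file through crypto.createHash instead of reading it all at once, so only one chunk is in memory at a time.

```js
const fs = require('fs');
const crypto = require('crypto');

// Hash a file without loading its entire contents into memory:
// the read stream feeds the hash object chunk by chunk.
function hashFile(filePath) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('md5');
    fs.createReadStream(filePath)
      .on('error', reject)
      .on('data', (chunk) => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')));
  });
}
```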
The parallelLimit function from the async library looks like a fairly nice way to limit concurrency.
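Something along these lines might work as a sketch (the file list shape and the uploadWithBuffer helper from the earlier sketch are placeholders, not the plugin's real internals):

```js
const parallelLimit = require('async/parallelLimit');

// Run at most 20 uploads concurrently instead of starting one per file at once,
// which should keep the number of in-flight buffers and sockets bounded.
function uploadAll(files, bucket) {
  const tasks = files.map(({ key, path }) => (callback) => {
    uploadWithBuffer(bucket, key, path)
      .then((result) => callback(null, result))
      .catch(callback);
  });

  return new Promise((resolve, reject) => {
    parallelLimit(tasks, 20, (err, results) => (err ? reject(err) : resolve(results)));
  });
}
```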