Heap out of memory with large contract sets
I have a large set of tests.
At some point, I get the following error:
1) Contract: ContractTest "before all" hook: prepare suite:
Error: Timeout of 120000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
<--- Last few GCs --->
[12036:0000021291071750] 13244000 ms: Mark-sweep 1421.8 (1566.3) -> 1421.8 (1566.3) MB, 196.5 / 0.0 ms allocation failure scavenge might not succeed
[12036:0000021291071750] 13244199 ms: Mark-sweep 1421.8 (1566.3) -> 1421.8 (1566.3) MB, 199.1 / 0.0 ms last resort GC in old space requested
[12036:0000021291071750] 13244392 ms: Mark-sweep 1421.8 (1566.3) -> 1421.8 (1566.3) MB, 192.4 / 0.0 ms last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0000014EEEA25EE1 <JSObject>
0: builtin exit frame: endsWith(this=000003220CBF4A29 <Very long string[65884]>,000000D7217AD2F9 <String[1]\: \r>)
1: ondata [readline.js:~140] [pc=000000D1EAB44BB9](this=0000015B6EB387B1 <ReadStream map = 0000020FFFE40949>,data=000003220CBF49B9 <Uint8Array map = 000003457A3417B9>)
2: emit [events.js:~156] [pc=000000D1EAB4732A](this=0000015B6EB387B1 <ReadStream map = 0000020FFFE40...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
I realize that this could be a problem in any one of the following:
- solidity-coverage
- mocha
- node
- JavaScript
Any suggestions would be highly appreciated.
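A common mitigation for this class of failure is to raise Node's old-space heap limit before running the suite. A minimal sketch; the 8192 MB value and the `npx solidity-coverage` invocation are assumptions, so adjust both to your setup:

```shell
# Raise Node's old-space heap limit to 8 GB for every node process
# started from this shell (the default on older Node versions is
# roughly 1.5 GB, which matches the ~1421 MB figures in the GC log).
export NODE_OPTIONS="--max-old-space-size=8192"

# Then re-run the suite in the same shell, e.g. (assumed invocation):
# npx solidity-coverage
```

Note that the mocha timeout on the first line of the error is a symptom rather than the cause: the "before all" hook never completes because the process exhausts its heap first.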
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 1
- Comments: 10 (5 by maintainers)
Top GitHub Comments
I’m aware that this is a fairly ancient issue, but I’ve just encountered it when running coverage on some fairly large contracts. In my case it was fixed by exporting
NODE_OPTIONS="--max-old-space-size=8192"
I think this can probably be closed on this basis.

I tried setting
max-old-space-size=8192
but no luck. Here is the error I get: