
[node] memory problems due to large BCH blocks


This problem is different from #1475 since it’s not caused by any sort of memory leak, but rather by large BCH blocks.

My bitcore node has recently been failing to sync BCH on testnet starting from block #1326451 (a full 32MB block). The failure is mostly due to the large number of transactions this block holds, and more specifically to the way the mintOps are processed per block in the node.

There are actually two problems that I noticed; both happen in the function getMintOps: https://github.com/bitpay/bitcore/blob/b37e0d6f1c9c256e8a7823be5e932136fc2b9085/packages/bitcore-node/src/models/transaction.ts#L352

The first one is that the array of mintOps, defined here: https://github.com/bitpay/bitcore/blob/b37e0d6f1c9c256e8a7823be5e932136fc2b9085/packages/bitcore-node/src/models/transaction.ts#L363, gets really large when handling a full 32MB block (since we add a mintOp per output per transaction in the block). I’ve seen the number of items reach about 550K ops, which causes the node to run out of heap and crash with the following error (reported in #1475):

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x8f9d10 node::Abort() [node]
 2: 0x8f9d5c  [node]
 3: 0xaffd0e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xafff44 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xef4152  [node]
 6: 0xef4258 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
 7: 0xf00332 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
 8: 0xf00c64 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0xf038d1 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
10: 0xeccd54 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
11: 0x116cede v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
12: 0x16494fb5be1d 
sh: line 1: 14365 Aborted                 node build/src/server.js
npm ERR! code ELIFECYCLE
npm ERR! errno 134
npm ERR! bitcore-node@8.3.4 start: `npm run tsc && node build/src/server.js`
npm ERR! Exit status 134
npm ERR! 
npm ERR! Failed at the bitcore-node@8.3.4 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

A quick fix for this problem is to override the default heap size used by the node process (about 1GB) with a larger one (I’ve tested with 8GB). If that’s not an option, then the mintOps logic needs to be refactored so that it doesn’t have to hold all the mint ops in memory at once.
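A minimal sketch of that quick fix, assuming the start script shown in the npm log above stays otherwise unchanged: pass V8’s --max-old-space-size flag (value in MB) when launching the server, for example for an 8GB heap:

    node --max-old-space-size=8192 build/src/server.js

so the bitcore-node start script would become npm run tsc && node --max-old-space-size=8192 build/src/server.js. Pick whatever value your server can actually afford.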

Once I got this problem out of the way, I noticed that the node no longer runs out of heap, but instead throws the following exception (without crashing the node):

error: 2019-09-10 16:58:43.188 UTC | Error syncing | Chain: BCH | Network: testnet RangeError [ERR_BUFFER_OUT_OF_BOUNDS]: Attempt to write outside buffer bounds
    at Buffer.write (buffer.js:922:13)
    at serializeString (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:34:14)
    at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:709:17)
    at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
    at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
    at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
    at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
    at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
    at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
    at BSON.serialize (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/bson.js:63:28)
    at Query.toBin (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/commands.js:144:25)
    at serializeCommands (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/pool.js:1044:43)
    at Pool.write (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/pool.js:1260:3)
    at Cursor._find (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:326:22)
    at nextFunction (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:673:10)
    at Cursor.next (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:824:3)
    at Cursor._next (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/cursor.js:211:36)
    at fetchDocs (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/operations/cursor_ops.js:211:12)
    at toArray (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/operations/cursor_ops.js:241:3)
    at /home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/utils.js:437:24
    at new Promise (<anonymous>)
    at executeOperation (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/utils.js:432:10) 

This error is thrown every time the p2p worker for BCH tries to sync that large block, so the whole BCH sync process gets stuck because of it.

After some investigation, it turned out that this error is a side effect of the large mintOps list and the way this db command is constructed: https://github.com/bitpay/bitcore/blob/61ddc54b045f7eca12b4fdeca6a882ab58d94ca6/packages/bitcore-node/src/models/transaction.ts#L436

Due to the large array of mintOps, the set of unique addresses mintOpsAddresses is large too, and this seems to cause mongo to hit ERR_BUFFER_OUT_OF_BOUNDS when it tries to serialize the list of addresses further down the stack.

The fix I’ve tried is splitting the list of addresses by maxPoolSize, similar to how it’s done here: https://github.com/bitpay/bitcore/blob/61ddc54b045f7eca12b4fdeca6a882ab58d94ca6/packages/bitcore-node/src/models/transaction.ts#L142-L143

This seems to fix the problem, since it limits the number of addresses per query and avoids the buffer error.
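A rough sketch of that chunking approach in TypeScript (the helper and collection names here are illustrative, not bitcore-node’s actual code):

    // Split a large array into fixed-size batches.
    function chunk<T>(items: T[], size: number): T[][] {
      const batches: T[][] = [];
      for (let i = 0; i < items.length; i += size) {
        batches.push(items.slice(i, i + size));
      }
      return batches;
    }

    // Hypothetical lookup: run one query per batch of addresses instead of a
    // single query whose $in clause holds all ~550K unique addresses at once.
    async function findWalletsForAddresses(
      walletAddresses: { find(query: object): { toArray(): Promise<any[]> } },
      mintOpsAddresses: string[],
      maxPoolSize: number
    ): Promise<any[]> {
      const results: any[] = [];
      for (const batch of chunk(mintOpsAddresses, maxPoolSize)) {
        const docs = await walletAddresses.find({ address: { $in: batch } }).toArray();
        results.push(...docs);
      }
      return results;
    }

Each query then serializes only maxPoolSize addresses, which keeps every BSON command small enough to stay within buffer bounds, at the cost of a few extra round trips to mongo.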

Any thoughts on whether there’s a better way to handle the second problem?

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

micahriggan commented, Sep 19, 2019 (1 reaction)

I spent some time today working on this issue.

https://github.com/bitpay/bitcore/pull/2403

I’m able to sync past the big blocks now without increasing heap size or memory.

I haven’t tested to make sure the wallets are still getting tagged correctly yet though.

osagga commented, Sep 13, 2019 (0 reactions)

@christroutner the PR only implements the fix for the second problem. If you’re having out-of-heap problems, make sure you first raise the node’s default heap size, as I mentioned above.

Check this commit to see how to increase the heap size: https://github.com/cwcrypto/bitcore-1/commit/3e9af7ec8886d81f360cdb03e985b0218f40a812. I’m currently using 8GB as my heap size, so make sure your server can afford that, or try something smaller (the default is about 1GB).
