[node] memory problems due to large BCH blocks
This problem is different from #1475 since it's not caused by any sort of memory leak, but rather by large BCH blocks.
My bitcore node has recently been failing to sync BCH on testnet, starting from block #1326451 (a full 32MB block). The failure is mostly due to the large number of transactions this block holds; more specifically, it's due to the way the mintOps are processed per block in the node.
There are actually two problems that I noticed; both happen in the function getMintOps:
https://github.com/bitpay/bitcore/blob/b37e0d6f1c9c256e8a7823be5e932136fc2b9085/packages/bitcore-node/src/models/transaction.ts#L352
The first one is that the array of mintOps, defined here:
https://github.com/bitpay/bitcore/blob/b37e0d6f1c9c256e8a7823be5e932136fc2b9085/packages/bitcore-node/src/models/transaction.ts#L363
gets really large when handling a full 32MB block (since we add a mintOp per output per transaction in the block). I've seen the number of items reach about 550K ops, which causes the node to run out of heap and crash with the following error (reported in #1475):
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x8f9d10 node::Abort() [node]
2: 0x8f9d5c [node]
3: 0xaffd0e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: 0xafff44 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xef4152 [node]
6: 0xef4258 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
7: 0xf00332 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
8: 0xf00c64 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
9: 0xf038d1 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
10: 0xeccd54 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
11: 0x116cede v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
12: 0x16494fb5be1d
sh: line 1: 14365 Aborted node build/src/server.js
npm ERR! code ELIFECYCLE
npm ERR! errno 134
npm ERR! bitcore-node@8.3.4 start: `npm run tsc && node build/src/server.js`
npm ERR! Exit status 134
npm ERR!
npm ERR! Failed at the bitcore-node@8.3.4 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
A quick fix for this problem is to override Node's default heap size (about 1GB) with a larger one, e.g. `node --max-old-space-size=8192` for 8GB, which is what I've tested with. If that's not an option, then the mintOps logic needs to be refactored so that it doesn't have to hold all the mint ops in memory at once.
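A refactor along those lines could hand the ops off in fixed-size batches instead of accumulating one giant array. The sketch below is only an illustration of the idea, not the actual bitcore-node code: the op shape, the batch size, and the bulkWrite caller are all assumptions.

```javascript
// Hypothetical sketch: yield mint ops in fixed-size batches so a full
// 32MB block never has to sit in memory as one ~550K-element array.
function* mintOpBatches(txs, batchSize = 50000) {
  let batch = [];
  for (const tx of txs) {
    for (let mintIndex = 0; mintIndex < tx.outputs.length; mintIndex++) {
      batch.push({ mintTxid: tx.txid, mintIndex, value: tx.outputs[mintIndex].value });
      if (batch.length >= batchSize) {
        yield batch; // hand a full batch to the caller (e.g. for a bulkWrite)
        batch = [];  // start a fresh array so V8 can reclaim the old one
      }
    }
  }
  if (batch.length) yield batch; // final partial batch
}

// The caller would then write each batch and let it go out of scope:
// for (const batch of mintOpBatches(block.transactions)) {
//   await TransactionStorage.bulkWrite(batch); // hypothetical write step
// }
```

Since only one batch is live at a time, peak memory is bounded by the batch size rather than by the block size.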
Once I got this problem out of the way, I noticed that the node no longer runs out of heap; instead it throws the following exception (without crashing the node):
error: 2019-09-10 16:58:43.188 UTC | Error syncing | Chain: BCH | Network: testnet RangeError [ERR_BUFFER_OUT_OF_BOUNDS]: Attempt to write outside buffer bounds
at Buffer.write (buffer.js:922:13)
at serializeString (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:34:14)
at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:709:17)
at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
at serializeObject (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:347:18)
at serializeInto (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/parser/serializer.js:937:17)
at BSON.serialize (/home/bitcore/bitcore/packages/bitcore-node/node_modules/bson/lib/bson/bson.js:63:28)
at Query.toBin (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/commands.js:144:25)
at serializeCommands (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/pool.js:1044:43)
at Pool.write (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/connection/pool.js:1260:3)
at Cursor._find (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:326:22)
at nextFunction (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:673:10)
at Cursor.next (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb-core/lib/cursor.js:824:3)
at Cursor._next (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/cursor.js:211:36)
at fetchDocs (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/operations/cursor_ops.js:211:12)
at toArray (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/operations/cursor_ops.js:241:3)
at /home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/utils.js:437:24
at new Promise (<anonymous>)
at executeOperation (/home/bitcore/bitcore/packages/bitcore-node/node_modules/mongodb/lib/utils.js:432:10)
This error is thrown every time the p2p worker for BCH tries to sync that large block, so the whole BCH sync process gets stuck.
After some investigation, it turned out that this error is a side effect of the large mintOps list and the way this db command is constructed:
https://github.com/bitpay/bitcore/blob/61ddc54b045f7eca12b4fdeca6a882ab58d94ca6/packages/bitcore-node/src/models/transaction.ts#L436
Due to the large array of mintOps, the set of unique addresses mintOpsAddresses is large too, and this causes Mongo to throw ERR_BUFFER_OUT_OF_BOUNDS when it tries to serialize the list of addresses into a single command further down the stack.
So the fix I've tried is splitting the list of addresses based on maxPoolSize, similar to how it's done here:
https://github.com/bitpay/bitcore/blob/61ddc54b045f7eca12b4fdeca6a882ab58d94ca6/packages/bitcore-node/src/models/transaction.ts#L142-L143
and it seems to fix the problem, since limiting the number of addresses per query avoids the buffer overflow.
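In other words, the workaround is just partitioning the address array before querying. Here is a minimal sketch of that idea under my own assumptions (the chunk size, function names, and collection access are illustrative, not the exact bitcore-node change):

```javascript
// Split a large array into fixed-size chunks so that no single Mongo
// query has to serialize an oversized $in list into one BSON command.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical query loop: one find() per chunk instead of one huge $in.
async function findWallets(collection, mintOpsAddresses, chunkSize = 50000) {
  let wallets = [];
  for (const addresses of chunk(mintOpsAddresses, chunkSize)) {
    const docs = await collection.find({ address: { $in: addresses } }).toArray();
    wallets = wallets.concat(docs);
  }
  return wallets;
}
```

Each chunked query stays well under the serialization limit, at the cost of a few extra round trips to Mongo for the very large blocks.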
Any thoughts on whether there’s a better way to handle the second problem?
Issue Analytics
- Created: 4 years ago
- Reactions: 1
- Comments: 5 (3 by maintainers)
Top GitHub Comments
I spent some time today working on this issue:
https://github.com/bitpay/bitcore/pull/2403
I'm able to sync past the big blocks now without increasing the heap size or memory. I haven't tested yet to make sure the wallets are still getting tagged correctly, though.
@christroutner the PR only implements the fix for the second problem; if you're running out of heap, make sure you first increase the node's default heap size as I mentioned above.
Check this commit https://github.com/cwcrypto/bitcore-1/commit/3e9af7ec8886d81f360cdb03e985b0218f40a812 to see how to increase the heap size. I'm currently using 8GB, so make sure your server can afford that, or try something smaller (the default is about 1GB).