`FATAL ERROR: v8::HandleScope::CreateHandle() Cannot create a handle without a HandleScope` when using Prisma Client in worker threads
Hi,
Shortly after integrating Prisma into the stack of one of my projects, our CI pipeline (GitHub Actions) began “randomly” failing with the ever-so-ambiguous “Segfault / Aborted (core dumped)” message from Node. The package in question uses Ava as its test runner.
As more tests were added, the rate of failure increased to the point of “this isn’t going to work.”
Of course, none of my team was able to replicate the issue locally, as that would make things too easy 😄
After some digging into the cause of the issue, I was able to capture the following traceback with `segfault-handler`:
Full Traceback
```
PID 3675 received SIGSEGV for address: 0x0
/home/runner/work/myproject/myproject/node_modules/.pnpm/segfault-handler@1.3.0/node_modules/segfault-handler/build/Release/segfault-handler.node(+0x3785)[0x7fdfac2fa785]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7fdfafd0f3c0]
node(napi_call_threadsafe_function+0x194)[0xacab24]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1d5e0b)[0x7fda6a619e0b]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1812c5)[0x7fda6a5c52c5]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1264f80)[0x7fda6b6a8f80]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x125f81c)[0x7fda6b6a381c]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x125e857)[0x7fda6b6a2857]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x124c3a5)[0x7fda6b6903a5]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x125e22c)[0x7fda6b6a222c]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1262cdf)[0x7fda6b6a6cdf]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x12562b8)[0x7fda6b69a2b8]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1242a56)[0x7fda6b686a56]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x124ad2e)[0x7fda6b68ed2e]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1265684)[0x7fda6b6a9684]
/home/runner/work/myproject/myproject/node_modules/.pnpm/@prisma+client@3.9.2_prisma@3.9.2/node_modules/.prisma/client/libquery_engine-debian-openssl-1.1.x.so.node(+0x1368d63)[0x7fda6b7acd63]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x9609)[0x7fdfafd03609]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7fdfafc2a293]
FATAL ERROR: v8::HandleScope::CreateHandle() Cannot create a handle without a HandleScope
 1: 0xb00e10 node::Abort() [node]
 2: 0xa1823b node::FatalError(char const*, char const*) [node]
 3: 0xceddda v8::Utils::ReportApiFailure(char const*, char const*) [node]
 4: 0xe590a2 v8::internal::HandleScope::Extend(v8::internal::Isolate*) [node]
 5: 0x10620a4 v8::internal::JSFunction::EnsureClosureFeedbackCellArray(v8::internal::Handle<v8::internal::JSFunction>, bool) [node]
 6: 0xd8e78b v8::internal::Compiler::Compile(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Compiler::ClearExceptionFlag, v8::internal::IsCompiledScope*) [node]
 7: 0x11e9bdd v8::internal::Runtime_CompileLazy(int, unsigned long*, v8::internal::Isolate*) [node]
 8: 0x15e7cf9 [node]
 ELIFECYCLE  Command failed with exit code 134.
Aborted (core dumped)
 ELIFECYCLE  Test failed. See above for more details.
```
This specific traceback was caught with Ava configured to execute all tests serially.
It’s worth noting that when running the tests concurrently, they will occasionally fail with:
```
node: tpp.c:82: __pthread_tpp_change_priority: Assertion `new_prio == -1 || (new_prio >= fifo_min_prio && new_prio <= fifo_max_prio)' failed.
 ELIFECYCLE  Command failed with exit code 134.
Aborted (core dumped)
```
instead of the traceback above.
Some additional notes:
- The tests in question each have an isolated instance of the PrismaClient that creates a connection to a mongomem-server replica set. Prior to shutting down the mongomem-server instance(s), the test’s client is disconnected from the database via the `$disconnect` method (see the sketch after this list).
- Where and when the error occurs appears to vary quite a bit from run to run.
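For illustration, here is a minimal sketch of that per-test setup. It assumes the in-memory server referred to above is the `mongodb-memory-server` package and that the datasource in `schema.prisma` is named `db`; the names and options below are hypothetical, not lifted from the project in question:

```ts
import anyTest, { TestFn } from 'ava';
import { PrismaClient } from '@prisma/client';
import { MongoMemoryReplSet } from 'mongodb-memory-server';

const test = anyTest as TestFn<{ replSet: MongoMemoryReplSet; prisma: PrismaClient }>;

test.beforeEach(async (t) => {
  // Each test gets an isolated in-memory replica set and its own client.
  const replSet = await MongoMemoryReplSet.create({ replSet: { count: 1 } });
  const prisma = new PrismaClient({
    // Assumes the datasource in schema.prisma is named `db`;
    // getUri('test') appends a database name to the connection string.
    datasources: { db: { url: replSet.getUri('test') } },
  });
  t.context = { replSet, prisma };
});

test.afterEach.always(async (t) => {
  // Disconnect the client *before* stopping the in-memory server,
  // mirroring the shutdown order described in the notes above.
  await t.context.prisma.$disconnect();
  await t.context.replSet.stop();
});
```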
As a workaround, I was able to adapt our test suite to disable worker threads (via Ava’s `--no-worker-threads` flag).
This appears to avoid the issue and works whether the tests run serially or concurrently. (🎉)
However, this blocks us from fully utilizing our test runner and may not be a reasonable solution for others who run into this.
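For anyone wanting to apply the same workaround without changing every CLI invocation, it can also be expressed in Ava’s configuration; a sketch, assuming Ava 4+:

```js
// ava.config.js — equivalent to passing --no-worker-threads on the CLI:
// run each test file in a child process instead of a worker thread.
export default {
  workerThreads: false,
};
```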
Unfortunately, the repository in question is private, but I will gladly provide what additional information I can as requested.
Top GitHub Comments
Ok interesting!
Be aware that the binary engine behaves a bit differently from the Node-API (library) engine; in particular, performance characteristics can be impacted (usually slightly worse overall). So this is not a long-term solution, but really just a temporary workaround to get around this problem right now. We will definitely have to investigate what is going on there.
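For context, the engine switch being discussed here is configured on the client generator in the Prisma schema. A sketch of what opting into the binary engine looks like (illustrative; the `engineType` field is available in Prisma 3.x, and omitting it gives the default library engine):

```prisma
// schema.prisma — opt the generated client into the binary query engine
// instead of the default Node-API ("library") engine.
generator client {
  provider   = "prisma-client-js"
  engineType = "binary" // default is "library"
}
```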
@Jolg42 @janpio
Around v3.10, I moved the project in question back to using the default library `engineType` and have yet to run into this segfault again.
It is also worth mentioning that, at the time, we had some problematic TypeScript types that were using large amounts of memory; these have since been fixed (and were quite possibly the real issue behind this).
Given those two points and since it does not appear that anyone else has run into this issue, I am going to go ahead and close it.
Thank you guys for all the hard work! 👍 😄