Memory leaks in Jest when running tests serially with `nApi` enabled
Bug description
Background
I am currently migrating an existing project (a Prisma schema of over 1,000 lines) to the library engine. After enabling the `nApi` preview feature, I've noticed that Jest periodically crashes while running test suites serially (hundreds of tests spread across almost 30 test suites).
After taking a closer look with Jest's `--logHeapUsage` flag, I found that memory usage grows by more than 100 MB per test suite. About a third of the way into a test run, Jest was using over 2 GB of memory and crashed soon after.
Limitations
Unfortunately, I've found no reliable way to run tests in parallel in isolated environments when using Prisma. I tried setting up a test environment that creates temporary schemas (since Prisma doesn't seem to allow the use of `pg_temp`) and applies migrations for each suite, but didn't achieve satisfactory results.
The problem
Instantiating a library engine leads to memory leaks when using Jest (barebones example), which is noticeable when running tests with the `--runInBand` flag. The issue is also picked up when using `--detectLeaks`.
I've also tested a version of the library engine with logging disabled (repo) and did not see the issue: neither on a simple instantiation (barebones example), nor when using it in a generated Prisma client (by manually replacing the path in node_modules/@prisma/client/runtime/index.js).
How to reproduce
Minimal reproduction repo
https://github.com/driimus/prisma-leaks - see the action runs for logs
Steps:
- Enable the `nApi` preview feature
- Run some Jest test suites with the `--runInBand` or `-w 1` flag and monitor memory usage (e.g. by also passing the `--logHeapUsage` flag)
- Note that memory usage keeps going up
- When running large test suites, the runner may crash:
<--- Last few GCs --->
[594:0x5d8b870] 327150 ms: Scavenge (reduce) 1912.3 (2075.9) -> 1912.0 (2076.4) MB, 3.8 / 0.0 ms (average mu = 0.394, current mu = 0.400) allocation failure
[594:0x5d8b870] 327155 ms: Scavenge (reduce) 1912.7 (2076.4) -> 1912.3 (2076.4) MB, 3.0 / 0.0 ms (average mu = 0.394, current mu = 0.400) allocation failure
[594:0x5d8b870] 327162 ms: Scavenge (reduce) 1913.0 (2076.4) -> 1912.6 (2076.9) MB, 3.0 / 0.0 ms (average mu = 0.394, current mu = 0.400) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xb02cd0 node::Abort() [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
2: 0xa1812d node::FatalError(char const*, char const*) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
3: 0xceb72e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
4: 0xcebaa7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
5: 0xeb5485 [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
6: 0xeb5f74 [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
7: 0xec43e7 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
8: 0xec779c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
9: 0xe89d25 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
10: 0xe82934 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
11: 0xe84630 v8::internal::FactoryBase<v8::internal::Factory>::NewRawOneByteString(int, v8::internal::AllocationType) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
12: 0x110da42 v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
13: 0x1097e77 v8::internal::JSRegExp::Initialize(v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, v8::base::Flags<v8::internal::JSRegExp::Flag, int>, unsigned int) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
14: 0x10987ff v8::internal::JSRegExp::Initialize(v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, v8::internal::Handle<v8::internal::String>) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
15: 0x120b798 v8::internal::Runtime_RegExpInitializeAndCompile(int, unsigned long*, v8::internal::Isolate*) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
16: 0x15cddf9 [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
Aborted
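For anyone who wants to eyeball the growth outside of Jest, here is a small self-contained Node sketch of the kind of per-suite heap measurement that `--logHeapUsage` automates. The `leaked` array is an artificial stand-in for retained engine state, not Prisma code:

```javascript
// Sketch: measure heap growth across repeated units of work, the way
// jest --logHeapUsage reports heapUsed after each test file.
function heapUsedMB() {
  return process.memoryUsage().heapUsed / (1024 * 1024);
}

const leaked = []; // simulates per-suite state that is never released

function runSuite(i) {
  leaked.push(new Array(1_000).fill(`suite-${i}`)); // stand-in for a retained engine
  if (global.gc) global.gc(); // only available with --expose-gc; otherwise numbers are noisier
  console.log(`suite ${i}: ${heapUsedMB().toFixed(1)} MB`);
}

for (let i = 0; i < 3; i++) runSuite(i);
```

In a leaking setup, the printed heap figure climbs steadily instead of plateauing after garbage collection.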
Expected behavior
No response
Prisma information
Schema: https://github.com/driimus/prisma-leaks/blob/main/prisma/schema.prisma
Environment & setup
- OS: macOS, Debian
- Database: PostgreSQL
- Node.js version: v16.7.0, LTS
Prisma Version
prisma : 2.30.0
@prisma/client : 2.30.0
Current platform : debian-openssl-1.1.x
Query Engine (Binary) : query-engine 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/query-engine-debian-openssl-1.1.x)
Migration Engine : migration-engine-cli 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x)
Introspection Engine : introspection-core 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x)
Format Binary : prisma-fmt 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x)
Default Engines Hash : 60b19f4a1de4fe95741da371b4c44a92f4d1adcb
Studio : 0.422.0
Preview Features : nApi
Issue Analytics
- Created: 2 years ago
- Reactions: 12
- Comments: 40 (17 by maintainers)
Top GitHub Comments
I had a look at this today. I think this is happening because we keep a collection of engines around here: https://github.com/prisma/prisma/blob/f395abad8eb85739a1df35071caa9e5050993696/packages/engine-core/src/library/LibraryEngine.ts#L44
Each time we create a new PrismaClient, we create a new engine, which duplicates the whole engine and consumes a lot of memory. I think we need to change the way we create the library engine so that only one engine is used.
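The single-engine idea described above can be sketched as a process-wide lazy singleton. This is illustrative only: `FakeEngine` is a made-up stand-in, not Prisma's internal engine API.

```javascript
// Sketch of the proposed fix: keep one lazily created engine per process
// instead of constructing a new one for every PrismaClient instance.
class FakeEngine {
  constructor() {
    this.started = true; // pretend this spins up the expensive native N-API engine
  }
}

let instance;

function getEngine() {
  if (!instance) instance = new FakeEngine(); // create once, reuse everywhere
  return instance;
}

module.exports = { getEngine };
```

Every caller then shares the same instance, so repeated client construction no longer multiplies engine memory.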
I can confirm that my company's Jest CI tests started leaking badly after upgrading to Prisma 3.x. That was the only variable that changed, so something is definitely happening. I'm trying to reproduce it with a simpler setup.