napi engine cause OOMKilled in container environment

See original GitHub issue

Bug description

We have been running Prisma in our production environment inside our k8s cluster for several months now. Using the legacy NodeEngine results in an average RAM consumption of roughly 700-800 MB after startup. After startup, memory usage stays fairly stable; GC on the JS side keeps it at around 800 MB to 1 GB.

After trying the preview NapiEngine on our staging server, RAM jumped to 1.4 GB on startup, and worse, it kept climbing toward 4 GB and never came back down until the container was OOMKilled (which happened fairly quickly).

I tried setting maxOldSpaceSize and increasing the requested/limited memory for the service, but neither seemed to improve the situation.
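
As an aside, one quick way to check whether a heap cap like --max-old-space-size is actually taking effect inside the container is to print V8's effective heap limit. The sketch below is not from the original report, just an illustration; note that this limit only bounds the JavaScript heap, so memory allocated natively by the engine library would not be constrained by it.

```ts
// Minimal sketch (not from the original report): print the effective V8 heap
// limit to confirm whether --max-old-space-size was applied inside the container.
// Note: this only covers the JS heap; allocations made by a native engine
// library live outside of it.
import { getHeapStatistics } from "v8";

const limitMb = Math.round(getHeapStatistics().heap_size_limit / 1024 / 1024);
console.log(`Effective V8 heap limit: ${limitMb} MB`);
```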

Right now, we have switched back to the old NodeEngine and everything is back to normal again.

From what I can see, the memory somehow never gets released by NapiEngine, and the process just gets killed pretty quickly if you try to run a lot of queries at the same time. I tried setting DEBUG to see what the engine prints, but after the client printed something, and before the engine printed anything, the container just got OOMKilled.
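
For anyone trying to narrow this down, logging process memory alongside the queries can make the growth visible in container logs before the OOMKill hits. The sketch below is illustrative and not from the report; the `user` model, the loop, and the 10-second interval are assumptions. An RSS that keeps climbing while heapUsed stays flat would point at engine-side (native) allocations rather than JS objects.

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const mb = (n: number) => Math.round(n / 1024 / 1024);

// Log RSS and V8 heap usage every 10 seconds so the trend shows up in the
// container logs before the pod is OOMKilled.
setInterval(() => {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  console.log(
    `rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB ` +
      `heapTotal=${mb(heapTotal)}MB external=${mb(external)}MB`
  );
}, 10_000);

async function main() {
  // Hypothetical workload; substitute the queries that trigger the growth.
  for (let i = 0; i < 1_000; i++) {
    await prisma.user.findMany({ take: 100 });
  }
}

main().finally(() => prisma.$disconnect());
```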

How to reproduce

Sorry, I can't share our repo or cluster environment with you, and I don't have the time or resources to create a repo to reproduce this, but I'm happy to provide more information and run tests for you 🙏.

Expected behavior

NapiEngine should provide the same performance as NodeEngine, if not better.

Prisma information

Environment & setup

The information below is for NodeEngine:

prisma               : 2.20.1
@prisma/client       : 2.20.1
Current platform     : linux-musl
Query Engine         : query-engine 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at ../root/.npm/_npx/2778af9cee32ff87/node_modules/@prisma/engines/query-engine-linux-musl)
Migration Engine     : migration-engine-cli 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at ../root/.npm/_npx/2778af9cee32ff87/node_modules/@prisma/engines/migration-engine-linux-musl)
Introspection Engine : introspection-core 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at ../root/.npm/_npx/2778af9cee32ff87/node_modules/@prisma/engines/introspection-engine-linux-musl)
Format Binary        : prisma-fmt 60ba6551f29b17d7d6ce479e5733c70d9c00860e (at ../root/.npm/_npx/2778af9cee32ff87/node_modules/@prisma/engines/prisma-fmt-linux-musl)
Default Engines Hash : 60ba6551f29b17d7d6ce479e5733c70d9c00860e
Studio               : 0.365.0

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 4
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

2 reactions
gogoout commented, Apr 19, 2021

@pimeys I had a quick test on our staging environment; it seems it's not getting OOMKilled. Will update if I find anything further. So far it seems pretty good.

0 reactions
pimeys commented, Apr 16, 2021

prisma@2.22.0-dev.13 should have the changes. It would be nice if you could check it out and see whether you still have issues, early enough that we can do more work before the next release if something's still off.
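
One way to confirm which client and engine actually ended up inside the container after switching to the dev build is to print the version info the client exposes. A minimal sketch follows, assuming the `Prisma.prismaVersion` export behaves as it does in recent @prisma/client releases.

```ts
// Small check (assumption: Prisma.prismaVersion is available, as in recent
// @prisma/client releases) to confirm the client and engine versions that are
// actually loaded inside the container.
import { Prisma } from "@prisma/client";

console.log(`client: ${Prisma.prismaVersion.client}`);
console.log(`engine: ${Prisma.prismaVersion.engine}`);
```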

Read more comments on GitHub >

Top Results From Across the Web

OOMKilled: Troubleshooting Kubernetes Memory Requests ...
OOMKilled Because of Limit Overcommit: the OOMKilled: Limit Overcommit error can occur when the sum of pod limits is greater than the available...

How to Fix OOMKilled Kubernetes Error (Exit Code 137)
OOMKilled: Common Causes; Container memory limit was reached, and application is experiencing a memory leak; Debug the application and resolve the memory...

Kubernetes OOMKilled out of memory diagnosis - Dynatrace
Out-of-memory root cause: Wrong settings or shortage of resources? The fix: Adjust the memory settings to avoid OOMKilled errors; Lesson learned...

Out-of-memory (OOM) in Kubernetes – Part 2 - Mihai-Albert.com
As one of the Kubernetes committers says, "Containers are marked as OOM killed only when the init pid gets killed by the kernel...

How to solve OOM Killed 137 pod problem kubernetes GKE?
A Pod always runs on a Node and is the basic unit in a kubernetes engine. A Node is a worker machine in...
