Actor does not release memory
Describe the bug
In two different actors using PuppeteerCrawler we see the same problem: when we kick off runs with Chrome, after some time currentConcurrency drops to 1 while memory usage keeps growing:
```js
launchContext: {
    useChrome: true,
    launchOptions: {
        stealth: true,
        headless: false,
        devtools: false,
    },
},
```
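For context, this launch context is passed straight into the PuppeteerCrawler constructor. Below is a minimal sketch of the surrounding setup in Apify SDK 2.x; the request queue, the placeholder URL, and the empty handlePageFunction are illustrative assumptions, not taken from the issue:

```js
const Apify = require('apify');

Apify.main(async () => {
    // Illustrative queue with a single placeholder URL.
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://example.com' });

    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        launchContext: {
            useChrome: true, // launch full Chrome instead of the bundled Chromium
            launchOptions: {
                headless: false,
                devtools: false,
            },
        },
        handlePageFunction: async ({ page, request }) => {
            // page handling logic goes here
        },
    });

    await crawler.run();
});
```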

We also tried setting a lower closeInactiveBrowserAfterSecs:
```js
browserPoolOptions: {
    closeInactiveBrowserAfterSecs: 120,
}
```
but it did not help.
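For reference, a sketch of where browserPoolOptions sits relative to launchContext in the crawler options; the retireBrowserAfterPageCount line is an illustrative assumption, not something tried in the issue:

```js
const crawler = new Apify.PuppeteerCrawler({
    requestQueue,
    launchContext: {
        useChrome: true,
    },
    browserPoolOptions: {
        // Close browsers that have had no activity for two minutes.
        closeInactiveBrowserAfterSecs: 120,
        // Illustrative extra knob: recycle a browser after it has served this many pages.
        retireBrowserAfterPageCount: 50,
    },
    handlePageFunction: async ({ page }) => {
        // page handling logic goes here
    },
});
```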
Once we set:
```js
launchContext: {
    useChrome: false,
}
```
it works correctly. The problem started after upgrading to Node 16 and SDK 2.0.
System information:
- OS: Docker (apify/actor-node-puppeteer-chrome:16)
- Node.js version: 16
- Apify SDK version: 2.1.0
Nope, this was addressed only on the platform; there is no fix in the SDK code directly. We basically disallowed the creation of core dumps inside Docker.
There are no errors visible in the log, just regular page handler timeouts. And it looks like they are generated regardless of such errors, so my guess is it happens when we try to close the browser (as it has no effect on the running crawler and no errors are printed anywhere).