
Idle memory use increasing over time

See original GitHub issue

In the process of debugging some memory issues I noticed that memory usage of a scheduler+worker with no client connection was steadily increasing over time.

Command: dask-scheduler --no-bokeh & dask-worker localhost:8786 --nthreads 1 --nprocs 120 --memory-limit 3.2GB --no-bokeh & (default config.yaml)

Result: after a few idle hours, total memory usage went from about 1 GB at startup to 4 GB (as reported by the Google Cloud dashboard).

I’m aware that there are a lot of subtleties around measuring memory usage on Linux, so I’m not sure whether this is a real issue or just an artifact of the measurement process, but it seemed like a lot of memory for totally inactive processes. Curious if anyone has any thoughts about what might be happening.
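One way to tell whether the growth is in the scheduler and worker processes themselves, rather than in page cache or other system memory counted by the cloud dashboard, is to sample each process's resident set size (RSS) directly. The following is a minimal sketch, assuming psutil is installed; the PID values are placeholders for the actual dask-scheduler and dask-worker PIDs:

    # Sample the RSS of the dask processes at intervals (placeholder PIDs).
    import time
    import psutil

    PIDS = [12345, 12346]  # dask-scheduler and dask-worker PIDs (placeholders)

    def rss_mib(pid):
        """Resident set size of a process, in MiB."""
        return psutil.Process(pid).memory_info().rss / 2**20

    while True:
        print({pid: round(rss_mib(pid), 1) for pid in PIDS})
        time.sleep(600)  # sample every 10 minutes

If the per-process RSS stays flat while the dashboard number climbs, the growth is more likely an artifact of how the dashboard counts memory than a leak in the processes themselves.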

Issue Analytics

  • State: open
  • Created: 6 years ago
  • Comments: 16 (7 by maintainers)

Top GitHub Comments

1 reaction
cpaulik commented, Aug 2, 2018

We are still running into this issue, and I have not yet been able to find a minimal example.

There are two things that I noticed in our processing chain that might cause issues:

  1. We are running in Docker containers on Google Cloud. Our idle workers always show about 10% memory consumption. From https://github.com/dask/distributed/issues/2079 I understood that this should not be the case, so it might be related to some container settings.

  2. Our processing uses compiled extensions that do not release the GIL. Could that be a cause for a memory increase?

I will still try to put together a minimal example, but I thought this information might help narrow it down a little.
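For anyone attempting the minimal example mentioned above, a sketch along the following lines could be a starting point: start an idle LocalCluster and periodically report each worker's RSS via Client.run. This is an illustrative sketch, not the reporter's actual setup; the cluster size and sampling interval are arbitrary, and psutil is assumed to be installed on the workers:

    # Idle cluster whose worker RSS is sampled over time; no tasks are submitted.
    import time
    from dask.distributed import Client, LocalCluster

    def worker_rss_mib():
        # Runs inside each worker process and reports its own RSS in MiB.
        import psutil
        return psutil.Process().memory_info().rss / 2**20

    if __name__ == "__main__":
        cluster = LocalCluster(n_workers=4, threads_per_worker=1, processes=True)
        client = Client(cluster)
        try:
            for _ in range(24):                        # observe for a few hours
                print(client.run(worker_rss_mib))      # {worker address: RSS in MiB}
                time.sleep(600)                        # sample every 10 minutes
        finally:
            client.close()
            cluster.close()

If the per-worker numbers grow on a plain cluster like this, the leak reproduces without Docker or GIL-holding extensions; if they stay flat, those two factors become more suspect.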

1 reaction
ameetshah1983 commented, Apr 20, 2018

We are running the dask scheduler on a Windows VM, and memory utilization gradually increases until system memory usage reaches 98%. We then have to restart the scheduler, because otherwise we receive timeouts from workers trying to connect. This takes a few days; the VM has 16 GB of memory allocated.

We are currently on distributed 1.21.3 and dask 0.17.1.

Sorry, one thing to add: in our case the grid is not completely idle but does have jobs running from time to time. Please let me know if this should be listed as a separate issue in that case.
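As a stopgap while the cause is tracked down, the scheduler's own memory can be watched from a client so a restart can be planned before workers start timing out. A minimal sketch, assuming psutil is installed on the scheduler host; the address and threshold below are placeholders:

    # Watchdog that reports the scheduler's RSS; restart handling is left out.
    import time
    from dask.distributed import Client

    def scheduler_rss_mib():
        # Executed inside the scheduler process via run_on_scheduler.
        import psutil
        return psutil.Process().memory_info().rss / 2**20

    client = Client("tcp://scheduler-host:8786")  # placeholder address
    while True:
        rss = client.run_on_scheduler(scheduler_rss_mib)
        print(f"scheduler RSS: {rss:.0f} MiB")
        if rss > 14_000:  # rough threshold on a 16 GB VM; tune as needed
            print("Scheduler memory is high; plan a restart.")
        time.sleep(3600)  # check hourly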

Read more comments on GitHub.

Top Results From Across the Web

  • PC memory usage slowly increasing, even when idle: "That level of memory usage is not normal and the increasing use over time is typical of a memory leak. This is not..."
  • How to fix high RAM usage when Windows 11 is idle?: "Press Ctrl + Shift + Esc to open Task Manager; Click..."
  • High RAM usage while Idle - Microsoft Community: "Method 2. Run memory diagnostic tool: Memory diagnostic tool is a RAM test to check if there is any issues with RAM. -..."
  • Fix High RAM Memory Usage Issue on Windows 11/10 [10 ...: "1. Close Unnecessary Running Programs/Applications; 2. Disable Startup Programs; 3. Defragment Hard Drive & Adjust Best Performance; 4. Fix Disk..."
  • Gradually increasing RAM usage on windows 10?: "This sounds like a memory leak. Make sure to update all your drivers and uninstall any programs that you do not use. A..."
