Idle memory use increasing over time
See original GitHub issue

In the process of debugging some memory issues, I noticed that the memory usage of a scheduler + worker with no client connection was steadily increasing over time.
Command: dask-scheduler --no-bokeh & dask-worker localhost:8786 --nthreads 1 --nprocs 120 --memory-limit 3.2GB --no-bokeh &
(default config.yaml)
Result: after a few idle hours, total memory usage went from about 1GB at startup to 4GB (as reported by the Google Cloud dashboard).
I’m aware that there are a lot of subtleties around measuring memory usage on Linux, so I’m not sure whether this is a real issue or an artifact of the measurement process, but it seemed like a lot of memory for totally inactive processes. Curious if anyone has thoughts about what might be happening.
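One way to rule out dashboard-level measurement artifacts is to sample each process's resident set size directly from `/proc` on the VM itself. A minimal sketch (Linux only; the scheduler/worker PIDs are assumed to be known, and here we just sample our own process as a stand-in):

```python
import os
import time


def rss_kib(pid):
    """Return the current resident set size of a process in KiB (Linux /proc)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    raise RuntimeError(f"no VmRSS entry found for pid {pid}")


if __name__ == "__main__":
    # Replace os.getpid() with the dask-scheduler / dask-worker PIDs
    # and lengthen the interval to track idle growth over hours.
    for _ in range(3):
        print(rss_kib(os.getpid()), "KiB")
        time.sleep(0.1)
```

Logging this per-PID over a few idle hours would show whether the growth the Google Cloud dashboard reports is actually attributable to the scheduler/worker processes or to something else on the instance.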
Issue Analytics
- State:
- Created: 6 years ago
- Comments: 16 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
We are still running into this issue, and I have not yet been able to find a minimal example.
There are two things that I noticed in our processing chain that might cause issues:
We are running in docker containers on Google Cloud. Our idle workers always have 10% memory consumption. In https://github.com/dask/distributed/issues/2079 I understood that this should not be the case so it might be related to some container settings.
Our processing uses compiled extensions that do not release the GIL. Could that be a cause for a memory increase?
I will still try to produce a minimal example, but I thought this information might help narrow it down a little.
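Regarding the question about compiled extensions: one rough way to check whether growth comes from Python-level objects or from native allocations inside compiled extensions is to compare what `tracemalloc` accounts for against the process RSS, since `tracemalloc` only sees allocations made through Python's allocator. A generic sketch, not specific to dask:

```python
import tracemalloc

tracemalloc.start()

# Allocate some Python objects so there is something to account for.
payload = [bytes(10_000) for _ in range(100)]

traced_now, traced_peak = tracemalloc.get_traced_memory()
print(f"Python-level allocations: {traced_now / 1024:.0f} KiB")

# If the process RSS (e.g. VmRSS from /proc/<pid>/status) keeps climbing
# while this number stays flat, the growth is likely in native code --
# a compiled extension would be one candidate.
tracemalloc.stop()
```

Note that this distinguishes Python-heap growth from native-heap growth; whether the extension releases the GIL is a separate question and should not by itself cause memory to accumulate.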
We are running the dask scheduler on a Windows VM, and memory utilization gradually increases until system memory usage reaches 98%. We then have to restart the scheduler, as otherwise we receive timeouts from workers trying to connect. This takes a few days; the VM has 16GB of memory allocated.
We are currently on distributed 1.21.3 and dask 0.17.1.
Sorry, one thing to add: in our case the grid is not completely idle but does have jobs running from time to time. Please let me know if this should be filed as a separate issue in that case.