LocalCluster does not respect memory_limit keyword when it is large
See original GitHub issue.

```python
from dask.distributed import Client

client = Client(memory_limit="300 GB")
client.run(lambda dask_worker: dask_worker.memory_limit)
```

```
{'tcp://127.0.0.1:62196': 17179869184,
 'tcp://127.0.0.1:62199': 17179869184,
 'tcp://127.0.0.1:62200': 17179869184,
 'tcp://127.0.0.1:62204': 17179869184}
```
It seems to respect the keyword when it's lower than the available memory, but not when it's greater. Granted, I don't have 1.2 TB of memory on my laptop, but maybe it makes sense to allow the user to over-subscribe.
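The behavior reported above is consistent with distributed clamping the requested limit to the machine's detected total memory. A minimal stdlib-only sketch of that clamping logic (the parser, the 16 GiB total, and the function names are illustrative stand-ins, not distributed's actual internals):

```python
# Stand-in for the total system memory distributed detects (16 GiB,
# matching the 17179869184 values in the output above).
TOTAL_SYSTEM_MEMORY = 17_179_869_184

# Simplified size parser; dask.utils.parse_bytes handles many more forms.
UNITS = {"GB": 10**9, "MB": 10**6, "kB": 10**3, "B": 1}

def parse_bytes(s: str) -> int:
    """Parse a human-readable size like '300 GB' into bytes."""
    value, unit = s.split()
    return int(float(value) * UNITS[unit])

def effective_memory_limit(requested: str, total: int = TOTAL_SYSTEM_MEMORY) -> int:
    """Clamp the requested per-worker limit to total system memory,
    mirroring the silent capping reported in this issue."""
    return min(parse_bytes(requested), total)

print(effective_memory_limit("300 GB"))  # clamped to 17179869184
print(effective_memory_limit("8 GB"))    # 8000000000, respected as-is
```

With this model, any request above the machine's memory silently collapses to the same capped value, which is exactly the surprise the reporter hit.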
Issue Analytics

- State:
- Created: a year ago
- Comments: 7 (6 by maintainers)
Top GitHub Comments
I consider this expected behavior. Is there any sane use case for allowing larger values?
From a UX POV we should raise a warning if this happens such that the user knows what’s going on.
This also relates roughly to https://github.com/dask/distributed/issues/6895, which discusses making the `system.MEMORY_LIMIT` check even stricter.

Sounds like a fine outcome to me.
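The UX improvement proposed above, warning instead of silently clamping, could look roughly like this (a sketch of the idea, not actual distributed code; names are illustrative):

```python
import warnings

def clamp_memory_limit(requested: int, total: int) -> int:
    """Clamp a requested per-worker memory limit to total system memory,
    warning the user when the request exceeds what the machine has."""
    if requested > total:
        warnings.warn(
            f"memory_limit of {requested} bytes exceeds total system memory "
            f"({total} bytes); using {total} instead."
        )
        return total
    return requested
```

This keeps the existing clamping behavior but makes it visible, so a user asking for 300 GB on a 16 GiB laptop learns immediately why their workers report a smaller limit.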