SLURMCluster example in API docs is out of date
The example for SLURMCluster in the API docs uses the threads keyword, which has been removed. Using it raises the following exception:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-c817ba318976> in <module>()
      3                        walltime='00:10:00',
      4                        local_directory='$SHARED_SCRATCH/wtb2',
----> 5                        memory='10GB',processes=6,threads=4)

~/.conda/envs/hic_simulation/lib/python3.6/site-packages/dask_jobqueue/slurm.py in __init__(self, queue, project, walltime, job_cpu, job_mem, job_extra, **kwargs)
     72             job_extra = dask.config.get('jobqueue.slurm.job-extra')
     73
---> 74         super(SLURMCluster, self).__init__(**kwargs)
     75
     76         # Always ask for only one task

~/.conda/envs/hic_simulation/lib/python3.6/site-packages/dask_jobqueue/core.py in __init__(self, name, cores, memory, processes, interface, death_timeout, local_directory, extra, env_extra, walltime, threads, **kwargs)
    109         # """
    110         if threads is not None:
--> 111             raise ValueError(threads_deprecation_message)
    112
    113         if not self.scheduler_name:

ValueError: The threads keyword has been removed and the memory keyword has changed.
Please specify job size with the following keywords:
 - cores: total cores per job, across all processes
 - memory: total memory per job, across all processes
 - processes: number of processes to launch, splitting the quantities above
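
For reference, a minimal sketch of the corrected call under the current keywords. The walltime, local_directory, memory, and processes values are taken from the failing example above; cores=24 is an assumption that the old sizing (6 processes * 4 threads each) should be preserved, since cores now counts the total across all processes.

from dask_jobqueue import SLURMCluster

# Old call: processes=6, threads=4 (4 threads in each of 6 worker processes).
# New API: cores is the total per job, so 6 * 4 = 24.
cluster = SLURMCluster(walltime='00:10:00',
                       local_directory='$SHARED_SCRATCH/wtb2',
                       cores=24,        # total cores per job, across all processes
                       memory='10GB',   # total memory per job, across all processes
                       processes=6)     # worker processes per job, splitting the above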
Top GitHub Comments

That would be great! If you can have a quick look to double-check this problem does not appear in other parts of the doc (using git grep may be handy), even better!

Ok, I'll submit a PR during the sprint tomorrow.
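
For example, a quick scan for leftover uses of the removed keyword might look like the following (the docs/ path is an assumption about the repository layout):

git grep -n threads docs/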