
SLURMCluster example in API docs is out of date

See original GitHub issue

The example for SLURMCluster in the API docs uses the threads keyword, which has been removed. Using it raises the following exception:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-c817ba318976> in <module>()
      3                        walltime='00:10:00',
      4                        local_directory='$SHARED_SCRATCH/wtb2',
----> 5                        memory='10GB',processes=6,threads=4)

~/.conda/envs/hic_simulation/lib/python3.6/site-packages/dask_jobqueue/slurm.py in __init__(self, queue, project, walltime, job_cpu, job_mem, job_extra, **kwargs)
     72             job_extra = dask.config.get('jobqueue.slurm.job-extra')
     73 
---> 74         super(SLURMCluster, self).__init__(**kwargs)
     75 
     76         # Always ask for only one task

~/.conda/envs/hic_simulation/lib/python3.6/site-packages/dask_jobqueue/core.py in __init__(self, name, cores, memory, processes, interface, death_timeout, local_directory, extra, env_extra, walltime, threads, **kwargs)
    109         # """
    110         if threads is not None:
--> 111             raise ValueError(threads_deprecation_message)
    112 
    113         if not self.scheduler_name:

ValueError: The threads keyword has been removed and the memory keyword has changed.

Please specify job size with the following keywords:

-  cores: total cores per job, across all processes
-  memory: total memory per job, across all processes
-  processes: number of processes to launch, splitting the quantities above
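
For reference, here is a minimal sketch of the corrected call using the current keywords, reusing the values visible in the traceback above. The cores value is an assumption that maps the old layout of 6 processes with 4 threads each onto the new total-cores-per-job keyword:

from dask_jobqueue import SLURMCluster

# Same job shape as the failing call, expressed with the new keywords:
# 6 processes with 4 threads each -> cores=24 (total cores per job).
cluster = SLURMCluster(walltime='00:10:00',
                       local_directory='$SHARED_SCRATCH/wtb2',
                       memory='10GB',   # total memory per job, across all processes
                       processes=6,     # worker processes launched per job
                       cores=24)        # total cores per job; replaces threads=4

From there the cluster is used as usual, e.g. cluster.scale(10) to request workers and distributed.Client(cluster) to connect to them.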

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
lesteve commented, Jul 15, 2018

That would be great! If you can have a quick look to double-check this problem does not appear in other parts of the doc (using git grep may be handy) even better!

0 reactions
wtbarnes commented, Jul 15, 2018

Ok I’ll submit a PR during the sprint tomorrow

Read more comments on GitHub >

Top Results From Across the Web

Add RemoteSlurmJob to connect SLURMCluster to a remote ...
For security reasons I would think that cluster sys-admins would not allow connecting to the Slurm REST API endpoint from the outside, but...
Read more >
DASK workers with different walltimes - Stack Overflow
I am using dask-jobqueue to launch many 2-5 min jobs (using subprocess) on a small SLURM cluster. I am running several 1000s of...
Read more >
Slurm cluster fast insufficient capacity fail-over
When a job is submitted to a compute resource dynamic node and an insufficient capacity error is detected, the node is placed in...
Read more >
dask_jobqueue.SLURMCluster - Dask-Jobqueue
Launch Dask on a SLURM cluster. Parameters. queuestr. Destination queue for each worker job. Passed to #SBATCH -p option. projectstr.
Read more >
dask/dask - Gitter
I would like to give back a full file to my client (from my worker) in dask-distributed. How could I achieve this? When...
Read more >
