
No workers error when trying to scatter

See original GitHub issue

I am trying to scatter data to workers but get a TimeoutError: No workers found. Running sacct shows my workers are up, yet dask still throws this error. Here is a minimal working example that reproduces the error on my machine:

from dask_jobqueue import SLURMCluster
from dask.distributed import Client

cluster = SLURMCluster(cores=1, memory="19 GB", interface="ib0")
cluster.scale(2)
client = Client(cluster)
[future] = client.scatter([1.0], broadcast=True)

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

2 reactions
crusaderky commented, May 11, 2020

I’m facing this problem too. The proposed workaround above has a major drawback: it keeps a copy of all the data on the scheduler, which quickly balloons into gigabytes for me.

0 reactions
danpf commented, Aug 12, 2019

The way I’ve handled this is by submitting, not scattering.

future = client.submit(list, [1.0])

seems to work well enough.
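The submit-based workaround can be sketched end-to-end. Unlike `scatter()`, which needs a live worker immediately, `submit()` wraps the data in a task that simply stays queued until a worker connects. `LocalCluster` stands in for `SLURMCluster` here so the example is self-contained, and the variable names are illustrative:

```python
# Sketch of the submit-based workaround: wrap the data in a task
# instead of pushing it with scatter().
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=1, processes=False)  # stand-in for SLURMCluster
client = Client(cluster)

# list([1.0]) is computed on a worker once one is available;
# the resulting future behaves like scattered data.
data_future = client.submit(list, [1.0])

# Downstream tasks can take the future as an argument, just as with scatter.
total = client.submit(sum, data_future)
print(total.result())  # 1.0

client.close()
cluster.close()
```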
