
Spurious CancelledError on LocalCluster.close()

See original GitHub issue

The following repro produces a spurious log error from tornado/asyncio on closing the LocalCluster:

[00:36:18.413 ERROR  ] Exception in Future <Future cancelled> after timeout
Traceback (most recent call last):
  File "../lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
    future.result()
  File "../lib/python3.5/asyncio/futures.py", line 286, in result
    raise CancelledError
concurrent.futures._base.CancelledError
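
For context, the error itself is ordinary asyncio behaviour: calling result() on a future that has been cancelled raises CancelledError, and the traceback shows tornado's timeout error callback doing exactly that. A minimal sketch of the mechanism, with nothing dask-specific in it:

import asyncio

loop = asyncio.new_event_loop()
fut = loop.create_future()
fut.cancel()

try:
    # This is the same call tornado's error_callback makes in gen.py.
    fut.result()
except asyncio.CancelledError:
    print('result() on a cancelled future raises CancelledError')
loop.close()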

Here is the repro:

from joblib import parallel_backend

import distributed.joblib  # registers the 'dask.distributed' backend with joblib
from distributed import Client, LocalCluster


def run_example():
    lc = LocalCluster(n_workers=1)
    client = Client(lc)
    # Even an empty parallel_backend block is enough to trigger
    # the spurious error on close.
    with parallel_backend('dask.distributed', scheduler_host=client.scheduler.address):
        pass
    client.close()
    lc.close()


if __name__ == "__main__":
    run_example()

I couldn’t figure out how to silence this, or what exactly is wrong here. Any guidance is appreciated.

The distributed version is 1.23.1 on Python 3.5.6, with joblib 0.12.3. Relevant conda packages:

dask                      0.19.1                   py35_0  
dask-core                 0.19.1                   py35_0  
tornado                   5.1              py35h14c3975_0 
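
If the goal is just to hide the message, one workaround (not from the issue thread, only a sketch) is to raise the level of the logger the message goes through. The traceback points at tornado/gen.py, which logs via tornado's application logger, so something like this should suppress it:

import logging

# Hides the spurious message, but also any other error logged
# through tornado's application logger, so use with care.
logging.getLogger('tornado.application').setLevel(logging.CRITICAL)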

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 13 (10 by maintainers)

Top GitHub Comments

1 reaction
ericness commented, Nov 30, 2018

I was also getting the same error message. I couldn’t figure out how to stop it, so I manually closed the client before exiting and caught the exception:

from concurrent.futures import CancelledError

...

try:
    client.close()
except CancelledError:
    # Swallow the spurious error raised during shutdown.
    print('Dask distributed processing did not shut down cleanly.')
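
As a variation on this workaround (again, not from the thread): on versions of distributed where Client and LocalCluster support the context-manager protocol, letting a with statement drive the teardown keeps the close calls in one place and runs them in a fixed order even if the body raises:

from distributed import Client, LocalCluster

# The client is closed first, then the cluster, when the block exits.
with LocalCluster(n_workers=1) as cluster, Client(cluster) as client:
    ...  # submit work against `client` here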

0 reactions
somewacko commented, Jan 24, 2019

Yes, installing @danpf’s patch on my client machine seems to have fixed the issue.
