
Resource Warnings with LocalCluster

See original GitHub issue

Hi Everyone,

After starting a local cluster on my machine, as shown here:

    from dask.distributed import Client, LocalCluster

    cluster = LocalCluster()
    print(cluster)
    client = Client()
    print(client)
    cluster.close()

I got a lot of ResourceWarnings, as shown below.


ResourceWarning: unclosed file <_io.BufferedWriter name='/home/amal/PycharmProjects/dynamicparcels/dynpar/dask-worker-space/worker-4t43rjqe.dirlock'>
/usr/local/lib/python3.6/dist-packages/distributed/diskutils.py:161: ResourceWarning: unclosed file <_io.BufferedWriter name='/home/amal/PycharmProjects/dynamicparcels/dynpar/dask-worker-space/worker-4fyxnn5d.dirlock'>

<Client: scheduler='tcp://127.0.0.1:43123' processes=8 cores=8>
distributed.scheduler - INFO - Clear task state
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37009'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41811'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:38581'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42381'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41167'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:39899'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43027'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34385'
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:39323
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39323
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:46047
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46047
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:45833
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45833
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:36641
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36641
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:42175
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42175
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:33033
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33033
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:46337
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46337
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:37449
distributed.core - INFO - Removing comms to tcp://127.0.0.1:37449
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
/usr/lib/python3.6/asyncio/base_events.py:511: ResourceWarning: unclosed event loop <_UnixSelectorEventLoop running=False closed=False debug=False>

Does anyone have an idea how I can fix this, please?

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 7 (5 by maintainers)

Top GitHub Comments

1 reaction
fjetter commented, Jul 14, 2021

@jonmoore the LoopRunner should be closed when the Client is closed; see https://github.com/dask/distributed/blob/67f57bc0c880101bc3336557c4dce43d4fa5ec43/distributed/client.py#L1489-L1490. I guess that, for some reason, the client is not closed properly before the interpreter shuts down. We’ve been fixing a few stability-related things over the past months, and I would encourage you to try a more recent version to see if the problem persists.
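For anyone hitting the same warnings, here is a minimal sketch (not from the issue; the worker counts are arbitrary) of closing the client before the cluster, either via context managers or explicitly, so nothing is left for the garbage collector at interpreter shutdown:

    from dask.distributed import Client, LocalCluster

    # Context managers close the client first, then the cluster, on exit.
    with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
        with Client(cluster) as client:
            print(client)
            # ... submit work here ...

    # Equivalent explicit teardown, in the same order, before the script ends.
    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)
    try:
        print(client)
    finally:
        client.close()
        cluster.close()

Closing the client before the cluster matches the teardown order referenced above: closing the Client stops its LoopRunner, and closing the cluster should also release the worker-space dirlock files mentioned in the warnings.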

Read more comments on GitHub.

