Resource Warnings with LocalCluster
Hi everyone,
After starting a local cluster on my machine as shown here:
from dask.distributed import Client, LocalCluster

cluster = LocalCluster()
print(cluster)
client = Client()
print(client)
cluster.close()
I get many ResourceWarnings, as shown below:
ResourceWarning:
unclosed file <_io.BufferedWriter name='/home/amal/PycharmProjects/dynamicparcels/dynpar/dask-worker-space/worker-4t43rjqe.dirlock'>
/usr/local/lib/python3.6/dist-packages/distributed/diskutils.py:161: ResourceWarning:
unclosed file <_io.BufferedWriter name='/home/amal/PycharmProjects/dynamicparcels/dynpar/dask-worker-space/worker-4fyxnn5d.dirlock'>
<Client: scheduler='tcp://127.0.0.1:43123' processes=8 cores=8>
distributed.scheduler - INFO - Clear task state
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:37009'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41811'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:38581'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:42381'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:41167'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:39899'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:43027'
distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34385'
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:39323
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39323
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:46047
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46047
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:45833
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45833
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:36641
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36641
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:42175
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42175
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:33033
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33033
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:46337
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46337
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:37449
distributed.core - INFO - Removing comms to tcp://127.0.0.1:37449
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
/usr/lib/python3.6/asyncio/base_events.py:511: ResourceWarning:
unclosed event loop <_UnixSelectorEventLoop running=False closed=False debug=False>
Does anyone have an idea how I can fix this, please?
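These warnings come from Python's ResourceWarning machinery rather than from an error in the computation itself. Independent of Dask, the stdlib warnings module can narrow or silence the category; a minimal sketch of the filtering mechanics (this is a stopgap, not a fix for the underlying unclosed handles):

```python
import warnings

# ResourceWarning is ignored by default in CPython; running with
# `python -W always::ResourceWarning` (or setting PYTHONWARNINGS) is
# typically what makes messages like the ones above visible.

# Stopgap: suppress just this category without touching other warnings.
warnings.filterwarnings("ignore", category=ResourceWarning)

# The category can still be captured deliberately when needed:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # re-enable everything inside the block
    warnings.warn("unclosed file", ResourceWarning)

print(len(caught))                            # 1
print(caught[0].category is ResourceWarning)  # True
```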
Issue Analytics
- Created 5 years ago
- Comments: 7 (5 by maintainers)
Top GitHub Comments
@jonmoore the LoopRunner should be closed if the Client is closed; see https://github.com/dask/distributed/blob/67f57bc0c880101bc3336557c4dce43d4fa5ec43/distributed/client.py#L1489-L1490. I guess that, for some reason, the client is not closed properly before the interpreter shuts down. We've been fixing a few stability-related things over the past months, and I would encourage you to try a more recent version to see if the problem persists.

See https://github.com/dask/distributed/pull/6122 and https://github.com/mwilliamson/locket.py/issues/14
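Following the maintainer's point that the client must be closed before the interpreter shuts down, one common pattern is to use Client and LocalCluster as context managers so the close calls are guaranteed to run in the right order. A minimal sketch (the n_workers and threads_per_worker values are arbitrary choices for illustration):

```python
from dask.distributed import Client, LocalCluster

# Closing the client and cluster before the interpreter exits is what
# avoids the unclosed-file / unclosed-event-loop ResourceWarnings.
# Nested context managers guarantee client.close() runs before
# cluster.close():
with LocalCluster(n_workers=2, threads_per_worker=1) as cluster:
    with Client(cluster) as client:
        print(client)  # scheduler address, worker/core counts

# Both client and cluster are now closed.
```

The same effect can be achieved without context managers by calling client.close() and then cluster.close() explicitly before the script ends.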