dask.distributed zarr reads slow down with additional workers
Problem description
I have a zarr store on disk with around 50 arrays covering a wide variety of dtypes, all stored with the blosc compressor. While trying to parallelize some calculations with dask, I noticed that adding more workers to a local dask cluster slows down the _decode_chunk calls. The documentation mentions that blosc multithreading does not work well with multiprocessing, so I tried setting the BLOSC_NOBLOCK environment variable, but it made no difference. Is this expected behavior with blosc?
import dask
import zarr

def test():
    zs = zarr.open('/tmp/zarr')
    for n, a in zs.arrays():
        a[:, :100].shape

delayed_res = []
for i in range(1000):
    delayed_res.append(dask.delayed(test)())
dask.compute(*delayed_res)
Running this with a single worker, each test call takes around 3.5 seconds; if I increase the worker count to 10, each call takes 4.5 seconds.
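For context, here is a minimal sketch of how such a timing run might be set up. The original snippet does not show the cluster creation, so the LocalCluster parameters below and the '/tmp/zarr' path are assumptions, not code from the report:

import time

import dask
import zarr
from dask.distributed import Client, LocalCluster

def test():
    zs = zarr.open('/tmp/zarr')
    for n, a in zs.arrays():
        a[:, :100].shape  # the slice forces chunks to be read and decoded

if __name__ == '__main__':
    # Vary n_workers (e.g. 1 vs 10) to compare per-call times.
    cluster = LocalCluster(n_workers=10, threads_per_worker=1)
    client = Client(cluster)

    delayed_res = [dask.delayed(test)() for _ in range(1000)]
    start = time.time()
    dask.compute(*delayed_res)
    print(f'{time.time() - start:.1f}s total for {len(delayed_res)} calls')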
Version and installation information
- zarr version: 2.2.0
- numcodecs version: 0.5.5
- Python 3.6.6
- Linux
- zarr installed using conda
FWIW, in my experience I get the best CPU utilisation when blosc uses one thread internally and I max out the number of dask workers (= CPU count).
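For anyone trying that configuration, here is a rough sketch of what it could look like. The numcodecs.blosc settings, the disable_blosc_threads helper, and the cluster parameters are my own reading of the suggestion, not code from this thread:

import multiprocessing

import dask
import numcodecs.blosc
import zarr
from dask.distributed import Client, LocalCluster

def disable_blosc_threads():
    # Pin blosc to a single internal thread inside this worker process.
    numcodecs.blosc.use_threads = False
    numcodecs.blosc.set_nthreads(1)

def test():
    zs = zarr.open('/tmp/zarr')
    for name, arr in zs.arrays():
        arr[:, :100].shape

if __name__ == '__main__':
    # One dask worker per CPU, one thread each.
    client = Client(LocalCluster(n_workers=multiprocessing.cpu_count(),
                                 threads_per_worker=1))
    # Apply the blosc setting in every worker process, not just the parent.
    client.run(disable_blosc_threads)

    dask.compute(*[dask.delayed(test)() for _ in range(1000)])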
Thanks. Let us know if other issues crop up.