
dask.distributed zarr reads slow down with additional workers

See original GitHub issue

Problem description

I have a zarr store with around 50 arrays on disk, covering a wide variety of dtypes, all stored with the blosc compressor. While trying to parallelize some calculations with dask, I noticed that adding more workers to a local dask cluster slows down the _decode_chunk calls. The documentation mentions that blosc multithreading does not work well with multiprocessing, so I tried setting the BLOSC_NOBLOCK environment variable, but it made no difference. Is this expected behavior with blosc?

import dask
import zarr

def test():
    # Open the store and touch a slice of every array to force chunk decoding.
    zs = zarr.open('/tmp/zarr')
    for name, a in zs.arrays():
        a[:, :100].shape

delayed_res = []
for i in range(1000):
    delayed_res.append(dask.delayed(test)())

dask.compute(*delayed_res)

Running this with a single worker, each test call takes around 3.5 seconds; if I increase the worker count to 10, each call takes 4.5 seconds.
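One way to check whether blosc's internal threading is the culprit is to turn off blosc's own thread pool so that all parallelism comes from the dask workers, which is what the zarr docs recommend for multi-process use. A minimal sketch, assuming the same '/tmp/zarr' store and delayed loop as above and that the numcodecs blosc switches shown below are available in the installed version:

from numcodecs import blosc

# Put blosc into its contextual (non-threaded) mode and cap it at a single
# internal thread, so dask workers don't contend with blosc's thread pool.
blosc.use_threads = False
blosc.set_nthreads(1)

Re-running the loop above with these settings, once with one worker and once with ten, should show whether the per-call slowdown tracks blosc's threading or something else, such as shared disk bandwidth.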

Version and installation information

  • zarr version: 2.2.0
  • numcodecs version: 0.5.5
  • Python 3.6.6
  • Linux
  • zarr installed using conda

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11 (7 by maintainers)

Top GitHub Comments

1 reaction
alimanfoo commented, Nov 20, 2018

FWIW in my experience I get best CPU utilisation when blosc is using 1 thread internally and I max out on number of dask workers (= cpu count).
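For reference, a minimal sketch of that configuration on a local machine; the cluster arguments are standard dask.distributed LocalCluster options, and the blosc switch mirrors the snippet earlier on this page:

import os
from dask.distributed import Client, LocalCluster
from numcodecs import blosc

# One single-threaded dask worker per CPU core, with blosc's internal
# threading disabled so each decompress call runs in the calling worker thread.
blosc.use_threads = False
cluster = LocalCluster(n_workers=os.cpu_count(), threads_per_worker=1)
client = Client(cluster)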

0 reactions
jakirkham commented, Nov 26, 2018

Thanks. Let us know if other issues crop up.


Top Results From Across the Web

  • Dask Best Practices - Dask documentation: This is a short overview of Dask best practices. This document specifically focuses on best practices that are shared among all of the...
  • Managing memory use for a simple vstack/rechunk/store ...: The problem comes when the distributed scheduler looks around and says "I have free workers, and all these waiting tasks. I'll start on...
  • Reducing memory usage in Dask workloads by 80% - Coiled: Workers might exceed their memory limits and crash, losing progress and requiring tasks to be recomputed on a pool of workers that were...
  • How to store data from dask.distributed on disk?: Mainly my problem is saving data from distributed computations back to an in-memory Zarr array while using Dask caching and graph ...
  • Using da.delayed for Zarr processing: memory overhead ...: We are working on using dask for image processing of OME-Zarr files. ... slower running code and much more memory-inefficient processing.
