I think there are some places where zarr would benefit immensely from some async capabilities when reading and writing data. I will try to illustrate this with the simplest example I can.

Let’s consider a zarr array stored in a public S3 bucket, which we can read with fsspec’s HTTPFileSystem interface (no special S3 API needed, just regular http calls).

import zarr
import fsspec

url_base = 'https://mur-sst.s3.us-west-2.amazonaws.com/zarr/time'
mapper = fsspec.get_mapper(url_base)
za = zarr.open(mapper)
za.info

[image: za.info output, showing a 1D array of shape (6443,) stored in chunks of (5,)]

Note that this is a highly sub-optimal choice of chunks. The 1D array of shape (6443,) is stored in chunks of only (5,) items, resulting in 1,289 tiny chunks. Reading this data takes forever: over 5 minutes.

%prun tdata = za[:]
         20312192 function calls (20310903 primitive calls) in 342.624 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1289  139.941    0.109  140.077    0.109 {built-in method _openssl.SSL_do_handshake}
     2578   99.914    0.039   99.914    0.039 {built-in method _openssl.SSL_read}
     1289   68.375    0.053   68.375    0.053 {method 'connect' of '_socket.socket' objects}
     1289    9.252    0.007    9.252    0.007 {built-in method _openssl.SSL_CTX_load_verify_locations}
     1289    7.857    0.006    7.868    0.006 {built-in method _socket.getaddrinfo}
     1289    1.619    0.001    1.828    0.001 connectionpool.py:455(close)
   930658    0.980    0.000    2.103    0.000 os.py:674(__getitem__)
...

I believe fsspec is introducing major overhead by not reusing a connection pool. But regardless, zarr iterates synchronously over each chunk to load the data:

https://github.com/zarr-developers/zarr-python/blob/994f2449b84be544c9dfac3e23a15be3f5478b71/zarr/core.py#L1023-L1028
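
For context, the linked code is essentially a loop of this shape (a paraphrased sketch, not the actual zarr/core.py source): each chunk key is fetched from the store one at a time, so one full network round-trip accumulates per chunk.

# Paraphrased sketch of zarr's synchronous chunk iteration (illustrative
# only, not the real zarr internals): store[key] blocks on a complete
# network round-trip for every chunk before the next one starts.
def load_all_chunks(store, chunk_keys):
    chunks = []
    for key in chunk_keys:        # one blocking request per chunk
        chunks.append(store[key])
    return chunks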

As a lower bound on how fast this approach could be, we can bypass zarr and fsspec entirely and fetch all the chunks with requests:

import requests

# Reuse a single session so TCP/TLS connections are pooled across requests.
s = requests.Session()

def get_chunk_http(n):
    # Each zarr chunk is a separate object named by its index,
    # e.g. <url_base>/0, <url_base>/1, ...
    r = s.get(url_base + f'/{n}')
    r.raise_for_status()
    return r.content

# Fetch every full chunk sequentially (6443 // 5 = 1288 requests).
%prun all_data = [get_chunk_http(n) for n in range(za.shape[0] // za.chunks[0])]
         12550435 function calls (12549147 primitive calls) in 98.508 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     2576   87.798    0.034   87.798    0.034 {built-in method _openssl.SSL_read}
       13    1.436    0.110    1.437    0.111 {built-in method _openssl.SSL_do_handshake}
   929936    1.042    0.000    2.224    0.000 os.py:674(__getitem__)

As expected, reusing a connection pool sped things up (~3.5×), but it still takes nearly 100 s to read the array: 98.5 s over 1288 requests is roughly 76 ms per chunk, so per-request latency, not bandwidth, dominates.

Finally, we can try the same thing with asyncio, issuing all the requests concurrently:

import asyncio
import aiohttp
import time

async def get_chunk_http_async(n, session):
    url = url_base + f'/{n}'
    async with session.get(url) as r:
        r.raise_for_status()
        data = await r.read()
    return data

# Top-level await like this works in IPython/Jupyter; see below for a
# plain-script equivalent.
async with aiohttp.ClientSession() as session:
    tic = time.time()
    all_data = await asyncio.gather(*[get_chunk_http_async(n, session)
                                      for n in range(za.shape[0] // za.chunks[0])])
    print(f"{time.time() - tic} seconds")

# > 1.7969944477081299 seconds
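
For reference, here is a minimal way to run the same logic outside a notebook (a sketch reusing the get_chunk_http_async coroutine above; the chunk count of 1288 is za.shape[0] // za.chunks[0] from the earlier cells):

import asyncio
import aiohttp

async def main():
    # One shared session, all chunk requests issued concurrently.
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(
            *[get_chunk_http_async(n, session) for n in range(1288)]
        )

all_data = asyncio.run(main())  # plain scripts need an explicit event loop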

This is a MAJOR speedup: over 50× faster than the sequential requests version (98.5 s → 1.8 s), and roughly 190× faster than the original zarr read.

I am aware that dask could possibly help me here. But this isn’t big data, and I don’t want to use dask. I want zarr to support asyncio natively.

I am quite new to async programming and have no idea how hard or complicated this would be to implement. But based on this experiment, I am quite sure there are major performance benefits to be had, particularly when using zarr with remote storage protocols.

Thoughts?

cc @cgentemann

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 9
  • Comments: 93 (76 by maintainers)

Top GitHub Comments

10 reactions
rabernat commented, Nov 3, 2020

I believe we can close this issue now that #630 and #606 have both been merged!

To close the loop, I did a simple benchmarking exercise from Pangeo Cloud on Google Cloud.

import zarr
import os
import fsspec
import gcsfs
import numpy as np
import uuid

# get a temporary place to write to GCS
uid = uuid.uuid1()
path = os.environ['PANGEO_SCRATCH'] + str(uid) + '.zarr'
gcs = gcsfs.GCSFileSystem()
mapper = gcs.get_mapper(path)

# create a zarr array with many small chunks
shape = (100, 1000)
chunks = (1, 1000)
arr = zarr.create(shape, chunks=chunks, store=mapper)

# time write
%time arr[:] = 9

# time read
%time _ = arr[:]

My test environment included the following versions:

  • zarr 2.5.1.dev30
  • gcsfs 0.7.1+4.g77b5993
  • fsspec 0.8.4+42.g67b2e6f

My old (pre-async) environment was

  • zarr 2.4.0
  • gcsfs 0.7.1

Here is a comparison of the speeds:

                   write     read
  old              5.65 s    7.27 s
  new (w/ async)   1.32 s    245 ms

This is a fantastic improvement, particularly for reading (nearly 30× faster; writes are over 4× faster)!

👏 👏 👏 for all the effort by @martindurant and the other devs who helped make this happen!

7 reactions
martindurant commented, Jun 18, 2020

Using https://github.com/intake/filesystem_spec/pull/310, I currently find:

import fsspec
urls = ['https://mur-sst.s3.us-west-2.amazonaws.com/zarr/time/{}'.format(i) for i in range(1289)]
fs = fsspec.filesystem('http')
%time out2 = fs.mcat(urls)
# 3.15s

versus on master:

import fsspec
urls = ['https://mur-sst.s3.us-west-2.amazonaws.com/zarr/time/{}'.format(i) for i in range(1289)]
fs = fsspec.filesystem('http')
%time out = {u: fs.cat(u) for u in urls}
# 2m49s

The plan is to have a multiget method for the mapper, or perhaps allow mapper[[key1, key2, ...]]… and to plumb this into the other backends. Where the backend doesn’t support it, getting multiple files will fall back to sequential operation.
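
As a rough illustration of that fallback idea (hypothetical names, not fsspec’s actual API), a multi-key getter might look like:

# Hypothetical sketch of the planned behaviour: use a concurrent
# multi-fetch when the filesystem offers one, otherwise fall back to a
# sequential loop. `mcat` refers to the method from PR #310 above.
def get_many(fs, urls):
    if hasattr(fs, "mcat"):
        return fs.mcat(urls)             # concurrent fetch of all URLs
    return {u: fs.cat(u) for u in urls}  # sequential fallback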
