Proof of concept: CloudFilesStore
We currently rely 100% on fsspec and its implementations for accessing cloud storage (s3fs, gcsfs, adlfs). Cloud storage is complicated, and for debugging purposes, it could be useful to have an alternative. Since I met @william-silversmith a few years ago, I have been curious about CloudFiles:
https://github.com/seung-lab/cloud-files/
CloudFiles was developed to access files from object storage without ever touching disk. The goal was to reliably and rapidly access a petabyte of image data broken down into tens to hundreds of millions of files being accessed in parallel across thousands of cores. The predecessor of CloudFiles, CloudVolume.Storage, the core of which is retained here, has been used to process dozens of images, many of which were in the hundreds of terabyte range. Storage has reliably read and written tens of billions of files to date.
Highlights
- Fast file access with transparent threading and optionally multi-process.
- Google Cloud Storage, Amazon S3, local filesystems, and arbitrary web servers, making hybrid or multi-cloud easy.
- Robust to flaky network connections. Uses exponential random window retries to avoid network collisions on a large cluster.
- Validates md5 for GCS and S3.
- gzip, brotli, and zstd compression.
- Supports HTTP Range reads.
- Supports green threads, which are important for achieving maximum performance on virtualized servers.
- High efficiency transfers that avoid compression/decompression cycles.
- High speed gzip decompression using libdeflate (compared with zlib).
- Bundled CLI tool.
- Accepts iterator and generator input.
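For reference, here is a minimal sketch of the CloudFiles calls the store below relies on. The bucket path is hypothetical; any CloudFiles-supported URL (e.g. gs://, s3://, file://) should work:

```python
from cloudfiles import CloudFiles

# Hypothetical path; point this at any bucket/prefix you control.
cf = CloudFiles('gs://my-bucket/my-dataset')

cf.put('hello.txt', b'hello world')   # upload a single object
print(cf.get('hello.txt'))            # download it back -> b'hello world'
print(cf.exists('hello.txt'))         # True
print(list(cf.list()))                # iterate keys under the prefix
cf.delete('hello.txt')                # remove the object
```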
Today I coded up a quick CloudFiles-based store for Zarr:
```python
from cloudfiles import CloudFiles


class CloudFilesMapper:
    """Minimal MutableMapping-style Zarr store backed by CloudFiles."""

    def __init__(self, path, **kwargs):
        self.cf = CloudFiles(path, **kwargs)

    def clear(self):
        self.cf.delete(self.cf.list())

    def getitems(self, keys, on_error="none"):
        # raw=True returns the stored bytes as-is, skipping a decompression pass
        return {item['path']: item['content'] for item in self.cf.get(keys, raw=True)}

    def setitems(self, values_dict):
        self.cf.puts([(k, v) for k, v in values_dict.items()])

    def delitems(self, keys):
        self.cf.delete(keys)

    def __getitem__(self, key):
        return self.cf.get(key)

    def __setitem__(self, key, value):
        self.cf.put(key, value)

    def __iter__(self):
        yield from self.cf.list()

    def __len__(self):
        raise NotImplementedError

    def __delitem__(self, key):
        self.cf.delete(key)

    def __contains__(self, key):
        return self.cf.exists(key)

    def listdir(self, key):
        # str.lstrip strips a character set, not a prefix, so slice instead
        for item in self.cf.list(key):
            yield item[len(key):].lstrip('/')

    def rmdir(self, prefix):
        self.cf.delete(self.cf.list(prefix=prefix))
```
In my test with GCS, it works just fine with Zarr, Xarray, and Dask: https://nbviewer.jupyter.org/gist/rabernat/dde8b835bb7ef0590b6bf4034d5e0b2f
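For concreteness, here is roughly how the mapper plugs into Zarr. The bucket path is hypothetical, and this assumes the store root is a Zarr array rather than a group:

```python
import zarr

# CloudFilesMapper is the class defined above; the path is hypothetical.
store = CloudFilesMapper('gs://my-bucket/my-array.zarr')

# zarr.open accepts any MutableMapping-like store.
z = zarr.open(store, mode='r')
print(z.shape, z.dtype)
```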
Distributed read performance was about 50% slower than gcsfs, but my benchmark is probably biased.
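For anyone who wants to try their own comparison, a rough sketch of this kind of benchmark (not the one linked above; the path is hypothetical and timings are machine-dependent):

```python
import time

import fsspec
import zarr

url = 'gs://my-bucket/my-array.zarr'  # hypothetical dataset

for name, store in [
    ('gcsfs/fsspec', fsspec.get_mapper(url)),
    ('CloudFiles', CloudFilesMapper(url)),  # the class defined above
]:
    t0 = time.perf_counter()
    z = zarr.open(store, mode='r')
    _ = z[:]  # read the full array through the store
    print(f'{name}: {time.perf_counter() - t0:.2f}s')
```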
It might be useful to have the option to switch between the fsspec-based stores and this one. If folks are interested, we could think about adding this to zarr-python as some kind of optional alternative to fsspec.
Note that fsspec uses asyncio to fetch multiple chunks concurrently, so performance can improve greatly if each dask partition is set larger than the zarr chunk size.
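To illustrate that point with hypothetical shapes: if the zarr array is stored with chunks of (10, 1000), asking dask for (100, 1000) partitions means each dask task requests ten zarr chunks in a single getitems call, which fsspec can fetch concurrently:

```python
import dask.array as da

# Hypothetical array; stored zarr chunks assumed to be (10, 1000).
# Each (100, 1000) dask partition spans ten zarr chunks, so every
# task issues one getitems call that fsspec services with asyncio.
arr = da.from_zarr('gs://my-bucket/my-array.zarr', chunks=(100, 1000))
print(arr.mean().compute())
```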
Somewhat related discussion about using multiple `ThreadPoolExecutor`s per Dask Worker from earlier today here: https://github.com/dask/distributed/issues/4655#issuecomment-854881294