Performance issues with adlfs mappers

See original GitHub issue

What happened:

I’m noticing that the write performance of adlfs, specifically when using xarray/dask to write to zarr, is much slower than that of ABSStore (built into Zarr). The gap widens further with larger datasets (more chunks). I have a hunch that this is an async issue, but I’m not sure how to test that theory (a rough probe is sketched after the example below).

What you expected to happen:

I expected adlfs performance to be on par with, or better than, the Zarr implementation.

Minimal Complete Verifiable Example:

from adlfs import AzureBlobFileSystem
from zarr.storage import ABSStore
import xarray as xr

# sample data
ds = xr.tutorial.load_dataset('rasm').chunk({'x': 140, 'y': 105, 'time': 18})

# zarr mapper
store1 = ABSStore(container='carbonplan-scratch', prefix='test/store1', account_name='carbonplan', account_key=...)

# adlfs mapper
fs = AzureBlobFileSystem(account_name='carbonplan', account_key=...)
store2 = fs.get_mapper('carbonplan-scratch/test/store2')

%%timeit -n 5 -r 5
ds.to_zarr(store1, mode='w')
# 1.02 s ± 79.6 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)

%%timeit -n 5 -r 5
ds.to_zarr(store2, mode='w')
# 9.1 s ± 1.98 s per loop (mean ± std. dev. of 5 runs, 5 loops each)
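A rough way to probe the async hunch above (a hypothetical check, not something benchmarked in this report): time a handful of sequential small writes through each mapper. If the adlfs wall time is roughly the number of Zarr chunks times this per-call latency, the chunk uploads are being issued one blocking request at a time. The per_call_latency helper below is illustrative:

import time

def per_call_latency(store, n=20, payload=b'x' * 1024):
    """Average seconds per sequential 1 KiB write through a mapper."""
    start = time.perf_counter()
    for i in range(n):
        store[f'probe/{i}'] = payload
    return (time.perf_counter() - start) / n

# If per_call_latency(store2) * <number of chunks> lands near the 9.1 s
# above, store2 is paying one serial round trip per chunk.
print('ABSStore:', per_call_latency(store1))
print('adlfs:   ', per_call_latency(store2))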

Anything else we need to know?:

The example above was tested on adlfs@master and https://github.com/zarr-developers/zarr-python/pull/620.

Environment:

  • Dask version: 2.30.0
  • adlfs version: v0.5.9
  • xarray version: 0.16.2
  • zarr version: 2.2.0a2.dev650
  • Python version: 3.7.9
  • Operating System: Linux
  • Install method (conda, pip, source): conda/pip

cc @TomAugspurger

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 9

Top GitHub Comments

hayesgb commented, Jan 11, 2021 (3 reactions)

@jhamman – I have a new branch that implements asynchronous read and write. Timing it, I see the following:

%%timeit -n 5 -r 5

with fs.open("data/root/a1/file.txt", "w") as f:
    f.write("0123456789")

with fs.open("data/root/a1/file.txt") as f:
    d = f.read()
assert d == b"0123456789"

# 519 ms ± 30 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)
# 479 ms ± 10.5 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)
# 505 ms ± 16.8 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)

vs adlfs==0.5.9

# 913 ms ± 34.6 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)
# 918 ms ± 14.7 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)
# 915 ms ± 11 ms per loop (mean ± std. dev. of 5 runs, 5 loops each)

I’ve modified pipe_file (which is called by the fsspec mapper) to upload directly to Azure asynchronously. I’d be curious what you’re seeing with this branch.
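The branch itself isn’t reproduced here, but the shape of the change can be sketched with the Azure SDK’s asyncio client: fan the per-chunk uploads out with asyncio.gather instead of one blocking round trip per chunk. pipe_files, conn_str, and the blob names below are illustrative placeholders, not adlfs’s actual implementation:

import asyncio
from azure.storage.blob.aio import ContainerClient

async def pipe_files(conn_str, container, blobs):
    """Upload many blobs concurrently; blobs maps blob name -> bytes."""
    async with ContainerClient.from_connection_string(conn_str, container) as client:
        await asyncio.gather(
            *(client.upload_blob(name, data, overwrite=True)
              for name, data in blobs.items())
        )

# e.g. push all chunk payloads in one gather:
# asyncio.run(pipe_files(conn_str, 'carbonplan-scratch',
#                        {'test/store2/Tair/0.0.0': b'...'}))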

jhamman commented, Jan 14, 2021 (0 reactions)

@hayesgb - happy to see this closed now. I agree there may be a few bits of further tuning to do, but let’s get this released and address them down the road. Thanks again for your help on this.
