
Disabling loading consolidated Parquet `_metadata`

See original GitHub issue

For large Parquet datasets, loading the consolidated `_metadata` file, especially when it's a remote file, can be quite slow. We should consider adding a flag to optionally disable reading `_metadata`. Initially I thought setting `gather_statistics=False` would do this, but that turned out not to be the case (at least for the pyarrow-dataset and fastparquet engines).

cc @rjzamora in case you have thoughts on this topic
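
As a rough sketch of the slow path described above (the bucket path and storage options are hypothetical; `gather_statistics` is the existing keyword that, at the time of this issue, did not skip the `_metadata` file):

```python
import dask.dataframe as dd

# Hypothetical remote dataset that ships a consolidated _metadata file.
path = "s3://my-bucket/large-dataset/"

# Even with gather_statistics=False, the pyarrow and fastparquet engines
# still download and parse the consolidated _metadata file, which can be
# slow when the dataset is large and remote.
df = dd.read_parquet(
    path,
    engine="pyarrow",
    gather_statistics=False,
    storage_options={"anon": True},  # placeholder storage options
)
```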

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 10 (9 by maintainers)

Top GitHub Comments

1 reaction
jorisvandenbossche commented, Aug 17, 2021

@iameskild that doesn't seem directly related. The error comes from parsing the `_metadata` file, so an option to not use that file (what this issue is about) would also help in your case as a workaround. But the fact that you get an error reading this file at all points to a bug with its own cause (for which I would recommend opening a separate issue).

0 reactions
jrbourbeau commented, Aug 24, 2021

Closing this issue, as an `ignore_metadata_file` option was added in https://github.com/dask/dask/pull/8034, and @rjzamora opened https://github.com/dask/dask/issues/8058 to discuss improving metadata processing.
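
For reference, a minimal sketch of the resulting option (the path is a placeholder; `ignore_metadata_file` is available in Dask versions that include the pull request above):

```python
import dask.dataframe as dd

# Skip the consolidated _metadata file entirely; the dataset schema is
# instead taken from the individual data files.
df = dd.read_parquet(
    "s3://my-bucket/large-dataset/",  # placeholder remote path
    engine="pyarrow",
    ignore_metadata_file=True,
)
```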


Top Results From Across the Web

Optimizing Access to Parquet Data with fsspec
This post details how the filesystem specification's new parquet module provides a format-aware byte-caching optimization (see the sketch after these results).
How to handle changing parquet schema in Apache Spark
When I disabled writing metadata, Spark was said to infer the whole schema from the first file within the given Parquet path and...
fastparquet Documentation - Read the Docs
Fastparquet will automatically use metadata information to load such columns as categorical if the data was written by fastparquet/pyarrow.
Source code for dask.dataframe.io.parquet.core
By default will be inferred from the pandas parquet file metadata, if present. ... will load categories automatically for data written by dask/fastparquet, ...
Parquet file merging or other optimisation tips
I am loading a set of parquet files using: df = sqlContext. ... a feature to improve metadata caching in parquet specifically...
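
The fsspec result above describes a format-aware caching layer; a minimal sketch of how it is typically combined with pandas, assuming a hypothetical remote file and an fsspec version new enough to ship `fsspec.parquet`:

```python
import fsspec.parquet
import pandas as pd

# open_parquet_file pre-fetches only the byte ranges needed for the
# requested columns instead of streaming the whole file.
with fsspec.parquet.open_parquet_file(
    "s3://my-bucket/large-dataset/part.0.parquet",  # hypothetical path
    columns=["x", "y"],
    engine="pyarrow",
) as f:
    df = pd.read_parquet(f, columns=["x", "y"])
```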
