Mismatch between parquet statistics and parts silently drops data
What happened:
Hi,
I’ve had some difficulty reading in (older) parquet data files, due to Dask not finding statistics for about 2/3 of the files.
I’m running the following simple code:
import dask.dataframe

df = dask.dataframe.read_parquet(urlpath, engine='fastparquet').persist()
Here urlpath is a glob path that resolves to 5 folders, each containing a _metadata file, a _common_metadata file, and 17 parquet files (one of the folders has 16). The total number of partitions should therefore be (17*4)+16 = 84, but df.npartitions returns 36; the remaining files are not included in df. After tracing the problem through the code, I ended up putting a print statement here: https://github.com/dask/dask/blob/7446308083e2f284229a11e9b3e330aa9e73cf96/dask/dataframe/io/parquet/core.py#L347 which gives me:
len(meta)=0, len(statistics)=36, len(parts)=84, len(index)=15
Due to the zip() on these lines: https://github.com/dask/dask/blob/7446308083e2f284229a11e9b3e330aa9e73cf96/dask/dataframe/io/parquet/core.py#L1143-L1150 only the first 36 of the parts are included in df.
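To illustrate the mechanism (the variable names below are stand-ins, not Dask’s actual internals): zip() stops at its shortest input, so every part beyond the available statistics is dropped without any error.

parts = [f"part-{i}" for i in range(84)]   # 84 row-group parts were discovered
statistics = [{"num-rows": 1000}] * 36     # but statistics exist for only 36 of them

# zip() truncates to the shortest iterable, so 48 parts silently vanish
kept = list(zip(parts, statistics))
assert len(kept) == 36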
What you expected to happen:
It is quite probable that the root problem occurred back when this data was written (although I have no idea how to investigate that further). I can also circumvent the problem by passing gather_statistics=False. However, I can’t see how cases where len(statistics) != len(parts) could return reliable results, and would at least expect a warning here. Does anyone have pointers on how to further investigate and/or improve Dask’s handling of this (exceptional?) case? Or an explanation of why the current behavior is desired?
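For reference, the workaround looks like this (same call as above, with statistics gathering disabled):

df = dask.dataframe.read_parquet(urlpath, engine='fastparquet', gather_statistics=False).persist()

One way to investigate where the statistics went missing might be to inspect each file directly with fastparquet; a rough sketch, assuming ParquetFile.statistics behaves as in fastparquet 0.7.x and that the glob below matches the actual (hypothetical) file layout:

import glob
import fastparquet

for path in sorted(glob.glob("root_dir/*/part.*.parquet")):  # hypothetical layout
    pf = fastparquet.ParquetFile(path)
    # statistics holds per-row-group min/max/null_count per column;
    # files written without statistics should show up as empty/None entries here
    print(path, pf.statistics)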
Minimal Complete Verifiable Example:
Not possible without the data in question.
Anything else we need to know?:
/
Environment:
- Dask version: 2021.11.1
- Fastparquet version: 0.7.1 (for reading; unclear which version was used for writing)
- Python version: 3.9.7
- Operating System: Ubuntu 20.04.3 LTS
- Install method (conda, pip, source): Conda
Comments: 12 (9 by maintainers)
You might try a glob pattern like “root_dir/**.parquet” (or whatever the data files are called).
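For example (root_dir and the .parquet suffix are placeholders for whatever the dataset actually looks like):

df = dask.dataframe.read_parquet("root_dir/**.parquet", engine='fastparquet').persist()

The idea is to point read_parquet at the data files directly, bypassing the _metadata file whose statistics appear to be inconsistent with the parts.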
Yes!
At a quick skim of the report, I suspect we indeed don’t test the case where only some of the data contains statistics. The statistics should be aligned with the “parts”, so that parts are only excluded when we can be sure they contain no good data.
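Something along those lines might look like this (a sketch only, reusing the variable names from the core.py snippet referenced above; not the actual fix):

import warnings

if statistics and len(statistics) != len(parts):
    # Statistics that cannot be aligned one-to-one with parts are unsafe
    # to filter on; warn and fall back to reading every part.
    warnings.warn(
        f"Found statistics for {len(statistics)} of {len(parts)} parts; "
        "ignoring statistics rather than silently dropping parts."
    )
    statistics = []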