
BUG: `DataFrame.to_parquet` does not correctly write index information if `partition_cols` are provided

See original GitHub issue
  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.


Code Sample, a copy-pastable example

import pandas as pd

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]}, index=['2000-01-01', '2010-01-02'])

data_path_with_partitions = 'with_partitions.parquet'
df.to_parquet(data_path_with_partitions, partition_cols=['partition'])
df_read_with_partitions = pd.read_parquet(data_path_with_partitions)
pd.testing.assert_frame_equal(df, df_read_with_partitions)  # <-- this fails because the index has been turned into an extra column __index_level_0__

results in an AssertionError:

>       pd.testing.assert_frame_equal(df, df_read_with_partitions)  # <-- this fails because the index has been turned into an extra column __index_level_0__
E       AssertionError: DataFrame are different
E       
E       DataFrame shape mismatch
E       [left]:  (2, 2)
E       [right]: (2, 3)

The mismatch comes from an extra column __index_level_0__ containing the index values:

   Data __index_level_0__ partition
0     1        2000-01-01         1
1     2        2010-01-02         2

On the other hand, writing without partitions works as expected:

data_path_without_partitions = 'without_partitions.parquet'
df.to_parquet(data_path_without_partitions)
df_read_without_partitions = pd.read_parquet(data_path_without_partitions)
pd.testing.assert_frame_equal(df, df_read_without_partitions)  # <-- this passes

Problem description

When serializing data to Parquet via pyarrow with the partition_cols parameter, deserializing does not correctly restore the index, even though the index has been serialized.

In the above example, df_read_with_partitions contains an extra column __index_level_0__.

Debugging into this, I believe this is a problem with the pandas integration inside pyarrow. In pyarrow/parquet.py:1723, the sub-tables for each of the partitions are generated, but the b'pandas' metadata incorrectly overwrites the index column information of the subschema.
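Until this is fixed upstream, one way to sidestep the bug is to not rely on pyarrow's index metadata at all: move the index into a regular column before writing, and promote it back after reading. A minimal sketch of that workaround, reusing the file name from the example above (note that partition columns typically come back as categoricals, so an exact assert_frame_equal may still need small adjustments):

import pandas as pd

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]}, index=['2000-01-01', '2010-01-02'])

# Store the index as an ordinary column and skip index serialization
# entirely, so the buggy metadata path is never exercised.
df.reset_index().to_parquet('with_partitions.parquet', partition_cols=['partition'], index=False)

# Promote the saved column back to the index on the way in.
df_read = pd.read_parquet('with_partitions.parquet').set_index('index')
df_read.index.name = None  # the original index was unnamed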

Expected Output

I expect read_parquet to yield the same output whether or not the data has been written with partition_cols.

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 13, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.0.4
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.0.3
setuptools : 40.8.0
Cython : None
pytest : 5.2.2
hypothesis : None
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.15.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : 0.3.2
gcsfs : None
lxml.etree : None
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.17.1
pytables : None
pytest : 5.2.2
pyxlsb : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.10
tables : None
tabulate : None
xarray : 0.14.1
xlrd : None
xlwt : None
xlsxwriter : None
numba : 0.46.0

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
TomAugspurger commented, Sep 4, 2020

Thanks. I’m watching this on the arrow JIRA. Let’s close this and we can reopen if they determine the bug to be in pandas rather than pyarrow.

0 reactions
stestoni commented, Feb 15, 2022

Facing the same issue, which still seems to be unresolved. Does this mean the data in the file are effectively lost, or is there a way to restore them?
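For what it's worth, the data are not lost: as shown in the output above, the original index values survive in the __index_level_0__ column of the round-tripped frame, so they can be promoted back to the index by hand. A minimal recovery sketch, assuming a single unnamed index as in the example at the top:

import pandas as pd

df_read = pd.read_parquet('with_partitions.parquet')

# The original index values live in the extra column written by pyarrow;
# restore them manually and clear the synthetic index name.
df_restored = df_read.set_index('__index_level_0__')
df_restored.index.name = None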


Top Results From Across the Web

python - Losing index information when using dask.dataframe ...
The issue was in the pyarrow's backend. I filed a bug report on their JIRA webpage: https://issues.apache.org/jira/browse/ARROW-7782.

pandas.DataFrame.to_parquet
Column names by which to partition the dataset. Columns are partitioned in the order they are given. Must be None if path is...

dask.dataframe.to_parquet - Dask documentation
The function should accept an integer (partition index) as input and return a string which will be used as the filename for the...

Solved: Spark 2 Can't write dataframe to parquet table
I'm trying to write a dataframe to a parquet hive table and keep getting an error saying that the table is HiveFileFormat and...

Spark SQL, DataFrames and Datasets Guide
The Dataset API is available in Scala and Java. Python does not have the support for the Dataset API. But due to Python's...
