BUG: `DataFrame.to_parquet` does not correctly write index information if `partition_cols` are provided
See original GitHub issue

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
#### Code Sample, a copy-pastable example

```python
import pandas as pd

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]}, index=['2000-01-01', '2010-01-02'])
data_path_with_partitions = 'with_partitions.parquet'
df.to_parquet(data_path_with_partitions, partition_cols=['partition'])
df_read_with_partitions = pd.read_parquet(data_path_with_partitions)
pd.testing.assert_frame_equal(df, df_read_with_partitions)  # <-- fails: the index has been turned into an extra column __index_level_0__
```
This results in an `AssertionError`:

```
>       pd.testing.assert_frame_equal(df, df_read_with_partitions)
E       AssertionError: DataFrame are different
E
E       DataFrame shape mismatch
E       [left]:  (2, 2)
E       [right]: (2, 3)
```

because of an extra column `__index_level_0__` containing the index values:

```
   Data __index_level_0__  partition
0     1        2000-01-01          1
1     2        2010-01-02          2
```
Writing without partitions, on the other hand, works as expected:

```python
data_path_without_partitions = 'without_partitions.parquet'
df.to_parquet(data_path_without_partitions)
df_read_without_partitions = pd.read_parquet(data_path_without_partitions)
pd.testing.assert_frame_equal(df, df_read_without_partitions)  # <-- this passes
```
#### Problem description

When serializing a DataFrame to Parquet via pyarrow with the `partition_cols` parameter, deserializing does not correctly restore the index, even though the index has been serialized. In the example above, `df_read_with_partitions` contains an extra column `__index_level_0__`.

Debugging into this, I believe the problem lies in the pandas integration inside pyarrow: in `pyarrow/parquet.py:1723` the sub-tables for each partition are generated, but the `b'pandas'` metadata incorrectly overwrites the index-column information of `subschema`.
#### Expected Output

I expect `read_parquet` to yield the same output whether or not the data was written with `partition_cols`.
#### Output of `pd.show_versions()`

```
INSTALLED VERSIONS
------------------
commit           : None
python           : 3.7.4.final.0
python-bits      : 64
OS               : Windows
OS-release       : 10
machine          : AMD64
processor        : Intel64 Family 6 Model 158 Stepping 13, GenuineIntel
byteorder        : little
LC_ALL           : None
LANG             : None
LOCALE           : None.None
pandas           : 1.0.4
numpy            : 1.17.3
pytz             : 2019.3
dateutil         : 2.8.1
pip              : 19.0.3
setuptools       : 40.8.0
Cython           : None
pytest           : 5.2.2
hypothesis       : None
sphinx           : 2.3.1
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : None
html5lib         : None
pymysql          : None
psycopg2         : None
jinja2           : 2.10.3
IPython          : 7.15.0
pandas_datareader: None
bs4              : None
bottleneck       : None
fastparquet      : 0.3.2
gcsfs            : None
matplotlib       : 3.1.1
numexpr          : None
odfpy            : None
openpyxl         : None
pandas_gbq       : None
pyarrow          : 0.17.1
pytables         : None
pyxlsb           : None
s3fs             : None
scipy            : 1.3.1
sqlalchemy       : 1.3.10
tables           : None
tabulate         : None
xarray           : 0.14.1
xlrd             : None
xlwt             : None
numba            : 0.46.0
```
#### Issue Analytics

- State:
- Created 3 years ago
- Comments: 7 (3 by maintainers)
#### Top GitHub Comments

> Thanks. I’m watching this on the Arrow JIRA. Let’s close this, and we can reopen if they determine the bug to be in pandas rather than pyarrow.

> Facing the same issue, which still seems to be unresolved. Does this mean the data in the file are effectively lost, or is there a way to restore them?