Python Feather Breaks on Files with More Than 268,434,943 Rows
I am trying to save large files with hundreds of millions of rows as Feather.
But when a file has more than 268,434,943 rows, the data seems to become corrupted.
Please see the example below.
I created a random dataframe with 400 million rows, df_orig. Then I wrote it to a Feather file and read it back as a dataframe, df_copy:
import numpy as np
import pandas as pd
import feather

df = pd.DataFrame(np.random.randint(0, 100, size=(400000000, 1)), columns=list('A'))
df_orig = df.reset_index(drop=False).rename(columns={"index": "base"})
df_orig['twice'] = df_orig.base * 2
df_orig['triple'] = df_orig.base * 3
feather.write_dataframe(df_orig, './test.feather')
df_copy = feather.read_dataframe('./test.feather')
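Before looking at individual rows, a direct comparison already shows the round trip is not lossless (a minimal sketch reusing df_orig and df_copy from above; on an uncorrupted file the first print should be True):

print(df_orig.equals(df_copy))                         # False once the file is corrupted
mismatch = (df_orig['base'] != df_copy['base']).values
print(np.argmax(mismatch), mismatch.sum())             # first corrupted position and total mismatch count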
Below are the results I get when I print out indices 268,434,943 and 268,434,944 from df_orig:
print(df_orig.ix[268434943,:])
print("--------------------------")
print(df_orig.ix[268434944,:])
base 268434943
A 78
twice 536869886
triple 805304829
Name: 268434943, dtype: int64
--------------------------
base 268434944
A 83
twice 536869888
triple 805304832
Name: 268434944, dtype: int64
But when I perform the same lookups on df_copy, I get the results below:
print(df_copy.ix[268434943,:])
print("--------------------------")
print(df_copy.ix[268434944,:])
base 268434943
A 78
twice 536869886
triple 805304829
Name: 268434943, dtype: int64
--------------------------
base 93
A 0
twice 0
triple 3940649673949204
Name: 268434944, dtype: int64
As you can see, the data is not identical at index 268,434,944, and this corruption continues in the subsequent rows.
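One detail that stands out (my own observation, not a confirmed diagnosis): multiplying the first corrupted index by 8 bytes per int64 value lands just under the 2**31-byte mark, which would be consistent with a 32-bit size limit somewhere in the Feather V1 write or read path:

print(268434944 * 8)  # 2147479552 -- bytes of one int64 column up to the first corrupted row
print(2 ** 31)        # 2147483648 -- the signed 32-bit boundary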
Below are the versions I am using:
Python version: 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
feather version: 0.4.0
pandas version: 0.18.1
Top GitHub Comments
I think that's a different error; I opened https://issues.apache.org/jira/browse/ARROW-3058 about at least making the error message better. The Feather format has some underlying limitations for very large data frames; these limitations can be fixed, but a prerequisite is being able to ship R bindings for Apache Arrow. That work is under way, but it will be some time off yet.
cc @hadley @romainfrancois
This should definitely be fixed by Feather V2, coming soon in Arrow 0.17.0.
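For anyone reading this after Arrow 0.17.0 shipped, below is a minimal sketch of round-tripping the same dataframe through the Feather V2 format via pyarrow (this assumes pyarrow >= 0.17.0 is installed; the test_v2.feather path is just an illustration):

import pyarrow.feather as pafeather

# Feather V2 (the default format in pyarrow >= 0.17.0) removes the V1 size
# limitations discussed in the comments above.
pafeather.write_feather(df_orig, './test_v2.feather', version=2)
df_copy_v2 = pafeather.read_feather('./test_v2.feather')
print(df_orig.equals(df_copy_v2))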