PandasCursor doesn't automatically convert int columns with NA's to floats
I'm querying a large Athena table and can successfully run a query using the code below, but it's really slow (for reasons covered in #46).
conn = pyathena.connect(**at.athena_creds)
df = pd.read_sql(sql, conn)
I would really like to take advantage of the performance boost that PandasCursor offers; however, when I run the code below, I get a ValueError.
conn = pyathena.connect(**at.athena_creds, cursor_class=PandasCursor)
cursor = conn.cursor()
df = cursor.execute(sql).as_pandas()
>>> ValueError: Integer column has NA values in column 18
Now I understand why I'm getting this ValueError. I have an int column in my Athena table that contains NA values, which pandas notoriously doesn't handle well (NaNs are floats in pandas' eyes, not ints). pd.read_sql() seems to handle this gracefully: it recognizes there is an int column with NaNs and converts it to a float column. It would be great if pyathena did the same thing.
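To illustrate the behaviour being described (a minimal, self-contained sketch, not code from the issue): as soon as an integer column contains a missing value, plain pandas upcasts it to float64, because NaN is itself a float.

import pandas as pd

# An all-integer column keeps an integer dtype.
print(pd.Series([1, 2, 3]).dtype)      # int64

# A single missing value forces the whole column to float, since NaN is a float.
print(pd.Series([1, 2, None]).dtype)   # float64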
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 2
- Comments:15 (9 by maintainers)
Top GitHub Comments
Pandas 0.24+ has support for nullable ints, so I was able to keep my int columns as ints (rather than converting to double) by changing converter.py to use them (see the sketch below).
If you’re willing to set the minimum requirements to pandas >=0.24, I think this fix would be cleaner than converting to double.
https://github.com/pandas-dev/pandas/issues/24326
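As a rough sketch of the idea (the actual converter.py change from the comment isn't reproduced here), pandas 0.24+ can keep missing values in an integer column by using the nullable extension dtype "Int64" instead of letting the column fall back to float:

import pandas as pd

# Assumes pandas >= 0.24: "Int64" (capital I) is the nullable integer extension dtype.
s = pd.Series([1, 2, None], dtype="Int64")
print(s.dtype)   # Int64
print(s)         # the missing entry is shown as <NA>; the column stays integer

A converter along these lines would let Athena integer columns containing NULLs round-trip without becoming doubles, at the cost of requiring pandas >= 0.24, as noted above.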
All tests passed. 🎉 https://github.com/laughingman7743/PyAthena/pull/80 drops Python 3.4 support. It will still work with Python 3.4 unless you use PandasCursor.