
BUG: pd.read_sql returns empty list if query has no results and chunksize is set

See original GitHub issue
  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • [x] (optional) I have confirmed this bug exists on the master branch of pandas.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

import pandas as pd
import sqlite3

# Create empty test table in memory
conn = sqlite3.connect(':memory:')
conn.cursor().execute('CREATE TABLE test (column_1 INTEGER);')

# Run the query without chunksize, works as expected
pd.read_sql('select * from test', conn)

# Run the query with chunksize, returns generator as expected
pd.read_sql('select * from test', conn, chunksize=5)

# However, the generator is empty
list(pd.read_sql('select * from test', conn, chunksize=5))

# I would expect that, for all cases where chunksize isn't necessary,
# the following two lines would return exactly the same
# result, but the second throws "ValueError: No objects to concatenate"
pd.read_sql('select * from test', conn)

pd.concat(pd.read_sql('select * from test', conn, chunksize=5))

Problem description

In many cases, returning zero rows is an expected result, and the code should run fine on the returned DataFrame (iterating over it, getting all values in a row, etc.).

The current behaviour instead yields no chunks at all (an empty list once the generator is materialised), with no information about, for example, the columns of the DataFrame.
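
As a minimal sketch of why this matters (reusing the in-memory SQLite table from the code sample above), any consumer that iterates over the chunks simply does nothing, so downstream code never even learns the column names:

import pandas as pd
import sqlite3

conn = sqlite3.connect(':memory:')
conn.cursor().execute('CREATE TABLE test (column_1 INTEGER);')

# With zero matching rows the generator yields no chunks at all,
# so the loop body never runs and the columns are never observed.
for chunk in pd.read_sql('select * from test', conn, chunksize=5):
    print(chunk.columns)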

Expected Output

The expected output would be a list containing a single empty dataframe, with the correct column metadata. I would expect that, for all queries that run fine without chunksize being set, the following equality should hold:

pd.testing.assert_frame_equal(
    pd.read_sql(query, conn),
    pd.concat(pd.read_sql(query, conn, chunksize=5))
)

Output of pd.show_versions()

INSTALLED VERSIONS

commit : 333db4b765f8e88c0c2392943cb7d6c6013dc6e8
python : 3.8.2.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
Version : Darwin Kernel Version 18.7.0: Thu Jan 23 06:52:12 PST 2020; root:xnu-4903.278.25~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_GB.UTF-8
LOCALE : en_GB.UTF-8

pandas : 1.1.0.dev0+1685.g333db4b76.dirty
numpy : 1.18.4
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 46.4.0.post20200518
Cython : 0.29.19
pytest : 5.4.2
hypothesis : 5.16.0
sphinx : 3.0.4
blosc : None
feather : None
xlsxwriter : 1.2.8
lxml.etree : 4.5.1
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.14.0
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fastparquet : 0.4.0
gcsfs : None
matplotlib : 3.2.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : 0.17.1
pytables : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.4.1
sqlalchemy : 1.3.17
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.15.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.48.0

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
JohanKahrstrom commented, Jul 8, 2020

Make sure to set the query variable first, i.e.

query = "SELECT * FROM table"
1 reaction
JohanKahrstrom commented, Jun 22, 2020

The reported bug only happens if the query returns zero rows, in which case the generator is empty (and in your code example, since the generator is empty, the loop never executes). For that case the workaround is:

generator = pd.read_sql(query, conn, chunksize=50000)
try:
    df = pd.concat(generator)
except ValueError:
    # We know the query returned zero rows, so it's safe not to pass a chunksize
    # (unless the table has been populated since the first execution of the query)
    df = pd.read_sql(query, conn)

My PR for a fix has been accepted, but I don’t know when it will be merged, I’m afraid.
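
A possible alternative sketch (not from the thread; it assumes a DB-API connection such as sqlite3 and uses the cursor's description to recover the column names) is to fall back to an empty DataFrame built from the cursor metadata, so the query is not routed through pandas a second time:

import pandas as pd
import sqlite3

conn = sqlite3.connect(':memory:')
conn.cursor().execute('CREATE TABLE test (column_1 INTEGER);')
query = 'select * from test'

chunks = list(pd.read_sql(query, conn, chunksize=5))
if chunks:
    df = pd.concat(chunks)
else:
    # Zero rows came back: build an empty frame with the right columns
    # from the DB-API cursor metadata instead of re-running the read.
    cursor = conn.execute(query)
    df = pd.DataFrame(columns=[col[0] for col in cursor.description])

Note that materialising the generator with list() gives up the memory benefit of chunksize, so this sketch only makes sense when the result is expected to fit in memory anyway.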

Read more comments on GitHub >

Top Results From Across the Web

Handle empty result with read_sql chunked - Stack Overflow
I need to return an empty dataframe even if there is no rows (the columns must be there). I need to keep the...
Read more >
pandas.read_sql — pandas 1.5.2 documentation
List of column names to select from SQL table (only used when reading a table). If specified, return an iterator where chunksize is...
Read more >
Loading SQL data into Pandas without running out of memory
The problem: you're loading all the data into memory at once. If you have enough rows in the SQL query's results, it simply...
Read more >
pandas read_sql vs read_sql_query - You.com | The AI ...
It seems that read_sql_query only checks the first 3 values returned in a column to determine the type of the column. So if...
Read more >
Python Examples of pandas.read_sql - ProgramCreek.com
The following are 30 code examples of pandas.read_sql(). ... Unknown error. ... ResourceClosedError: # Query didn't return results return None. Example #7 ...
Read more >
