
Inserting into SQL server with fast_executemany results in MemoryError

See original GitHub issue

Environment

To diagnose, we usually need to know the following, including version numbers. On Windows, be sure to specify whether Python is 32-bit or 64-bit:

  • Python: 3.6.8
  • pyodbc: 4.0.26
  • OS: Alpine 3.8
  • DB: Azure SQL Database
  • driver: Microsoft ODBC Driver 17 for SQL Server

Issue

I’m loading data from a SQL Server 2016 instance into an Azure SQL Database. When inserting rows with a parameterized insert statement and fast_executemany=False, it works perfectly. When fast_executemany is turned on, only a very brief error message is displayed:

in bulk_insert_rows cursor.executemany(sql, row_chunk) MemoryError
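A minimal sketch of the pattern described (the connection string, table, and data below are illustrative placeholders, not taken from the original report):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.database.windows.net;"
    "DATABASE=target_db;UID=user;PWD=secret"
)
cursor = conn.cursor()
rows = [("value %d" % i, i) for i in range(10000)]  # placeholder data

cursor.fast_executemany = False  # works every time
cursor.executemany("INSERT INTO dbo.target (col1, col2) VALUES (?, ?)", rows)

cursor.fast_executemany = True   # raises MemoryError in the reported scenario
cursor.executemany("INSERT INTO dbo.target (col1, col2) VALUES (?, ?)", rows)
conn.commit()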

This is all I get. I’ve tried setting different encodings on the connection, as described at https://github.com/mkleehammer/pyodbc/wiki/Unicode. It fails every single time with fast_executemany set to True and succeeds every single time with it turned off.
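For reference, the connection-level encoding settings that wiki page recommends for SQL Server look like this (reusing the conn object from the sketch above; per the report, they did not resolve the error):

# Encoding settings from the pyodbc Unicode wiki page for SQL Server
conn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
conn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-16le')
conn.setencoding(encoding='utf-16le')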

Looking for other ideas to troubleshoot. Thanks.

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments:15 (6 by maintainers)

Top GitHub Comments

3 reactions
v-chojas commented, Apr 11, 2019

@gordthompson yes, using SQL_WVARCHAR works:

cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 0, 0)])

(The (0, 0) for size and precision instructs the driver to bind as nvarchar(max) instead of regular nvarchar — and is needed if you want to insert >4000 characters.)
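A sketch of how that workaround fits into an insert (table and column names are illustrative; setinputsizes takes one (type, size, precision) tuple per parameter):

cursor.fast_executemany = True
# Bind the single string parameter as nvarchar(max) rather than nvarchar(4000)
cursor.setinputsizes([(pyodbc.SQL_WVARCHAR, 0, 0)])
cursor.executemany(
    "INSERT INTO dbo.target (big_text) VALUES (?)",
    [("x" * 100000,)],  # more than 4000 characters, so max binding is required
)
conn.commit()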

0 reactions
v-chojas commented, Aug 21, 2020

"My suggestion about the issue is to change the code and add a check."

Although the fast_executemany feature was designed with SQL Server in mind, it is meant to be as generic as pyodbc itself, so it would not be a good idea to add references to DB-specific types (and how would it even know? At the ODBC interface, the data just looks like a very large character/binary column). If you do have 2 GB free (definitely possible on a 64-bit system), it can certainly make use of it.

Read more comments on GitHub

Top Results From Across the Web

  • pyodbc: Memory Error using fast_executemany with TEXT ...
    It works when I avoid using fast_executemany but then inserts become very slow. driver = 'ODBC Driver 17 for SQL Server' conn =...
  • pyodbc: Memory Error using fast_executemany with TEXT ...
    I'm having an issue with inserting rows into a database. ... driver = 'ODBC Driver 17 for SQL Server' conn = pyodbc.connect('DRIVER=' +...
  • How to Make Inserts Into SQL Server 100x faster with Pyodbc
    I've been recently trying to load large datasets to a SQL Server database ... related to "fast_executemany" when loading data to SQL Server....
  • Resolve Out Of Memory Issues - SQL Server - Microsoft Learn
    What to keep in mind when using In-Memory OLTP in a virtualized environment. Resolve database restore failures due to OOM. When you attempt...
  • Benchmarks for writing pandas DataFrames to SQL Server ...
    Failed implementations: BULK INSERT; turbodbc + fast_executemany, as this method is not implemented for that SQLAlchemy dialect; pymssql...
