SelectQuery loads all models into memory at once
I have code like this:
for book in Book.select().where(Book.merchant == m).order_by(Book.last_checked_at):
There are approximately 100k rows in the table (PostgreSQL). When I don't use any limit functions, peewee loads all 100k rows at the very beginning of the loop, which takes 250 MB of memory. When I use limit(1000), it only takes 30 MB. Is there any way to use cursors so that models are pulled incrementally as the for loop requests them, instead of reading the entire table into memory?
Issue Analytics
- State:
- Created: 10 years ago
- Comments: 13 (8 by maintainers)
Top GitHub Comments
Try this one:
What I’m curious about is whether this memory usage is related to caching instances on the results wrapper (so that iterating a query multiple times does not cause multiple queries), or is just due to the way psycopg2 handles large result sets.
Much better, thank you!