[firestore-bigquery-export] Resources exceeded during query execution error
See original GitHub issue
- Extension name: firestore-bigquery-export
- Extension version: 0.1.16
- Configuration values (redact info where appropriate):
  - Cloud Functions location: us-central1
  - BigQuery Dataset location: us
Getting an error selecting from one of my ported collections. All other, much larger, collections work fine.
Running:
select * from firestore_export.mycollection_raw_latest limit 10
Produces:
Resources exceeded during query execution: The query could not be executed in the allotted memory. Peak usage: 122% of limit. Top memory consumer(s): sort operations used for analytic OVER() clauses: 98%; other/unattributed: 2%
No other query against anything else in my database (other Firestore-ported collections, materialized views, scheduled tables generated from Firestore collection ports) throws that error.
Workaround: I added my own version of the _latest view, which I believe is functionally equivalent but without the memory pressure:
create view myschema.mycollection_raw_latest as
SELECT document_name, document_id, timestamp, event_id, operation, data
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY document_name ORDER BY timestamp DESC) AS rn
  FROM firestore_export.mycollection_raw_changelog
)
WHERE operation <> 'DELETE' AND rn = 1
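The semantics of this view can be sketched outside BigQuery as a single pass over the changelog: keep only the newest event per document, then drop documents whose newest event is a DELETE. The document names, timestamps, and payloads below are made-up illustrations, not data from the issue:

```python
# Sketch of the _raw_latest semantics: for each document_name, keep the
# event with the greatest timestamp, then drop documents whose latest
# operation is DELETE. Keys mirror the changelog columns.
def latest_view(changelog):
    newest = {}
    for event in changelog:
        key = event["document_name"]
        if key not in newest or event["timestamp"] > newest[key]["timestamp"]:
            newest[key] = event
    return [e for e in newest.values() if e["operation"] != "DELETE"]

changelog = [
    {"document_name": "docs/a", "timestamp": 1, "operation": "IMPORT", "data": "{}"},
    {"document_name": "docs/a", "timestamp": 2, "operation": "UPDATE", "data": '{"x": 1}'},
    {"document_name": "docs/b", "timestamp": 1, "operation": "INSERT", "data": "{}"},
    {"document_name": "docs/b", "timestamp": 3, "operation": "DELETE", "data": None},
]
live = latest_view(changelog)  # docs/b is gone: its newest event is a DELETE
```

Note that the DELETE filter runs after the per-document dedup, matching the SQL's `rn = 1 AND operation <> 'DELETE'`: a document whose latest event is a DELETE disappears entirely rather than falling back to an older event.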
Issue Analytics
- State:
- Created 2 years ago
- Comments: 9 (7 by maintainers)
Top GitHub Comments
The first query creates a fake changelog table that will reproduce the error; pretend it's just a really big document that gets modified very often. The second is your view that goes kaboom. I tried making my own version, but imports aren't given event_ids, so nope, not happening…
create or replace table changelog_table as
with a as (
  select
    DATE_FROM_UNIX_DATE(CAST(RAND() * 10000 AS INT64)) as timestamp,
    GENERATE_UUID() as event_id,
    'projects/myproject/databases/(default)/documents/mycollection/mydocument' as document_name,
    'UPDATE' as operation,
    string_agg(concat(word, word), '') as data,
    'mydocument' as document_id
  from `publicdata.samples.shakespeare`
)
select a.* from a, `publicdata.samples.shakespeare` limit 20000;
create or replace table your_latest as
-- Retrieves the latest document change events for all live documents.
--   timestamp: The Firestore timestamp at which the event took place.
--   operation: One of INSERT, UPDATE, DELETE, IMPORT.
--   event_id: The id of the event that triggered the cloud function mirroring the event.
--   data: A raw JSON payload of the current state of the document.
--   document_id: The document id as defined in the Firestore database.
SELECT document_name, document_id, timestamp, event_id, operation, data
FROM (
  SELECT
    document_name,
    document_id,
    FIRST_VALUE(timestamp) OVER (PARTITION BY document_name ORDER BY timestamp DESC) AS timestamp,
    FIRST_VALUE(event_id) OVER (PARTITION BY document_name ORDER BY timestamp DESC) AS event_id,
    FIRST_VALUE(operation) OVER (PARTITION BY document_name ORDER BY timestamp DESC) AS operation,
    FIRST_VALUE(data) OVER (PARTITION BY document_name ORDER BY timestamp DESC) AS data,
    FIRST_VALUE(operation) OVER (PARTITION BY document_name ORDER BY timestamp DESC) = "DELETE" AS is_deleted
  FROM changelog_table
  ORDER BY document_name, timestamp DESC
)
WHERE NOT is_deleted
GROUP BY document_name, document_id, timestamp, event_id, operation, data;
Draft PR here https://github.com/firebase/extensions/pull/1288 to fix this.