Reading only requested number of partitions from BQ table doesn't work
I am trying to query a table which is partitioned by a date field, like this:
val prog_logs = spark.read.format("bigquery")
  .option("table", "project1:dataset.table")
  .option("filter", " date between '2019-09-10' and '2019-09-11' ")
  .load()
  .cache()
This reads the entire table instead of only the '2019-09-10' and '2019-09-11' partitions.
Sorry for the late response, but for proper push down in Spark 2.4 I think you need: date between date('2019-09-10') and date('2019-09-11') (i.e. explicitly cast the values to date).
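For completeness, a minimal sketch of the original snippet with that cast applied (same example table name and options as in the report; this assumes the spark-bigquery connector is on the classpath, and whether the predicate is actually pushed down should be confirmed from the connector's logs):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bq-partition-filter")
  .getOrCreate()

// Wrapping the literals in date(...) makes the filter an explicit DATE
// comparison, which is what the comment above suggests is needed for the
// connector to push the predicate down and scan only the two partitions.
val prog_logs = spark.read.format("bigquery")
  .option("table", "project1:dataset.table")
  .option("filter", "date between date('2019-09-10') and date('2019-09-11')")
  .load()
  .cache()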
@jesuejunior is this still an issue? Can you please share the log of a sample app? (It shows the pushed-down filters.)