[SUPPORT] Z-order clustering on a moderate-size dataset taking a long time.
Describe the problem you faced
I am trying to experiment with z-ordering on a 50 GB+ dataset locally to understand the behavior. I noticed a large number of stages, and the job is quite slow as a result. I want to confirm this is expected.
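For context, z-ordering maps several column values onto a single sort key by interleaving their bits, so one sort clusters the data on all columns at once. A minimal illustration of the idea (not Hudi's actual implementation, just a two-dimensional Morton code sketch):

def mortonCode(x: Int, y: Int): Int = {
  // Interleave the low 16 bits of two ints into one 32-bit Morton (z-order)
  // code; sorting by this key keeps rows that are close in both dimensions
  // close together on disk.
  var z = 0
  for (i <- 0 until 16) {
    z |= ((x >> i) & 1) << (2 * i)       // even bit positions take x's bits
    z |= ((y >> i) & 1) << (2 * i + 1)   // odd bit positions take y's bits
  }
  z
}

// e.g. mortonCode(1, 1) == 3 (binary 11: x's bit 0 at position 0, y's bit 0 at position 1)

Computing such a key for every row and then doing a global sort on it is a full shuffle and sort of the dataset, which is why a large number of stages is plausible here.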
To Reproduce
Steps to reproduce the behavior:
- Use any 50 GB+ dataset. I am using the Amazon reviews dataset: https://s3.amazonaws.com/amazon-reviews-pds/readme.html
- Run a bulk insert with inline clustering enabled:
import org.apache.hudi.DataSourceWriteOptions._  // PRECOMBINE_FIELD, RECORDKEY_FIELD, OPERATION, BULK_INSERT_SORT_MODE
import org.apache.spark.sql.SaveMode.Overwrite

val df = spark.read.parquet(inputPath)

val commonOpts = Map(
  "hoodie.bulk_insert.shuffle.parallelism" -> "10",
  "hoodie.clustering.inline" -> "true",
  "hoodie.clustering.inline.max.commits" -> "1",
  "hoodie.layout.optimize.enable" -> "true",
  "hoodie.clustering.plan.strategy.sort.columns" -> "product_id,customer_id,review_date")

df.write.format("hudi").
  option(PRECOMBINE_FIELD.key(), "review_id").
  option(RECORDKEY_FIELD.key(), "review_id").
  option("hoodie.table.name", "amazon_reviews_hudi").
  option(OPERATION.key(), "bulk_insert").
  option(BULK_INSERT_SORT_MODE.key(), "NONE").
  options(commonOpts).
  mode(Overwrite).
  save(outputPath)
Expected behavior
Environment Description
- Hudi version : 0.10-SNAPSHOT
- Spark version : Apache Spark 3.0
- Hive version :
- Hadoop version :
- Storage (HDFS/S3/GCS…) : Local filesystem
- Running on Docker? (yes/no) :
Issue Analytics
- Created 2 years ago
- Comments: 17 (13 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@rubenssoto @vinothchandar I have run the test. Suggestion: if you use z-order/hilbert in clustering, please set option("hoodie.clustering.plan.strategy.max.bytes.per.group", Long.MaxValue.toString). We want as many files as possible to participate in the sort; that way the sorting effect is best and there is no parallelism problem.
On the issue of parallelism: the current clustering mechanism makes it impossible to do the z-sort in parallel. Let me submit a PR to solve this problem.
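The suggestion above can be sketched as an adjustment to the options map from the repro (a hedged sketch: the option key comes from the comment above, and whether Long.MaxValue is a sensible group size for your cluster is an assumption, not Hudi guidance):

val clusteringOpts = commonOpts ++ Map(
  // Put all candidate files into a single clustering group so every file
  // participates in the same z-order sort, per the suggestion above.
  "hoodie.clustering.plan.strategy.max.bytes.per.group" -> Long.MaxValue.toString)

// Reuse the same write as in the repro, swapping in the larger group size:
df.write.format("hudi").
  options(clusteringOpts).
  mode(Overwrite).
  save(outputPath)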
Will hoodie.clustering.plan.strategy.sort.columns trigger a re-sort of all rows?