[SUPPORT] Upgrade from 0.8.0 to 0.10.0 decreases Upsert performance
Describe the problem you faced
Recently, we upgraded our testing environment from Hudi 0.8.0 to Hudi 0.10.0, and after the upgrade we noticed that upsert jobs for some of our existing tables run much slower than they did on Hudi 0.8.0.
For our Hudi tables, we run one bulk_insert job to ingest the initial snapshot, and then schedule an upsert job every 10 minutes to ingest incremental updates.
To reproduce the issue, we ran an upsert job on a table around 1.8 TB in size. The job took in 11 TSV files (< 150 MB in total) containing both new records and updates.
In Hudi 0.8.0, the job took 8.5 minutes to complete, whereas in Hudi 0.10.0 it took 19 minutes. The main difference seemed to come from the “Getting small files from partitions” stage.
(Spark UI screenshots: 0.8.0 run vs. 0.10.0 run)
We also ran the same upsert job against a fresh table with no pre-existing snapshot or incremental data; in both 0.8.0 and 0.10.0 the job took around 8 minutes to complete.
Based on this result, we suspect that in Hudi 0.10.0 upsert performance degrades as more upsert jobs complete and the table grows, whereas in Hudi 0.8.0 we did not notice this kind of degradation.
Environment Description
- Hudi version : 0.10.0
- Spark version : 2.4.7
- Hive version : 2.3.7
- Hadoop version : 2.10.1
- Storage (HDFS/S3/GCS…) : S3
- Running on Docker? (yes/no) : no
- AWS EMR: 5.33.0, 1 master node (r6g.16xlarge) with 20 core nodes (r6g.16xlarge)
Additional context
Spark configs:
--deploy-mode cluster
--executor-memory 43g
--driver-memory 43g
--executor-cores 6
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.sql.hive.convertMetastoreParquet=false
--conf spark.hadoop.fs.s3.maxRetries=30
--conf spark.yarn.executor.memoryOverhead=5g
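Assembled, the submission looks roughly like the sketch below. This is a config fragment only: the application JAR name, entry class, and input path are hypothetical placeholders, not taken from the issue.

```shell
# Sketch only: spark-submit invocation assembled from the flags above.
# upsert-job.jar, com.example.HudiUpsertJob, and the input path are placeholders.
spark-submit \
  --deploy-mode cluster \
  --executor-memory 43g \
  --driver-memory 43g \
  --executor-cores 6 \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.sql.hive.convertMetastoreParquet=false \
  --conf spark.hadoop.fs.s3.maxRetries=30 \
  --conf spark.yarn.executor.memoryOverhead=5g \
  --class com.example.HudiUpsertJob \
  upsert-job.jar s3://bucket/input/
```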
Hudi configs:
hoodie.consistency.check.enabled -> true
hoodie.datasource.write.table.type -> "COPY_ON_WRITE"
hoodie.datasource.write.keygenerator.class -> "org.apache.hudi.keygen.ComplexKeyGenerator"
hoodie.upsert.shuffle.parallelism -> 1500
hoodie.parquet.max.file.size -> 500 * 1024 * 1024
hoodie.datasource.write.operation -> "upsert"
hoodie.metadata.enable -> true
hoodie.metadata.validate -> true
hoodie.fail.on.timeline.archiving -> false
hoodie.clean.automatic -> true
hoodie.cleaner.commits.retained -> 72
hoodie.keep.min.commits -> 100
hoodie.keep.max.commits -> 150
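For context, these options correspond to a DataFrame write roughly like the following sketch (PySpark). The dict simply restates the configs above with values as strings, as Hudi expects; `df` and the target S3 path are hypothetical placeholders.

```python
# Sketch only: the Hudi write options from this issue, assembled as a dict
# for df.write.format("hudi").options(**hudi_options).mode("append").save(path).
hudi_options = {
    "hoodie.consistency.check.enabled": "true",
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.ComplexKeyGenerator",
    "hoodie.upsert.shuffle.parallelism": "1500",
    "hoodie.parquet.max.file.size": str(500 * 1024 * 1024),  # 500 MB
    "hoodie.datasource.write.operation": "upsert",
    "hoodie.metadata.enable": "true",
    "hoodie.metadata.validate": "true",
    "hoodie.fail.on.timeline.archiving": "false",
    "hoodie.clean.automatic": "true",
    "hoodie.cleaner.commits.retained": "72",
    "hoodie.keep.min.commits": "100",
    "hoodie.keep.max.commits": "150",
}

# Hypothetical usage (requires a running SparkSession with Hudi jars on the classpath):
# df.write.format("hudi").options(**hudi_options).mode("append").save("s3://bucket/path/table")
```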
Please let me know if you need any more information, thanks.
Issue Analytics
- Created: 2 years ago
- Reactions: 2
- Comments: 11 (5 by maintainers)
Good morning,
We are experiencing the same issue with 0.10 and 0.9 (see UI below). We are also on S3, but using AWS Glue rather than EMR. What stands out to me are the three consecutive “Getting small files from partitions” stages, with 4, 20, and 100 tasks respectively. The stages with 4 and 20 tasks obviously get very poor parallelization. The identical behavior exists in my UI and in ChiehFu’s.
@nsivabalan we still see this behavior, but I didn’t see this in Slack. The stages in the Spark UI are clearer in versions after 0.9.0, but we are still using that version.