
[SUPPORT] Issue while writing to HDFS via hudi. Only `/.hoodie` folder is written.


Hi, all. We are doing a POC, experimenting with syncing our data in micro-batches from Kafka to HDFS. We are currently using the general Kafka consumer APIs, converting the records to a Dataset, and then writing it to HDFS via Hudi. We are facing some problems with this.

// `items` is List<String> containing data from kafka
final Dataset<Record> df = spark.createDataset(items, Encoders.STRING()).toDF()
        .map(new Mapper(), Encoders.bean(Record.class))
        .filter(new Column("name").equalTo("aaa"));

df.write().format("hudi")
        .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "id")
        .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "batch")
        .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "timestamp")
        .option(HoodieWriteConfig.TABLE_NAME, table)
        .mode(SaveMode.Append)
        .save(output);
        //.parquet(output);

a) When using the save option to write the dataset, only the /.hoodie folder exists after writing; no actual data is present. From the logs we are not seeing any issues. The following set of lines is repeated continuously during the write phase.

91973 [Executor task launch worker for task 11129] INFO  org.apache.spark.storage.BlockManager  - Found block rdd_36_624 locally
91974 [Executor task launch worker for task 11128] INFO  org.apache.spark.storage.BlockManager  - Found block rdd_36_623 locally
91974 [Executor task launch worker for task 11129] INFO  org.apache.spark.executor.Executor  - Finished task 624.0 in stage 19.0 (TID 11129). 699 bytes result sent to driver
91975 [dispatcher-event-loop-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 625.0 in stage 19.0 (TID 11130, localhost, executor driver, partition 625, PROCESS_LOCAL, 7193 bytes)
91975 [Executor task launch worker for task 11130] INFO  org.apache.spark.executor.Executor  - Running task 625.0 in stage 19.0 (TID 11130)
91975 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 624.0 in stage 19.0 (TID 11129) in 16 ms on localhost (executor driver) (624/1500)
91985 [Executor task launch worker for task 11128] INFO  org.apache.spark.executor.Executor  - Finished task 623.0 in stage 19.0 (TID 11128). 871 bytes result sent to driver
91985 [dispatcher-event-loop-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 626.0 in stage 19.0 (TID 11131, localhost, executor driver, partition 626, PROCESS_LOCAL, 7193 bytes)
91985 [task-result-getter-1] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 623.0 in stage 19.0 (TID 11128) in 27 ms on localhost (executor driver) (625/1500)
91986 [Executor task launch worker for task 11131] INFO  org.apache.spark.executor.Executor  - Running task 626.0 in stage 19.0 (TID 11131)

We have verified that there is no issue with fetching data from Kafka or creating the dataset. The only issue seems to be with the write.

b) When using the parquet option to write the dataset, actual data is written in parquet format in the output directory, but without any partition folders. Is this expected? What is the difference between save and parquet? Also, while querying this parquet data via Spark SQL in the Spark shell, I was not able to find any Hudi meta fields. For example:

spark.sql("select id, name, `_hoodie_commit_time` from table1 limit 5").show();

The query was throwing an error that there is no such field called _hoodie_commit_time.

c) Where can I find the metadata regarding the data currently present in Hudi tables, i.e., what are the new commits? When was the last commit? etc. From the documentation it seemed this data is managed by Hudi.

d) How is data compaction managed by Hudi? Are there any background jobs running?

Sorry if these are naive questions, but we are completely new to this. It would also be helpful if someone could point us to more detailed documentation on these topics.

Thanks.

Steps to reproduce the behavior:

  1. The code snippet used for the write is shared above.

Expected behavior

Currently, when writing with save, only the /.hoodie folder is written, without any data. The expected behaviour is that data should also be written.


Environment Description

  • Hudi version : 0.5.2-incubating

  • Spark version : 2.4.0

  • Hive version : -

  • Hadoop version : 2.9.2

  • Storage (HDFS/S3/GCS…) : HDFS

  • Running on Docker? (yes/no) : No

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
vinothchandar commented, Apr 21, 2020

@jenu9417 Thanks for taking the time to report this.

a) is weird… The logs do indicate that tasks got scheduled at least… but I think the job died before getting to write any data. Do you have access to the Spark UI, to see how the jobs are doing?
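
If the Spark UI is not reachable, a rough debugging sketch (not from the original thread, reusing the df, table and output variables from the snippet above) is to materialize the dataset before the write and wrap the save in a try/catch, so a failure upstream and a failure in the Hudi write show up separately:

// Force an action first: if this fails, the problem is in the Kafka/mapping
// stages, not in the Hudi write.
final long rows = df.count();
System.out.println("Rows about to be written: " + rows);

try {
    df.write().format("hudi")
            .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "id")
            .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "batch")
            .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "timestamp")
            .option(HoodieWriteConfig.TABLE_NAME, table)
            .mode(SaveMode.Append)
            .save(output);
} catch (Exception e) {
    // Anything caught here points at the write itself, not the upstream stages.
    throw new RuntimeException("Hudi write failed", e);
}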

b) So .parquet() does not use Hudi at all (I suspect)… It uses the Spark parquet datasource, and you can look at the official Spark docs to understand how you can partition that write (I think .partitionBy("batch")). .save() will invoke the save method of the datasource you configured via format(...)… The Spark docs will do a better job of explaining this than me 😃
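
A side-by-side sketch of the two writes (assuming the df, table and output variables from the snippet above; the partitioning column is the same "batch" field):

// Plain Spark parquet datasource: no Hudi metadata is added, and partition
// folders only appear if you ask for them explicitly.
df.write()
        .partitionBy("batch")
        .mode(SaveMode.Append)
        .parquet(output);

// Hudi datasource: format("hudi") routes save() to Hudi, which lays out the
// partition folders from PARTITIONPATH_FIELD_OPT_KEY and adds the _hoodie_* fields.
df.write().format("hudi")
        .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "id")
        .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "batch")
        .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "timestamp")
        .option(HoodieWriteConfig.TABLE_NAME, table)
        .mode(SaveMode.Append)
        .save(output);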

The query was throwing an error that there is no such field called _hoodie_commit_time

Parquet and Hudi are different things… only Hudi datasets have this field.
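
To see the meta fields, the table has to be read back through the Hudi datasource rather than as plain parquet. A sketch (the trailing path glob is what Hudi 0.5.x typically needs, one wildcard per partition level; adjust to your layout):

// Read the Hudi table back via the Hudi datasource; only then do the
// _hoodie_* meta fields appear in the schema.
Dataset<Row> hudiDf = spark.read().format("hudi")
        .load(output + "/*");

hudiDf.createOrReplaceTempView("table1");
spark.sql("select id, name, _hoodie_commit_time from table1 limit 5").show();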

c) The .hoodie folder will contain all the metadata
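
For example, a quick (hypothetical) way to look at the commit timeline is to list the .hoodie folder under the table base path with the plain Hadoop FileSystem API (org.apache.hadoop.fs.FileSystem, FileStatus, Path); each completed commit shows up as an <instant-time>.commit file:

// List the commit/metadata files under <basePath>/.hoodie
FileSystem fs = FileSystem.get(spark.sparkContext().hadoopConfiguration());
for (FileStatus status : fs.listStatus(new Path(output + "/.hoodie"))) {
    System.out.println(status.getPath().getName());
}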

d) You can find more on compaction here https://cwiki.apache.org/confluence/display/HUDI/Design+And+Architecture#DesignAndArchitecture-Compaction
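
Note that compaction only applies to MERGE_ON_READ tables; with the default COPY_ON_WRITE type there is nothing to compact. If you switch to MOR, inline compaction can be enabled through write options, roughly like this (a hedged sketch; the config keys are the 0.5.x string names, reusing the variables from the snippet above):

// Hypothetical MOR write with inline compaction: every N delta commits,
// compaction runs as part of the same write job, not a separate background service.
df.write().format("hudi")
        .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "id")
        .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), "batch")
        .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "timestamp")
        .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
        .option("hoodie.compact.inline", "true")
        .option("hoodie.compact.inline.max.delta.commits", "5")
        .option(HoodieWriteConfig.TABLE_NAME, table)
        .mode(SaveMode.Append)
        .save(output);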

0 reactions
jenu9417 commented, Apr 23, 2020

@lamber-ken @vinothchandar The above-mentioned suggestions work fine. The time to write has now reduced drastically. Thank you for the continued support.

Closing the ticket, since the original issue is resolved now.
