
presto - querying nested object in parquet file created by hudi

See original GitHub issue

Describe the problem you faced

Using an AWS EMR Spark job to create a Hudi parquet record in S3 from a Kinesis stream. Querying this record from Presto is fine, but I can’t seem to query a nested column.

Update: From my further investigation, I think not being able to query nested objects, or using select * from ..., is just a symptom of taking an array object off a Kinesis stream and saving it with Hudi.

To Reproduce

Steps to reproduce the behavior:

  1. Spark job reads from the Kinesis stream and saves a Hudi file to S3
  2. AWS Glue job creates a database from the record
  3. Log into the AWS EMR cluster with Presto installed
  4. Run presto-cli --catalog hive --schema schema --server server:8889
  5. Run the queries below:

works without nesting

presto:schema> select id from default;
    id    
----------
 34551832 
(1 row)

Query 20200211_212022_00055_hej8h, FINISHED, 1 node
Splits: 17 total, 17 done (100.00%)
0:01 [1 rows, 93B] [1 rows/s, 179B/s]

query that doesn’t work with nesting

presto:schema> select id, order.channel from default;
Query 20200211_212107_00056_hej8h failed: line 1:12: mismatched input 'order'. Expecting: '*', <expression>, <identifier>
select id, order.channel from default
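Note: order is a reserved keyword in Presto SQL, so the parser rejects the bare identifier before anything Hudi-specific even runs. Quoting the column name should at least get past the parse error (a sketch based on the error message and the table structure below, not verified against this table):

presto:schema> select id, "order".channel from default;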

table structure

presto:data-lake-database-dev-adam-8> show columns from default;
         Column         |                              Type
------------------------+---------------------------------------------------------------
 _hoodie_commit_time    | varchar
 _hoodie_commit_seqno   | varchar
 _hoodie_record_key     | varchar
 _hoodie_partition_path | varchar
 _hoodie_file_name      | varchar
 eventtimestamp         | varchar
 id                     | bigint
 order                  | row(channel varchar, customer row(address row(country varchar, postcode varchar, region varchar), birthdate varchar, createddate varchar, email varchar, firstname varchar, id bigi
(11 rows)
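Given that row type, deeper fields would in principle be reached by chaining dot notation onto the quoted identifier. A sketch based purely on the schema above (field names taken from the row type, unverified):

presto:schema> select "order".customer.address.country from default;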

Deploy script

aws emr add-steps --cluster-id j-xxxxxx --steps Type=spark,Name=ScalaStream,Args=[\
--deploy-mode,cluster,\
--master,yarn,\
--packages,\'org.apache.hudi:hudi-spark-bundle:0.5.0-incubating\',\
--jars,\'/usr/lib/spark/external/lib/spark-avro.jar,/usr/lib/spark/external/lib/spark-streaming-kinesis-asl-assembly.jar\',\
--conf,spark.yarn.submit.waitAppCompletion=false,\
--conf,yarn.log-aggregation-enable=true,\
--conf,spark.dynamicAllocation.enabled=true,\
--conf,spark.cores.max=4,\
--conf,spark.network.timeout=300,\
--conf,spark.serializer=org.apache.spark.serializer.KryoSerializer,\
--conf,spark.sql.hive.convertMetastoreParquet=false,\
--class,ScalaStream,\
s3://xxx.xxx/simple-project_2.11-1.0.jar\
],ActionOnFailure=CONTINUE

sbt file

name := "Simple Project"

version := "1.0"

scalaVersion := "2.11.12"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.4"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.4.4"
libraryDependencies += "org.apache.spark" %% "spark-streaming-kinesis-asl" % "2.4.4"
libraryDependencies += "org.apache.hudi" % "hudi-spark-bundle" % "0.5.0-incubating"


scalacOptions := Seq("-unchecked", "-deprecation")
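The ScalaStream class referenced in the deploy script is not included in the issue. As a rough sketch only, a job with the dependencies above might look like the following — the stream name, region, batch interval, S3 path, and the assumption that the Kinesis payload is JSON are all placeholders, not details from the original report:

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

object ScalaStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ScalaStream").getOrCreate()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(60))

    // Raw byte records off the Kinesis stream (stream name and region are placeholders).
    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("my-stream")
      .endpointUrl("https://kinesis.us-east-1.amazonaws.com")
      .regionName("us-east-1")
      .initialPosition(new KinesisInitialPositions.Latest())
      .checkpointAppName("ScalaStream")
      .checkpointInterval(Seconds(60))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()

    stream.foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        import spark.implicits._
        // Assumes each Kinesis record is a JSON document; the nested `order`
        // object comes from whatever structure the producer writes.
        val df = spark.read.json(rdd.map(bytes => new String(bytes, "UTF-8")).toDS())
        df.write
          .format("org.apache.hudi")
          .option("hoodie.datasource.write.recordkey.field", "id")
          .option("hoodie.datasource.write.precombine.field", "eventtimestamp")
          .option("hoodie.table.name", "default")
          .mode(SaveMode.Append)
          .save("s3://xxx.xxx/default") // placeholder path
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}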

AWS glue

In order for the AWS Glue crawler to identify the ‘default’ directory where Hudi has placed the data, I have had to add exclusions to the crawler (a sketch of the equivalent CLI call follows the list). They are:

  • **/.hoodie_partition_metadata
  • **/default_$folder$
  • **/.hoodie_$folder$
  • **/.hoodie/hoodie.properties
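For reference, exclusion patterns like these are supplied on the crawler’s S3 target when it is created; a sketch using the AWS CLI (crawler name, role, database, and bucket path are placeholders):

aws glue create-crawler --name hudi-default-crawler --role AWSGlueServiceRole \
  --database-name schema \
  --targets '{"S3Targets":[{"Path":"s3://xxx.xxx/default/","Exclusions":[
    "**/.hoodie_partition_metadata",
    "**/default_$folder$",
    "**/.hoodie_$folder$",
    "**/.hoodie/hoodie.properties"]}]}'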

Expected behavior

Nested row object to be output in the query result.

Environment Description

  • Hudi version : hudi-spark-bundle:0.5.0-incubating (with org.apache.spark:spark-avro_2.11:2.4.4)

  • Spark version : 2.4.4

  • Hive version : 2.3.6

  • Pig version : 0.17.0

  • Presto version : 0.227

  • Hadoop version : Amazon 2.8.5

  • Storage (HDFS/S3/GCS…) : S3

  • Running on Docker? (yes/no) : no

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 20 (12 by maintainers)

Top GitHub Comments

1 reaction
bhasudha commented, Feb 14, 2020

@adamjoneill apologies for the delayed response. I haven’t gotten a chance to look at this thread. Let me also try to reproduce this and get back soon.

1 reaction
vinothchandar commented, Feb 14, 2020

Thanks @adamjoneill, let me try to reproduce as well and see what’s going on tonight.

Read more comments on GitHub >
