
Saner approaches to getting metadata for Relations

See original GitHub issue

Up to now, the dbt-spark plugin has leveraged a handful of metadata commands:

-- find all current databases
show databases

-- find all current tables in a database
show tables in my_db

-- determine if a relational object is a view or table
show tblproperties my_db.my_relation ('view.default.database')

The main issue with running one statement per relation is that it’s very slow. This is justifiable for the get_catalog method of dbt docs generate, but not so at the start of a dbt run. Most databases have more powerful, accessible troves of metadata, often stored in an information_schema. Spark offers nothing so convenient; describe database returns only information about the database; describe table [extended] must be run for every relation.
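
To make the cost concrete, here is a rough sketch of the current pattern, assuming a DB-API-style cursor (the function name, the cursor, and the "does not have property" check are all illustrative assumptions, not the adapter's actual internals): one query to list relations, then one more query per relation just to tell views apart from tables.

# Sketch only: `cursor` stands in for any DB-API-style connection to Spark
# (e.g. a Thrift/JDBC client); the dbt-spark adapter's real code differs.
def list_relations_per_statement(cursor, database):
    # One round trip to enumerate relations...
    cursor.execute(f"show tables in {database}")
    tables = [row[1] for row in cursor.fetchall()]  # (database, tableName, isTemporary)

    relation_types = {}
    for table in tables:
        # ...then one extra round trip per relation just to classify it.
        cursor.execute(
            f"show tblproperties {database}.{table} ('view.default.database')"
        )
        value = cursor.fetchall()[0][-1]
        # Heuristic: Spark answers with a "does not have property" message when
        # the key is absent, which is the case for plain tables.
        relation_types[table] = "table" if "does not have property" in value else "view"
    return relation_types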

@drewbanin ended up finding a semi-documented statement in the Spark source code that does most of what we want:

show table extended in my_db like '*'

It returns the same three columns as show tables in my_db, for all relations in my_db, with a bonus column information that packs a lot of good stuff:

Database: my_db
Table: view_model
Owner: root
Created Time: Wed Jan 29 01:58:46 UTC 2020
Last Access: Thu Jan 01 00:00:00 UTC 1970
Created By: Spark 2.4.4
Type: VIEW
View Text: select * from my_db.seed
View Default Database: default
View Query Output Columns: [id, first_name, last_name, email, gender, ip_address]
Table Properties: [transient_lastDdlTime=1580263126, view.query.out.col.3=email, view.query.out.col.0=id, view.query.out.numCols=6, view.query.out.col.4=gender, view.default.database=default, view.query.out.col.1=first_name, view.query.out.col.5=ip_address, view.query.out.col.2=last_name]
Serde Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.SequenceFileInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
Storage Properties: [serialization.format=1]
Schema: root
 |-- id: long (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- email: string (nullable = true)
 |-- gender: string (nullable = true)
 |-- ip_address: string (nullable = true)
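
Since the information column is just newline-delimited Key: Value pairs (plus the trailing Schema tree), a minimal parser is straightforward. This is only a sketch (parse_information is not an existing helper); the exact set of keys varies by Spark version and table format, and multi-line view text would need extra care.

def parse_information(information: str) -> dict:
    # Sketch: turn the `information` blob shown above into a dict.
    parsed = {}
    for line in information.splitlines():
        if line.lstrip().startswith("|--"):
            # Schema tree lines ("|-- id: long (nullable = true)") describe
            # columns, not table-level properties; collect them separately.
            parsed.setdefault("columns", []).append(line.strip().lstrip("|- "))
        elif ": " in line:
            key, _, value = line.partition(": ")
            parsed[key.strip()] = value.strip()
    return parsed

# e.g. classifying the sample relation above:
# parse_information(information)["Type"] == "VIEW"  ->  True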

The same command also exists in Hive. This is a “big if true” find that could immediately clean up our workarounds for relation types. We’ll want to check that it’s supported by all vendors/implementations of Spark before committing to this approach, so we’ll benefit from folks testing in other environments.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 5
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

2 reactions
jtcohen6 commented, Jan 29, 2020

Thanks for the pointer to that, @aaronsteers!

I've updated my original issue comment to note that the errors I was hitting with variants of describe were caused by an issue on my end (a faulty/outdated JDBC driver). That said, show table extended in my_db like '*' is still the closest thing we have to an information schema that we can access all at once.

If it works across the board, I think it offers a more performant approach to the get_catalog updates in #39 and #41, versus running describe table extended for every relation in the project. The difficulty there is in parsing the information column, which is a big string delimited by \n, rather than additional rows per property as in describe table extended.
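
For illustration, a get_catalog built on this statement would run one query per schema rather than one per relation, then split each information blob by hand. This is only a sketch, again assuming a DB-API-style cursor and the four-column row shape described above; it is not the implementation in #39/#41.

def catalog_for_schema(cursor, database):
    # One statement yields (database, tableName, isTemporary, information)
    # for every relation in the schema.
    cursor.execute(f"show table extended in {database} like '*'")
    catalog = {}
    for _db, table, _is_temp, information in cursor.fetchall():
        # Unlike `describe table extended`, which returns one row per property,
        # everything arrives in a single \n-delimited string.
        properties = {}
        for line in information.splitlines():
            if ": " in line and not line.lstrip().startswith("|--"):
                key, _, value = line.partition(": ")
                properties[key.strip()] = value.strip()
        catalog[table] = properties
    return catalog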

2 reactions
aaronsteers commented, Jan 29, 2020

Great find! This looks promising, and I imagine it could yield significant performance benefits. And thank you for the link to the source code. I traced the blame on this line, and it looks like the latest commit was >3 years ago; even the prior version of that line (shown here) appears to support the same syntax.

With that said, my guess is that support for this correlates with Spark version number more so than with vendor, and it appears this has been in since at least Spark 2.2, and likely longer. (Someone else jump in if you have additional/different info.)

For my part, I think this is a relatively safe bet and likely worth the performance boost. That said, given the noted lack of documentation, some type of safe failover or feature flag might be advisable, along the lines of the sketch below.
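
A minimal version, again assuming a DB-API-style cursor, with a hypothetical use_bulk_metadata flag (list_relations_per_statement is the per-relation sketch from the issue description above):

def get_relation_types(cursor, database, use_bulk_metadata=True):
    # Hypothetical feature flag plus failover: prefer the single bulk
    # statement, but fall back to the per-relation commands on engines that
    # don't support `show table extended ... like '*'`.
    if use_bulk_metadata:
        try:
            cursor.execute(f"show table extended in {database} like '*'")
            return {
                row[1]: ("view" if "Type: VIEW" in row[3] else "table")
                for row in cursor.fetchall()
            }
        except Exception:
            # Unsupported syntax on this engine: fall through to the slower,
            # better-documented path.
            pass
    return list_relations_per_statement(cursor, database)  # per-relation sketch above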

Read more comments on GitHub
