Saner approaches to getting metadata for Relations
Up to now, the dbt-spark plugin has leveraged a handful of metadata commands:
```sql
-- find all current databases
show databases
-- find all current tables in a database
show tables in my_db
-- determine if a relational object is a view or table
show tblproperties my_db.my_relation ('view.default.database')
```
The main issue with running one statement per relation is that it's very slow. This is justifiable for the `get_catalog` method of `dbt docs generate`, but not at the start of a `dbt run`. Most databases have more powerful, accessible troves of metadata, often stored in an `information_schema`. Spark offers nothing so convenient: `describe database` returns only information about the database, and `describe table [extended]` must be run for every relation.
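For illustration, here is roughly what that one-statement-per-relation pattern looks like from Python. This is a sketch, not dbt-spark's actual code: it assumes a DB-API cursor (e.g. from PyHive) and Spark's three-column `show tables` output.

```python
# Sketch of the per-relation pattern described above. `relation_types`
# is a hypothetical helper, not part of dbt-spark.

def relation_types(cursor, database: str) -> dict:
    cursor.execute(f"show tables in {database}")
    types = {}
    # Spark's `show tables` returns (database, tableName, isTemporary)
    for _db, table, _is_temporary in cursor.fetchall():
        # One extra round trip per relation, just to tell views from tables:
        # views carry the `view.default.database` table property, while for
        # tables Spark returns a "does not have property" message instead.
        cursor.execute(
            f"show tblproperties {database}.{table} ('view.default.database')"
        )
        value = cursor.fetchone()[0]
        types[table] = "table" if "does not have property" in value else "view"
    return types
```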
@drewbanin ended up finding a semi-documented statement in the Spark source code that does most of what we want:
```sql
show table extended in my_db like '*'
```
It returns the same three columns as `show tables in my_db`, for all relations in `my_db`, with a bonus column `information` that packs a lot of good stuff:
```
Database: my_db
Table: view_model
Owner: root
Created Time: Wed Jan 29 01:58:46 UTC 2020
Last Access: Thu Jan 01 00:00:00 UTC 1970
Created By: Spark 2.4.4
Type: VIEW
View Text: select * from my_db.seed
View Default Database: default
View Query Output Columns: [id, first_name, last_name, email, gender, ip_address]
Table Properties: [transient_lastDdlTime=1580263126, view.query.out.col.3=email, view.query.out.col.0=id, view.query.out.numCols=6, view.query.out.col.4=gender, view.default.database=default, view.query.out.col.1=first_name, view.query.out.col.5=ip_address, view.query.out.col.2=last_name]
Serde Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.SequenceFileInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
Storage Properties: [serialization.format=1]
Schema: root
 |-- id: long (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- email: string (nullable = true)
 |-- gender: string (nullable = true)
 |-- ip_address: string (nullable = true)
```
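Since `information` is one newline-delimited string, making use of it means parsing it ourselves. A minimal sketch (not dbt-spark's actual implementation) of pulling out the top-level `Key: value` pairs from output like the above:

```python
# Parse the `information` blob into a flat dict. Top-level fields are
# `Key: value` pairs, one per line; the nested schema lines (`|-- col: ...`)
# are skipped here for simplicity.

def parse_information(information: str) -> dict:
    metadata = {}
    for raw_line in information.splitlines():
        line = raw_line.strip()
        if line.startswith("|--") or ": " not in line:
            continue
        key, value = line.split(": ", 1)
        metadata[key] = value
    return metadata

# With the output above, parse_information(info)["Type"] == "VIEW" answers
# the view-vs-table question without a per-relation statement.
```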
The same command also exists in Hive. This is a “big if true” find that could immediately clean up our workarounds for relation types. We’ll want to check that it’s supported by all vendors/implementations of Spark before committing to this approach, so we’ll benefit from folks testing in other environments.
Thanks for the pointer to that, @aaronsteers!
I have updated my original issue comment to reflect the issue on my end (a faulty/outdated JDBC driver) that was causing me to encounter errors with variants of `describe`. That said, `show table extended in my_db like '*'` is still the closest thing we have to an information schema that we can access all at once. If it works across the board, I think it offers a more performant approach to the `get_catalog` updates in #39 and #41, versus running `describe table extended` for every relation in the project. The difficulty there is in parsing the `information` column, which is one big string delimited by `\n`, rather than additional rows per property as in `describe table extended`.

Great find! This looks promising, and I imagine it could create significant performance benefits. And thank you for the link to the source code. I traced the blame on this line, and it looks like the latest commit was >3 years ago, with even the prior version of that line shown here appearing to support the same syntax.
With that said, my guess is that support for this would likely correlate with Spark version number more so than with vendor, and it appears this syntax has been in place since at least Spark 2.2, likely longer. (Someone else jump in if you have additional/different info.)

For my part, I think this is a relatively safe bet and likely worth the performance boost. Given the noted lack of documentation, though, some type of safe failover or feature flag might be advisable.
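To make the failover idea concrete, here is a rough sketch, again assuming a DB-API cursor (e.g. from PyHive); the function name and fallback shape are hypothetical, not a proposed dbt-spark API:

```python
# Try the batched, semi-documented statement first; fall back to the
# documented `show tables` if this Spark build rejects it.

def list_relations(cursor, database: str):
    try:
        # Batched: one statement covering every relation in the database,
        # including the `information` column.
        cursor.execute(f"show table extended in {database} like '*'")
    except Exception:
        # Fallback: supported everywhere, but without `information`, so
        # callers would still need per-relation metadata statements.
        cursor.execute(f"show tables in {database}")
    return cursor.fetchall()
```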