[CT-96] Allow unique_key for incremental materializations to take a list
Describe the feature
Right now, when creating a model in dbt with the materialization set to incremental, you can pass a single column to `unique_key`, which acts as the key for merging.
Ideally you could pass multiple columns, since there are many cases where a table's primary key is defined by more than one column.
The simplest solution would be to change `unique_key` to accept a list (in addition to a string, for backwards compatibility) and build the merge predicates from the list rather than from a single column; a sketch of this usage follows below.
This might not be ideal, as the parameter name `unique_key` implies a single key. Alternatives would be adding a new optional parameter, `unique_key_list` or `unique_keys`, that always takes a list, and eventually deprecating the `unique_key` parameter.
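As an illustrative sketch only (the model and column names here are placeholders, not from the original issue), the proposed usage might look like:

```sql
-- models/order_lines.sql
-- Hypothetical: unique_key accepts a list of columns that together
-- form the merge key
{{
    config(
        materialized='incremental',
        unique_key=['order_id', 'order_line_id']
    )
}}

select *
from {{ ref('stg_order_lines') }}

{% if is_incremental() %}
  -- on incremental runs, only process new or updated rows
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```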
Describe alternatives you’ve considered
Not necessarily other alternatives, but another thing to consider is the use of `unique_key` throughout dbt. It would stand to reason that whatever change is made here should apply to all other usages of `unique_key`. This could be done in one large roll-out or in stages, e.g. merges first, then upserts, then snapshots.
Additional context
This feature should work across all databases.
Who will this benefit?
Hopefully most dbt users. Currently the only workaround is using `dbt_utils.surrogate_key`, which a) doesn't work for BigQuery and b) should ideally be an out-of-the-box dbt feature.
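For context, that workaround looks roughly like the following (a sketch assuming the dbt_utils package is installed; model and column names are placeholders):

```sql
-- models/order_lines.sql
-- Workaround: collapse the composite key into one generated column
-- and pass that single column as unique_key
{{
    config(
        materialized='incremental',
        unique_key='order_line_sk'
    )
}}

select
    {{ dbt_utils.surrogate_key(['order_id', 'order_line_id']) }} as order_line_sk,
    o.*
from {{ ref('stg_order_lines') }} as o
```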
Issue Analytics
- Created 3 years ago
- Reactions: 11
- Comments: 17 (15 by maintainers)

Seeing the discussion and the proposed solution, could we rename this issue to "Allow unique_key to take a list", to make it more generic?
And I am OK with using `unique_key` for both `str` and `List[str]`, even though `unique_keys` is more accurate for the latter. This happens in `pandas` sometimes too, e.g. `drop` and `pivot` have a `columns` parameter that can be both `str` and `List[str]`. In my opinion this is better than adding a new parameter.

I see what you mean. When I think of creating a surrogate key for an incremental model, I'm thinking of creating a column within that model, to be stored in the resulting table and passed as the `unique_key` for subsequent incremental runs:
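Presumably the example was along these lines (a reconstruction with placeholder names, not the original snippet):

```sql
-- models/my_model.sql
-- Build the surrogate key as a real column stored in the table,
-- then point unique_key at that column
{{
    config(
        materialized='incremental',
        unique_key='surrogate_key'
    )
}}

select
    {{ dbt_utils.surrogate_key(['col_a', 'col_b']) }} as surrogate_key,
    col_a,
    col_b,
    updated_at
from {{ ref('upstream_model') }}
```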
You're right that, as a result of the way the merge macros are implemented on BigQuery, you cannot create the surrogate key directly within the config, like so:
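The non-working pattern was presumably along these lines (again a reconstruction; as I understand it, the rendered expression is spliced into the merge's ON clause, where it cannot be resolved as a column of the target table on BigQuery):

```sql
-- models/my_model.sql
-- Does NOT work on BigQuery: the key is computed inside config()
-- instead of existing as a column in the target table
{{
    config(
        materialized='incremental',
        unique_key=dbt_utils.surrogate_key(['col_a', 'col_b'])
    )
}}

select col_a, col_b, updated_at
from {{ ref('upstream_model') }}
```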

I've now heard this change requested from several folks, including (if I recall correctly) some Snowflake users who have found that merging on cluster keys improves performance somewhat. So I'm not opposed to passing an array of column names. I'm worried that `unique_keys` is ambiguous; following the lead of the dbt-utils test, I'm thinking along the lines of `unique_combination_of_columns`. @drewbanin What do you think? Is that too much config-arg creep?