Can block lists be defined outside of the resource block in which they are used?

I’m familiar with using bigquery_table which uses schema attribute to define the columns in the table as a JSON document. This is really handy because it means the schema can be stored in a separate JSON file and be referred to using file() or templatefile().
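For context, a minimal sketch of the BigQuery pattern I mean (the resource, dataset, and file names here are made up):

```terraform
resource "google_bigquery_table" "example" {
  dataset_id = "example_dataset"
  table_id   = "example_table"

  # The whole column definition lives in a separate JSON file,
  # so this resource block stays short.
  schema = file("${path.module}/schemas/example_table.json")
}
```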

I’ve recently joined a company that uses Snowflake rather than BigQuery and in this case columns in snowflake_table are defined as a block list like so:

  column {
    name     = "data"
    type     = "text"
    nullable = false
  }

  column {
    name = "DATE"
    type = "TIMESTAMP_NTZ(9)"
  }
The Snowflake approach doesn’t allow me to use file()/templatefile() like I can with BigQuery, which means the Terraform files become thousands of lines long - disconcerting, and bad for productivity.

Is there a mechanism to define block lists elsewhere and refer to them in a resource block? This would allow me to define the table schema elsewhere, in dedicated files, like I do with BigQuery. I’m pretty sure the answer to my question is “no” but I thought I’d ask anyway.

Can anyone think of a solution to this problem?

By using dynamic blocks - Dynamic Blocks - Configuration Language | Terraform | HashiCorp Developer - you can supply (e.g.) a list of maps - which you can generate using any Terraform expressions, including loading JSON - and have them used to generate nested blocks.

The slight downside is that you have to write out the logic for each field you’re mapping from your input data to a block attribute - and since snowflake_table’s column block has further nested blocks inside it, you might need to write nested dynamic blocks too, to conditionally define those nested default and identity blocks if you need them.

It’ll be a bit messy, but you can write it once and copy-paste or make it a re-usable module thereafter.


Thank you @maxb . That sounds like a good option, especially loading maps from JSON. I suspect there’s a trade-off here of complexity versus conciseness; I’ll need to assess whether it’s worth it or not (we don’t have many experienced Terraform devs here so I don’t want to make it too complicated - and I do think dynamic blocks veer toward high complexity). Thanks again.

Just another thought on this… it would be nice if there were a one-click/one-button way to collapse all column blocks within a file. I don’t know of a way of doing that in my editor (VS Code), though.


I guess they do seem a bit complex when you first see them, but the concept is pretty simple once you’ve become familiar with it.

Here, I made a more thoroughly worked example of what I’m suggesting (though, please note, as I don’t have a snowflake account, it is tested only as far as terraform validate):

resource "snowflake_table" "something" {
  database = "something"
  schema   = "something"
  name     = "something"

  dynamic "column" {
    // Use any Terraform expression you like to set this,
    // including jsondecode(file("..."))
    for_each = [
      {
        name     = "something"
        type     = "something"
        nullable = true
        default = {
          constant = "something"
        }
      },
    ]

    content {
      // Required attributes
      name = column.value.name
      type = column.value.type

      // Optional attributes
      comment        = try(column.value.comment, null)
      masking_policy = try(column.value.masking_policy, null)
      nullable       = try(column.value.nullable, null)

      // Optional singleton nested blocks
      dynamic "default" {
        for_each = try([column.value.default], [])
        content {
          constant   = try(default.value.constant, null)
          expression = try(default.value.expression, null)
          sequence   = try(default.value.sequence, null)
        }
      }

      dynamic "identity" {
        for_each = try([column.value.identity], [])
        content {
          start_num = try(identity.value.start_num, null)
          step_num  = try(identity.value.step_num, null)
        }
      }
    }
  }
}
That’s great, thanks @maxb