One template file, create multiple files with a file provisioner

Hi,
I have a template file which I would like to use to create several files with a file provisioner, but it’s not working.

 variable "list" {
   type = list(string)
   default = ["one", "two", "three"]
 }
 data "template_file" "templates" {
   count = length(var.list)
 
   template = file("${path.module}/template.tpl")
 
   vars = {
     sring = element(var.list, count.index)
   }
 }
 {
 ...
   provisioner "file" {
     count       = length(var.list)
     content     = data.template_file.templates[count.index].rendered
     destination = "/tmp/${element(var.list, count.index)}.conf"
   }
 ...
 }

I’ve also tried to use count outside of the provisioner configuration, but that causes another problem: I sometimes have to replace the module with -replace=“module…” at terraform plan, and that would no longer work because the module would then appear multiple times in the state file.

Maybe one of you can help me here.

Thank you for your answers in advance.

Kind regards
Thorsten

Hi @cRaZy-T,

You didn’t mention which platform you are creating virtual machines on, but if you are using one that supports some idea of “user data” and you are using a machine image which includes cloud-init then I’d encourage first trying what’s described in Passing data into virtual machines and other compute resources under Provisioners are a Last Resort.

For example, if you were using AWS EC2 then you could include in user_data a cloud-init configuration to create the three files without any need for Terraform to connect to the system via SSH, like this:

resource "aws_instance" "example" {
  # ...

  user_data = <<-EOT
    #cloud-config
    ${yamlencode({
      write_files = [
        for sring in var.list : {
          path     = "/tmp/${sring}.conf"
          encoding = "b64"
          content  = base64encode(templatefile("${path.module}/template.tpl", {
            sring = sring
          }))
        }      
      ]
    })}
  EOT
}

This is intended to dynamically generate something like the cloud-config example Writing out arbitrary files, which instructs cloud-init to create some files on disk during the early boot process. The content of those files will be base64-encoded and included as part of the “cloud-config” YAML, which means that the resulting YAML document must be within the size limits of user data on your chosen platform. For Amazon EC2, that limit is 16KiB. Other platforms typically have limits of a similar order of magnitude.
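If you want Terraform to catch an oversized document before the platform rejects it, one option is to render the cloud-config into a local value and assert its size with a custom condition check. This is only a sketch, assuming Terraform v1.2 or later (which introduced precondition blocks) and the same hypothetical var.list and template.tpl as above; note that length() counts characters, so it is only an approximation of the byte size:

```hcl
locals {
  # The same cloud-config document as in the example above,
  # factored out so it can be checked before use.
  cloud_config = <<-EOT
    #cloud-config
    ${yamlencode({
      write_files = [
        for sring in var.list : {
          path     = "/tmp/${sring}.conf"
          encoding = "b64"
          content  = base64encode(templatefile("${path.module}/template.tpl", {
            sring = sring
          }))
        }
      ]
    })}
  EOT
}

resource "aws_instance" "example" {
  # ...

  user_data = local.cloud_config

  lifecycle {
    precondition {
      # EC2 rejects user data larger than 16KiB (16384 bytes).
      condition     = length(local.cloud_config) <= 16384
      error_message = "Rendered cloud-config exceeds the 16KiB EC2 user data limit."
    }
  }
}
```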

The Terraform documentation I linked above describes some equivalent features for various other cloud platforms, so hopefully you can translate this example to whichever platform you are using.


If you find that cloud-init can’t work for your situation and you’re convinced that provisioners are the only viable alternative, the only way to dynamically declare multiple provisioner blocks is to declare a dynamic number of resource instances. It sounds like you already tried putting count in the resource block of the object you want to provision, and that didn’t work for you because you only want one VM with multiple files, not one VM per file.

If you need a dynamic number of files on a single VM then I think the main way to achieve that would be to separate the provisioner into a separate resource which does nothing except run the provisioner. The null_resource resource type belonging to the hashicorp/null provider is a typical choice for this because it has no actual create-time or destroy-time behavior of its own, and so it can serve as a container that only runs provisioners.

Again I’m going to use aws_instance for the sake of example because you didn’t mention which platform you are using. Hopefully you can translate this example to your chosen platform’s equivalent virtual machine resource type.

terraform {
  required_providers {
    aws  = { source = "hashicorp/aws" }
    null = { source = "hashicorp/null" }
  }
}

resource "aws_instance" "example" {
  # ...
}

resource "null_resource" "each_file" {
  for_each = toset(var.list)

  triggers = {
    ip_address = aws_instance.example.private_ip
    content = templatefile("${path.module}/template.tpl", {
      sring = each.key
    })
  }

  connection {
    type = "ssh"
    host = self.triggers.ip_address
    # ...
  }

  provisioner "file" {
    content     = self.triggers.content
    destination = "/tmp/${each.key}.conf"
  }
}

# OPTIONAL: If you need to take some other provisioning
# action after all of the files are uploaded, include this
# additional resource which depends on all of the
# file-uploading resource instances.
resource "null_resource" "after_files" {

  triggers = {
    ip_address = aws_instance.example.private_ip

    # This will make sure the provisioners in this
    # resource re-run if any of the file provisioners
    # are re-run, or if you change the number of
    # files.
    file_provisions = sha1(jsonencode({
      for k, o in null_resource.each_file : k => o.id
    }))
  }

  connection {
    type = "ssh"
    host = self.triggers.ip_address
    # ...
  }

  provisioner "remote-exec" {
    # (whatever you need to do when the files are all uploaded.)
    # ...
  }
}

This is significantly more complex than the cloud-init based approach, so I would not recommend it except as a last resort.
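In case your platform turns out to be Azure: the cloud-init example above would translate to roughly the following sketch, assuming azurerm_linux_virtual_machine. One difference from EC2 is that Azure expects the custom_data argument to be base64-encoded as a whole:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  # ...

  # Azure hands custom_data to cloud-init on supported images;
  # unlike EC2 user_data, the entire document must be base64-encoded.
  custom_data = base64encode(<<-EOT
    #cloud-config
    ${yamlencode({
      write_files = [
        for sring in var.list : {
          path     = "/tmp/${sring}.conf"
          encoding = "b64"
          content  = base64encode(templatefile("${path.module}/template.tpl", {
            sring = sring
          }))
        }
      ]
    })}
  EOT
  )
}
```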

Hi apparentlymart,

Thank you for the feedback.
The VM is created on Azure, and a bunch of file provisioners are required to set everything up. That works as intended for us. Only one point bothers me: some of the files are generated from the same template, and I would have liked to create those dynamically using the list(string) variable.

Kind regards
Thorsten