Hi @cRaZy-T,
You didn’t mention which platform you are creating virtual machines on, but if you are using one that supports some notion of “user data”, and your machine image includes cloud-init, then I’d encourage first trying what’s described in “Passing data into virtual machines and other compute resources” under “Provisioners are a Last Resort”.
For example, if you were using AWS EC2 then you could include in user_data a cloud-init configuration to create the three files without any need for Terraform to connect to the system via SSH, like this:
resource "aws_instance" "example" {
# ...
user_data = <<-EOT
#cloud-config
${yamlencode({
write_files = [
for sring in var.list : {
path = "/tmp/${sring}.conf"
encoding = "b64"
content = base64encode(templatefile("${path.module}/template.tpl", {
sring = sring
}))
}
]
})}
EOT
}
This is intended to dynamically generate something like the cloud-config example “Writing out arbitrary files”, which instructs cloud-init to create some files on disk during the early boot process. The content of those files will be base64-encoded and included as part of the cloud-config YAML, which means that the resulting YAML document must fit within the size limit for user data on your chosen platform. For Amazon EC2, that limit is 16 KiB. Other platforms typically have limits of a similar order of magnitude.
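If you want Terraform itself to catch an oversized document at plan time, rather than letting the platform reject it at apply time, one option (requires Terraform v1.2 or later) is to factor the rendered document into a named local value and guard the resource with a precondition. This is only a sketch, assuming the EC2 limit; adjust the threshold for your platform:

locals {
  # The same <<-EOT template shown in the example above,
  # factored out so that a precondition can check its size.
  user_data = "..."
}

resource "aws_instance" "example" {
  # ...
  user_data = local.user_data

  lifecycle {
    precondition {
      # EC2 rejects user data larger than 16 KiB. Note that
      # length() counts characters, which equals bytes only
      # for ASCII content.
      condition     = length(local.user_data) <= 16384
      error_message = "Rendered cloud-config exceeds the 16 KiB EC2 user data limit."
    }
  }
}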
The Terraform documentation I linked above describes some equivalent features for various other cloud platforms, so hopefully you can translate this example to whichever platform you are using.
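For completeness, the example above assumes supporting declarations along these lines. The template body here is only a placeholder, and I’ve used a template variable called name for illustration; substitute whatever variable your template.tpl actually declares:

variable "list" {
  # One element per file to create on the VM.
  type = list(string)
}

# template.tpl might contain something like this, using the
# "name" template variable passed in via templatefile:
#
#   setting_name = ${name}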
If you find that cloud-init can’t work for your situation and you’re convinced that provisioners are the only viable alternative, then the only way to dynamically declare multiple provisioner blocks is to declare a dynamic number of resource instances. It sounds like you already tried putting count in the resource block of the object you want to provision, and that didn’t work for you because you only want one VM with multiple files, not one VM per file.
If you need a dynamic number of files on a single VM then I think the main way to achieve that would be to factor the provisioner out into a separate resource which does nothing except run the provisioner. The null_resource resource type belonging to the hashicorp/null provider is a typical choice for this because it has no actual create-time or destroy-time behavior of its own, and so it can serve as a container that only runs provisioners.
Again I’m going to use aws_instance for the sake of example because you didn’t mention which platform you are using. Hopefully you can translate this example to your chosen platform’s equivalent virtual machine resource type.
terraform {
  required_providers {
    aws  = { source = "hashicorp/aws" }
    null = { source = "hashicorp/null" }
  }
}

resource "aws_instance" "example" {
  # ...
}

resource "null_resource" "each_file" {
  # One instance of this resource per file to upload.
  for_each = toset(var.list)

  triggers = {
    # Re-run the provisioner if the VM is replaced or if
    # the rendered file content changes.
    ip_address = aws_instance.example.private_ip
    content = templatefile("${path.module}/template.tpl", {
      name = each.key
    })
  }

  connection {
    type = "ssh"
    host = self.triggers.ip_address
    # ...
  }

  provisioner "file" {
    content     = self.triggers.content
    destination = "/tmp/${each.key}.conf"
  }
}
# OPTIONAL: If you need to take some other provisioning
# action after all of the files are uploaded, include this
# additional resource which depends on all of the
# file-uploading resource instances.
resource "null_resource" "after_files" {
  triggers = {
    ip_address = aws_instance.example.private_ip

    # This will make sure the provisioners in this
    # resource re-run if any of the file provisioners
    # are re-run, or if you change the number of
    # files.
    file_provisions = sha1(jsonencode({
      for k, o in null_resource.each_file : k => o.id
    }))
  }

  connection {
    type = "ssh"
    host = self.triggers.ip_address
    # ...
  }

  provisioner "remote-exec" {
    # (whatever you need to do when the files are all uploaded.)
    # ...
  }
}
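One usage note on this approach: because provisioners run only when their resource instance is created, you can force a re-upload of a single file with the -replace planning option, naming the instance by its for_each key (the key "example" here is hypothetical):

terraform apply -replace='null_resource.each_file["example"]'

Changing the rendered template content also forces the corresponding instance to be replaced automatically, because that content is part of triggers.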
This is significantly more complex than the cloud-init-based approach, so I would not recommend it except as a last resort.