Dynamic block within a child module

We have been successfully using a module to create virtual machines in vSphere from the contents of var.virtual_machines, a map(object) in which each entry describes one virtual machine with its own specs.

Recently we needed to make our library more maintainable by applying the DRY principle.
The original code was being copied wholesale each time a new application was deployed, instead of just varying the tfvars.

So, we moved the VM build into a child module. Everything looked good when running a plan - the plan detail exactly matched the desired outcome. However, when we run apply, we get an error:
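For context, the parent-level call wrapping the child module looks roughly like this. The source path and the single map input are assumptions inferred from the error output below, not copied from our actual code:

```hcl
# Hypothetical parent-level call to the child module; the source path
# and input name are inferred from the error message, not exact code.
module "windows_server" {
  source = "../modules/build_vm"

  # Pass the whole map through; the child module runs for_each over it.
  virtual_machines = var.virtual_machines
}
```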

│ Error: disk.0: ServerFaultCode: The object 'vim.Datastore:<computed>' has already been deleted or has not been completely created
│   with module.windows_server.vsphere_virtual_machine.vm["us6ltrc060x"],
│   on ..\modules\build_vm\main.tf line 58, in resource "vsphere_virtual_machine" "vm":
│   58: resource "vsphere_virtual_machine" "vm" {

In doing some research we found posts saying that dynamic blocks do not work in child modules. Oddly enough, the error above occurs on the "disk" attribute of the vsphere_virtual_machine resource, which is a dynamic block:

resource "vsphere_virtual_machine" "vm" {
  for_each = var.virtual_machines

  name                       = each.key
  resource_pool_id           = data.vsphere_compute_cluster.cluster[each.key].resource_pool_id
  datastore_cluster_id       = data.vsphere_datastore_cluster.datastore_cluster[each.key].id
  num_cpus                   = each.value.vm_cpu
  memory                     = each.value.vm_ram
  guest_id                   = data.vsphere_virtual_machine.template[each.key].guest_id
  scsi_type                  = data.vsphere_virtual_machine.template[each.key].scsi_type
  firmware                   = each.value.vm_firmware
  wait_for_guest_net_timeout = -1

  network_interface {
    network_id   = data.vsphere_network.network[each.key].id
    adapter_type = each.value.vm_adapter
  }

  dynamic "disk" {
    for_each = { for idx, size in each.value.vm_disks : idx => size }
    content {
      label            = "disk${disk.key}"
      unit_number      = disk.key
      size             = disk.value
      thin_provisioned = false
      eagerly_scrub    = false
    }
  }

  cdrom {
    client_device = true
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template[each.key].id
  }
}
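To make the dynamic block's behavior concrete: the for expression just reshapes the list of disk sizes into a map keyed by index, so each generated disk block has a stable key. For a hypothetical vm_disks = [100, 60], the expansion is equivalent to writing:

```hcl
# With each.value.vm_disks = [100, 60] (hypothetical sizes),
# { for idx, size in each.value.vm_disks : idx => size }
# evaluates to { "0" = 100, "1" = 60 }, and the dynamic block
# expands as if these two blocks had been written by hand:
disk {
  label            = "disk0"
  unit_number      = 0
  size             = 100
  thin_provisioned = false
  eagerly_scrub    = false
}
disk {
  label            = "disk1"
  unit_number      = 1
  size             = 60
  thin_provisioned = false
  eagerly_scrub    = false
}
```

Nothing in this expansion depends on which module the resource lives in.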

The variable declaration for virtual_machines is:

variable "virtual_machines" {
  type = map(object({
    vm_adapter                = string
    vm_cpu                    = number
    vm_disks                  = list(number)
    vm_dns_list               = list(string)
    vm_dns_search             = list(string)
    vm_domain                 = string
    vm_firmware               = string
    vm_ip_address             = string
    vm_ip_gateway             = string
    vm_ip_netmask             = string
    vm_ram                    = number
    vm_template               = string
    vm_timezone               = string
    vsphere_cluster           = string
    vsphere_datacenter        = string
    vsphere_datastore_cluster = string
    vsphere_network           = string
  }))
}
And the information in terraform.tfvars that loads values into the virtual_machines variable is shown below. You can see that vm_disks corresponds to the dynamic "disk" block of the resource.

virtual_machines = {

  usalabvm060 = {
    vm_adapter                = "vmxnet3"
    vm_cpu                    = 2
    vm_disks                  = [ … ]
    vm_dns_list               = [ … ]
    vm_dns_search             = ["lab.com"]
    vm_domain                 = "lab.com"
    vm_firmware               = "efi"   # use this option for Server 2022
#    vm_firmware               = "bios" # use this option for Server 2019
    vm_ip_address             = ""
    vm_ip_gateway             = ""
    vm_ip_netmask             = "24"
    vm_ram                    = 8192
    vm_template               = "windows2022-packer"
    vm_timezone               = "004" # This is GMT -8
    vsphere_cluster           = "LAB-CLUSTER"
    vsphere_datacenter        = "USA-Lab"
    vsphere_datastore_cluster = "LAB-DS-CLUSTER"
    vsphere_network           = "LabNetwork67"
  }
}
One of the posts claiming that dynamic blocks do not work in child modules suggested the workaround was to use yet another child module (with no detail, of course). This is the part I cannot get my head around:

  1. I fail to see how another module can be used to overcome a problem with a dynamic block.
  2. I don't know when the module is called during execution, so I'm not sure what to pass it.
  3. I assume a module reference would replace the disk attribute value:
     disk = module.disk_list.vm_disks

I realize this is a bit of a shotgun blast.
Just looking for some direction from some of you with more Terraform experience.


Hi @roy.madden,

I’m not sure what posts you might be referring to, but there does seem to be some misunderstanding. A dynamic block operates the same regardless of which module the resource is in, so I think you are looking at the problem from the wrong angle.

The error itself seems very specific to that resource type, and is probably not something caused by Terraform. Guessing by the error text, the provider is attempting to make multiple changes without waiting for the first to complete, or attempting something in the incorrect order, but those more familiar with the provider may have more insight. The error text might also come directly from the API, in which case searching for that may yield more results.

Hi @jbardin ,

Thanks for the reply.
Good to know that dynamic blocks are supposed to operate the same whether in a parent module or a child module.
I’m going to reverse my steps and test the original failure again.


I went back to my original run, refreshed the variable values, and it performed perfectly.
Not sure why we encountered the error at all - it happened multiple times - but we can dig a little deeper for a cause.
Thanks for your input, it saved us from an unnecessary and complicated detour.