vSphere virtual machine replaced

I am having difficulty with the vsphere_virtual_machine resource: an existing VM is replaced when we add a new VM to the input variable map.

Specifically, the map is defined like this:

variable "virtual_machines" {
  type = map(object({
    vm_adapter                = string
    vm_cpu                    = number
    vm_disks                  = list(number)
    vm_dns_list               = list(string)
    vm_dns_search             = list(string)
    vm_domain                 = string
    vm_firmware               = string
    vm_ip_address             = string
    vm_ip_gateway             = string
    vm_ip_netmask             = string
    vm_ram                    = number
    vm_template               = string
    vm_timezone               = string
    vsphere_cluster           = string
    vsphere_datacenter        = string
    vsphere_datastore_cluster = string
    vsphere_network           = string
  }))
}

The map is populated from our variable definitions at run time. We started with a single entry in the map and successfully created the VM. Now we add a second VM to the map:

virtual_machines = {
  us1ladc001x = {
    vm_adapter                = "vmxnet3"
    vm_cpu                    = 2
    vm_disks                  = [100, 100]
    vm_dns_list               = [
      "172.28.99.10",
      "172.28.99.11",
      "172.28.51.10",
    ]
    vm_dns_search             = ["bmrn-lab.com"]
    vm_domain                 = "bmrn-lab.com"
    vm_firmware               = "efi"   # use this option for Server 2022
#    vm_firmware               = "bios" # use this option for Server 2019
    vm_ip_address             = "172.28.99.10"
    vm_ip_gateway             = "172.28.99.1"
    vm_ip_netmask             = "24"
    vm_ram                    = 4096
    vm_template               = "windows2022-packer"
    vm_timezone               = "004" #This is GMT -8
    vsphere_cluster           = "US1LVSP91XA"
    vsphere_datacenter        = "US1-Lab"
    vsphere_datastore_cluster = "us1labds-cluster01"
    vsphere_network           = "tfLabNetwork99-servers"
  }

  us1ladc002x = {
    vm_adapter                = "vmxnet3"
    vm_cpu                    = 2
    vm_disks                  = [100, 100]
    vm_dns_list               = [
      "172.28.99.10",
      "172.28.99.11",
      "172.28.51.10",
    ]
    vm_dns_search             = ["bmrn-lab.com"]
    vm_domain                 = "bmrn-lab.com"
    vm_firmware               = "efi"   # use this option for Server 2022
#    vm_firmware               = "bios" # use this option for Server 2019
    vm_ip_address             = "172.28.99.11"
    vm_ip_gateway             = "172.28.99.1"
    vm_ip_netmask             = "24"
    vm_ram                    = 4096
    vm_template               = "windows2022-packer"
    vm_timezone               = "004" #This is GMT -8
    vsphere_cluster           = "US1LVSP91XA"
    vsphere_datacenter        = "US1-Lab"
    vsphere_datastore_cluster = "us1labds-cluster01"
    vsphere_network           = "tfLabNetwork99-servers"
  }
}

So now we run an apply, and the plan shows that the new VM will be added and the existing VM will be replaced (destroy/create).

In the past we have been careful to identify minor changes and list them in the lifecycle ignore_changes block:

resource "vsphere_virtual_machine" "vm" {
  for_each = var.virtual_machines

  name                       = each.key
  resource_pool_id           = data.vsphere_compute_cluster.cluster[each.key].resource_pool_id
  datastore_cluster_id       = data.vsphere_datastore_cluster.datastore_cluster[each.key].id
  num_cpus                   = each.value.vm_cpu
  memory                     = each.value.vm_ram
  guest_id                   = data.vsphere_virtual_machine.template[each.key].guest_id
  scsi_type                  = data.vsphere_virtual_machine.template[each.key].scsi_type
  firmware                   = each.value.vm_firmware
  wait_for_guest_net_timeout = -1

  network_interface {
    network_id   = data.vsphere_network.network[each.key].id
    adapter_type = each.value.vm_adapter
  }

  dynamic "disk" {
    for_each = { for idx, size in each.value.vm_disks : idx => size }
    content {
      label            = "disk${disk.key}"
      unit_number      = disk.key
      size             = disk.value
      thin_provisioned = false
      eagerly_scrub    = false
    }
  }

  cdrom {
    client_device = true
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template[each.key].id

    customize {
      timeout = 30

      windows_options {
        computer_name         = each.key
#       workgroup             = each.value.vm_workgroup
        admin_password        = var.local_adminpass
        time_zone             = each.value.vm_timezone
        join_domain           = each.value.vm_domain
        domain_admin_user     = var.domain_admin_user
        domain_admin_password = var.domain_admin_password
        auto_logon            = true
        auto_logon_count      = 1
        run_once_command_list = ["powershell.exe -file C:\\updates\\runonce-tfdeploy.ps1"]
      }

      network_interface {
        ipv4_address    = each.value.vm_ip_address
        ipv4_netmask    = each.value.vm_ip_netmask
        dns_server_list = each.value.vm_dns_list
        dns_domain      = each.value.vm_domain
      }

      ipv4_gateway    = each.value.vm_ip_gateway
      dns_suffix_list = each.value.vm_dns_search
    }
  }

  lifecycle {
    ignore_changes = [
      clone[0].customize[0].windows_options[0].domain_admin_user,
      clone[0].customize[0].windows_options[0].domain_admin_password,
      clone[0].customize[0].windows_options[0].admin_password,
      clone[0].template_uuid,
    ]
  }
}

Can anyone see something in my plan that is forcing the VM replacement?
Is there a documented list of the triggers that force replacement? (We may as well see them all up front and decide which we want to exclude.)
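As an aside, we understand that a detailed plan normally annotates the exact attribute with a "# forces replacement" comment, which can be surfaced by grepping a saved plan (terraform show tfplan | grep "forces replacement"). We don't spot that marker in our output below. The excerpt in this sketch is hypothetical, just to show the grep:

```shell
# Surface the attribute(s) a plan blames for a replacement.
# Real usage against a saved plan would be:
#   terraform plan -out=tfplan
#   terraform show tfplan | grep -n "forces replacement"
# Hypothetical one-line excerpt standing in for real plan output:
plan_excerpt='          ~ dns_server_list = [...] # forces replacement'
echo "$plan_excerpt" | grep -n "forces replacement"
```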

  # vsphere_virtual_machine.vm["us1ladc002x"] must be replaced
-/+ resource "vsphere_virtual_machine" "vm" {
      + annotation                              = (known after apply)
      - boot_delay                              = 0 -> null
      - boot_retry_enabled                      = false -> null
      ~ change_version                          = "2023-08-09T22:54:50.675164Z" -> (known after apply)
      - cpu_hot_add_enabled                     = false -> null
      - cpu_hot_remove_enabled                  = false -> null
      - cpu_performance_counters_enabled        = false -> null
      - cpu_reservation                         = 0 -> null
      ~ cpu_share_count                         = 2000 -> (known after apply)
      - custom_attributes                       = {} -> null
      ~ datastore_id                            = "datastore-10087" -> (known after apply)
      ~ default_ip_address                      = "172.28.99.11" -> (known after apply)
      - efi_secure_boot_enabled                 = false -> null
      - enable_disk_uuid                        = false -> null
      - enable_logging                          = false -> null
      - extra_config                            = {} -> null
      ~ guest_ip_addresses                      = [
          - "172.28.99.11",
        ] -> (known after apply)
      ~ hardware_version                        = 19 -> (known after apply)
      ~ host_system_id                          = "host-8200" -> (known after apply)
      ~ id                                      = "423122e4-c6e9-a006-c7f9-cb55a4e69b37" -> (known after apply)
      + imported                                = (known after apply)
      - memory_hot_add_enabled                  = false -> null
      - memory_reservation                      = 0 -> null
      ~ memory_share_count                      = 40960 -> (known after apply)
      ~ moid                                    = "vm-11043" -> (known after apply)
        name                                    = "us1ladc002x"
      - nested_hv_enabled                       = false -> null
      - pci_device_id                           = [] -> null
      ~ power_state                             = "on" -> (known after apply)
      ~ reboot_required                         = false -> (known after apply)
      - run_tools_scripts_before_guest_reboot   = false -> null
      + storage_policy_id                       = (known after apply)
      - sync_time_with_host                     = false -> null
      - sync_time_with_host_periodically        = false -> null
      - tags                                    = [] -> null
      ~ uuid                                    = "423122e4-c6e9-a006-c7f9-cb55a4e69b37" -> (known after apply)
      ~ vapp_transport                          = [] -> (known after apply)
      - vbs_enabled                             = false -> null
      ~ vmware_tools_status                     = "guestToolsRunning" -> (known after apply)
      ~ vmx_path                                = "us1ladc002x/us1ladc002x.vmx" -> (known after apply)
      - vvtd_enabled                            = false -> null
        # (34 unchanged attributes hidden)

      ~ cdrom {
          ~ device_address = "ide:0:0" -> (known after apply)
          ~ key            = 3000 -> (known after apply)
            # (1 unchanged attribute hidden)
        }

      ~ clone {
          - linked_clone    = false -> null
          - ovf_network_map = {} -> null
          - ovf_storage_map = {} -> null
            # (2 unchanged attributes hidden)

          ~ customize {
              - dns_server_list      = [] -> null
              - windows_sysprep_text = (sensitive value) -> null
                # (3 unchanged attributes hidden)

              ~ network_interface {
                  ~ dns_server_list = [
                      + "172.28.99.10",
                      + "172.28.99.11",
                        "172.28.51.10",
                      - "172.28.51.11",
                    ]
                  - ipv6_netmask    = 0 -> null
                    # (3 unchanged attributes hidden)
                }

              ~ windows_options {
                  - product_key           = (sensitive value) -> null
                    # (11 unchanged attributes hidden)
                }
            }
        }

      ~ disk {
          ~ datastore_id      = "datastore-10087" -> "<computed>"
          ~ device_address    = "scsi:0:0" -> (known after apply)
          ~ io_share_count    = 1000 -> 0
          ~ key               = 2000 -> 0
          ~ path              = "us1ladc002x/us1ladc002x.vmdk" -> (known after apply)
          + storage_policy_id = (known after apply)
          ~ uuid              = "6000C292-659b-047f-e314-ec14bbc82f92" -> (known after apply)
            # (14 unchanged attributes hidden)
        }
      ~ disk {
          ~ datastore_id      = "datastore-10087" -> "<computed>"
          ~ device_address    = "scsi:0:1" -> (known after apply)
          ~ io_share_count    = 1000 -> 0
          ~ key               = 2001 -> 0
          ~ path              = "us1ladc002x/us1ladc002x_1.vmdk" -> (known after apply)
          + storage_policy_id = (known after apply)
          ~ uuid              = "6000C297-360a-33e8-63b1-108c0a406d16" -> (known after apply)
            # (14 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ bandwidth_share_count = 50 -> (known after apply)
          ~ device_address        = "pci:0:7" -> (known after apply)
          ~ key                   = 4000 -> (known after apply)
          ~ mac_address           = "00:50:56:b1:4d:76" -> (known after apply)
          - use_static_mac        = false -> null
            # (5 unchanged attributes hidden)
        }
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Thanks in advance for any assistance on this.

Okay, we did some more testing and have an update.
We discovered what is causing the destroy/add, but we're not sure why.

The vm_dns_list value in the virtual_machines map was changed in this scenario due to adjustments in the network infrastructure:

Before:

    vm_dns_list               = [
      "172.28.51.10",
      "172.28.51.11",
    ]

After:

    vm_dns_list               = [
      "172.28.99.10",
      "172.28.99.11",
      "172.28.51.10",
    ]

This change caused the destroy/add behavior.

Next, we experimented and found that the IP addresses could be changed without causing a destroy/create, provided the number of entries in the list did not increase.

In other words, we reverted vm_dns_list to its original two-entry value, then modified one of the addresses and ran an apply. The plan reported the action as “1 to change” - perfect!
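Concretely, the two-entry variant that produced the in-place update looked like this (we've substituted a hypothetical address for the one we actually changed):

```hcl
# Two entries, one address modified - plan reported "1 to change"
vm_dns_list = [
  "172.28.51.10",
  "172.28.51.99", # hypothetical replacement address
]
```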

Introduce a third entry into the list and run an apply: “1 to add, 1 to destroy”.

So, now we know what caused it.
Still don’t understand the “why”.

Any input?
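In the meantime, one workaround we are weighing (a sketch only; we have not yet verified that the provider honors ignoring an entire nested block this way) is to add the customized network interface to our existing ignore_changes list:

```hcl
  lifecycle {
    ignore_changes = [
      clone[0].customize[0].windows_options[0].domain_admin_user,
      clone[0].customize[0].windows_options[0].domain_admin_password,
      clone[0].customize[0].windows_options[0].admin_password,
      clone[0].template_uuid,
      # Unverified: ignore post-clone drift in the customized NIC settings,
      # including dns_server_list, so list-length changes no longer diff.
      clone[0].customize[0].network_interface,
    ]
  }
```

The obvious trade-off is that intentional future changes to those customization settings would also be ignored.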