Error: disk.x: unit_number on disk "diskx" too high (15) (terraform, vmware)

Hi All,

I have two servers hosted on VMware that were managed by the same Terraform code (module) because they had the same hardware specs. Now I'd like to split them into two separate groups, as their hardware specs differ.

I removed one of them from the Terraform state, modified my Terraform code, and then imported it back into Terraform. However, terraform plan returns the following error, even though the server has only 6 disks:

Terraform plan

Plan: 0 to add, 1 to change, 0 to destroy.

│ Error: disk.2: unit_number on disk "disk2" too high (15) - maximum value is 14 with 1 SCSI controller(s)

│ with module.db3_server.vsphere_virtual_machine.db3vm[0],
│ on modules/db3/db3_servers.tf line 1, in resource "vsphere_virtual_machine" "db3vm":
│ 1: resource "vsphere_virtual_machine" "db3vm" {

Any help from you guys to solve this issue would be greatly appreciated.

Below are the details of the hard disks and their SCSI controller placement in VMware:

Hard disks
6 total | 4.68 TB
Hard disk 1 200 GB | SCSI(0:0)
Hard disk 2 500 GB | SCSI(1:0)
Hard disk 3 1024 GB | SCSI(0:1)
Hard disk 4 1024 GB | SCSI(1:1)
Hard disk 5 1024 GB | SCSI(2:0)
Hard disk 6 1024 GB | SCSI(3:0)
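For context on the "15" in the error: as I understand it, the vSphere provider flattens the SCSI(bus:unit) placement into a single unit_number, reserving 15 slots per controller (so the maximum is 14 with one controller, 29 with two, and so on). Under that scheme the placements above would map roughly as follows (illustrative locals block only, not part of my config):

```hcl
# Illustrative only: the provider's flat unit_number is roughly bus * 15 + unit,
# i.e. 15 slots per SCSI controller.
locals {
  disk_unit_numbers = {
    "Hard disk 1 SCSI(0:0)" = 0
    "Hard disk 2 SCSI(1:0)" = 15 # needs a 2nd controller -> the "15" in the error
    "Hard disk 3 SCSI(0:1)" = 1
    "Hard disk 4 SCSI(1:1)" = 16
    "Hard disk 5 SCSI(2:0)" = 30 # needs a 3rd controller
    "Hard disk 6 SCSI(3:0)" = 45 # needs a 4th controller
  }
}
```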

Below is the code for vsphere_virtual_machine:

resource "vsphere_virtual_machine" "db3vm" {

count = length(var.db3_vm_ips) == 0 ? 0 : length(var.db3_vm_ips)
name = var.db3_vm_hostnames[count.index]
resource_pool_id = data.vsphere_resource_pool.rpool.id
num_cpus = var.db3_vm_cpu
num_cores_per_socket = var.db3_vm_cpsocket
memory = var.db3_vm_ram
guest_id = var.vm_guestid
memory_hot_add_enabled = true
cpu_hot_add_enabled = true
cpu_hot_remove_enabled = true
extra_config_reboot_required = false

network_interface {
network_id = data.vsphere_network.db3vmpg.id
adapter_type = var.vm_adapter_type
}

folder = "${var.vm_parent_folder}/${upper(var.env)}/"

dynamic "disk" {
for_each = {
for idx, d in data.vsphere_virtual_machine.db3vmtemplate.disks : idx => d
}

content {
  label            = "disk${disk.key}"
  unit_number      = disk.key 
  size             = disk.value.size
  eagerly_scrub    = disk.value.eagerly_scrub
  thin_provisioned = contains(keys(disk.value), "thin_provisioned") ? disk.value.thin_provisioned : true
}

}

clone {
template_uuid = data.vsphere_virtual_machine.db3vmtemplate.id

customize { 
  linux_options {
    host_name       = var.db3_vm_hostnames[count.index]
    domain          = var.db3_vm_domain
  }

  network_interface {
    ipv4_address    = var.db3_vm_ips[count.index]
    ipv4_netmask    = var.db3_vm_netmask
  }

  ipv4_gateway      = var.db3_vm_gateway
  dns_server_list   = var.vm_dns_server
  dns_suffix_list   = var.vm_dns_suffix
}

}

…// ansible playbooks

// Avoid rebuild servers due to changes inside vmware
lifecycle {
ignore_changes = [
datastore_cluster_id,
resource_pool_id,
clone[0].template_uuid,
datastore_id,
disk,
annotation,
tags,
network_interface,
#extra_config,
]
}
}
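For anyone hitting the same error: since the template's disks sit on four SCSI buses, one possible direction seems to be raising scsi_controller_count and carrying the template's own unit numbers instead of the loop index. A sketch only, assuming the data source exposes unit_number for each disk (it does in recent provider versions, as far as I can tell):

```hcl
# Sketch, not tested: keep the template's per-disk unit numbers across 4 controllers.
# Assumes data.vsphere_virtual_machine.db3vmtemplate.disks exports unit_number.
scsi_controller_count = 4

dynamic "disk" {
  for_each = { for idx, d in data.vsphere_virtual_machine.db3vmtemplate.disks : idx => d }

  content {
    label            = "disk${disk.key}"
    unit_number      = disk.value.unit_number # e.g. 15 for SCSI(1:0)
    size             = disk.value.size
    eagerly_scrub    = disk.value.eagerly_scrub
    thin_provisioned = try(disk.value.thin_provisioned, true)
  }
}
```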

Disable VMware Storage DRS for these VMs:

resource "vsphere_storage_drs_vm_override" "db3vm_drs_vm_override" {
count = length(var.db3_vm_ips) == 0 ? 0 : length(var.db3_vm_ips)
datastore_cluster_id = data.vsphere_datastore_cluster.dscluster.id
virtual_machine_id = vsphere_virtual_machine.db3vm[count.index].id
sdrs_enabled = false
}

Put the DB3 VMs on different ESXi hosts within the cluster:

resource "vsphere_compute_cluster_vm_anti_affinity_rule" "db3vm_compute_anti_affinity_rule" {
count = length(var.db3_vm_ips) == 0 ? 0 : 1
name = "upp-${terraform.workspace}-db3vm-compute-anti-affinity-rule"
enabled = true
mandatory = true
compute_cluster_id = data.vsphere_compute_cluster.ccluster.id
virtual_machine_ids = vsphere_virtual_machine.db3vm[*].id
}