Clone multiple disks ==> order of the disks gets mixed up

Hello,

I want to clone a VM with three disks and change the size of one of them. With the following script it “almost” always works:

variable "hostname" {}
variable "ipadress" {}

#Virtual Machine Resource
resource "vsphere_virtual_machine" "ECOM" {
  name             = "${var.hostname}"
  resource_pool_id = "${data.vsphere_compute_cluster.cluster.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore_fast.id}"

  num_cpus = 6
  memory   = 16384
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template.scsi_type}"
  #firmware = "bios"
  firmware = "efi"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "vmxnet3"
  }

  disk {
    label            = "${var.hostname}"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
    unit_number      = 0
  }

  disk {
    label            = "${var.hostname}_01"
    #size             = "${data.vsphere_virtual_machine.template.disks.1.size}"
    size             = "70"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
    unit_number      = 1
  }

  disk {
    label            = "${var.hostname}_02"
    size             = "${data.vsphere_virtual_machine.template.disks.2.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
    unit_number      = 2
  }


#  dynamic "disk" {
#    for_each = [for s in data.vsphere_virtual_machine.template.disks: {  
#      label =  index(data.vsphere_virtual_machine.template.disks, s)
#      unit_number =  index(data.vsphere_virtual_machine.template.disks, s)
#      size = s.size
#      eagerly_scrub = s.eagerly_scrub
#      thin_provisioned = contains(keys(s),"thin_provisioned") ? s.thin_provisioned : "true"
#    }]
#    content {
#      label = disk.value.label
#      unit_number = disk.value.unit_number
#      size = disk.value.size
#      datastore_id = "${data.vsphere_datastore.datastore.id}"
#      eagerly_scrub = disk.value.eagerly_scrub 
#      thin_provisioned = disk.value.thin_provisioned
#      #io_limit = "${var.iops_limit == "unlimited" ? null : var.iops_limit}" 
#    }
#  } 


  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      linux_options {
        host_name = "${var.hostname}"
        domain    = "xxxxxx"
      }

      network_interface {
        ipv4_address = "${var.ipadress}"
        ipv4_netmask = 24
      }

      ipv4_gateway = "xxxxx"
      dns_server_list = ["xxxxx"]
    }
  }




  provisioner "remote-exec" {
    inline = [
      "pvresize /dev/sdb",
      "lvextend -l+100%FREE /dev/mapper/ol_var_lib_docker-VAR_LIB_DOCKER",
      "resize2fs /dev/mapper/ol_var_lib_docker-VAR_LIB_DOCKER",
      "update-ca-trust extract",
      "mkdir -p /docker-volumes/ora19-rman/database/ldb",
      "mkdir -p /docker-volumes/ora19-rman/backup",
      "chmod -R 777 /docker-volumes/ora19-rman",
      #"/tmp/script.sh args",
    ]

    connection {
    type     = "ssh"
    user     = "root"
    #password = "${var.root_password}"
    password = "xxxxx"
    #host     = vsphere_virtual_machine.ECOM.default_ip_address  # rejected: a resource may not refer to itself
    host     = self.default_ip_address
    }
  }

}
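The commented-out dynamic block above has a subtle flaw: index() returns the position of the first matching element, so it misnumbers disks whenever the template contains two identical ones. A hedged sketch of the same idea keyed on the list position instead (requires Terraform 0.12+, untested against a live vSphere endpoint; the labels and the hard-coded 70 GB for disk 1 mirror the static blocks above):

```
  dynamic "disk" {
    for_each = data.vsphere_virtual_machine.template.disks
    content {
      # For a list, disk.key is the element index, so it is stable
      # even when two template disks are byte-for-byte identical.
      label            = "${var.hostname}_${format("%02d", disk.key)}"
      unit_number      = disk.key
      size             = disk.key == 1 ? 70 : disk.value.size
      eagerly_scrub    = disk.value.eagerly_scrub
      thin_provisioned = disk.value.thin_provisioned
    }
  }
```

Note that neither variant controls which /dev/sdX letter the guest assigns; unit_number only fixes the position on the virtual SCSI bus.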

Unfortunately, during one clone the order of the disks got mixed up, which applied the size change to the wrong disk:

First try:

[dockeradm@xxxx ecom]$ lsblk 
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   10G  0 disk 
├─sda1                                 8:1    0  600M  0 part /boot/efi
├─sda2                                 8:2    0    1G  0 part /boot
└─sda3                                 8:3    0  8,4G  0 part 
  ├─ol_xxxx-root                252:0    0  7,4G  0 lvm  /
  └─ol_xxxx-swap                252:1    0    1G  0 lvm  [SWAP]
sdb                                    8:16   0   15G  0 disk 
└─ol_docker--volumes-DOCKER--VOLUMES 252:2    0   15G  0 lvm  /docker-volumes
sdc                                    8:32   0   70G  0 disk 
└─ol_var_lib_docker-VAR_LIB_DOCKER   252:3    0   30G  0 lvm  /var/lib/docker

Second try:

[dockeradm@xxxxx ~]$ lsblk 
NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                    8:0    0   10G  0 disk 
├─sda1                                 8:1    0  600M  0 part /boot/efi
├─sda2                                 8:2    0    1G  0 part /boot
└─sda3                                 8:3    0  8,4G  0 part 
  ├─ol_xxxx-root                252:0    0  7,4G  0 lvm  /
  └─ol_xxxx-swap                252:1    0    1G  0 lvm  [SWAP]
sdb                                    8:16   0   70G  0 disk 
└─ol_var_lib_docker-VAR_LIB_DOCKER   252:2    0   70G  0 lvm  /var/lib/docker
sdc                                    8:32   0   15G  0 disk 
└─ol_docker--volumes-DOCKER--VOLUMES 252:3    0   15G  0 lvm  /docker-volumes

How can this be prevented? Is it possible to address the disk (or LVM volume) to be cloned by name?
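One stable handle Linux does provide is the /dev/disk/by-path symlink, which encodes the SCSI bus position and therefore lines up with Terraform's unit_number rather than with the probe-order sdX letter. A minimal sketch of how that name is composed (the PCI address of the SCSI controller is an assumption and varies per VM):

```shell
# Build the stable by-path device name for a given SCSI unit number.
# The controller's PCI address (pci-0000:03:00.0) is hypothetical;
# on a real VM, list /dev/disk/by-path/ to find the actual prefix.
unit_number=1
pci="pci-0000:03:00.0"
dev="/dev/disk/by-path/${pci}-scsi-0:0:${unit_number}:0"
echo "$dev"
```

On the guest, `ls -l /dev/disk/by-path/` then shows which sdX letter each bus position resolved to on this particular boot.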

Is there a solution for this? We have the same issue: in the plan the disks are defined correctly, but in the OS they are not:

  + disk {
      + attach            = false
      + controller_type   = "scsi"
      + datastore_id      = ""
      + device_address    = (known after apply)
      + disk_mode         = "persistent"
      + disk_sharing      = "sharingNone"
      + eagerly_scrub     = false
      + io_limit          = -1
      + io_reservation    = 0
      + io_share_count    = 0
      + io_share_level    = "normal"
      + keep_on_remove    = false
      + key               = 0
      + label             = "disk0"
      + path              = (known after apply)
      + size              = 30
      + storage_policy_id = "ed70d485-6a15-42ec-a08d-275dba44bf63"
      + thin_provisioned  = true
      + unit_number       = 0
      + uuid              = (known after apply)
      + write_through     = false
    }
  + disk {
      + attach            = false
      + controller_type   = "scsi"
      + datastore_id      = ""
      + device_address    = (known after apply)
      + disk_mode         = "persistent"
      + disk_sharing      = "sharingNone"
      + eagerly_scrub     = false
      + io_limit          = -1
      + io_reservation    = 0
      + io_share_count    = 0
      + io_share_level    = "normal"
      + keep_on_remove    = false
      + key               = 0
      + label             = "disk1"
      + path              = (known after apply)
      + size              = 10
      + storage_policy_id = "ed70d485-6a15-42ec-a08d-275dba44bf63"
      + thin_provisioned  = true
      + unit_number       = 1
      + uuid              = (known after apply)
      + write_through     = false
    }
  + disk {
      + attach            = false
      + controller_type   = "scsi"
      + datastore_id      = ""
      + device_address    = (known after apply)
      + disk_mode         = "persistent"
      + disk_sharing      = "sharingNone"
      + eagerly_scrub     = false
      + io_limit          = -1
      + io_reservation    = 0
      + io_share_count    = 0
      + io_share_level    = "normal"
      + keep_on_remove    = false
      + key               = 0
      + label             = "disk2"
      + path              = (known after apply)
      + size              = 100
      + storage_policy_id = "ed70d485-6a15-42ec-a08d-275dba44bf63"
      + thin_provisioned  = true
      + unit_number       = 2
      + uuid              = (known after apply)
      + write_through     = false
    }

Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0644a98a

Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 2048 1050623 1048576 512M 83 Linux
/dev/sdc2 1050624 62914559 61863936 29.5G 8e Linux LVM

Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd2074153

In my experience, the Linux kernel does not offer any guarantees these days about ordering of devices, and I have observed boot failures on OpenStack VMs after sda and sdb spontaneously swapped between one boot and the next.

Ultimately the role Terraform plays here is limited to orchestrating some APIs, so unless you find evidence that the ordering gets mixed up before it reaches vSphere, Terraform is probably not involved in the issue; you will likely need to change what you do inside the VM so that it does not depend on the sequential assignment of Linux device letters.

I think you’d be better off asking in forums closer to the likely source of the problem, i.e. vSphere or Linux forums.
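Concretely, the remote-exec step in the question could stop depending on device letters by resolving the disk through the LVM names it already uses, which are stable no matter which sdX letter the kernel hands out. A hedged sketch, using the VG/LV names visible in the lsblk output above (pvs -S filters by volume-group name; untested against the actual VM):

```
  provisioner "remote-exec" {
    inline = [
      # Resolve the PV backing the ol_var_lib_docker VG by name
      # instead of assuming it is /dev/sdb.
      "pvresize $(pvs --noheadings -o pv_name -S vg_name=ol_var_lib_docker | tr -d ' ')",
      "lvextend -l +100%FREE /dev/ol_var_lib_docker/VAR_LIB_DOCKER",
      "resize2fs /dev/ol_var_lib_docker/VAR_LIB_DOCKER",
    ]
  }
```

Since the LVM metadata travels with the disk, this sequence grows the right filesystem regardless of the order in which the guest enumerated the disks.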