Azure: Deploy Multiple VMs with 2 NICs

I am trying to create multiple VMs (2), each with 2 NICs: one a “public” NIC with a public IP, the other an “internal” NIC with accelerated networking enabled. There are many examples out there, none of which seem to accomplish the task.

I am very close. I can create all of the resources I need and attach them to the VMs (boot diagnostics, proximity placement group, NSG, cloud-init config, etc.), and Terraform at least thinks the NIC resources are created (based on “terraform state show”). But in the portal only the first 2 NICs are present, and they are attached one per VM instead of 2 each. Everything builds cleanly with no errors anywhere.

Code is here:
https://github.com/kvietmeier/Terraform/tree/master/azure/upftesting-1

You can see the NICs here - they get assigned the correct subnet etc., but they don’t show up in the resource group in Azure.

PS C:\GitHub\Terraform\azure\upftesting-1> terraform state list
data.template_cloudinit_config.config
data.template_file.system_setup
azurerm_dev_test_global_vm_shutdown_schedule.autoshutdown[0]
azurerm_dev_test_global_vm_shutdown_schedule.autoshutdown[1]
azurerm_linux_virtual_machine.vms[0]
azurerm_linux_virtual_machine.vms[1]
azurerm_network_interface.internal[0]
azurerm_network_interface.internal[1]
azurerm_network_interface.primary[0]
azurerm_network_interface.primary[1]
azurerm_network_security_group.ssh
azurerm_proximity_placement_group.proxplace_grp
azurerm_public_ip.public_ips[0]
azurerm_public_ip.public_ips[1]
azurerm_resource_group.upf_rg
azurerm_storage_account.diagstorageaccount
azurerm_subnet.subnets[0]
azurerm_subnet.subnets[1]
azurerm_subnet_network_security_group_association.mapnsg
azurerm_virtual_network.vnet
random_id.randomId

I followed some examples with similar issues that claim to be correct, but I still can’t get it to work - I know I’m missing something obvious…

The NICs -

###- Create 2 NICs - one primary w/PubIP, one internal with SRIOV enabled
resource "azurerm_network_interface" "primary" {
  count                         = var.node_count
  name                          = "${var.vm_prefix}-nic-${format("%02d", count.index)}"
  location                      = azurerm_resource_group.upf_rg.location
  resource_group_name           = azurerm_resource_group.upf_rg.name
  enable_accelerated_networking = false

  ip_configuration {
    primary                       = true
    name                          = "Primary-${var.vm_prefix}"
    private_ip_address_allocation = "Dynamic"
    #private_ip_address_allocation = "Static"
    subnet_id                     = element(azurerm_subnet.subnets[*].id, 0)
    #private_ip_address            = element(var.subnet1_ips[*].id, count.index)
    public_ip_address_id          = element(azurerm_public_ip.public_ips[*].id, count.index)
  }
}

resource "azurerm_network_interface" "internal" {
  count                         = var.node_count
  location                      = azurerm_resource_group.upf_rg.location
  resource_group_name           = azurerm_resource_group.upf_rg.name
  name                          = "${var.vm_prefix}-nic-${format("%02d", count.index)}"
  enable_accelerated_networking = true

  ip_configuration {
    primary                       = false
    name                          = "Internal-${var.vm_prefix}"
    private_ip_address_allocation = "Dynamic"
    subnet_id                     = element(azurerm_subnet.subnets[*].id, 1)
    #private_ip_address            = element(var.subnet2_ips[*].id, count.index)
  }
}

In the VM -

###- Put it all together and build the VM
resource "azurerm_linux_virtual_machine" "vms" {
  location                        = azurerm_resource_group.upf_rg.location
  resource_group_name             = azurerm_resource_group.upf_rg.name
  count                           = var.node_count
  name                            = "${var.vm_prefix}-${format("%02d", count.index)}"
  size                            = var.vm_size
  
  # Attach the 2 NICs
  network_interface_ids = [
    element(azurerm_network_interface.primary[*].id, count.index),
    element(azurerm_network_interface.internal[*].id, count.index),
  ]
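A side note on ordering (my reading of the azurerm provider docs): the first ID in network_interface_ids becomes the VM’s primary NIC, so the public-facing NIC is listed first. With Terraform 0.12+ index syntax the same list can also be written without element():

```hcl
  # First entry in the list becomes the VM's primary NIC.
  network_interface_ids = [
    azurerm_network_interface.primary[count.index].id,
    azurerm_network_interface.internal[count.index].id,
  ]
```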

Any help is appreciated -
Karl Vietmeier

Do I get credit for answering my own question?

It was a Terraform beginner mistake, I’m sure. The Azure resource provider was doing the right thing, as was Terraform - I made an error in my resource declaration.

It was this line in my NIC definition:

name = "${var.vm_prefix}-nic-${format("%02d", count.index)}"

After some thought and a review of ARM, I realized I was reusing the same name across the two NIC resources - both “primary” and “internal” generate names like “${var.vm_prefix}-nic-00”. When Terraform created the first resource, Azure created the NIC; when Terraform then created the second resource with the same name in the same resource group, ARM treated it as an update to the existing NIC. So you end up with 4 Terraform resources but only 2 actual NICs. Subtle, but in retrospect - obvious.

The “fix” - give each NIC resource block its own name pattern so every generated name is unique:

name  = "${var.vm_prefix}-PrimaryNIC-${format("%02d", count.index)}"
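For completeness, a sketch of both renamed blocks (the “InternalNIC” suffix is my own naming choice - any suffix works as long as the two resource blocks no longer generate colliding names):

```hcl
# Distinct suffixes per resource block: primary[i] and internal[i]
# can no longer collide on the Azure resource name.
resource "azurerm_network_interface" "primary" {
  name = "${var.vm_prefix}-PrimaryNIC-${format("%02d", count.index)}"
  # ... rest unchanged ...
}

resource "azurerm_network_interface" "internal" {
  name = "${var.vm_prefix}-InternalNIC-${format("%02d", count.index)}"
  # ... rest unchanged ...
}
```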