Nested loops flatten function

I'm seeing some strange behaviour while creating multiple VNets/subnets in Azure. In the .tfvars below I define an object.

network = {
  vnet1 = {
    name     = "vnet-dev-westeurope-001"
    location = "westeurope"
    cidr     = ["10.18.0.0/16"]
    subnets  = {
      sn1 = {
        name      = "subnet-dev-westeurope-001"
        location  = "westeurope"
        cidr      = ["10.18.1.0/24"]
        nsg       = "nsg-dev-westeurope-001"
        endpoints = []
      }
    }
  },

  vnet2 = {
    name     = "vnet-dev-eastus2-001"
    location = "eastus2"
    cidr     = ["10.19.0.0/16"]
    subnets = {
      sn1 = {
        name     = "subnet-dev-eastus2-001"
        location = "eastus2"
        cidr     = ["10.19.1.0/24"]
        nsg      = "nsg-dev-eastus2-001"
        endpoints = []
      }
    }
  },

  vnet3 = {
    name     = "vnet-dev-southeastasia-001"
    location = "southeastasia"
    cidr     = ["10.20.0.0/16"]
    subnets = {
      sn1 = {
        name     = "subnet-dev-southeastasia-001"
        location = "southeastasia"
        cidr     = ["10.20.1.0/24"]
        nsg      = "nsg-dev-southeastasia-001"
        endpoints = []
      }
    }
  }
}

In the code below I iterate over the network variable to create multiple VNets.

resource "azurerm_virtual_network" "vnets" {
  for_each = can(var.network) ? var.network : null

  name                = each.value.name
  resource_group_name = azurerm_resource_group.rg["network"].name
  location            = each.value.location
  address_space       = each.value.cidr
}

To create multiple subnets on each VNet from the tfvars, I build a local variable with the flatten function. The subnet resource then iterates over this local using the code below.

locals {
  network_subnets = flatten([
    for network_key, network in var.network : [
      for subnet_key, subnet in network.subnets : {

        network_key          = network_key
        subnet_key           = subnet_key
        address_prefixes     = subnet.cidr
        subnet_name          = subnet.name
        nsg_name             = subnet.nsg
        location             = subnet.location
        endpoints            = subnet.endpoints
        virtual_network_name = azurerm_virtual_network.vnets[network_key].name
      }
    ]
  ])
}

resource "azurerm_subnet" "subnets" {
  for_each = {
    for sn in local.network_subnets : "${sn.network_key}.${sn.subnet_key}" => sn
  }

  name                 = each.value.subnet_name
  resource_group_name  = azurerm_resource_group.rg["network"].name
  virtual_network_name = each.value.virtual_network_name
  address_prefixes     = each.value.address_prefixes
  service_endpoints    = lookup(each.value, "endpoints", null)
}
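For reference, the for expression in for_each above reshapes the flattened list into a map whose keys are "<network_key>.<subnet_key>", so every subnet gets a stable instance address. For the vnet1 entry above it produces roughly this (values abbreviated):

{
  "vnet1.sn1" = {
    network_key          = "vnet1"
    subnet_key           = "sn1"
    subnet_name          = "subnet-dev-westeurope-001"
    virtual_network_name = "vnet-dev-westeurope-001"
    # ...remaining attributes omitted...
  }
  # "vnet2.sn1" and "vnet3.sn1" follow the same shape
}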

The output below shows the local.network_subnets list it produces:

Changes to Outputs:
  + network = [
      + {
          + address_prefixes     = [
              + "10.18.1.0/24",
            ]
          + endpoints            = []
          + location             = "westeurope"
          + network_key          = "vnet1"
          + nsg_name             = "nsg-dev-westeurope-001"
          + subnet_key           = "sn1"
          + subnet_name          = "subnet-dev-westeurope-001"
          + virtual_network_name = "vnet-dev-westeurope-001"
        },
      + {
          + address_prefixes     = [
              + "10.19.1.0/24",
            ]
          + endpoints            = []
          + location             = "eastus2"
          + network_key          = "vnet2"
          + nsg_name             = "nsg-dev-eastus2-001"
          + subnet_key           = "sn1"
          + subnet_name          = "subnet-dev-eastus2-001"
          + virtual_network_name = "vnet-dev-eastus2-001"
        },
      + {
          + address_prefixes     = [
              + "10.20.1.0/24",
            ]
          + endpoints            = []
          + location             = "southeastasia"
          + network_key          = "vnet3"
          + nsg_name             = "nsg-dev-southeastasia-001"
          + subnet_key           = "sn1"
          + subnet_name          = "subnet-dev-southeastasia-001"
          + virtual_network_name = "vnet-dev-southeastasia-001"
        },
    ]

Finally, I also iterate over this local for the NSGs and their associations:

resource "azurerm_network_security_group" "nsg" {
  for_each = {
    for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
  }

  name                = each.value.nsg_name
  resource_group_name = azurerm_resource_group.rg["network"].name
  location            = each.value.location
}
resource "azurerm_subnet_network_security_group_association" "nsg_as" {
  for_each = {
    for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
  }

  subnet_id                 = azurerm_subnet.subnets[each.key].id
  network_security_group_id = azurerm_network_security_group.nsg[each.key].id
}

The code works… the only problem I'm facing is that, at random, not all subnets are created. Terraform says they were created, but the subnet does not actually exist on, for example, 1 of the 3 VNets.

However, the subnets are always created when I add -parallelism=2 to my terraform apply command.

My question is:
Is my code properly constructed? Everything should be properly implicitly linked together. Is this the case, or am I missing something?

Or is this a limitation of the Azure REST API?

Hi @dkooll1!

The first thing that I noticed reading your comment was your initial for_each example:

  for_each = can(var.network) ? var.network : null

This is an interesting statement because can(var.network) will always either return true (if var.network is declared) or produce an error (if it isn’t). Terraform doesn’t allow dynamically declaring new variables, so a variable is either declared or it isn’t. Therefore I think what you write above is exactly equivalent to the following:

  for_each = var.network

(null is also not a valid value for for_each, but I think that didn’t matter because in practice can(var.network) could never return false and thus could never select that result.)
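To illustrate, can() only reports whether an expression evaluates without an error, so it is mainly useful for lookups that can actually fail, such as an attribute that may not exist (a sketch, using a hypothetical local):

locals {
  network = { vnet1 = { name = "vnet-dev-westeurope-001" } }
}

# can(local.network)            evaluates to true: the expression succeeds
# can(local.network.vnet1.name) evaluates to true
# can(local.network.vnet9.name) evaluates to false: the attribute lookup
#                               fails, and can() converts that error to false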

However, when reading the rest of your question this initial quirk doesn’t seem super important to what you are asking, so I mention it only in case it’s helpful for learning and I’ll switch now to focusing on the rest of your question.


You mention in the second part of your question that not all subnets were created, but the two resource blocks you showed seem to be about a security group and a subnet security group association rather than the subnet itself. Do you mean that only a subset of the azurerm_network_security_group.nsg and azurerm_subnet_network_security_group_association.nsg_as instances actually got created?

The behavior differing based on the amount of concurrency (as you overrode using -parallelism) does make this seem like a provider quirk or a remote API quirk, since Terraform Core itself shouldn’t change what requests it makes based on the concurrency limit; only the timing and ordering of those requests would change.

Do you see messages saying that Terraform is “Creating …” and “Created” all of the instances you were expecting, or are some missing from that output too? What I’m wondering is whether the remote API is enforcing some sort of request rate limit or concurrency limit but either it isn’t returning an error or the provider is ignoring that error, making Terraform report that the operation succeeded even if it didn’t.

If you can see Terraform reporting that it began creating and completed creating all of the objects you declared then I’d suggest opening an issue in the Azure provider’s GitHub repository to discuss that.

If you see particular instances missing from Terraform’s own output too then that might mean that there’s a problem with this configuration that I’ve not noticed yet; it might help if you could share the full output from terraform apply so I can see what Terraform is proposing to do and what it actually did after you accepted the proposed plan.

hi @apparentlymart,

Thanks for your answer. The can function does indeed seem unnecessary. I will remove it!

The NSG part was just to show a more complete picture. It has nothing to do with the actual problem. I did a quick test this morning with 5 VNets and 1 subnet on each. All resources are included in the apply output. In fact, Terraform says all VNets/subnets were created.

azurerm_resource_group.rg: Creating...
azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001]
azurerm_virtual_network.vnets["eastus2"]: Creating...
azurerm_virtual_network.vnets["southeastasia"]: Creating...
azurerm_virtual_network.vnets["westeurope"]: Creating...
azurerm_virtual_network.vnets["eastus"]: Creating...
azurerm_virtual_network.vnets["southcentralus"]: Creating...
azurerm_virtual_network.vnets["westeurope"]: Creation complete after 4s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-westeurope-001]
azurerm_virtual_network.vnets["eastus"]: Creation complete after 5s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus-001]
azurerm_virtual_network.vnets["eastus2"]: Creation complete after 5s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus2-001]
azurerm_virtual_network.vnets["southcentralus"]: Creation complete after 5s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southcentralus-001]
azurerm_virtual_network.vnets["southeastasia"]: Creation complete after 6s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southeastasia-001]
azurerm_virtual_network_dns_servers.dns["westeurope"]: Creating...
azurerm_virtual_network_dns_servers.dns["southeastasia"]: Creating...
azurerm_virtual_network_dns_servers.dns["eastus"]: Creating...
azurerm_virtual_network_dns_servers.dns["eastus2"]: Creating...
azurerm_virtual_network_dns_servers.dns["southcentralus"]: Creating...
azurerm_subnet.subnets["southeastasia.sn1"]: Creating...
azurerm_subnet.subnets["westeurope.sn1"]: Creating...
azurerm_subnet.subnets["southcentralus.sn1"]: Creating...
azurerm_subnet.subnets["eastus2.sn1"]: Creating...
azurerm_subnet.subnets["eastus.sn1"]: Creating...
azurerm_virtual_network_dns_servers.dns["eastus2"]: Creation complete after 2s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus2-001/dnsServers/default]
azurerm_virtual_network_dns_servers.dns["southcentralus"]: Creation complete after 3s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southcentralus-001/dnsServers/default]
azurerm_virtual_network_dns_servers.dns["southeastasia"]: Creation complete after 4s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southeastasia-001/dnsServers/default]
azurerm_virtual_network_dns_servers.dns["westeurope"]: Creation complete after 4s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-westeurope-001/dnsServers/default]
azurerm_subnet.subnets["eastus.sn1"]: Creation complete after 6s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus-001/subnets/sn-dev-eastus-001]
azurerm_subnet.subnets["westeurope.sn1"]: Creation complete after 8s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-westeurope-001/subnets/sn-dev-westeurope-001]
azurerm_subnet.subnets["eastus2.sn1"]: Creation complete after 8s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus2-001/subnets/sn-dev-eastus2-001]
azurerm_subnet.subnets["southcentralus.sn1"]: Creation complete after 9s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southcentralus-001/subnets/sn-dev-southcentralus-001]
azurerm_virtual_network_dns_servers.dns["eastus"]: Creation complete after 10s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-eastus-001/dnsServers/default]
azurerm_subnet.subnets["southeastasia.sn1"]: Still creating... [10s elapsed]
azurerm_subnet.subnets["southeastasia.sn1"]: Creation complete after 14s [id=/subscriptions/cb3cf69c-4cb7-4e42-8fd8-1aae3624f329/resourceGroups/rg-network-dev-001/providers/Microsoft.Network/virtualNetworks/vnet-dev-southeastasia-001/subnets/sn-dev-southeastasia-001]

However, this time only the eastus subnet (azurerm_subnet.subnets["eastus.sn1"]: Creation complete after 6s) was not actually created, even though the apply reported success. This apply used the maximum parallelism. The issue occurs at random, so for example next time 2 out of 5 subnets are not created.

If I run this with -parallelism=2, everything is always created successfully.

Hi @dkooll1,

If Terraform is reporting that it asked the provider to create all of the objects and that it succeeded in creating them then this does seem like a misbehavior of either the provider or of the remote API.

I’d suggest opening a bug report in the provider repository to discuss this further, since the Azure provider team doesn’t typically monitor this forum and they’ll be in a better position to understand what happened here and whether there’s something the provider could do to mitigate it. (For example, if it does turn out to be a rate limit in the underlying API then the provider could potentially use its own semaphore or similar to enforce a particular maximum concurrency regardless of Terraform’s own setting.)

Hi, @apparentlymart

I will raise an issue there. Thanks for your help so far!