Upgrading the azurerm provider from 1.44.0 to 2.0.0 forces AKS cluster recreation

Hello all!

Since we upgraded the Terraform azurerm provider from 1.44.0 to 2.0.0, the Terraform plan wants to recreate our AKS cluster, even though we didn't change its configuration.
Do you know what the issue could be? Maybe a side effect of new default properties introduced by the upgrade?

Current Versions:
Terraform v0.12.6 (also tested 0.12.22 and 0.13.5 with the same behavior)

  • provider.azurerm v1.42.0 -> v2.0.0 (also tested 2.33.0 with the same behavior)
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.3.0
  • provider.tls v2.2.0

Affected Resource(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

      resource "azurerm_kubernetes_cluster" "aks" {
        name                = "examplenameaks"
        location            = var.location
        resource_group_name = azurerm_resource_group.aks_rg.name
        dns_prefix          = "exampledns"
        kubernetes_version  = var.k8s_version
        linux_profile { 
          admin_username = "kubeadmin"
          ssh_key {
            key_data = var.vm_ssh_key
          }
        }
        role_based_access_control {
          enabled = true
          azure_active_directory {
            client_app_id = var.aad_client_app_id
            server_app_id = var.aad_server_app_id
            server_app_secret = var.aad_server_app_secret
          }
        }
        default_node_pool {
          name                    = "examplename"
          node_count              = var.vm_count
          vm_size                 = var.vm_size
          type                    = "AvailabilitySet"
          os_disk_size_gb         = var.vm_os_disk_size_gb
          node_taints             = []
          enable_node_public_ip   = false
          enable_auto_scaling     = false
          vnet_subnet_id = azurerm_subnet.subnet.id
        }
        
        service_principal {
          client_id     = var.azure_client_id
          client_secret = var.azure_client_secret
        }

        network_profile {    
          load_balancer_sku = "Basic"
          network_plugin = "azure"
        }
      }

Expected Behavior

No changes to the AKS cluster

Actual Behavior

Terraform will perform the following actions:

module.azure.module.aks.azurerm_kubernetes_cluster.aks must be replaced

      -/+ resource "azurerm_kubernetes_cluster" "aks" {
      ...
      ~ network_profile {
   
      ~ load_balancer_sku  = "Basic" -> "standard" # forces replacement
        network_plugin     = "azure"
      + network_policy     = (known after apply)
      + pod_cidr           = (known after apply)
     
      + load_balancer_profile {
          + effective_outbound_ips    = (known after apply)
          + managed_outbound_ip_count = (known after apply)
          + outbound_ip_address_ids   = (known after apply)
          + outbound_ip_prefix_ids    = (known after apply)
        }
    }
      ...
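
The `# forces replacement` marker appears because `load_balancer_sku` cannot be changed in place on an existing AKS cluster, so any difference between the value in state and the value in the configuration means destroy and recreate. A quick way to check which value Terraform has recorded in state (using the resource address from the plan output above) is:

      terraform state show module.azure.module.aks.azurerm_kubernetes_cluster.aks | grep load_balancer_sku

If state says "Basic", the "standard" on the right-hand side of the diff has to come from the configuration side, i.e. from the config itself or from a provider default.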

Thanks!

Never mind, we've solved the problem. The config was correct… we had just created the plan locally against a nearly identical cluster whose network_profile configuration was missing this line (so the provider's new default, "standard", kicked in):

      load_balancer_sku = "Basic"

So this was a Layer 8 problem.
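
For anyone who wants to guard against this class of mistake: a stray default replacing a cluster is expensive, so it can help to set `load_balancer_sku` explicitly in every environment and to add a `prevent_destroy` guard so an accidental apply fails instead of recreating the cluster. A minimal sketch (the lifecycle block is an addition on top of the config above, not something we originally had):

      resource "azurerm_kubernetes_cluster" "aks" {
        # ... rest of the configuration as shown above ...

        network_profile {
          # Set the SKU explicitly so a changed provider default
          # can never silently alter the planned value.
          load_balancer_sku = "Basic"
          network_plugin    = "azure"
        }

        lifecycle {
          # Fails the plan with an error if Terraform ever schedules
          # this resource for destruction (including replacement).
          prevent_destroy = true
        }
      }

With that guard in place, the mistaken local plan above would have errored out instead of offering to replace the cluster.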