AKS Cluster Update doesn't affect default_node_pool

Hi everyone.

We are using the most recent version of hashicorp/azurerm, which is 3.92.0.

Our AKS cluster was on Kubernetes 1.26.6 and we wanted to upgrade it to 1.27.7, so we changed kubernetes_version in the azurerm_kubernetes_cluster resource and ran Terraform.
As intended, the cluster's Kubernetes version shown in the Azure Portal (kubernetes_version on azurerm_kubernetes_cluster from Terraform's perspective) was updated to 1.27.7.
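
For reference, the change itself was just bumping the value behind var.aks_version, roughly like this (a simplified sketch of the variable; exactly how we feed the value in, e.g. a default or a tfvars file, doesn't matter for the issue):

variable "aks_version" {
  type    = string
  default = "1.27.7" # was "1.26.6" before the upgrade
}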

But when we took a closer look at the node pool (the underlying virtual machine scale set), we saw that it was still on 1.26.6: neither the node pool's target version nor the nodes themselves had been updated, and orchestrator_version in azurerm_kubernetes_cluster.default_node_pool was also still 1.26.6.
We then added the previously unset orchestrator_version and set it to 1.27.7, but Terraform did not detect anything to change.

After a lot of back and forth, we ended up updating the node pool's Kubernetes version manually in the Azure Portal. We then re-ran Terraform to see what it would report after that, but surprisingly it again did not detect any changes.
Out of curiosity, we set orchestrator_version in azurerm_kubernetes_cluster.default_node_pool back to 1.26.6 and ran Terraform once more. Even then, terraform plan does not report any changes, and terraform show keeps showing orchestrator_version as 1.27.7.

This is a DEV environment that we are using to prepare for the QA and PROD upgrades. We would strongly prefer to perform the real upgrades for QA and PROD entirely through Terraform, without any of these manual steps.
What are we missing?

Our cluster resource looks like this:

resource "azurerm_kubernetes_cluster" "terra-cluster" {
  name                = local.aks_name
  kubernetes_version  = var.aks_version
  location            = azurerm_resource_group.cluster-rg.location
  resource_group_name = azurerm_resource_group.cluster-rg.name
  dns_prefix          = "aks"
  tags = merge(
      var.default_tags,
      {
      },
  )

  default_node_pool {
    name       = "default"
    node_count = 1
    enable_auto_scaling = true
    min_count = 1
    max_count = 5
    vm_size    = var.azure_vm_size
    vnet_subnet_id = azurerm_subnet.sub1.id
    orchestrator_version = var.aks_version
    tags = merge(
       var.default_tags,
       {
       },
    )
  }

  lifecycle {
    ignore_changes = [
      # autoscaling may change node_count independent from terraform
      default_node_pool["node_count"]
    ]
  }

  network_profile {
    network_plugin  = "kubenet"
    pod_cidr        = var.pod_cidr
    service_cidr    = var.service_cidr
    dns_service_ip  = var.dns_service_ip
  }

  azure_active_directory_role_based_access_control {
    managed = true
    azure_rbac_enabled = true
    admin_group_object_ids = var.aks_aad_admin_groups
  }

  identity {
    type = "SystemAssigned"
  }

}

[Best Guess]

To address the issue where updating the AKS cluster version in Terraform doesn't affect the default_node_pool, make sure orchestrator_version in the default_node_pool block is explicitly set to the same value as the updated kubernetes_version; the control plane version and the node pool version are tracked separately, so bumping kubernetes_version alone does not upgrade the node pool. If Terraform doesn't detect the change, sync Terraform's state with what is actually deployed in Azure (terraform refresh, or terraform apply -refresh-only on current Terraform versions) and run terraform plan again.

If the issue persists, consider upgrading the node pool manually with the Azure CLI (az aks nodepool upgrade) as a workaround, and check the hashicorp/terraform-provider-azurerm GitHub issues for known problems with node pool upgrades, or report the issue there.
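
A minimal sketch of that alignment, reusing the var.aks_version variable from the question so that a single version bump drives both fields (the comments show example values only):

resource "azurerm_kubernetes_cluster" "terra-cluster" {
  # ... other arguments unchanged from the question ...
  kubernetes_version = var.aks_version # e.g. "1.27.7"

  default_node_pool {
    # ... other arguments unchanged from the question ...
    orchestrator_version = var.aks_version # must match (or at most trail) the control plane version
  }
}

With both fields driven by the same variable, one change to var.aks_version should produce a plan that touches both the control plane and the default node pool. If the plan still shows no changes, the state is most likely stale, which is where the refresh step above comes in.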