Implicit teardown of pod managed identity when replacing azurerm_kubernetes_cluster?

So, I have a situation with Terraform and AKS, or at least I think I do. Before putting in the time and effort to extract a complete reproduction case from my code for a bug report (it would be a significant amount of work), I thought I’d ask: has anyone else seen this?

  • Create an azurerm_user_assigned_identity resource for your AKS cluster, and assign it to the cluster (see the sketch after this list)
  • Create an azurerm_user_assigned_identity resource for your pod identity.
  • Create RBAC assignments for all of the things the pod identity needs to be able to do
  • Change your Terraform to do something that causes the cluster to be replaced (in my case, I changed the disk size on the system node pool definition)
  • Plan/apply
  • On completion, the cluster is recreated. The managed identity you assigned to it still exists, and is linked to it.
  • On completion, the pod identity you created is gone. If you re-plan after the first apply, Terraform correctly decides that it needs to recreate the pod identity.
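For context, here’s a minimal sketch of the shape of the setup (names, region, and sizes are placeholders, not my actual config):

```hcl
resource "azurerm_resource_group" "aks" {
  name     = "rg-aks-example"
  location = "westeurope"
}

# Identity assigned to the cluster's control plane
resource "azurerm_user_assigned_identity" "cluster" {
  name                = "id-aks-cluster"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
}

resource "azurerm_kubernetes_cluster" "main" {
  name                = "aks-example"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
  dns_prefix          = "aks-example"

  default_node_pool {
    name            = "system"
    vm_size         = "Standard_D2s_v3"
    node_count      = 1
    os_disk_size_gb = 64 # changing this is what triggered the cluster replacement for me
  }

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.cluster.id]
  }
}
```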

Given that there isn’t a hard dependency between the cluster and the pod identity (they’re created independently of one another, and the identity should be able to outlive the cluster), this looks like a bug, unless I’m missing something?
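The pod identity side looks roughly like this (again, names and the specific role are placeholders; my real config has several role assignments). Note that nothing here references azurerm_kubernetes_cluster.main:

```hcl
# Identity intended for pod identity; created independently of the cluster
resource "azurerm_user_assigned_identity" "pod" {
  name                = "id-aks-pod"
  resource_group_name = azurerm_resource_group.aks.name
  location            = azurerm_resource_group.aks.location
}

# One of the RBAC assignments granting the pod identity what it needs
resource "azurerm_role_assignment" "pod_reader" {
  scope                = azurerm_resource_group.aks.id
  role_definition_name = "Reader"
  principal_id         = azurerm_user_assigned_identity.pod.principal_id
}
```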