Failed Azure AKS cluster is not recorded in Terraform state

Hi, I am using the Terraform CLI to provision resources such as AKS clusters and node pools in Azure. I store my backend (i.e., the state file) in an Azure Storage blob container.
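For context, my backend configuration looks roughly like this (the resource group, storage account, container, and key names below are placeholders, not my real values):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"            # placeholder
    storage_account_name = "sttfstate"             # placeholder
    container_name       = "tfstate"               # placeholder
    key                  = "aks.terraform.tfstate" # placeholder
  }
}
```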
I have tried provisioning AKS clusters and node pools multiple times and keep running into this bug/issue: during the terraform apply phase, the apply fails with the following error:

Code="ReconcilePrivateDNS" Message="Reconcile private dns failed. Details: Code="BadRequest" Message="A virtual network cannot be linked to multiple zones with overlapping namespaces."

However, I can see that the cluster was still provisioned and is visible in the Azure portal in a "Failed" state.

I then rectify the above error by linking the private DNS zone to the respective VNet. When I run terraform apply again to recreate the cluster, it fails with a "resource already exists - to be managed via Terraform this resource needs to be imported into the State" error. But when I inspect my state file, it is completely empty: it contains no information about the AKS cluster that was provisioned in the failed state.

Can anyone please help me understand whether this behaviour is expected or a bug? Does Terraform only write a resource to the state file if it was created successfully? And what steps should I follow to recover from this situation?
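My current guess, based on the error text itself, is that I need to import the existing (failed) cluster into my state before applying again. Something like the following, where the resource address and the ID segments in angle brackets are placeholders from my own configuration, not values from the error message:

```shell
terraform import azurerm_kubernetes_cluster.example \
  "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
```

Is importing the failed cluster (or alternatively deleting it in the portal and re-running terraform apply) the right way to proceed here?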