Hi @Krishg,
I have been able to reproduce the issue in only one of your scenarios (unless I have misunderstood them).
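For reference, the configuration I used when trying to reproduce both scenarios looks roughly like the following, together with the azurerm_key_vault_access_policy resource shown at the end of this post (a minimal sketch only; the resource group, location and SKU are assumptions on my part rather than your exact values):

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_key_vault" "example" {
  name                = "example-keyvault-sp1999"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}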
Running a terraform apply after terraform destroy:
In this scenario I received no errors. Because the destroy carries out the operations in the reverse order of the apply (due to the dependencies), the following occurs during the destroy:
azurerm_key_vault_access_policy.example: Destroying...
azurerm_key_vault_access_policy.example: Destruction complete after 6s
azurerm_key_vault.example: Destroying...
azurerm_key_vault.example: Destruction complete after 2s
Therefore the keyvault, when it is deleted, no longer has any access policies attached. So when you next apply there are no errors, as the keyvault is created and then the access policies are recreated:
azurerm_key_vault.example: Creating...
azurerm_key_vault.example: Creation complete after 2m8s
azurerm_key_vault_access_policy.example: Creating...
azurerm_key_vault_access_policy.example: Creation complete after 6s
Running a terraform apply after deleting the keyvault from the portal:
In this case I did see the behaviour you describe:
azurerm_key_vault_access_policy.example: Creating...
╷
│ Error: A resource with the ID "/subscriptions/***/resourceGroups/example-resources/providers/Microsoft.KeyVault/vaults/example-keyvault-sp1999/objectId/***
already exists - to be managed via Terraform this resource needs to be imported
into the State. Please see the resource documentation for "azurerm_key_vault_access_policy" for more information.
And this is because the keyvault was deleted while there were still access policies attached. As you say, when the azurerm provider recovers the soft-deleted keyvault it restores it with the attached access policies. This then causes the creation of the azurerm_key_vault_access_policy to fail, because the access policies already exist (and have reappeared since the plan's state refresh determined that they, along with the keyvault, required recreation).
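As an aside, the recover-on-create behaviour is controlled through the azurerm provider's features block, so it is worth knowing where it lives (shown below as a sketch only; whether changing these settings is appropriate depends on your data-retention requirements, and disabling recovery can instead surface name conflicts with the soft-deleted vault):

provider "azurerm" {
  features {
    key_vault {
      # When true, creating a keyvault whose name matches a soft-deleted vault
      # recovers that vault - including its previous access policies, which is
      # what trips up the subsequent azurerm_key_vault_access_policy creation.
      recover_soft_deleted_key_vaults = true

      # When true, terraform destroy also purges the soft-deleted vault, so there
      # is nothing left to recover on the next apply.
      purge_soft_delete_on_destroy = true
    }
  }
}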
Conclusion
This could probably be reported to the Azurerm provider maintainers as an issue (Issues · hashicorp/terraform-provider-azurerm on github.com), but it is somewhat of an edge case.
Unfortunately I don’t have a way to work around this issue.
A few comments on these scenarios, however, which would mitigate the risk of them occurring:
- If you are managing your infrastructure via Terraform, it is good practice to restrict access to the resources via the portal (perhaps allowing read only) so that all infrastructure changes must be applied via Terraform and go through the appropriate deployment pipelines, gates and approvals. No access = no ability to delete (see the first sketch after this list).

- In the case of deleting via the portal, a recovery via the portal would mean that this issue would not occur (provided the recovery was done prior to a terraform apply being accepted). Again, I would expect this issue to be picked up at the pipeline plan stage: the pre-apply gate/reviewer would see that the keyvault and its access policies were shown as needing to be created when they were not expected to be.
- The addition of a delete resource lock (applied outside of the module that creates the keyvault) would prevent the keyvault being deleted via Terraform or the portal without manual intervention to remove the lock (see the second sketch after this list):
  - The lock could be applied via a separate Terraform module and pipeline
  - The lock could be applied via Azure Policy at resource creation (this could be a policy specifically targeting keyvault resources)
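To illustrate the first point, restricting portal users to read-only access could be expressed as a role assignment along these lines (a sketch only; the group variable and the resource-group scope are assumptions on my part):

# Hypothetical object ID of the AAD group that your portal users belong to
variable "portal_users_group_object_id" {
  type = string
}

resource "azurerm_role_assignment" "portal_read_only" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Reader"
  principal_id         = var.portal_users_group_object_id
}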
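And to illustrate the delete lock, applied from a separate configuration it might look something like this (again a sketch; the variable carrying the keyvault ID is an assumption):

# Hypothetical variable holding the ID of the keyvault created by the other module
variable "key_vault_id" {
  type = string
}

resource "azurerm_management_lock" "keyvault_no_delete" {
  name       = "keyvault-cannot-delete"
  scope      = var.key_vault_id
  lock_level = "CanNotDelete"
  notes      = "Deleting this keyvault requires the lock to be removed first"
}

The lock then has to be removed (manually, or by that separate pipeline) before either a portal user or a terraform destroy can delete the vault.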
Hope that helps.
Lastly:
As you are referring (via an expression reference) to the attribute azurerm_key_vault.example.id in the resource below, this creates an implicit dependency on azurerm_key_vault.example. Therefore your explicit dependency is not required.
resource "azurerm_key_vault_access_policy" "example" {
key_vault_id = azurerm_key_vault.example.id # <------- Implicit dependency
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
depends_on = [azurerm_key_vault.example] # <------ This is not needed
}
As per The depends_on Meta-Argument - Configuration Language | Terraform | HashiCorp Developer:
You should use depends_on as a last resort because it can cause Terraform to create more conservative plans that replace more resources than necessary.
Happy Terraforming