Key Vault using wrong provider for refresh with multiple providers

I am having an issue where Terraform can successfully apply my configuration, which uses two azurerm providers (the default and one aliased), but cannot refresh successfully afterwards. The issue appears to be the same as one reported as a provider bug by another user; however, it was closed as a configuration issue (Terraform Trying To Create A Key Vault Multiple Times After Successfully Creating On Previous Run. PLEASE STOP CLOSING !!! · Issue #14808 · hashicorp/terraform-provider-azurerm · GitHub). I am struggling to see how this is a configuration issue rather than a bug; perhaps someone can provide some insight.

Here is my test configuration to reproduce the issue (shown with workarounds commented out):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.92.0"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_ids[terraform.workspace]
  tenant_id       = var.azure_tenant_id
}

# provider "azurerm" {
#   features {}
#   alias           = "main"
#   subscription_id = var.azure_subscription_ids[terraform.workspace]
#   tenant_id       = var.azure_tenant_id
# }

provider "azurerm" {
  features {}
  skip_provider_registration = true
  alias                      = "certificate"
  subscription_id            = var.dns_certificate_vault_subscription_id
  tenant_id                  = var.azure_tenant_id
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "main" {
  name     = var.resource_group.name
  location = var.resource_group.location
}

resource "azurerm_key_vault" "main" {
  #provider                 = azurerm.main
  name                     = "kvdnscertificatetest${terraform.workspace}"
  location                 = azurerm_resource_group.main.location
  resource_group_name      = azurerm_resource_group.main.name
  sku_name                 = "standard"
  tenant_id                = var.azure_tenant_id
  purge_protection_enabled = false
}

resource "azurerm_key_vault_access_policy" "main_client" {
  #provider                = azurerm.main
  key_vault_id            = azurerm_key_vault.main.id
  tenant_id               = data.azurerm_client_config.current.tenant_id
  object_id               = data.azurerm_client_config.current.object_id
  certificate_permissions = []
  key_permissions         = ["Get", "List"]
  secret_permissions      = ["Get", "List"]
  storage_permissions     = ["Get", "List"]
}

data "azurerm_key_vault" "certificate_vault" {
  provider            = azurerm.certificate
  resource_group_name = var.dns_certificate_vault_resource_group_name
  name                = var.dns_certificate_vault_name
}

data "azurerm_key_vault_secret" "test" {
  name         = var.dns_certificate_name
  key_vault_id = data.azurerm_key_vault.certificate_vault.id
}

resource "azurerm_app_service_certificate" "test" {
  name                = format("%s-%s", var.dns_certificate_vault_name, var.dns_certificate_name)
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  pfx_blob            = data.azurerm_key_vault_secret.test.value
}

This configuration can be applied successfully with terraform apply. However, when running terraform refresh (or any other command that refreshes state), the azurerm_key_vault and azurerm_key_vault_access_policy resources fail to refresh because they use the aliased provider's subscription instead of the default provider's.

The workaround I have found is to add an additional aliased provider that duplicates the default provider and reference it explicitly in those resources (i.e. uncommenting the commented-out provider bits above). With that workaround, everything behaves as expected, which seems consistent with what other users have reported.
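Concretely, the workaround is the commented-out pieces above, uncommented — a duplicate of the default provider configuration under an explicit alias, pinned on the affected resources:

```hcl
# Duplicate of the default provider, but with an explicit alias.
provider "azurerm" {
  features {}
  alias           = "main"
  subscription_id = var.azure_subscription_ids[terraform.workspace]
  tenant_id       = var.azure_tenant_id
}

resource "azurerm_key_vault" "main" {
  provider                 = azurerm.main # explicitly pin the provider configuration
  name                     = "kvdnscertificatetest${terraform.workspace}"
  location                 = azurerm_resource_group.main.location
  resource_group_name      = azurerm_resource_group.main.name
  sku_name                 = "standard"
  tenant_id                = var.azure_tenant_id
  purge_protection_enabled = false
}
```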

I would like to know what is wrong with my original configuration, given that this is supposedly a configuration issue and not a bug. The Terraform documentation states that resources use the default provider configuration when none is specified, and they do for the initial apply.
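For what it's worth, Terraform also allows selecting the default (unaliased) configuration explicitly by referencing the provider name without an alias. A sketch of what that would look like on one of the affected resources — I have not verified whether this avoids the refresh problem:

```hcl
resource "azurerm_key_vault_access_policy" "main_client" {
  # A provider reference without an alias explicitly selects the
  # default (unaliased) azurerm configuration.
  provider = azurerm

  key_vault_id       = azurerm_key_vault.main.id
  tenant_id          = data.azurerm_client_config.current.tenant_id
  object_id          = data.azurerm_client_config.current.object_id
  secret_permissions = ["Get"]
}
```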

I am using Terraform 1.1.4 and azurerm 2.92.0 (and saw the same behaviour on earlier versions as well).