Race condition in resource update

Dear community,

I have some Terraform code that creates multiple key-value pairs in Azure App Configuration from an input YAML file:

locals {
  yaml_input = yamldecode(file("../../addons.yml"))["addons"]

  version_map = flatten([
    for key, value in local.yaml_input : [
      for cmp, cmp_val in value : {
        addon_name    = cmp
        addon_version = cmp_val
        sem_version   = key
      }
    ]
  ])
}

resource "azurerm_app_configuration_key" "test" {
  count = length(local.version_map)
  configuration_store_id = azurerm_app_configuration.appconf.id
  type  = "kv"
  key   = "adpk8s-addons-versionmaps-${lookup(element(local.version_map,count.index), "addon_name")}-${lookup(element(local.version_map,count.index), "sem_version")}"
  value = lookup(element(local.version_map,count.index), "addon_version")
  depends_on = [azurerm_role_assignment.appconf_dataowner]
}

My YAML input file looks like this:

addons:
  6.1.0:
    addon1: 1.4.0
    addon2: 1.2.0
    addon3: 1.0.0
    addon4: 4.1.0
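
For reference, with that input local.version_map evaluates to a flat list of objects, one per addon:

version_map = [
  { addon_name = "addon1", addon_version = "1.4.0", sem_version = "6.1.0" },
  { addon_name = "addon2", addon_version = "1.2.0", sem_version = "6.1.0" },
  { addon_name = "addon3", addon_version = "1.0.0", sem_version = "6.1.0" },
  { addon_name = "addon4", addon_version = "4.1.0", sem_version = "6.1.0" },
]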

When I add a new version and its components to the YAML, the resources (KV pairs) are updated/recreated in parallel, and I run into a race condition with the error: a resource with ID .... already exists. To be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_app_configuration_key" for more information.

Rerunning the apply fixes it, as one would expect. Is there any way to fix this by introducing some kind of delay?

Thanks for your time!

Hi @prasad.devaraj891,

The example shown here doesn’t explain why you are seeing the errors about existing IDs. This could be a problem with the provider returning before the old resource has been completely removed, the remote service reporting the resource as removed before it actually is, or a dependency forcing this resource to be create_before_destroy when that is not supported.
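
To illustrate that last case (a purely hypothetical sketch, not taken from your configuration, with terraform_data standing in for any resource that reads the keys): create_before_destroy propagates to a resource’s dependencies, so a consumer like the one below would silently force the keys themselves to be replaced create-before-destroy, and the new key would collide with the old one still present in the store.

resource "terraform_data" "addon_consumer" {
  # hypothetical consumer that reads one of the keys created above
  input = azurerm_app_configuration_key.test[0].key

  lifecycle {
    # create_before_destroy is propagated to everything this resource depends
    # on, so azurerm_app_configuration_key.test would also be replaced
    # create-before-destroy even though overlapping keys are not supported
    create_before_destroy = true
  }
}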

Aside from that however, I think you should be able to get a lot more flexibility if you avoid using count here and only update the specific resource instances as necessary, preventing unnecessary recreation of resources.

Leaving the version_map declaration as-is for now, we could use for_each in the resource to simplify things:

resource "azurerm_app_configuration_key" "test" {
  for_each = { for v in local.version_map: v.addon_name => v }
  configuration_store_id = azurerm_app_configuration.appconf.id
  type  = "kv"
  key   = "adpk8s-addons-versionmaps-${each.key}-${each.value.sem_version}"
  value = each.value.addon_version
  depends_on = [azurerm_role_assignment.appconf_dataowner]
}
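
If the keys created by the count version are already in your state, switching to for_each changes the instance addresses (from test[0] … test[3] to test["addon1-6.1.0"] and so on), which would otherwise cause a destroy and recreate. A moved block per instance (or terraform state mv) avoids that; for example, assuming the index-to-addon mapping from your sample YAML:

moved {
  from = azurerm_app_configuration_key.test[0]
  to   = azurerm_app_configuration_key.test["addon1-6.1.0"]
}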