Update/replace resource when a dependency is changed #8099

I’m creating this topic to continue discussion from
Update/replace resource when a dependency is changed
, which I’m locking so that people can subscribe just for status updates. I’m happy to see people helping each other work around this, and I’m hoping that moving the discussion here will support that without creating excess noise for people who just want succinct “it’s done”-type updates in GitHub.


Hi, I would like some help with a workaround for this exact issue until HashiCorp releases something official.
My problem is with a Kubernetes ConfigMap resource that a Pod depends on.
I would like every change to a variable in the ConfigMap to recreate the dependent resource (the Pod) so it picks up the change; for now I have to recreate it manually through kubectl. Thanks.

Hi. I think replace_on_change is a good idea.
My case also involves Kubernetes. I have a Pod that uses a PVC, which in turn uses a PV. When the PV is modified, Terraform should delete the Pod, then the PVC, then the PV, and then create the PV, PVC, and Pod again.
What happens now is that the PV gets stuck in Still destroying... until it times out.
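
For readers landing here later: Terraform v1.2 introduced the replace_triggered_by lifecycle argument (mentioned further down this thread), which can express this cascade. A minimal sketch, assuming hypothetical resource names kubernetes_persistent_volume.pv, kubernetes_persistent_volume_claim.pvc, and kubernetes_pod.app:

resource "kubernetes_persistent_volume_claim" "pvc" {
  # ... claim spec referencing kubernetes_persistent_volume.pv ...

  lifecycle {
    # Plan to replace the PVC whenever the PV changes.
    replace_triggered_by = [kubernetes_persistent_volume.pv]
  }
}

resource "kubernetes_pod" "app" {
  # ... pod spec referencing the PVC ...

  lifecycle {
    # Plan to replace the Pod whenever the PVC changes, so the
    # destroy order becomes Pod, then PVC, then PV.
    replace_triggered_by = [kubernetes_persistent_volume_claim.pvc]
  }
}

I haven’t verified this against the Still destroying... timeout specifically, but it should at least get the delete ordering right.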

I had a similar issue. This is how I ended up solving it:

  1. Populate a kubernetes_config_map resource from a local_file resource:

resource "local_file" "conf" {
  filename = "files/config.conf"
  content  = "some content"
}

resource "kubernetes_config_map" "conf" {
  metadata {
    name      = "conf"
    namespace = "apps"
  }
  data = {
    "config-file" = local_file.conf.content
  }
}
  2. In the Deployment, I set an environment variable that is a SHA-1 hash of the config file. This forces a new rollout whenever the config file changes:

resource "kubernetes_deployment" "deploy" {
  ...
  spec {
    template {
      spec {
        container {
          env {
            name  = "CONFIG_SHA1"
            value = sha1(local_file.conf.content)
          }
  ...
}

Hope that helps.
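
A note on why this works, as I understand it: the sha1(...) value lives inside the Deployment’s pod template, so when the file content changes Terraform plans an in-place update of the kubernetes_deployment resource (not a replacement), and Kubernetes then performs a rolling restart of the pods to pick up the new configuration.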

Came here from the original GitHub issue, and I need help finding a workaround.

I am using the Auth0 Terraform provider to create clients in Auth0, and then saving the secret of the created client in Azure Key Vault using the AzureRM Terraform provider. Here is my resource configuration:

resource "auth0_client" "client" {
   name = "Client Name"
   app_type = "non_interactive"
   client_secret_rotation_trigger = {
     version = 1
   }
}

data azurerm_key_vault "keyvault" {
  name = "My KeyVault"
  resource_group_name = "My RG"
}

resource "azurerm_key_vault_secret" "client_secret" {
  key_vault_id = data.azurerm_key_vault.keyvault.id
  name = "my-secret-name"
  value =  auth0_client.client.client_secret
}

When auth0_client.client.client_secret_rotation_trigger changes, it triggers a rotation of the client secret, which should in turn update the client secret stored in Key Vault, but that doesn’t happen.
The only way I have managed to change the secret stored in Key Vault is by running Terraform a second time.

I have tried various workarounds to make this work: using null_resource, using depends_on, even setting tags on azurerm_key_vault_secret.client_secret that use the same input as auth0_client.client.client_secret_rotation_trigger. That last one causes the apply to fail altogether with this error:

Error: Provider produced inconsistent final plan

When expanding the plan for module.test.azurerm_key_vault_secret.client_secret
to include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
.value: inconsistent values for sensitive attribute.

This is a bug in the provider, which should be reported in the provider’s own
issue tracker.

Any ideas on what else I can do to make this work within current Terraform limitations?
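
One hedged option for later readers: Terraform v1.2 added the replace_triggered_by lifecycle argument, which can force the Key Vault secret to be recreated whenever the rotated client secret changes. A sketch reusing the resource names above; I have not verified it against the sensitive-value planning error quoted here, so treat it as a starting point:

resource "azurerm_key_vault_secret" "client_secret" {
  key_vault_id = data.azurerm_key_vault.keyvault.id
  name         = "my-secret-name"
  value        = auth0_client.client.client_secret

  lifecycle {
    # Plan to replace this secret whenever the upstream
    # client_secret attribute changes, rather than relying on an
    # in-place update of a value that is only known after apply.
    replace_triggered_by = [auth0_client.client.client_secret]
  }
}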

So, just to get some clarity on this: what is the status of the original issue? It was reported a few years back and, to my knowledge, has not been resolved. Are there any plans to address it? After the issue was locked in GitHub, visitors can no longer leave a thumbs-up to show that the issue is still relevant to them.

There hasn’t been any movement in GitHub for over a year, and I think it could use a little update message, even if it were just “Haven’t had a chance to address it yet, but it’s still on the backlog” or “Sorry, we won’t address it”.

The only reason I am asking is that I have run into this problem numerous times over the last few months and each time ended up at that GitHub issue. I am not familiar enough with Go to propose a PR myself, but I would love to know whether there are any generic solutions other than manually tainting the resources.

Thanks in advance!


Has there been any work on this subject since then?

@LoicMahieu, the linked issue was closed with the introduction of the replace_triggered_by feature. Is that what you’re asking?

2 posts were split to a new topic: Replace one object when another one is replaced using only provider logic

Hi, no, replace_triggered_by is not what I am asking for. Is there something like change_triggered_by? I do not want the resource to be replaced; it just needs to be updated. Thanks.
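
As far as I know there is no built-in update-in-place analogue of replace_triggered_by. The usual workaround is the hashing pattern shown earlier in this thread: interpolate the dependency (or a hash of it) into any argument the provider can update in place, so a change in the dependency surfaces as an ordinary update. A sketch, assuming hypothetical names kubernetes_config_map.conf and kubernetes_deployment.app, and that an annotation is an acceptable place for the hash:

resource "kubernetes_deployment" "app" {
  # ...
  spec {
    template {
      metadata {
        annotations = {
          # Any change to the ConfigMap's data changes this hash,
          # which Terraform applies as an in-place update (and
          # Kubernetes rolls the pods).
          "config-sha1" = sha1(jsonencode(kubernetes_config_map.conf.data))
        }
      }
      # ...
    }
  }
}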