Continual diff in google_runtimeconfig_variable

Hello.
We are running Terraform 0.12.7 with version 2.13.0 of all the Google providers.

I have a situation where each time I do a terraform plan/apply, it shows that the same resource needs to be updated, even though the update writes the same value each time (as verified by reading the runtime config variable back via gcloud).

My terraform plan/apply output:

Terraform will perform the following actions:

  # module.examplecorp-uat.google_runtimeconfig_variable.data_transfer_account will be updated in-place
  ~ resource "google_runtimeconfig_variable" "data_transfer_account" {
        id          = "projects/examplecorp-uat-pipeline-XXXX/configs/network-runtime-config/variables/examplecorp/infra/data-transfer-account"
        name        = "examplecorp/infra/data-transfer-account"
        parent      = "network-runtime-config"
        project     = "examplecorp-uat-pipeline-XXXX"
      ~ text        = "serviceAccount:project-XXXXXXXXXXX@storage-transfer-service.iam.gserviceaccount.com" -> (known after apply)
        update_time = "2019-08-08T20:35:40.457462484Z"
    }

  # module.examplecorp-uat.module.data_transfer_project.data.google_storage_transfer_project_service_account.storage_transfer_sa will be read during apply
  # (config refers to values not yet known)
 <= data "google_storage_transfer_project_service_account" "storage_transfer_sa"  {
      + email   = (known after apply)
      + id      = (known after apply)
      + project = "examplecorp-uat-data-transfer-XXXX"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.examplecorp-uat.module.data_transfer_project.data.google_storage_transfer_project_service_account.storage_transfer_sa: Refreshing state...

The associated resource declaration is:

resource "google_runtimeconfig_variable" "data_transfer_account" {
  parent  = google_runtimeconfig_config.environment_config.name
  name    = "examplecorp/infra/data-transfer-account"
  text    = module.data_transfer_project.data_transfer_service_account
  project = module.data_pipeline_project.project_id
}

The text argument consumes the following output:

output "data_transfer_service_account" {
  description = "the data transfer service account"
  value       = "serviceAccount:${data.google_storage_transfer_project_service_account.storage_transfer_sa.email}"
}

We have many other runtime config variables that get updated once and do not show up in a diff unless they have actually changed. The only one that continually shows a diff is the one whose value relies on a data source (https://www.terraform.io/docs/providers/google/d/google_storage_transfer_project_service_account.html). I’m thinking that this is the problem, but I’m not sure how to go about retrieving the service account name in a different way.

This page (https://www.terraform.io/docs/configuration/data-sources.html#data-resource-behavior) seems to imply that, for whatever reason, this particular data resource cannot be read during the plan phase, which is why I keep ending up in a situation where Terraform thinks a change is needed.

Any tips on how to change things so Terraform knows that no change is actually needed?
TIA.

Hi @footshooter!

I think you have the right idea that something about the module.examplecorp-uat.module.data_transfer_project.data.google_storage_transfer_project_service_account.storage_transfer_sa data resource is forcing it to be read during apply every time, and thus it can never converge.

A common cause of that is using depends_on with data resources. If you do that, then Terraform must always pessimistically assume the data resource is going to change at apply time because it can’t tell which aspect of the object you’re depending on matters for the data resource result. If you’re using depends_on in that data block, you’ll need to remove it and represent that dependency some other way to get the behavior you are looking for.
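
For illustration (with hypothetical names, not your actual configuration), the problematic shape looks something like this:

resource "google_project_service" "storage_transfer" {
  project = "my-project-id"
  service = "storagetransfer.googleapis.com"
}

data "google_storage_transfer_project_service_account" "example" {
  project = "my-project-id"

  # With this in place, Terraform must assume that any change to the
  # referenced resource could affect the result, so the read is
  # deferred to apply time on every run and the plan never converges.
  depends_on = [google_project_service.storage_transfer]
}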

If you’re not sure how to do that, or if depends_on doesn’t seem to be the problem here, please share the configuration of that data resource and we can see what change might be possible to let this converge.

@apparentlymart
Wow, you nailed it! My data block is shown below. I commented out the depends_on and then ran terraform plan, which showed no differences.

data "google_storage_transfer_project_service_account" "storage_transfer_sa" {
  project    = module.data-transfer-project.project_id
  depends_on = [module.data-transfer-project]
}

Are there any patterns/docs you could point out that would help me come up with a strategy for handling dependencies with data blocks? I’ll do a clean apply without the depends_on and see if it’s actually required as well.

I’ve been away from Terraform for some years now. Now that I am back using the tool at work, I’m impressed to see that you are still giving timely, accurate, detailed, professional and generally awesome support to the community! Thanks for all you do for the Terraform community.

I removed the depends_on line and applied to a clean ephemeral environment, and everything worked fine without it, so I think the depends_on was preemptive and not actually required.

I think i’m good now!
Thanks again!

Hi @footshooter!

Indeed, most of the time depends_on isn’t needed because Terraform can infer the required dependencies by inspecting the other expressions you write in the configuration block. In this case, the configuration refers to module.data-transfer-project.project_id, so the data resource indirectly depends on everything that contributes to the value of the output "project_id" block in your module, which seems to be enough here.
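
To make that concrete, the reference alone carries the dependency, so the block can be reduced to something like this (a minimal sketch based on the block you posted):

data "google_storage_transfer_project_service_account" "storage_transfer_sa" {
  # This reference is what establishes the dependency on the module's
  # project, so no depends_on is needed.
  project = module.data-transfer-project.project_id
}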


The usual reason why depends_on might be necessary is if there is a relationship between two resources that Terraform can’t see itself, because the action that causes the dependency is being done by some component other than Terraform.

For example, if you are creating a compute instance with an associated role, and that role also has a policy that is represented by a separate resource, the compute instance automatically depends on the role but not on the role policy, because the policy configuration only refers to the role. But software running on that compute instance (which Terraform can’t see) likely assumes it will have the rights granted by the policy, so we’d use depends_on to tell Terraform that the compute instance also depends on the policy, ensuring it won’t boot up until the necessary policy is in place.
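
As a rough sketch of that pattern using the Google provider (a service account standing in for the role, an IAM member grant for the policy, and made-up names throughout):

variable "project_id" {}

resource "google_service_account" "app" {
  project    = var.project_id
  account_id = "app-instance"
}

resource "google_project_iam_member" "app_storage" {
  # The grant refers to the service account, so the dependency points
  # from the grant to the account, not the other way around.
  project = var.project_id
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${google_service_account.app.email}"
}

resource "google_compute_instance" "app" {
  project      = var.project_id
  name         = "app"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }

  service_account {
    email  = google_service_account.app.email
    scopes = ["cloud-platform"]
  }

  # Software on this instance assumes the storage grant is already in
  # place, but nothing above refers to the grant itself, so we state
  # that hidden dependency explicitly.
  depends_on = [google_project_iam_member.app_storage]
}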
