Data sources forcing replacement even though nothing has changed

I’ve experienced issues with using data sources, particularly within Azure. Some resources need values such as vnet IDs passed to them, but every time I run an apply, the resource “must be replaced” because the vnet ID is “known after apply”. So even though nothing has changed, things must be replaced. I wish that when an apply is run, any data sources would be read first, and anything that uses those values would be updated and checked against state, because right now resources are being forced to be replaced even though nothing has changed.
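
Here’s a simplified sketch of the pattern I mean (the resource types and names are made up, but the shape matches my real configuration):

```hcl
# A resource group managed elsewhere in this same configuration.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "eastus"
}

# This data source looks up the vnet. Because it depends on the
# managed resource group above, any pending change to that resource
# group pushes this read to the apply step, making its attributes
# "known after apply".
data "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  resource_group_name = azurerm_resource_group.example.name
}

# This resource takes the vnet ID. Whenever that ID is unknown at
# plan time, the plan says this resource "must be replaced", even
# though nothing about the vnet has actually changed.
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
  name                  = "example-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = "example.internal"
  virtual_network_id    = data.azurerm_virtual_network.example.id
}
```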

Hi @cbus-guy,

It’s hard to guess what might be going on here without seeing more details, but I do have some general ideas about what can cause this sort of thing, so hopefully this will be useful in dealing with the problem.

Terraform aims to read data sources during the planning step whenever possible, but there are two main situations where that isn’t possible:

  • If the configuration for the data resource includes unknown values from managed resources elsewhere in the configuration. In that case, Terraform must wait until the apply step to read the data resource because the configuration isn’t yet complete and so it isn’t possible to ask the provider to read the data.
  • If the data resource depends on some other resource that also has a pending change in the current plan. In this case it would technically be possible for Terraform to ask the provider to read the data source during planning, but doing so is likely to produce the wrong result: either the object hasn’t been created at all yet and so the read would fail, or the object exists but hasn’t yet been updated and so the result would be the old values for that object, rather than the results of the update. (A sketch of this second situation follows this list.)
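
To make that second situation concrete, here’s a hypothetical sketch (these azurerm names are mine, not from your configuration). All of the data source’s arguments are constant values, yet the depends_on link means the read must wait whenever the storage account has a pending change:

```hcl
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = "example-rg"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    # Editing this tag creates a pending change on the storage
    # account in the next plan.
    environment = "dev"
  }
}

# Even though every argument here is a constant, depends_on ties
# this data source to the storage account. Whenever the storage
# account has a pending change in the plan, Terraform must defer
# reading this data source until the apply step, and its attributes
# become "known after apply".
data "azurerm_storage_account" "example" {
  name                = "examplestorageacct"
  resource_group_name = "example-rg"

  depends_on = [azurerm_storage_account.example]
}
```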

It sounds like you have a situation where you already know that the data result will be the same as it was before, but Terraform cannot prove that, so it must conservatively assume that the value might change. Otherwise, if the value did change during apply, Terraform would be stuck: it can’t replace an object if it didn’t propose to do so during the plan step, and it also can’t just quietly leave the old object in place, because it would no longer match the configuration.

The only viable alternative to the current behavior would be for Terraform to fail explicitly during the apply step when it discovers an unexpected need to replace an object. I think most folks would consider that worse, because Terraform would far more often fail partway through applying a plan.

I think probably the best path forward for you is to understand why some of your data sources are being deferred to the apply step in every plan. That suggests something else in your configuration isn’t converging, where “converging” means that after terraform apply succeeds your remote system matches the configuration, so a subsequent plan reports “No changes.” If you find and fix whichever managed resource is failing to converge, Terraform should have no reason to defer reading your data source until the apply step, and you should no longer run into this problem.
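
As a hypothetical example of a resource that never converges (again, not from your configuration), consider one that uses the timestamp() function in an argument. Its result changes on every run, so every plan proposes an update, and every data source that depends on it is deferred to the apply step on every plan:

```hcl
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "eastus"

  tags = {
    # timestamp() returns a new value on every plan, so this
    # resource shows a pending update on every run and never
    # converges.
    last_planned = timestamp()
  }
}

# Because the resource group above always has a pending change,
# this data source is deferred to the apply step on every plan,
# and anything consuming its attributes sees "known after apply".
data "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  resource_group_name = azurerm_resource_group.example.name
}
```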

I’m talking only in very general terms here because you didn’t share much detail about how your configuration is designed. If you have specific questions about what I’ve shared, it would help if you could post more details about exactly what behavior you are seeing, ideally including the full output from terraform plan showing the proposed set of changes that’s causing problems for you, along with the configuration of the objects involved in that plan.

I’ve worked around the issue by running terraform refresh prior to running terraform plan.