Forced replacement caused by a location (non-)change

Hi

I came across a very tricky situation and I hope you can help me figure this out.

I have some Azure VMs deployed using the same module, one per environment. Everything seems to be working fine for all of them except the “acceptance” VM. For that one, all of a sudden, terraform plan reports that the instance must be replaced, claiming the location has changed. The location was not changed at all, though.

Some actions:

  • I’ve double-checked the location in the state file, and it’s using the correct one.
  • There are no changes to the parameters for this environment that could cause the replacement.

Snippet from the plan output

  # module.windows.azurerm_windows_virtual_machine.windows-vm["vm-app-01"] must be replaced
-/+ resource "azurerm_windows_virtual_machine" "windows-vm" {
      ~ admin_password               = (sensitive value)
      - availability_set_id          = "" -> null
      - dedicated_host_id            = "" -> null
      - encryption_at_host_enabled   = false -> null
      - eviction_policy              = "" -> null
      ~ id                           = "/subscriptions/..." -> (known after apply)
      - license_type                 = "" -> null
      ~ location                     = "westeurope" -> (known after apply) # forces replacement
        name                         = "vm-app-01"

Snippet from the state file

          "attributes": {
            "id": "/subscriptions/...",
            "location": "westeurope",
            "name": "vm-app-01",
            "tags": {

My question is: why is Terraform trying to re-create the resource, citing a location change, only for this environment?

Any clues would be really appreciated.

Hi @hutger,

We can’t tell exactly what is going on without the associated configuration, but the plan indicates that the value being assigned to the location attribute is not known during the plan. It may be that the location is going to end up being westeurope again after all the dependencies have been applied, but Terraform cannot predict that will happen, and therefore must plan to replace the instance.

Hi @jbardin thanks for your input.

The location is obtained from a data source, e.g.:

data "azurerm_resource_group" "example" {
  name = "dsrg_test"
}

resource "azurerm_windows_virtual_machine" "example" {
...
  location             = data.azurerm_resource_group.example.location
...
}

I’ve managed to overcome this by using a variable with the location.

  location             = var.location

For some reason, the location is not being pulled from the data source for one environment only, whereas it works for the others.

The most common reason for a data source not being read during the plan is due to inappropriate use of depends_on. Since you mentioned these are located within modules, a depends_on within a module block would be my best guess here.
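As an illustration (the module source, argument names, and the dependency are hypothetical, since the original configuration was not shared), a depends_on on the module block makes every data source inside the module depend on it too:

```hcl
# Hypothetical sketch of the problematic pattern: depends_on on the module
# block forces everything inside the module, including data sources, to wait
# on the listed dependency.
module "windows" {
  source   = "./modules/windows-vm"
  vm_names = ["vm-app-01"]

  # While this dependency has any pending changes, data sources inside the
  # module cannot be read during plan. Their attributes, including the
  # resource group's location, become (known after apply), which forces
  # replacement of anything that uses them in an immutable argument.
  depends_on = [azurerm_resource_group.acceptance]
}
```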

If the location is always known statically, then the data source is not needed here and the variable is fine. If you do require looking up the location via a data source, you will need to ensure that the data source can always be read during the plan, in order to prevent unexpected changes like the one you have shown above.
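If the data source is still needed, one way to keep it readable during the plan (sketched here with hypothetical module and variable names) is to drop depends_on from the module block and pass any required values in explicitly, so the data source's only dependencies are ordinary references:

```hcl
# Hypothetical sketch: pass the resource group name into the module instead
# of using depends_on on the module block.
module "windows" {
  source              = "./modules/windows-vm"
  resource_group_name = azurerm_resource_group.acceptance.name
}

# Inside the module:
variable "resource_group_name" {
  type = string
}

# With no depends_on in the chain, this data source can be read during the
# plan whenever the resource group itself has no pending changes, so its
# location attribute stays known instead of (known after apply).
data "azurerm_resource_group" "this" {
  name = var.resource_group_name
}
```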

Hi @jbardin, got what you mean.

I’ve just found the root cause in my code, which was an inappropriate chain of depends_on.

Thanks for that. :+1: Really appreciated