Terraform Remote state error with what is currently deployed

Hello, this is a pretty simple bug to explain, but so far I cannot find the reason behind it.

Context

I am using Terraform with OpenStack as the cloud provider. I haven't changed my Terraform configuration; however, all the previously created compute instances need to be "recreated" for an unknown reason.

Terraform detects an addition in the `network` block of the compute instances, but what it tries to add is already there.

Output of terraform plan:

```hcl
# module.toto.openstack_compute_instance_v2.virtual_machine must be replaced
-/+ resource "openstack_compute_instance_v2" "virtual_machine" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      ~ all_metadata        = {} -> (known after apply)
      ~ all_tags            = [
          - "preprod",
        ] -> (known after apply)
      ~ flavor_id           = "5b2af4c8-d265-4a19-8b9d-51db0cc1d300" -> (known after apply)
      ~ id                  = "3b19e42a-169a-47b9-aaab-5dc33e218bfd" -> (known after apply)
      ~ image_id            = "Attempt to boot from volume - no image supplied" -> (known after apply)
      + image_name          = (known after apply)
        name                = "xxxxxx"
      ~ region              = "RegionOne" -> (known after apply)
        tags                = [
            "xxxxx",
        ]
        # (7 unchanged attributes hidden)

      ~ block_device {
            # (6 unchanged attributes hidden)
        }
      ~ block_device {
          - volume_size           = 0 -> null
            # (5 unchanged attributes hidden)
        }

      + network { # forces replacement
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + floating_ip    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = "166722ca-6e4f-4cfb-bc46-b2235a94388f" # forces replacement
        }
    }
```

Pulling the Terraform remote state for that resource:

```hcl
# module.toto.openstack_compute_instance_v2.virtual_machine:
resource "openstack_compute_instance_v2" "virtual_machine" {
    access_ip_v4        = "XXXXXXXXX"
    all_metadata        = {}
    all_tags            = [
        "preprod",
    ]
    availability_zone   = "xxxx"
    flavor_id           = "xxxx"
    flavor_name         = "m1.large-amd-sev"
    force_delete        = false
    id                  = "xxxx"
    image_id            = "Attempt to boot from volume - no image supplied"
    name                = "xxxx"
    power_state         = "active"
    region              = "RegionOne"
    security_groups     = [
        "xxxx",
    ]
    stop_before_destroy = false
    tags                = [
        "xxx",
    ]
    user_data           = "xxxxx"

    block_device {
        boot_index            = 0
        delete_on_termination = true
        destination_type      = "volume"
        source_type           = "image"
        uuid                  = "xxxxx"
        volume_size           = 20
    }
    block_device {
        boot_index            = -1
        delete_on_termination = true
        destination_type      = "volume"
        source_type           = "volume"
        uuid                  = "xxxxxx"
        volume_size           = 0
    }

    network {
        access_network = false
        fixed_ip_v4    = "192.168.31.38"
        mac            = "fa:16:3e:76:43:72"
        name           = "xxxxxx"
        uuid           = "166722ca-6e4f-4cfb-bc46-b2235a94388f"
    }
}
```

As we can observe here, the only thing that requires a replacement is the `network` block, and specifically the `uuid` of the network, which clearly has not changed.

Note:

The remote state is stored in a PostgreSQL database

I would be very interested in:

  1. Why and how this kind of diff can fail.
  2. How to mitigate them.

So far I am a bit blocked, as I don't know whether it is even possible to ignore these changes.
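For what it's worth, Terraform does have a `lifecycle` meta-argument that can suppress diffs on specific attributes. A minimal sketch against the resource from the plan above (untested here; ignoring the whole `network` block may be too coarse for some setups):

```hcl
resource "openstack_compute_instance_v2" "virtual_machine" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Ignore any drift reported on the network blocks, so a spurious
    # diff there no longer forces a replacement.
    ignore_changes = [network]
  }
}
```

Note that `ignore_changes` only tells Terraform to disregard differences between the configuration and the state; it does not explain why the diff appears in the first place.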

Thanks in advance,

That does seem odd. Is there any possibility that the version of terraform-provider-openstack you are using could have changed?

It is indeed… Unfortunately not: I have a `providers.tf` file tracked in Git which has not changed. Here is the configuration, if it helps.

```hcl
# Define required providers
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.48.0"
    }
  }
  backend "pg" {
    schema_name = "xxx"
    conn_str    = "xxxxx"
  }
}

provider "openstack" {
xxxxx
}
```
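One way to double-check that the provider version really hasn't moved is to inspect the dependency lock file (a sketch; `.terraform.lock.hcl` is written by `terraform init` on Terraform >= 0.14):

```shell
# Show the Terraform and provider versions actually in use
terraform version

# Inspect the exact provider version and hashes recorded at init time
cat .terraform.lock.hcl
```

A `~> 1.48.0` constraint still allows patch upgrades (1.48.1, 1.48.2, …), so the lock file is the only place that records the exact version used.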

So, after further investigation, in case someone ends up in the same situation:

`terraform plan`/`apply` only compares the remote state with the actual infrastructure. So basically, Terraform is saying here that some changes were made outside of it.

I was not able to find the root cause of the diff, and since Terraform is only executed by a CI/CD pipeline, nothing should have been changed outside of it.

That being said, to mitigate it I would recommend using `terraform plan -refresh-only` and then `terraform apply -refresh-only`.
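The workflow looks like this (review the refresh-only plan carefully before accepting it):

```shell
# Preview the state-only changes: nothing on the infrastructure side
# is touched, Terraform only shows how it would update the state file.
terraform plan -refresh-only

# Accept the state update once the preview looks correct.
terraform apply -refresh-only
```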

TL;DR: This will sync your remote state (or local state file) with the actual infrastructure. So there is no change on the infra side, only a "resync" of your state. Use with caution.

If someone ever finds the root cause of such a thing, I would be grateful.

Thanks,

The above post from @scottgarner483 is spam, posting an incorrect answer which should be disregarded, whilst engaging in link reputation farming abuse. (The full stop at the end of the first sentence is a camouflaged link.)

I have flagged it for moderator attention.