Terraform detects more changes for AWS resources when running terraform plan

Hi,
I added and changed a few variables inside a Terraform module and now I am seeing more changes than expected. I should only be seeing a few changes and 2 additions, but when I run terraform plan it also reports some destroys. These destroys shouldn't be showing up. Does anyone know why I am seeing destroys when terraform plan is run?

It seems there is a disconnect between the Terraform changes I made versus what terraform plan is detecting.

Hi @kf200467,

In order to advise on this we’d need to see the output from terraform plan and the configuration changes you made. Unfortunately without seeing the details it’s impossible to guess what might be happening for you here.

Below is the plan output from one such machine, and it is confusing why there are so many changes when I only added the 2 variables shown below, i.e. cluster and dc.

# module.xxxxx_ir.aws_instance.ec2_machine[1] must be replaced
-/+ resource "aws_instance" "ec2_machine" {
      ~ arn                                  = "arn:aws:ec2:eu-west-1:xxxxxxxxxx:instance/i-095f82aa33ee1925b" -> (known after apply)
      ~ availability_zone                    = "eu-west-1b" -> (known after apply)
      ~ cpu_core_count                       = 4 -> (known after apply)
      ~ cpu_threads_per_core                 = 2 -> (known after apply)
      + host_id                              = (known after apply)
      ~ id                                   = "i-095f82aa33ee1925b" -> (known after apply)
      ~ instance_state                       = "running" -> (known after apply)
      ~ ipv6_addresses                       = [
          - "2a0c:93c0:8022:b01:efa:73bb:7342:d06c",
        ] -> (known after apply)
      + key_name                             = (known after apply)
      + network_interface_id                 = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      ~ primary_network_interface_id         = "eni-02e505a4b47a68bab" -> (known after apply)
      ~ private_dns                          = "ip-10-62-40-8.eu-west-1.compute.internal" -> (known after apply)
      ~ private_ip                           = "10.62.40.8" -> (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      ~ security_groups                      = [] -> (known after apply)
      ~ tags                                 = {
          ~ "CxxxtApplicationName"        = "xxx-primary-scheduler" -> "xxx-eu-primary"
          ~ "CxxxtApplicationRole"        = "Application" -> "xxxx"
          + "cluster"                       = "xxx-eu-primary"
          ~ "dc"                            = "ir" -> "eu-west-1"
            # (13 unchanged elements hidden)
        }
      ~ tenancy                              = "default" -> (known after apply)
      ~ user_data                            = "81305eb1f770231c9b7b2378096df98bfef8a703" -> "8a95cf117de335bf7531519b956950d02a2118a8" # forces replacement
      ~ volume_tags                          = {} -> (known after apply)
      ~ vpc_security_group_ids               = [
          + "sg-0c54be654eabacab2",
            # (3 unchanged elements hidden)
        ]
        # (12 unchanged attributes hidden)
    }

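For anyone else hitting this: the only attribute in that output marked "# forces replacement" is user_data; everything shown as "(known after apply)" is just a consequence of the instance being replaced, not a separate change. A plausible explanation (an assumption on my part, since the module source isn't shown) is that the new cluster and dc variables also feed into the user_data template, so the rendered content, and with it the hash Terraform shows in the plan, changed. A hypothetical sketch of that pattern:

# Hypothetical module code: the same variables that set the new tags are
# also interpolated into user_data, so changing them changes the rendered
# script and its hash.
resource "aws_instance" "ec2_machine" {
  # ... other arguments unchanged ...

  tags = {
    cluster = var.cluster
    dc      = var.dc
  }

  user_data = templatefile("${path.module}/userdata.sh.tpl", {
    cluster = var.cluster
    dc      = var.dc
  })
}

On the provider version used here, a user_data change forces replacement, which is exactly what the plan is reporting.
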
Found the issue: user_data being changed was causing the infrastructure to be re-created.
To resolve it I followed another thread on GitHub; maybe this will benefit some other folks.
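
For reference, the workaround usually suggested in those GitHub threads (a sketch, not necessarily the exact change made here) is to tell Terraform to ignore drift in user_data, so a hash change no longer forces the instance to be replaced:

resource "aws_instance" "ec2_machine" {
  # ... existing arguments ...

  lifecycle {
    # Assumption: user_data only matters at first boot, so later edits to it
    # can be ignored instead of recreating the instance.
    ignore_changes = [user_data]
  }
}

With ignore_changes, updated user_data only takes effect on instances created after the change. Newer versions of the AWS provider also expose a user_data_replace_on_change argument on aws_instance, which controls whether a change to user_data replaces the instance.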