Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes

Hello,

I went through the [plan-apply] process and confirmed that it applied normally.
However, if I run [terraform plan] again immediately afterwards, it reports an update(?) as shown below, even though the final result is the [No changes. Your infrastructure matches the configuration.] output.

What the hell is this?

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

# module.test has changed
   ~ resource "aws_ecs" "test" {
         id              = "test_ecs"
       + labels          = {}
         tags            = {
             "Owner"                             = "Devops"
             "User"                              = "Terraform"
         }
         # (14 unchanged attributes hidden)
         # (3 unchanged blocks hidden)
     }

   # module.ecs.aws_security_group.ecs-sg has changed
   ~ resource "aws_security_group" "ecs-sg" {
         id                     = "sg-0f88"
       ~ ingress                = [
           + {
               + cidr_blocks      = []
               + from_port        = 0
               + ipv6_cidr_blocks = []
               + prefix_list_ids  = []
               + protocol         = "tcp"
               + security_groups  = [
                   + "sg-0f088",
                 ]
               + self             = false
               + to_port          = 65535
             },
           + {
               + cidr_blocks      = []
               + from_port        = 443
               + ipv6_cidr_blocks = []
               + prefix_list_ids  = []
               + protocol         = "tcp"
               + security_groups  = [
                   + "sg-0f088",
                 ]
               + self             = false
               + to_port          = 443
             },
         ]
         name                   = "ecs-sg"
         tags                   = {
             "Owner"      = "Devops"
             "User"       = "Terraform"
         }
         # (7 unchanged attributes hidden)
     }

 Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using
 ignore_changes, the following plan may include actions to undo or respond to these changes.

 ────────────────────────────────────────────────────────────────────────────────────────────────────────────────

 No changes. Your infrastructure matches the configuration.

Your configuration already matches the changes detected above. If you'd like to update the Terraform state to
match, create and apply a refresh-only plan:
  terraform apply -refresh-only

Hi @oliverpark999,

I’m not sure exactly what you’re asking that isn’t already described in the output itself. The diff shown under “Terraform detected the following changes made outside of Terraform” is just that: the provider has returned something different from the last recorded state. In this particular case it sounds like, rather than detecting genuine external drift, the provider is changing some of the values in a way that results in no actual changes to the remote resources. While this is not technically correct behavior from the provider, it is mostly harmless because it does not trigger any changes when compared to the configuration.

From the output it also looks as if you’re not using the latest version of Terraform; the latest version reduces this drift output to only what can be attributed to changes in the actual plan, leaving terraform apply -refresh-only as the method for seeing all external resource drift.

I think specifically what’s happened here is that there’s an aws_security_group_rule resource somewhere else in this configuration which has, in effect, modified the aws_security_group object after it was originally created. In this case it’s not quite right to say that the change happened “outside of Terraform”: the provider is modifying an object it previously returned later in the same run, so from the perspective of this security group resource alone, the object has been modified outside of this resource, by another resource.

This seems to be the problem covered by this bug report in the provider repository:

In summary, the provider is subtly “breaking the rules” as a pragmatic way to allow specifying security group rules in two different ways: either inline in the security group resource or in separate resources.
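
For illustration only, here is a minimal sketch of those two styles (the resource names and the var.* references are made up for this example). Mixing the two styles on the same security group is what tends to produce the apparent drift described above:

# Style 1: rules declared inline on the security group itself
resource "aws_security_group" "ecs_inline" {
  name   = "ecs-sg"
  vpc_id = var.vpc_id                       # hypothetical variable

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [var.source_sg_id]    # hypothetical variable
  }
}

# Style 2: a bare security group plus separate rule resources
resource "aws_security_group" "ecs_separate" {
  name   = "ecs-sg"
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "ecs_https_in" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_separate.id
  source_security_group_id = var.source_sg_id
}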

This quirk is the consequence of breaking those rules. On the next plan, the provider reconciles the inconsistency it created, causing the content of that resource to show as having changed since its most recent apply.

I use Terraform version 1.1.5.
Does this issue occur in that version?

When I tested with version 1.2.5, the symptoms seem to have disappeared. We need to do some more testing.

This general process occurs in every version of Terraform, when the stored state is updated to reflect what the providers report as the current state of each resource, though this was not directly shown in earlier versions. In Terraform v1.0 additional CLI output was added to help users track down unexpected external changes during a plan, often referred to as “drift”. In v1.2 this output was reduced to only what can be attributed to changes within the plan, unless a -refresh-only plan was created.
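
If you do want to see and record all of that external drift explicitly, the refresh-only workflow looks roughly like this:

terraform plan -refresh-only    # show what changed outside of Terraform, without proposing configuration changes
terraform apply -refresh-only   # accept those changes into the state; no real infrastructure is modified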

I’m in the process of migrating from 0.13 to 1.3.7 and am seeing similar:

Terraform will perform the following actions:

  # aws_launch_template.fortigates[0] will be updated in-place
  ~ resource "aws_launch_template" "fortigates" {
      + description                          = ""
      + ebs_optimized                        = ""
        id                                   = "lt-xxxxxxxxxxxxxxxxx"
      + kernel_id                            = ""
        name                                 = "fortigate-0-launch-template"
      + name_prefix                          = ""
      + ram_disk_id                          = ""
        tags                                 = {
            "Name"      = "fortigate-0-launch-template"
            "Terraform" = "true"
        }
      # Warning: this attribute value will be marked as sensitive and will not
      # display in UI output after applying this change. The value is unchanged.
      ~ user_data                            = (sensitive value)
        # (11 unchanged attributes hidden)

        # (5 unchanged blocks hidden)
    }

Do I understand correctly that these “drift” messages indicate something like new attributes that 1.3.7 handles that 0.13 did not and, since they drive zero changes, can be safely ignored?

Is the state change reversible – that is, if other considerations drive us back to the older Terraform for a while, can the older version deal with the state that the newer one created?

Making user_data sensitive is a great idea, but I want to be sure it’s not a one-way trip, for safety.

And … this issue pretty much explains what I’m seeing. Looks like a harmless artifact, thanks!

The plan you’ve shown here is not a “Changes outside of Terraform” plan, but instead Terraform is proposing to actually change your infrastructure to match the configuration.

I think ultimately this is still caused by some quirks of the AWS provider, but this time it’s a different quirk: the provider treats the empty string the same as unset for several of its arguments, but it seems to be changing its mind about how to represent this. It’s telling Terraform Core that it’s going to set all of these arguments to the empty string, even though that’s a meaningless change because it is functionally equivalent to leaving them unset in the first place.
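
As a rough illustration (the attribute names are taken from your plan above, everything else is made up, and other arguments are omitted), the provider considers these two variants of the same resource functionally equivalent, which is why the planned change is meaningless:

# Variant A: description set explicitly to the empty string
resource "aws_launch_template" "fortigates" {
  name        = "fortigate-0-launch-template"
  description = ""
}

# Variant B: description omitted entirely; the provider treats this the same as ""
resource "aws_launch_template" "fortigates" {
  name = "fortigate-0-launch-template"
}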

It looks like what caused this to show up in the first place is that you were previously using a version of Terraform which didn’t yet know how to track sensitive values. After this upgrade, Terraform is noticing for the first time that you’ve used a sensitive value as part of user_data, so it’s telling you that the “sensitivity” of this argument is changing even though the value is not.

That should really have been the only thing mentioned in this plan, and Terraform would then have just directly updated the state file without making any changes in AWS at all. But this other quirky behavior from the provider added some extra noise to the plan, which made Terraform Core see this as an “update in place”. When you apply it, the provider will still send a request to modify this object, but I expect it will be updating it to have exactly the same settings as before, because the changes it planned are not meaningful. Therefore it shouldn’t do any harm to apply this.

Normally I would suggest reporting stuff like this to the AWS provider team in their repository, but in this case I’m pretty sure they are already aware of this quirk because it applies to a number of different resource types that have been present in the provider for a long time, originally built against the type system of much older versions of Terraform.

Thanks for the reply! My only concern is that in the course of this upgrade to 1.3.7 we will need to transition back and forth between the old 0.13 and the new 1.3.7 a number of times for testing while maintaining production capability. As long as the different ways of storing state for the reported items can transition seamlessly back and forth things sound fine. But if not, I’d hate to get involved in a manual state cleanup.

Sounds likely safe though …

@rpattcorner, no, you cannot move back to v0.13 from a post-v1.0 version of Terraform. There were a number of non-backwards-compatible changes made to the state format prior to the v1.0 release, and Terraform will prevent you from using the newer state in the earlier version.

Indeed, even if the AWS provider hadn’t added some noise to this plan, applying it would have written a new state snapshot using the newer snapshot format, which would then no longer be readable by Terraform v0.13.
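
If you want to check which Terraform version last wrote a given state before attempting any downgrade, you can inspect the state snapshot directly (a rough sketch; assumes the jq tool is installed):

terraform state pull > state.json
jq '.version, .terraform_version' state.json   # snapshot format version, and the CLI release that wrote it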

Sure am glad I asked. Thanks for the heads up!