Terraform plan reporting "Note: Objects have changed outside of Terraform"


Starting a few days ago, nearly every terraform plan across multiple environments has been showing “Note: Objects have changed outside of Terraform”. Some of these environments haven’t had any changes in weeks.

While updating security groups, I noticed that Terraform wants to change more than just my security group.

What all of these environments have in common is that they use aws_ebs_volume and aws_volume_attachment resources.

Terraform plan is displaying this:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply" which may have affected this plan:

  # module.ec2_instance["private"].aws_instance.ignore_ami[0] has changed
  ~ resource "aws_instance" "ignore_ami" {
        id                                   = "i-<string>"
        tags                                 = {
            "Name"      = "<tag>"
            "Owner"     = "<tag>"
            "Project"   = "<tag>"
            "Purpose"   = "<tag>"
            "Terraform" = "true"
        # (31 unchanged attributes hidden)

      + ebs_block_device {
          + delete_on_termination = false
          + device_name           = "/dev/sdf"
          + encrypted             = true
          + iops                  = 100
          + kms_key_id            = "arn:aws:kms:us-west-2:<account>:key/<string>"
          + tags                  = {}
          + throughput            = 0
          + volume_id             = "vol-<string>"
          + volume_size           = 60
          + volume_type           = "gp2"

        # (9 unchanged blocks hidden)

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.

Technically, the object it wants to “add” is already there, but it was added not as an ebs_block_device, but rather as an aws_ebs_volume and an aws_volume_attachment. The volume ID it is detecting in the changes above matches what is running in AWS (what was previously deployed through the volume and attachment resources).

The volumes have been attached for a while, through previous terraform runs, and this issue never popped up until just a couple of days ago. Did AWS change something on their end that is causing it to be detected differently?

I’m on Terraform 1.7.5, with AWS provider 5.38.0.

As I mentioned above, the object (resource) that it wants to add is already there, so I went ahead and ran the apply. No (noticeable) changes to my infrastructure were made as a result.

Additionally, subsequent terraform plans no longer show the above drift.
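In hindsight, a refresh-only run might have been a safer way to confirm this: I believe `terraform apply -refresh-only` (available since Terraform 0.15.4) just records externally detected changes in state without proposing any infrastructure changes.

```shell
# Preview what Terraform would record in state; no resource changes are proposed
terraform plan -refresh-only

# Accept the detected changes into Terraform's state only
terraform apply -refresh-only
```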

Hi @dach,

I’m not super familiar with the AWS provider, but I would make sure you’ve seen the note under ebs_block_device in the docs:

Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance , Terraform will assume management over the full set of non-root EBS block devices for the instance, treating additional block devices as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and aws_volume_attachment resources for a given instance.

This explains the drift you’ve seen, and may help if there are other issues relating to these values not converging in a normal plan + apply.
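For contrast, the inline form that note warns about looks roughly like this (a sketch with made-up values, not anyone’s real config):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t3.micro"

  # With an inline block, Terraform assumes management of ALL non-root EBS
  # devices on this instance, so volumes attached separately via
  # aws_volume_attachment would then be reported as drift.
  ebs_block_device {
    device_name = "/dev/sdf"
    volume_size = 60
    volume_type = "gp2"
  }
}
```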

Thanks @jbardin, I had seen that. But in this instance I’m not using ebs_block_device anywhere.

My config for one of the affected stacks looks like this:

module "tikv_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.6.1"

  ignore_ami_changes = true
  count              = length(data.terraform_remote_state.vpc.outputs.azs)

  create                      = local.tikv.create
  name                        = "${local.tikv.name}-az${count.index + 1}"
  ami                         = local.tikv.ami
  instance_type               = local.tikv.instance_type
  key_name                    = local.tikv.key_name
  monitoring                  = local.tikv.monitoring
  vpc_security_group_ids      = local.tikv.security_group_ids
  subnet_id                   = local.tikv.subnet_id[count.index]
  associate_public_ip_address = local.tikv.associate_public_ip_address
  user_data                   = local.tikv_user_data

  tags = data.terraform_remote_state.vpc.outputs.default_tags
}

resource "aws_ebs_volume" "tikv_data" {
  count = length(data.terraform_remote_state.vpc.outputs.azs)
  availability_zone = data.terraform_remote_state.vpc.outputs.azs[count.index]
  size              = 60
  type              = "gp2"
}

resource "aws_volume_attachment" "tikv_data" {
  count = length(data.terraform_remote_state.vpc.outputs.azs)
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.tikv_data[count.index].id
  instance_id = module.tikv_instance[count.index].id
}

The other stacks are using the same pattern.

I strongly suspect that this is arising only because the AWS provider has created an ambiguous situation for itself.

The provider offers two different ways to declare that an EC2 instance should have an EBS block device, but the remote API itself doesn’t distinguish between those two cases and so the AWS provider ends up confusing itself: the change “outside of Terraform” is, in this case, really a “change outside of this particular resource instance”.

In other words, the aws_instance is applied first without the EBS volume; then the aws_volume_attachment is applied and effectively modifies the aws_instance that was created earlier. When the provider later refreshes the aws_instance, it believes the EBS volume was added outside of Terraform, because the aws_instance resource type wasn’t the one responsible for adding it.

If that’s true then there’s not really anything to worry about here: it’s just a provider design quirk. As you saw, it resolves itself after one more plan/apply round because then the aws_instance object has been updated to include the EBS volume, and the provider’s logic for aws_instance is written to treat additional ebs_block_device objects as acceptable so aws_instance doesn’t then propose updating itself to detach the new volume.

Unless Terraform is actually proposing to make a change (rather than just reporting a change that already happened, as seems to be the case here) all you need to do is confirm whether that was a change you expected to have happened, and if so then apply the plan to resynchronize Terraform’s records with the remote system.
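If the notice itself is bothersome, and if you were managing the aws_instance directly (this isn’t configurable from the caller of a shared module), one option would be to ignore the attribute explicitly, as the note in the plan output suggests. A sketch with hypothetical values:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t3.micro"

  lifecycle {
    # Don't report externally-attached volumes as changes on this resource;
    # they are managed by aws_ebs_volume / aws_volume_attachment instead.
    ignore_changes = [ebs_block_device]
  }
}
```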

The remote API doesn’t provide any way to unambiguously distinguish between an EBS volume declared as part of ec2:RunInstances and a volume attached later using ec2:AttachVolume, so unfortunately this is just inherently ambiguous, and the provider chooses to be conservative and report the difference just in case it’s concerning.