Ignore_changes - removing

I have an s3 bucket that was configured with


lifecycle {
  ignore_changes = [replication_configuration]
}
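For context, that lifecycle block sat inside the bucket resource, roughly like this (the bucket name here is illustrative, not my real config):

```hcl
resource "aws_s3_bucket" "media" {
  bucket = "1234-media"

  # Tell Terraform to ignore drift in the (deprecated, provider 3.x)
  # inline replication settings, even if they differ from this config.
  lifecycle {
    ignore_changes = [replication_configuration]
  }
}
```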

I want to remove that part now and bring the replication configuration back under Terraform's control. However, Terraform doesn't seem to save the new replication configuration I have applied using aws_s3_bucket_replication_configuration. I have removed the lifecycle block from the s3 bucket. Before I apply, terraform tells me it is going to create the replication configuration. I then apply, it is created, and I can see it in the AWS console. However, the state file only has


"replication_configuration": [],

I am kind of baffled about how to proceed. When I run terraform apply again, with the same config where I still have the aws_s3_bucket_replication_configuration block, it tells me it is going to remove the replication configuration. And then it does! It is as if it is still reading the old replication_configuration setting inside the aws_s3_bucket block, somehow combined with ignore_changes. I am really blocked and out of ideas.

To clarify: I have no lifecycle block in the s3 bucket config. I have a separate aws_s3_bucket_replication_configuration resource. I run terraform apply and it tells me it is going to create the replication configuration. It does. Then I run terraform apply again and it tells me it is going to remove the replication configuration. It does. Then I run terraform apply again and it tells me it is going to create the replication configuration …

Any ideas?

I am on terraform 1.1.9. And aws provider 3.75.1.

Hmmmm, it also happens with aws_s3_bucket_logging. It literally adds it, then removes it, then adds it, flip-flopping each time I run terraform apply. This is very weird but doesn't look to be related to removing ignore_changes. The provider seems confused about whether to follow the deprecated inline approach or the new approach of standalone resources like aws_s3_bucket_logging.

I recently upgraded to aws_s3_bucket_versioning resources to replace the deprecated versioning config inside the aws_s3_bucket block and did not have this problem.
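For reference, that versioning migration looked roughly like this (bucket name illustrative; the before and after are shown side by side for comparison, only one form would exist at a time):

```hcl
# Before: deprecated inline block (aws provider 3.x style)
resource "aws_s3_bucket" "media" {
  bucket = "1234-media"

  versioning {
    enabled = true
  }
}

# After: standalone resource (the provider 4.x style)
resource "aws_s3_bucket" "media" {
  bucket = "1234-media"
}

resource "aws_s3_bucket_versioning" "media" {
  bucket = aws_s3_bucket.media.id
  versioning_configuration {
    status = "Enabled"
  }
}
```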

I have tried removing all .terraform files locally and running apply/plan again. Same problems. I must be missing something simple.

State files are definitely being updated as I just added a new s3 bucket and can see it in the state files in s3.

Something is seriously busted or I am really missing something very, very obvious (probably the latter). This has nothing to do with ignore_changes. It is something wrong with aws_s3_bucket_replication_configuration.

I have stripped this back to a very simple stack and the same problem keeps happening. The state for the replication config does not update in the state file. So I run terraform apply and it creates it. Then terraform apply again and it removes it.

I am using terraform 1.1.9 and aws provider 3.75.1.

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_s3_bucket" "media_backup" {
  bucket = "1234-media-backup"
}

resource "aws_s3_bucket_versioning" "media_backup" {
  bucket = aws_s3_bucket.media_backup.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_acl" "media_backup" {
  bucket = aws_s3_bucket.media_backup.id
  acl    = "private"
}
resource "aws_s3_bucket" "media" {
  bucket = "1234-media"
}

resource "aws_s3_bucket_versioning" "media" {
  bucket = aws_s3_bucket.media.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_acl" "media" {
  bucket = aws_s3_bucket.media.id
  acl    = "private"
}

resource "aws_iam_role" "media_replication" {
  name = "1234-media-replication"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}


resource "aws_s3_bucket_replication_configuration" "media" {
  depends_on = [
    aws_s3_bucket_versioning.media_backup,
    aws_s3_bucket_versioning.media,
  ]

  role   = aws_iam_role.media_replication.arn
  bucket = aws_s3_bucket.media.id

  rule {
    id = "media-rr"

    status = "Disabled"

    destination {
      bucket        = aws_s3_bucket.media_backup.arn
      storage_class = "ONEZONE_IA"
    }
  }
}

Looks like it works properly if I use aws provider 4.15.1.

Perhaps there is some inconsistency between terraform 1.1.9 and aws provider 3.x? Maybe aws provider 3.x only works properly with terraform 1.0? Or terraform 0.15?