Pattern for s3 bucket object that doesn't upload a new ZIP every run

So I think I’ve read every open and closed issue on GitHub for the aws_s3_bucket_object resource, as well as the documentation for it.

The problem is that nowhere is there an example of how to use the etag argument (with or without SSE) to stop it from uploading a new file on every single run.

I’ve tried every combination of the AWS provider from 2.23.0 forward and Terraform from 0.12.1 forward to stop it from doing this, and it just uploads the file again every time.

Here’s what I’ve tried…

resource "aws_s3_bucket" "packages" {
  bucket        = "my-bucket"
  acl           = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  versioning {
    enabled = true
  }

}

resource "aws_s3_bucket_object" "package" {
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
  etag                   = filemd5(local.source_path)
  server_side_encryption = "AES256"
}


resource "aws_s3_bucket" "packages" {
  bucket        = "my-bucket"
  acl           = "private"

  versioning {
    enabled = true
  }

}

resource "aws_s3_bucket_object" "package" {
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
  etag                   = filemd5(local.source_path)
  server_side_encryption = "AES256"
}


resource "aws_s3_bucket" "packages" {
  bucket        = "my-bucket"
  acl           = "private"

  versioning {
    enabled = true
  }

}

resource "aws_s3_bucket_object" "package" {
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
  etag                   = filemd5(local.source_path)
}


resource "aws_s3_bucket" "packages" {
  bucket        = "my-bucket"
  acl           = "private"

  versioning {
    enabled = true
  }

}

resource "aws_s3_bucket_object" "package" {
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
}

The ZIP was created once.
The ZIP file was created using the Windows “Add to Archive” context-menu option (stuck with Windows).
The ZIP doesn’t change at all.
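To be clear about what I mean by "doesn't change": the MD5 of an unchanged file is deterministic, so filemd5() should return the same value on every plan. A quick sanity check in the shell (the file path here is just an example):

```shell
# Hash the same unchanged file twice; the digests must match,
# just as filemd5() returns the same value while the ZIP is untouched.
printf 'same bytes every time' > /tmp/package.zip
first=$(md5sum /tmp/package.zip | awk '{print $1}')
second=$(md5sum /tmp/package.zip | awk '{print $1}')
[ "$first" = "$second" ] && echo "hash stable: $first"
```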

I don’t want to use archive_file to zip the file, because packaging code doesn’t belong in Terraform; it belongs in the CI/CD pipeline itself.
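For reference, the archive_file approach I’m deliberately avoiding would look roughly like this (the paths are placeholders; output_md5 is an attribute of the hashicorp/archive provider’s archive_file data source):

```hcl
# Zipping inside Terraform -- the approach I'm not taking.
data "archive_file" "package" {
  type        = "zip"
  source_dir  = "dist"          # placeholder path
  output_path = "package.zip"   # placeholder path
}

resource "aws_s3_bucket_object" "package" {
  bucket = aws_s3_bucket.packages.id
  key    = local.s3_key
  source = data.archive_file.package.output_path
  etag   = data.archive_file.package.output_md5
}
```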

What is the proper way, with SSE or without, to upload a ZIP as an S3 object without it being re-uploaded on every run?

Turns out this was an issue with the cache_control option, which I’ve opened an issue for.

My final working config is:
resource "aws_s3_bucket" "packages" {
  bucket = "my-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "object" {
  count                  = local.push_package
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
  etag                   = filemd5(local.source_path)
  server_side_encryption = "AES256"
  cache_control          = var.deploy_package_cc_option
}
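One caveat for anyone landing here: etag = filemd5(...) only behaves with SSE-S3 ("AES256"), because under SSE-KMS (or for multipart uploads) the ETag S3 stores is no longer the plain MD5 of the object, so the comparison produces a permanent diff. If you’re stuck on KMS, a workaround (a sketch, and not ideal, since it also masks real content changes) is to ignore the etag:

```hcl
resource "aws_s3_bucket_object" "package" {
  bucket                 = aws_s3_bucket.packages.id
  key                    = local.s3_key
  source                 = local.source_path
  server_side_encryption = "aws:kms"

  lifecycle {
    # Under SSE-KMS the stored ETag never matches filemd5(),
    # so stop Terraform from diffing it at all.
    ignore_changes = [etag]
  }
}
```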