How to prevent an S3 bucket from being destroyed on apply

Given a bucket like so:

resource "aws_s3_bucket" "nfcisbenchmark_config" {
  bucket = "${var.name}-config"
  acl    = "private"

  # Ensure the CloudTrail S3 bucket has access logging enabled.
  logging {
    target_bucket = aws_s3_bucket.log_bucket_config.id
    target_prefix = "log/"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.nfcisbenchmark.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  # Tags
  tags = {
    Name             = "${var.name}-config"
    cost_environment = local.cost_environment
    cost_category    = "SEC"
    cost_team_owner  = "MOPRAV"
  }
}

I’d like to know how I can prevent this bucket from ever being destroyed on an apply. I’ve seen the lifecycle_rule argument, but nothing indicates how to use it to prevent destroying a bucket.

Hi @EvanGertis,

The most robust way to prevent a bucket from being destroyed is to use an AWS IAM policy to block the s3:DeleteBucket action on this bucket for whatever user or role Terraform is authenticating as. I would recommend this as the first choice, because it keeps the rule out of band from the configuration that might attempt the deletion, which makes it much less likely that someone will accidentally allow deleting the object at the same time as planning to delete it.
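As a sketch, that could look something like the following, attached to the principal Terraform runs as. The policy name and role name here are placeholders; adapt them to whatever your Terraform credentials actually use:

resource "aws_iam_policy" "deny_delete_config_bucket" {
  # Hypothetical policy name; use whatever fits your naming scheme.
  name = "deny-delete-config-bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyDeleteConfigBucket"
        Effect   = "Deny"
        Action   = "s3:DeleteBucket"
        Resource = aws_s3_bucket.nfcisbenchmark_config.arn
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "deny_delete_config_bucket" {
  # Hypothetical name for the role Terraform authenticates as.
  role       = "terraform-execution"
  policy_arn = aws_iam_policy.deny_delete_config_bucket.arn
}

With an explicit Deny in place, any attempt to delete the bucket fails at apply time with an AccessDenied error from S3, regardless of what the configuration says.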

Because everything Terraform does is client-side, it has no way to robustly block the author of the configuration from doing anything – the configuration is arbitrary code and so Terraform is not the correct place in the stack for access restrictions.

However, if you’re approaching this more as a “guardrail” to reduce the chance of mistakes, rather than as a security requirement, then you might choose to set the prevent_destroy lifecycle argument for that resource. If Terraform sees that the configuration contains that setting and the generated plan calls for destroying the corresponding remote object, then the planning step will fail with an error. However, if the configuration author removes that setting (either alone, or by removing the entire resource block that contains it), then Terraform will not block destroying the object, because the lifecycle rule is no longer active.
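In practice that means adding a lifecycle block directly inside the resource, for example:

resource "aws_s3_bucket" "nfcisbenchmark_config" {
  # ... existing arguments ...

  lifecycle {
    prevent_destroy = true
  }
}

Any subsequent plan that would destroy this resource then fails during planning, rather than proceeding to apply.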


I’ve added the

lifecycle {
  prevent_destroy = true
}

Like so,

resource "aws_s3_bucket" "nfcisbenchmark_config" {
  bucket = "${var.name}-config"
  acl    = "private"

  # Ensure the CloudTrail S3 bucket has access logging enabled.
  logging {
    target_bucket = aws_s3_bucket.log_bucket_config.id
    target_prefix = "log/"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.nfcisbenchmark.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  # Tags
  tags = {
    Name             = "${var.name}-config"
    cost_environment = local.cost_environment
    cost_category    = "SEC"
    cost_team_owner  = "MOPRAV"
  }

  lifecycle {
    prevent_destroy = true
  }
}

The issue is that the plan still comes back with:

# aws_s3_bucket.log_bucket_config will be destroyed
  - resource "aws_s3_bucket" "log_bucket_config" {
      - acl                         = "log-delivery-write" -> null
      - arn                         = "arn:aws:s3:::nfcisbenchmark-nf-sandbox-log-config" -> null
      - bucket                      = "nfcisbenchmark-nf-sandbox-log-config" -> null
      - bucket_domain_name          = "nfcisbenchmark-nf-sandbox-log-config.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "nfcisbenchmark-nf-sandbox-log-config.s3.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "XXXX" -> null
      - id                          = "nfcisbenchmark-nf-sandbox-log-config" -> null
      - region                      = "us-east-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {
          - "Name"             = "nfcisbenchmark-nf-sandbox-log-config"
          - "cost_category"    = "SEC"
          - "cost_environment" = "non-production"
          - "cost_team_owner"  = "MOPRAV"
        } -> null
      - tags_all                    = {
          - "Name"             = "nfcisbenchmark-nf-sandbox-log-config"
          - "cost_category"    = "SEC"
          - "cost_environment" = "non-production"
          - "cost_team_owner"  = "MOPRAV"
        } -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }

Also, we do not want to enforce a block on s3:DeleteBucket, since administrators still need to be able to delete buckets.

Hi @EvanGertis,

The initial Terraform code you have there will not and cannot delete the S3 bucket if it has any data in it.

I ran a quick test with the following fairly minimal test case.

resource "aws_s3_bucket" "nfcisbenchmark_config" {
  bucket_prefix = "benchmark-test"
  acl           = "private"
}

Fairly obviously, when I ran terraform apply -auto-approve followed by terraform apply -destroy -auto-approve, the bucket was created and then deleted as expected.

When I uploaded a file into the bucket before trying to destroy it, I got the expected error message:

Error: error deleting S3 Bucket (benchmark-test20211204215238196200000001): BucketNotEmpty: The bucket you tried to delete is not empty
status code: 409, request id: WR5QMMJXDBM2JNM9, host id: …

If you are looking to prevent accidents, then the code you already have will work just fine, because recreating an empty bucket in a Terraform-managed environment is trivial. If you are looking to prevent malicious behaviour from your staff, then you should probably follow @apparentlymart’s suggestion of blocking access to the s3:DeleteBucket action.

That’s the problem. Obviously, I don’t want to delete the bucket. There should be a way to conditionally say “do not delete this bucket based on condition x”, i.e.:

resource "aws_s3_bucket" "nfcisbenchmark_cloudtrail" {
  count = var.environment == "logging" ? 1 : 0

  bucket = var.nf_logging_bucket_name
  acl    = "private"

  # 3.6 Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated)
  logging {
    target_bucket = aws_s3_bucket.log_bucket_cloudtrail[count.index].id
    target_prefix = "log/"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.nfcisbenchmark.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
}

The count = var.environment == "logging" ? 1 : 0 argument only creates this bucket if it’s in a particular account. The concept behind

lifecycle {
  prevent_destroy = true
}

is to prevent the destruction of a bucket, is it not? This implies that I should be able to tag each bucket with the lifecycle argument, and then apply the conditional statement to the single CloudTrail bucket, such that only the CloudTrail bucket associated with nf-logging is destroyed while the other buckets are preserved.

There is no feature in Terraform or the aws_s3_bucket resource at the moment that allows apply -destroy to skip the deletion of an S3 bucket and finish with a success status. The only features available at the moment cause either the plan phase or the apply phase to fail. This differs from CloudFormation, which has the DeletionPolicy attribute for exactly this purpose.

According to DeletionPolicy Attribute - Retain · Issue #902 · hashicorp/terraform-provider-aws · GitHub, this will require changes to Terraform itself.

Maybe you can split the S3 bucket (and the related CloudTrail configuration) into a separate Terraform configuration. That would allow you to run apply -destroy on your application without destroying the CloudTrail bucket. In the past I have used the AWS SSM Parameter Store, as discussed in The terraform_remote_state Data Source - Terraform by HashiCorp, to share the bucket name between the two configurations.
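As a rough sketch, that could look like the following; the parameter path here is just an illustration:

# In the configuration that owns the CloudTrail bucket: publish its name.
resource "aws_ssm_parameter" "cloudtrail_bucket_name" {
  name  = "/shared/cloudtrail-bucket-name"
  type  = "String"
  value = aws_s3_bucket.nfcisbenchmark_cloudtrail[0].bucket
}

# In the application configuration: read the name back.
data "aws_ssm_parameter" "cloudtrail_bucket_name" {
  name = "/shared/cloudtrail-bucket-name"
}

Running apply -destroy on the application configuration would then leave the bucket untouched, since it lives in the other configuration’s state.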

From a logical perspective, it would make sense to be able to conditionally tag a bucket to say “do not delete”. Otherwise, what is the point of

lifecycle {
  prevent_destroy = true
}

The solution that you are presenting is fairly complicated for something so trivial.