When I manage AWS resources with Terraform, I like to implement a “destroy protection” configuration flag, set to `true` by default.
The Terraform configuration then sets the corresponding protection flag on any AWS resources that support it (e.g. EC2 instances, ELBs, etc.).
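As a minimal sketch of the pattern (resource names and the AMI are placeholders, not from my actual config), the flag maps directly onto EC2 termination protection:

```hcl
variable "destroy_protection" {
  description = "When true, enable deletion/termination protection on supported resources."
  type        = bool
  default     = true
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # placeholder
  instance_type = "t3.micro"

  # The module-level flag drives EC2 termination protection.
  disable_api_termination = var.destroy_protection
}
```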
This prevents the accidental destruction of critical resources via `terraform destroy`: an operator must first flip the flag in the Terraform root module, run `terraform apply` to disable the protections, and only then run `terraform destroy` to tear down the infrastructure.
While we do review and approve `terraform apply` plans, this approach provides an additional layer of protection for production, while keeping infra management easy in dev and test (one can simply keep the protection flag set to `false`).
The problem I face now is that I would like to implement the same thing for S3 buckets. I am aware that you cannot delete non-empty buckets, but I’d like to be able to implement explicit protection. Using `prevent_destroy` is not an option, as `lifecycle` values cannot be variables and our S3 config is implemented as a module.
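To illustrate the limitation (resource and bucket names are hypothetical), this is what I would want to write, and what Terraform rejects because `lifecycle` meta-arguments only accept literal values:

```hcl
resource "aws_s3_bucket" "data" {
  bucket = "example-data-bucket" # placeholder name

  lifecycle {
    # Invalid: lifecycle arguments must be literals, so the module
    # cannot drive prevent_destroy from a variable. Terraform fails
    # with "Variables may not be used here."
    prevent_destroy = var.destroy_protection
  }
}
```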
I know that you can set a bucket policy to prevent deletion, but the problem here is that `aws_s3_bucket_policy` is a separate resource that depends on `aws_s3_bucket`, which means that when we run `terraform destroy`, the policy resource is destroyed first, effectively removing the deletion protection.
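For concreteness, here is roughly the policy-based approach I mean (names are hypothetical, and it suffers from exactly the ordering problem described above):

```hcl
resource "aws_s3_bucket_policy" "protect" {
  bucket = aws_s3_bucket.data.id

  # Deny bucket deletion at the API level. Because this is a separate
  # resource depending on the bucket, terraform destroy removes it
  # first, so the protection is gone by the time the bucket is deleted.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyBucketDeletion"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:DeleteBucket"
      Resource  = aws_s3_bucket.data.arn
    }]
  })
}
```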
`aws_s3_bucket` does have a `policy` attribute, which would allow me to implement a dynamic policy to allow/deny bucket deletion, but the attribute is marked as deprecated in the docs and appears to be read-only in the current version of the AWS provider.
Is there a way to work around the problem?