I’m finding that prevent_destroy doesn’t really help in most of the scenarios where, at a glance, I’d expect it to make sense. Here is a common pattern…
- I want to build a new module. Many of the resources in that module will be iterated over, created, and destroyed many times, and also destroyed when idle in order to save cost.
- Some resources in that module I never want to destroy, like an S3 bucket or a DynamoDB table.
- prevent_destroy will just cause an error when I run terraform destroy. That’s not really OK: I just want the resource to silently survive, or at most trigger a non-critical warning. Everything else around those resources can be destroyed, and my destroy pipeline should pass green.
- I know you can instead run destroy with -target flags to get specific, but that’s not really a solution when there are hundreds of resources.
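For context, this is how prevent_destroy gets set today (the resource and bucket name are just illustrative):

```hcl
resource "aws_s3_bucket" "persistent" {
  bucket = "example-persistent-bucket" # illustrative name

  lifecycle {
    # terraform destroy now errors out on this resource
    # instead of quietly leaving it alone
    prevent_destroy = true
  }
}
```

And the flag-based escape hatch is something like `terraform destroy -target=module.ephemeral`, which is exactly what doesn’t scale past a handful of resources.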
Does this problematic (anti-)pattern affect other people too?
My workaround breaks the desired modularity of my modules, but here is what I do instead. I have to run terraform twice:
- once in an “init” folder, which never runs terraform destroy. I place the resources that must always persist here (S3 buckets, DynamoDB tables).
- once in the usual Terraform folder, which holds the resources that are allowed to be destroyed.
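In the destroyable folder, the persistent resources get wired in via remote state. A sketch, assuming an S3 backend; all the names here are illustrative:

```hcl
# Ephemeral stack: read outputs published by the long-lived init stack.
data "terraform_remote_state" "init" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"       # illustrative backend bucket
    key    = "init/terraform.tfstate" # illustrative state key
    region = "us-east-1"
  }
}

# Safe to create and destroy freely; the bucket itself lives in init.
resource "aws_s3_object" "scratch" {
  bucket  = data.terraform_remote_state.init.outputs.bucket_name
  key     = "scratch/data.json"
  content = jsonencode({ example = true })
}
```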
This is the only way I’ve found to solve this problem and keep both my spin-up and teardown pipelines green. I don’t really like it. I’d prefer to just have a lifecycle flag like:
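Something along these lines; to be clear, these attribute values are completely made up and don’t exist in Terraform today, they just sketch the behavior I’m asking for:

```hcl
resource "aws_s3_bucket" "persistent" {
  bucket = "example-persistent-bucket" # illustrative

  lifecycle {
    # hypothetical: skip this resource on destroy instead of erroring,
    # emitting a non-fatal warning so the pipeline stays green
    prevent_destroy = "warn"
  }
}
```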
…or something like that. Thoughts? Opinions?