Prevent_destroy should be able to be silent. Terraform modules need persistent resources

I’m finding that prevent_destroy doesn’t really help in most of the scenarios where, at a glance, I’d expect it to make sense. Here is a common pattern…

  • I want to make a new module. Many of the resources in that module will be iterated over, created and destroyed many times, and also destroyed when idle in order to save cost.
  • Some resources in that module I never want to destroy, like an S3 bucket or a DynamoDB table.
  • prevent_destroy just causes an error when I run terraform destroy (see the sketch after this list). That’s not really OK: I just want the resource to silently survive, or at most get a non-critical warning. Everything else around those resources can be destroyed, and my destroy pipeline should pass green.
  • I know you can instead destroy with flags to target specific resources, but that’s not really a solution when there are hundreds of resources.
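
For reference, here is a minimal sketch of the behaviour I mean (the bucket name is just an example); with this in place, terraform destroy fails outright:

resource "aws_s3_bucket" "persistent" {
  bucket = "example-persistent-bucket"  # example name

  lifecycle {
    prevent_destroy = true  # any plan that would destroy this resource errors out
  }
}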

Does this problematic (anti) pattern affect other people too?

My workaround breaks the desired modularity of my modules, but here is what I do instead. I have to run Terraform twice (layout sketched after this list):

  • once in an init folder, which never runs terraform destroy. I place resources that must always persist here (S3 buckets, DynamoDB tables)
  • once in the usual Terraform folder, which holds the resources that are allowed to be destroyed.
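
Roughly, the folder layout looks like this (the folder names are mine):

init/    # persistent resources (S3 buckets, DynamoDB tables); terraform destroy never runs here
main/    # everything else; the teardown pipeline runs terraform destroy here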

This is the only way I can solve this problem and keep my spinup and teardown pipelines green. I don’t really like it. I’d prefer to just have a lifecycle flag like:
prevent_destroy_silent

…or something like that (sketched below). Thoughts? Opinions?
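
For concreteness, a sketch of how that hypothetical flag might read (to be clear, this is not a real Terraform option):

lifecycle {
  # Hypothetical: on terraform destroy, silently skip this resource
  # (or emit a non-fatal warning) instead of failing the whole run.
  prevent_destroy_silent = true
}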

Hi @queglay,

I think what you’re describing is a misapplication of what prevent_destroy is intended to do. It is not meant to “disown” or forget about a managed resource. If a resource is not managed within the Terraform configuration and should not be destroyed along with that configuration, it would normally be accessed via a data source. There are many situations where it does not make sense to try to manage all resources within a single configuration, especially when the lifetimes of those resources do not align with one another.
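
For example, a minimal sketch of that pattern (the names here are illustrative): the bucket is created and managed in a separate configuration, and this configuration only reads it:

data "aws_s3_bucket" "persistent" {
  bucket = "my-persistent-bucket"  # illustrative name; managed in a separate configuration
}

resource "aws_s3_object" "report" {
  bucket  = data.aws_s3_bucket.persistent.id
  key     = "reports/latest.json"
  content = jsonencode({ status = "ok" })
}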

There is some work being done to offer other ways to interact with resources that actually cannot be destroyed, but that would be a new feature unrelated to prevent_destroy. If actual enforcement of certain actions is required, that is almost always better done outside of Terraform, since Terraform cannot enforce a policy via configuration when that policy is defined in the same configuration the user can change.


Indeed, the prevent_destroy option is not intended for this situation, as @jbardin said. This feature should really have been called “prevent replace”, because that’s what it was designed to achieve. Folks would accidentally change something like the instance type of their Amazon RDS instance and then the AWS provider would propose to replace it, and it was originally up to the operator to know that wasn’t acceptable.

If we were designing that anew today, rather than simply preserving backward compatibility with an old design, I expect that we would consider that problem to be one of policy rather than one of Terraform’s own change lifecycle, since it’s really just a special case of arbitrary automatic plan review by automation.

Regardless though, it seems like what you are looking for is a way to tell Terraform to “forget” an object instead of destroying it, in which case the beginnings of such a thing are already on the way in the forthcoming Terraform 1.7, although if I recall correctly this first round won’t fully solve your problem just yet.

What’s definitely coming in 1.7 is the ability to hint to Terraform that it should “forget” something that was already removed from the configuration, by replacing the resource block with a removed block, like this:

removed {
  from = aws_instance.foo  # address of the resource block that was removed

  lifecycle {
    destroy = false  # forget the object instead of planning to destroy it
  }
}

This hint means, essentially, “if there’s an object bound to aws_instance.foo in the prior state, plan to forget it instead of planning to destroy it”.

The shape of this is designed to allow adding a similar option to the lifecycle blocks for resource blocks that still exist in the configuration too, which would then have a meaning similar to what I think you want: “any time an object associated with this resource would be planned for destroy, plan to forget it instead”.
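
If that extension lands, I would imagine it reading something like this; to be clear, this syntax is hypothetical and is not part of 1.7:

resource "aws_s3_bucket" "persistent" {
  bucket = "example-persistent-bucket"  # illustrative name

  lifecycle {
    # Hypothetical future option: whenever a destroy of this object would
    # be planned, plan to forget it instead. Not available in Terraform 1.7.
    destroy = false
  }
}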

We’ve held back that extension of the idea for now because providing a configuration-based replacement for terraform state rm is the immediate goal, and the more general form has some unanswered design questions about how exactly it should behave when an object is planned for replacement, where otherwise the object would’ve been destroyed before creating a new one in its place. However, the language design is shaped to accommodate that being added in a later release.

An important consequence of this is that once Terraform has “forgotten” an object you would need to delete it manually outside of Terraform if you want to get rid of it, because Terraform will no longer know the object exists unless you explicitly re-import it.

Do you think that would meet the need you have?

Thanks, it’s good to see this being considered. So if we ran a destroy and then ran an apply again, would the references to the resource still function? I’m not getting how that would work.

If you run terraform destroy and then terraform apply then, assuming you configured destroy = false, you would now have two objects: the one you told Terraform to forget about, which it now has no awareness of, and the one newly created by the apply command. If you wanted only one object then you’d need to insert an additional terraform import command between the two, to reintroduce Terraform to the object it had just forgotten about.
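
In command terms, that sequence would be roughly the following (the instance ID is a made-up example):

terraform destroy                                     # the aws_instance.foo object is forgotten, not deleted
terraform import aws_instance.foo i-0abc123example    # re-adopt the forgotten object
terraform apply                                       # updates the re-imported object instead of creating a second one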

If you do need Terraform to remember an object across a full destroy, then what @jbardin suggested will be the best answer, I think. That is how you can explain to Terraform what is being managed by a particular configuration and therefore what should be destroyed (or forgotten) by terraform destroy, vs. what is managed elsewhere and should therefore not be changed by this Terraform configuration at all.

Yes, using a data source is what I do now, as @jbardin suggests, and it seems like that would still be the only solution going forward. I just think it’s not great that I essentially have to run Terraform twice (once in a folder that I will probably never destroy), and manage that dependency myself.